Dataset fields: id (string, 2–8 characters) · title (string, 1–130 characters) · text (string, 0–252k characters) · formulas (list, 1–823 items) · url (string, 38–44 characters)
7338849
Standardized coefficient
Estimates from regression analysis on data with unit variance In statistics, standardized (regression) coefficients, also called beta coefficients or beta weights, are the estimates resulting from a regression analysis where the underlying data have been standardized so that the variances of dependent and independent variables are equal to 1. Therefore, standardized coefficients are unitless and refer to how many standard deviations a dependent variable will change, per standard deviation increase in the predictor variable. Usage. Standardization of the coefficient is usually done to answer the question of which of the independent variables has a greater effect on the dependent variable in a multiple regression analysis where the variables are measured in different units of measurement (for example, income measured in dollars and family size measured in number of individuals). It may also be considered a general measure of effect size, quantifying the "magnitude" of the effect of one variable on another. For simple linear regression with orthogonal predictors, the standardized regression coefficient equals the correlation between the independent and dependent variables. Implementation. A regression carried out on original (unstandardized) variables produces unstandardized coefficients. A regression carried out on standardized variables produces standardized coefficients. Values for standardized and unstandardized coefficients can also be re-scaled to one another subsequent to either type of analysis. Suppose that formula_0 is the regression coefficient resulting from a linear regression (predicting formula_1 by formula_2). The standardized coefficient simply results as formula_3, where formula_4 and formula_5 are the (estimated) standard deviations of formula_2 and formula_1, respectively. Sometimes, standardization is done only with respect to the standard deviation of the regressor (the independent variable formula_2). Advantages and disadvantages. Advocates of standardized coefficients note that the coefficients are independent of the involved variables' units of measurement (i.e., standardized coefficients are "unitless"), which makes comparisons easy. Critics voice concerns that such a standardization can be very misleading. Due to the re-scaling based on sample standard deviations, any effect apparent in the standardized coefficient may be due to confounding with the particularities (especially: variability) of the involved data sample(s). Also, the interpretation or meaning of a "one standard deviation change" in the regressor formula_2 may vary markedly between non-normal distributions (e.g., when skewed, asymmetric or multimodal). Terminology. Some statistical software packages like PSPP, SPSS and SYSTAT label the standardized regression coefficients as "Beta" while the unstandardized coefficients are labeled "B". Others, like DAP/SAS, label them "Standardized Coefficient". Sometimes the unstandardized coefficients are also labeled as "b". References. <templatestyles src="Reflist/styles.css" />
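The re-scaling described above is easy to check numerically. The following is a minimal Python sketch (not from the article; the data and variable names are invented for illustration) showing that multiplying the unstandardized slope by s_x/s_y gives the same value as regressing the z-scored variables directly:

```python
import numpy as np

# Invented example data: predictor x and response y with a linear trend plus noise.
rng = np.random.default_rng(0)
x = rng.normal(50, 10, size=200)
y = 3.0 * x + rng.normal(0, 25, size=200)

b = np.polyfit(x, y, 1)[0]                      # unstandardized slope of y on x
beta_star = b * x.std(ddof=1) / y.std(ddof=1)   # re-scaled: beta* = (s_x / s_y) * b

zx = (x - x.mean()) / x.std(ddof=1)             # z-scored predictor
zy = (y - y.mean()) / y.std(ddof=1)             # z-scored response
beta_star_direct = np.polyfit(zx, zy, 1)[0]     # slope from the standardized regression

print(beta_star, beta_star_direct)              # the two estimates agree
```

For a single predictor this value also equals the sample correlation between x and y, which is why the standardized coefficient is often read as an effect-size measure.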
[ { "math_id": 0, "text": " \\beta " }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "\\beta^\\ast = \\frac{s_x}{s_y} \\beta" }, { "math_id": 4, "text": "s_x" }, { "math_id": 5, "text": "s_y" } ]
https://en.wikipedia.org/wiki?curid=7338849
73390
Residue theorem
Concept of complex analysis In complex analysis, the residue theorem, sometimes called Cauchy's residue theorem, is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals and infinite series as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. The residue theorem should not be confused with special cases of the generalized Stokes' theorem; however, the latter can be used as an ingredient of its proof. Statement of Cauchy's residue theorem. The statement is as follows: Let formula_0 be a simply connected open subset of the complex plane containing a finite list of points formula_1 formula_2 and a function formula_3 holomorphic on formula_4 Letting formula_5 be a closed rectifiable curve in formula_6 and denoting the residue of formula_3 at each point formula_7 by formula_8 and the winding number of formula_5 around formula_7 by formula_9 the line integral of formula_3 around formula_5 is equal to formula_10 times the sum of residues, each counted as many times as formula_5 winds around the respective point: formula_11 If formula_5 is a positively oriented simple closed curve, formula_12 is formula_13 if formula_7 is in the interior of formula_5 and formula_14 if not; therefore formula_15 with the sum over those formula_7 inside formula_16 The relationship of the residue theorem to Stokes' theorem is given by the Jordan curve theorem. The general plane curve γ must first be reduced to a set of simple closed curves formula_17 whose total is equivalent to formula_5 for integration purposes; this reduces the problem to finding the integral of formula_18 along a Jordan curve formula_19 with interior formula_20 The requirement that formula_3 be holomorphic on formula_21 is equivalent to the statement that the exterior derivative formula_22 on formula_4 Thus if two planar regions formula_23 and formula_24 of formula_0 enclose the same subset formula_25 of formula_26 the regions formula_27 and formula_28 lie entirely in formula_6 hence formula_29 is well-defined and equal to zero. Consequently, the contour integral of formula_18 along formula_30 is equal to the sum of a set of integrals along paths formula_31 each enclosing an arbitrarily small region around a single formula_32 — the residues of formula_3 (up to the conventional factor formula_10) at formula_33 Summing over formula_34 we recover the final expression of the contour integral in terms of the winding numbers formula_35 In order to evaluate real integrals, the residue theorem is used in the following manner: the integrand is extended to the complex plane and its residues are computed (which is usually easy), and a part of the real axis is extended to a closed curve by attaching a half-circle in the upper or lower half-plane, forming a semicircular contour. The integral over this curve can then be computed using the residue theorem. Often, the half-circle part of the integral will tend towards zero as the radius of the half-circle grows, leaving only the real-axis part of the integral, the one we were originally interested in. Examples. An integral along the real axis. The integral formula_36 arises in probability theory when calculating the characteristic function of the Cauchy distribution. It resists the techniques of elementary calculus but can be evaluated by expressing it as a limit of contour integrals.
Suppose "t" > 0 and define the contour C that goes along the real line from −"a" to a and then counterclockwise along a semicircle centered at 0 from a to −"a". Take a to be greater than 1, so that the imaginary unit i is enclosed within the curve. Now consider the contour integral formula_37 Since "e""itz" is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator "z"2 + 1 is zero. Since "z"2 + 1 = ("z" + "i")("z" − "i"), that happens only where "z" = "i" or "z" = −"i". Only one of those points is in the region bounded by this contour. Because "f"("z") is formula_38 the residue of "f"("z") at "z" = "i" is formula_39 According to the residue theorem, then, we have formula_40 The contour C may be split into a straight part and a curved arc, so that formula_41 and thus formula_42 Using some estimations, we have formula_43 and formula_44 The estimate on the numerator follows since "t" > 0, and for complex numbers z along the arc (which lies in the upper half-plane), the argument φ of z lies between 0 and π. So, formula_45 Therefore, formula_46 If "t" < 0 then a similar argument with an arc "C"′ that winds around −"i" rather than "i" shows that formula_47 and finally we have formula_48 Evaluating zeta functions. The fact that "π" cot("πz") has simple poles with residue 1 at each integer can be used to compute the sum formula_49 Consider, for example, "f"("z") = "z"−2. Let Γ"N" be the rectangle that is the boundary of [−"N" − , "N" + ]2 with positive orientation, with an integer N. By the residue formula, formula_50 The left-hand side goes to zero as "N" → ∞ since formula_51 is uniformly bounded on the contour, thanks to using formula_52 on the left and right side of the contour, and so the integrand has order formula_53 over the entire contour. On the other hand, formula_54 where the Bernoulli number formula_55 (In fact, cot() = −.) Thus, the residue Res"z"=0 is −. We conclude: formula_56 which is a proof of the Basel problem. The same argument works for all formula_57 where formula_58 is a positive integer, giving usformula_59The trick does not work when formula_60, since in this case, the residue at zero vanishes, and we obtain the useless identity formula_61. Evaluating Eisenstein series. The same trick can be used to establish the sum of the Eisenstein series:formula_62 <templatestyles src="Math_proof/styles.css" />Proof Pick an arbitrary formula_63. As above, define formula_64 By the Cauchy residue theorem, for all formula_65 large enough such that formula_66 encircles formula_67, formula_68 It remains to prove the integral converges to zero. Since formula_69 is an even function, and formula_66 is symmetric about the origin, we have formula_70, and so formula_71 Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": "a_1, \\ldots, a_n," }, { "math_id": 2, "text": "U_0 = U \\smallsetminus \\{a_1, \\ldots, a_n\\}," }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "U_0." }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "U_0," }, { "math_id": 7, "text": "a_k" }, { "math_id": 8, "text": "\\operatorname{Res}(f, a_k)" }, { "math_id": 9, "text": "\\operatorname{I}(\\gamma, a_k)," }, { "math_id": 10, "text": "2\\pi i" }, { "math_id": 11, "text": "\n\\oint_\\gamma f(z)\\, dz = 2\\pi i \\sum_{k=1}^n \\operatorname{I}(\\gamma, a_k) \\operatorname{Res}(f, a_k).\n" }, { "math_id": 12, "text": "\\operatorname{I}(\\gamma, a_k)" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "0" }, { "math_id": 15, "text": "\n\\oint_\\gamma f(z)\\, dz = 2\\pi i \\sum \\operatorname{Res}(f, a_k)\n" }, { "math_id": 16, "text": "\\gamma." }, { "math_id": 17, "text": "\\{\\gamma_i\\}" }, { "math_id": 18, "text": "f\\, dz" }, { "math_id": 19, "text": "\\gamma_i" }, { "math_id": 20, "text": "V." }, { "math_id": 21, "text": "U_0 = U \\smallsetminus \\{a_k\\}" }, { "math_id": 22, "text": "d(f\\, dz) = 0" }, { "math_id": 23, "text": "V" }, { "math_id": 24, "text": "W" }, { "math_id": 25, "text": "\\{a_j\\}" }, { "math_id": 26, "text": "\\{a_k\\}," }, { "math_id": 27, "text": "V \\smallsetminus W" }, { "math_id": 28, "text": "W \\smallsetminus V" }, { "math_id": 29, "text": "\n\\int_{V \\smallsetminus W} d(f \\, dz) - \\int_{W \\smallsetminus V} d(f \\, dz)\n" }, { "math_id": 30, "text": "\\gamma_j = \\partial V" }, { "math_id": 31, "text": "\\gamma_j," }, { "math_id": 32, "text": "a_j" }, { "math_id": 33, "text": "\\{a_j\\}." }, { "math_id": 34, "text": "\\{\\gamma_j\\}," }, { "math_id": 35, "text": "\\{\\operatorname{I}(\\gamma, a_k)\\}." }, { "math_id": 36, "text": "\\int_{-\\infty}^\\infty \\frac{e^{itx}}{x^2+1}\\,dx" }, { "math_id": 37, "text": "\\int_C {f(z)}\\,dz = \\int_C \\frac{e^{itz}}{z^2+1}\\,dz." }, { "math_id": 38, "text": "\\begin{align}\n\\frac{e^{itz}}{z^2+1} & =\\frac{e^{itz}}{2i}\\left(\\frac{1}{z-i}-\\frac{1}{z+i}\\right) \\\\\n& =\\frac{e^{itz}}{2i(z-i)} -\\frac{e^{itz}}{2i(z+i)} ,\n\\end{align}" }, { "math_id": 39, "text": "\\operatorname{Res}_{z=i}f(z)=\\frac{e^{-t}}{2i}." }, { "math_id": 40, "text": "\\int_C f(z)\\,dz=2\\pi i\\cdot\\operatorname{Res}\\limits_{z=i}f(z)=2\\pi i \\frac{e^{-t}}{2i} = \\pi e^{-t}." }, { "math_id": 41, "text": "\\int_{\\mathrm{straight}} f(z)\\,dz+\\int_{\\mathrm{arc}} f(z)\\,dz=\\pi e^{-t}" }, { "math_id": 42, "text": "\\int_{-a}^a f(z)\\,dz =\\pi e^{-t}-\\int_{\\mathrm{arc}} f(z)\\,dz." }, { "math_id": 43, "text": "\\left|\\int_{\\mathrm{arc}}\\frac{e^{itz}}{z^2+1}\\,dz\\right| \\leq \\pi a \\cdot \\sup_{\\text{arc}} \\left| \\frac{e^{itz}}{z^2+1} \\right| \\leq \\pi a \\cdot \\sup_{\\text{arc}} \\frac{1}{|z^2+1|} \\leq \\frac{\\pi a}{a^2 - 1}," }, { "math_id": 44, "text": "\\lim_{a \\to \\infty} \\frac{\\pi a}{a^2-1} = 0." }, { "math_id": 45, "text": "\\left|e^{itz}\\right| = \\left|e^{it|z|(\\cos\\varphi + i\\sin\\varphi)}\\right|=\\left|e^{-t|z|\\sin\\varphi + it|z|\\cos\\varphi}\\right|=e^{-t|z| \\sin\\varphi} \\le 1." }, { "math_id": 46, "text": "\\int_{-\\infty}^\\infty \\frac{e^{itz}}{z^2+1}\\,dz=\\pi e^{-t}." }, { "math_id": 47, "text": "\\int_{-\\infty}^\\infty\\frac{e^{itz}}{z^2+1}\\,dz=\\pi e^t," }, { "math_id": 48, "text": "\\int_{-\\infty}^\\infty\\frac{e^{itz}}{z^2+1}\\,dz=\\pi e^{-\\left|t\\right|}." }, { "math_id": 49, "text": " \\sum_{n=-\\infty}^\\infty f(n)." 
}, { "math_id": 50, "text": "\\frac{1}{2 \\pi i} \\int_{\\Gamma_N} f(z) \\pi \\cot(\\pi z) \\, dz = \\operatorname{Res}\\limits_{z = 0} + \\sum_{n = -N \\atop n\\ne 0}^N n^{-2}." }, { "math_id": 51, "text": "|\\cot(\\pi z)|" }, { "math_id": 52, "text": "x = \\pm \\left(\\frac 12 + N\\right)" }, { "math_id": 53, "text": "O(N^{-2})" }, { "math_id": 54, "text": "\\frac{z}{2} \\cot\\left(\\frac{z}{2}\\right) = 1 - B_2 \\frac{z^2}{2!} + \\cdots " }, { "math_id": 55, "text": "B_2 = \\frac{1}{6}." }, { "math_id": 56, "text": "\\sum_{n = 1}^\\infty \\frac{1}{n^2} = \\frac{\\pi^2}{6}" }, { "math_id": 57, "text": "f(x) = x^{-2n}" }, { "math_id": 58, "text": "n" }, { "math_id": 59, "text": " \\zeta(2n) = \\frac{(-1)^{n+1}B_{2n}(2\\pi)^{2n}}{2(2n)!}." }, { "math_id": 60, "text": "f(x) = x^{-2n-1}" }, { "math_id": 61, "text": "0 + \\zeta(2n+1) - \\zeta(2n+1) = 0" }, { "math_id": 62, "text": "\\pi \\cot(\\pi z) = \\lim_{N \\to \\infty} \\sum_{n=-N}^N (z - n)^{-1}." }, { "math_id": 63, "text": "w \\in \\mathbb C\\setminus \\Z" }, { "math_id": 64, "text": "g(z) := \\frac{1}{w-z} \\pi \\cot(\\pi z)" }, { "math_id": 65, "text": "N" }, { "math_id": 66, "text": "\\Gamma_N" }, { "math_id": 67, "text": "w" }, { "math_id": 68, "text": " \\frac{1}{2 \\pi i} \\oint_{\\Gamma_N} g(z) dz = -\\pi \\cot(\\pi z) + \\sum_{n=-N}^N \\frac{1}{z-n}" }, { "math_id": 69, "text": "\\pi\\cot(\\pi z) /z" }, { "math_id": 70, "text": "\\oint_{\\Gamma_N} \\pi\\cot(\\pi z) /z dz = 0" }, { "math_id": 71, "text": "\\oint_{\\Gamma_N} g(z) dz = \\oint_{\\Gamma_N} \\left(\\frac 1z + \\frac{1}{w-z}\\right) \\pi\\cot(\\pi z)dz = -w \\oint_{\\Gamma_N} \\frac{1}{z(z-w)} \\pi\\cot(\\pi z) dz = O(1/N)" } ]
https://en.wikipedia.org/wiki?curid=73390
73390595
Subdivision bifiltration
Technique in topological data analysis In topological data analysis, a subdivision bifiltration is a collection of filtered simplicial complexes, typically built upon a set of data points in a metric space, that captures shape and density information about the underlying data set. The subdivision bifiltration relies on a natural filtration of the barycentric subdivision of a simplicial complex by flags of minimum dimension, which encodes density information about the metric space upon which the complex is built. The subdivision bifiltration was first introduced by Donald Sheehy in 2011 as part of his doctoral thesis (later subsumed by a conference paper in 2012) as a discrete model of the multicover bifiltration, a continuous construction whose underlying framework dates back to the 1970s. In particular, Sheehy applied the construction to both the Vietoris-Rips and Čech filtrations, two common objects in the field of topological data analysis. Whereas single-parameter filtrations are not robust with respect to outliers in the data, the subdivision-Rips and -Čech bifiltrations satisfy several desirable stability properties. Definition. Let formula_0 be a simplicial complex. Then a nested sequence of simplices formula_1 of formula_0 is called a "flag" or "chain" of formula_0. The set of all flags of formula_0 comprises an abstract simplicial complex, known as the "barycentric subdivision" of formula_0, denoted by formula_2. The barycentric subdivision is naturally identified with a geometric subdivision of formula_0, created by "starring" the geometric realization of formula_0 at the barycenter of each simplex. There is a natural filtration on formula_2 by considering for each natural number formula_3 the maximal subcomplex of formula_2 spanned by vertices of formula_2 corresponding to simplices of formula_0 of dimension at least formula_4, which is denoted formula_5. In particular, by this convention, formula_6. Considering the sequence of nested subcomplexes given by varying the parameter formula_3, we obtain a filtration on formula_2 known as the "subdivision filtration." Since the complexes in the subdivision filtration shrink as formula_3 increases, we can regard it as a functor formula_7 from the opposite posetal category formula_8 to the category formula_9 of simplicial complexes and simplicial maps. Let formula_10 be a partially ordered set. Given a simplicial filtration formula_11, regarded as a functor from the posetal category of formula_10 to the category formula_9, by applying the subdivision filtration object-wise on formula_12, we obtain a two-parameter filtration formula_13, called the "subdivision bifiltration." In particular, when we take formula_12 to be the Rips or Čech filtration, we obtain bifiltrations formula_14 and formula_15, respectively. Properties. The subdivision-Čech bifiltration is weakly equivalent to the multicover bifiltration, implying that they have isomorphic persistent homology. A combinatorial proof of this statement was given in Sheehy's original conference paper, but a more algebraic version was presented in 2017 by Cavanna et al. The ideas from Cavanna's proof were later generalized by Blumberg and Lesnick in a 2022 paper on 2-parameter persistent homology. By the "size" of a bifiltration, we mean the number of simplices in the largest complex. The subdivision-Čech bifiltration has exponential size as a function of the number of vertices. This implies that its homology cannot be directly computed in polynomial time.
However, for points in Euclidean space, the homology of subdivision-Čech can be computed in polynomial time, up to weak equivalence, via a construction known as the "rhomboid bifiltration." As a precursor to the rhomboid bifiltration, Edelsbrunner and Osang presented in 2021 a polyhedral cell complex called the "rhomboid tiling", which they used to compute horizontal or vertical slices of the multicover bifiltration up to weak equivalence. This was extended a year later by Corbet et al. to the rhomboid bifiltration, which is weakly equivalent to the multicover bifiltration, but has polynomial size.
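To make the subdivision filtration concrete, here is a small illustrative Python sketch (our own, not taken from the cited papers; the complex and the function names are invented). It builds the barycentric subdivision of a single triangle as the set of flags of its face poset and counts the faces of each filtration level, i.e. the subcomplex spanned by simplices of dimension at least k − 1:

```python
from itertools import combinations

def closure(maximal_simplices):
    """Close a list of maximal simplices under taking faces."""
    cx = set()
    for s in maximal_simplices:
        for r in range(1, len(s) + 1):
            cx.update(frozenset(c) for c in combinations(s, r))
    return cx

# T is the full triangle on vertices a, b, c (7 simplices in total).
T = closure([("a", "b", "c")])

def flags(complex_):
    """Enumerate all flags (chains) of T; these are the faces of Bary(T)."""
    simplices = sorted(complex_, key=len)
    chains = [[s] for s in simplices]
    out = []
    while chains:
        chain = chains.pop()
        extensions = [s for s in simplices if len(s) > len(chain[-1]) and chain[-1] < s]
        chains.extend(chain + [s] for s in extensions)
        out.append(tuple(chain))
    return out

bary = flags(T)

def subdivision_level(bary_complex, k):
    """Faces of Bary(T) spanned by simplices of T of dimension >= k - 1."""
    return [chain for chain in bary_complex if all(len(s) - 1 >= k - 1 for s in chain)]

for k in (1, 2, 3):
    print(k, len(subdivision_level(bary, k)))   # prints 25, 7, 1 faces respectively
```

For k = 1 this recovers all of Bary(T), in line with the convention stated in the definition; as k grows, the complexes shrink, which is exactly why the construction is viewed as a functor out of the opposite poset of the natural numbers.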
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "\\sigma_1 \\subset \\sigma_2 \\subset \\cdots \\subset \\sigma_k" }, { "math_id": 2, "text": "\\operatorname{Bary}(T)" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "k-1" }, { "math_id": 5, "text": "\\tilde \\mathcal S (T)_k" }, { "math_id": 6, "text": "\\tilde \\mathcal S (T)_1 = \\operatorname{Bary}(T)" }, { "math_id": 7, "text": "\\tilde \\mathcal S (-): \\mathbb N^\\operatorname{op} \\to \\mathbf{Simp}" }, { "math_id": 8, "text": "\\mathbb N^\\operatorname{op}" }, { "math_id": 9, "text": "\\mathbf{Simp}" }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "F: P \\to \\mathbf{Simp}" }, { "math_id": 12, "text": "F" }, { "math_id": 13, "text": "\\mathcal S (F): \\mathbb N^\\operatorname{op}\\times P \\to \\mathbf{Simp}" }, { "math_id": 14, "text": "\\mathcal S \\operatorname{Rips}(-)" }, { "math_id": 15, "text": "\\mathcal S \\operatorname{\\check{C}ech}(-)" } ]
https://en.wikipedia.org/wiki?curid=73390595
73415
Sieve of Eratosthenes
Ancient algorithm for generating prime numbers In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime. Once all the multiples of each discovered prime have been marked as composites, the remaining unmarked numbers are primes. The earliest known reference to the sieve (κόσκινον Ἐρατοσθένους, "kóskinon Eratosthénous") is in Nicomachus of Gerasa's "Introduction to Arithmetic", an early 2nd cent. CE book which attributes it to Eratosthenes of Cyrene, a 3rd cent. BCE Greek mathematician, though describing the sieving by odd numbers instead of by primes. One of a number of prime number sieves, it is one of the most efficient ways to find all of the smaller primes. It may be used to find primes in arithmetic progressions. Overview. <templatestyles src="Template:Quote_box/styles.css" /> "Sift the Two's and Sift the Three's:""The Sieve of Eratosthenes.""When the multiples sublime,""The numbers that remain are Prime." Anonymous A prime number is a natural number that has exactly two distinct natural number divisors: the number 1 and itself. To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method: 1. Create a list of consecutive integers from 2 through "n": (2, 3, 4, ..., "n"). 2. Initially, let "p" equal 2, the smallest prime number. 3. Enumerate the multiples of "p" by counting in increments of "p" from 2"p" to "n", and mark them in the list (these will be 2"p", 3"p", 4"p", ...; "p" itself should not be marked). 4. Find the smallest number in the list greater than "p" that is not marked. If there was no such number, stop. Otherwise, let "p" now equal this new number (which is the next prime), and repeat from step 3. 5. When the algorithm terminates, the numbers remaining not marked in the list are all the primes below "n". The main idea here is that every value given to p will be prime, because if it were composite it would be marked as a multiple of some other, smaller prime. Note that some of the numbers may be marked more than once (e.g., 15 will be marked both for 3 and 5). As a refinement, it is sufficient to mark the numbers in step 3 starting from "p"2, as all the smaller multiples of p will have already been marked at that point. This means that the algorithm is allowed to terminate in step 4 when "p"2 is greater than n. Another refinement is to initially list odd numbers only, (3, 5, ..., "n"), and count in increments of 2"p" in step 3, thus marking only odd multiples of p. This actually appears in the original algorithm. This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds (i.e., numbers coprime with 2), and counting in the correspondingly adjusted increments so that only such multiples of p are generated that are coprime with those small primes, in the first place. Example. To find all the prime numbers less than or equal to 30, proceed as follows.
First, generate a list of integers from 2 to 30:  2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 The first number in the list is 2; cross out every 2nd number in the list after 2 by counting up from 2 in increments of 2 (these will be all the multiples of 2 in the list):  2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 The next number in the list after 2 is 3; cross out every 3rd number in the list after 3 by counting up from 3 in increments of 3 (these will be all the multiples of 3 in the list):  2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 The next number not yet crossed out in the list after 3 is 5; cross out every 5th number in the list after 5 by counting up from 5 in increments of 5 (i.e. all the multiples of 5):  2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 The next number not yet crossed out in the list after 5 is 7; the next step would be to cross out every 7th number in the list after 7, but they are all already crossed out at this point, as these numbers (14, 21, 28) are also multiples of smaller primes because 7 × 7 is greater than 30. The numbers not crossed out at this point in the list are all the prime numbers below 30:  2 3 5 7 11 13 17 19 23 29 Algorithm and variants. Pseudocode. The sieve of Eratosthenes can be expressed in pseudocode, as follows: algorithm Sieve of Eratosthenes is input: an integer "n" > 1. output: all prime numbers from 2 through "n". let "A" be an array of Boolean values, indexed by integers 2 to "n", initially all set to true. for "i" = 2, 3, 4, ..., not exceeding "√n" do if "A"["i"] is true for "j" = "i"2, "i"2+"i", "i"2+2"i", "i"2+3"i", ..., not exceeding "n" do set "A"["j"] := false return all "i" such that "A"["i"] is true. This algorithm produces all primes not greater than n. It includes a common optimization, which is to start enumerating the multiples of each prime i from "i"2. The time complexity of this algorithm is "O"("n" log log "n"), provided the array update is an "O"(1) operation, as is usually the case. Segmented sieve. As Sorenson notes, the problem with the sieve of Eratosthenes is not the number of operations it performs but rather its memory requirements. For large n, the range of primes may not fit in memory; worse, even for moderate n, its cache use is highly suboptimal. The algorithm walks through the entire array A, exhibiting almost no locality of reference. A solution to these problems is offered by "segmented" sieves, where only portions of the range are sieved at a time. These have been known since the 1970s, and work as follows: the range 2 through "n" is divided into segments of some size Δ ≤ √"n"; the primes in the first segment are found with the regular sieve; then, for each following segment in increasing order, the multiples of each sieving prime "p" ≤ √"n" found so far are marked off within that segment. If Δ is chosen to be √"n", the space complexity of the algorithm is "O"(√"n"), while the time complexity is the same as that of the regular sieve. For ranges with upper limit "n" so large that the sieving primes below √"n" as required by the page segmented sieve of Eratosthenes cannot fit in memory, a slower but much more space-efficient sieve like the pseudosquares prime sieve, developed by Jonathan P. Sorenson, can be used instead. Incremental sieve. An incremental formulation of the sieve generates primes indefinitely (i.e., without an upper bound) by interleaving the generation of primes with the generation of their multiples (so that primes can be found in gaps between the multiples), where the multiples of each prime p are generated directly by counting up from the square of the prime in increments of p (or 2"p" for odd primes).
The generation must be initiated only when the prime's square is reached, to avoid adverse effects on efficiency. It can be expressed symbolically under the dataflow paradigm as "primes" = [2, 3, ...] \ [["p"2, "p"2+"p", ...] for "p" in "primes"], using list comprehension notation, with "\" denoting set subtraction of arithmetic progressions of numbers. Primes can also be produced by iteratively sieving out the composites through divisibility testing by sequential primes, one prime at a time. It is not the sieve of Eratosthenes but is often confused with it, even though the sieve of Eratosthenes directly generates the composites instead of testing for them. Trial division has worse theoretical complexity than that of the sieve of Eratosthenes in generating ranges of primes. When testing each candidate number, the "optimal" trial division algorithm uses all prime numbers not exceeding its square root, whereas the sieve of Eratosthenes produces each composite from its prime factors only, and gets the primes "for free", between the composites. The widely known 1975 functional sieve code by David Turner is often presented as an example of the sieve of Eratosthenes but is actually a sub-optimal trial division sieve. Algorithmic complexity. The sieve of Eratosthenes is a popular way to benchmark computer performance. The time complexity of calculating all primes below n in the random access machine model is "O"("n" log log "n") operations, a direct consequence of the fact that the prime harmonic series asymptotically approaches log log "n". It has an exponential time complexity with regard to length of the input, though, which makes it a pseudo-polynomial algorithm. The basic algorithm requires "O"("n") memory. The bit complexity of the algorithm is "O"("n" (log "n") (log log "n")) bit operations with a memory requirement of "O"("n"). The normally implemented page segmented version has the same operational complexity of "O"("n" log log "n") as the non-segmented version but reduces the space requirements to the very minimal size of the segment page plus the memory required to store the base primes less than the square root of the range used to cull composites from successive page segments of size "O"(). A special (rarely, if ever, implemented) segmented version of the sieve of Eratosthenes, with basic optimizations, uses "O"("n") operations and "O"(√"n") bits of memory. Using big O notation ignores constant factors and offsets that may be very significant for practical ranges: The sieve of Eratosthenes variation known as the Pritchard wheel sieve has an "O"("n") performance, but its basic implementation requires either a "one large array" algorithm which limits its usable range to the amount of available memory, or else it needs to be page segmented to reduce memory use. When implemented with page segmentation in order to save memory, the basic algorithm still requires about "O"() bits of memory (much more than the requirement of the basic page segmented sieve of Eratosthenes using "O"() bits of memory). Pritchard's work reduced the memory requirement at the cost of a large constant factor. Although the resulting wheel sieve has "O"("n") performance and an acceptable memory requirement, it is not faster than a reasonably wheel-factorized basic sieve of Eratosthenes for practical sieving ranges. Euler's sieve. Euler's proof of the zeta product formula contains a version of the sieve of Eratosthenes in which each composite number is eliminated exactly once.
The same sieve was rediscovered and observed to take linear time by Gries & Misra (1978). It, too, starts with a list of numbers from 2 to n in order. On each step the first element is identified as the next prime, is multiplied with each element of the list (thus starting with itself), and the results are marked in the list for subsequent deletion. The initial element and the marked elements are then removed from the working sequence, and the process is repeated:  [2] (3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...  [3] (5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...  [4] (7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...  [5] (11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ... Here the example is shown starting from odds, after the first step of the algorithm. Thus, on the kth step all the remaining multiples of the kth prime are removed from the list, which will thereafter contain only numbers coprime with the first k primes (cf. wheel factorization), so that the list will start with the next prime, and all the numbers in it below the square of its first element will be prime too. Thus, when generating a bounded sequence of primes, when the next identified prime exceeds the square root of the upper limit, all the remaining numbers in the list are prime. In the example given above that is achieved on identifying 11 as the next prime, giving a list of all primes less than or equal to 80. Note that numbers that will be discarded by a step are still used while marking the multiples in that step, e.g., for the multiples of 3 it is 3 × 3 = 9, 3 × 5 = 15, 3 × 7 = 21, 3 × 9 = 27, ..., 3 × 15 = 45, ..., so care must be taken dealing with this. References. <templatestyles src="Reflist/styles.css" />
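The pseudocode given earlier translates directly into a short program. The following Python sketch (ours; function names are invented) implements the classic sieve with the i² starting-point optimization, together with a linear-time variant in the spirit of Euler's sieve, in which each composite is removed exactly once via its smallest prime factor:

```python
def sieve_of_eratosthenes(n):
    """Return all primes <= n, following the pseudocode above."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    i = 2
    while i * i <= n:
        if is_prime[i]:
            # Start at i*i: smaller multiples were already marked by smaller primes.
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
        i += 1
    return [k for k, p in enumerate(is_prime) if p]


def linear_sieve(n):
    """Return all primes <= n; every composite is crossed out exactly once,
    as (its smallest prime factor) * (the corresponding cofactor)."""
    smallest_factor = [0] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if smallest_factor[i] == 0:        # i was never crossed out, so it is prime
            smallest_factor[i] = i
            primes.append(i)
        for p in primes:
            if p > smallest_factor[i] or i * p > n:
                break
            smallest_factor[i * p] = p     # i*p is removed once, by its least prime p
    return primes


assert sieve_of_eratosthenes(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert linear_sieve(30) == sieve_of_eratosthenes(30)
```

The classic version performs O(n log log n) marking operations, while the second version performs one marking per composite, which is the "eliminated exactly once" property discussed in the Euler's sieve section.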
[ { "math_id": 0, "text": "(k\\Delta + 1)^2 > (k+1)\\Delta" } ]
https://en.wikipedia.org/wiki?curid=73415
7341926
Academic grading in Germany
Overview of academic grading in Germany Germany uses a 5- or 6-point grading scale (GPA) to evaluate academic performance for the youngest to the oldest students. Grades vary from 1 (excellent, "sehr gut") to 5 (resp. 6) (insufficient, "nicht genügend"). In the final classes of German Gymnasium schools that prepare for university studies, a point system is used with 15 points being the best grade and 0 points the worst. The percentage of points required for a given grade can vary from teacher to teacher. Grades by education. Primary and lower secondary education. In primary and lower secondary education (1st to 10th grade), German school children receive grades based on a 6-point grading scale ranging from 1 (excellent, "sehr gut") to 6 (insufficient, "ungenügend"). Variations on the traditional six-grade system allow for awarding grades suffixed with "+" and "−". To calculate averages of suffixed grades, they are assigned fractional values, where 1 is 1.0, 1− is 1.3, 2+ is 1.7, 2 is 2.0, 2− is 2.3 and so on. As schools are governed by the states, not by the federal government, there are slight differences. Often a more granular scale of "1−" (equal to 1.25), "1-2" (= 1.5), "2+" (= 1.75), etc. is used; sometimes even decimal grading (1.0, 1.1, 1.2 and so on) is applied. In end-of-year report cards, only unmodified integer grades may be used; in some regions they are written in text form. Many states currently also prescribe the use of behaviour-based notes ("Kopfnoten"), which grade things such as "Orderliness" or "General Behaviour". Pedagogic grading. Teachers who teach Grundschule (primary school) or Sonderschule (special education school) are allowed to use "pädagogische Noten" ("pedagogic grades"). Thus, if a student tries very hard, but still does very poorly compared to the rest of the class, the teachers are allowed to give them good grades for the effort. Upper secondary education. In the final classes of Gymnasium schools (11th to 12th/13th grade) the grades are converted to numbers ("points"), where "1+" equals 15 points and "6" equals 0 points. Since 1+ exists in this system, theoretically a final Abitur grade of less than 1.0 is possible and such grades are used in an informal setting, although officially any student with less than 1.0 will be awarded a 1.0 Abitur. When the point system is used, a grade of 4 (5 points) is the lowest passing grade, and 4− (4 points) the highest failing grade. Tertiary education. German universities (except for law schools) grade with a scale of 1 to 5: Most of the universities use "mit Auszeichnung bestanden" (passed with distinction/excellent) if the grade is a perfect score of 1.0. Law schools. For law students at German universities, a similar system to the 1 to 5 scale is used that comprises one more grade that is inserted between 2 ("gut") and 3 ("befriedigend"), named "vollbefriedigend". This is because the grades 2 ("gut") and 1 ("sehr gut") are extremely rare, so an additional grade was created below "gut" to increase differentiation. Every grade is converted into points similarly to the "Gymnasium" system described above, starting at 18 points (excellent) down to 0 points (poor). 4 points is the lowest pass grade. Conversion of grades. A matter of particular interest for those considering studying abroad or even enrolling full-time in a German university is the conversion of grades. While the information below may prove useful, it is recommended to contact the interested university directly to inquire which method they use to convert grades.
Modified Bavarian formula. A number of systems exist for the conversion of grades from other countries into German grades. One such system, used by most universities in North Rhine-Westphalia and Bavaria, is called the "Modified Bavarian Formula": formula_0 where formula_1 = German grade, formula_2 = best possible score in foreign country's grading system, formula_3 = lowest passing score in foreign grading system and formula_4 = obtained foreign grade (to be converted into German grade). The resulting value is rounded to the nearest German grade (e.g. 1.6 is rounded to the German grade 1.7 and 2.4 is rounded to 2.3). For resulting values exactly halfway between two German grades, the score is rounded to the better grade (e.g. 2.5 is rounded to the German grade 2.3 and 1.15 is rounded to 1.0). Latin grades. Doctoral degrees in particular, e.g. Dr. phil. or Dr. rer. nat., are graded using Latin designations. In this case the grade (Note/Zensur) is called "Prädikat". The following rough guide may be used to convert into standard German grades: There is no fail grade; in that case the dissertation is "formally rejected" without a grade. East Germany (1950s–1980s). In former East Germany, a 5-point grading scale was used until July 1991: With the polytechnic reform of the school system initiated by the "Act on Socialistic Development of the School System in the German Democratic Republic", the Ministry of People's Education wanted to adapt academic grading for all institutions in its jurisdiction, which were general educational schools, vocational schools and professional schools for the qualification of lower classes teachers, educators and kindergartners. Therefore, a reorganized grading scale was enacted in "Directive on the introduction of a unified grading scale for secondary schools, extended secondary schools, special schools, vocational schools, institutes of vocational masters' education, institutes of vocational school teachers' education, institutes of vocational teachers' further education, institutes of teachers' education and pedagogic institutes". This directive remained in force, unchanged, from September 1, 1960, to August 25, 1993. For all of the different subjects there were further recommendations with even more specific descriptions in relation to the general grading scale. These comments were intended to help teachers grade the achievements of the students as objectively as possible. This scale is identical to the current Austrian grading scale. Criticism of German grading policies. The case of Sabine Czerny. At public schools in Germany, teachers are supposed to evaluate students against fixed course-specific criteria, but often feel implicit pressure to grade students on a curve where grades are awarded based on performance relative to all other individuals rather than performance relative to the difficulty of a specific course. Specifically, in 2008, Sabine Czerny, a Bavarian primary school teacher, thought that 91% of her class would be able to make a successful transition into a Realschule or a Gymnasium (high schools for which normally only about 50% of Bavarian children qualify based on their educational achievements). While the parents liked this result, the educational authorities questioned Czerny's grading standards. Czerny claims that her students' results stood up in cross-classroom tests; nonetheless she was transferred to another school. Czerny received much public sympathy and later went on to write a book about her experiences.
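Returning to the Modified Bavarian Formula given earlier in this entry, the conversion and its rounding convention can be sketched in a few lines of Python. The ladder of German grade steps used below (the usual 1.0–4.0 sequence) is an assumption of this example, not part of the formula itself:

```python
# Hypothetical grade steps of the German scale used for rounding.
GERMAN_STEPS = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0]

def bavarian_formula(n_max, n_min, n_obtained):
    """Convert a foreign grade to the German 1-4 scale: x = (Nmax - Nd)/(Nmax - Nmin)*3 + 1."""
    x = (n_max - n_obtained) / (n_max - n_min) * 3 + 1
    # Pick the closest step; on an exact tie, min() keeps the smaller (better) grade.
    return min(GERMAN_STEPS, key=lambda step: (abs(step - x), step))

# Example: a grade of 3.4 on a 4.0 scale where 2.0 is the lowest passing score.
print(bavarian_formula(4.0, 2.0, 3.4))   # raw value 1.9, rounded to 2.0
```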
Comparisons between Gymnasium and Gesamtschule (comprehensive school). German Gymnasiums are schools which aim to prepare students for college education. These schools are selective, and tough grading is a traditional element of the teaching. The culture of these schools works against students of average academic ability who barely qualify for a Gymnasium place, and who may then find themselves on the bottom of their class; these same students would have achieved better grades for the same effort if they had attended a non-selective comprehensive school (Gesamtschule). A study revealed that a sample of Gymnasium high school seniors of average mathematical ability who chose to attend advanced college-preparatory math classes at their school ("Leistungskurs") found themselves in the very bottom of their class and had an average grade of 5 (i.e. failed the class). Comprehensive school students of equal mathematical ability found themselves in the upper half of the equivalent course in their school and obtained an average grade of 3+. It was found that students who graduated from a Gesamtschule tend to do worse in college than their grades in high school classes would predict, and vice versa for Gymnasium students. Predictive ability. Often the German grades are treated like an interval scale to calculate means and deviations for comparisons. Even though it lacks any psychometric standardization, the grading system is often compared to normally distributed norm-referenced assessments. Using an expected value of 3 and a standard deviation of 1, transformations into other statistical measures like Percentiles, T, Stanine etc. or (like in the PISA studies) an IQ scale are then possible. This transformation is problematic both for high school grades and for university grades: At high school level, schooling in most of Germany is selective—thus for instance a "Gymnasium" student who is underperforming compared to his classmates may still be close to or above average when compared to his entire age group. At university level, the distribution is highly non-normal and idiosyncratic to the subject. Substantially more German students fail exams in university than in other countries (usually about 20–40%, often even more). Grades awarded vary widely between fields of study and between universities. (In law degrees, for instance, only 10–15% of candidates are awarded a grade better than "befriedigend".) This might be one reason for the low graduation rates at university in international comparisons, as well as for the small number of people who obtain an "Abitur" in the first place. However, several empirical psychological studies show that the grades awarded in Germany at school and university have a high reliability when taking up higher education and research jobs. Universities usually demand high grades in a Diploma or Master's thesis. Thesis grades are by far the most critical factor when applying for a job or for further study, e.g. a PhD. One study from 1995 found that GPAs from school are a weak predictor of success in university and a slightly better predictor of success in vocational training, and that GPAs from school or university have nearly no predictive value for job performance. Nevertheless, due to the rarity of psychometric testing (like the Scholastic Aptitude Test (SAT) or the Medical College Admission Test (MCAT) in the US), the GPA is usually used as the most predictive criterion available within an application process.
For job recruiting, school/university grades have a high impact on career opportunities, as independent, scientifically based recruitment and assessment is used by less than 8% of German employers (compared with 50–70% in other European countries). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "x={N_{\\mathrm{max}}-N_{\\mathrm{d}}\\over N_{\\mathrm{max}}-N_{\\mathrm{min}}}3+1" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "N_{\\mathrm{max}}" }, { "math_id": 3, "text": "N_{\\mathrm{min}}" }, { "math_id": 4, "text": "N_{\\mathrm{d}}" } ]
https://en.wikipedia.org/wiki?curid=7341926
73422453
Topological Yang–Mills theory
Topological field theory In gauge theory, topological Yang–Mills theory, also known as the theta term or formula_0-term, is a gauge-invariant term which can be added to the action for four-dimensional field theories, first introduced by Edward Witten. It does not change the classical equations of motion, and its effects are only seen at the quantum level, having important consequences for CPT symmetry. Action. Spacetime and field content. The most common setting is on four-dimensional, flat spacetime (Minkowski space). As a gauge theory, the theory has a gauge symmetry under the action of a gauge group, a Lie group formula_1, with associated Lie algebra formula_2 through the usual correspondence. The field content is the gauge field formula_3, also known in geometry as the connection. It is a formula_4-form valued in the Lie algebra formula_2. Action. In this setting the theta term action is formula_5 where formula_6 is the field strength tensor of the gauge field, defined by formula_7; formula_10 is its dual, defined by formula_11, with formula_12 the totally antisymmetric Levi-Civita tensor; formula_13 and formula_14 are the corresponding differential forms appearing in the second expression; and formula_15 denotes the trace over the Lie algebra (conventions differ by factors of formula_8 and of the coupling formula_9). As a total derivative. The action can be written as formula_16 where formula_17 is the Chern–Simons 3-form. This means that, classically, the theta term does not contribute to the equations of motion. References. <templatestyles src="Reflist/styles.css" />
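The total-derivative statement can be checked directly in differential-form language. The following is a brief sketch of that computation (our own presentation, using the conventions above, with d the exterior derivative and using cyclicity of the trace for form-valued matrices):

```latex
% Sketch: with F = dA + A \wedge A, the integrand tr(F \wedge F) is exact.
\begin{aligned}
\operatorname{tr}(F\wedge F)
  &= \operatorname{tr}\big((\mathrm{d}A + A\wedge A)\wedge(\mathrm{d}A + A\wedge A)\big)\\
  % the tr(A \wedge A \wedge A \wedge A) term vanishes by cyclicity of the trace
  &= \operatorname{tr}(\mathrm{d}A\wedge\mathrm{d}A)
     + 2\,\operatorname{tr}(\mathrm{d}A\wedge A\wedge A)\\
  &= \mathrm{d}\,\operatorname{tr}\!\left(A\wedge\mathrm{d}A
     + \tfrac{2}{3}\,A\wedge A\wedge A\right),
\end{aligned}
```

which is the exterior derivative of the Chern–Simons 3-form formula_17, matching the expression formula_16 up to the overall normalization.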
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\mathfrak{g}" }, { "math_id": 3, "text": "A_\\mu" }, { "math_id": 4, "text": "1" }, { "math_id": 5, "text": "S_\\theta = \\frac{\\theta}{16\\pi^2}\\int d^4x \\, \\text{tr}(F_{\\mu\\nu}*F^{\\mu\\nu}) = \\frac{\\theta}{16\\pi^2}\\int \\langle F \\wedge F \\rangle" }, { "math_id": 6, "text": "F_{\\mu\\nu}" }, { "math_id": 7, "text": "F_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu + [A_\\mu, A_\\nu]" }, { "math_id": 8, "text": "\\pm i" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "*F^{\\mu\\nu}" }, { "math_id": 11, "text": "*F^{\\mu\\nu} = \\frac{1}{2}\\epsilon^{\\mu\\nu\\rho\\sigma}F_{\\rho\\sigma}" }, { "math_id": 12, "text": "\\epsilon^{\\mu\\nu\\rho\\sigma}" }, { "math_id": 13, "text": "*F" }, { "math_id": 14, "text": "F" }, { "math_id": 15, "text": "\\text{tr}" }, { "math_id": 16, "text": " S_\\theta = \\frac{\\theta}{8\\pi^2} \\int d^4 x \\, \\partial_\\mu \\epsilon^{\\mu\\nu\\rho\\sigma} \\text{tr}\\left(A_\\nu \\partial_\\rho A_\\sigma + \\frac{2}{3}A_\\nu A_\\rho A_\\sigma\\right)\n= \\frac{\\theta}{8\\pi^2} \\int d^4 x \\, \\partial_\\mu \\epsilon^{\\mu\\nu\\rho\\sigma} \\text{CS}(A)_{\\nu\\rho\\sigma}," }, { "math_id": 17, "text": "\\text{CS}(A)" } ]
https://en.wikipedia.org/wiki?curid=73422453
734256
Molecular modelling
Discovering chemical properties by physical simulations Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual units (a molecular mechanics approach), or explicitly modelling the electrons of each atom (a quantum chemistry approach). Molecular mechanics. Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics. formula_0 formula_1 This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters is collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high-level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, formula_2.
Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects. Variables. Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as "gas-phase" simulations, while those that include the presence of solvent molecules are referred to as "explicit solvent" simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed "implicit solvation" simulations. Coordinate representations. Most force fields are distance-dependent, making Cartesian coordinates the most convenient representation for them. Yet the comparatively rigid nature of the bonds between specific atoms, which in essence defines what is meant by the designation "molecule", makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation, and conversely a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion to Cartesian conversion is the Natural Extension Reference Frame (NERF) method. Applications. Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular force field models are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
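As a concrete illustration of the non-bonded terms described above, here is a minimal Python sketch (ours; the σ, ε and partial-charge values are invented for illustration and do not come from any published force field) that evaluates the Lennard-Jones plus Coulomb energy for a toy pair of atoms:

```python
# Constant for Coulomb's law in kJ·mol⁻¹·nm·e⁻², the value commonly used when
# distances are in nm and charges in elementary charges.
COULOMB_CONSTANT = 138.935458

def lennard_jones(r, sigma, epsilon):
    """12-6 Lennard-Jones potential: 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, q1, q2):
    """Coulomb interaction between two point charges q1, q2 (in e) at distance r (nm)."""
    return COULOMB_CONSTANT * q1 * q2 / r

# Two atoms 0.3 nm apart carrying opposite partial charges.
r = 0.3
e_vdw = lennard_jones(r, sigma=0.32, epsilon=0.65)
e_elec = coulomb(r, q1=-0.3, q2=+0.3)
print("E_nonbonded =", e_vdw + e_elec, "kJ/mol")
```

In a full force field these pair terms are summed over all non-bonded atom pairs and added to the bond, angle and dihedral terms of the potential function; the forces used in molecular dynamics are then the negative gradients of this total energy.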
[ { "math_id": 0, "text": "E = E_\\text{bonds} + E_\\text{angle} + E_\\text{dihedral} + E_\\text{non-bonded} \\, " }, { "math_id": 1, "text": "E_\\text{non-bonded} = E_\\text{electrostatic} + E_\\text{van der Waals} \\, " }, { "math_id": 2, "text": " \\mathbf{F} = m\\mathbf{a}" } ]
https://en.wikipedia.org/wiki?curid=734256
73429915
Mathematics, science, technology and engineering of the Victorian era
Mathematics, science, technology and engineering of the Victorian era refers to the development of mathematics, science, technology and engineering during the reign of Queen Victoria. Professionalisation of science. Founded in 1799 with the stated purpose of "diffusing the Knowledge, and facilitating the general Introduction, of Useful Mechanical Inventions and Improvements; and for teaching, by Courses of Philosophical Lectures and Experiments, the application of Science to the common Purposes of Life," the Royal Institution was a proper scientific institution with laboratories, a lecture hall, libraries, and offices. In its first years, the Institution was dedicated to the improvement of agriculture using chemistry, prompted by trade restrictions with Europe. Such practical concerns continued through the next two centuries. However, it soon became apparent that additional funding was required in order for the Institution to continue. Some well-known experts were hired as lecturers and researchers. The most successful of them all was Sir Humphry Davy, whose lectures concerned a myriad of topics and were so popular that the original practical purpose of the Institution faded away. It became increasingly dominated by research in basic science. The professionalisation of science began in the aftermath of the French Revolution and soon spread to other parts of the Continent, including the German lands. It was slow to reach Britain, however. Master of Trinity College William Whewell coined the term "scientist" in 1833 to describe the new professional breed of specialists and experts studying what was still commonly known as "natural philosophy". In 1840, Whewell wrote, "We need very much a name to describe a cultivator of science in general. I should incline to call him a Scientist." The new term signalled the recognition of the importance of empiricism and inductive reasoning. But this term was slow to catch on. As biologist Thomas Huxley indicated in 1852, the prospect of earning a decent living as a scientist remained remote despite the prestige of the occupation. It was possible for a scientist to "earn praise but not pudding," he wrote. Since its birth, the Royal Society of London had been a club of gentlemanly amateurs, though some of them were the very best in their fields, people like Charles Darwin and James Prescott Joule. But the Society reformed itself in the 1830s and 1840s. By 1847, it only admitted the new breed of professionals. The Victorians were impressed by science and progress and felt that they could improve society in the same way as they were improving technology. Britain was the leading world centre for advanced engineering and technology. Its engineering firms were in worldwide demand for designing and constructing railways. Ease of discovery and rate of progress. A necessary part of understanding scientific progress is the ease of scientific discovery. In many cases, from planetary science to mammalian biology, the ease of discovery since the 1700s and 1800s can be fitted to an exponentially decaying curve. But the rate of progress is also dependent on other factors, such as the number of researchers, the level of funding, and advances in technology. Thus the number of new species of mammals discovered between the late 1700s and the late 1800s grew exponentially before leveling off in the 1900s; the general shape is known as the logistic curve. In other cases, a branch of study reached the point of saturation.
For instance, the last major internal human organ, the parathyroid gland, was discovered in 1880 by Ivar Viktor Sandström. This does not mean that basic science was coming to an end. Despite the despondency of many Victorian-era scientists, who thought that all that remained was measuring quantities to the next decimal place and that new discoveries would not change the contemporary scientific paradigm, as the nineteenth century became the twentieth, science witnessed truly revolutionary discoveries, such as radioactivity, and basic science continued its advance, though a number of twentieth-century scientists shared the same pessimism as their late-Victorian counterparts. Mathematics and statistics. In the field of statistics, the nineteenth century saw significant innovations in data visualisation. William Playfair, who created charts of all sorts, justified it thus, "a man who has carefully investigated a printed table, finds, when done, that he has only a very faint and partial idea of what he has read; and that like a figure imprinted on sand, is soon totally erased and defaced." For example, in a chart showing the relationship between population and government revenue of some European nations, he used the areas of circles to represent the geographical sizes of those nations. In the same graph he used the slopes of lines to indicate the tax burden of a given population. While serving as a nurse during the Crimean War, Florence Nightingale drew the first pie charts representing the monthly fatality rates of the conflict, distinguishing deaths due to battle wounds (innermost section), those due to infectious disease (outer section), and to other causes (middle section). Her charts clearly showed that most deaths resulted from disease, which led the general public to demand improved sanitation at field hospitals. Although bar charts representing frequencies were first used by the Frenchman A. M. Guerry in 1833, it was the statistician Karl Pearson who gave them the name "histograms". Pearson used them in an 1895 article mathematically analyzing biological evolution. One such histogram showed that buttercups with large numbers of petals were rarer. Normal distributions, expressible in the form formula_0, arose in various works on probability and the theory of errors. Belgian sociologist and statistician Adolphe Quetelet discovered its extremely wide applicability in his analysis of vast amounts of statistics of human physical characteristics such as height and other traits such as criminality and alcoholism. Quetelet derived the concept of the "average man" from his studies. Sir Francis Galton employed Quetelet's ideas in his research on mathematical biology. In his experiments with sweet peas in the 1870s, Galton discovered that the spread of the distributions of a particular trait did not change over the generations. He invented what he called the "quincunx" to demonstrate why mixtures of normal distributions were normal. Galton noticed that the means of a particular trait in the offspring generation differed from those of the parent generation, a phenomenon now known as regression to the mean. He found that the slopes of the regression lines of two given variables were the same if the two data sets were scaled by units of probable error and introduced the notion of the correlation coefficient, but noted that correlation does not imply causation.
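The idea behind Galton's quincunx is easy to reproduce numerically. The following small Python sketch (ours, purely illustrative) simulates balls taking independent left/right steps and compares the resulting binomial distribution with its normal approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, n_balls = 20, 100_000
# Each ball takes n_rows independent 0/1 steps; its final position is the number of "rights".
positions = rng.integers(0, 2, size=(n_balls, n_rows)).sum(axis=1)

print("sample mean:", positions.mean(), " expected:", n_rows / 2)
print("sample std: ", positions.std(), " expected:", np.sqrt(n_rows / 4))

# Frequency of the central bin versus the normal density at the same point.
central = np.mean(positions == n_rows // 2)
normal_peak = 1 / np.sqrt(2 * np.pi * n_rows / 4)
print(central, normal_peak)   # ≈ 0.176 and ≈ 0.178 for n_rows = 20
```

The close agreement between the histogram of the simulated positions and the bell curve is the phenomenon Quetelet and Galton exploited when applying the normal distribution to human traits.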
During the late nineteenth century, British statisticians introduced a number of methods to relate and draw conclusions from statistical quantities. Francis Edgeworth developed a test for statistical significance that estimated the "fluctuations"—twice the variance in modern language—from two given means. By modern standards, however, he was extremely conservative when it came to drawing conclusions about the significance of an observation. For Edgeworth, an observation was significant if it was at the level of 0.005, which is much stricter than the requirement of 0.05 to 0.01 commonly used today. Pearson defined the standard deviation and introduced the formula_1-statistic (chi-squared). Pearson's student, George Udny Yule, demonstrated that one could compute the regression equation of a given data set using the method of least squares. In 1828, miller and autodidactic mathematician George Green published "An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism", making use of the mathematics of potential theory developed by Continental mathematicians. But this paper fell on deaf ears until William Thomson read it, realised its significance, and had it re-printed in 1850. Green's work became a source of inspiration for the Cambridge school of mathematical physicists, which included Thomson himself, George Gabriel Stokes, and James Clerk Maxwell. Green's "Essay" contained what became known as Green's theorem, a basic result in vector calculus, Green's identities, and the notion of Green's functions, which appears in the study of differential equations. Thomson went on to prove Stokes' theorem, which earned that name after Stokes asked students to prove it in the Smith's Prize exam in 1854. Stokes learned it from Thomson in a letter in 1850. Stokes' theorem generalises Green's theorem, which itself is a higher-dimensional version of the Fundamental Theorem of Calculus. Research in physics—in particular elasticity, heat conduction, hydrodynamics, and electromagnetism—motivated the development of vector calculus in the nineteenth century. Arthur Cayley is credited with the creation of the theory of matrices—rectangular arrays of numbers—as distinct objects from determinants, studied since the mid-eighteenth century. The term "matrix" was coined by James Joseph Sylvester, a major contributor to the theory of determinants. It is difficult to overestimate the value of matrix theory to modern theoretical physics. Peter Tait wrote, prophetically, that Cayley was "forging the weapons for future generations of physicists." Theoretical mechanics and optics. Unsolved problem in physics: Under what conditions do solutions to the Navier–Stokes equations exist and are smooth? This is a Millennium Prize Problem in mathematics. Early contributions to the study of elasticity—how objects behave under stresses, pressures, and loads—employed "ad hoc" hypotheses to solve specific problems. It was during the nineteenth century that scientists began to work out a thorough theory. In 1821, using an analogy with elastic bodies, French professor of mechanics Claude-Louis Navier arrived at the basic equations of motion for viscous fluids. George Gabriel Stokes re-derived them in 1845 using continuum mechanics in a paper titled "On the Theories of Internal Friction of Fluids in Motion." In it, Stokes sought to develop a mathematical description for all known fluids that takes into account viscosity, or internal friction. 
These are now referred to as the Navier–Stokes equations. In 1852, Stokes showed that light polarisation can be described in terms of what are now known as the Stokes parameters. The Stokes parameters for a given wave may be viewed as a vector. Founded in the eighteenth century, the calculus of variations grew into a much-favoured mathematical tool among physicists. Scientific problems thus became the impetus for the development of the subject. William Rowan Hamilton advanced it in the course of constructing a deductive framework for optics; he then applied the same ideas to mechanics. With an appropriate variational principle, one could deduce the equations of motion for a given mechanical or optical system. Soon, scientists worked out the variational principles for the theory of elasticity, electromagnetism, and fluid mechanics (and, in the future, relativity and quantum theory). Whilst variational principles did not necessarily provide a simpler way to solve problems, they were of interest for philosophical or aesthetic reasons, though scientists at this time were not as motivated by religion in their work as their predecessors. Hamilton's work in physics was a great achievement; he was able to provide a unifying mathematical framework for wave propagation and particle motion. In light of this description, it becomes clear why the wave and corpuscle theories of light were equally able to account for the phenomena of reflection and refraction. Hamilton's equations also proved useful in calculating planetary orbits. In 1845, John James Waterston submitted to the Royal Society a paper on the kinetic theory of gases that included a statement of the equipartition theorem and a calculation of the ratio of the specific heats of gases. Although the paper was read before the Society and its abstract published, Waterston's paper faced antipathy. At this time, the kinetic theory of gases was considered highly speculative as it was based on the then unaccepted atomic hypothesis. But by the mid-1850s, interest was revived. In the 1860s, James Clerk Maxwell published a series of papers on the subject. Unlike those of his predecessors, who were only using averages, Maxwell's papers were explicitly statistical in nature. He proposed that the speeds of molecules in a gas followed a distribution. Although the speeds would cluster around the average, some molecules were moving faster or slower than this average. He showed that this distribution is a function of temperature and mathematically described various properties of gases, such as diffusion and viscosity. He predicted, surprisingly, that the viscosity of a gas is independent of its density. This was verified at once by a series of experiments Maxwell conducted with his wife, Katherine. Experimental verification of the Maxwell distribution was not obtained till 60 years later, however. In the meantime, the Austrian Ludwig Boltzmann developed Maxwell's statistics further and proved, in 1872, using the "formula_2-function," that the Maxwellian distribution is stable and any non-Maxwellian distribution would morph into it. In his "Dynamics of Rigid Bodies" (1877), Edward John Routh noted the importance of what he called "absent coordinates," also known as cyclic coordinates or ignorable coordinates (following the terminology of E. T. Whittaker). Such coordinates are associated with conserved momenta and as such are useful in problem solving. Routh also devised a new method for solving problems in mechanics. 
Although Routh's procedure does not add any new insights, it allows for more systematic and convenient analysis, especially in problems with many degrees of freedom and at least some cyclic coordinates. In 1899, at the request of the British Association for the Advancement of Science from the year before, Edmund Taylor Whittaker submitted his "Report on the Progress of Solution to the Problem of Three Bodies". At that time, classical mechanics in general and the three-body problem in particular captured the imagination of many talented mathematicians, whose contributions Whittaker covered in his "Report". Whittaker later incorporated the "Report" into his textbook titled "Analytical Dynamics of Particles and Rigid Bodies" (first edition 1907). It helped provide the scientific basis for the aerospace industry in the twentieth century. Despite its age, it remains in print in the early twenty-first century. Thermodynamics, heat engines, and refrigerators. During the 1830s and 1840s, the traditional caloric theory of heat began losing favour to "dynamical" alternatives, which posited that heat is a kind of motion. Brewer and amateur scientist James Prescott Joule was one of the proponents of the latter. Joule's intricate experiments—the most successful of which involved heating water with paddle wheels—made full use of his skill in temperature control as a brewer and demonstrated decisively the reality of the "mechanical equivalent of heat." What would later become known as the "conservation of energy" was pursued by many other workers approaching the subject from a variety of backgrounds, from medicine and physiology to physics and engineering. Another notable contributor to this development was the German researcher Hermann von Helmholtz, who gave an essentially Newtonian, that is, mechanical, account. William Thomson (later Lord Kelvin) received the works of Joule and Helmholtz positively, embracing them as providing support for the emerging "science of energy." From the late 1840s to the 1850s, Kelvin, his friend William John Macquorn Rankine, and the German Rudolf Clausius published a steady stream of papers concerning heat engines and an absolute temperature scale. Indeed, the commercial value of the new science had already become apparent by this time; some businessmen were quite willing to offer generous financial support for researchers. Rankine spoke confidently of the new science of "thermodynamics", a term Kelvin coined in 1854, whose fundamental principles came to be known as the First and Second Laws and whose core concepts were "energy" and "entropy." Kelvin and Peter Guthrie Tait's "Treatise on Natural Philosophy" (1867) was an attempt to reformulate physics in terms of energy. Here, Kelvin and Tait introduced the phrase "kinetic energy" (instead of 'actual'), now in standard usage. The phrase "potential energy" was promoted by Rankine. On the practical side, the food-preserving effect of low temperatures had long been recognised. Natural ice was vigorously traded in the early nineteenth century, but it was inevitably in short supply, especially in Australia. During the eighteenth and nineteenth centuries, there was considerable commercial incentive to develop ever more effective refrigerators thanks to the expansion of agriculture in the Americas, Australia, and New Zealand and rapid urbanisation in Western Europe. 
From the 1830s onward, refrigerators relied on the expansion of compressed air or the evaporation of a volatile liquid; evaporation became the basis of all modern refrigerator designs. Long-distance shipping of perishable foods, such as meat, boomed in the late 1800s. On the theoretical side, new refrigeration techniques were also of great value. From his absolute temperature scale, Lord Kelvin deduced the existence of absolute zero occurring at −273.15 °C. Scientists began trying to reach ever lower temperatures and to liquefy every gas they encountered. This paved the way for the development of low-temperature physics and the Third Law of Thermodynamics. Natural history. The study of natural history was most powerfully advanced by Charles Darwin and his theory of evolution first published in his book "On the Origin of Species" in 1859. Research in geology and evolutionary biology naturally led to the question of how old the Earth was. Indeed, between the mid-1700s and the mid-1800s, this was the topic of increasingly sophisticated intellectual discussions. With the advent of thermodynamics, it became clear that the Earth and the Sun must have an old but finite age. Whatever the energy source of the Sun, it must be finite, and since it is constantly dissipating, there must be a day when the Sun runs out of energy. Lord Kelvin wrote in 1852, "...within a finite period of time past the earth must have been, and within a finite period of time to come the earth must again be, unfit for the habitation of man as at present constituted, unless operations have been, or are to be performed, which are impossible under the laws to which the known operations going on are subject." In the 1860s, Kelvin employed a mathematical model by von Helmholtz suggesting that the energy of the Sun is released via gravitational collapse to calculate the age of the Sun to be between 50 and 500 million years. He reached comparable figures for the Earth. The missing ingredient here was radioactivity, which was not known to science till the end of the nineteenth century. Electricity, magnetism, and electrification. After the Dane Hans Christian Ørsted demonstrated that it was possible to deflect a magnetic needle by closing or opening an electric circuit nearby, a deluge of papers attempting to explain the phenomenon was published. Michael Faraday set himself to the task of clarifying the nature of electricity and magnetism by experiments. In doing so, he devised what could be described as the first electric motor (though it does not resemble a modern one), a transformer (now used to step up the voltage and step down the current or vice versa), and a dynamo (which contains the basics of all electric turbine generators). The practical value of Faraday's research on electricity and magnetism was nothing short of revolutionary. A dynamo converts mechanical energy into an electrical current whilst a motor does the reverse. The world's first power plants entered service in 1883, and by the following year, people realised the possibility of using electricity to power a variety of household appliances. Inventors and engineers soon raced to develop such items, starting with affordable and durable incandescent light bulbs, perhaps the most important of the early applications of electricity. As the foremost expert on electricity and magnetism at the time, Lord Kelvin oversaw the laying of the trans-Atlantic telegraphic cable, which became successful in 1866. 
Drawing on the work of his predecessors, especially the experimental research of Michael Faraday, the analogy with heat flow by Lord Kelvin, and the mathematical analysis of George Green, James Clerk Maxwell synthesized all that was known about electricity and magnetism into a single mathematical framework, Maxwell's equations. Maxwell used his equations to predict the existence of electromagnetic waves, which travel at the speed of light. In other words, light is but one kind of electromagnetic wave. Maxwell's theory predicted there ought to be other types, with different frequencies. After some ingenious experiments, Maxwell's prediction was confirmed by German physicist Heinrich Hertz. In the process, Hertz generated and detected what are now called radio waves and built crude radio antennas and the predecessors of satellite dishes. Dutch physicist Hendrik Lorentz derived, using suitable boundary conditions, Fresnel's equations for the reflection and transmission of light in different media from Maxwell's equations. He also showed that Maxwell's theory succeeded in illuminating the phenomenon of light dispersion where other models failed. John William Strutt (Lord Rayleigh) and the American Josiah Willard Gibbs then proved that the optical equations derived from Maxwell's theory are the only self-consistent description of the reflection, refraction, and dispersion of light consistent with experimental results. Optics thus found a new foundation in electromagnetism. But it was Oliver Heaviside, an enthusiastic supporter of Maxwell's electromagnetic theory, who deserves most of the credit for shaping how people understood and applied Maxwell's work for decades to come. Maxwell originally wrote down a grand total of 20 equations for the electromagnetic field, which he later reduced to eight. Heaviside rewrote them in the form commonly used today, just four expressions. In addition, Heaviside was responsible for considerable progress in electrical telegraphy, telephony, and the study of the propagation of electromagnetic waves. Independent of Gibbs, Heaviside assembled a set of mathematical tools known as vector calculus to replace the quaternions, which were in vogue at the time but which Heaviside dismissed as "antiphysical and unnatural." Faraday also investigated how electrical currents affected chemical solutions. His experiments led him to the two laws of electrochemistry. Together with Whewell, Faraday introduced the basic vocabulary for the subject, the words "electrode", "anode", "cathode", "electrolysis", "electrolyte", "ion", "anion", and "cation". They remain in standard usage. But Faraday's work was of value to more than just chemists. In his Faraday Memorial Lecture in 1881, the German Hermann von Helmholtz asserted that Faraday's laws of electrochemistry hinted at the atomic structure of matter. If the chemical elements were distinguishable from one another by simple ratios of mass, and if the same amounts of electricity deposited amounts of these elements upon the poles in ratios, then electricity must also come in as discrete units, later named electrons. In the late nineteenth century, the nature of the energy emitted by the discharge between high-voltage electrodes inside an evacuated tube—cathode rays—attracted the attention of many physicists. While the Germans thought cathode rays were waves, the British and the French believed they were particles. Working at the Cavendish Laboratory, established by Maxwell, J. J. 
Thomson directed a dedicated experiment demonstrating that cathode rays were in fact negatively charged particles, now called electrons. The experiment enabled Thomson to calculate the ratio between the magnitude of the charge and the mass of the particle (formula_3). In addition, because the ratio was the same regardless of the metal used, Thomson concluded that electrons must be a constituent of all atoms. Although the atoms of each chemical element have different numbers of electrons, all electrons are identical. Computer science and logic. Inspired by the explorations in abstract algebra of George Peacock and Augustus de Morgan, George Boole published a book titled "An Investigation of the Laws of Thought" (1854), in which he brought the study of logic from philosophy and metaphysics to mathematics. His stated goal was to "investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method." Although ignored at first, Boolean algebra, as it is now known, became central to the design of circuits and computers in the following century. The desire to construct calculating machines is not new. In fact, it can be traced all the way back to the Hellenistic Civilization. While people have devised such machines over the centuries, mathematicians continued to perform calculations by hand, as machines offered little advantage in speed. For complicated calculations, they employed tables, especially of logarithmic and trigonometric functions, which were computed by hand. But right in the middle of the Industrial Revolution in England, Charles Babbage thought of using the all-important steam engine to power a mechanical computer, the Difference Engine. Unfortunately, whilst Babbage managed to secure government funds for the construction of the machine, the government subsequently lost interest and Babbage faced considerable trouble developing the necessary machine components. He abandoned the project to pursue a new one, his Analytical Engine. By 1838, he had worked out the basic design. Like a modern computer, it consisted of two basic parts, one that stored the numbers to be processed (the store) and one that performed the operations (the mill). Babbage adopted the concept of punch cards from the French engineer Joseph Jacquard, who had used it to automate the textile industry in France, to control the operations of his Analytical Engine. Unfortunately, he again lacked the financial resources to build it, and so it remained a theoretical construct. But he did leave behind detailed notes and engineering drawings, from which modern experts conclude that the technology of the time was advanced enough to actually build it, even if he never had enough money to do so. In 1840, Babbage went to Turin to give lectures on his work designing the Analytical Engine to Italian scientists. Ada Lovelace translated the notes published by one of the attendees into English and heavily annotated them. She wrote down the very first computer program, in her case one for computing the Bernoulli numbers. She employed what modern computer programmers would recognise as loops and decision steps, and gave a detailed diagram, possibly the first flowchart ever created. She noted that a calculating machine could perform not just arithmetic operations but also symbolic manipulations. 
On the limitations and implications of the computer, she wrote, Communication and transportation. Steam ships. Steam ships were one of the keys to Britain's prosperity in the nineteenth century. This technology, which predates the Victorian era, had a long and rich history. Starting in the late 1700s, people had begun building steam-powered ships with ever-increasing size, operational range, and speed, first to cross the English Channel and then the Atlantic and finally to reach places as far away as India and Australia without having to refuel mid-route. International trade and travel boosted demand, and there was intense competition among the shipping companies. Steam ships such as the SS "Great Britain" and SS "Great Western" made international travel more common but also advanced trade, so that in Britain it was not just the luxury goods of earlier times that were imported into the country but essentials and raw materials such as corn and cotton from the United States and meat and wool from Australia. At 693 feet long, 120 feet wide and weighing over 18,900 tons, the SS "Great Eastern" was the largest ship built at the time, capable of transporting 4,000 passengers from Britain to Australia without having to refuel along the way. Even when she was finally broken up for scrap in 1888, she was still the largest ship in the world. Her record was not broken till the Edwardian era with super liners like the "Lusitania" in 1907 and the "Titanic" in 1912. Yet despite being a remarkable feat of engineering, the "Great Eastern" became more and more of a white elephant as smaller and faster ships were in greater demand. Nevertheless, she gained a new lease of life when she was chartered to lay telegraphic cables across the Atlantic, and then to India. Her size and range made her ideally suited for the task. The British government had long realised that national prosperity depended on trade. For that reason, it deployed the Royal Navy to protect maritime trade routes and financed the construction of many steam ships. Telegraphy, telephony, the wireless, and photography. Although the idea of transmitting messages via electrical signals dated back to the eighteenth century, it was not until the 1820s that advances in the study of electricity and magnetism made that a practical reality. In 1837, William Fothergill Cooke and Charles Wheatstone invented a telegraphic system that used electrical currents to deflect magnetic needles, thus transmitting coded messages. This design soon made its way all across Britain, appearing in every town and post office. By the mid-1800s, a telegraphic cable was laid across the English Channel, the Irish Sea, and the North Sea. In 1866, the SS "Great Eastern" successfully laid the transatlantic telegraphic cable. A global network boomed towards the end of the century. In 1876, Alexander Graham Bell patented the telephone. Like the telegraph, the telephone enabled rapid personal communication. A little over a decade later, 26,000 telephones were in service in Britain (and 150,000 in America). Multiple switchboards were installed in every major town and city. Hertz's experimental work in electromagnetism stimulated interest in the possibility of wireless communication, which did not require long and expensive cables and was faster than even the telegraph. Receiving little support in his native Italy, Guglielmo Marconi moved to England and adapted Hertz's equipment for this purpose in the 1890s. 
He achieved the first international wireless transmission, between England and France, in 1899, and in 1901 he succeeded in sending messages in Morse code across the Atlantic. Seeing its value, the shipping industry adopted this technology at once. Radio broadcasting became extremely popular in the twentieth century and remains in common use in the early twenty-first. In fact, the global communications network of the twenty-first century has its roots in the Victorian era. Photography was realised in 1839 by Louis Daguerre in France and William Fox Talbot in Britain. By 1889, hand-held cameras were available. Another important innovation in communications was the Penny Black, the first postage stamp, which standardised postage to a flat price regardless of distance sent. Railways. A central development during the Victorian era was the rise of rail transport. The new railways all allowed goods, raw materials, and people to be moved about rapidly, facilitating trade and industry. The financing of railways became an important specialty of London's financiers. They retained an ownership share even while turning over management to locals; that ownership was largely liquidated in 1914–1916 to pay for the World War. Railroads originated in England because industrialists had already discovered the need for inexpensive transportation to haul coal for the new steam engines, to supply parts to specialised factories, and to take products to market. The existing system of canals was inexpensive but was too slow and too limited in geography. The railway system led to a reorganisation of society more generally, with "railway time" being the standard by which clocks were set throughout Britain; the complex railway system set the standard for technological advances and efficiency. The engineers and businessmen needed to create and finance a railway system were available; they knew how to invent, to build, and to finance a large complex system. The first quarter of the 19th century involved numerous experiments with locomotives and rail technology. By 1825 railways were commercially feasible, as demonstrated by George Stephenson (1781–1848) when he built the Stockton and Darlington Railway. On his first run, his locomotive pulled 38 freight and passenger cars at speeds as high as 12 miles per hour. Stephenson went on to design many more railways and is best known for standardising designs, such as the "standard gauge" of rail spacing, at 4 feet 8½ inches. Thomas Brassey (1805–70) was even more prominent, operating construction crews that at one point in the 1840s totalled 75,000 men throughout Europe, the British Empire, and Latin America. Brassey took thousands of British engineers and mechanics across the globe to build new lines. They invented and improved thousands of mechanical devices, and developed the science of civil engineering to build roadways, tunnels and bridges. Britain had a superior financial system based in London that funded both the railways in Britain and also in many other parts of the world, including the United States, up until 1914. The boom years were 1836 and 1845–47 when Parliament authorised 8,000 miles of lines at a projected cost of £200 million, which was about the same value as the country's annual Gross Domestic Product (GDP) at that time. A new railway needed a charter, which typically cost over £200,000 (about $1 million) to obtain from Parliament, but opposition could effectively prevent its construction. 
The canal companies, unable or unwilling to upgrade their facilities to compete with railways, used political power to try to stop them. The railways responded by purchasing about a fourth of the canal system, in part to get the right of way, and in part to buy off critics. Once a charter was obtained, there was little government regulation, as laissez-faire and private ownership had become accepted practices. The different lines typically had exclusive territory, but given the compact size of Britain, this meant that multiple competing lines could provide service between major cities. George Hudson (1800–1871) became the "railway king" of Britain. He merged various independent lines and set up a "Clearing House" in 1842 which rationalised interconnections by establishing uniform paperwork and standard methods for transferring passengers and freight between lines, and rates when one system used freight cars owned by another. By 1850, rates had fallen to a penny a ton mile for coal, at speeds of up to fifty miles an hour. Britain now had the model for the world: a well-integrated, well-engineered system that allowed fast, cheap movement of freight and people, and which could be replicated in other major nations. The railways directly or indirectly employed tens of thousands of engineers, mechanics, repairmen and technicians, as well as statisticians and financial planners. They developed new and more efficient and less expensive techniques. Most important, they created a mindset of how technology could be used in many different forms of business. Railways had a major impact on industrialisation. By lowering transportation costs, they reduced costs for all industries moving supplies and finished goods, and they increased demand for the production of all the inputs needed for the railroad system itself. By 1880, there were 13,500 locomotives which each carried 97,800 passengers a year, or 31,500 tons of freight. Member of Parliament and Solicitor to the City of London Charles Pearson campaigned for an underground rail service in London. Parts of the first such railway, the Metropolitan Line, opened to the public in 1863, thereby becoming the first subway line in the world. Trains were originally steam-powered, but in 1890, the first electric trains entered service. That same year, the whole system became officially known as the Tube after the shape of the rail tunnels. (It was not until 1908 that the name London Underground was introduced.) India provides an example of the London-based financiers pouring money and expertise into a very well-built system designed for military reasons (after the Mutiny of 1857), and with the hope that it would stimulate industry. The system was overbuilt and much too elaborate and expensive for the small amount of freight traffic it carried. However, it did capture the imagination of the Indians, who saw their railways as the symbol of an industrial modernity—but one that was not realised until a century or so later. Public safety, health and medicine. A gas network for lighting and heating was introduced in the 1880s. The model town of Saltaire was founded, along with others, as a planned environment with good sanitation and many civic, educational and recreational facilities, although it lacked a pub, which was regarded as a focus of dissent. Although initially developed in the early years of the 19th century, gas lighting became widespread during the Victorian era in industry, homes, public buildings and the streets. 
The invention of the incandescent gas mantle in the 1890s greatly improved light output and ensured its survival as late as the 1960s. Hundreds of gasworks were constructed in cities and towns across the country. In 1882, incandescent electric lights were introduced to London streets, although it took many years before they were installed everywhere. Medicine progressed during Queen Victoria's reign. In fact, medicine at the start of the nineteenth century was little different from that of the medieval era, whereas by the end of the century it had become much closer to twenty-first-century practice thanks to advances in science, especially microbiology, which paved the way for the germ theory of disease. This was during the height of the Industrial Revolution, and urbanisation occurred at a frantic pace. As the population density of the cities grew, epidemics of cholera, smallpox, tuberculosis, and typhus were commonplace. After studying previous outbreaks, physician John Snow drew the conclusion that cholera was a water-borne disease. When the 1854 epidemic broke out, Snow mapped the locations of the cases in Soho, London, and found that they centred around a well he deemed contaminated. He asked that the pump's handle be removed, after which the epidemic petered out. Snow also discovered that households whose water supplies came from companies that drew on the Thames downstream, after many sewers had flowed into the river, were fourteen times more likely to die from cholera. He thus recommended boiling water before use. Sanitation reforms, prompted by the Public Health Acts 1848 and 1869, were made in the crowded, dirty streets of the existing cities, and soap was the main product shown in the relatively new phenomenon of advertising. A great engineering feat in the Victorian Era was the sewage system in London. It was designed by Joseph Bazalgette in 1858. He proposed to build an extensive system of main sewers linked with a far larger network of street sewers. Many problems were encountered but the sewers were completed. After this, Bazalgette designed the Thames Embankment which housed sewers, water pipes and the London Underground. During the same period, London's water supply network was expanded and improved. John Simon, as chief medical officer of the General Board of Health, secured funds for research into various common infectious diseases at the time, including cholera, diphtheria, smallpox, and typhus. Using his political influence, he garnered support for the Public Health Act of 1875, which focused on preventative measures in housing, the water supply, sewage and drainage, providing Britain with an extensive public health system. By mid-century, the stethoscope became an oft-used device and designs of the microscope had advanced enough for scientists to closely examine pathogens. The pioneering work of French microbiologist Louis Pasteur from the 1850s earned widespread acceptance for the germ theory of disease. It led to the introduction of antiseptics by Joseph Lister in 1867 in the form of carbolic acid (phenol). He instructed the hospital staff to wear gloves and wash their hands, instruments, and dressings with a phenol solution and in 1869, he invented a machine that would spray carbolic acid in the operating theatre during surgery. Infection-related deaths fell noticeably as a result. As the British Empire expanded, Britons found themselves facing novel climates and contagions; there was active research into tropical diseases. In 1898, Ronald Ross proved that the mosquito was responsible for spreading malaria. 
Although nitrous oxide, or laughing gas, had been proposed as an anaesthetic as far back as 1799 by Humphry Davy, it was not until 1846, when an American dentist named William Morton started using ether on his patients, that anaesthetics became common in the medical profession. In 1847 chloroform was introduced as an anaesthetic by James Young Simpson. Chloroform was favoured by doctors and hospital staff because it is much less flammable than ether, but critics complained that it could cause the patient to have a heart attack. Chloroform gained in popularity in England and Germany after John Snow gave Queen Victoria chloroform for the birth of her eighth child (Prince Leopold). By 1920, chloroform was used in 80 to 95% of all narcoses performed in the UK and German-speaking countries. A combination of antiseptics and anaesthetics helped surgeons operate more carefully and comfortably on their patients. Anaesthetics made painless dentistry possible. At the same time sugar consumption in the British diet increased, greatly increasing instances of tooth decay. As a result, more and more people were having teeth extracted and needing dentures. This gave rise to "Waterloo Teeth", which were real human teeth set into hand-carved pieces of ivory from hippopotamus or walrus jaws. The teeth were obtained from executed criminals, battlefield casualties, and grave-robbers, and were even bought directly from the desperately impoverished. The increase in tooth decay also brought the first prominent recommendation for fluoride as a nutrient, particularly in pregnancy and childhood, in 1892. News of the discovery of X-rays in 1895 spread like wildfire. Its medical value was realised immediately, and within a year, doctors were prescribing X-rays for diagnosis, in particular to locate bone fractures and foreign objects inside the patient's body. Radioactivity was discovered in 1896, and was later used to treat cancer. During the second half of the nineteenth century, British medical doctors became increasingly specialised, following in the footsteps of their German counterparts, and more hospitals were built. Surgeons began wearing gowns in the operating room, and doctors white coats and stethoscopes, sights that are common in the early twenty-first century. Yet despite all the aforementioned medical advances, the mortality rate fell only marginally, from 20.8 per thousand in 1850 to 18.2 by the end of the century. Urbanisation aided the spread of diseases and squalid living conditions in many places exacerbated the problem. Moreover, while some diseases, such as cholera, were being driven out, others, such as sexually transmitted diseases, made themselves felt.
[ { "math_id": 0, "text": "y = A e^{-kx^2}" }, { "math_id": 1, "text": "\\chi^2" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "q/m" } ]
https://en.wikipedia.org/wiki?curid=73429915
73429960
Imitation learning
Machine learning technique where agents learn from demonstrations Imitation learning is a paradigm in reinforcement learning, where an agent learns to perform a task by supervised learning from expert demonstrations. It is also called learning from demonstration and apprenticeship learning. It has been applied to underactuated robotics, self-driving cars, quadcopter navigation, helicopter aerobatics, and locomotion. Approaches. Expert demonstrations are recordings of an expert performing the desired task, often collected as state-action pairs formula_0. Behavior Cloning. Behavior Cloning (BC) is the most basic form of imitation learning. Essentially, it uses supervised learning to train a policy formula_1 such that, given an observation formula_2, it outputs an action distribution formula_3 that approximately matches the action distribution of the expert. BC is susceptible to distribution shift. Specifically, if the trained policy differs from the expert policy, it might stray from the expert trajectory into observations that never occurred in the expert demonstrations. This was already noted in the ALVINN project, which trained a neural network to drive a van using human demonstrations. They noticed that because a human driver never strays far from the path, the network would never be trained on what action to take if it ever found itself straying far from the path. DAgger. DAgger (Dataset Aggregation) improves on behavior cloning by iteratively training on a dataset of expert demonstrations. In each iteration, the algorithm first collects data by rolling out the learned policy formula_1. Then, it queries the expert for the optimal action formula_4 on each observation formula_2 encountered during the rollout. Finally, it aggregates the new data into the dataset formula_5 and trains a new policy on the aggregated dataset. Other approaches. See for more examples. Related approaches. Inverse Reinforcement Learning (IRL) learns a reward function that explains the expert's behavior and then uses reinforcement learning to find a policy that maximizes this reward. Generative Adversarial Imitation Learning (GAIL) uses generative adversarial networks (GANs) to match the distribution of agent behavior to the distribution of expert demonstrations. It extends a previous approach using game theory.
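The behavior cloning and DAgger procedures described above can be sketched in a few lines of Python. The one-dimensional environment, the scripted expert, and the linear least-squares policy below are illustrative assumptions made only for this example; a practical system would use a real simulator or robot, genuine expert labels, and a more expressive policy class such as a neural network.

```python
# Minimal, self-contained sketch of behavior cloning followed by DAgger.
# The toy dynamics, scripted expert, and linear policy are assumptions for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

def step(s, a):                       # toy dynamics: the action nudges the state
    return s + a + 0.01 * rng.normal()

def expert_action(s):                 # stand-in for the expert (drives state to 0)
    return -0.5 * s

def fit_policy(D):                    # supervised learning step (least squares)
    obs = np.array([o for o, _ in D])
    act = np.array([a for _, a in D])
    X = np.stack([obs, np.ones_like(obs)], axis=1)
    w, *_ = np.linalg.lstsq(X, act, rcond=None)
    return lambda s: w[0] * s + w[1]

# Behavior cloning: train once on states visited by the expert itself.
D = []
for _ in range(20):
    s = rng.uniform(-2, 2)
    for _ in range(10):
        a = expert_action(s)
        D.append((s, a))
        s = step(s, a)
policy = fit_policy(D)

# DAgger: roll out the learned policy, label the visited states with the
# expert's action, aggregate into D, and retrain.
for _ in range(5):
    s = rng.uniform(-2, 2)
    for _ in range(10):
        a = policy(s)                       # act with the learner
        D.append((s, expert_action(s)))     # but record the expert's label
        s = step(s, a)
    policy = fit_policy(D)

print("learned gain ~", policy(1.0) - policy(0.0))   # should approach -0.5
```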
[ { "math_id": 0, "text": "(o_t^*, a_t^*)" }, { "math_id": 1, "text": "\\pi_\\theta" }, { "math_id": 2, "text": "o_t" }, { "math_id": 3, "text": "\\pi_\\theta(\\cdot | o_t) " }, { "math_id": 4, "text": "a_t^* " }, { "math_id": 5, "text": "D \\leftarrow D \\cup \\{ (o_1, a_1^*), (o_2, a_2^*), ..., (o_T, a_T^*) \\}" } ]
https://en.wikipedia.org/wiki?curid=73429960
73438573
Dark-field X-ray microscopy
Synchrotron X-ray diffraction-based imaging technique Dark-field X-ray microscopy (DFXM or DFXRM) is an imaging technique used for multiscale structural characterisation. It is capable of mapping deeply embedded structural elements with nm-resolution using synchrotron X-ray diffraction-based imaging. The technique works by using scattered X-rays to create a high degree of contrast, and by measuring the intensity and spatial distribution of the diffracted beams, it is possible to obtain a three-dimensional map of the sample's structure, orientation, and local strain. History. The first experimental demonstration of dark-field X-ray microscopy was reported in 2006 by a group at the European Synchrotron Radiation Facility in Grenoble, France. Since then, the technique has been rapidly evolving and has shown great promise in multiscale structural characterization. Its development is largely due to advances in synchrotron X-ray sources, which provide highly collimated and intense beams of X-rays. The development of dark-field X-ray microscopy has been driven by the need for non-destructive imaging of bulk crystalline samples at high resolution, and it continues to be an active area of research today. However, dark-field microscopy, dark-field scanning transmission X-ray microscopy, and "soft" dark-field X-ray microscopy has long been used to map deeply embedded structural elements. Principles and instrumentation. In this technique, a synchrotron light source is used to generate an intense and coherent X-ray beam, which is then focused onto the sample using a specialized objective lens. The objective lens acts as a collimator to select and focus the scattered light, which is then detected by the 2D detector to create a diffraction pattern. The specialized objective lens in DFXM, referred to as an X-ray objective lens, is a crucial component of the instrumentation required for the technique. It can be made from different materials such as beryllium, silicon, and diamond, depending on the specific requirements of the experiment. The objective enables one to enlarge or reduce the spatial resolution and field of view within the sample by varying the number of individual lenses and adjusting formula_0 and formula_1 (as in the figure) correspondingly. The diffraction angle formula_2 is typically 10–30°. The sample is positioned at an angle such that the direct beam is blocked by a beam stop or aperture, and the diffracted beams from the sample are allowed to pass through a detector. An embedded crystalline element (for example, a grain or domain) of choice (green) is aligned such that the detector is positioned at a Bragg angle that corresponds to a particular diffraction peak of interest, which is determined by the crystal structure of the sample. The objective magnifies the diffracted beam by a factor formula_3 and generates an inverted 2D projection of the grain. Through repeated exposures during a 360° rotation of the element around an axis parallel to the diffraction vector, formula_4, several 2D projections of the grain are obtained from various angles. A 3D map is then obtained by combining these projections using reconstruction algorithms similar to those developed for CT scanning. If the lattice of the crystalline element exhibits an internal orientation spread, this procedure is repeated for a number of sample tilts, indicated by the angles formula_5 and formula_6. 
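The imaging geometry described above can be illustrated with a short back-of-the-envelope calculation. In the Python sketch below, the distances formula_0 and formula_1, the scattering angle, and the detector parameters are assumed values chosen only for illustration; they do not describe any particular instrument.

```python
# Back-of-the-envelope sketch of the DFXM imaging geometry described above.
# All numerical values (lens distances, detector pixel size, pixel count) are
# assumed for illustration.
import math

p_prime = 0.3          # sample-to-objective distance p' in metres (assumed)
q_prime = 5.0          # objective-to-detector distance q' in metres (assumed)
two_theta_deg = 20.0   # scattering angle 2*theta, typically 10-30 degrees

M = q_prime / p_prime                  # magnification M = q'/p'
pixel = 6.5e-6                         # detector pixel size in metres (assumed)
n_pixels = 2048                        # detector width in pixels (assumed)

effective_pixel = pixel / M            # size one pixel maps to in the sample
field_of_view = n_pixels * effective_pixel

print(f"magnification M      = {M:.1f}")
print(f"effective pixel size = {effective_pixel * 1e9:.0f} nm")
print(f"field of view        = {field_of_view * 1e6:.0f} um")
# The detector sits on the diffracted beam, i.e. at an angle 2*theta from the
# direct beam; its transverse offset at distance q' is roughly:
print(f"detector offset      = {q_prime * math.tan(math.radians(two_theta_deg)):.2f} m")
```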
The current implementation of DFXM at ID06, ESRF, uses a compound refractive lens (CRL) as the objective, giving spatial resolution of 100 nm and angular resolution of 0.001°. Applications, limitations and alternatives. Current and potential applications. DFXM has been used for the non-destructive investigation of polycrystalline materials and composites, revealing the 3D microstructure, phases, orientation of individual grains, and local strains. It has also been used for in situ studies of materials recrystallisation, dislocations and other defects, and the deformation and fracture mechanisms in materials, such as metals and composites. DFXM can provide insights into the 3D microstructure and deformation of geological materials such as minerals and rocks, and irradiated materials. DFXM has the potential to revolutionise the field of nanotechnology by providing non-destructive, high-resolution 3D imaging of nanostructures and nanomaterials. It has been used to investigate the 3D morphology of nanowires and to detect structural defects in nanotubes. DFXM has shown potential for imaging biological tissues and organs with high contrast and resolution. It has been used to visualize the 3D microstructure of cartilage and bone, as well as to detect early-stage breast cancer in mouse model. Limitations. The intense X-ray beams used in DFXM can damage delicate samples, particularly biological specimens. DFXM can suffer from imaging artefacts such as ring artefacts, which can affect image quality and limit interpretation. The instrumentation required for DFXM is expensive and typically only available at synchrotron facilities, making it inaccessible to many researchers. Although DFXM can achieve high spatial resolution, it is still not as high as the resolution achieved by other imaging techniques such as transmission electron microscopy (TEM) or X-ray crystallography. Preparation of samples for DFXM imaging can be challenging, especially for samples that are not crystalline. There are also limitations on the sample size that can be imaged as the technique works best with thin samples, typically less than 100 microns thick, due to the attenuation of the X-ray beam by thicker samples. DFXM still suffers from long integration times, which can limit its practical applications. This is due to the low flux density of X-rays emitted by synchrotron sources and the high sensitivity required to detect scattered X-rays. Alternatives. There are several alternative techniques to DFXM, depending on the application, some of which are:
[ { "math_id": 0, "text": "p'" }, { "math_id": 1, "text": "q'" }, { "math_id": 2, "text": "2\\theta" }, { "math_id": 3, "text": "M=q'/p'" }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=73438573
7344293
Zero sound
Zero sound is the name given by Lev Landau in 1957 to the unique quantum vibrations in quantum Fermi liquids. The zero sound can no longer be thought of as a simple wave of compression and rarefaction, but rather as a fluctuation in space and time of the quasiparticles' momentum distribution function. As the shape of the Fermi distribution function changes slightly (or largely), zero sound propagates in the direction of the head of the deformed Fermi surface, with no change in the density of the liquid. The prediction and subsequent experimental observation of zero sound was one of the key confirmations of the correctness of Landau's Fermi liquid theory. Derivation from Boltzmann transport equation. The Boltzmann transport equation for general systems in the semiclassical limit gives, for a Fermi liquid, formula_0, where formula_1 is the density of quasiparticles (here we ignore spin) with momentum formula_2 and position formula_3 at time formula_4, and formula_5 is the energy of a quasiparticle of momentum formula_2 (formula_6 and formula_7 denote equilibrium distribution and energy in the equilibrium distribution). The semiclassical limit assumes that formula_8 fluctuates with angular frequency formula_9 and wavelength formula_10, which are much lower than formula_11 and much longer than formula_12 respectively, where formula_13 and formula_14 are the Fermi energy and momentum respectively, around which formula_8 is nontrivial. To first order in fluctuation from equilibrium, the equation becomes formula_15. When the quasiparticle's mean free path formula_16 (equivalently, relaxation time formula_17), ordinary sound waves ("first sound") propagate with little absorption. But at low temperatures formula_18 (where formula_19 and formula_20 scale as formula_21), the mean free path exceeds formula_22, and as a result the collision functional formula_23. Zero sound occurs in this collisionless limit. In the Fermi liquid theory, the energy of a quasiparticle of momentum formula_2 is formula_24, where formula_25 is the appropriately normalized Landau parameter, and formula_26. The approximated transport equation then has plane wave solutions formula_27, with formula_28 given by formula_29. This functional operator equation gives the dispersion relation for the zero sound waves with frequency formula_9 and wave vector formula_30. The transport equation is valid in the regime where formula_31 and formula_32. In many systems, formula_33 only slowly depends on the angle between formula_34 and formula_35. If formula_25 is an angle-independent constant formula_36 with formula_37 (note that this constraint is stricter than the Pomeranchuk instability) then the wave has the form formula_38 and dispersion relation formula_39 where formula_40 is the ratio of zero sound phase velocity to Fermi velocity. If the first two Legendre components of the Landau parameter are significant, formula_41 and formula_42, the system also admits an asymmetric zero sound wave solution formula_43 (where formula_44 and formula_45 are the azimuthal and polar angle of formula_34 about the propagation direction formula_46) and dispersion relation formula_47. References. <templatestyles src="Reflist/styles.css" />
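For the angle-independent case, the dispersion relation formula_39 can be solved numerically for the velocity ratio formula_40. The short Python sketch below does this with a standard root-finder; the particular values of formula_36 are arbitrary examples chosen only to illustrate the calculation.

```python
# Numerical solution of the symmetric zero-sound dispersion relation quoted
# above, (s/2) ln((s+1)/(s-1)) - 1 = 1/F0, for a constant, angle-independent
# Landau parameter F0 > 0. The example values of F0 are arbitrary.
import numpy as np
from scipy.optimize import brentq

def dispersion(s, F0):
    return 0.5 * s * np.log((s + 1.0) / (s - 1.0)) - 1.0 - 1.0 / F0

for F0 in (0.5, 1.0, 5.0, 10.0):
    # The left-hand side diverges as s -> 1+ and tends to zero as s -> infinity,
    # so the root s > 1 is bracketed between 1 + eps and a sufficiently large value.
    s = brentq(dispersion, 1.0 + 1e-12, 1e3, args=(F0,))
    print(f"F0 = {F0:5.1f}  ->  s = omega/(k vF) = {s:.4f}")
```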
[ { "math_id": 0, "text": "\\frac{\\partial f}{\\partial t}+\\frac{\\partial E}{\\partial \\vec{p}}\\cdot\\frac{\\partial f}{\\partial \\vec{x}}-\\frac{\\partial E}{\\partial \\vec{x}}\\cdot \\frac{\\partial f}{\\partial \\vec{p}} = \\text{St}[f] " }, { "math_id": 1, "text": "f(\\vec{p}, \\vec{x}, t) = f_0(\\vec{p}) + \\delta f(\\vec{p}, \\vec{x}, t)" }, { "math_id": 2, "text": "\\vec{p}" }, { "math_id": 3, "text": "\\vec{x}" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "E(\\vec{p},\\vec{x}, t) = E_0(\\vec{p}) + \\delta E(\\vec{p}, \\vec{x}, t)" }, { "math_id": 6, "text": "f_0" }, { "math_id": 7, "text": "E_0" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\lambda = 2\\pi/k" }, { "math_id": 11, "text": "E_{\\rm F}/\\hbar" }, { "math_id": 12, "text": "\\hbar/p_{\\rm F}" }, { "math_id": 13, "text": "E_{\\rm F}" }, { "math_id": 14, "text": "p_{\\rm F}" }, { "math_id": 15, "text": "\\frac{\\partial \\delta f}{\\partial t}+\\frac{\\partial E_0}{\\partial \\vec{p}}\\cdot\\frac{\\partial \\delta f}{\\partial \\vec{x}}-\\frac{\\partial \\delta E}{\\partial \\vec{x}}\\cdot \\frac{\\partial f_0}{\\partial \\vec{p}} = \\text{St}[f] " }, { "math_id": 16, "text": "\\ell \\ll \\lambda " }, { "math_id": 17, "text": "\\tau \\ll 1/\\omega " }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "\\tau\n" }, { "math_id": 20, "text": "\\ell\n" }, { "math_id": 21, "text": "T^{-2}" }, { "math_id": 22, "text": "\\lambda" }, { "math_id": 23, "text": "\\text{St}[f] \\approx 0 " }, { "math_id": 24, "text": "E_{\\rm F} + v_{\\rm F}(|\\vec{p}| -p_{\\rm F}) + \\int \\frac{d^3 \\vec{p}'}{4\\pi p_{\\rm F} m^{*}} F(p, p') \\delta f( p')" }, { "math_id": 25, "text": "F" }, { "math_id": 26, "text": "f_0(\\vec{p}) = \\Theta(p_{\\rm F} - |\\vec{p}|)" }, { "math_id": 27, "text": "\\delta f(\\vec{p}, \\vec{x}, t) = \\delta(E(\\vec{p})-E_{\\rm F})e^{i(\\vec{k}\\cdot \\vec{r}-\\omega t)} \\nu( \\hat{p})" }, { "math_id": 28, "text": "\\nu(\\hat{p})" }, { "math_id": 29, "text": "(\\omega - v_{\\rm F} \\hat{p}\\cdot \\hat{k}) \\nu(\\hat{p}) = v_{\\rm F} \\hat{p} \\cdot \\hat{k} \\int d^2 \\frac{\\hat{p}'}{4\\pi} F(\\hat{p}, \\hat{p}') \\nu(\\hat{p}')" }, { "math_id": 30, "text": "\\vec{k}" }, { "math_id": 31, "text": "\n\\hbar \\omega \\ll E_{\\rm F}" }, { "math_id": 32, "text": "\\hbar |\\vec{k}| \\ll p_{\\rm F}" }, { "math_id": 33, "text": "F(\\hat{p},\\hat{p}')" }, { "math_id": 34, "text": "\\hat{p}" }, { "math_id": 35, "text": "\\hat{p}'" }, { "math_id": 36, "text": "F_0" }, { "math_id": 37, "text": "F_0>0" }, { "math_id": 38, "text": "\\nu(\\hat{p}) \\propto ({\\omega}/({v_{\\rm F} \\hat{p} \\cdot \\vec{k}}) -1)^{-1}" }, { "math_id": 39, "text": "\\frac{s}{2} \\log{\\frac{s+1}{s-1}} - 1 = 1/F_0" }, { "math_id": 40, "text": "s = \\omega/{k v_{\\rm F}}" }, { "math_id": 41, "text": "F(\\hat{p},\\hat{p}') = F_0 + F_1 \\hat{p}\\cdot\\hat{p}'" }, { "math_id": 42, "text": "F_1>6" }, { "math_id": 43, "text": "\\nu(\\hat{p}) \\propto {\\sin(2\\theta)}/({s-\\cos{\\theta}})e^{i\\phi}" }, { "math_id": 44, "text": "\\phi" }, { "math_id": 45, "text": "\\theta" }, { "math_id": 46, "text": "\\hat{k}" }, { "math_id": 47, "text": "\\int_{0}^{\\pi} \\frac{\\sin^3 \\theta \\cos \\theta}{ s - \\cos\\theta} d\\theta = \\frac{4}{F_1}" } ]
https://en.wikipedia.org/wiki?curid=7344293
7344320
Fixed-point iteration
Root-finding algorithm In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function formula_0 defined on the real numbers with real values and given a point formula_1 in the domain of formula_0, the fixed-point iteration is formula_2 which gives rise to the sequence formula_3 of iterated function applications formula_4 which is hoped to converge to a point formula_5. If formula_0 is continuous, then one can prove that the obtained formula_5 is a fixed point of formula_0, i.e., formula_6 More generally, the function formula_0 can be defined on any metric space with values in that same space. Attracting fixed points. An "attracting fixed point" of a function "f" is a fixed point "x"fix of "f" with a neighborhood "U" of "close enough" points around "x"fix such that for any value of x in "U", the fixed-point iteration sequence formula_20 is contained in "U" and converges to "x"fix. The basin of attraction of "x"fix is the largest such neighborhood "U". The natural cosine function ("natural" means in radians, not degrees or other units) has exactly one fixed point, and that fixed point is attracting. In this case, "close enough" is not a stringent criterion at all—to demonstrate this, start with "any" real number and repeatedly press the "cos" key on a calculator (checking first that the calculator is in "radians" mode). It eventually converges to the Dottie number (about 0.739085133), which is a fixed point. That is where the graph of the cosine function intersects the line formula_21. Not all fixed points are attracting. For example, 0 is a fixed point of the function "f"("x") = 2"x", but iteration of this function for any value other than zero rapidly diverges. We say that the fixed point of formula_22 is repelling. An attracting fixed point is said to be a "stable fixed point" if it is also Lyapunov stable. A fixed point is said to be a "neutrally stable fixed point" if it is Lyapunov stable but not attracting. The center of a linear homogeneous differential equation of the second order is an example of a neutrally stable fixed point. Multiple attracting points can be collected in an "attracting fixed set". Banach fixed-point theorem. The Banach fixed-point theorem gives a sufficient condition for the existence of attracting fixed points. A contraction mapping function formula_0 defined on a complete metric space has precisely one fixed point, and the fixed-point iteration is attracted towards that fixed point for any initial guess formula_1 in the domain of the function. Common special cases are that (1) formula_0 is defined on the real line with real values and is Lipschitz continuous with Lipschitz constant formula_23, and (2) the function "f" is continuously differentiable in an open neighbourhood of a fixed point "x"fix, and formula_24. Although there are other fixed-point theorems, this one in particular is very useful because not all fixed-points are attractive. When constructing a fixed-point iteration, it is very important to make sure it converges to the fixed point. We can usually use the Banach fixed-point theorem to show that the fixed point is attractive. Attractors. Attracting fixed points are a special case of a wider mathematical concept of attractors. Fixed-point iterations are a discrete dynamical system on one variable. Bifurcation theory studies dynamical systems and classifies various behaviors such as attracting fixed points, periodic orbits, or strange attractors. 
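The iteration formula_2 and the notion of an attracting fixed point can be illustrated with a few lines of Python. The tolerance, the iteration cap, and the square-root example below are illustrative choices; the convergence of the cosine map to the Dottie number is the behaviour described above.

```python
# A minimal fixed-point iteration. Iterating f(x) = cos(x) from any starting
# point converges to the Dottie number (about 0.739085), the unique attracting
# fixed point of the cosine function.
import math

def fixed_point_iteration(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

print(fixed_point_iteration(math.cos, 1.0))   # ~0.7390851332151607

# The same scheme applied to f(x) = (a/x + x)/2 (the Babylonian method)
# converges to sqrt(a); here a = 2 is chosen as an example.
a = 2.0
print(fixed_point_iteration(lambda x: 0.5 * (a / x + x), 1.0))   # ~1.41421356
```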
An example system is the logistic map. Iterative methods. In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. Convergent fixed-point iterations are mathematically rigorous formalizations of iterative methods. Convergence acceleration. The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Anderson acceleration and Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic. Chaos game. The term "chaos game" refers to a method of generating the fixed point of any iterated function system (IFS). Starting with any point "x"0, successive iterations are formed as "x""k"+1 = "f""r"("x""k"), where "f""r" is a member of the given IFS randomly selected for each iteration. Hence the chaos game is a randomized fixed-point iteration. The chaos game allows plotting the general shape of a fractal such as the Sierpinski triangle by repeating the iterative process a large number of times. More mathematically, the iterations converge to the fixed point of the IFS. Whenever "x"0 belongs to the attractor of the IFS, all iterations "x""k" stay inside the attractor and, with probability 1, form a dense set in the latter. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Reflist/styles.css" />
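As a concrete illustration of the plain fixed-point iteration and of the Aitken-style acceleration discussed above, the following Python sketch iterates the natural cosine map towards the Dottie number. The function names, the tolerance and the iteration cap are arbitrary choices made for this example, not part of any standard library.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to `tol`."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

def aitken_accelerated(f, x0, tol=1e-12, max_iter=1000):
    """Aitken's delta-squared extrapolation applied to the same iteration
    (essentially the idea behind Steffensen's method)."""
    x = x0
    for _ in range(max_iter):
        x1 = f(x)
        x2 = f(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:          # already (numerically) at the fixed point
            return x2
        x_acc = x - (x1 - x) ** 2 / denom
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    raise RuntimeError("accelerated iteration did not converge")

# The natural cosine map has a single attracting fixed point, the Dottie number.
print(fixed_point(math.cos, 1.0))          # ~0.7390851332151607
print(aitken_accelerated(math.cos, 1.0))   # same value, far fewer iterations
```

With the starting value 1.0 and the tolerance shown, the plain iteration needs roughly seventy steps, while the accelerated version typically needs only a handful, reflecting the at-least-quadratic convergence mentioned above.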
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "x_0" }, { "math_id": 2, "text": "x_{n+1}=f(x_n), \\, n=0, 1, 2, \\dots" }, { "math_id": 3, "text": "x_0, x_1, x_2, \\dots" }, { "math_id": 4, "text": "x_0, f(x_0), f(f(x_0)), \\dots" }, { "math_id": 5, "text": "x_\\text{fix}" }, { "math_id": 6, "text": "f(x_\\text{fix})=x_\\text{fix} ." }, { "math_id": 7, "text": "f(x) = \\frac 1 2 \\left(\\frac a x + x\\right)" }, { "math_id": 8, "text": "x = \\sqrt a" }, { "math_id": 9, "text": "x_0 \\gg 0 " }, { "math_id": 10, "text": "x_{n+1} = \\cos x_n\\," }, { "math_id": 11, "text": "f(x) = \\cos x\\," }, { "math_id": 12, "text": "x_0." }, { "math_id": 13, "text": "|x_n-x| \\leq { q^n \\over 1-q } | x_1 - x_0 | = C q^n" }, { "math_id": 14, "text": "q = 0.85" }, { "math_id": 15, "text": "x_0=1" }, { "math_id": 16, "text": "q^n" }, { "math_id": 17, "text": " x_{n+1} = \n\\begin{cases}\n\\frac{x_n}{2}, & x_n \\ne 0\\\\\n1, & x_n=0\n\\end{cases}" }, { "math_id": 18, "text": "f(x) = \n\\begin{cases}\n\\frac{x}{2}, & x \\ne 0\\\\\n1, & x = 0\n\\end{cases}" }, { "math_id": 19, "text": "x = 0" }, { "math_id": 20, "text": "x,\\ f(x),\\ f(f(x)),\\ f(f(f(x))), \\dots" }, { "math_id": 21, "text": "y = x" }, { "math_id": 22, "text": "f(x) = 2x" }, { "math_id": 23, "text": "L < 1" }, { "math_id": 24, "text": "|f'(x_\\text{fix})| < 1" } ]
https://en.wikipedia.org/wiki?curid=7344320
734434
Modern portfolio theory
Mathematical framework for investment risk Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return (or its transformation, the standard deviation) is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities, but other, more sophisticated methods are available. Economist Harry Markowitz introduced MPT in a 1952 essay, for which he was later awarded a Nobel Memorial Prize in Economic Sciences; see Markowitz model. In 1940, Bruno de Finetti published the mean-variance analysis method, in the context of proportional reinsurance, under a stronger assumption. The paper was obscure and only became known to economists of the English-speaking world in 2006. Mathematical model. Risk and expected return. MPT assumes that investors are risk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensated by higher expected returns. Conversely, an investor who wants higher expected returns must accept more risk. The exact trade-off will not be the same for all investors. Different investors will evaluate the trade-off differently based on individual risk aversion characteristics. The implication is that a rational investor will not invest in a portfolio if a second portfolio exists with a more favorable risk vs expected return profile — i.e., if for that level of risk an alternative portfolio exists that has better expected returns. Under the model: In general: formula_1 where formula_2 is the return on the portfolio, formula_3 is the return on asset "i" and formula_4 is the weighting of component asset formula_5 (that is, the proportion of asset "i" in the portfolio, so that formula_6). formula_7, where formula_8 is the (sample) standard deviation of the periodic returns on an asset "i", and formula_9 is the correlation coefficient between the returns on assets "i" and "j". Alternatively the expression can be written as: formula_10, where formula_11 for formula_12 , or formula_13, where formula_14 is the (sample) covariance of the periodic returns on the two assets, or alternatively denoted as formula_15, formula_16 or formula_17. formula_18 For a two-asset portfolio: For a three-asset portfolio: The algebra can be much simplified by expressing the quantities involved in matrix notation. Arrange the returns of N risky assets in an formula_23 vector formula_24, where the first element is the return of the first asset, the second element of the second asset, and so on. Arrange their expected returns in a column vector formula_25, and their variances and covariances in a covariance matrix formula_26. Consider a portfolio of risky assets whose weights in each of the N risky assets is given by the corresponding element of the weight vector formula_27. 
Then: the expected return of the portfolio is formula_28 and the portfolio return variance is formula_29. For the case where there is investment in a risk-free asset with return formula_30, the weights of the weight vector do not sum to 1, and the portfolio expected return becomes formula_31. The expression for the portfolio variance is unchanged. Diversification. An investor can reduce portfolio risk (especially formula_0) simply by holding combinations of instruments that are not perfectly positively correlated (correlation coefficient formula_32). In other words, investors can reduce their exposure to individual asset risk by holding a diversified portfolio of assets. Diversification may allow for the same portfolio expected return with reduced risk. The mean-variance framework for constructing optimal investment portfolios was first posited by Markowitz and has since been reinforced and improved by other economists and mathematicians who went on to account for the limitations of the framework. If all the asset pairs have correlations of 0—they are perfectly uncorrelated—the portfolio's return variance is the sum over all assets of the square of the fraction held in the asset times the asset's return variance (and the portfolio standard deviation is the square root of this sum). If all the asset pairs have correlations of 1—they are perfectly positively correlated—then the portfolio return's standard deviation is the sum of the asset returns' standard deviations weighted by the fractions held in the portfolio. For given portfolio weights and given standard deviations of asset returns, the case of all correlations being 1 gives the highest possible standard deviation of portfolio return. Efficient frontier with no risk-free asset. The MPT is a mean-variance theory, and it compares the expected (mean) return of a portfolio with the standard deviation of the same portfolio. The image shows expected return on the vertical axis, and the standard deviation on the horizontal axis (volatility). Volatility is described by standard deviation and it serves as a measure of risk. The return - standard deviation space is sometimes called the space of 'expected return vs risk'. Every possible combination of risky assets can be plotted in this risk-expected return space, and the collection of all such possible portfolios defines a region in this space. The left boundary of this region is hyperbolic, and the upper part of the hyperbolic boundary is the "efficient frontier" in the absence of a risk-free asset (sometimes called "the Markowitz bullet"). Combinations along this upper edge represent portfolios (including no holdings of the risk-free asset) for which there is lowest risk for a given level of expected return. Equivalently, a portfolio lying on the efficient frontier represents the combination offering the best possible expected return for given risk level. The tangent to the upper part of the hyperbolic boundary is the capital allocation line (CAL). Matrices are preferred for calculations of the efficient frontier. In matrix form, for a given "risk tolerance" formula_33, the efficient frontier is found by minimizing the following expression: formula_34 where formula_35 is a vector of portfolio weights and formula_36; formula_37 is the covariance matrix for the returns on the assets in the portfolio; formula_38 is a "risk tolerance" factor, where 0 results in the portfolio with minimal risk and formula_39 results in the portfolio infinitely far out on the frontier with both expected return and risk unbounded; formula_40 is a vector of expected returns; formula_41 is the variance of portfolio return; and formula_42 is the expected return on the portfolio. The above optimization finds the point on the frontier at which the inverse of the slope of the frontier would be "q" if portfolio return variance instead of standard deviation were plotted horizontally. The frontier in its entirety is parametric on "q".
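A minimal NumPy sketch of the matrix formulation above may help make it concrete. The three-asset expected returns and covariance matrix are invented for illustration, and the closed-form Lagrange solution shown is just one standard way to carry out the stated minimisation for each value of the risk tolerance.

```python
import numpy as np

# Illustrative (made-up) expected returns and covariance matrix for three assets.
R = np.array([0.05, 0.08, 0.12])           # expected returns
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.090, 0.020],
                  [0.010, 0.020, 0.160]])  # covariance of returns
ones = np.ones(len(R))
Sigma_inv = np.linalg.inv(Sigma)

def frontier_portfolio(q):
    """Weights minimising w'Sigma w - q R'w subject to the weights summing to 1.

    Setting the gradient of the Lagrangian to zero gives
    w = 0.5 * Sigma^-1 (q R + lam * 1), with lam fixed by the budget constraint.
    """
    lam = (2.0 - q * ones @ Sigma_inv @ R) / (ones @ Sigma_inv @ ones)
    return 0.5 * Sigma_inv @ (q * R + lam * ones)

def portfolio_stats(w):
    """Expected return R'w and standard deviation sqrt(w'Sigma w)."""
    return R @ w, np.sqrt(w @ Sigma @ w)

# Sweep the risk-tolerance parameter to trace out the frontier.
for q in (0.0, 0.5, 1.0, 2.0):
    w = frontier_portfolio(q)
    mean, sd = portfolio_stats(w)
    print(f"q={q:<4}  weights={np.round(w, 3)}  E[Rp]={mean:.4f}  sd={sd:.4f}")
```

Setting q to zero recovers the minimum-variance portfolio, while larger values trade extra variance for extra expected return, which is the sense in which the frontier is parametric on q.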
Harry Markowitz developed a specific procedure for solving the above problem, called the critical line algorithm, that can handle additional linear constraints, upper and lower bounds on assets, and which is proved to work with a semi-positive definite covariance matrix. Examples of implementation of the critical line algorithm exist in Visual Basic for Applications, in JavaScript and in a few other languages. Also, many software packages, including MATLAB, Microsoft Excel, Mathematica and R, provide generic optimization routines so that using these for solving the above problem is possible, with potential caveats (poor numerical accuracy, requirement of positive definiteness of the covariance matrix...). An alternative approach to specifying the efficient frontier is to do so parametrically on the expected portfolio return formula_43 This version of the problem requires that we minimize formula_44 subject to formula_45 and formula_46 for parameter formula_47. This problem is easily solved using a Lagrange multiplier which leads to the following linear system of equations: formula_48 Two mutual fund theorem. One key result of the above analysis is the two mutual fund theorem. This theorem states that any portfolio on the efficient frontier can be generated by holding a combination of any two given portfolios on the frontier; the latter two given portfolios are the "mutual funds" in the theorem's name. So in the absence of a risk-free asset, an investor can achieve any desired efficient portfolio even if all that is accessible is a pair of efficient mutual funds. If the location of the desired portfolio on the frontier is between the locations of the two mutual funds, both mutual funds will be held in positive quantities. If the desired portfolio is outside the range spanned by the two mutual funds, then one of the mutual funds must be sold short (held in negative quantity) while the size of the investment in the other mutual fund must be greater than the amount available for investment (the excess being funded by the borrowing from the other fund). Risk-free asset and the capital allocation line. The risk-free asset is the (hypothetical) asset that pays a risk-free rate. In practice, short-term government securities (such as US treasury bills) are used as a risk-free asset, because they pay a fixed rate of interest and have exceptionally low default risk. The risk-free asset has zero variance in returns if held to maturity (hence is risk-free); it is also uncorrelated with any other asset (by definition, since its variance is zero). As a result, when it is combined with any other asset or portfolio of assets, the change in return is linearly related to the change in risk as the proportions in the combination vary. When a risk-free asset is introduced, the half-line shown in the figure is the new efficient frontier. It is tangent to the hyperbola at the pure risky portfolio with the highest Sharpe ratio. 
Its vertical intercept represents a portfolio with 100% of holdings in the risk-free asset; the tangency with the hyperbola represents a portfolio with no risk-free holdings and 100% of assets held in the portfolio occurring at the tangency point; points between those points are portfolios containing positive amounts of both the risky tangency portfolio and the risk-free asset; and points on the half-line beyond the tangency point are portfolios involving negative holdings of the risk-free asset and an amount invested in the tangency portfolio equal to more than 100% of the investor's initial capital. This efficient half-line is called the capital allocation line (CAL), and its formula can be shown to be formula_49 In this formula "P" is the sub-portfolio of risky assets at the tangency with the Markowitz bullet, "F" is the risk-free asset, and "C" is a combination of portfolios "P" and "F". By the diagram, the introduction of the risk-free asset as a possible component of the portfolio has improved the range of risk-expected return combinations available, because everywhere except at the tangency portfolio the half-line gives a higher expected return than the hyperbola does at every possible risk level. The fact that all points on the linear efficient locus can be achieved by a combination of holdings of the risk-free asset and the tangency portfolio is known as the one mutual fund theorem, where the mutual fund referred to is the tangency portfolio. Geometric intuition. The efficient frontier can be pictured as a problem in quadratic curves. On the market, we have the assets formula_50. We have some funds, and a portfolio is a way to divide our funds into the assets. Each portfolio can be represented as a vector formula_51, such that formula_6, and we hold the assets according to formula_52. Markowitz bullet. Since we wish to maximize expected return while minimizing the standard deviation of the return, we are to solve a quadratic optimization problem:formula_53Portfolios are points in the Euclidean space formula_54. The third equation states that the portfolio should fall on a plane defined by formula_6. The first equation states that the portfolio should fall on a plane defined by formula_55. The second condition states that the portfolio should fall on the contour surface for formula_56 that is as close to the origin as possible. Since the equation is quadratic, each such contour surface is an ellipsoid (assuming that the covariance matrix formula_9 is invertible). Therefore, we can solve the quadratic optimization graphically by drawing ellipsoidal contours on the plane formula_6, then intersect the contours with the plane formula_57. As the ellipsoidal contours shrink, eventually one of them would become exactly tangent to the plane, before the contours become completely disjoint from the plane. The tangent point is the optimal portfolio at this level of expected return. As we vary formula_47, the tangent point varies as well, but always falling on a single line (this is the two mutual funds theorem). Let the line be parameterized as formula_58. We find that along the line,formula_59giving a hyperbola in the formula_60 plane. The hyperbola has two branches, symmetric with respect to the formula_47 axis. However, only the branch with formula_61 is meaningful. By symmetry, the two asymptotes of the hyperbola intersect at a point formula_62 on the formula_47 axis. 
The point formula_63 is the height of the leftmost point of the hyperbola, and can be interpreted as the expected return of the global minimum-variance portfolio (global MVP). Tangency portfolio. The tangency portfolio exists if and only if formula_64. In particular, if the risk-free return is greater or equal to formula_62, then the tangent portfolio "does not exist". The capital market line (CML) becomes parallel to the upper asymptote line of the hyperbola. Points "on" the CML become impossible to achieve, though they can be "approached" from below. It is usually assumed that the risk-free return is less than the return of the global MVP, in order that the tangency portfolio exists. However, even in this case, as formula_65 approaches formula_62 from below, the tangency portfolio diverges to a portfolio with infinite return and variance. Since there are only finitely many assets in the market, such a portfolio must be shorting some assets heavily while longing some other assets heavily. In practice, such a tangency portfolio would be impossible to achieve, because one cannot short an asset too much due to short sale constraints, and also because of price impact, that is, longing a large amount of an asset would push up its price, breaking the assumption that the asset prices do not depend on the portfolio. Non-invertible covariance matrix. If the covariance matrix is not invertible, then there exists some nonzero vector formula_66, such that formula_67 is a random variable with zero variance—that is, it is not random at all. Suppose formula_68 and formula_69, then that means one of the assets can be exactly replicated using the other assets at the same price and the same return. Therefore, there is never a reason to buy that asset, and we can remove it from the market. Suppose formula_68 and formula_70, then that means there is free money, breaking the "no arbitrage" assumption. Suppose formula_71, then we can scale the vector to formula_72. This means that we have constructed a risk-free asset with return formula_73. We can remove each such asset from the market, constructing one risk-free asset for each such asset removed. By the no arbitrage assumption, all their return rates are equal. For the assets that still remain in the market, their covariance matrix is invertible. Asset pricing. The above analysis describes optimal behavior of an individual investor. Asset pricing theory builds on this analysis, allowing MPT to derive the required expected return for a correctly priced asset in this context. Intuitively (in a perfect market with rational investors), if a security was expensive relative to others - i.e. too much risk for the price - demand would fall and its price would drop correspondingly; if cheap, demand and price would increase likewise. This would continue until all such adjustments had ceased - a state of "market equilibrium". In this equilibrium, relative supplies will equal relative demands: given the relationship of price with supply and demand, since the risk-to-reward ratio is "identical" across all securities, proportions of each security in any fully-diversified portfolio would correspondingly be the same as in the overall market. 
More formally, then, since everyone holds the risky assets in identical proportions to each other — namely in the proportions given by the tangency portfolio — in market equilibrium the risky assets' prices, and therefore their expected returns, will adjust so that the ratios in the tangency portfolio are the same as the ratios in which the risky assets are supplied to the market. The result for expected return then follows, as below. Systematic risk and specific risk. Specific risk is the risk associated with individual assets - within a portfolio these risks can be reduced through diversification (specific risks "cancel out"). Specific risk is also called diversifiable, unique, unsystematic, or idiosyncratic risk. Systematic risk (a.k.a. portfolio risk or market risk) refers to the risk common to all securities—except for selling short as noted below, systematic risk cannot be diversified away (within one market). Within the market portfolio, asset specific risk will be diversified away to the extent possible. Systematic risk is therefore equated with the risk (standard deviation) of the market portfolio. Since a security will be purchased only if it improves the risk-expected return characteristics of the market portfolio, the relevant measure of the risk of a security is the risk it adds to the market portfolio, and not its risk in isolation. In this context, the volatility of the asset, and its correlation with the market portfolio, are historically observed and are therefore given. (There are several approaches to asset pricing that attempt to price assets by modelling the stochastic properties of the moments of assets' returns - these are broadly referred to as conditional asset pricing models.) Systematic risks within one market can be managed through a strategy of using both long and short positions within one portfolio, creating a "market neutral" portfolio. Market neutral portfolios, therefore, will be uncorrelated with broader market indices. Capital asset pricing model. The asset return depends on the amount paid for the asset today. The price paid must ensure that the market portfolio's risk / return characteristics improve when the asset is added to it. The CAPM is a model that derives the theoretical required expected return (i.e., discount rate) for an asset in a market, given the risk-free rate available to investors and the risk of the market as a whole. The CAPM is usually expressed: formula_74 A derivation is as follows: (1) The incremental impact on risk and expected return when an additional risky asset, a, is added to the market portfolio, m, follows from the formulae for a two-asset portfolio. These results are used to derive the asset-appropriate discount rate. Hence, risk added to portfolio = formula_77 but since the weight of the asset will be very low re. the overall market, formula_78 i.e. additional risk = formula_79 Hence additional expected return = formula_81 (2) If an asset, a, is correctly priced, the improvement for an investor in her risk-to-expected return ratio achieved by adding it to the market portfolio, m, will at least (in equilibrium, exactly) match the gains of spending that money on an increased stake in the market portfolio. The assumption is that the investor will purchase the asset with funds borrowed at the risk-free rate, formula_30; this is rational if formula_82. 
Thus: formula_83 i.e.: formula_84 i.e.: formula_85 formula_86 is the "beta", formula_87 return mentioned — the covariance between the asset's return and the market's return divided by the variance of the market return — i.e. the sensitivity of the asset price to movement in the market portfolio's value (see also ). This equation can be estimated statistically using the following regression equation: formula_88 where α"i" is called the asset's alpha, β"i" is the asset's beta coefficient and SCL is the security characteristic line. Once an asset's expected return, formula_89, is calculated using CAPM, the future cash flows of the asset can be discounted to their present value using this rate to establish the correct price for the asset. A riskier stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. In theory, an asset is correctly priced when its observed price is the same as its value calculated using the CAPM derived discount rate. If the observed price is higher than the valuation, then the asset is overvalued; it is undervalued for a too low price. Criticisms. Despite its theoretical importance, critics of MPT question whether it is an ideal investment tool, because its model of financial markets does not match the real world in many ways. The risk, return, and correlation measures used by MPT are based on expected values, which means that they are statistical statements about the future (the expected value of returns is explicit in the above equations, and implicit in the definitions of variance and covariance). Such measures often cannot capture the true statistical features of the risk and return which often follow highly skewed distributions (e.g. the log-normal distribution) and can give rise to, besides reduced volatility, also inflated growth of return. In practice, investors must substitute predictions based on historical measurements of asset return and volatility for these values in the equations. Very often such expected values fail to take account of new circumstances that did not exist when the historical data were generated. An optimal approach to capturing trends, which differs from Markowitz optimization by utilizing invariance properties, is also derived from physics. Instead of transforming the normalized expectations using the inverse of the correlation matrix, the invariant portfolio employs the inverse of the square root of the correlation matrix. The optimization problem is solved under the assumption that expected values are uncertain and correlated. The Markowitz solution corresponds only to the case where the correlation between expected returns is similar to the correlation between returns. More fundamentally, investors are stuck with estimating key parameters from past market data because MPT attempts to model risk in terms of the likelihood of losses, but says nothing about why those losses might occur. The risk measurements used are probabilistic in nature, not structural. This is a major difference as compared to many engineering approaches to risk management. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Options theory and MPT have at least one important conceptual difference from the probabilistic risk assessment done by nuclear power [plants]. A PRA is what economists would call a "structural model". The components of a system and their relationships are modeled in Monte Carlo simulations. 
If valve X fails, it causes a loss of back pressure on pump Y, causing a drop in flow to vessel Z, and so on. But in the Black–Scholes equation and MPT, there is no attempt to explain an underlying structure to price changes. Various outcomes are simply given probabilities. And, unlike the PRA, if there is no history of a particular system-level event like a liquidity crisis, there is no way to compute the odds of it. If nuclear engineers ran risk management this way, they would never be able to compute the odds of a meltdown at a particular plant until several similar events occurred in the same reactor design. Mathematical risk measurements are also useful only to the degree that they reflect investors' true concerns—there is no point minimizing a variable that nobody cares about in practice. In particular, variance is a symmetric measure that counts abnormally high returns as just as risky as abnormally low returns. The psychological phenomenon of loss aversion is the idea that investors are more concerned about losses than gains, meaning that our intuitive concept of risk is fundamentally asymmetric in nature. Many other risk measures (like coherent risk measures) might better reflect investors' true preferences. Modern portfolio theory has also been criticized because it assumes that returns follow a Gaussian distribution. Already in the 1960s, Benoit Mandelbrot and Eugene Fama showed the inadequacy of this assumption and proposed the use of more general stable distributions instead. Stefan Mittnik and Svetlozar Rachev presented strategies for deriving optimal portfolios in such settings. More recently, Nassim Nicholas Taleb has also criticized modern portfolio theory on this ground, writing:<templatestyles src="Template:Blockquote/styles.css" />After the stock market crash (in 1987), they rewarded two theoreticians, Harry Markowitz and William Sharpe, who built beautifully Platonic models on a Gaussian base, contributing to what is called Modern Portfolio Theory. Simply, if you remove their Gaussian assumptions and treat prices as scalable, you are left with hot air. The Nobel Committee could have tested the Sharpe and Markowitz models—they work like quack remedies sold on the Internet—but nobody in Stockholm seems to have thought about it. Contrarian investors and value investors typically do not subscribe to Modern Portfolio Theory. One objection is that the MPT relies on the efficient-market hypothesis and uses fluctuations in share price as a substitute for risk. Sir John Templeton believed in diversification as a concept, but also felt the theoretical foundations of MPT were questionable, and concluded (as described by a biographer): "the notion that building portfolios on the basis of unreliable and irrelevant statistical inputs, such as historical volatility, was doomed to failure." A few studies have argued that "naive diversification", splitting capital equally among available investment options, might have advantages over MPT in some situations. When applied to certain universes of assets, the Markowitz model has been identified by academics to be inadequate due to its susceptibility to model instability which may arise, for example, among a universe of highly correlated assets. Extensions. Since MPT's introduction in 1952, many attempts have been made to improve the model, especially by using more realistic assumptions. Post-modern portfolio theory extends MPT by adopting non-normally distributed, asymmetric, and fat-tailed measures of risk.
This helps with some of these problems, but not others. Black–Litterman model optimization is an extension of unconstrained Markowitz optimization that incorporates relative and absolute 'views' on inputs of risk and returns from. The model is also extended by assuming that expected returns are uncertain, and the correlation matrix in this case can differ from the correlation matrix between returns. Connection with rational choice theory. Modern portfolio theory is inconsistent with main axioms of rational choice theory, most notably with monotonicity axiom, stating that, if investing into portfolio "X" will, with probability one, return more money than investing into portfolio "Y", then a rational investor should prefer "X" to "Y". In contrast, modern portfolio theory is based on a different axiom, called variance aversion, and may recommend to invest into "Y" on the basis that it has lower variance. Maccheroni et al. described choice theory which is the closest possible to the modern portfolio theory, while satisfying monotonicity axiom. Alternatively, mean-deviation analysis is a rational choice theory resulting from replacing variance by an appropriate deviation risk measure. Other applications. In the 1970s, concepts from MPT found their way into the field of regional science. In a series of seminal works, Michael Conroy modeled the labor force in the economy using portfolio-theoretic methods to examine growth and variability in the labor force. This was followed by a long literature on the relationship between economic growth and volatility. More recently, modern portfolio theory has been used to model the self-concept in social psychology. When the self attributes comprising the self-concept constitute a well-diversified portfolio, then psychological outcomes at the level of the individual such as mood and self-esteem should be more stable than when the self-concept is undiversified. This prediction has been confirmed in studies involving human subjects. Recently, modern portfolio theory has been applied to modelling the uncertainty and correlation between documents in information retrieval. Given a query, the aim is to maximize the overall relevance of a ranked list of documents and at the same time minimize the overall uncertainty of the ranked list. Project portfolios and other "non-financial" assets. Some experts apply MPT to portfolios of projects and other assets besides financial instruments. When MPT is applied outside of traditional financial portfolios, some distinctions between the different types of portfolios must be considered. Neither of these necessarily eliminate the possibility of using MPT and such portfolios. They simply indicate the need to run the optimization with an additional set of mathematically expressed constraints that would not normally apply to financial portfolios. Furthermore, some of the simplest elements of Modern Portfolio Theory are applicable to virtually any kind of portfolio. The concept of capturing the risk tolerance of an investor by documenting how much risk is acceptable for a given return may be applied to a variety of decision analysis problems. MPT uses historical variance as a measure of risk, but portfolios of assets like major projects do not have a well-defined "historical variance". In this case, the MPT investment boundary can be expressed in more general terms like "chance of an ROI less than cost of capital" or "chance of losing more than half of the investment". 
When risk is put in terms of uncertainty about forecasts and possible losses, then the concept is transferable to various types of investment. References. <templatestyles src="Reflist/styles.css" />
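For readers who want to see the security characteristic line regression from the capital asset pricing model section in code, the sketch below estimates alpha and beta by ordinary least squares on simulated excess returns. The sample size, the "true" coefficients and the noise level are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly excess returns (purely illustrative numbers).
market_excess = rng.normal(0.006, 0.04, size=120)
true_beta, true_alpha = 1.3, 0.001
asset_excess = true_alpha + true_beta * market_excess + rng.normal(0, 0.02, size=120)

# Ordinary least squares fit of the security characteristic line:
# R_i - R_f = alpha_i + beta_i (R_M - R_f) + eps
X = np.column_stack([np.ones_like(market_excess), market_excess])
alpha_hat, beta_hat = np.linalg.lstsq(X, asset_excess, rcond=None)[0]

# Equivalently, beta is cov(asset, market) / var(market).
beta_cov = np.cov(asset_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)

print(f"alpha ~ {alpha_hat:.4f}, beta ~ {beta_hat:.2f}, beta via cov/var ~ {beta_cov:.2f}")
```

The two beta estimates agree because the slope of a one-regressor least-squares fit is exactly the covariance of the two series divided by the variance of the regressor, which is the definition of beta given above.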
[ { "math_id": 0, "text": "\\sigma_p" }, { "math_id": 1, "text": " \\operatorname{E}(R_p) = \\sum_i w_i \\operatorname{E}(R_i) \\quad " }, { "math_id": 2, "text": "R_p" }, { "math_id": 3, "text": " R_i " }, { "math_id": 4, "text": " w_i " }, { "math_id": 5, "text": " i " }, { "math_id": 6, "text": "\\sum_i w_i = 1" }, { "math_id": 7, "text": " \\sigma_p^2 = \\sum_i w_i^2 \\sigma_{i}^2 + \\sum_i \\sum_{j \\neq i} w_i w_j \\sigma_i \\sigma_j \\rho_{ij} " }, { "math_id": 8, "text": " \\sigma_{i} " }, { "math_id": 9, "text": "\\rho_{ij}" }, { "math_id": 10, "text": " \\sigma_p^2 = \\sum_i \\sum_j w_i w_j \\sigma_i \\sigma_j \\rho_{ij} " }, { "math_id": 11, "text": " \\rho_{ij} = 1 " }, { "math_id": 12, "text": " i = j " }, { "math_id": 13, "text": " \\sigma_p^2 = \\sum_i \\sum_j w_i w_j \\sigma_{ij} " }, { "math_id": 14, "text": " \\sigma_{ij} = \\sigma_i \\sigma_j \\rho_{ij} " }, { "math_id": 15, "text": " \\sigma(i,j) " }, { "math_id": 16, "text": " \\text{cov}_{ij} " }, { "math_id": 17, "text": " \\text{cov}(i,j) " }, { "math_id": 18, "text": " \\sigma_p = \\sqrt {\\sigma_p^2} " }, { "math_id": 19, "text": " \\operatorname{E}(R_p) = w_A \\operatorname{E}(R_A) +\nw_B \\operatorname{E}(R_B) = w_A \\operatorname{E}(R_A) + (1 - w_A) \\operatorname{E}(R_B). " }, { "math_id": 20, "text": " \\sigma_p^2 = w_A^2 \\sigma_A^2 + w_B^2 \\sigma_B^2 + 2w_Aw_B \\sigma_{A} \\sigma_{B} \\rho_{AB}" }, { "math_id": 21, "text": " \\operatorname{E}(R_p) = w_A \\operatorname{E}(R_A) + w_B \\operatorname{E}(R_B) + w_C \\operatorname{E}(R_C) " }, { "math_id": 22, "text": " \\sigma_p^2 = w_A^2 \\sigma_A^2 + w_B^2 \\sigma_B^2 + w_C^2 \\sigma_C^2 + 2w_Aw_B \\sigma_{A} \\sigma_{B} \\rho_{AB}\n+ 2w_Aw_C \\sigma_{A} \\sigma_{C} \\rho_{AC} + 2w_Bw_C \\sigma_{B} \\sigma_{C} \\rho_{BC}" }, { "math_id": 23, "text": " N\\times 1" }, { "math_id": 24, "text": " R" }, { "math_id": 25, "text": " \\mu " }, { "math_id": 26, "text": "\\Sigma" }, { "math_id": 27, "text": " w" }, { "math_id": 28, "text": " w'\\mu" }, { "math_id": 29, "text": "w'\\Sigma w" }, { "math_id": 30, "text": "R_f" }, { "math_id": 31, "text": " w'\\mu+(1-w'1)R_f" }, { "math_id": 32, "text": "-1 \\le \\rho_{ij}< 1" }, { "math_id": 33, "text": "q \\in [0,\\infty)" }, { "math_id": 34, "text": " w^T \\Sigma w - q R^T w" }, { "math_id": 35, "text": "w\\in\\mathbb{R}^N" }, { "math_id": 36, "text": "\\sum_{i=1}^N w_i = 1." }, { "math_id": 37, "text": "\\Sigma\\in\\mathbb{R}^{N\\times N}" }, { "math_id": 38, "text": "q \\ge 0" }, { "math_id": 39, "text": "\\infty" }, { "math_id": 40, "text": "R\\in\\mathbb{R}^N" }, { "math_id": 41, "text": "w^T \\Sigma w\\in\\mathbb{R}" }, { "math_id": 42, "text": "R^T w\\in\\mathbb{R}" }, { "math_id": 43, "text": "R^T w." }, { "math_id": 44, "text": " w^T \\Sigma w " }, { "math_id": 45, "text": "R^T w = \\mu" }, { "math_id": 46, "text": "\\sum_{i=1}^{N} w_i = 1" }, { "math_id": 47, "text": "\\mu" }, { "math_id": 48, "text": "\\begin{bmatrix}2\\Sigma &-R & -{\\bf1}\\\\ R^T &0 & 0 \\\\ {\\bf1}^T &0 &0 \\end{bmatrix} \\begin{bmatrix}w\\\\\\lambda_1\\\\\\lambda_2\\end{bmatrix} = \\begin{bmatrix}0\\\\\\mu \\\\ 1\\end{bmatrix}" }, { "math_id": 49, "text": " E(R_{C}) = R_F + \\sigma_C \\frac{E(R_P) - R_F}{\\sigma_P}." 
}, { "math_id": 50, "text": "R_1, R_2, \\dots, R_n" }, { "math_id": 51, "text": "w_1, w_2, \\dots, w_n" }, { "math_id": 52, "text": "w^T R = \\sum_i w_i R_i " }, { "math_id": 53, "text": "\\begin{cases}\nE[w^T R] = \\mu \\\\\n\\min \\sigma^2 = Var[w^T R ]\\\\\n\\sum_i w_i = 1\n\\end{cases}" }, { "math_id": 54, "text": "\\R^n" }, { "math_id": 55, "text": "w^T E[R] = \\mu" }, { "math_id": 56, "text": "\\sum_{ij} w_i \\rho_{ij} w_j" }, { "math_id": 57, "text": "\\{w: w^T E[R] = \\mu \\text{ and } \\sum_i w_i =1\\}" }, { "math_id": 58, "text": "\\{w + w' t : t \\in \\R\\}" }, { "math_id": 59, "text": "\\begin{cases}\n\\mu &= (w'^T E[R]) t + w^T E[R]\\\\\n\\sigma^2 &= (w'^T \\rho w') t^2 + 2 (w^T \\rho w') t + (w^T \\rho w)\n\\end{cases} " }, { "math_id": 60, "text": "(\\sigma, \\mu)" }, { "math_id": 61, "text": "\\sigma > 0" }, { "math_id": 62, "text": "\\mu_{MVP}" }, { "math_id": 63, "text": "\\mu_{mid}" }, { "math_id": 64, "text": "\\mu_{RF} < \\mu_{MVP}" }, { "math_id": 65, "text": "\\mu_{RF} " }, { "math_id": 66, "text": "v" }, { "math_id": 67, "text": "v^T R" }, { "math_id": 68, "text": "\\sum_i v_i = 0" }, { "math_id": 69, "text": "v^T R = 0" }, { "math_id": 70, "text": "v^T R \\neq 0 " }, { "math_id": 71, "text": "\\sum_i v_i \\neq 0 " }, { "math_id": 72, "text": "\\sum_i v_i = 1" }, { "math_id": 73, "text": "v^T R " }, { "math_id": 74, "text": " \\operatorname{E}(R_i) = R_f + \\beta_i (\\operatorname{E}(R_m) - R_f) " }, { "math_id": 75, "text": " (\\operatorname{E}(R_m) - R_f) " }, { "math_id": 76, "text": " (w_m^2 \\sigma_m ^2 + [ w_a^2 \\sigma_a^2 + 2 w_m w_a \\rho_{am} \\sigma_a \\sigma_m] ) " }, { "math_id": 77, "text": " [ w_a^2 \\sigma_a^2 + 2 w_m w_a \\rho_{am} \\sigma_a \\sigma_m] " }, { "math_id": 78, "text": " w_a^2 \\approx 0 " }, { "math_id": 79, "text": " [ 2 w_m w_a \\rho_{am} \\sigma_a \\sigma_m] \\quad " }, { "math_id": 80, "text": " ( w_m \\operatorname{E}(R_m) + [ w_a \\operatorname{E}(R_a) ] ) " }, { "math_id": 81, "text": " [ w_a \\operatorname{E}(R_a) ] " }, { "math_id": 82, "text": " \\operatorname{E}(R_a) > R_f " }, { "math_id": 83, "text": " [ w_a ( \\operatorname{E}(R_a) - R_f ) ] / [2 w_m w_a \\rho_{am} \\sigma_a \\sigma_m] = [ w_a ( \\operatorname{E}(R_m) - R_f ) ] / [2 w_m w_a \\sigma_m \\sigma_m ] " }, { "math_id": 84, "text": " [\\operatorname{E}(R_a) ] = R_f + [\\operatorname{E}(R_m) - R_f] * [ \\rho_{am} \\sigma_a \\sigma_m] / [ \\sigma_m \\sigma_m ] " }, { "math_id": 85, "text": " [\\operatorname{E}(R_a) ] = R_f + [\\operatorname{E}(R_m) - R_f] * [\\sigma_{am}] / [ \\sigma_{mm}] " }, { "math_id": 86, "text": "[\\sigma_{am}] / [ \\sigma_{mm}] \\quad" }, { "math_id": 87, "text": "\\beta" }, { "math_id": 88, "text": "\\mathrm{SCL} : R_{i,t} - R_{f} = \\alpha_i + \\beta_i\\, ( R_{M,t} - R_{f} ) + \\epsilon_{i,t} \\frac{}{}" }, { "math_id": 89, "text": " E(R_i) " } ]
https://en.wikipedia.org/wiki?curid=734434
73448421
Pieter Jacobus Wemelsfelder
Dutch civil engineer Pieter Jacobus (P.J.) Wemelsfelder (18 November 1907 – 1 July 1995) was a Dutch hydraulic engineer who made significant contributions to the field of hydrometry in the Netherlands, and in hydraulic engineering internationally. In addition to his involvement in the design and planning of the Delta Works, he published widely and is notable for the first use of probability theory in the design of flood levels. Wemelsfelder introduced a systematic approach to understanding and predicting the occurrence of storm floods, considering both the characteristics of the sea's probable and possible heights and the human and economic interests at stake. His methodology involved creating frequency curves for storm floods, using a standard frequency curve applicable to different gauges worldwide, and classifying storm floods based on their probability of exceedance. This classification system helped in understanding the variability of maximum storm floods over different time periods. Wemelsfelder emphasised the importance of considering both the period and risk when determining design levels, advocating for a two-dimensional approach to flood protection. His approach required establishing a frequency curve for each gauge, determining the period during which the risk is present, and choosing an acceptable total risk value for serious damage, generally not exceeding 10%, and as low as 0.1% for critical areas. He noted the need to balance the costs of safety measures with the economic and human values they protect, and recognised the importance of incorporating contingency in design to account for uncertainties. The body set up by the Dutch Government in response to the 1953 North Sea Flood, the Delta Commission, adopted Wemelsfelder's probabilistic methods, setting design levels based on a risk of total loss of 1 in 1,000 years for critical areas, ensuring a high level of safety. His contributions have had a lasting impact on coastal engineering and continue to inform the design and implementation of flood defences in the Netherlands and beyond. Life. Wemelsfelder was born in Goes, the son of Jacob Abraham Wemelsfelder and Jannigje Verschoor, in 1907. After completing his studies at the Delft University of Technology, Wemelsfelder worked at the Waterloopkundig Laboratorium and later at Rijkswaterstaat, where he served as the head of the Hydrometric Department of the Water Management and Movement Directorate. One of his major accomplishments was the development of methods and instruments for hydrometry in the Netherlands. Wemelsfelder introduced a probabilistic approach to determining design flood levels for storm surges in the Netherlands. Prior to his work, flood protection measures were based on a deterministic approach that relied on the highest previously recorded water levels, along with some estimation. For example, the height of the Afsluitdijk was determined based on the highest observed storm surge, with the height of the crest determined based on insufficient data about wave run-up. This became apparent soon after the first significant storm surge following the completion of the dike in December 1936, when water in the Wadden Sea reached to around half a metre below the dike. In 1938, Wemelsfelder introduced a significant change in the design approach through a brief note on the frequency of storm surges, in which he carried out a statistical analysis of water levels measured between 1888 and 1937 at Hoek van Holland to derive the probability distribution of such events. 
Wemelsfelder determined a statistical estimate of the cumulative distribution of sea-level heights during high tide, and determined that the exceedance frequencies formula_0, where formula_1 represents the number of times the level formula_2 was exceeded during formula_3 years, closely followed a straight line when plotted on logarithmic paper. Prior to Wemelsfelder's work, S.H.A. Begemann had applied statistical methods to hydrological aspects such as precipitation and runoff for irrigation. In the United States, publications on stochastic hydrology such as those by Allen Hazen and others had been appearing since the early twentieth century. However, Wemelsfelder's statistical analysis of water levels measured between 1888 and 1937 at Hoek van Holland enabled the derivation of a probability distribution of storm surges. He published his findings in a Dutch journal in March 1939, which revolutionised the way flood protection measures were designed in the country. By using a frequency curve on a logarithmic scale, Wemelsfelder showed that an effective statistical overview of storm surges could be obtained. His 1939 paper demonstrated that the structure of the distribution of storm surges over the years, both in terms of strength and frequency, can be accurately represented by a probability law. In 1939, the establishment of the Storm Surge Commission was prompted by concerns regarding the state of many dikes in Zeeland. Under the leadership of Johan van Veen, the commission adopted Wemelsfelder's probabilistic approach as the basis for determining the probability of water level exceedance and the calculation of dike heights. This was a departure from the earlier approach of relying solely on previously recorded high water levels. Despite the commission's recommendation to raise the levels of the dikes, the Government of The Netherlands did not take action, and the dike system remained vulnerable throughout World War II. After the war, attention turned to rebuilding efforts, and the urgency of flood protection was underlined in 1953 by a catastrophic flood that claimed 1,836 lives in The Netherlands, and caused billions of guilders in infrastructural damage. The storm surge associated with the 1953 flood saw water levels reach 3.85 metres above Normaal Amsterdams Peil (NAP) at Hoek van Holland, higher than the crest height of the dikes which had been determined based on previously recorded highest water levels (3.28 metres above NAP) at the same location. In response, the Dutch Government formed the Delta Commission, which was charged with making recommendations for reducing the risk of such disasters. The commission relied heavily on the analysis and solutions put forth by the Storm Surge Commission, which had already adopted Wemelsfelder's probabilistic approach for determining dike heights. The Delta Commission recommended a target exceedance frequency of 10−4 per year as the basis for design levels in central Holland, and its work led to the enactment of the 1958 Delta Act. Wemelsfelder's work actually considered a factor of safety that increased the exceedance frequency to that which corresponds to "m"=10−5, where "m" is the acceptable risk. This corresponds to a "total loss" figure of 1% in a 1000-year period. Wemelsfelder made significant contributions to the Delta Commission's analysis and recommendations, and was active in research throughout his career, publishing a number of technical papers in Dutch and English.
His findings continue to inform flood protection measures in the Netherlands today. Activities outside hydraulic engineering. In 1946, Wemelsfelder published a 358-page monograph focused on topics such as cultural development, societal change, the structure of an organic society, governance, legislation, and international order, whose title translates into English as "Plan for a reasonable Dutch society". The book set out Wemelsfelder's thoughts on how The Netherlands could organise society in political and economic fields in the immediate aftermath of the Second World War and dealt with issues including combating unemployment, the establishment of a savings credit bank, reorganisation of banking and industry, pensions, the introduction of child allowance and the introduction of a separate income for married women. Selected Publications. Between 1965 and 1972, Wemelsfelder prepared reports on various storm surges which occurred in the Netherlands. All reports from this period are available in the TU Delft Repository. References. <templatestyles src="Reflist/styles.css" />
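A rough sketch of the kind of frequency-curve analysis described above can be written in a few lines of Python. The synthetic water levels, the fifty-year record length and the simple straight-line fit on a logarithmic scale are illustrative assumptions, not a reconstruction of Wemelsfelder's actual data or exact procedure.

```python
import numpy as np

# Synthetic annual-maximum high-water levels in metres above datum (illustrative only).
rng = np.random.default_rng(1)
levels = 1.8 + rng.exponential(scale=0.35, size=50)   # 50 "years" of observations
years = 50

# Empirical exceedance frequency per year for each observed level.
sorted_levels = np.sort(levels)[::-1]
exceed_per_year = np.arange(1, len(sorted_levels) + 1) / years

# Wemelsfelder's observation: the frequency curve is roughly a straight line on
# a logarithmic scale, so fit log10(frequency) = a*h + b and extrapolate.
a, b = np.polyfit(sorted_levels, np.log10(exceed_per_year), 1)

def design_level(frequency_per_year):
    """Level whose exceedance frequency equals the given value under the fit."""
    return (np.log10(frequency_per_year) - b) / a

print("Level exceeded ~once per 1,000 years:", round(design_level(1e-3), 2), "m")
print("Level exceeded ~once per 10,000 years:", round(design_level(1e-4), 2), "m")
```

Extrapolating to return periods far beyond the observation record is exactly where the choice of distribution and the factor of safety discussed above become critical.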
[ { "math_id": 0, "text": "n(h)/n" }, { "math_id": 1, "text": "n(h)" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=73448421
7346
Centimetre–gram–second system of units
Physical system of measurement that uses the centimetre, gram, and second as base units The centimetre–gram–second system of units (CGS or cgs) is a variant of the metric system based on the centimetre as the unit of length, the gram as the unit of mass, and the second as the unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways in which the CGS system was extended to cover electromagnetism. The CGS system has been largely supplanted by the MKS system based on the metre, kilogram, and second, which was in turn extended and replaced by the International System of Units (SI). In many fields of science and engineering, SI is the only system of units in use, but CGS is still prevalent in certain subfields. In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward: the unit-conversion factors are all powers of 10 as 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS unit of force is the dyne, which is defined as , so the SI unit of force, the newton (), is equal to . On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is less straightforward. Formulas for physical laws of electromagnetism (such as Maxwell's equations) take a form that depends on which system of units is being used, because the electromagnetic quantities are defined differently in SI and in CGS. Furthermore, within CGS, there are several plausible ways to define electromagnetic quantities, leading to different "sub-systems", including Gaussian units, "ESU", "EMU", and Heaviside–Lorentz units. Among these choices, Gaussian units are the most common today, and "CGS units" is often intended to refer to CGS-Gaussian units. History. The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson recommended the general adoption of centimetre, gram and second as fundamental units, and to express all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...". The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard. Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. CGS units have been deprecated in favor of SI units by NIST, as well as organizations such as the American Physical Society and the International Astronomical Union. 
SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are still commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics. The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units. Definition of CGS units in mechanics. In mechanics, the quantities in the CGS and SI systems are defined identically. The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems. There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous relationship between derived units: Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time: 1 unit of pressure = 1 unit of force / (1 unit of length)2 = 1 unit of mass / (1 unit of length × (1 unit of time)2) 1 Ba = 1 g/(cm⋅s2) 1 Pa = 1 kg/(m⋅s2). Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems: 1 Ba = 1 g/(cm⋅s2) = 10−3 kg / (10−2 m⋅s2) = 10−1 kg/(m⋅s2) = 10−1 Pa. Derivation of CGS units in electromagnetism. CGS approach to electromagnetic units. The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulas expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulas. This illustrates the fundamental difference in the ways the two systems are built: In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant. Maxwell's equations can be written in each of these systems as: Electrostatic units (ESU). In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant (and current is then defined as charge per unit time): formula_6 The ESU unit of charge, franklin (Fr), also known as statcoulomb or esu charge, is therefore defined as follows: <templatestyles src="Template:Blockquote/styles.css" />two equal point charges spaced 1 centimetre apart are said to be of 1 franklin each if the electrostatic force between them is 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times square root of dyne: formula_7 The unit of current is defined as: formula_8 In the CGS-ESU system, charge "q" therefore has the dimension M1/2L3/2T−1. Other units in the CGS-ESU system include the statampere (1 statC/s) and statvolt (1 erg/statC). In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension.
Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'.:3 Unit symbols. All electromagnetic units in the CGS-ESU system that have not been given names of their own are named as the corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu", and similarly with the corresponding symbols. Electromagnetic units (EMU). In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well). The EMU unit of current, biot (Bi), also known as abampere or emu current, is therefore defined as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The biot is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one centimetre apart in vacuum, would produce between these conductors a force equal to two dynes per centimetre of length. Therefore, in electromagnetic CGS units, a biot is equal to a square root of dyne: formula_9 The unit of charge in CGS EMU is: formula_10 Dimensionally in the CGS-EMU system, charge "q" is therefore equivalent to M1/2L1/2. Hence, neither charge nor current is an independent physical quantity in the CGS-EMU system. EMU notation. All electromagnetic units in the CGS-EMU system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu". Practical CGS units. The practical CGS system is a hybrid system that uses the volt and the ampere as the units of voltage and current respectively. Doing this avoids the inconveniently large and small electrical units that arise in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and ampere, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry (inductance) are consequently also used in the practical system and are the same as the SI units. The magnetic units are those of the emu system. The electrical units, other than the volt and ampere, are determined by the requirement that any equation involving only electrical and kinematical quantities that is valid in SI should also be valid in the system. For example, since electric field strength is voltage per unit length, its unit is the volt per centimetre, which is one hundred times the SI unit. The system is electrically rationalized and magnetically unrationalized; i.e., 𝜆 = 1 and 𝜆′ = 4π, but the above formula for 𝜆 is invalid. A closely related system is the International System of Electric and Magnetic Units, which has a different unit of mass so that the formula for 𝜆′ is invalid. The unit of mass was chosen to remove powers of ten from contexts in which they were considered to be objectionable (e.g., "P" = "VI" and "F" = "qE"). Inevitably, the powers of ten reappeared in other contexts, but the effect was to make the familiar joule and watt the units of work and power respectively. 
The ampere-turn system is constructed in a similar way by considering magnetomotive force and magnetic field strength to be electrical quantities and rationalizing the system by dividing the units of magnetic pole strength and magnetization by 4π. The units of the first two quantities are the ampere and the ampere per centimetre respectively. The unit of magnetic permeability is that of the emu system, and the magnetic constitutive equations are B = (4π/10)"μ"H and B = (4π/10)"μ"0H + "μ"0M. Magnetic reluctance is given a hybrid unit to ensure the validity of Ohm's law for magnetic circuits. In all the practical systems "ε"0 = 8.8542 × 10−14 A⋅s/(V⋅cm), "μ"0 = 1 V⋅s/(A⋅cm), and "c"2 = 1/(4"π" × 10−9 "ε"0"μ"0). Other variants. There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units. Electromagnetic units in various CGS systems. In this table, "c" = is the numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the units are "corresponding" but not "equal". For example, according to the capacitance row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10−9 "c"2) cm in ESU; "but" it is incorrect to replace "1 F" with "(10−9 "c"2) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units. By contrast it is "always" correct to replace, e.g., "1 m" with "100 cm" within an equation or formula.) Advantages and disadvantages. Lack of unique unit names leads to potential confusion: "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. With its system of uniquely named units, the SI removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt. In the CGS-Gaussian system, electric and magnetic fields have the same units, 4π𝜖0 is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is "c", the speed of light. The Heaviside–Lorentz system has these properties as well (with "ε"0 equaling 1). In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering and relates directly to the geometric symmetry of the system being described by the equation. Specialized unit systems are used to simplify formulas further than either SI or CGS do, by eliminating constants through a convention of normalizing quantities with respect to some system of natural units. For example, in particle physics a system is in use where every quantity is expressed by only one unit of energy, the electronvolt, with lengths, times, and so on all converted into units of energy by inserting factors of speed of light "c" and the reduced Planck constant "ħ". This unit system is convenient for calculations in particle physics, but is impractical in other contexts. See also. &lt;templatestyles src="Div col/styles.css"/&gt;
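A small numerical check of the constants quoted above for the practical systems: with "ε"0 = 8.8542 × 10−14 A⋅s/(V⋅cm) and "μ"0 = 1 V⋅s/(A⋅cm), the stated relation "c"2 = 1/(4"π" × 10−9 "ε"0"μ"0) does recover the speed of light in centimetres per second. The few lines of Python below are only a sanity check of that arithmetic; the second line also evaluates the table's correspondence for capacitance.

import math

eps0 = 8.8542e-14   # A*s/(V*cm), as quoted above
mu0 = 1.0           # V*s/(A*cm), as quoted above
c = math.sqrt(1.0 / (4 * math.pi * 1e-9 * eps0 * mu0))
print(c)            # ~2.9979e10 cm/s, the speed of light expressed in cm/s
print(1e-9 * c**2)  # ~8.99e11, the capacitance in cm corresponding to 1 F per the table note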
[ { "math_id": 0, "text": "v = \\frac{dx}{dt}" }, { "math_id": 1, "text": "F = m\\frac{d^2x}{dt^2}" }, { "math_id": 2, "text": "E = \\int \\vec{F}\\cdot d\\vec{x}" }, { "math_id": 3, "text": "p = \\frac{F}{L^2} " }, { "math_id": 4, "text": "\\eta = \\tau/\\frac{dv}{dx}" }, { "math_id": 5, "text": "q = I \\, t," }, { "math_id": 6, "text": "F={q^\\text{ESU}_1 q^\\text{ESU}_2 \\over r^2} ." }, { "math_id": 7, "text": "\\mathrm{1\\,Fr = 1\\,statcoulomb = 1\\,esu\\; charge = 1\\,dyne^{1/2}{\\cdot}cm=1\\,g^{1/2}{\\cdot}cm^{3/2}{\\cdot}s^{-1}} ." }, { "math_id": 8, "text": "\\mathrm{1\\,Fr/s = 1\\,statampere = 1\\,esu\\; current = 1\\,dyne^{1/2}{\\cdot}cm{\\cdot}s^{-1}=1\\,g^{1/2}{\\cdot}cm^{3/2}{\\cdot}s^{-2}} ." }, { "math_id": 9, "text": "\\mathrm{1\\,Bi = 1\\,abampere = 1\\,emu\\; current= 1\\,dyne^{1/2}=1\\,g^{1/2}{\\cdot}cm^{1/2}{\\cdot}s^{-1}}." }, { "math_id": 10, "text": "\\mathrm{1\\,Bi{\\cdot}s = 1\\,abcoulomb = 1\\,emu\\, charge= 1\\,dyne^{1/2}{\\cdot}s=1\\,g^{1/2}{\\cdot}cm^{1/2}}." } ]
https://en.wikipedia.org/wiki?curid=7346
73462795
Fractional job scheduling
Optimal job scheduling with some jobs done in parts Fractional job scheduling is a variant of optimal job scheduling in which it is allowed to break jobs into parts and process each part separately on the same or a different machine. Breaking jobs into parts may improve the overall performance, for example by decreasing the makespan. Moreover, the computational problem of finding an optimal schedule may become easier, as some of the optimization variables become continuous. On the other hand, breaking jobs apart might be costly. Variants. There are several variants of job scheduling problems in which it is allowed to break jobs apart. They can be broadly classified into preemption and splitting. Job scheduling with preemption. Various problems have been studied in job scheduling with preemption. One of them is generalized multiprocessor scheduling (GMS). It has two variants: in one, the total number of preemptions over all jobs is bounded (a global preemption bound); in the other, the number of preemptions of each individual job is bounded (a job-wise preemption bound). In both variants, the goal is to find a schedule that minimizes the makespan subject to the preemption constraints. For identical machines, Shchepin and Vakhania prove that with at most formula_1 total preemptions, the problem is NP-hard, whereas McNaughton shows a linear-time algorithm with formula_2 preemptions. For uniform machines, a polynomial algorithm by Gonzalez and Sahni yields at most formula_3 preemptions. Shachnai, Tamir, and Woeginger proved NP-hardness for the case where the number of preemptions is strictly less than formula_3. They also presented a PTAS for GMS with a global preemption bound, and another PTAS for GMS with a job-wise preemption bound when the number of machines is a fixed constant. Soper and Strusevitch study the special case in which at most one preemption is allowed. They show that makespan minimization is polynomial for two machines. Many papers study other variants of preemptive scheduling. For example, Liu and Cheng consider single-machine scheduling with job release and delivery dates, where there is no firm bound on the number of preemptions, but each preemption requires spending time on "job setup". Some works, like Blazewicz "et al." or Deng "et al.", study preemptive scheduling for jobs with parallelism, where jobs must be processed simultaneously on several processors. Job scheduling with splitting. Various objectives have been studied. There are many variants, including different setup times. In machine scheduling, the "setup time" refers to the time required to prepare a machine for a specific job or task. Sequence-dependent setup time is a situation where the setup time required for a job depends on the job that came before it, rather than being constant for all jobs (independent job setup time). Serafini assumes unbounded splittings and preemptions and gives polynomial-time algorithms that minimize the maximum tardiness and the maximum weighted tardiness, for uniform and unrelated machines. Xing and Zhang allow unbounded splittings, and give polynomial algorithms for many optimality criteria (such as makespan, lateness, tardiness, and more), with identical, uniform, and unrelated machines. For the case with independent job setup time, they give a formula_4 approximation algorithm. Son et al. study makespan minimization on a single machine with a machine-availability constraint and a lower bound on the length of each part of a job that is split. For identical machines, Shim and Kim suggest a branch and bound algorithm with the objective of minimizing total tardiness with independent job setup time.
Yalaoui and Chu propose a heuristic for the same setting, with the objective of minimizing the makespan. Kim et al. suggest a two-phase heuristic algorithm with the objective of minimizing total tardiness. With the objective of minimizing the makespan, Kim studies another variant with family setup time, in which no setup is required when parts from the same job are produced consecutively. Wang et al. include a learning property that improves the processing time of a job according to the learning effect; the learning has to be restarted if one job is split and processed by a different machine. For uniform machines, Kim and Lee study a variant with dedicated machines (there are some dedicated machines for each job), sequence-dependent setup times, and limited setup resources (jobs require setup operators, which are limited), with the objective of minimizing the makespan. Krysta, Sanders, and Vöcking study makespan minimization in the k-splittable variant, where each job is allowed to be split among at most formula_5 different machines. They show that this variant, and a more general variant in which each job has its own splittability parameter, are NP-hard. They give some approximation algorithms, but their main result is a polynomial-time algorithm for both problems when the number of machines is a fixed constant. They show that allowing a bounded number of splittings reduces the complexity of scheduling. In all these works, there is no global bound on the number of job splits.
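Looking back at the preemption results above, McNaughton's wrap-around rule is simple enough to sketch. The following Python is an illustrative implementation (the function name and the example job lengths are ours): it computes the optimal preemptive makespan max(longest job, total work / machines) and fills the machines one after another, cutting a job whenever a machine becomes full and continuing it on the next machine.

def mcnaughton(jobs, m):
    # Optimal preemptive makespan on m identical machines
    C = max(max(jobs), sum(jobs) / m)
    schedule = [[] for _ in range(m)]   # schedule[i]: list of (job, start, end) on machine i
    machine, t = 0, 0.0
    for j, p in enumerate(jobs):
        remaining = p
        while remaining > 1e-12:
            piece = min(remaining, C - t)
            schedule[machine].append((j, t, t + piece))
            t += piece
            remaining -= piece
            if C - t <= 1e-12:          # machine is full: wrap to the next one
                machine, t = machine + 1, 0.0
    return C, schedule

C, schedule = mcnaughton([5, 3, 3, 2, 2, 1], m=3)
print(C)   # 5.333... = max(5, 16/3)
for i, intervals in enumerate(schedule):
    print(i, intervals)

Because every job is no longer than C, the two pieces of a job that is cut at the end of one machine and resumed at time 0 on the next never overlap in time, and at most one fewer job is cut than there are machines.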
[ { "math_id": 0, "text": "j" }, { "math_id": 1, "text": "n-2" }, { "math_id": 2, "text": "n-1" }, { "math_id": 3, "text": "2n-2" }, { "math_id": 4, "text": "7/4" }, { "math_id": 5, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=73462795
734635
Lucas sequence
Certain constant-recursive integer sequences In mathematics, the Lucas sequences formula_0 and formula_1 are certain constant-recursive integer sequences that satisfy the recurrence relation formula_2 where formula_3 and formula_4 are fixed integers. Any sequence satisfying this recurrence relation can be represented as a linear combination of the Lucas sequences formula_5 and formula_6 More generally, Lucas sequences formula_5 and formula_1 represent sequences of polynomials in formula_3 and formula_4 with integer coefficients. Famous examples of Lucas sequences include the Fibonacci numbers, Mersenne numbers, Pell numbers, Lucas numbers, Jacobsthal numbers, and a superset of Fermat numbers (see below). Lucas sequences are named after the French mathematician Édouard Lucas. Recurrence relations. Given two integer parameters formula_3 and formula_4, the Lucas sequences of the first kind formula_0 and of the second kind formula_7 are defined by the recurrence relations: formula_8 and formula_9 It is not hard to show that for formula_10, formula_11 The above relations can be stated in matrix form as follows: formula_12 &lt;br&gt; formula_13 &lt;br&gt; formula_14 Examples. Initial terms of Lucas sequences formula_0 and formula_7 are given in the table: formula_15 Explicit expressions. The characteristic equation of the recurrence relation for Lucas sequences formula_0 and formula_7 is: formula_16 It has the discriminant formula_17 and the roots: formula_18 Thus: formula_19 formula_20 formula_21 Note that the sequence formula_22 and the sequence formula_23 also satisfy the recurrence relation. However these might not be integer sequences. Distinct roots. When formula_24, "a" and "b" are distinct and one quickly verifies that formula_25 formula_26 It follows that the terms of Lucas sequences can be expressed in terms of "a" and "b" as follows formula_27 formula_28 Repeated root. The case formula_29 occurs exactly when formula_30 for some integer "S" so that formula_31. In this case one easily finds that formula_32 formula_33 Properties. Generating functions. The ordinary generating functions are formula_34 formula_35 Pell equations. When formula_36, the Lucas sequences formula_5 and formula_1 satisfy certain Pell equations: formula_37 formula_38 formula_39 formula_42 formula_43 have the same discriminant as formula_5 and formula_1: formula_44 formula_45 formula_46 Other relations. The terms of Lucas sequences satisfy relations that are generalizations of those between Fibonacci numbers formula_47 and Lucas numbers formula_48. For example: formula_49 Divisibility properties. Among the consequences is that formula_50 is a multiple of formula_51, i.e., the sequence formula_52 is a divisibility sequence. This implies, in particular, that formula_0 can be prime only when "n" is prime. Another consequence is an analog of exponentiation by squaring that allows fast computation of formula_0 for large values of "n". Moreover, if formula_53, then formula_52 is a strong divisibility sequence. Other divisibility properties are as follows: The last fact generalizes Fermat's little theorem. These facts are used in the Lucas–Lehmer primality test. The converse of the last fact does not hold, as the converse of Fermat's little theorem does not hold. There exists a composite "n" relatively prime to "D" and dividing formula_64, where formula_66. Such a composite is called a Lucas pseudoprime. A prime factor of a term in a Lucas sequence that does not divide any earlier term in the sequence is called primitive. 
Carmichael's theorem states that all but finitely many of the terms in a Lucas sequence have a primitive prime factor. Indeed, Carmichael (1913) showed that if "D" is positive and "n" is not 1, 2 or 6, then formula_58 has a primitive prime factor. In the case "D" is negative, a deep result of Bilu, Hanrot, Voutier and Mignotte shows that if "n" &gt; 30, then formula_58 has a primitive prime factor and determines all cases formula_58 has no primitive prime factor. Specific names. The Lucas sequences for some values of "P" and "Q" have specific names: "Un"(1, −1) : Fibonacci numbers "Vn"(1, −1) : Lucas numbers "Un"(2, −1) : Pell numbers "Vn"(2, −1) : Pell–Lucas numbers (companion Pell numbers) "Un"(1, −2) : Jacobsthal numbers "Vn"(1, −2) : Jacobsthal–Lucas numbers "Un"(3, 2) : Mersenne numbers 2"n" − 1 "Vn"(3, 2) : Numbers of the form 2"n" + 1, which include the Fermat numbers "Un"(6, 1) : The square roots of the square triangular numbers. "Un"("x", −1) : Fibonacci polynomials "Vn"("x", −1) : Lucas polynomials "Un"(2"x", 1) : Chebyshev polynomials of second kind "Vn"(2"x", 1) : Chebyshev polynomials of first kind multiplied by 2 "Un"("x"+1, "x") : Repunits in base "x" "Vn"("x"+1, "x") : "xn" + 1 Some Lucas sequences have entries in the On-Line Encyclopedia of Integer Sequences: Software. Sagemath implements formula_58 and formula_56 as codice_0 and codice_1, respectively. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
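The defining recurrences above translate directly into code. As a companion to the software note, here is a short, self-contained Python sketch (the function name is ours, and it does not use SageMath) that computes U_n and V_n and spot-checks a few of the special cases and identities listed in this article.

def lucas_uv(P, Q, n):
    # U and V by the defining recurrence x_n = P*x_(n-1) - Q*x_(n-2)
    U, V = [0, 1], [2, P]
    for _ in range(2, n + 1):
        U.append(P * U[-1] - Q * U[-2])
        V.append(P * V[-1] - Q * V[-2])
    return U[n], V[n]

print([lucas_uv(1, -1, k)[0] for k in range(8)])  # Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13
print([lucas_uv(1, -1, k)[1] for k in range(8)])  # Lucas numbers 2, 1, 3, 4, 7, 11, 18, 29
print([lucas_uv(3, 2, k)[0] for k in range(6)])   # Mersenne numbers 0, 1, 3, 7, 15, 31
U6, V6 = lucas_uv(2, -1, 6)                       # Pell case
print(lucas_uv(2, -1, 12)[0] == U6 * V6)          # True: checks the identity U_2n = U_n * V_n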
[ { "math_id": 0, "text": "U_n(P,Q)" }, { "math_id": 1, "text": "V_n(P, Q)" }, { "math_id": 2, "text": "x_n = P \\cdot x_{n - 1} - Q \\cdot x_{n - 2}" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "U_n(P, Q)" }, { "math_id": 6, "text": "V_n(P, Q)." }, { "math_id": 7, "text": "V_n(P,Q)" }, { "math_id": 8, "text": "\\begin{align}\nU_0(P,Q)&=0, \\\\\nU_1(P,Q)&=1, \\\\\nU_n(P,Q)&=P\\cdot U_{n-1}(P,Q)-Q\\cdot U_{n-2}(P,Q) \\mbox{ for }n>1,\n\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\nV_0(P,Q)&=2, \\\\\nV_1(P,Q)&=P, \\\\\nV_n(P,Q)&=P\\cdot V_{n-1}(P,Q)-Q\\cdot V_{n-2}(P,Q) \\mbox{ for }n>1.\n\\end{align}" }, { "math_id": 10, "text": "n>0" }, { "math_id": 11, "text": "\\begin{align}\nU_n(P,Q)&=\\frac{P\\cdot U_{n-1}(P,Q) + V_{n-1}(P,Q)}{2}, \\\\\nV_n(P,Q)&=\\frac{(P^2-4Q)\\cdot U_{n-1}(P,Q)+P\\cdot V_{n-1}(P,Q)}{2}. \n\\end{align}" }, { "math_id": 12, "text": "\\begin{bmatrix} U_n(P,Q)\\\\ U_{n+1}(P,Q)\\end{bmatrix} = \\begin{bmatrix} 0 & 1\\\\ -Q & P\\end{bmatrix}\\cdot \\begin{bmatrix} U_{n-1}(P,Q)\\\\ U_n(P,Q)\\end{bmatrix}," }, { "math_id": 13, "text": "\\begin{bmatrix} V_n(P,Q)\\\\ V_{n+1}(P,Q)\\end{bmatrix} = \\begin{bmatrix} 0 & 1\\\\ -Q & P\\end{bmatrix}\\cdot \\begin{bmatrix} V_{n-1}(P,Q)\\\\ V_n(P,Q)\\end{bmatrix}," }, { "math_id": 14, "text": "\\begin{bmatrix} U_n(P,Q)\\\\ V_n(P,Q)\\end{bmatrix} = \\begin{bmatrix} P/2 & 1/2\\\\ (P^2-4Q)/2 & P/2\\end{bmatrix}\\cdot \\begin{bmatrix} U_{n-1}(P,Q)\\\\ V_{n-1}(P,Q)\\end{bmatrix}." }, { "math_id": 15, "text": "\n\\begin{array}{r|l|l}\nn & U_n(P,Q) & V_n(P,Q)\n\\\\\n\\hline\n0 & 0 & 2\n\\\\\n1 & 1 & P\n\\\\\n2 & P & {P}^{2}-2Q\n\\\\\n3 & {P}^{2}-Q & {P}^{3}-3PQ\n\\\\\n4 & {P}^{3}-2PQ & {P}^{4}-4{P}^{2}Q+2{Q}^{2}\n\\\\\n5 & {P}^{4}-3{P}^{2}Q+{Q}^{2} & {P}^{5}-5{P}^{3}Q+5P{Q}^{2}\n\\\\\n6 & {P}^{5}-4{P}^{3}Q+3P{Q}^{2} & {P}^{6}-6{P}^{4}Q+9{P}^{2}{Q}^{2}-2{Q}^{3}\n\\end{array}\n" }, { "math_id": 16, "text": "x^2 - Px + Q=0 \\," }, { "math_id": 17, "text": "D = P^2 - 4Q" }, { "math_id": 18, "text": "a = \\frac{P+\\sqrt{D}}2\\quad\\text{and}\\quad b = \\frac{P-\\sqrt{D}}2. \\," }, { "math_id": 19, "text": "a + b = P\\, ," }, { "math_id": 20, "text": "a b = \\frac{1}{4}(P^2 - D) = Q\\, ," }, { "math_id": 21, "text": "a - b = \\sqrt{D}\\, ." }, { "math_id": 22, "text": "a^n" }, { "math_id": 23, "text": "b^n" }, { "math_id": 24, "text": "D\\ne 0" }, { "math_id": 25, "text": "a^n = \\frac{V_n + U_n \\sqrt{D}}{2}" }, { "math_id": 26, "text": "b^n = \\frac{V_n - U_n \\sqrt{D}}{2}." }, { "math_id": 27, "text": "U_n = \\frac{a^n-b^n}{a-b} = \\frac{a^n-b^n}{ \\sqrt{D}}" }, { "math_id": 28, "text": "V_n = a^n+b^n \\," }, { "math_id": 29, "text": " D=0 " }, { "math_id": 30, "text": " P=2S \\text{ and }Q=S^2" }, { "math_id": 31, "text": "a=b=S" }, { "math_id": 32, "text": "U_n(P,Q)=U_n(2S,S^2) = nS^{n-1}\\," }, { "math_id": 33, "text": "V_n(P,Q)=V_n(2S,S^2)=2S^n.\\," }, { "math_id": 34, "text": "\n\\sum_{n\\ge 0} U_n(P,Q)z^n = \\frac{z}{1-Pz+Qz^2};\n" }, { "math_id": 35, "text": "\n\\sum_{n\\ge 0} V_n(P,Q)z^n = \\frac{2-Pz}{1-Pz+Qz^2}.\n" }, { "math_id": 36, "text": "Q=\\pm 1" }, { "math_id": 37, "text": "V_n(P,1)^2 - D\\cdot U_n(P,1)^2 = 4," }, { "math_id": 38, "text": "V_{2n}(P,-1)^2 - D\\cdot U_{2n}(P,-1)^2 = 4," }, { "math_id": 39, "text": "V_{2n+1}(P,-1)^2 - D\\cdot U_{2n+1}(P,-1)^2 = -4." 
}, { "math_id": 40, "text": "U_n(P', Q')" }, { "math_id": 41, "text": "V_n(P', Q')" }, { "math_id": 42, "text": " P' = P + 2c " }, { "math_id": 43, "text": " Q' = cP + Q + c^2 " }, { "math_id": 44, "text": "P'^2 - 4Q' = (P+2c)^2 - 4(cP + Q + c^2) = P^2 - 4Q = D." }, { "math_id": 45, "text": "U_n(cP,c^2Q) = c^{n-1}\\cdot U_n(P,Q)," }, { "math_id": 46, "text": "V_n(cP,c^2Q) = c^n\\cdot V_n(P,Q)." }, { "math_id": 47, "text": "F_n=U_n(1,-1)" }, { "math_id": 48, "text": "L_n=V_n(1,-1)" }, { "math_id": 49, "text": "\n\\begin{array}{r|l}\n\\text{General case} & (P,Q) = (1,-1)\n\\\\\n\\hline\n(P^2-4Q) U_n = {V_{n+1} - Q V_{n-1}}=2V_{n+1}-P V_n & 5F_n = {L_{n+1} + L_{n-1}}=2L_{n+1} - L_{n} \n\\\\\nV_n = U_{n+1} - Q U_{n-1}=2U_{n+1}-PU_n & L_n = F_{n+1} + F_{n-1}=2F_{n+1}-F_n \n\\\\\nU_{2n} = U_n V_n & F_{2n} = F_n L_n \n\\\\\nV_{2n} = V_n^2 - 2Q^n & L_{2n} = L_n^2 - 2(-1)^n \n\\\\\nU_{m+n} = U_n U_{m+1} - Q U_m U_{n-1}=\\frac{U_mV_n+U_nV_m}{2} & F_{m+n} = F_n F_{m+1} + F_m F_{n-1}=\\frac{F_mL_n+F_nL_m}{2} \n\\\\\nV_{m+n} = V_m V_n - Q^n V_{m-n} = D U_m U_n + Q^n V_{m-n} & L_{m+n} = L_m L_n - (-1)^n L_{m-n} = 5 F_m F_n + (-1)^n L_{m-n} \n\\\\\nV_n^2-DU_n^2=4Q^n & L_n^2-5F_n^2=4(-1)^n \n\\\\\nU_n^2-U_{n-1}U_{n+1}=Q^{n-1} & F_n^2-F_{n-1}F_{n+1}=(-1)^{n-1} \n\\\\\nV_n^2-V_{n-1}V_{n+1}=DQ^{n-1} & L_n^2-L_{n-1}L_{n+1}=5(-1)^{n-1} \n\\\\\nDU_n=V_{n+1}-QV_{n-1} & F_n=\\frac{L_{n+1}+L_{n-1}}{5} \n\\\\\nV_{m+n}=\\frac{V_mV_n+DU_mU_n}{2} & L_{m+n}=\\frac{L_mL_n+5F_mF_n}{2} \n\\\\\nU_{m+n}=U_mV_n-Q^nU_{m-n} & F_{n+m}=F_mL_n-(-1)^nF_{m-n}\n\\\\\n2^{n-1}U_n={n \\choose 1}P^{n-1}+{n \\choose 3}P^{n-3}D+\\cdots & 2^{n-1}F_n={n \\choose 1}+5{n \\choose 3}+\\cdots\n\\\\\n2^{n-1}V_n=P^n+{n \\choose 2}P^{n-2}D+{n \\choose 4}P^{n-4}D^2+\\cdots & 2^{n-1}L_n=1+5{n \\choose 2}+5^2{n \\choose 4}+\\cdots\n\\end{array}\n" }, { "math_id": 50, "text": "U_{km}(P,Q)" }, { "math_id": 51, "text": "U_m(P,Q)" }, { "math_id": 52, "text": "(U_m(P,Q))_{m\\ge1}" }, { "math_id": 53, "text": "\\gcd(P,Q)=1" }, { "math_id": 54, "text": "n \\mid m" }, { "math_id": 55, "text": "V_m" }, { "math_id": 56, "text": "V_n" }, { "math_id": 57, "text": "U_r" }, { "math_id": 58, "text": "U_n" }, { "math_id": 59, "text": "U_n, V_n" }, { "math_id": 60, "text": "U_1" }, { "math_id": 61, "text": "n=1, 2, \\ldots" }, { "math_id": 62, "text": "U_p\\equiv\\left(\\tfrac{D}{p}\\right), V_p\\equiv P\\pmod{p}" }, { "math_id": 63, "text": "n>1" }, { "math_id": 64, "text": "U_l" }, { "math_id": 65, "text": "l=p-\\left(\\tfrac{D}{p}\\right)" }, { "math_id": 66, "text": "l=n-\\left(\\tfrac{D}{n}\\right)" } ]
https://en.wikipedia.org/wiki?curid=734635
734701
911 (number)
Natural number 911 (nine hundred [and] eleven) is the integer following 910 and preceding 912. It is a prime number, a Sophie Germain prime, and the sum of three consecutive primes (293 + 307 + 311). It is an Eisenstein prime with no imaginary part and real part of the form formula_0. Since 913 is a semiprime, 911 is a Chen prime. It is also a centered decagonal number. There are 911 inverse semigroups of order 7 (a sequence recorded in the OEIS). 911 is obtained by concatenating its product of digits (9) and its sum of digits (11).
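Each of the arithmetic claims above can be verified with a few lines of Python; the primality helper below is ours, and the expression 5n(n + 1) + 1 used for the centered decagonal check is the standard formula for those numbers rather than something stated in this article.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(911))                                       # True: 911 is prime
print(is_prime(2 * 911 + 1))                               # True: 1823 is prime, so 911 is a Sophie Germain prime
print(293 + 307 + 311)                                     # 911: sum of three consecutive primes
print([d for d in range(2, 913) if 913 % d == 0])          # [11, 83]: 913 is a semiprime
print(any(5 * n * (n + 1) + 1 == 911 for n in range(50)))  # True: 911 is centered decagonal (n = 13)
print(str(9 * 1 * 1) + str(9 + 1 + 1))                     # '911': product of digits followed by sum of digits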
[ { "math_id": 0, "text": "3n-1" } ]
https://en.wikipedia.org/wiki?curid=734701
7347143
Small-angle scattering
Small-angle scattering (SAS) is a scattering technique based on deflection of collimated radiation away from the straight trajectory after it interacts with structures that are much larger than the wavelength of the radiation. The deflection is small (0.1-10°) hence the name "small-angle". SAS techniques can give information about the size, shape and orientation of structures in a sample. SAS is a powerful technique for investigating large-scale structures from 10 Å up to thousands and even several tens of thousands of angstroms. The most important feature of the SAS method is its potential for analyzing the inner structure of disordered systems, and frequently the application of this method is a unique way to obtain direct structural information on systems with random arrangement of density inhomogeneities in such large-scales. Currently, the SAS technique, with its well-developed experimental and theoretical procedures and wide range of studied objects, is a self-contained branch of the structural analysis of matter. SAS can refer to small angle neutron scattering (SANS) or small angle X-ray scattering (SAXS). Applications. Small-angle scattering is particularly useful because of the dramatic increase in forward scattering that occurs at phase transitions, known as critical opalescence, and because many materials, substances and biological systems possess interesting and complex features in their structure, which match the useful length scale ranges that these techniques probe. The technique provides valuable information over a wide variety of scientific and technological applications including chemical aggregation, defects in materials, surfactants, colloids, ferromagnetic correlations in magnetism, alloy segregation, polymers, proteins, biological membranes, viruses, ribosome and macromolecules. While analysis of the data can give information on size, shape, etc., without making any model assumptions a preliminary analysis of the data can only give information on the radius of gyration for a particle using Guinier's equation. Theory. Continuum description. SAS patterns are typically represented as scattered intensity as a function of the magnitude of the "scattering vector" formula_0. Here formula_1 is the angle between the incident beam and the detector measuring the scattered intensity, and formula_2 is the wavelength of the radiation. One interpretation of the scattering vector is that it is the "resolution" or "yardstick" with which the sample is observed. In the case of a two-phase sample, e.g. small particles in liquid suspension, the only contrast leading to scattering in the typical range of resolution of the SAS is simply Δρ, the difference in "average" scattering length density between the particle and the surrounding liquid, because variations in ρ due to the atomic structure only become visible at higher angles. This means that the total integrated intensity of the SAS pattern (in 3D) is an invariant quantity proportional to the square Δ"ρ"2. In 1-dimensional projection, as usually recorded for an isotropic pattern this invariant quantity becomes formula_3, where the integral runs from q=0 to wherever the SAS pattern is assumed to end and the diffraction pattern starts. It is also assumed that the density does not vary in the liquid or inside the particles, i.e. there is "binary" contrast. SAXS is described in terms of the electronic density where SANS is described in terms of a neutron scattering length density. Porod's law. 
At wave numbers that are relatively large on the scale of SAS, but still small when compared to wide-angle Bragg diffraction, local interface intercorrelations are probed, whereas correlations between opposite interface segments are averaged out. For smooth interfaces, one obtains Porod's law: formula_4 This allows the surface area "S" of the particles to be determined with SAS. This needs to be modified if the interface is rough on the scale "q"−1. If the roughness can be described by a fractal dimension "d" between 2-3 then Porod's law becomes: formula_5 Scattering from particles. Small-angle scattering from particles can be used to determine the particle shape or their size distribution. A small-angle scattering pattern can be fitted with intensities calculated from different model shapes when the size distribution is known. If the shape is known, a size distribution may be fitted to the intensity. Typically one assumes the particles to be spherical in the latter case. If the particles are in solution and known to have uniform size dispersity, then a typical strategy is to measure different concentrations of particles in the solution. From the SAXS patterns obtained one can extrapolate to the intensity pattern one would get for a single particle. This is a necessary procedure that eliminates the "concentration effect", which is a small shoulder that appears in the intensity patterns due to the proximity of neighbouring particles. The average distance between particles is then roughly the distance 2π/"q*", where "q*" is the position of the shoulder on the scattering vector range "q". The shoulder thus comes from the structure of the solution and this contribution is called "the structure factor". One can write for the small-angle X-ray scattering intensity: formula_6 where When the intensities from low concentrations of particles are extrapolated to infinite dilution, the structure factor is equal to 1 and no longer disturbs the determination of the particle shape from the form factor formula_9. One can then easily apply the Guinier approximation (also called Guinier law, after André Guinier), which applies only at the very beginning of the scattering curve, at small "q"-values. According to the Guinier approximation the intensity at small "q" depends on the radius of gyration of the particle. An important part of the particle shape determination is usually the distance distribution function formula_11, which may be calculated from the intensity using a Fourier transform formula_12 The distance distribution function formula_11 is related to the frequency of certain distances formula_13 within the particle. Therefore, it goes to zero at the largest diameter of the particle. It starts from zero at formula_14 due to the multiplication by formula_15. The shape of the formula_11-function already tells something about the shape of the particle. If the function is very symmetric, the particle is also highly symmetric, like a sphere. The distance distribution function should not be confused with the size distribution. The particle shape analysis is especially popular in biological small-angle X-ray scattering, where one determines the shapes of proteins and other natural colloidal polymers. History. Small-angle scattering studies were initiated by André Guinier (1937). Subsequently, Peter Debye, Otto Kratky, Günther Porod, R. Hosemann and others developed the theoretical and experimental fundamentals of the method and they were established until around 1960. 
Later on, new progress in refining the method began in the 1970s and is continuing today. Organisations. As a 'low resolution' diffraction technique, the worldwide interests of the small-angle scattering community are promoted and coordinated by the Commission on Small-Angle Scattering of the International Union of Crystallography (IUCr/CSAS). There are also a number of community-led networks and projects. One such network, canSAS - the acronym stands for Collective Action for Nomadic Small-Angle Scatterers, emphasising the global nature of the technique, champions the development of instrumental calibration standards and data file formats. International conferences. There is a long history of international conferences on small-angle scattering. These are hosted independently by individual organizations wishing to host the conference. The hosts of the conference are often collaborating with the IUCr/CSAS on the conference details. Since 2006, the sequence of conferences has been held at three year intervals. Attendees at the conference will vote on bids to host the next conference(s). Awards. Several awards are presented at the international conference. André Guinier Prize. The André Guinier Prize (in honor of André Guinier) is given for lifetime achievement, a major breakthrough, or an outstanding contribution to the field of small-angle scattering. This award is sponsored by the IUCr and the conference organizers. Previous recipients of the Guinier prize: Otto Kratky Prize. The Otto Kratky Prize is awarded to an outstanding young scientist working in SAXS. This award is sponsored by Anton Paar. To be eligible, you must be a fully registered attendee at the international conference of that year, be author or co-author on an abstract utilizing SAXS, and either less than 35 years of age or fewer than five years since the date of PhD graduation. The prize jury is assembled by the conference organizers and staff of Anton Paar. Previous recipients of the Kratky prize: References. &lt;templatestyles src="Reflist/styles.css" /&gt; Textbooks. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "q = 4\\pi \\sin (\\theta ) / \\lambda" }, { "math_id": 1, "text": "2\\theta" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\int I(q)q^2\\,dx " }, { "math_id": 4, "text": "I(q) \\sim Sq^{-4}" }, { "math_id": 5, "text": "I(q) \\sim S' q^{-(6-d)}" }, { "math_id": 6, "text": "I(q) = P(q)S(q) ," }, { "math_id": 7, "text": "I(q)" }, { "math_id": 8, "text": "q" }, { "math_id": 9, "text": "P(q)" }, { "math_id": 10, "text": "S(q)" }, { "math_id": 11, "text": "p(r)" }, { "math_id": 12, "text": "p(r) = \\frac{r^2}{2\\pi^2}\\int_0^\\infty I(q)\\frac{\\sin qr}{qr}q^2dq." }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "r = 0" }, { "math_id": 15, "text": "r^2" } ]
https://en.wikipedia.org/wiki?curid=7347143
73473067
Weak-beam dark-field microscopy
Electron microscopy technique Weak beam dark field (WBDF) microscopy is a type of transmission electron microscopy (TEM) dark field imaging technique that allows for the visualization of crystal defects with high resolution and contrast. Specifically, the technique is mainly used to study crystal defects such as dislocations, stacking faults, and interfaces in crystalline materials. WBDF is a valuable tool for studying the microstructure of materials, as it can provide detailed information about the nature and distribution of defects in crystals. These characteristics can have a significant impact on material properties such as strength, ductility, and corrosion resistance. WBDF works by using a selected weak first-order diffracted beam from the specimen. This is made possible by tilting the specimen to excite higher angle diffraction spots. The electrons diffracted by the crystal are selected using an objective aperture and selective aperture, which allows only a small fraction of the diffracted electrons to be imaged to the detector. The objective aperture controls size and angle of the incoming beam that is selecting the diffracted beam. The selective aperture selects the area where the diffraction comes from. The WBDF image is able to highlight the location and type of crystal defects because the lattice bends back to Bragg's diffraction orientation near the defect core. The image can be further enhanced by tilting the crystal in different directions, which changes the orientation of the defects with respect to the electron beam. Under certain special diffraction conditions, dislocations can be imaged as narrow lines. The dislocation lines and Burgers vector can be determined for each dislocation. Also, the movement of dislocations in materials can be studied to determine mobility and subsequent material properties. History. One of the first instances in literature which began the development of WBDF is from Hirsch, Howie, and Whelan in 1960. Their paper focused on applying kinematical theory to TEM imaging with emphasis on dislocation and defect imaging. Then, weak-beam techniques were further demonstrated from R. Gever, et al. The authors predicted that even when selecting a weak kinematical spot to form a TEM image, the fringe periodicity is the same as for bright field imaging. Further research into WBDF in 1969 demonstrated the technique's usefulness in imaging dislocations, as developed by Cockayne, Ray and Whelan. Since then, the technique has been widely used for analysis of dislocations and their interactions in crystalline samples. WBDF theory. The weak beam dark field (WBDF) technique is based on using a diffracted beam with a large excitation error (formula_1) to form an on-axis dark field image. To form an image, a first-order diffraction spot is selected while the sample is tilted to excite a higher angle, typically ~ 3g, diffraction spot. The WBDF g-ng condition means that 1g is the g vector used for forming the image and ng is the excited g vector. The specimen is tilted for the Ewald sphere to intersect the lattice at the origin and 3g as the figure shown. In the figure, formula_0, the excitation error, is made to be large such that the origin and the third order diffraction spot intersect with the Ewald sphere and are excited by the electron beam. Note that under the two-beam conditions, the crystal is tilted in a way that there is only one strong diffracted beam at formula_2 and all other diffracted beams are weak, ideally in a symmetric way around the direct beam. 
The intensity of the diffracted beam g in a perfect crystal can be written as the equation below: formula_3 where formula_4 is the sample thickness, formula_5 is the extinction distance for diffraction vector g, which depends on the lattice parameters, the atomic number, and the electron beam voltage used, and formula_6 is the effective excitation error given by the equation: formula_7 In the WBDF technique, the excitation error can increase to about 0.2 formula_8, and formula_6 increases with it. When formula_9, then formula_10; this is known as the kinematical approximation. This approximation underlies the main advantage of WBDF, namely high-contrast defect images. From the equation above, it is evident that the intensity decreases as formula_6 increases. In the areas of the sample without defects, the diffracted intensity is weak, which appears as dark areas in the image. However, near the dislocation core, the lattice planes bend back into the Bragg condition, which leads to a bright intensity peak observed as the dislocation line in the WBDF image. The main challenge of the technique is to adjust the tilt conditions so as to minimize the excitation error of the g reflection near the dislocation core, so that a sharp dislocation line becomes visible. The optimal excited reflection is typically not exactly 3g as proposed, and depends on material properties such as the lattice parameter and on the TEM instrumentation used. Dislocations in the sample, described by their Burgers vector b, are out of contrast (invisible) for a diffracted beam with vector g when formula_11, and appear only when this condition is not met. This criterion is important in WBDF imaging because one of the main advantages of the technique is the ability to qualitatively and quantitatively describe defects in a given material. There are three main mathematical methods to determine the dislocation peak position, as described below. 1. Weak beam criterion: The diffracted beam has the largest intensity when formula_6 or formula_12 is zero: formula_13 where z describes the direction of the electron beam and R is the displacement field around the dislocation. 2. Kinematical integral: Scattering from the transmitted beam into the diffracted beam is maximized when the kinematical integral below is maximized. This occurs near the dislocation core in the sample. formula_14 3. Computing the contrast: The width of the dislocation peak, represented by formula_15, can be narrowed according to the equation below. As the excitation error increases, the width of the dislocation peak decreases, and therefore the contrast between the dislocation and the background increases. formula_16 Weak beam dark field technique setup process. Setting up weak-beam dark-field imaging in transmission electron microscopy involves several steps, which vary depending on the specific TEM instrument and the sample being analyzed, and usually require further optimization and adjustment to achieve the desired image quality and contrast. Comparison with other TEM imaging techniques. WBDF is often used in tandem with other TEM imaging techniques such as bright field (BF) and dark field (DF) imaging. These frequently used techniques similarly create an image from electrons that pass through and interact with the sample; the difference lies in which electrons are selected to fall on the detector and in the degree of sample tilt. This selection is controlled by the objective aperture.
In BF imaging, the direct beam is selected to create the image, and in DF imaging, the scattered electron beams are used to create an image. Evidently, WBDF also uses scattered beams to form an image, but the difference between DF and WBDF comes from the degree of sample tilt and thus beam intensity on the first order diffraction spot. Shown below, a WBDF sample is tilted to excite the 3g diffraction spot to the Bragg condition, and the first-order diffraction spot is selected. This excitation is seen on the instrument as the diffraction spot getting brighter as the sample is tilted. The analytical difference between WBDF and BF and DF imaging is that WBDF can achieve high-contrast images of defects and thickness changes in a sample. This is made possible by tilting the sample to increase the intensity of the beam on diffraction spots further from the direct beam. Due to the tilting and subsequent increase in excitation error, the electrons are treated in the kinematical approximation. This is the aspect of WBDF which sets it apart from BF and DF imaging, and allows for high contrast defect characterization. This process is described in more detail in the WBDF theory section. In this approach, when the specimen is tilted far away from the Bragg condition as it is here, stronger peaks arise from defects in the material which are then able to be in the Bragg condition. In bright field imaging, the width of the dislocation features is larger than in WBDF because the core of the dislocation planes is what is locally bent back into the Bragg condition. When the angle is larger such as in WBDF, the planes need to bend more to satisfy the condition, decreasing the amount of the dislocation line that is shown in the subsequent image. As a result, the diffraction contrast increases as excitation error increases. The high contrast which is seen in WBDF images makes this technique especially useful in comparison to BF and DF since it provides more precise defect analysis. This can help to qualitatively and quantitatively analyze stacking faults, Burgers vectors, and even 3D reconstruction of dislocation networks in a specimen. Advantages and limitations of WBDF. The main advantage of using WBDF is the ability to acquire high-contrast images of imperfect specimens for the purpose of studying defects in the material. This technique is not overly complex in its setup and can provide additional quantitative data for imaging a sample via TEM. Some examples of this include higher contrast images of thickness fringes, strain fields, and dissociated dislocations. For example, Feng et al. were able to use WBDF to show waveform contours in a barium titanate sample which demonstrated that strain contours were dependent on the stress state. The bright field images did not clearly show the strain field, while the WBDF was able to clearly show a pattern indicative of strain. In another example, Rakhmonov et al. used WBDF to study how dislocations interact with precipitates in an Al-Cu-Mn-Zr alloy crept at 300 °C, and they observed Orowan loops around precipitates. The advantage of higher contrast thickness fringes is made possible by a large formula_6 value which in turn makes formula_18 be smaller and affects thickness periodicity. This is shown by the equation formula_19 which comes from the weak beam approximation. When the extinction distance is larger, it increases the fringe separation and fringe width which thus increases the contrast that can be seen in thickness fringes and strain contrast. 
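The combined effect of a large excitation error on the background intensity and on the effective extinction distance can be made concrete with the kinematical expressions quoted in the theory section. The thickness and extinction distance used below are illustrative values chosen for the sketch, not data for any particular material.

import math

def diffracted_intensity(t, xi_g, s):
    # |Phi_g|^2 = (pi*t/xi_g)^2 * sin^2(pi*t*s_eff) / (pi*t*s_eff)^2, with
    # s_eff = sqrt(s^2 + 1/xi_g^2); t and xi_g in nm, s in 1/nm
    s_eff = math.sqrt(s**2 + 1.0 / xi_g**2)
    x = math.pi * t * s_eff
    return (math.pi * t / xi_g) ** 2 * (math.sin(x) / x) ** 2

t, xi_g = 120.0, 50.0
for s in (0.0, 0.02, 0.2):   # 0.2 nm^-1 is the WBDF regime mentioned above
    s_eff = math.sqrt(s**2 + 1.0 / xi_g**2)
    print(f"s={s:4.2f}  s_eff={s_eff:.3f}  xi_eff=1/s_eff={1/s_eff:5.1f} nm  "
          f"envelope={(1/(xi_g*s_eff))**2:.4f}  I={diffracted_intensity(t, xi_g, s):.4f}")

The envelope term, which bounds the oscillating intensity, collapses by roughly two orders of magnitude across the three rows, and the effective extinction distance 1/s_eff shrinks from 50 nm to about 5 nm: the background goes dark while thickness and strain fringes become finer, which is the contrast behaviour described above.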
Some limitations to WBDF are related to setup conditions, projection errors, and hardware limitations. For setup conditions, it is nontrivial to select the tilt, and therefore the excitation error, that is required for an optimal WBDF image. The 3g condition, in which the sample is tilted to make the 3g diffraction spot have a higher intensity, is a rule of thumb for attaining an image, but is not always true. The lattice parameter of the sample and the wavelength of electrons used can have an effect on the optimized value of s. A smaller value of s can be used to still attain defect information, but determining the tilt angle can take time and this can damage the defect structure of interest. A projection error is found in every WBDF image because the image of any defect is projected in the direction of the k-vector of the diffracted wave. This projection can change depending on the starting parameters used to form the image, and thus the analysis of the defect lines is not straightforward. Finally, there are limitations to the technique based on current instrument limitations such as the CCD cameras used, energy filtering of the electron source, and image processing which helps to rid noise from the image. Examples of applications that use WBDF. Burgers Vector Determination in Perovskites. One example in literature of the utilization of WBDF microscopy is to quantitatively determine the direction of Burgers vectors for the purpose of characterizing dislocation types. In this case, the authors were able to determine screw and edge dislocations in a perovskite sample by imaging down multiple zone axes and calculating the Burgers vectors by counting the number of thickness fringes which terminate within the sample as opposed to at the edge of the sample. The high contrast from WBDF allows for easier determination of where the terminating edges are located. Reconstruct three-dimensional dislocation array. The three-dimensional structure of dislocation arrays in GaN are able to be reconstructed by combining the weak beam dark field technique with tomography by Barnard, et al. The hetero-epitaxial GaN grown on sapphire with high dislocation along [0001] was used. The WBDF images were taken from  5° tilt to 120° tilt at constant excitation error, magnification, and rotation. Using back projections and sequentially iterated reconstruction technique, the reconstructed tomographic volume was achieved. The reconstructed volume is able to show threading dislocations, in-plane dislocation, and dislocation interactions. Imaging of superdislocations and dislocation dynamics. The advantages of WBDF are utilized to resolve dislocation dynamics in Fe2MnAl single crystals where superdislocations with 4-fold dissociation were imaged. The movement of a superdislocation at the nanometer scale can be seen in the image on the right. In the paper, Liao et al. show that the superdislocations glide in segments. The dislocations with screw character were shown to move in a “locking and unlocking” manner dependent on the pinning of the dislocation. The ability to see how dislocations move in a solid is fundamentally important to materials science and understanding yield stress anomaly of intermetallic compounds. Future Directions for WBDF. The technique of WBDF can be further improved by TEM instrument advancements for the electron source and image processing. 
More specifically, field emission guns (FEGs) and the reduction of energy variation in the electron source can help to get even higher contrast images with higher resolution. Also, improvements to image detectors can help to reduce noise in the image which helps with quantitative analysis of defects in materials. These advancements would be especially helpful because of the weak beam condition. Improvements to contrast can also help with further analysis via high-throughput analysis of defects via computer modeling. This has been previously seen in literature which uses STEM images to train computers to find defects in a material that have high contrast and are more easily processed by the program. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
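The invisibility criterion from the theory section (a dislocation gives no contrast when g · b = 0) is what makes Burgers-vector analysis with different reflections possible. The toy Python sketch below only illustrates that bookkeeping: the candidate Burgers vectors and the two reflections are made-up values, not data from the studies cited above.

def g_dot_b_invisible(g, b, tol=1e-9):
    # Out of contrast when the scalar product g.b vanishes
    return abs(sum(gi * bi for gi, bi in zip(g, b))) < tol

candidates = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (1, -1, 0)]
g1, g2 = (0, 0, 2), (1, 1, 0)   # two reflections for which the defect shows no contrast
for b in candidates:
    print(b, g_dot_b_invisible(g1, b) and g_dot_b_invisible(g2, b))
# Only (1, -1, 0) is invisible for both reflections, so the analysis singles it
# out (up to sign and scale) as the Burgers vector direction.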
[ { "math_id": 0, "text": "s_z" }, { "math_id": 1, "text": "s_z\n" }, { "math_id": 2, "text": "s = 0" }, { "math_id": 3, "text": "{\\left\\vert \\Phi_g \\right\\vert}^2 = (\\tfrac{\\Pi t}{\\xi_g})^2 \\cdot \\tfrac{sin^2{(\\Pi t s_{eff}})}{(\\Pi t s_{eff})^2}" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "\\xi_g" }, { "math_id": 6, "text": "s_{eff}" }, { "math_id": 7, "text": "s_{eff} = \\sqrt{s^2 + \\tfrac{1}{\\xi_g^2}}" }, { "math_id": 8, "text": "nm^{-1}" }, { "math_id": 9, "text": "s >> \\xi_g" }, { "math_id": 10, "text": "s = s_{eff}" }, { "math_id": 11, "text": "g \\cdot b=0" }, { "math_id": 12, "text": "s_R" }, { "math_id": 13, "text": "s_R = s_g + g \\cdot {dR \\over dz} = 0" }, { "math_id": 14, "text": "\\int_{column}^{} e^{-2 \\pi i (s_g z + g \\cdot R)} dz" }, { "math_id": 15, "text": "\\Delta x" }, { "math_id": 16, "text": "\\Delta x = \\tfrac{1}{\\pi s_g} \\cdot \\tfrac{\\xi_{eff}}{3}" }, { "math_id": 17, "text": "s_g" }, { "math_id": 18, "text": "\\xi_{eff}" }, { "math_id": 19, "text": "\\xi_{eff} = \\tfrac{1}{s_{eff}}" } ]
https://en.wikipedia.org/wiki?curid=73473067
7347446
Robocopy
Windows command-line component specialized in file transfer Robocopy is a command-line file transfer utility for Microsoft Windows. Robocopy is functionally more comprehensive than the COPY command and XCOPY, but replaces neither. Created by Kevin Allen and first released as part of the Windows NT 4.0 Resource Kit, it has been a standard feature of Windows since Windows Vista and Windows Server 2008. Features. Robocopy provides features not found in the built-in Windows COPY and XCOPY commands, including the following: Compression. Since Windows Server 2019 and Windows 10, Robocopy supports SMB compression for transferring files across a network. If the codice_3 is specified, the destination computer supports SMB compression, and the files being copied are compressible, the operation enjoys significant performance improvements. The SMB compression adds inline whitespace compression to file transfers. Compression is also available with the codice_4 command and Hyper-V live migration with SMB. Examples of use. Here are some examples of usage, which is not case-sensitive. If more than one option is specified, they must be separated by spaces. Copy directory contents of the source to the destination (including file data, attributes and timestamps), recursively with empty directories (codice_5): Robocopy "C:\Directory A" "C:\Directory B" /E If directory names have non-standard characters, such as spaces, they must be enclosed in double quotes, as is usual in the command line. Copy directory recursively (codice_5), copy all file information (codice_7, equivalent to codice_8, codice_9=Data, codice_10=Attributes, codice_11=Timestamps, codice_12=Security=NTFS ACLs, codice_13=Owner info, codice_14=Auditing info), do not retry locked files (codice_15) (the number of retries on failed copies default value is 1 million), preserve original directories' Timestamps (codice_16 - requires version XP026 or later): Robocopy C:\A C:\B /COPYALL /E /R:0 /DCOPY:T Mirror A to B, destroying any files in B that are not present in A (codice_17), copy files in resume mode (codice_18) in case network connection is lost: Robocopy C:\A \\backupserver\B /MIR /Z For the full reference, see the Microsoft TechNet Robocopy page. Syntactic focus on copying folders. Robocopy syntax is markedly different from its predecessors (copy and xcopy), in that it accepts only folder names, without trailing backslash, as its source and destination arguments. File names and wildcard characters (such as codice_19 and codice_20) are not valid as source or destination arguments; files may be selected or excluded using the optional "file" filtering argument (which supports wildcards) along with various other options. For example, to copy two files from folder codice_21 to codice_22, the following syntax is used: robocopy c:\bar c:\baz file1.txt file2.db And to copy all PDF files from codice_21 to codice_22: robocopy c:\bar c:\baz *.pdf The files named are copied only from the folder selected for copying; fully qualified path names are not supported. CAUTION: A long-standing issue with Robocopy means that if you back up from the root folder of a drive [ e.g., ], the destination files will be given attributes including SH. This means that they will be invisible to normal access (including DIR in cmd.exe). To fix this, add to the robocopy command line - or do an ATTRIB command to remove them afterwards. Output. Robocopy outputs to the screen, or optionally to a log file, the names of all the directories it encounters, in alphabetical order. 
Each name is preceded by the number of files in the directory that fulfill the criteria for being copied. If the directory does not yet exist in the target, it is marked "New Dir"; if the directory is empty and the /E option is not used, or it contains no files meeting the criteria, a new directory will not be created. If the /NFL (no file names in log) option is not used, the files being copied will be listed after the name of the directory they are in. At the end of the output is a table giving numbers of directories, files, and bytes. For each of these, the table gives the total number found in the source, the number "copied" (including directories marked "New Dir" even if they are not copied), the number "skipped" (because they already exist in the target), and the number of "mismatches", "FAILED", and "extras". "Failed" can mean that there was an I/O error that prevented a file being copied, or that access was denied. There is also a row of time taken (in which the time spent on failed files seems to be in the wrong column). Bandwidth throttling. Robocopy's "inter-packet gap" (IPG) option allows some control over the network bandwidth used in a session. In theory, the following formula expresses the delay (D, in milliseconds) required to simulate a desired bandwidth (BD, in kilobits per second), over a network link with an available bandwidth of BA kbps: formula_0 In practice however, some experimentation is usually required to find a suitable delay, due to factors such as the nature and volume of other traffic on the network. The methodology employed by the IPG option may not offer the same level of control provided by some other bandwidth throttling technologies, such as BITS (which is used by Windows Update and BranchCache). GUI. Although Robocopy itself is a command-line tool, Microsoft TechNet provided a GUI front-end called Robocopy GUI. It was developed by Derk Benisch, a systems engineer with the MSN Search group at Microsoft, and required .NET Framework 2.0. It included a copy of Robocopy version XP026. It is no longer available from Microsoft, but may be downloaded from the Internet Archive's Wayback Machine. There are non-Microsoft GUIs for Robocopy: Ken Tamaru of Microsoft developed a copying program with functionality similar to Robocopy, called RichCopy, this was discontinued in 2010. It is not based on Robocopy, and does not require .NET Framework. Versions. All versions of Robocopy store their version number and release date in their executable file header, viewable with File Explorer or PowerShell. Some of them (not all) report their version numbers in their textual output. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
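Returning to the bandwidth-throttling formula given earlier, a worked example helps show the scale of the delays involved. The figures below are illustrative, and, as noted above, real-world values usually need some experimentation; the helper function is ours.

def ipg_delay_ms(available_kbps, desired_kbps):
    # D = (BA - BD) / (BA * BD) * 512 * 1000, bandwidths in kbit/s, result in ms
    return (available_kbps - desired_kbps) / (available_kbps * desired_kbps) * 512 * 1000

print(ipg_delay_ms(100_000, 10_000))   # ~46.08 ms to slow a 100 Mbit/s link to about 10 Mbit/s

A value in that range would then be passed to the inter-packet gap option (for example /IPG:46) and adjusted empirically.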
[ { "math_id": 0, "text": "D = {B_A - B_D \\over B_A \\times B_D} \\times 512 \\times 1000" } ]
https://en.wikipedia.org/wiki?curid=7347446
73475847
Vietoris–Rips filtration
In topological data analysis, the Vietoris–Rips filtration (sometimes shortened to "Rips filtration") is the collection of nested Vietoris–Rips complexes on a metric space created by taking the sequence of Vietoris–Rips complexes over an increasing scale parameter. Often, the Vietoris–Rips filtration is used to create a discrete, simplicial model on point cloud data embedded in an ambient metric space. The Vietoris–Rips filtration is a multiscale extension of the Vietoris–Rips complex that enables researchers to detect and track the persistence of topological features, over a range of parameters, by way of computing the persistent homology of the entire filtration. It is named after Leopold Vietoris and Eliyahu Rips. Definition. The Vietoris–Rips filtration is the nested collection of Vietoris–Rips complexes indexed by an increasing scale parameter. The Vietoris–Rips complex is a classical construction in mathematics that dates back to a 1927 paper of Leopold Vietoris, though it was independently considered by Eliyahu Rips in the study of hyperbolic groups, as noted by Mikhail Gromov in the 1980s. The conjoined name "Vietoris–Rips" is due to Jean-Claude Hausmann. Given a metric space formula_0 and a scale parameter (sometimes called the "threshold" or "distance parameter") formula_1, the "Vietoris–Rips complex" (with respect to formula_2) is defined as formula_3, where formula_4 is the "diameter", i.e. the maximum distance of points lying in formula_5. Observe that if formula_6, there is a simplicial inclusion map formula_7 . The Vietoris–Rips filtration is the nested collection of complexes formula_8 : formula_9 If the non-negative real numbers formula_10 are viewed as a posetal category via the formula_11 relation, then the Vietoris–Rips filtration can be viewed as a functor formula_12 valued in the category of simplicial complexes and simplicial maps, where the morphisms (i.e., relations in the poset) in the source category induce inclusion maps among the complexes. Note that the category of simplicial complexes may be viewed as a subcategory of formula_13, the category of topological spaces, by post-composing with the geometric realization functor. Properties. The "size" of a filtration refers to the number of simplices in the largest complex, assuming the underlying metric space is finite. The formula_14-skeleton, i.e., the number of simplices up to dimension formula_14, of the Vietoris–Rips filtration is known to be formula_15, where formula_16 is the number of points. The size of the complete skeleton has precisely formula_17 simplices, one for each non-empty subset of points. Since this is exponential, researchers usually only compute the skeleton of the Vietoris–Rips filtration up to small values of formula_14. When the underlying metric space is finite, the Vietoris–Rips filtration is sometimes referred to as "essentially discrete", meaning that there exists some "terminal" or "maximum" scale parameter formula_18 such that formula_19 for all formula_20, and furthermore that the inclusion map formula_21 is an isomorphism for all but finitely many parameters formula_22. In other words, when the underlying metric space is finite, the Vietoris–Rips filtration has a largest complex, and the complex changes at only a finite number of steps. 
The latter implies that the Vietoris–Rips filtration on a finite metric space can be considered as indexed over a discrete set such as formula_23, by restricting the filtration to the scale parameters at which the filtration changes, then relabeling the complexes using the natural numbers. An explicit bound can also be given for the number of steps at which the Vietoris–Rips filtration changes. The Vietoris–Rips complex is a "clique complex", meaning it is entirely determined by its 1-skeleton. Therefore the number of steps at which the Vietoris–Rips filtration changes is bounded by the number of edges in the largest complex. The number of edges in the largest complex is formula_24, since every pair of the formula_16 vertices is eventually joined by an edge. Therefore the Vietoris–Rips filtration changes at formula_25 steps, where formula_26 denotes an asymptotic upper bound. For points in Euclidean space, the Vietoris–Rips filtration is an approximation to the Čech filtration, in the sense of the interleaving distance. This follows from the fact that for any scale parameter formula_27, the Vietoris–Rips and Čech complexes on a finite set formula_0 of points in Euclidean space satisfy the inclusion relationship formula_28, which is sometimes referred to as the "Vietoris–Rips Lemma." In general metric spaces, a straightforward application of the triangle inequality shows that formula_29 for any scale parameter formula_27. Variants. Approximations. Since the Vietoris–Rips filtration has an exponential number of simplices in its complete skeleton, a significant amount of research has been done on approximating the persistent homology of the Vietoris–Rips filtration using constructions of smaller size. The first work in this direction was published by computer scientist Donald Sheehy in 2012, who showed how to construct a filtration of formula_30 size in formula_31 time that approximates the persistent homology of the Vietoris–Rips filtration to a desired margin of error. This type of filtration is known as a "sparse Vietoris–Rips filtration", since it removes points from the standard Vietoris–Rips filtration using ideas from computational geometry related to geometric spanners. Since then, there have been several more efficient methods developed for approximating the Vietoris–Rips filtration, mostly using the ideas of Sheehy, but also building upon approximation schemes developed for the Čech and Delaunay filtrations. Multiparameter extensions. It is known that persistent homology can be sensitive to outliers in the underlying data set. To remedy this, in 2009 Gunnar Carlsson and Afra Zomorodian proposed a multidimensional version of persistence that considers filtrations with respect to multiple parameters, such as scale and density. To that end, several multiparameter extensions of the Vietoris–Rips filtration have been developed.
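To make the construction and the size bounds above concrete, the following illustrative C++ sketch (not taken from the cited works) builds the 1-skeleton of a Vietoris–Rips filtration on a hypothetical planar point cloud: each edge enters the filtration at the distance between its endpoints, and sorting these pairwise distances lists the finitely many scales at which the filtration can change.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Edge {i, j} of the Vietoris-Rips 1-skeleton appears at scale r = d(p_i, p_j).
struct Edge { int i, j; double scale; };

int main() {
    // Hypothetical point cloud in the plane (not taken from the article).
    std::vector<std::pair<double, double>> pts = {{0, 0}, {1, 0}, {0, 1}, {2, 2}};
    std::vector<Edge> edges;
    for (int i = 0; i < (int)pts.size(); ++i)
        for (int j = i + 1; j < (int)pts.size(); ++j) {
            double dx = pts[i].first - pts[j].first;
            double dy = pts[i].second - pts[j].second;
            edges.push_back({i, j, std::sqrt(dx * dx + dy * dy)});
        }
    // The sorted pairwise distances are the at most n(n-1)/2 scales at which
    // the filtration can change, matching the bound discussed above.
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.scale < b.scale; });
    for (const Edge& e : edges)
        std::cout << "edge {" << e.i << "," << e.j << "} appears at r = " << e.scale << "\n";
    return 0;
}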
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "r \\in [0, \\infty)" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "\\mathbf{VR}_r(X) = \\{ \\emptyset \\neq S \\subseteq X \\mid S \\text{ finite}; \\operatorname{diam} S \\leq r \\}" }, { "math_id": 4, "text": "\\operatorname{diam} S" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "r \\leq s \\in [0, \\infty)" }, { "math_id": 7, "text": "\\mathbf{VR}_r(X) \\hookrightarrow \\mathbf{VR}_s(X) " }, { "math_id": 8, "text": "\\mathbf{VR}_r(X)" }, { "math_id": 9, "text": "\\mathbf{VR}(X) = \\{ \\mathbf{VR}_r (X) \\}_{r \\in [0, \\infty)}" }, { "math_id": 10, "text": "[0, \\infty)" }, { "math_id": 11, "text": "\\leq" }, { "math_id": 12, "text": "\\mathbf{VR}(X) : [0, \\infty) \\to \\mathbf{Simp}" }, { "math_id": 13, "text": "\\mathbf{Top}" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "O\\left(n^{k+1}\\right)" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "2^n - 1" }, { "math_id": 18, "text": "r_{\\text{max}} \\in [0, \\infty)" }, { "math_id": 19, "text": "\\mathbf{VR}_s(X) = \\mathbf{VR}_{r_{\\max}}(X)" }, { "math_id": 20, "text": "s \\geq r_{\\max}" }, { "math_id": 21, "text": "\\mathbf{VR}_{s\\to t}(X): \\mathbf{VR}_s(X) \\hookrightarrow \\mathbf{VR}_{t}(X)" }, { "math_id": 22, "text": "s \\leq t" }, { "math_id": 23, "text": "\\mathbb N" }, { "math_id": 24, "text": "{n \\choose 2} = n(n-1)/2" }, { "math_id": 25, "text": "O(n^2)" }, { "math_id": 26, "text": "O(-)" }, { "math_id": 27, "text": "\\alpha" }, { "math_id": 28, "text": "\\mathbf{VR}_\\alpha (X) \\subseteq \\operatorname{\\check{C}ech}_{\\sqrt 2\\alpha}(X) \\subseteq \\mathbf{VR}_{\\sqrt 2\\alpha}(X)" }, { "math_id": 29, "text": "\\mathbf{VR}_\\alpha (X) \\subseteq \\operatorname{\\check{C}ech}_{2\\alpha}(X) \\subseteq \\mathbf{VR}_{2\\alpha}(X)" }, { "math_id": 30, "text": "O(n)" }, { "math_id": 31, "text": "O(n \\log n)" }, { "math_id": 32, "text": "a \\in [0, \\infty)" }, { "math_id": 33, "text": "a" }, { "math_id": 34, "text": "(a,r) \\in \\mathbb R^\\operatorname{op} \\times \\mathbb R" }, { "math_id": 35, "text": "\\gamma: X \\to \\mathbb R" }, { "math_id": 36, "text": "\\gamma" }, { "math_id": 37, "text": "\\mathbf{F}\\text{-}\\mathbf{VR}_{a,r}(\\gamma) = \\mathbf{VR}_r (\\gamma^{-1}[a, \\infty))" }, { "math_id": 38, "text": "\\mathbb R^\\operatorname{op} \\times [0, \\infty)" }, { "math_id": 39, "text": "\\sigma_0 \\subset \\cdots\\subset\\sigma_m" }, { "math_id": 40, "text": "[0, \\infty)^{\\operatorname{op}} \\times [0, \\infty)" } ]
https://en.wikipedia.org/wiki?curid=73475847
734787
Automatic differentiation
Numerical calculations carrying along derivatives In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation or computational differentiation, is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program. Difference from other differentiation methods. Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to "many" inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems. Applications. Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually-computed derivative. Forward and reverse accumulation. Chain rule of partial derivatives of composite functions. Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition formula_0 the chain rule gives formula_1 Two types of automatic differentiation. Usually, two distinct modes of automatic differentiation are presented. Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute formula_2 and then formula_3 and at last formula_4), while reverse accumulation has the traversal from outside to inside (first compute formula_4 and then formula_3 and at last formula_5). More succinctly, forward accumulation computes the recursive relation formula_6 with formula_7, whereas reverse accumulation computes the recursive relation formula_8 with formula_9. The value of the partial derivative, called "seed", is propagated forward or backward and is initially formula_10 or formula_11. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable formula_12 a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (formula_13) and those of all others to zero (formula_14). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass. Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation. Forward accumulation was introduced by R.E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976. Forward accumulation. In forward accumulation AD, one first fixes the "independent variable" with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the "inner" functions in the chain rule: formula_15 This can be generalized to multiple variables as a matrix product of Jacobians. Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable formula_16 is augmented with its derivative formula_17 (stored as a numerical value, not a symbolic expression), formula_18 as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. Using the chain rule, if formula_16 has predecessors in the computational graph: formula_19 As an example, consider the function: formula_20 For clarity, the individual sub-expressions have been labeled with the variables formula_16. The choice of the independent variable with respect to which differentiation is performed affects the "seed" values "ẇ"1 and "ẇ"2. Given interest in the derivative of this function with respect to "x"1, the seed values should be set to: formula_21 With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph. To compute the gradient of this example function, which requires not only formula_22 but also formula_23, an "additional" sweep is performed over the computational graph using the seed values formula_24. Implementation. Pseudocode. Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression "Z" to be derived with regard to a variable "V". The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated.
tuple<float, float> evaluateAndDerive(Expression Z, Variable V) {
   if isVariable(Z)
      if (Z = V) return {valueOf(Z), 1};
      else return {valueOf(Z), 0};
   else if (Z = A + B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a + b, a' + b'};
   else if (Z = A - B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a - b, a' - b'};
   else if (Z = A * B)
      {a, a'} = evaluateAndDerive(A, V);
      {b, b'} = evaluateAndDerive(B, V);
      return {a * b, b * a' + a * b'};
}
C++.
#include <iostream>

struct ValueAndPartial { float value, partial; };
struct Variable;
struct Expression {
   virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};
struct Variable: public Expression {
   float value;
   Variable(float value): value(value) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      // Seed: 1 for the variable we differentiate with respect to, 0 otherwise.
      float partial = (this == variable) ? 1.0f : 0.0f;
      return {value, partial};
   }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      return {valueA + valueB, partialA + partialB};
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      auto [valueA, partialA] = a->evaluateAndDerive(variable);
      auto [valueB, partialB] = b->evaluateAndDerive(variable);
      // Product rule: (ab)' = b a' + a b'
      return {valueA * valueB, valueB * partialA + valueA * partialB};
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y);
   Multiply m1(&x, &p1);
   Multiply m2(&y, &y);
   Plus z(&m1, &m2);
   float xPartial = z.evaluateAndDerive(&x).partial;
   float yPartial = z.evaluateAndDerive(&y).partial;
   std::cout << "∂z/∂x = " << xPartial << ", "
             << "∂z/∂y = " << yPartial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
Reverse accumulation. In reverse accumulation AD, the "dependent variable" to be differentiated is fixed and the derivative is computed "with respect to" each sub-expression recursively. In a pen-and-paper calculation, the derivative of the "outer" functions is repeatedly substituted in the chain rule: formula_25 In reverse accumulation, the quantity of interest is the "adjoint", denoted with a bar formula_26; it is a derivative of a chosen dependent variable with respect to a subexpression formula_16: formula_27 Using the chain rule, if formula_16 has successors in the computational graph: formula_28 Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables "w""i" as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states. The operations to compute the derivative using reverse accumulation are shown below (note the reversed order):
formula_29
formula_30
formula_31
formula_32
formula_33
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal.
For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function "y" = "f"("x") in the primal causes "x̄" = "ȳ" "f"′("x") in the adjoint; etc. Implementation. Pseudo code. Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression "Z" to be derived and seeded with the derived value of the parent expression. For the top expression, Z derived with regard to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current "seed" value to the derivative expression.
void derive(Expression Z, float seed) {
   if isVariable(Z)
      partialDerivativeOf(Z) += seed;
   else if (Z = A + B)
      derive(A, seed);
      derive(B, seed);
   else if (Z = A - B)
      derive(A, seed);
      derive(B, -seed);
   else if (Z = A * B)
      derive(A, valueOf(B) * seed);
      derive(B, valueOf(A) * seed);
}
C++.
#include <iostream>

struct Expression {
   float value;
   virtual void evaluate() = 0;
   virtual void derive(float seed) = 0;
};
struct Variable: public Expression {
   float partial;
   Variable(float value) {
      this->value = value;
      partial = 0.0f;
   }
   // A variable is a leaf: its value is already set, so evaluation is a no-op.
   void evaluate() {}
   void derive(float seed) { partial += seed; }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() {
      a->evaluate();
      b->evaluate();
      value = a->value + b->value;
   }
   void derive(float seed) {
      // d(a+b)/da = d(a+b)/db = 1: pass the seed through unchanged.
      a->derive(seed);
      b->derive(seed);
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() {
      a->evaluate();
      b->evaluate();
      value = a->value * b->value;
   }
   void derive(float seed) {
      // d(ab)/da = b and d(ab)/db = a: scale the seed by the other factor.
      a->derive(b->value * seed);
      b->derive(a->value * seed);
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y);
   Multiply m1(&x, &p1);
   Multiply m2(&y, &y);
   Plus z(&m1, &m2);
   z.evaluate();
   std::cout << "z = " << z.value << std::endl;
   // Output: z = 19
   z.derive(1);
   std::cout << "∂z/∂x = " << x.partial << ", "
             << "∂z/∂y = " << y.partial << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
Beyond forward and reverse accumulation. Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of "f" : R"n" → R"m" with a minimum number of arithmetic operations is known as the "optimal Jacobian accumulation" (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent. Automatic differentiation using dual numbers. Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. Replace every number formula_34 with the number formula_35, where formula_36 is a real number, but formula_37 is an abstract number with the property formula_38 (an infinitesimal; see "Smooth infinitesimal analysis").
Using only this, regular arithmetic gives formula_39 using formula_40. Now, polynomials can be calculated in this augmented arithmetic. If formula_41, then formula_42 where formula_43 denotes the derivative of formula_44 with respect to its first argument, and formula_36, called a "seed", can be chosen arbitrarily. The new arithmetic consists of ordered pairs, elements written formula_45, with ordinary arithmetic on the first component, and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic: formula_46 and in general for the primitive function formula_47, formula_48 where formula_49 and formula_50 are the derivatives of formula_47 with respect to its first and second arguments, respectively. When a binary basic arithmetic operation is applied to mixed arguments—the pair formula_51 and the real number formula_52—the real number is first lifted to formula_53. The derivative of a function formula_54 at the point formula_55 is now found by calculating formula_56 using the above arithmetic, which gives formula_57 as the result. Implementation. An example implementation based on the dual number approach follows. C++.
#include <iostream>

struct Dual {
   float realPart, infinitesimalPart;
   // The infinitesimal part defaults to zero, which lifts ordinary constants
   // to dual numbers.
   Dual(float realPart, float infinitesimalPart = 0.0f)
      : realPart(realPart), infinitesimalPart(infinitesimalPart) {}
   Dual operator+(Dual other) {
      return Dual(
         realPart + other.realPart,
         infinitesimalPart + other.infinitesimalPart);
   }
   Dual operator*(Dual other) {
      // (a + a'ε)(b + b'ε) = ab + (a'b + ab')ε, since ε² = 0.
      return Dual(
         realPart * other.realPart,
         other.realPart * infinitesimalPart + realPart * other.infinitesimalPart);
   }
};
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }
int main () {
   Dual x = Dual(2);
   Dual y = Dual(3);
   Dual epsilon = Dual(0, 1);
   Dual a = f(x + epsilon, y);
   Dual b = f(x, y + epsilon);
   std::cout << "∂z/∂x = " << a.infinitesimalPart << ", "
             << "∂z/∂y = " << b.infinitesimalPart << std::endl;
   // Output: ∂z/∂x = 7, ∂z/∂y = 8
   return 0;
}
Vector arguments and functions. Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute formula_58, the directional derivative formula_59 of formula_60 at formula_61 in the direction formula_62 may be calculated as formula_63 using the same arithmetic as above. If all the elements of formula_64 are desired, then formula_65 function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient. High order and many variables. The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted. Implementation. Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers.
This nonstandard interpretation is generally implemented using one of two strategies: "source code transformation" or "operator overloading". Source code transformation (SCT). The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions. Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex. Operator overloading (OO). Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance. Operator overloading and source code transformation. Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD version of the primal function at run-time. Unlike the classic OO AAD, such an AD function does not change from one iteration to the next. Hence there is no OO or tape-interpretation run-time overhead per Xi sample. With the AD function being generated at runtime, it can be optimised to take into account the current state of the program and precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process 4(8)-double chunks of user data (AVX2\AVX512 speed up x4-x8). With multithreading taken into account, such an approach can lead to a final acceleration of order 8 × #Cores compared to the traditional AAD tools. A reference implementation is available on GitHub. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\begin{align}\ny &= f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3 \\\\\nw_0 &= x \\\\ \nw_1 &= h(w_0) \\\\\nw_2 &= g(w_1) \\\\\nw_3 &= f(w_2) = y\n\\end{align}" }, { "math_id": 1, "text": "\\frac{\\partial y}{\\partial x} = \\frac{\\partial y}{\\partial w_2} \\frac{\\partial w_2}{\\partial w_1} \\frac{\\partial w_1}{\\partial x} = \\frac{\\partial f(w_2)}{\\partial w_2} \\frac{\\partial g(w_1)}{\\partial w_1} \\frac{\\partial h(w_0)}{\\partial x}" }, { "math_id": 2, "text": "\\partial w_1/ \\partial x" }, { "math_id": 3, "text": "\\partial w_2/\\partial w_1" }, { "math_id": 4, "text": "\\partial y/\\partial w_2" }, { "math_id": 5, "text": "\\partial w_1/\\partial x" }, { "math_id": 6, "text": "\\frac{\\partial w_i}{\\partial x} = \\frac{\\partial w_i}{\\partial w_{i-1}} \\frac{\\partial w_{i-1}}{\\partial x}" }, { "math_id": 7, "text": "w_3 = y" }, { "math_id": 8, "text": "\\frac{\\partial y}{\\partial w_i} = \\frac{\\partial y}{\\partial w_{i+1}} \\frac{\\partial w_{i+1}}{\\partial w_{i}}" }, { "math_id": 9, "text": "w_0 = x" }, { "math_id": 10, "text": "\\frac{\\partial x}{\\partial x}=1" }, { "math_id": 11, "text": "\\frac{\\partial y}{\\partial y}=1" }, { "math_id": 12, "text": "x_1,x_2,\\dots,x_n" }, { "math_id": 13, "text": "\\frac{\\partial x_1}{\\partial x_1}=1" }, { "math_id": 14, "text": "\\frac{\\partial x_2}{\\partial x_1}= \\dots = \\frac{\\partial x_n}{\\partial x_1} = 0" }, { "math_id": 15, "text": "\\begin{align}\n\\frac{\\partial y}{\\partial x}\n&= \\frac{\\partial y}{\\partial w_{n-1}} \\frac{\\partial w_{n-1}}{\\partial x} \\\\[6pt]\n&= \\frac{\\partial y}{\\partial w_{n-1}} \\left(\\frac{\\partial w_{n-1}}{\\partial w_{n-2}} \\frac{\\partial w_{n-2}}{\\partial x}\\right) \\\\[6pt]\n&= \\frac{\\partial y}{\\partial w_{n-1}} \\left(\\frac{\\partial w_{n-1}}{\\partial w_{n-2}} \\left(\\frac{\\partial w_{n-2}}{\\partial w_{n-3}} \\frac{\\partial w_{n-3}}{\\partial x}\\right)\\right) \\\\[6pt]\n&= \\cdots\n\\end{align}" }, { "math_id": 16, "text": "w_i" }, { "math_id": 17, "text": "\\dot w_i" }, { "math_id": 18, "text": "\\dot w_i = \\frac{\\partial w_i}{\\partial x}" }, { "math_id": 19, "text": "\\dot w_i = \\sum_{j \\in \\{\\text{predecessors of i}\\}} \\frac{\\partial w_i}{\\partial w_j} \\dot w_j" }, { "math_id": 20, "text": "\\begin{align}\ny\n&= f(x_1, x_2) \\\\\n&= x_1 x_2 + \\sin x_1 \\\\\n&= w_1 w_2 + \\sin w_1 \\\\\n&= w_3 + w_4 \\\\\n&= w_5\n\\end{align}" }, { "math_id": 21, "text": "\\begin{align}\n\\dot w_1 = \\frac{\\partial w_1}{\\partial x_1} = \\frac{\\partial x_1}{\\partial x_1} = 1 \\\\\n\\dot w_2 = \\frac{\\partial w_2}{\\partial x_1} = \\frac{\\partial x_2}{\\partial x_1} = 0\n\\end{align}" }, { "math_id": 22, "text": "\\tfrac{\\partial y}{\\partial x_1}" }, { "math_id": 23, "text": "\\tfrac{\\partial y}{\\partial x_2}" }, { "math_id": 24, "text": "\\dot w_1 = 0; \\dot w_2 = 1" }, { "math_id": 25, "text": "\\begin{align}\n\\frac{\\partial y}{\\partial x}\n&= \\frac{\\partial y}{\\partial w_1} \\frac{\\partial w_1}{\\partial x}\\\\\n&= \\left(\\frac{\\partial y}{\\partial w_2} \\frac{\\partial w_2}{\\partial w_1}\\right) \\frac{\\partial w_1}{\\partial x}\\\\\n&= \\left(\\left(\\frac{\\partial y}{\\partial w_3} \\frac{\\partial w_3}{\\partial w_2}\\right) \\frac{\\partial w_2}{\\partial w_1}\\right) \\frac{\\partial w_1}{\\partial x}\\\\\n&= \\cdots\n\\end{align}" }, { "math_id": 26, "text": "\\bar w_i" }, { "math_id": 27, "text": "\\bar w_i = \\frac{\\partial y}{\\partial w_i}" }, { "math_id": 28, "text": "\\bar w_i = 
\\sum_{j \\in \\{\\text{successors of i}\\}} \\bar w_j \\frac{\\partial w_j}{\\partial w_i}" }, { "math_id": 29, "text": "\\bar w_5 = 1 \\text{ (seed)}" }, { "math_id": 30, "text": "\\bar w_4 = \\bar w_5 \\cdot 1" }, { "math_id": 31, "text": "\\bar w_3 = \\bar w_5 \\cdot 1" }, { "math_id": 32, "text": "\\bar w_2 = \\bar w_3 \\cdot w_1" }, { "math_id": 33, "text": "\\bar w_1 = \\bar w_3 \\cdot w_2 + \\bar w_4 \\cdot \\cos w_1" }, { "math_id": 34, "text": "\\,x" }, { "math_id": 35, "text": "x + x'\\varepsilon" }, { "math_id": 36, "text": "x'" }, { "math_id": 37, "text": "\\varepsilon" }, { "math_id": 38, "text": "\\varepsilon^2=0" }, { "math_id": 39, "text": "\\begin{align}\n (x + x'\\varepsilon) + (y + y'\\varepsilon) &= x + y + (x' + y')\\varepsilon \\\\\n (x + x'\\varepsilon) - (y + y'\\varepsilon) &= x - y + (x' - y')\\varepsilon \\\\\n (x + x'\\varepsilon) \\cdot (y + y'\\varepsilon) &= xy + xy'\\varepsilon + yx'\\varepsilon + x'y'\\varepsilon^2 = xy + (x y' + yx')\\varepsilon \\\\\n (x + x'\\varepsilon) / (y + y'\\varepsilon) &= (x/y + x'\\varepsilon/y) / (1 + y'\\varepsilon/y) = (x/y + x'\\varepsilon/y) \\cdot (1 - y'\\varepsilon/y) = x/y + (x'/y - xy'/y^2)\\varepsilon\n\\end{align}" }, { "math_id": 40, "text": "(1 + y'\\varepsilon/y) \\cdot (1 - y'\\varepsilon/y) = 1" }, { "math_id": 41, "text": "P(x) = p_0 + p_1 x + p_2x^2 + \\cdots + p_n x^n" }, { "math_id": 42, "text": "\\begin{align}\n P(x + x'\\varepsilon) &= p_0 + p_1(x + x'\\varepsilon) + \\cdots + p_n (x + x'\\varepsilon)^n \\\\\n &= p_0 + p_1 x + \\cdots + p_n x^n + p_1x'\\varepsilon + 2p_2xx'\\varepsilon + \\cdots + np_n x^{n-1} x'\\varepsilon \\\\\n &= P(x) + P^{(1)}(x)x'\\varepsilon\n\\end{align}" }, { "math_id": 43, "text": "P^{(1)}" }, { "math_id": 44, "text": "P" }, { "math_id": 45, "text": "\\langle x, x' \\rangle" }, { "math_id": 46, "text": "\\begin{align}\n \\left\\langle u,u'\\right\\rangle + \\left\\langle v,v'\\right\\rangle &= \\left\\langle u + v, u' + v' \\right\\rangle \\\\\n \\left\\langle u,u'\\right\\rangle - \\left\\langle v,v'\\right\\rangle &= \\left\\langle u - v, u' - v' \\right\\rangle \\\\\n \\left\\langle u,u'\\right\\rangle * \\left\\langle v,v'\\right\\rangle &= \\left\\langle u v, u'v + uv' \\right\\rangle \\\\\n \\left\\langle u,u'\\right\\rangle / \\left\\langle v,v'\\right\\rangle &= \\left\\langle \\frac{u}{v}, \\frac{u'v - uv'}{v^2} \\right\\rangle \\quad ( v\\ne 0) \\\\\n \\sin\\left\\langle u,u'\\right\\rangle &= \\left\\langle \\sin(u) , u' \\cos(u) \\right\\rangle \\\\\n \\cos\\left\\langle u,u'\\right\\rangle &= \\left\\langle \\cos(u) , -u' \\sin(u) \\right\\rangle \\\\\n \\exp\\left\\langle u,u'\\right\\rangle &= \\left\\langle \\exp u , u' \\exp u \\right\\rangle \\\\\n \\log\\left\\langle u,u'\\right\\rangle &= \\left\\langle \\log(u) , u'/u \\right\\rangle \\quad (u>0) \\\\\n \\left\\langle u,u'\\right\\rangle^k &= \\left\\langle u^k , u' k u^{k - 1} \\right\\rangle \\quad (u \\ne 0) \\\\\n \\left| \\left\\langle u,u'\\right\\rangle \\right| &= \\left\\langle \\left| u \\right| , u' \\operatorname{sign} u \\right\\rangle \\quad (u \\ne 0)\n\\end{align}" }, { "math_id": 47, "text": "g" }, { "math_id": 48, "text": "g(\\langle u,u' \\rangle , \\langle v,v' \\rangle ) = \\langle g(u,v) , g_u(u,v) u' + g_v(u,v) v' \\rangle" }, { "math_id": 49, "text": "g_u" }, { "math_id": 50, "text": "g_v" }, { "math_id": 51, "text": "\\langle u, u' \\rangle" }, { "math_id": 52, "text": "c" }, { "math_id": 53, "text": "\\langle c, 0 \\rangle" }, { "math_id": 54, "text": "f : \\R\\to\\R" }, { 
"math_id": 55, "text": "x_0" }, { "math_id": 56, "text": "f(\\langle x_0, 1 \\rangle)" }, { "math_id": 57, "text": "\\langle f ( x_0 ) , f' ( x_0 ) \\rangle " }, { "math_id": 58, "text": "y' = \\nabla f(x)\\cdot x'" }, { "math_id": 59, "text": "y' \\in \\R^m" }, { "math_id": 60, "text": "f:\\R^n\\to\\R^m" }, { "math_id": 61, "text": "x \\in \\R^n" }, { "math_id": 62, "text": "x' \\in \\R^n" }, { "math_id": 63, "text": "(\\langle y_1,y'_1\\rangle, \\ldots, \\langle y_m,y'_m\\rangle) = f(\\langle x_1,x'_1\\rangle, \\ldots, \\langle x_n,x'_n\\rangle)" }, { "math_id": 64, "text": "\\nabla f" }, { "math_id": 65, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=734787
73479684
Bell diagonal state
Quantum states of two qubits Bell diagonal states are a class of bipartite qubit states that are frequently used in quantum information and quantum computation theory. Definition. The Bell diagonal state is defined as a probabilistic mixture of Bell states: formula_0 formula_1 formula_2 formula_3 In density operator form, a Bell diagonal state is defined as formula_4 where formula_5 is a probability distribution. Since formula_6, a Bell diagonal state is determined by three real parameters. The maximum probability of a Bell diagonal state is defined as formula_7. Properties.
1. A Bell-diagonal state is separable if all the probabilities are less than or equal to 1/2, i.e., formula_8.
2. Many entanglement measures have simple formulas for entangled Bell-diagonal states:
Relative entropy of entanglement: formula_9, where formula_10 is the binary entropy function.
Entanglement of formation: formula_11, where formula_10 is the binary entropy function.
Negativity: formula_12
Log-negativity: formula_13
3. Any 2-qubit state where the reduced density matrices are maximally mixed, formula_14, is Bell-diagonal in some local basis. Viz., there exist local unitaries formula_15 such that formula_16 is Bell-diagonal. References. <templatestyles src="Reflist/styles.css" />
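To illustrate the formulas listed above (this numerical sketch is not drawn from the cited references), the following C++ code evaluates the stated entanglement measures from the four probabilities; the function names and sample probabilities are chosen for this example only, and the logarithm base is assumed to be 2.

#include <algorithm>
#include <cmath>
#include <iostream>

// Binary entropy h(p) = -p log2 p - (1-p) log2 (1-p), with h(0) = h(1) = 0.
static double binaryEntropy(double p) {
    if (p <= 0.0 || p >= 1.0) return 0.0;
    return -p * std::log2(p) - (1.0 - p) * std::log2(1.0 - p);
}

int main() {
    // Hypothetical probabilities of the four Bell states (must sum to 1).
    double p[4] = {0.7, 0.1, 0.1, 0.1};
    double pmax = *std::max_element(p, p + 4);
    if (pmax <= 0.5) { std::cout << "separable Bell-diagonal state\n"; return 0; }
    // Formulas quoted in the article, valid for entangled Bell-diagonal states.
    double relEntropy = 1.0 - binaryEntropy(pmax);
    double formation  = binaryEntropy(0.5 + std::sqrt(pmax * (1.0 - pmax)));
    double negativity = pmax - 0.5;
    double logNeg     = std::log2(2.0 * pmax); // base-2 logarithm assumed
    std::cout << "S_r = " << relEntropy << ", E_f = " << formation
              << ", N = " << negativity << ", E_N = " << logNeg << "\n";
    return 0;
}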
[ { "math_id": 0, "text": "|\\phi^+\\rangle = \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |0\\rangle_B + |1\\rangle_A \\otimes |1\\rangle_B)" }, { "math_id": 1, "text": "|\\phi^-\\rangle = \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |0\\rangle_B - |1\\rangle_A \\otimes |1\\rangle_B)" }, { "math_id": 2, "text": "|\\psi^+\\rangle = \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |1\\rangle_B + |1\\rangle_A \\otimes |0\\rangle_B)" }, { "math_id": 3, "text": "|\\psi^-\\rangle = \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |1\\rangle_B - |1\\rangle_A \\otimes |0\\rangle_B)" }, { "math_id": 4, "text": "\\varrho^{Bell}=p_1|\\phi^+\\rangle \\langle \\phi^+|+p_2|\\phi^-\\rangle\\langle \\phi^-|+p_3|\\psi^+\\rangle\\langle \\psi^+|+p_4|\\psi^-\\rangle\\langle\\psi^-|" }, { "math_id": 5, "text": "p_1,p_2,p_3,p_4" }, { "math_id": 6, "text": "p_1+p_2+p_3+p_4=1" }, { "math_id": 7, "text": " p_{max}=\\max\\{p_1,p_2,p_3,p_4\\}" }, { "math_id": 8, "text": "p_\\text{max}\\leq 1/2" }, { "math_id": 9, "text": "S_r=1-h(p_\\text{max})" }, { "math_id": 10, "text": "h" }, { "math_id": 11, "text": "E_f=h(\\frac{1}{2}+\\sqrt{p_\\text{max}(1-p_\\text{max})})" }, { "math_id": 12, "text": "N=p_\\text{max}-1/2" }, { "math_id": 13, "text": "E_N=\\log(2 p_\\text{max} )" }, { "math_id": 14, "text": "\\rho_A=\\rho_B=I/2" }, { "math_id": 15, "text": "U=U_1\\otimes U_2" }, { "math_id": 16, "text": "U\\rho U^{\\dagger} " } ]
https://en.wikipedia.org/wiki?curid=73479684
73484608
Generalized balanced ternary
Generalized balanced ternary is a generalization of the balanced ternary numeral system to represent points in a higher-dimensional space. It was first described in 1982 by Laurie Gibson and Dean Lucas. It has since been used for various applications, including geospatial and high-performance scientific computing. General form. Like standard positional numeral systems, generalized balanced ternary represents a point formula_0 as powers of a base formula_1 multiplied by digits formula_2. formula_3 Generalized balanced ternary uses a transformation matrix as its base formula_1. Digits are vectors chosen from a finite subset formula_4 of the underlying space. One dimension. In one dimension, generalized balanced ternary is equivalent to standard balanced ternary, with three digits (0, 1, and -1). formula_1 is a formula_5 matrix, and the digits formula_6 are length-1 vectors, so they appear here without the extra brackets. formula_7 Addition table. This is the same addition table as standard balanced ternary, but with formula_8 replacing T. To make the table easier to read, the numeral formula_9 is written instead of formula_6. Two dimensions. In two dimensions, there are seven digits. The digits formula_10 are six points arranged in a regular hexagon centered at formula_11. formula_12 Addition table. As in the one-dimensional addition table, the numeral formula_9 is written instead of formula_6 (despite e.g. formula_8 having no particular relationship to the number 2). If there are two numerals in a cell, the left one is carried over to the next digit. Unlike standard addition, addition of two-dimensional generalized balanced ternary numbers may require multiple carries to be performed while computing a single digit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
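As a concrete illustration of the one-dimensional case (this sketch is not taken from the sources above), the following C++ code converts an integer into standard balanced ternary digits, i.e., the digit set {−1, 0, 1} with base 3 described above; the helper name is chosen for this example only.

#include <iostream>
#include <vector>

// Convert an integer to balanced ternary, least significant digit first.
// Each digit lies in {-1, 0, 1}, matching the one-dimensional digit set
// {D_2, D_0, D_1} = {-1, 0, 1}.
static std::vector<int> toBalancedTernary(int n) {
    if (n == 0) return {0};
    std::vector<int> digits;
    while (n != 0) {
        int r = ((n % 3) + 3) % 3;      // remainder in {0, 1, 2}
        if (r == 2) { r = -1; n += 1; } // digit 2 becomes -1 with a carry
        digits.push_back(r);
        n /= 3;
    }
    return digits;
}

int main() {
    for (int d : toBalancedTernary(7)) std::cout << d << ' ';
    std::cout << "\n"; // prints 1 -1 1, since 7 = 1 - 1*3 + 1*9
    return 0;
}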
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "d_i" }, { "math_id": 3, "text": "p = d_0 + B d_1 + B^2 d_2 + \\ldots" }, { "math_id": 4, "text": "\\{D_0 = 0, D_1, \\ldots, D_n\\}" }, { "math_id": 5, "text": "1\\times 1" }, { "math_id": 6, "text": "D_i" }, { "math_id": 7, "text": "\\begin{align}B &= 3 \\\\ D_0 &= 0 \\\\ D_1 &= 1 \\\\ D_2 &= -1\\end{align}" }, { "math_id": 8, "text": "D_2" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "D_1, \\ldots, D_6" }, { "math_id": 11, "text": "D_0 = 0" }, { "math_id": 12, "text": "\n\\begin{align}\nB &= \\frac{1}{2}\\begin{bmatrix} 5 & \\sqrt{3} \\\\ -\\sqrt{3} & 5 \\end{bmatrix} \\\\\nD_0 &= 0 \\\\\nD_1 &= \\left( 0, \\sqrt{3} \\right) \\\\\nD_2 &= \\left( \\frac{3}{2}, -\\frac{\\sqrt{3}}{2} \\right) \\\\\nD_3 &= \\left( \\frac{3}{2}, \\frac{\\sqrt{3}}{2} \\right) \\\\\nD_4 &= \\left( -\\frac{3}{2}, -\\frac{\\sqrt{3}}{2} \\right) \\\\\nD_5 &= \\left( -\\frac{3}{2}, \\frac{\\sqrt{3}}{2} \\right) \\\\\nD_6 &= \\left( 0, -\\sqrt{3} \\right) \\\\\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=73484608
734853
Cashflow matching
Cash flow matching is a process of hedging in which a company or other entity matches its cash outflows (i.e., financial obligations) with its cash inflows over a given time horizon. It is a subset of immunization strategies in finance. Cash flow matching is of particular importance to defined benefit pension plans. Solution with linear programming. It is possible to solve the simple cash flow matching problem using linear programming. Suppose that we have a choice of formula_0 bonds with which to receive cash flows over formula_1 time periods in order to cover liabilities formula_2 for each time period. The formula_3th bond in time period formula_4 is assumed to have known cash flows formula_5 and initial price formula_6. It is possible to buy formula_7 bonds and to run a surplus formula_8 in a given time period, both of which must be non-negative, leading to the set of constraints: formula_9 Our goal is to minimize the initial cost of purchasing bonds to meet the liabilities in each time period, given by formula_10. Together, these requirements give rise to the associated linear programming problem: formula_11 where formula_12 and formula_13, with entries: formula_14 In the case where fixed income instruments (not necessarily bonds) are used to provide the dedicated cash flows, it is unlikely to be the case that fractional components are available for purchase. Therefore, a more realistic approach to cash flow matching is to employ mixed-integer linear programming to select a discrete number of instruments with which to match liabilities. References. <templatestyles src="Reflist/styles.css" />
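To make the constraint structure concrete, the following illustrative C++ sketch (not from the cited references) checks, for hypothetical bond data, whether a candidate purchase vector together with its implied surpluses satisfies the matching constraints, and reports the portfolio cost; it does not solve the linear program itself, for which a dedicated LP or MILP solver would be used.

#include <iostream>
#include <vector>

int main() {
    // Hypothetical data: 2 bonds over 3 periods. F[t][j] is the cash flow of
    // bond j in period t, p[j] its price, L[t] the liability in period t.
    std::vector<std::vector<double>> F = {{5, 105}, {5, 0}, {105, 0}};
    std::vector<double> p = {98.0, 101.0};
    std::vector<double> L = {100.0, 4.0, 105.0};
    std::vector<double> x = {1.0, 1.0}; // candidate purchases (x >= 0)

    double cost = 0.0;
    for (size_t j = 0; j < x.size(); ++j) cost += p[j] * x[j];

    // Propagate surpluses: s_t = sum_j F[t][j] x_j + s_{t-1} - L_t, with s_0 term absent.
    double surplus = 0.0;
    bool feasible = true;
    for (size_t t = 0; t < L.size(); ++t) {
        double inflow = surplus;
        for (size_t j = 0; j < x.size(); ++j) inflow += F[t][j] * x[j];
        surplus = inflow - L[t];
        if (surplus < 0.0) feasible = false; // every s_t must be non-negative
    }
    std::cout << (feasible ? "feasible" : "infeasible")
              << ", cost = " << cost << "\n";
    return 0;
}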
[ { "math_id": 0, "text": "j=1,...,n" }, { "math_id": 1, "text": "t=1,...,T" }, { "math_id": 2, "text": "L_{1},...,L_{T}" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "F_{tj}" }, { "math_id": 6, "text": "p_{j}" }, { "math_id": 7, "text": "x_{j}" }, { "math_id": 8, "text": "s_{t}" }, { "math_id": 9, "text": "\\begin{aligned}\n\\sum_{j=1}^{n}F_{1j}x_{j} - s_{1} &= L_{1} \\\\\n\\sum_{j=1}^{n}F_{tj}x_{j} + s_{t-1} - s_{t} &= L_{t}, \\quad t = 2,...,T\n\\end{aligned}" }, { "math_id": 10, "text": "p^{T}x" }, { "math_id": 11, "text": "\\min_{x,s} \\; p^{T}x, \\quad \\text{s.t.} \\; Fx + Rs = L, \\; x,s\\geq 0" }, { "math_id": 12, "text": "F\\in\\mathbb{R}^{T\\times n}" }, { "math_id": 13, "text": "R\\in\\mathbb{R}^{T\\times T}" }, { "math_id": 14, "text": "R_{t,t} = -1, \\quad R_{t+1,t} = 1" } ]
https://en.wikipedia.org/wiki?curid=734853
73488171
Interleaving distance
Measure of distance between persistence modules In topological data analysis, the interleaving distance is a measure of similarity between persistence modules, a common object of study in topological data analysis and persistent homology. The interleaving distance was first introduced by Frédéric Chazal et al. in 2009. Since then, it and its generalizations have been a central consideration in the study of applied algebraic topology and topological data analysis. Definition. A "persistence module" formula_0 is a collection formula_1 of vector spaces indexed over the real line, along with a collection formula_2 of linear maps such that formula_3 is always an isomorphism, and the relation formula_4 is satisfied for every formula_5. The case of formula_6 indexing is presented here for simplicity, though the interleaving distance can be readily adapted to more general settings, including multi-dimensional persistence modules. Let formula_7 and formula_0 be persistence modules. Then for any formula_8, a "formula_9-shift" is a collection formula_10 of linear maps between the persistence modules that commute with the internal maps of formula_7 and formula_0. The persistence modules formula_7 and formula_0 are said to be formula_9-interleaved if there are formula_9-shifts formula_11 and formula_12 such that the corresponding interleaving diagrams commute for all formula_13; concretely, composing the two shifts in either order must agree with the internal map of formula_7 (respectively formula_0) that increases the index by twice formula_9. It follows from the definition that if formula_7 and formula_0 are formula_9-interleaved for some formula_9, then they are also formula_14-interleaved for any positive formula_15. Therefore, in order to find the closest interleaving between the two modules, we must take the infimum across all possible interleavings. The "interleaving distance" between two persistence modules formula_7 and formula_0 is defined as formula_16. Properties. Metric properties. It can be shown that the interleaving distance satisfies the triangle inequality. Namely, given three persistence modules formula_7, formula_0, and formula_17, the inequality formula_18 is satisfied. On the other hand, there are examples of persistence modules that are not isomorphic but that have interleaving distance zero. Furthermore, if no suitable formula_9 exists then two persistence modules are said to have infinite interleaving distance. These two properties make the interleaving distance an "extended pseudometric", which means non-identical objects are allowed to have distance zero, and objects are allowed to have infinite distance, but the other properties of a proper metric are satisfied. Further metric properties of the interleaving distance and its variants were investigated by Luis Scoccola in 2020. Computational complexity. Computing the interleaving distance between two single-parameter persistence modules can be accomplished in polynomial time. On the other hand, it was shown in 2018 that computing the interleaving distance between two multi-dimensional persistence modules is NP-hard.
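For intuition (this example is not taken from the sources above), the following C++ sketch evaluates the interleaving distance in the simplest special case of two interval modules supported on [b, d); it relies on the well-known agreement of the interleaving distance with the bottleneck distance in this case, so it should be read as an illustrative special case rather than a general algorithm.

#include <algorithm>
#include <cmath>
#include <iostream>

// Interleaving distance between two interval modules I[b1, d1) and I[b2, d2),
// assuming it coincides with the bottleneck distance: the cheaper of matching
// the two intervals to each other, or letting each interval die into the zero
// module after half its length.
static double intervalInterleaving(double b1, double d1, double b2, double d2) {
    double matchCost = std::max(std::fabs(b1 - b2), std::fabs(d1 - d2));
    double killCost  = std::max((d1 - b1) / 2.0, (d2 - b2) / 2.0);
    return std::min(matchCost, killCost);
}

int main() {
    std::cout << intervalInterleaving(0.0, 10.0, 1.0, 10.5) << "\n"; // prints 1
    std::cout << intervalInterleaving(0.0, 1.0, 100.0, 101.0) << "\n"; // prints 0.5
    return 0;
}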
[ { "math_id": 0, "text": "\\mathbb V" }, { "math_id": 1, "text": "(V_t \\mid t \\in \\mathbb R)" }, { "math_id": 2, "text": "(v^s_t : V_s \\to V_t \\mid s\\leq t)" }, { "math_id": 3, "text": "v^t_t" }, { "math_id": 4, "text": "v^s_t \\circ v^r_s = v^r_t" }, { "math_id": 5, "text": "r\\leq s \\leq t" }, { "math_id": 6, "text": "\\mathbb R" }, { "math_id": 7, "text": "\\mathbb U" }, { "math_id": 8, "text": "\\delta \\in \\mathbb R" }, { "math_id": 9, "text": "\\delta" }, { "math_id": 10, "text": "(\\phi_t : U_t \\to V_{t+\\delta} \\mid t \\in \\mathbb R)" }, { "math_id": 11, "text": "\\phi_t: U_t \\to V_{t+ \\delta}" }, { "math_id": 12, "text": "\\psi_t: V_t \\to U_{t+ \\delta}" }, { "math_id": 13, "text": "s \\leq t" }, { "math_id": 14, "text": "(\\delta + \\varepsilon)" }, { "math_id": 15, "text": "\\varepsilon" }, { "math_id": 16, "text": "d_I (\\mathbb U, \\mathbb V) = \\inf \\{\\delta \\mid \\mathbb U \\text{ and } \\mathbb V \\text{ are } \\delta\\text{-interleaved}\\}" }, { "math_id": 17, "text": "\\mathbb W" }, { "math_id": 18, "text": "d_I (\\mathbb U, \\mathbb W) \\leq d_I (\\mathbb U, \\mathbb V) + d_I (\\mathbb V, \\mathbb W)" } ]
https://en.wikipedia.org/wiki?curid=73488171
734893
Canonical bundle
In mathematics, the canonical bundle of a non-singular algebraic variety formula_0 of dimension formula_1 over a field is the line bundle formula_2, which is the "n"th exterior power of the cotangent bundle formula_3 on formula_0. Over the complex numbers, it is the determinant bundle of the holomorphic cotangent bundle formula_4. Equivalently, it is the line bundle of holomorphic "n"-forms on formula_0. This is the dualising object for Serre duality on formula_0. It may equally well be considered as an invertible sheaf. The canonical class is the divisor class of a Cartier divisor formula_5 on formula_0 giving rise to the canonical bundle — it is an equivalence class for linear equivalence on formula_0, and any divisor in it may be called a canonical divisor. An anticanonical divisor is any divisor −formula_5 with formula_5 canonical. The anticanonical bundle is the corresponding inverse bundle formula_6. When the anticanonical bundle of formula_0 is ample, formula_0 is called a Fano variety. The adjunction formula. Suppose that "X" is a smooth variety and that "D" is a smooth divisor on "X". The adjunction formula relates the canonical bundles of "X" and "D". It is a natural isomorphism formula_7 In terms of canonical classes, it is formula_8 This formula is one of the most powerful formulas in algebraic geometry. An important tool of modern birational geometry is inversion of adjunction, which allows one to deduce results about the singularities of "X" from the singularities of "D". The canonical bundle formula. Let formula_9 be a normal surface. A genus formula_10 fibration formula_11 of formula_9 is a proper flat morphism formula_12 to a smooth curve such that formula_13 and all fibers of formula_12 have arithmetic genus formula_10. If formula_9 is a smooth projective surface and the fibers of formula_12 do not contain rational curves of self-intersection formula_14, then the fibration is called minimal. For example, if formula_9 admits a (minimal) genus 0 fibration, then formula_9 is birationally ruled, that is, birational to formula_15. For a minimal genus 1 fibration (also called an elliptic fibration) formula_11, all but finitely many fibers of formula_12 are geometrically integral and all fibers are geometrically connected (by Zariski's connectedness theorem). In particular, for a fiber formula_16 of formula_12, we have that formula_17 where formula_18 is a canonical divisor of formula_9; so for formula_19, the fiber formula_20 is geometrically integral if formula_21 and formula_22 otherwise. Consider a minimal genus 1 fibration formula_11. Let formula_23 be the finitely many fibers that are not geometrically integral and write formula_24 where formula_25 is the greatest common divisor of the coefficients of the expansion of formula_26 into integral components; these are called multiple fibers. By cohomology and base change one has that formula_27 where formula_28 is an invertible sheaf and formula_29 is a torsion sheaf (formula_29 is supported on formula_30 such that formula_31). Then, one has that formula_32 where formula_33 for each formula_34 and formula_35. One notes that formula_36. For example, for the minimal genus 1 fibration of a (quasi)-bielliptic surface induced by the Albanese morphism, the canonical bundle formula gives that this fibration has no multiple fibers. A similar deduction can be made for any minimal genus 1 fibration of a K3 surface.
On the other hand, a minimal genus one fibration of an Enriques surface will always admit multiple fibers and so, such a surface will not admit a section. Singular case. On a singular variety formula_9, there are several ways to define the canonical divisor. If the variety is normal, it is smooth in codimension one. In particular, we can define canonical divisor on the smooth locus. This gives us a unique Weil divisor class on formula_9. It is this class, denoted by formula_18 that is referred to as the canonical divisor on formula_37 Alternately, again on a normal variety formula_9, one can consider formula_38, the formula_39'th cohomology of the normalized dualizing complex of formula_9. This sheaf corresponds to a Weil divisor class, which is equal to the divisor class formula_18 defined above. In the absence of the normality hypothesis, the same result holds if formula_9 is S2 and Gorenstein in dimension one. Canonical maps. If the canonical class is effective, then it determines a rational map from "V" into projective space. This map is called the canonical map. The rational map determined by the "n"th multiple of the canonical class is the "n"-canonical map. The "n"-canonical map sends "V" into a projective space of dimension one less than the dimension of the global sections of the "n"th multiple of the canonical class. "n"-canonical maps may have base points, meaning that they are not defined everywhere (i.e., they may not be a morphism of varieties). They may have positive dimensional fibers, and even if they have zero-dimensional fibers, they need not be local analytic isomorphisms. Canonical curves. The best studied case is that of curves. Here, the canonical bundle is the same as the (holomorphic) cotangent bundle. A global section of the canonical bundle is therefore the same as an everywhere-regular differential form. Classically, these were called differentials of the first kind. The degree of the canonical class is 2"g" − 2 for a curve of genus "g". Low genus. Suppose that "C" is a smooth algebraic curve of genus "g". If "g" is zero, then "C" is P1, and the canonical class is the class of −2"P", where "P" is any point of "C". This follows from the calculus formula "d"(1/"t") = −"dt"/"t"2, for example, a meromorphic differential with double pole at the origin on the Riemann sphere. In particular, "K""C" and its multiples are not effective. If "g" is one, then "C" is an elliptic curve, and "K""C" is the trivial bundle. The global sections of the trivial bundle form a one-dimensional vector space, so the "n"-canonical map for any "n" is the map to a point. Hyperelliptic case. If "C" has genus two or more, then the canonical class is big, so the image of any "n"-canonical map is a curve. The image of the 1-canonical map is called a canonical curve. A canonical curve of genus "g" always sits in a projective space of dimension "g" − 1. When "C" is a hyperelliptic curve, the canonical curve is a rational normal curve, and "C" a double cover of its canonical curve. For example if "P" is a polynomial of degree 6 (without repeated roots) then "y"2 = "P"("x") is an affine curve representation of a genus 2 curve, necessarily hyperelliptic, and a basis of the differentials of the first kind is given in the same notation by "dx"/√"P"("x"),   "x dx"/√"P"("x"). This means that the canonical map is given by homogeneous coordinates [1: "x"] as a morphism to the projective line. 
The rational normal curve for higher genus hyperelliptic curves arises in the same way with higher power monomials in "x". General case. Otherwise, for non-hyperelliptic "C" which means "g" is at least 3, the morphism is an isomorphism of "C" with its image, which has degree 2"g" − 2. Thus for "g" = 3 the canonical curves (non-hyperelliptic case) are quartic plane curves. All non-singular plane quartics arise in this way. There is explicit information for the case "g" = 4, when a canonical curve is an intersection of a quadric and a cubic surface; and for "g" = 5 when it is an intersection of three quadrics. There is a converse, which is a corollary to the Riemann–Roch theorem: a non-singular curve "C" of genus "g" embedded in projective space of dimension "g" − 1 as a linearly normal curve of degree 2"g" − 2 is a canonical curve, provided its linear span is the whole space. In fact the relationship between canonical curves "C" (in the non-hyperelliptic case of "g" at least 3), Riemann-Roch, and the theory of special divisors is rather close. Effective divisors "D" on "C" consisting of distinct points have a linear span in the canonical embedding with dimension directly related to that of the linear system in which they move; and with some more discussion this applies also to the case of points with multiplicities. More refined information is available, for larger values of "g", but in these cases canonical curves are not generally complete intersections, and the description requires more consideration of commutative algebra. The field started with Max Noether's theorem: the dimension of the space of quadrics passing through "C" as embedded as canonical curve is ("g" − 2)("g" − 3)/2. Petri's theorem, often cited under this name and published in 1923 by Karl Petri (1881–1955), states that for "g" at least 4 the homogeneous ideal defining the canonical curve is generated by its elements of degree 2, except for the cases of (a) trigonal curves and (b) non-singular plane quintics when "g" = 6. In the exceptional cases, the ideal is generated by the elements of degrees 2 and 3. Historically speaking, this result was largely known before Petri, and has been called the theorem of Babbage-Chisini-Enriques (for Dennis Babbage who completed the proof, Oscar Chisini and Federigo Enriques). The terminology is confused, since the result is also called the Noether–Enriques theorem. Outside the hyperelliptic cases, Noether proved that (in modern language) the canonical bundle is normally generated: the symmetric powers of the space of sections of the canonical bundle map onto the sections of its tensor powers. This implies for instance the generation of the quadratic differentials on such curves by the differentials of the first kind; and this has consequences for the local Torelli theorem. Petri's work actually provided explicit quadratic and cubic generators of the ideal, showing that apart from the exceptions the cubics could be expressed in terms of the quadratics. In the exceptional cases the intersection of the quadrics through the canonical curve is respectively a ruled surface and a Veronese surface. These classical results were proved over the complex numbers, but modern discussion shows that the techniques work over fields of any characteristic. Canonical rings. The canonical ring of "V" is the graded ring formula_40 If the canonical class of "V" is an ample line bundle, then the canonical ring is the homogeneous coordinate ring of the image of the canonical map. 
This can be true even when the canonical class of "V" is not ample. For instance, if "V" is a hyperelliptic curve, then the canonical ring is again the homogeneous coordinate ring of the image of the canonical map. In general, if the ring above is finitely generated, then it is elementary to see that it is the homogeneous coordinate ring of the image of a "k"-canonical map, where "k" is any sufficiently divisible positive integer. The minimal model program proposed that the canonical ring of every smooth or mildly singular projective variety was finitely generated. In particular, this was known to imply the existence of a canonical model, a particular birational model of "V" with mild singularities that could be constructed by blowing down "V". When the canonical ring is finitely generated, the canonical model is Proj of the canonical ring. If the canonical ring is not finitely generated, then Proj "R" is not a variety, and so it cannot be birational to "V"; in particular, "V" admits no canonical model. One can show that if the canonical divisor "K" of "V" is a nef divisor and the self intersection of "K" is greater than zero, then "V" will admit a canonical model (more generally, this is true for normal complete Gorenstein algebraic spaces). A fundamental theorem of Birkar–Cascini–Hacon–McKernan from 2006 is that the canonical ring of a smooth or mildly singular projective algebraic variety is finitely generated. The Kodaira dimension of "V" is the dimension of the canonical ring minus one. Here the dimension of the canonical ring may be taken to mean Krull dimension or transcendence degree. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\,\\!\\Omega^n = \\omega" }, { "math_id": 3, "text": "\\Omega" }, { "math_id": 4, "text": "T^*V" }, { "math_id": 5, "text": "K" }, { "math_id": 6, "text": "\\omega^{-1}" }, { "math_id": 7, "text": "\\omega_D = i^*(\\omega_X \\otimes \\mathcal{O}(D))." }, { "math_id": 8, "text": "K_D = (K_X + D)|_D." }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "g" }, { "math_id": 11, "text": "f:X\\to B" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "f_*\\mathcal{O}_X\\cong \\mathcal{O}_B" }, { "math_id": 14, "text": "-1" }, { "math_id": 15, "text": "\\mathbb{P}^1\\times B" }, { "math_id": 16, "text": "F=\\sum^{n}_{i=1}a_iE_i" }, { "math_id": 17, "text": "F.E_i=K_X.E_i=0," }, { "math_id": 18, "text": "K_X" }, { "math_id": 19, "text": "m=\\operatorname{gcd}(a_i)" }, { "math_id": 20, "text": "F" }, { "math_id": 21, "text": "m=1" }, { "math_id": 22, "text": "m>1" }, { "math_id": 23, "text": "F_1,\\dots,F_r" }, { "math_id": 24, "text": "F_i=m_iF_i^'" }, { "math_id": 25, "text": "m_i>1" }, { "math_id": 26, "text": "F_i" }, { "math_id": 27, "text": "R^1f_*\\mathcal{O}_X=\\mathcal{L}\\oplus\\mathcal{T}" }, { "math_id": 28, "text": "\\mathcal{L}" }, { "math_id": 29, "text": "\\mathcal{T}" }, { "math_id": 30, "text": "b\\in B" }, { "math_id": 31, "text": "h^0(X_b,\\mathcal{O}_{X_b})>1" }, { "math_id": 32, "text": "\\omega_X\\cong f^*(\\mathcal{L}^{-1}\\otimes \\omega_{B})\\otimes \\mathcal{O}_X\\left(\\sum^r_{i=1}a_iF_i'\\right)" }, { "math_id": 33, "text": "0\\leq a_i<m_i" }, { "math_id": 34, "text": "i" }, { "math_id": 35, "text": "\\operatorname{deg}\\left(\\mathcal{L}^{-1}\\right)=\\chi(\\mathcal{O}_X)+\\operatorname{length}(\\mathcal{T})" }, { "math_id": 36, "text": "\\operatorname{length}(\\mathcal{T})=0\\iff a_i=m_i-1" }, { "math_id": 37, "text": "X." }, { "math_id": 38, "text": "h^{-d}(\\omega^._X)" }, { "math_id": 39, "text": "-d" }, { "math_id": 40, "text": "R = \\bigoplus_{d = 0}^\\infty H^0(V, K_V^d)." } ]
https://en.wikipedia.org/wiki?curid=734893
73492903
Persistent Betti number
In persistent homology, a persistent Betti number is a multiscale analog of a Betti number that tracks the number of topological features that persist over multiple scale parameters in a filtration. Whereas the classical formula_0 Betti number equals the rank of the formula_0 homology group, the formula_0 persistent Betti number is the rank of the formula_0 persistent homology group. The concept of a persistent Betti number was introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in the 2002 paper "Topological Persistence and Simplification", one of the seminal papers in the field of persistent homology and topological data analysis. Applications of the persistent Betti number appear in a variety of fields including data analysis, machine learning, and physics. Definition. Let formula_1 be a simplicial complex, and let formula_2 be a monotonic, i.e., non-decreasing function. Requiring monotonicity guarantees that the sublevel set formula_3 is a subcomplex of formula_1 for all formula_4. Letting the parameter formula_5 vary, we can arrange these subcomplexes into a nested sequence formula_6 for some natural number formula_7. This sequence defines a "filtration" on the complex formula_1. Persistent homology concerns itself with the evolution of topological features across a filtration. To that end, by taking the formula_8 homology group of every complex in the filtration we obtain a sequence of homology groups formula_9 that are connected by homomorphisms induced by the inclusion maps in the filtration. When applying homology over a field, we get a sequence of vector spaces and linear maps commonly known as a persistence module. In order to track the evolution of homological features as opposed to the static topological information at each individual index, one needs to count only the number of nontrivial homology classes that persist in the filtration, i.e., that remain nontrivial across multiple scale parameters. For each formula_10, let formula_11 denote the induced homomorphism formula_12. Then the "formula_8 persistent homology groups" are defined to be the images of each induced map. Namely, formula_13 for all formula_14. In parallel to the classical Betti number, the "formula_8 persistent Betti numbers" are precisely the ranks of the "formula_8" persistent homology groups, given by the definition formula_15.
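The reduction-based computation of persistent Betti numbers can be sketched in a few lines of code. The following is an illustrative implementation of the standard boundary-matrix reduction over GF(2), followed by a count of the intervals that contribute to formula_15; the input conventions and helper names are our own and are not taken from any particular software package.
```python
from itertools import combinations

def reduce_boundary(filtration):
    """filtration: list of (simplex, level) pairs, simplices as sorted vertex tuples,
    listed so that every face appears before its cofaces."""
    index = {s: k for k, (s, _) in enumerate(filtration)}
    columns = []
    for s, _ in filtration:
        if len(s) == 1:
            columns.append(set())                      # vertices have empty boundary
        else:
            columns.append({index[face] for face in combinations(s, len(s) - 1)})
    low_to_col, pairs = {}, []
    for j, col in enumerate(columns):
        while col and max(col) in low_to_col:
            col ^= columns[low_to_col[max(col)]]       # column addition over GF(2)
        if col:
            low_to_col[max(col)] = j
            pairs.append((max(col), j))                # (creator simplex, destroyer simplex)
    paired = {k for pair in pairs for k in pair}
    essential = [j for j in range(len(columns)) if j not in paired and not columns[j]]
    return pairs, essential

def persistent_betti(filtration, p, i, j):
    """beta_p^{i,j}: classes of dimension p born at or before level i and still alive in level j."""
    pairs, essential = reduce_boundary(filtration)
    level = [lvl for _, lvl in filtration]
    dim = [len(s) - 1 for s, _ in filtration]
    alive = sum(1 for b, d in pairs if dim[b] == p and level[b] <= i and level[d] > j)
    alive += sum(1 for b in essential if dim[b] == p and level[b] <= i)
    return alive

# Example: the boundary of a triangle appears at level 1 and is filled in at level 2.
filt = [((0,), 0), ((1,), 0), ((2,), 0),
        ((0, 1), 1), ((1, 2), 1), ((0, 2), 1),
        ((0, 1, 2), 2)]
print(persistent_betti(filt, 1, 1, 1))   # 1: the loop is present in K_1
print(persistent_betti(filt, 1, 1, 2))   # 0: it has been filled by level 2
```
In the example, the loop created at level 1 is destroyed when the triangle is filled at level 2, so the 1-dimensional persistent Betti number is 1 between levels 1 and 1 and 0 between levels 1 and 2.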
[ { "math_id": 0, "text": "n^{th}" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "f:K \\to \\mathbb R" }, { "math_id": 3, "text": "K(a) := f^{-1} (-\\infty, a]" }, { "math_id": 4, "text": "a \\in \\mathbb R" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\emptyset = K_0 \\subseteq K_1 \\subseteq \\cdots \\subseteq K_n = K" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "p^{th}" }, { "math_id": 9, "text": "0 = H_p (K_0) \\to H_p (K_1) \\to \\cdots \\to H_p (K_n) = H_p (K)" }, { "math_id": 10, "text": "i \\leq j" }, { "math_id": 11, "text": "f_p^{i,j}" }, { "math_id": 12, "text": "H_p (K_i) \\to H_p (K_j)" }, { "math_id": 13, "text": "H_p^{i,j} := \\operatorname{im} f_p^{i,j}" }, { "math_id": 14, "text": "0 \\leq i \\leq j \\leq n" }, { "math_id": 15, "text": "\\beta_p^{i,j} := \\operatorname{rank} H_p^{i,j}" } ]
https://en.wikipedia.org/wiki?curid=73492903
73493450
Active circulator
Active non-reciprocal three-port device In electrical engineering, an active circulator is an active non-reciprocal three-port device that couples a microwave or radio-frequency signal only to an adjacent port in the direction of circulation. Other (external) circuitry connects to the circulator ports via transmission lines. An ideal three-port active circulator has the following scattering matrix: formula_0 An active circulator can be constructed using one of several different technologies. One early technology is the use of transistors as the active devices to perform the non-reciprocal function. Varactor circuits are another technology, relying on a time-varying transmission line structure, driven by a separate pump signal. A third technology utilizes spatiotemporally-modulated rings of coupled resonators. Another design approach relies on staggered commutation and integrated circuit techniques. Compared to passive (ferrite) circulators, active circulators have the advantages of small size, low mass, and simple integration with other circuitry. System designers must weigh these factors with the disadvantages of active circulators: they require DC power and sometimes a separate pump or clock signal, they can be nonlinear, and can introduce significant noise into the signal path. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
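A small numerical check of the scattering matrix quoted above makes the routing explicit; the snippet below is purely illustrative and does not model any particular device. Applying formula_0 to a wave incident on port 1 shows that it emerges only at port 2, and the matrix is unitary (lossless) but not symmetric (non-reciprocal).
```python
import numpy as np

S = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

a = np.array([1, 0, 0])                         # unit wave incident on port 1
print(S @ a)                                    # [0 1 0]: the signal exits at port 2 only
print(np.allclose(S.conj().T @ S, np.eye(3)))   # True: unitary, hence lossless
print(np.array_equal(S, S.T))                   # False: non-reciprocal
```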
[ { "math_id": 0, "text": "S = \\begin{pmatrix}\n 0 & 0 & 1\\\\\n 1 & 0 & 0 \\\\\n 0 & 1 & 0\n\\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=73493450
73493845
Persistent homology group
Multiscale analog of homology group In persistent homology, a persistent homology group is a multiscale analog of a homology group that captures information about the evolution of topological features across a filtration of spaces. While the ordinary homology group represents nontrivial homology classes of an individual topological space, the persistent homology group tracks only those classes that remain nontrivial across multiple parameters in the underlying filtration. Analogous to the ordinary Betti number, the ranks of the persistent homology groups are known as the persistent Betti numbers. Persistent homology groups were first introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in a 2002 paper "Topological Persistence and Simplification", one of the foundational papers in the fields of persistent homology and topological data analysis, based largely on the persistence barcodes and the persistence algorithm, which were first described by Serguei Barannikov in a 1994 paper. Since then, the study of persistent homology groups has led to applications in data science, machine learning, materials science, biology, and economics. Definition. Let formula_0 be a simplicial complex, and let formula_1 be a real-valued monotonic function. Then for some values formula_2 the sublevel-sets formula_3 yield a sequence of nested subcomplexes formula_4 known as a "filtration" of formula_0. Applying formula_5 homology to each complex yields a sequence of homology groups formula_6 connected by homomorphisms induced by the inclusion maps of the underlying filtration. When homology is taken over a field, we get a sequence of vector spaces and linear maps known as a persistence module. Let formula_7 be the homomorphism induced by the inclusion formula_8. Then the formula_5 persistent homology groups are defined as the images formula_9 for all formula_10. In particular, the persistent homology group formula_11. More precisely, the formula_5 persistent homology group can be defined as formula_12, where formula_13 and formula_14 are the standard p-cycle and p-boundary groups, respectively. Birth and death of homology classes. Sometimes the elements of formula_15 are described as the homology classes that are "born" at or before formula_16 and that have not yet "died" entering formula_17. These notions can be made precise as follows. A homology class formula_18 is said to be "born" at formula_16 if it is not contained in the image of the previous persistent homology group, i.e., formula_19. Conversely, formula_20 is said to "die entering" formula_17 if formula_20 is subsumed (i.e., merges with) another older class as the sequence proceeds from formula_21. That is to say, formula_22 but formula_23. The determination that an older class persists if it merges with a younger class, instead of the other way around, is sometimes known as the "Elder Rule". The indices formula_24 at which a homology class formula_20 is born and dies entering are known as the "birth" and "death" indices of formula_20. The difference formula_25 is known as the "index persistence" of formula_20, while the corresponding difference formula_26 in function values corresponding to those indices is known as the "persistence" of formula_20. If there exists no index at which formula_20 dies, it is assigned an infinite death index. Thus, the persistence of each class can be represented as an interval in the extended real line formula_27 of either the form formula_28 or formula_29.
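The following short sketch (with our own input conventions, not a library interface) turns birth and death indices, together with the filtration values, into the corresponding intervals, index persistences and persistences, using None for an infinite death index.
```python
import math

def intervals(classes, a):
    """classes: list of (birth_index, death_index or None); a: filtration values a_0 < a_1 < ..."""
    out = []
    for i, j in classes:
        if j is None:
            out.append((a[i], math.inf, math.inf, math.inf))      # essential class, never dies
        else:
            out.append((a[i], a[j], j - i, a[j] - a[i]))
    return out   # (birth value, death value, index persistence, persistence)

a = [0.0, 0.5, 1.2, 3.0]                  # filtration values a_0 .. a_3 (illustrative)
classes = [(0, 2), (1, None), (1, 3)]     # (birth index, death index) for three classes
for birth, death, ip, p in intervals(classes, a):
    print(f"[{birth}, {death}): index persistence {ip}, persistence {p}")
```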
Since, when the coefficient field is infinite, infinitely many classes share the same persistence, the collection of such intervals over "all" classes does not give meaningful multiplicities for a multiset of intervals. Instead, such multiplicities and a multiset of intervals in the extended real line are given by the structure theorem of persistent homology. This multiset is known as the "persistence barcode". Canonical form. Concretely, the structure theorem states that for any filtered complex over a field formula_30, there exists a linear transformation that preserves the filtration and converts the filtered complex into so-called canonical form, a canonically defined direct sum of filtered complexes of two types: two-dimensional complexes with trivial homology formula_31 and one-dimensional complexes with trivial differential formula_32. Persistence diagram. Geometrically, a barcode can be plotted as a multiset of points (with possibly infinite coordinates) formula_33 in the extended plane formula_34. By the above definitions, each point will lie above the diagonal, and the distance to the diagonal is exactly equal to the persistence of the corresponding class times formula_35. This construction is known as the "persistence diagram", and it provides a way of visualizing the structure of the persistence of homology classes in the sequence of persistent homology groups.
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "f: K \\to \\mathbb R" }, { "math_id": 2, "text": "a_0 < a_1 < \\cdots < a_n \\in \\mathbb R" }, { "math_id": 3, "text": "K(a) := f^{-1}(-\\infty, a]" }, { "math_id": 4, "text": "\\emptyset = K_0 \\subseteq K_1 \\subseteq \\cdots \\subseteq K_n = K" }, { "math_id": 5, "text": "p^{th}" }, { "math_id": 6, "text": "0 = H_p (K_0) \\to H_p (K_1) \\to \\cdots \\to H_p (K_n) = H_p (K)" }, { "math_id": 7, "text": "f_p^{i,j}: H_p (K_i) \\to H_p (K_j)" }, { "math_id": 8, "text": "K_i \\hookrightarrow K_j" }, { "math_id": 9, "text": "H_p^{i,j} := \\operatorname{im} f_p^{i,j}" }, { "math_id": 10, "text": "1 \\leq i \\leq j \\leq n" }, { "math_id": 11, "text": "H_p^{i,i} = H_p (K_i)" }, { "math_id": 12, "text": "H_p^{i,j} = Z_p (K_i) / \\left( B_p (K_j) \\cap Z_p(K_i) \\right)" }, { "math_id": 13, "text": "Z_p(K_\\bullet)" }, { "math_id": 14, "text": "B_p(K_\\bullet)" }, { "math_id": 15, "text": "H_p^{i,j}" }, { "math_id": 16, "text": "K_i" }, { "math_id": 17, "text": "K_j" }, { "math_id": 18, "text": "\\gamma \\in H_p (K_i)" }, { "math_id": 19, "text": "\\gamma \\notin H_p^{i-1, i}" }, { "math_id": 20, "text": "\\gamma" }, { "math_id": 21, "text": "K_{j-1} \\to K_j" }, { "math_id": 22, "text": "f_p^{i,j-1} (\\gamma) \\notin H_p^{i-1,j-1}" }, { "math_id": 23, "text": "f_p^{i,j} (\\gamma) \\in H_p^{i-1,j}" }, { "math_id": 24, "text": "i,j" }, { "math_id": 25, "text": "j-i" }, { "math_id": 26, "text": "a_j - a_i" }, { "math_id": 27, "text": "\\mathbb R \\cup \\{\\pm \\infty\\}" }, { "math_id": 28, "text": "[a_i, a_j)" }, { "math_id": 29, "text": "[a_i', \\infty)" }, { "math_id": 30, "text": "F" }, { "math_id": 31, "text": "d(e_{a_j})=e_{a_i}" }, { "math_id": 32, "text": "d(e_{a'_i})=0" }, { "math_id": 33, "text": "(a_i, a_j)" }, { "math_id": 34, "text": "\\left( \\mathbb R \\cup \\{\\pm \\infty \\} \\right)^2" }, { "math_id": 35, "text": "\\frac{1}{\\sqrt 2}" } ]
https://en.wikipedia.org/wiki?curid=73493845
73498718
Dichromatic symmetry
Two-colour symmetry (examples, history and dimensional counts) Dichromatic symmetry, also referred to as antisymmetry, black-and-white symmetry, magnetic symmetry, counterchange symmetry or dichroic symmetry, is a symmetry operation which reverses an object to its opposite. A more precise definition is "operations of antisymmetry transform objects possessing two possible values of a given property from one value to the other." Dichromatic symmetry refers specifically to two-coloured symmetry; this can be extended to three or more colours in which case it is termed polychromatic symmetry. A general term for dichromatic and polychromatic symmetry is simply colour symmetry. Dichromatic symmetry is used to describe magnetic crystals and in other areas of physics, such as time reversal, which require two-valued symmetry operations. Examples. A simple example is to take a white object, such as a triangle, and apply a colour change resulting in a black triangle. Applying the colour change once more yields the original white triangle. The colour change, here termed an anti-identity operation (1'), yields the identity operation (1) if performed twice. Another example is to construct an anti-mirror reflection (m') from a mirror reflection (m) and an anti-identity operation (1') executed in either order. The m' operation can then be used to construct the antisymmetry point group 3m' of a dichromatic triangle. There are no mirror reflection (m) operations for the dichromatic triangle, as there would be if all the smaller component triangles were coloured white. However, by introducing the anti-mirror reflection (m') operation the full dihedral D3 symmetry is restored. The six operations making up the dichromatic D3 (3m') point group are the identity (1), the two threefold rotations (by 120° and 240°), and the three anti-mirror reflections (m'). Note that the vertex numbers do not form part of the triangle being operated on - they are shown to keep track of where the vertices end up after each operation. History. In 1930 Heinrich Heesch was the first person to formally postulate an antisymmetry operation in the context of examining the 3D space groups in 4D. Heesch's work was influenced by Weber's 1929 paper on black-and-white colouring of 2D bands. In 1935-1936 H.J. Woods published a series of four papers with the title "The geometrical basis of pattern design". The last of these was devoted to counterchange symmetry, and in it the 46 dichromatic 2D point groups were derived for the first time. The work of Heesch and Woods was not influential at the time, and the subject of dichromatic symmetry did not start to become important until the publication of A.V. Shubnikov's book "Symmetry and antisymmetry of finite figures" in 1951. Thereafter the subject developed rapidly, initially in Russia but subsequently in many other countries, because of its importance in magnetic structures and other physical fields. Dimensional counts. The table below gives the number of ordinary and dichromatic groups by dimension. The Bohm symbol formula_0 is used to denote the number of groups where formula_1 = overall dimension, formula_2 = lattice dimension and formula_3 = number of antisymmetry operation types. formula_4 for dichromatic groups with a single antisymmetry operation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
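The closure of these six operations can be checked directly by encoding each one as a vertex permutation of the triangle together with a colour-change flag; the encoding below is our own illustrative construction and is not taken from the cited literature.
```python
from itertools import product

identity = (0, 1, 2)
rot120   = (1, 2, 0)                 # anticlockwise rotation by 120 degrees
rot240   = (2, 0, 1)
m1, m2, m3 = (0, 2, 1), (2, 1, 0), (1, 0, 2)   # the three mirror lines of the triangle

ops = {                               # (label, permutation, colour flip): 0 = keep, 1 = swap colours
    ("1",   identity, 0), ("3",   rot120, 0), ("3^2", rot240, 0),
    ("m1'", m1, 1),       ("m2'", m2, 1),     ("m3'", m3, 1),
}

def compose(p, q):
    """Apply q first, then p; the colour flips add modulo 2."""
    (_, pp, pf), (_, qp, qf) = p, q
    return tuple(pp[qp[i]] for i in range(3)), (pf + qf) % 2

perms_and_flips = {(perm, flip) for _, perm, flip in ops}
closed = all(compose(p, q) in perms_and_flips for p, q in product(ops, ops))
print(closed)   # True: the six operations form a group isomorphic to D3
```
Because the colour flags add modulo 2, composing two anti-mirror reflections yields an ordinary rotation, exactly as in the restored dihedral D3 symmetry described above.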
[ { "math_id": 0, "text": "G_{ol}^a" }, { "math_id": 1, "text": "o" }, { "math_id": 2, "text": "l" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "a = 1" } ]
https://en.wikipedia.org/wiki?curid=73498718
73500095
Carving width
Graph width parameter In graph theory, the carving width of a graph is a number, defined from the graph, that describes the number of edges separating the clusters in a hierarchical clustering of the graph vertices. Definition and examples. The carving width is defined in terms of hierarchical clusterings of the vertices of a given graph, called "carvings". A carving can be described as an unrooted binary tree whose leaves are labeled with the vertices of the given graph. Removing any edge from this tree partitions the tree into two subtrees, and correspondingly partitions the vertices of the tree into two clusters. The vertex clusters, formed in this way, constitute a laminar set family: any two vertex clusters (not just the two complementary clusters formed by removing the same edge) are either disjoint, or one is contained in the other. The width of a carving, defined in this way, is the maximum number of edges that connect two complementary clusters. The carving width of the graph is the minimum width of any hierarchical clustering. The graphs of carving width one are exactly the matchings. The graphs of carving width two are exactly those formed from disjoint unions of path graphs and cycle graphs. The graphs of carving width three are the subcubic partial 2-trees. This means that their maximum degree is three and that they are subgraphs of series-parallel graphs. All other graphs have carving width at least four. Computational complexity. Carving width is NP-hard in general, but may be computed in polynomial time in planar graphs. It may be approximated to within a constant of the same approximation ratio as balanced cuts, for which the current best approximation ratio is formula_0. It is also fixed-parameter tractable: for any fixed formula_1, testing whether the carving width is at most formula_1, and if so finding a hierarchical clustering that realizes that width, can be performed in linear time. In general, computing the carving width exactly, on a multigraph with formula_2 vertices and formula_3 edges, may be done in time formula_4. Related parameters. The carving width is only one of several graph width parameters that measure how tree-like a given graph is. Others include the treewidth and branchwidth. The branchwidth of a graph is defined similarly to carving width, using hierarchical clusterings, but of the edges of a graph rather than of its vertices; these are called branch-decompositions. A carving of a graph can be converted into a branch decomposition by attaching each graph edge to one of its two endpoints, and expanding each leaf of a carving into a subtree representing its attached edges. Using this construction, it can be shown that for any graph, the carving width is greater than or equal to half the branch width, and is less than or equal to the degree times the branchwidth. Because treewidth and branchwidth are always within constant factors of each other, similar bounds can be used to relate carving width to treewidth. Another width parameter, defined by the numbers of edges spanning cuts in a graph, is its cutwidth, defined using a linear ordering on the vertices of a graph and the system of partitions separating earlier from later vertices in this ordering. Unlike carving width, this system of partitions does not include a partition separating each vertex from the remaining vertices, so (despite using a more restricted class of families of cuts) the cutwidth can be smaller than the carving width. 
However, the carving width is always at most the maximum of the cutwidth and the maximum degree of a graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
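The definition can be made concrete with a short routine that computes the width of a given carving: for each edge of the carving tree it counts the graph edges crossing the induced bipartition and returns the maximum. The function and input conventions below are illustrative, not from any graph library.
```python
def carving_tree_width(graph_edges, tree_adj, leaves):
    """graph_edges: list of (u, v) pairs on the graph's vertices.
    tree_adj: adjacency dict of an unrooted binary tree whose leaves are the graph's vertices.
    leaves: the set of tree nodes that are leaves (i.e. the graph's vertices)."""
    def leaves_on_one_side(a, b):
        # Collect the leaves reachable from a in the tree after removing tree edge (a, b).
        stack, seen, side = [a], {a, b}, set()
        while stack:
            x = stack.pop()
            if x in leaves:
                side.add(x)
            for y in tree_adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return side

    width, done = 0, set()
    for a in tree_adj:
        for b in tree_adj[a]:
            if (b, a) in done:
                continue
            done.add((a, b))
            side = leaves_on_one_side(a, b)
            crossing = sum((u in side) != (v in side) for u, v in graph_edges)
            width = max(width, crossing)
    return width

# Example: a 4-cycle a-b-c-d with the carving that clusters {a, b} and {c, d}.
graph = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
tree = {"a": ["x"], "b": ["x"], "c": ["y"], "d": ["y"],
        "x": ["a", "b", "y"], "y": ["c", "d", "x"]}
print(carving_tree_width(graph, tree, {"a", "b", "c", "d"}))   # 2
```
The example realises the statement above that cycles have carving width two: this carving of the 4-cycle achieves width 2.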
[ { "math_id": 0, "text": "O(\\sqrt{\\log n})" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "O(2^n n^3\\log n\\log\\log n\\log m)" } ]
https://en.wikipedia.org/wiki?curid=73500095
73506123
Sphericity (graph theory)
In graph theory, the sphericity of a graph is a graph invariant defined to be the smallest dimension of Euclidean space required to realize the graph as an intersection graph of unit spheres. The sphericity of a graph is a generalization of the boxicity and cubicity invariants defined by F.S. Roberts in the late 1960s. The concept of sphericity was first introduced by Hiroshi Maehara in the early 1980s. Definition. Let formula_0 be a graph. Then the "sphericity" of formula_0, denoted by formula_1, is the smallest integer formula_2 such that formula_0 can be realized as an intersection graph of unit spheres in formula_2-dimensional Euclidean space formula_3. Sphericity can also be defined using the language of space graphs as follows. For a finite set of points in some formula_2-dimensional Euclidean space, a "space graph" is built by connecting pairs of points with a line segment when their Euclidean distance is less than some specified constant. Then the sphericity of a graph formula_0 is the minimum formula_2 such that formula_0 is isomorphic to a space graph in formula_3. Graphs of sphericity 1 are known as interval graphs or indifference graphs. Graphs of sphericity 2 are known as unit disk graphs. Bounds. The sphericity of certain graph classes can be computed exactly. Sphericities for several graph classes were given by Maehara on page 56 of his original paper on the topic. The most general known upper bound on sphericity is as follows. If the graph is not complete, then formula_4, where formula_5 is the clique number of formula_0 and formula_6 denotes the number of vertices of formula_7
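The space-graph formulation lends itself to a direct illustration: the helper below (our own construction, not a library routine) joins two points of a finite set whenever their Euclidean distance is below a threshold, so a graph has sphericity at most formula_2 exactly when it is isomorphic to such a space graph for some point set in formula_3.
```python
from itertools import combinations
import math

def space_graph(points, threshold=1.0):
    """points: dict mapping labels to coordinate tuples; returns the edge set of the space graph."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return {(u, v) for u, v in combinations(points, 2)
            if dist(points[u], points[v]) < threshold}

# A 4-cycle realised in the plane, so its sphericity is at most 2 (it is a unit disk graph).
pts = {"a": (0.0, 0.0), "b": (0.9, 0.0), "c": (0.9, 0.9), "d": (0.0, 0.9)}
print(sorted(space_graph(pts)))   # the four sides appear as edges, but not the diagonals
```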
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\operatorname{sph}(G)" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\mathbb R^n" }, { "math_id": 4, "text": "\\operatorname{sph}(G) \\leq |G| - \\omega(G)" }, { "math_id": 5, "text": "\\omega(G)" }, { "math_id": 6, "text": "|G|" }, { "math_id": 7, "text": "G." } ]
https://en.wikipedia.org/wiki?curid=73506123
73506350
Cubicity
Graph invariant In graph theory, cubicity is a graph invariant defined to be the smallest dimension such that a graph can be realized as an intersection graph of unit cubes in Euclidean space. Cubicity was introduced by Fred S. Roberts in 1969 along with a related invariant called boxicity that considers the smallest dimension needed to represent a graph as an intersection graph of axis-parallel rectangles in Euclidean space. Definition. Let formula_0 be a graph. Then the "cubicity" of formula_0, denoted by formula_1, is the smallest integer formula_2 such that formula_0 can be realized as an intersection graph of axis-parallel unit cubes in formula_2-dimensional Euclidean space. The cubicity of a graph is closely related to the boxicity of a graph, denoted formula_3. The definition of boxicity is essentially the same as cubicity, except that it uses axis-parallel rectangles instead of cubes. Since a cube is a special case of a rectangle, the cubicity of a graph is always an upper bound for the boxicity of a graph. In the other direction, it can be shown that for any graph formula_0 on formula_4 vertices, the inequality formula_5 holds, where formula_6 denotes the ceiling function, i.e., the smallest integer greater than or equal to formula_7.
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\operatorname{cub} (G)" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\operatorname{box} (G)" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "\\operatorname{cub} (G) \\leq \\lceil \\log_2 n \\rceil \\operatorname{box} (G)" }, { "math_id": 6, "text": "\\lceil x \\rceil" }, { "math_id": 7, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=73506350
73516582
Synthetic nervous system
Computational neuroscience model Synthetic Nervous System (SNS) is a computational neuroscience model that may be developed with the Functional Subnetwork Approach (FSA) to create biologically plausible models of circuits in a nervous system. The FSA enables the direct analytical tuning of dynamical networks that perform specific operations within the nervous system without the need for global optimization methods like genetic algorithms and reinforcement learning. The primary use case for a SNS is system control, where the system is most often a simulated biomechanical model or a physical robotic platform. An SNS is a form of a neural network much like artificial neural networks (ANNs), convolutional neural networks (CNN), and recurrent neural networks (RNN). The building blocks for each of these neural networks is a series of nodes and connections denoted as neurons and synapses. More conventional artificial neural networks rely on training phases where they use large data sets to form correlations and thus “learn” to identify a given object or pattern. When done properly this training results in systems that can produce a desired result, sometimes with impressive accuracy. However, the systems themselves are typically “black boxes” meaning there is no readily distinguishable mapping between structure and function of the network. This makes it difficult to alter the function, without simply starting over, or extract biological meaning except in specialized cases. The SNS method differentiates itself by using details of both structure and function of biological nervous systems. The neurons and synapse connections are intentionally designed rather than iteratively changed as part of a learning algorithm. As in many other computational neuroscience models (Rybak, Eliasmith), the details of a neural model are informed by experimental data wherever possible. Not every study can measure every parameter of the network under investigation, requiring the modeler to make assumptions regarding plausible parameter values. Rybak uses a sampling method where each node is composed of many neurons and each particular neuron’s parameters are pulled from a probability distribution. Eliasmith uses what they call the Neural Engineering Framework (NEF) in which the user specifies the functions of the network and the synaptic and neural properties are learned over time. SNS follows a similar approach via the Functional Subnetwork Approach (FSA). FSA allows parameters within the network (e.g., membrane conductances, synaptic conductances) to be designed analytically based on their intended function. As a result, it is possible to use this approach to directly assemble networks that perform basic functions, like addition or subtraction, as well as dynamical operations like differentiation and integration. Background and History of Synthetic Nervous Systems. Background. The details of the underlying control networks for many biological systems are not very well understood. However, recent advancements in neuroscience tools and techniques have clarified the cellular and biophysical mechanisms of these networks, and their operation during behavior in complex environments. Although there is a long-standing interest in biologically-inspired robots and robotic platforms, there is a recent interest in incorporating features of biomechanics and neural control, e.g., biomimicry. The SNS method uses data from neuroscience in control systems for neuromechanical simulations and robots. 
Designing both a robot’s mechanics and controller to capture key aspects of a particular animal may lead to more flexible functionality while suggesting new hypotheses for how the animal’s nervous system works. Keeping neural models simple facilitates analysis, real time operation, and tuning. To this end, SNSs primarily model neurons as leaky integrators, which are reasonable approximations of sub-threshold passive membrane dynamics. The leaky integrator also models non-spiking interneurons which contribute to motor control in some invertebrates (locust, stick insect, "C. elegans"). If spiking needs to be incorporated into the model, nodes may be represented using the leaky integrate-and-fire models. In addition, other conductances like those of the Hodgkin-Huxley model can be incorporated into the model. A model may be initialized with simple components (e.g., leaky integrators), and then refined to incorporate additional biological detail. The modeler may then increase or decrease the level of biological detail depending upon the intended application. Keeping models simple in this way offers several advantages. While the neuroscientific models are typically simplified for SNS, the method is flexible enough that more features can be incorporated. Consequently, the SNS method can accommodate demand-driven complexity, only adding features specifically where they are needed. For example, persistent sodium channels can be added to just two neurons in a neural circuit to create a half-center oscillator pattern generator without changing the other neurons in the circuit. While these additions may increase computational cost, they grant the system the ability to perform a wider array of interesting behaviors. History. The term “synthetic nervous system” (SNS) has appeared in the literature since the year 2000 to describe several different computational frameworks for mimicking the functionality of biological nervous systems. Cynthia Breazeal developed a social robot named “Kismet” while at MIT in the early 2000s. She used the term SNS to refer to her biologically-inspired hierarchical model of cognition, which included systems for low-level sensory feature extraction, attention, perception, motivation, behavior, and motor output. Using this framework, Kismet could respond to people by abstracting its sensory information into motivation for responsive behaviors and the corresponding motor output. In 2005, Inman Harvey used the term in a review article on his field, Evolutionary Robotics. In his article, Harvey uses the term SNS to refer to the evolved neural controller for a simulated agent. He does not explicitly define the term SNS; instead, he uses the term to differentiate the evolved neural controller from one created "via" alternative approaches, e.g., multi-layer perceptron (MLP) networks. In 2008, Thomas R. Insel, MD, the director of the National Institute of Mental Health, was quoted in an American Academy of Neurology interview calling for a “clear moon shot…[to motivate] a decade of new discovery [and] basic research on brain anatomy”. As part of that interview, Dr. Insel suggested building a “synthetic nervous system” as one such motivational moon shot to drive ongoing and future research. The technical details of what such a SNS would entail were not described. An article published as part of the International Work-Conference on Artificial Neural Networks (IWANN) proposes a “synthetic nervous system” as an alternative to artificial neural networks (ANNs) based in machine learning.
In particular, SNS should be able to include or learn new information without forgetting what it has already learned. However, the authors do not propose a computational neuroscience framework for constructing such networks. Instead, they propose a homeostatic network of the robot’s “needs”, in which the robot takes actions to satisfy its needs and return to homeostasis. Over time, the robot learns which actions to take in response to its needs. A dissertation from Prof. Joseph Ayers’ lab at Northeastern University uses a similar term in its title but never explicitly defines it. The topic of the dissertation is “RoboLobster, a biomimetic robot controlled by an electronic nervous system simulation”. Other publications from Prof. Ayers use the term “electronic nervous system” (ENS) to describe similar work. In each of these studies, Prof. Ayers uses a robot that is controlled by a network of simplified dynamical neural models whose structure mimics specific networks from the model organism. The choice of neural model reflects a balance between simulating the dynamics of the nervous system, which motivates mathematical complexity, and ensuring the simulation runs in real time, which motivates mathematical simplicity. A 2017 research article from Prof. Alexander Hunt, Dr. Nicholas Szczecinski, and Prof. Roger Quinn uses the term SNS and implicitly define it as “neural [or] neuro-mechanical models…composed of non-spiking leaky integrator neuron models”. Similar to work by Ayers et al., Hunt et al. apply the term SNS to refer to a simplified dynamical simulation of neurons and synapses used in the closed-loop control of robotic hardware. Subsequent articles by these authors present the Functional Subnetwork Approach for tuning SNS constructed from these and other simplified dynamical neural models (i.e., leaky integrate-and-fire), as well as further SNS models of the nervous system. Comparing the diversity of works that use the term SNS produces an implicit definition of SNS. Comparison to Other Neural Networks. SNSs share some features with machine learning networks like Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN). All of these networks are composed of neurons and synapses inspired in some way by biological nervous systems. These components are used to build neural circuits with the express purpose of accomplishing a specific task. ANN simply refers to a collection of nodes (neurons) connected such that they loosely model a biological brain. This is a rather broad definition and as a consequence there are many subcategories of ANN, two of which are CNN and RNN. CNNs are primarily used for image recognition and classification. Their layer-to-layer connections implement convolutional kernels across small areas of the image, which map the input to the system (typically an image) onto a collection of features. ANNs and CNNs are only loosely associated with SNS in that they share the same general building blocks of neurons and synapses, though the methods used to model each component vary between the networks. Of the three, RNNs are the most closely related to SNS. SNSs use the same leaky-integrator neuron models utilized in RNNs. This is advantageous as neurons inherently act as low pass filters, which is useful for robotic applications where such filtering is often applied to reduce noise for both sensing and control purposes. Both models also exhibit dynamic responses to inputs.
While predicting the responses of a complicated network can be difficult, the dynamics of each node are relatively simple in that each is a system of first order differential equations (as opposed to fractional derivatives). The key differences that distinguish SNS from these neural networks are the synaptic connections and the general architecture of the neural circuit. RNN structures generally present as large, highly connected or even all-to-all connected layers of neurons. Instead of these layers, SNS relies on functional subnetworks which are tuned to perform specific operations and then assembled into larger networks with explainable functions. These are significantly more tractable than a typical machine learning network. The tradeoff of SNS is that it typically takes more time to design and tune the network but it does not require a training phase involving large amounts of computing power and training data. The other key difference is that SNS synapses are conductance-based rather than current-based, which makes the dynamics non-linear, unlike an RNN. This allows for the modelling of modulatory neural pathways since the synapses can alter the net membrane conductance of a postsynaptic neuron without injecting current. It also enables the functional subnetwork approach to encompass addition, subtraction, multiplication, division, differentiation, and integration operations using the same family of functions. Neuron and Synapse Models. Non-spiking Neuron. SNS networks are composed mainly of non-spiking leaky integrator nodes to which complexity may be added if needed. Such dynamics model non-spiking neurons like those studied extensively in invertebrates (e.g., nematode, locust, cockroach) or may represent the mean activity of a population of spiking neurons. The dynamics of the membrane voltage formula_0 of a non-spiking neuron are governed by the differential equation formula_1 where formula_2 is the membrane capacitance, formula_3 is an arbitrary current injected into the cell e.g., "via" a current clamp, and formula_4 and formula_5 are the leak and synaptic currents, respectively. The leak current formula_6 where formula_7 is the conductance of the cell membrane and formula_8 is the rest potential across the cell membrane. The synaptic current formula_9 where formula_10 is the number of synapses that impinge on the cell, formula_11 is the instantaneous synaptic conductance of the formula_12 incoming synapse, and formula_13 is the reversal potential of the formula_12 incoming synapse. Graded Chemical Synapse. Non-spiking neurons communicate "via" graded chemical synapses. Typically, synaptic conductances are modeled with a continuous function like a sigmoid but in an SNS this conductance is approximated by the following piecewise-linear function: formula_14 As shown in the corresponding figure, this allows the conductance to vary between 0 and a prescribed or designed maximum value (formula_15) depending on the presynaptic potential (formula_16). A piecewise approach is used to ensure exactly 0 conductance, and therefore current, at low activation potentials. This approximates a feature of spiking neuron activity in that no information is transmitted when the neuron isn’t spiking/active. Furthermore, this approximation eliminates transcendental functions, enabling analytical calculations of dynamical properties.
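A minimal simulation sketch of the two model components above is given below, using forward Euler integration; all parameter values are illustrative choices rather than values from the cited work. A step current depolarises the presynaptic leaky integrator, and the piecewise-linear graded synapse then depolarises the postsynaptic cell toward the synaptic reversal potential.
```python
import numpy as np

Cm, Gm, Er = 5.0, 1.0, -60.0          # membrane capacitance (nF), leak conductance (uS), rest (mV)
E_lo, E_hi = -60.0, -40.0             # presynaptic range over which the synapse activates (mV)
g_max, E_syn = 0.5, -40.0             # maximal synaptic conductance (uS) and reversal potential (mV)

def g_syn(V_pre):
    """Piecewise-linear graded synapse conductance (formula_14)."""
    if V_pre < E_lo:
        return 0.0
    if V_pre > E_hi:
        return g_max
    return g_max * (V_pre - E_lo) / (E_hi - E_lo)

dt, T = 0.1, 500.0                    # time step and duration (ms)
t = np.arange(0.0, T, dt)
V_pre = np.full_like(t, Er)
V_post = np.full_like(t, Er)
I_app = np.where(t > 100.0, 15.0, 0.0)   # step current injected into the presynaptic cell (nA)

for k in range(1, len(t)):
    dV_pre = (Gm * (Er - V_pre[k-1]) + I_app[k-1]) / Cm
    I_syn = g_syn(V_pre[k-1]) * (E_syn - V_post[k-1])
    dV_post = (Gm * (Er - V_post[k-1]) + I_syn) / Cm
    V_pre[k] = V_pre[k-1] + dt * dV_pre
    V_post[k] = V_post[k-1] + dt * dV_post

print(V_pre[-1], V_post[-1])          # both cells settle at steady depolarised potentials
```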
While this does prevent the network activity from being differentiable, since no gradient-based learning methods are employed (like backpropagation), this is not a drawback. Persistent Sodium Current. It was previously mentioned that additional ion channels could be incorporated to elicit more interesting behaviors from non-spiking neuron models. The persistent sodium current is one such addition. A persistent sodium current can depolarize a membrane enough to induce action potential firings at sub-threshold membrane potentials while also being slow to inactivate. In the context of neuroscientific models, this is useful for applications such as pattern generators where it is desired that a neuron’s potential can be rapidly increased and remain elevated until inhibited by another neural signal or applied current. The model for the behavior of this channel is based on the m and h gating present in the full Hodgkin-Huxley model. The main difference is that this model only uses one m gate instead of three. The equations governing this behavior can be found here and in this paper. Integrate-and-Fire. Unless explicitly studying or utilizing the Hodgkin-Huxley model for action potentials, spiking neurons can be modeled via the integrate-and-fire method. This is significantly more computationally efficient than Hodgkin-Huxley, making it easier to simulate much larger networks. In particular, leaky integrate-and-fire (LIF) neurons are used for SNS. As the name suggests, this model accounts for membrane potential leak behavior representing ion diffusion across the membrane. This integrate-and-fire model is very similar to the non-spiking neuron described above with the key addition of a firing threshold parameter. When the neuron potential depolarizes to this threshold the neuron "spikes" by instantaneously resetting to its resting potential. While these do not provide the same diversity of dynamical responses as Hodgkin-Huxley, they are usually sufficient for SNS applications and can be analyzed mathematically, which is crucial for network tractability. Please refer to the linked Wikipedia article and paper for the equations associated with the LIF neuron model. Izhikevich Model. Spiking neurons can also be modeled in a computationally efficient manner without sacrificing the rich behaviors exhibited in biological neural activity. The Izhikevich model can produce spiking behaviors approximately as plausible as Hodgkin-Huxley but with comparable computational efficiency to the integrate-and-fire method. To accomplish this, Izhikevich reduces the Hodgkin-Huxley model to a two-dimensional set of ordinary differential equations via bifurcation methods. These can be seen here: formula_17 formula_18 where the membrane potential resets after spiking as described by: formula_19 formula_20 is a dimensionless variable representing the membrane potential. formula_21 is a dimensionless variable representing membrane recovery, which accounts for the ion current behaviors, specifically those of formula_22 and formula_23. formula_24, formula_25, formula_26, and formula_27 are dimensionless parameters that can be altered to shape the signal into different neuronal response patterns. This enables chattering, bursting, and continuous spiking with frequency adaptation, which constitute a richer array of behaviors than the basic integrate-and-fire method can produce.
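For reference, the Izhikevich equations and reset rule quoted above can be integrated with a few lines of forward Euler; the parameter set (0.02, 0.2, −65, 8) is the commonly quoted regular-spiking example, while the input current and step size here are illustrative choices.
```python
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameter set
dt, T, I = 0.25, 1000.0, 10.0           # step (ms), duration (ms), constant input
steps = int(T / dt)

v, u = c, b * c                         # start at the reset potential
spike_times = []
for k in range(steps):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                       # spike: record the time and apply the reset rule
        spike_times.append(k * dt)
        v, u = c, u + d

print(len(spike_times), "spikes in", T, "ms")   # tonic spiking with frequency adaptation
```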
The coefficients in the formula_28 equation were acquired via data fitting to a particular neuron’s spiking patterns (a cortical neuron in this case) to get the potentials in the mV range and time on the scale of ms. It is possible to use other neurons to fit the spike initiation dynamics; they will simply produce different coefficients. For more information on the Izhikevich model and the bifurcation methods used to develop it, please consult the references. Rulkov Map. The Rulkov map forgoes complex ion channel-based models composed of many non-linear differential equations in favor of a two-dimensional map. This map expresses slow and fast dynamics, which is vital for representing both slow oscillations and fast spikes and bursts. The model is shown below: formula_29 formula_30 formula_31 is the fast dynamical variable and represents the membrane potential while formula_32 is the slow dynamical variable and does not have explicit biological meaning. formula_33 and formula_34 are used to describe external influences and help model the dynamics of stimuli like injected/synaptic and tonic/bias currents. Small values of formula_35 result in slow changes in formula_32 that account for its slower behavior. Assuming a constant external influence (formula_36), the function formula_37 can be written as the following discontinuous function: formula_38 In this case formula_39 is the map control parameter and can be used, along with formula_34, to shape the output behavior of the neuron. More detail on the Rulkov map can be found on its Wikipedia page and in the references. Functional Subnetwork Approach (FSA). Functional Subnetworks are the building blocks of SNSs. They are composed of neurons and synapses modeled from the equations described above as well as other neuroscience models. When tuned properly, as shown in the following section, they are capable of performing mathematical calculations as well as dynamical operations. This process is different from other artificial neural networks in that the tuning exploits the network structure. The artificial neural networks mentioned previously utilize all-to-all connectivity between layers but in a SNS there are no distinct layers. Rather, the synaptic connections are methodically designed with an express function in mind. This results in fewer synaptic connections without sacrificing network effectiveness. Tuned subnetworks can be assembled into larger networks to form the SNS itself. Assembly can be done in series or in parallel, much like adding components to an electrical circuit. The resulting neural network is reminiscent of a peripheral nervous system, rather than a brain-like network (ANN). Tuning with the Functional Subnetwork Approach (FSA). The leaky-integrator model above can be converted into a tuning-friendly equation by normalizing the membrane potential to read 0 when at rest (formula_40) and by introducing the formula_41 parameter. formula_41 is the potential operating range of the graded chemical synapse and is equal to formula_42. Making these changes and then solving for the steady-state activation (formula_43) of the neuron (when formula_44) gives the following equation: formula_45 formula_46 is determined by the difference between cell resting potential (formula_47) and the synaptic reversal potential (formula_48). This equation can be used to tune synapse conductances for specific points in the network’s operation where the neurons are in a steady state or have a known/designed membrane potential (formula_49).
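The steady-state activation equation (formula_45) can be evaluated directly, as in the sketch below; the function name and numbers are illustrative only.
```python
def steady_state_activation(U_pre, g_s, dE_s, I_app=0.0, R=20.0):
    """Evaluate formula_45 for one neuron. U_pre, g_s and dE_s are per-synapse lists of
    presynaptic activations (mV), maximal conductances (relative to the leak conductance)
    and synaptic potential differences Delta E_s (mV)."""
    num = sum(g / R * U * dE for g, U, dE in zip(g_s, U_pre, dE_s)) + I_app
    den = 1.0 + sum(g / R * U for g, U in zip(g_s, U_pre))
    return num / den

# One excitatory synapse (Delta E_s = 100 mV, g_s = 0.25) driven at U_pre = R = 20 mV:
print(steady_state_activation([20.0], [0.25], [100.0]))   # 20.0 mV
```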
In this way it is possible to intentionally and directly set the state of the network during key moments in its operation sequence so that it produces a desired action or behavior. Tuning a subnetwork requires the use of signal transmission and/or modulation pathways. Signal transmission pathways make a postsynaptic neuron’s potential proportional to that of the presynaptic neuron(s). The ratio of the synaptic proportionality is referred to as formula_50. This can be used to calculate the maximum conductance value for a synapse (formula_51) via the equation: formula_52 formula_51 is used in the graded chemical synapse model discussed previously. Tuning a synapse using formula_50 instead of the steady-state activation equation is practical when a specific relationship between a small subset of neurons is desired. For example, if a network requires that the postsynaptic neuron membrane potential be half that of the presynaptic neuron, formula_50 can be set to formula_53 and plugged into the equation. The signal modulation pathway is used to modulate neuron sensitivity. This allows for adaptive responses to various inputs. In this pathway, formula_54 is used instead of formula_50. Technically both are defined as the steady state postsynaptic potential (formula_55) divided by the presynaptic potential (formula_49), the ratio mentioned above, for a given applied current. The letter formula_26 is used for modulation to indicate that the neuron sensitivity is changing and is therefore not the same as formula_56, which represents a static relationship. For a modulation pathway, formula_51 can be calculated as: formula_57 In order to minimize hyperpolarization of the postsynaptic neuron, formula_46 should be kept negative and as close to 0 as possible (or zeroed entirely). Arithmetic Networks. All arithmetic subnetwork tuning methods were taken from. Addition. Addition subnetworks are composed of one postsynaptic neuron connected to presynaptic neurons via excitatory transmission pathways. The purpose of the network is to enable an approximation of linear addition of the incoming presynaptic signals. The subnetwork can be tuned using either of the methods outlined previously. The addition behavior can be weighted using formula_50. This type of network may represent positive feedback mechanisms in biology. To capture the addition properly, formula_51 must be small but it cannot be 0 or the synapse will effectively be severed. Instead, formula_46 is maximized, which results in small values of formula_51. Subtraction. Subtraction subnetworks are similar to addition networks except the presynaptic potentials being subtracted travel to the postsynaptic cell via inhibitory transmission pathways. This may approximate negative feedback mechanisms in the nervous system. Unlike with depolarizing ions, hyperpolarizing ion potentials tend to be much closer to the membrane resting potential. This results in smaller formula_46 values, so it is difficult to minimize formula_58 like in the addition subnetwork. The easiest way to properly tune a subtraction network is to design the parameters to fit a specific scenario. This process was already described using the steady state activation equation. formula_50 can also be used as in the addition subnetwork but since formula_58 cannot be minimized to the same degree, the effect is not as precise. The equation to solve the inhibitory pathway in this manner is as follows: formula_59 The excitation pathway synaptic potential difference (formula_60) must first be determined.
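The transmission-pathway design equations can be exercised with a short worked example; the parameter values below are illustrative rather than taken from the cited papers. The excitatory conductance follows formula_52 and the matching inhibitory conductance of a subtraction pair follows formula_59, and both come out positive, as required.
```python
R = 20.0          # operating range of neuron activation (mV)
k = 1.0           # desired proportional gain of the pathway

dEs_exc = 100.0   # excitatory synaptic potential difference Delta E_s,1 (mV)
dEs_inh = -40.0   # inhibitory synaptic potential difference Delta E_s,2 (mV); must be negative

g_exc = k * R / (dEs_exc - k * R)                            # formula_52
g_inh = (dEs_exc / dEs_inh) * (-k * R) / (dEs_exc - k * R)   # formula_59

print(round(g_exc, 3), round(g_inh, 3))   # 0.25 and 0.625, relative to a unit leak conductance
```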
It is vital that formula_61 for the inhibitory pathway be negative or solving the equation will produce a negative conductance which is biologically impossible. Division. The physical structure of a division subnetwork is the same as a subtraction subnetwork except the inhibitory synapse is modulatory, rather than transmission. The division performed in this network follows the form below where the transmitted signal formula_62 is divided by the modulating signal formula_63: formula_64 The excitatory transmission synapse is tuned as described previously. The modulatory reversal potential is right around 0 so the formula_61 from before is cancelled out (set to 0). Setting formula_60 equal to formula_41 and applying these to the steady-state activation equation gives the division equation above once simplified. From here the equation can be used as before and formula_54 can be set such that the network produces the desired division behavior. For example, if it were desired that formula_65 when formula_66, then formula_54 could be set to formula_67. formula_54 values closer to 0 more strongly reduce the postsynaptic neuron’s sensitivity to inputs. Multiplication. Multiplication networks are somewhat similar to division networks but rather than having the modulatory synaptic connection directly between the presynaptic and postsynaptic neuron there is an interneuron in the way. The presynaptic neuron connects to the interneuron via a modulatory pathway and the interneuron connects to the postsynaptic neuron with another modulatory synapse (please see the figure in this section for clarification). This modulation in series results in a network that essentially divides by the inverse, which turns out to be multiplication. The formula_54 parameter between the interneuron and the postsynaptic neuron is 0. This ensures that when its potential (formula_68) is at the maximum value allowed (formula_41), the postsynaptic neuron potential (formula_69) is 0 regardless of the applied current. This makes sense since dividing by a maximum allowed number for a system should result in the lowest possible output. Plugging this into the steady-state activation equation gives the following solution for the synaptic conductance: formula_70 formula_71 cannot be greater than or equal to 0 here as division by zero is undefined and dividing by a positive number gives a negative conductance which is impossible. The less negative formula_71 is, the larger formula_72 is. This means formula_72 must determine formula_71 so as to stay within the confines of biological plausibility. With this synapse designed the rest can be determined using the methods outlined in. The process is rather involved and better suited for an in-depth reading. Dynamic networks. All dynamic subnetwork tuning methods were taken from. Differentiation. Differentiation networks are nearly the same as subtraction networks with an added dynamical component. The presynaptic neuron that inhibits the postsynaptic neuron is modelled as a physically larger neuron which means it has a greater capacitance than the excitatory synapsing neuron. This increased capacitance results in a neuron that is slower to reach its fully excited state (i.e. formula_73). Subtracting the slow responding neuron signal (with the inhibitory synapse) from the fast responding signal is basically a biological version of numerical differentiation whereby a previous time-"step" is subtracted from the current time-"step".
This network is good for identifying changes in applied current to a network or applied stimulus to a sensory neuron. The equations that detail this behavior are presented in. Integration. The neuron model used for SNS has leak dynamics, meaning a current is always leaking out of the neuron to return it to resting potential. This means a single neuron modelled in this fashion is incapable of storing data. A system of two neurons, however, is capable of this if linked via mutually inhibitory transmission synapses with a marginally stable equilibrium curve. The mutual inhibition means that the activation levels are maintained instead of leaking away and the system state changes continuously for the duration of an applied stimulus (integration). Integration subnetworks, while not necessarily complicated in structure, are the most complex to define and prove. As such, the derivation and proof of marginal stability are worth an in-depth read, as a cursory overview would be insufficient. Applications. Robotic Leg Control. As mentioned previously, the primary application of the Synthetic Nervous System method is robotic control. Within this field, SNSs have largely been used to control the locomotion of legged robots. Many examples of both simulated and physical robots which incorporate SNSs exist in the literature. Brain networks. Synthetic Nervous Systems have also been used to model higher functions in the nervous system than the peripheral networks responsible for locomotion. Some examples of these kinds of SNS have been described in the literature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "C_m\\dfrac{dV}{dt}=I_\\text{leak}+I_\\text{syn}+I_\\text{app}" }, { "math_id": 2, "text": "C_m" }, { "math_id": 3, "text": "I_\\text{app}" }, { "math_id": 4, "text": "I_\\text{leak}" }, { "math_id": 5, "text": "I_\\text{syn}" }, { "math_id": 6, "text": "I_\\text{leak}=G_m*(E_r-V)" }, { "math_id": 7, "text": "G_m" }, { "math_id": 8, "text": "E_r" }, { "math_id": 9, "text": "I_\\text{syn}=\\sum\\limits_{i=1}^n G_{s,i}*(E_{s,i}-V)" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "G_{s,i}" }, { "math_id": 12, "text": "i^{th}" }, { "math_id": 13, "text": "E_{s,i}" }, { "math_id": 14, "text": "G_{s,i} = \\begin{cases} 0 & \\text{if }V_{pre}<E_{lo} \\\\ \n g_{s,i}*\\dfrac{V_{pre}-E_{lo}}{E_{hi}-E_{lo}} & \\text{if }E_{lo}<V_{pre}<E_{hi} \\\\ \n g_{s,i} & \\text{if }V_{pre}>E_{hi}\n \\end{cases}" }, { "math_id": 15, "text": "g_{s,i}" }, { "math_id": 16, "text": "V_{pre}" }, { "math_id": 17, "text": "v'=0.04v^2+5v+140-u+I" }, { "math_id": 18, "text": "u'=a*(bv-u)" }, { "math_id": 19, "text": "\\text{if } v\\geq30mV, \\text{then } \\begin{cases} v\\longleftarrow c \\\\\n u\\longleftarrow u+d\\\\\n \\end{cases}" }, { "math_id": 20, "text": "v" }, { "math_id": 21, "text": "u" }, { "math_id": 22, "text": "Na^+" }, { "math_id": 23, "text": "K^+" }, { "math_id": 24, "text": "a" }, { "math_id": 25, "text": "b" }, { "math_id": 26, "text": "c" }, { "math_id": 27, "text": "d" }, { "math_id": 28, "text": "v'" }, { "math_id": 29, "text": "x_{n+1}=f(x_n,y_n+\\beta_n)" }, { "math_id": 30, "text": "y_{n+1}=y_n-\\mu(x_n+1)+\\mu\\sigma_n" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "y" }, { "math_id": 33, "text": "\\beta" }, { "math_id": 34, "text": "\\sigma" }, { "math_id": 35, "text": "\\mu" }, { "math_id": 36, "text": "\\beta_n=\\beta" }, { "math_id": 37, "text": "f" }, { "math_id": 38, "text": "f(x,y) = \\begin{cases} \\dfrac{\\alpha}{1-x}+y & x\\leq0 \\\\\n \\alpha+y & 0<x<\\alpha+y \\\\\n -1 & x\\geq \\alpha+y\n \\end{cases}" }, { "math_id": 39, "text": "\\alpha" }, { "math_id": 40, "text": "U=V-E_r" }, { "math_id": 41, "text": "R" }, { "math_id": 42, "text": "E_{hi}-E_{lo}" }, { "math_id": 43, "text": "U^*" }, { "math_id": 44, "text": "\\tfrac{dU}{dt}=0" }, { "math_id": 45, "text": "U^*=\\dfrac{\\sum\\limits_{i=1}^n \\dfrac{g_{s,i}}{R}*U_{pre,i}*\\Delta E_{s,i}+I_\\text{app}}\n{1+\\sum\\limits_{i=1}^n \\dfrac{g_{s,i}}{R} U_{pre,i}}" }, { "math_id": 46, "text": "\\Delta E_s" }, { "math_id": 47, "text": "E_r\n" }, { "math_id": 48, "text": "E_s\n" }, { "math_id": 49, "text": "U_{pre}" }, { "math_id": 50, "text": "k_{syn}" }, { "math_id": 51, "text": "g_s" }, { "math_id": 52, "text": "g_s=\\dfrac{k_{syn}R}{\\Delta E_s-k_{syn}R}" }, { "math_id": 53, "text": "\\tfrac{1}{2}" }, { "math_id": 54, "text": "c_{syn}" }, { "math_id": 55, "text": "U^*_{post}" }, { "math_id": 56, "text": "k" }, { "math_id": 57, "text": "g_s=\\dfrac{c_{syn}R-R}{\\Delta E_s-c_{syn}R}" }, { "math_id": 58, "text": "g_{syn}" }, { "math_id": 59, "text": "g_{s,2}=\\dfrac{\\Delta E_{s,1}}{\\Delta E_{s,2}}*\\dfrac{-k_{syn}R}{\\Delta E_{s,1}-k_{syn}R}" }, { "math_id": 60, "text": "\\Delta E_{s,1}" }, { "math_id": 61, "text": "\\Delta E_{s,2}" }, { "math_id": 62, "text": "U_{pre,1}" }, { "math_id": 63, "text": "U_{pre,2}" }, { "math_id": 64, "text": "U^*_{post}=\\dfrac{U_{pre,1}}{1+\\dfrac{1-c_{syn}}{c_{syn}R}*U_{pre,2}}" }, { "math_id": 65, "text": "U^*_{post}=1" }, { "math_id": 66, "text": "U_{pre,2}=R" }, { "math_id": 67, "text": "\\tfrac{1}{R}" }, { "math_id": 
68, "text": "U_{inter}" }, { "math_id": 69, "text": "U_{post}" }, { "math_id": 70, "text": "g_{s,2}=\\dfrac{-R}{\\Delta E_{syn,2}}" }, { "math_id": 71, "text": "\\Delta E_{syn,2}" }, { "math_id": 72, "text": "g_{s,2}" }, { "math_id": 73, "text": "\\tau_2>\\tau_1" }, { "math_id": 74, "text": "E_{lo}" } ]
https://en.wikipedia.org/wiki?curid=73516582
73519955
Direct linear plot
Plot for enzyme kinetics data In biochemistry, the direct linear plot is a graphical method for enzyme kinetics data following the Michaelis–Menten equation. In this plot, observations are not plotted as points, but as "lines" in parameter space with axes formula_0 and formula_1, such that each observation of a rate formula_2 at substrate concentration formula_3 is represented by a straight line with intercept formula_4 on the formula_0 axis and formula_2 on the formula_1 axis. Ideally (in the absence of experimental error) the lines intersect at a unique point formula_5 whose coordinates provide the values of formula_6 and formula_7. Comparison with other plots of the Michaelis–Menten equation. The best known plots of the Michaelis–Menten equation, including the double-reciprocal plot of formula_8 against formula_9, the Hanes plot of formula_10 against formula_11, and the Eadie–Hofstee plot of formula_12 against formula_13 are all plots in observation space, with each observation represented by a point, and the parameters determined from the slope and intercepts of the lines that result. This is also the case for non-linear plots, such as that of formula_12 against formula_11, often wrongly called a "Michaelis-Menten plot", and that of formula_12 against formula_14 used by Michaelis and Menten. In contrast to all of these, the direct linear plot is a plot in parameter space, with observations represented by lines rather than as points. Effect of experimental error. The case illustrated above is idealized, because it ignores the effect of experimental error. In practice, with formula_15 observations, instead of a unique point of intersection, there is a "family" of formula_16 intersection points, with each one giving a separate estimate of formula_17 and formula_18 for the lines drawn for the formula_19 and formula_20 observations. Some of these, when the intersecting lines are almost parallel, will be subject to very large errors, so one must not take the means (weighted or not) as the estimates of formula_6 and formula_7. Instead one can take the medians of each set as estimates formula_21 and formula_22. The great majority of intersection points should occur in the first quadrant (both formula_17 and formula_18 positive). Intersection points in the second quadrant (formula_17 negative and formula_18 positive) do not require any special attention. However, intersection points in the third quadrant (both formula_17 and formula_18 negative) should not be taken at face value, because these can occur if both formula_12 values are large enough to approach formula_1, and indicate that both formula_17 and formula_18 should be taken as infinite and positive: formula_23. The illustration is drawn for just four observations, in the interest of clarity, but in most applications there will be much more than that. Determining the location of the medians by inspection becomes increasingly difficult as the number of observations increases, but that is not a problem if the data are processed computationally. In any case, if the experimental errors are reasonably small, as in Fig. 1b of a study of tyrosine aminotransferase with seven observations, the lines crowd closely enough together around the point formula_24 for this to be located with reasonable precision. Resistance to outliers and incorrect weighting. The major merit of the direct linear plot is that median estimates based on it are highly resistant to the presence of outliers. 
If the underlying distribution of errors in formula_12 is not strictly Gaussian, but contains a small proportion of observations with abnormally large errors, this can have a disastrous effect on many regression methods, whether linear or non-linear, but median estimates are very little affected. In addition, to give satisfactory results regression methods require correct weighting: do the errors formula_25 follow a normal distribution with uniform standard deviation, or uniform coefficient of variation, or something else? This is very rarely investigated, so the weighting is usually based on preconceptions. Atkins and Nimmo made a comparison of different methods of fitting the Michaelis-Menten equation, and concluded that We have therefore concluded that, unless the error is definitely known to be normally distributed and of constant magnitude, Eisenthal and Cornish-Bowden's method is the one to use. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
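A minimal computational sketch of the median-based estimation described above, using only the geometry stated in the lead: each observation (a_i, v_i) defines the line V = v_i(1 + K_m/a_i) in parameter space, every pair of lines is intersected, and the medians of the resulting coordinates are taken as the estimates. The data values are invented, and for brevity the sketch omits the special treatment of third-quadrant intersections discussed above.

```python
from itertools import combinations
from statistics import median

# Invented (a, v) observations: rates v measured at substrate concentrations a.
obs = [(0.5, 2.3), (1.0, 3.6), (2.0, 4.8), (4.0, 6.6), (8.0, 7.7), (16.0, 8.9), (32.0, 9.2)]

def intersection(p, q):
    """Intersection of two observation lines V = v * (1 + Km / a) in (Km, V) space."""
    (ai, vi), (aj, vj) = p, q
    den = aj * vi - ai * vj
    if den == 0:                                  # parallel lines: no finite intersection
        return None
    return ai * aj * (vj - vi) / den, vi * vj * (aj - ai) / den

points = [xy for xy in (intersection(p, q) for p, q in combinations(obs, 2)) if xy is not None]
km_star = median(km for km, _ in points)          # medians are robust to outliers and to
v_star = median(v for _, v in points)             # incorrect weighting assumptions
print(f"Km* = {km_star:.3f}, V* = {v_star:.3f}")
```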
[ { "math_id": 0, "text": "K_\\mathrm{m}" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "v_i" }, { "math_id": 3, "text": "a_i" }, { "math_id": 4, "text": "-a_i" }, { "math_id": 5, "text": "(\\hat{K}_\\mathrm{m}, \\hat{V})" }, { "math_id": 6, "text": "\\hat{K}_\\mathrm{m}" }, { "math_id": 7, "text": "\\hat{V}" }, { "math_id": 8, "text": "1/v" }, { "math_id": 9, "text": "1/a" }, { "math_id": 10, "text": "a/v" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "v" }, { "math_id": 13, "text": "v/a" }, { "math_id": 14, "text": "\\log a" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "n(n-1)/2" }, { "math_id": 17, "text": "K_{\\mathrm{m}_{ij}}" }, { "math_id": 18, "text": "V_{ij}" }, { "math_id": 19, "text": "i\\,\\mathrm{th}" }, { "math_id": 20, "text": "j\\,\\mathrm{th}" }, { "math_id": 21, "text": "K_\\mathrm{m}^*" }, { "math_id": 22, "text": "V^*" }, { "math_id": 23, "text": "K_{\\mathrm{m}_{ij}} \\rightarrow +\\infty, V_{ij} \\rightarrow +\\infty" }, { "math_id": 24, "text": "(K_\\mathrm{m}^*, V^*)" }, { "math_id": 25, "text": "\\varepsilon(v)" } ]
https://en.wikipedia.org/wiki?curid=73519955
73520500
De Vries–Rose Law
The De Vries - Rose law of vision science. The De Vries – Rose law (or Rose – de Vries law) is a principle of vision science named after Hessel de Vries and Albert Rose. De Vries discovered it in 1943 from considerations of quantum efficiency, and Rose developed the idea substantially a few years later. The law says that for visual targets seen against a background luminance formula_0, subject to certain assumptions, the threshold contrast should be inversely proportional to formula_1 (i.e. contrast sensitivity is directly proportional to formula_1). In reality it holds only approximately, at luminance levels between the regimes of "dark light" and Weber's Law. Derivation. Suppose that an achromatic target is viewed against a uniform background luminance formula_0. For the target to be visible there must be sufficient luminance contrast; i.e. the target must be brighter (or darker) than the background by some amount formula_2. If the target is at threshold (i.e. only just visible, or with some specified probability of detection) then the threshold contrast is defined as formula_3. If formula_0 is in the range of photopic vision, then as formula_0 varies we expect formula_4 = constant (Weber's law). Suppose instead that formula_0 is in the scotopic range, when the quantum nature of light might be significant. Vision is initiated by a shower of (visible spectrum) photons coming from both target and background to the eye. The photon emission rate will be subject to some probability distribution, so can be taken to lie in the range formula_5 where formula_6 is the mean of the distribution and formula_7 is the standard deviation. Luminance is directly proportional to shower rate over some sufficient time period, so we can write formula_8, formula_9 for some fixed constant formula_10 and average rates formula_6, formula_11. Then formula_12. Photons from the target and background are a visual signal that the observer must discriminate from noise. The likelihood of visibility will be related to the signal to noise ratio. Imagine that the only noise is the variability of the photon shower, and that the eye is an ideal photon detector. Then the least amount of excess brightness required for the target to be visible will be directly proportional to the greatest accuracy with which the photon rate can be measured. So we can write formula_13 for some fixed constant formula_14. If the photon shower is assumed to obey Poisson statistics, then formula_15. Then formula_16, hence formula_17. Empirical results. The law predicts that if logformula_2 is plotted against logformula_0, the threshold curve for low formula_0 will be a straight line with gradient 1/2, but this is only approximately true in the interval between the very darkest background level (gradient 0 - "dark light"), and daylight conditions (gradient 1 - Weber's law). The portion of approximate validity is termed the De Vries - Rose (or Rose - De Vries) region. Dark light is evidence of neural noise, and Weber's law indicates the presence of a neural gain function. These factors partly account for deviations from the De Vries - Rose law in the intermediate region. The assumption of Poisson statistics places a further limitation on the law's applicability. Using contrast threshold data collected by H.R. Blackwell, and Knoll et al., Crumey has shown that for targets of angular area formula_18 sr against scotopic backgrounds formula_19 cd m-2, the threshold can be accurately modelled as formula_20 for constants formula_21, formula_22. 
For low formula_0 and fixed formula_23 this gives the De Vries - Rose law as formula_24, and for fixed formula_0 it gives Ricco's law, formula_25. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
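To see the two regimes numerically, the sketch below evaluates the empirical model quoted above, C = (r1·B^(-1/4) + r2)^2 / A, over a range of background luminances: the local log-log slope is close to -1/2 at low B (the De Vries - Rose region) and tends towards 0 (constant threshold contrast) as B increases. The constants r1, r2 and the target area A are arbitrary placeholders, not Crumey's fitted values.

```python
import math

# Arbitrary placeholder constants; A is the target angular area in steradians.
r1, r2, A = 1.0e-2, 3.0e-2, 1.0e-3

def threshold_contrast(B):
    """Threshold contrast C = (r1 * B**-0.25 + r2)**2 / A for background luminance B (cd m^-2)."""
    return (r1 * B ** -0.25 + r2) ** 2 / A

# Local log-log slope over each decade of B: about -0.5 in the De Vries - Rose region,
# approaching 0 where the constant term dominates.
for exp in range(-6, 3):
    B1, B2 = 10.0 ** exp, 10.0 ** (exp + 1)
    slope = math.log10(threshold_contrast(B2)) - math.log10(threshold_contrast(B1))
    print(f"B = 1e{exp:+d} cd/m^2   C = {threshold_contrast(B1):.3e}   slope = {slope:+.2f}")
```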
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "\\sqrt{B}" }, { "math_id": 2, "text": "\\Delta B" }, { "math_id": 3, "text": "C = \\Delta B / B " }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "(N \\pm \\delta N)" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\delta N" }, { "math_id": 8, "text": "B = \\phi N" }, { "math_id": 9, "text": "\\Delta B = \\phi \\Delta N" }, { "math_id": 10, "text": " \\phi" }, { "math_id": 11, "text": "\\Delta N" }, { "math_id": 12, "text": "C = \\Delta N /N" }, { "math_id": 13, "text": "\\Delta N = \\rho\\delta N" }, { "math_id": 14, "text": "\\rho" }, { "math_id": 15, "text": " \\delta N = \\sqrt{N}" }, { "math_id": 16, "text": "\\Delta B = \\phi \\rho \\sqrt{N} = \\phi \\rho \\sqrt{B/\\phi}" }, { "math_id": 17, "text": "C \\propto 1/\\sqrt{B}" }, { "math_id": 18, "text": "A \\leq 10^{-2}" }, { "math_id": 19, "text": "B \\geq 10^{-5}" }, { "math_id": 20, "text": "C = (r_1B^{-1/4}+r_2)^2/A" }, { "math_id": 21, "text": "r_1" }, { "math_id": 22, "text": "r_2" }, { "math_id": 23, "text": "A" }, { "math_id": 24, "text": "C \\approx (r_1^2/A)B^{-1/2}" }, { "math_id": 25, "text": "C \\propto 1/A" } ]
https://en.wikipedia.org/wiki?curid=73520500
73521325
Wave overtopping
Transmission of water waves over a coastal structure Wave overtopping is the time-averaged amount of water that is discharged (in liters per second) per structure length (in meters) by waves over a structure such as a breakwater, revetment or dike which has a crest height above still water level. When waves break over a dike, it causes water to flow onto the land behind it. Excessive overtopping is undesirable because it can compromise the integrity of the structure or result in a safety hazard, particularly when the structure is in an area where people, infrastructure or vehicles are present, such as in the case of a dike fronting an esplanade or densely populated area. Wave overtopping typically transpires during extreme weather events, such as intense storms, which often elevate water levels beyond average due to wind setup. These effects may be further intensified when the storm coincides with a high spring tide. Excessive overtopping may cause damage to the inner slope of the dike, potentially leading to failure and inundation of the land behind the dike, or create water-related issues on the inside of the dike due to excess water pressure and inadequate drainage. The process is highly stochastic, and the amount of overtopping depends on factors including the freeboard, wave height, wave period, the geometry of the structure, and slope of the dike. Overtopping factors and influences. Overtopping can transpire through various combinations of water levels and wave heights, wherein a low water level accompanied by high waves may yield an equivalent overtopping outcome to that of a higher water level with lower waves. This phenomenon is inconsequential when water levels and wave heights exhibit correlation; however, it poses difficulties in river systems where these factors are uncorrelated. In such instances, a probabilistic calculation is necessary. The "freeboard" is the height of the dike's crest above the still water level, which usually corresponds to the determining storm surge level or river water level. Overtopping is typically expressed in litres per second per metre of dike length (L/s/m), as an average value. Overtopping follows the cyclical nature of waves, resulting in a large amount of water flowing over a structure, followed by a period with no water. The official website of the "EurOtop Manual", which is widely used in the design of coastal engineering structures, features a number of visualisations of wave overtopping. In the case of overtopping at "rubble-mound" breakwaters, recent research using numerical models indicates that overtopping is strongly dependent on the slope angle. Since present design guidelines for non-breaking waves do not include the effect of the slope angle, modified guidelines have also been proposed. Whilst these observed slope effects are too large to be ignored, they still need to be verified by tests using physical models. Overtopping behaviour is also influenced by the geometry and layout of different coastal structures. For example, seawalls (which are typically vertical, or near-vertical, as opposed to sloping breakwaters or revetments), are often situated behind natural beaches. Scour at the base of these structures during storms can have a direct impact on wave energy dissipation along their frontage, thus influencing wave overtopping. This phenomenon assumes critical importance when storms occur in such quick succession that the beach doesn't have sufficient time for sediments removed by the storm to be re-established. 
Experimental results show that, for near-vertical structures at the back of a beach, there is an increase in wave overtopping volume for a storm that starts from an eroded beach configuration, rather than a simple slope. Calculation of overtopping. Wave overtopping predominantly depends on the respective heights of individual waves compared to the crest level of the coastal structure involved. This overtopping does not occur continuously; rather, it is a sporadic event that takes place when particularly high waves within a storm impact the structure. The extent of wave overtopping is quantified by the volume of water that overflows onto the adjacent land. This can be measured either as the volume of water per wave for each unit length of the seawall, or as the average rate of overtopped water volume per unit length during the storm wave period. Much research into overtopping has been carried out, ranging from laboratory experiments to full-scale testing and the use of simulators. In 1971, Jurjen Battjes developed a theoretically accurate equation for determining the average overtopping. However, the formula's complexity, involving error functions, has limited its widespread adoption in practical applications. Consequently, an alternative empirical relationship has been established: formula_0 in which formula_1 is the dimensionless overtopping, and formula_2 is the dimensionless freeboard: formula_3 formula_4 in which: formula_5 is the water depth, formula_6 is the freeboard, formula_7 is the overtopping discharge (in m³/s), formula_8 is the significant wave height at the toe of the structure, formula_9 is the deep water wavelength, formula_10 is the inclination of the slope (of e.g. the breakwater or revetment), formula_11 is the Iribarren number, and formula_12 is a resistance term. The values of formula_13 and formula_14 depend on the type of breaking wave, as shown in the table below: The resistance term formula_12 has a value between approximately 0.5 (for two layers of loosely dumped armourstone) and 1.0 (for a smooth slope). The effect of a berm and obliquely incident waves is also taken into account through the resistance term. This is determined in the same way as when calculating wave run-up. Special revetment blocks that reduce wave run-up (e.g., Hillblock, Quattroblock) also reduce wave overtopping. Since the governing overtopping is the boundary condition, this means that the use of such elements allows for a slightly lower flood barrier. Research for the EurOtop manual has provided much additional data, and based on this, the formula has been slightly modified to: formula_15 with a maximum of: formula_16 It turns out that this formula is also a perfect rational approximation of the original Battjes formula. In certain applications, it may also be necessary to calculate individual overtopping quantities, i.e. the overtopping per wave. The volumes of individual overtopping waves are Weibull distributed. The overtopping volume per wave for a given probability of exceedance is given by: formula_17 formula_18 formula_19 in which formula_20 is the probability of exceedance of the calculated volume, formula_21 is the probability of overtopping waves, and formula_6 is the crest height. Calculation and measurement of overtopping at rock revetment crests. In terms of revetments, the overtopping discussed in the EurOtop manual refers to the overtopping measured at the seaward edge of the revetment crest. The formulas above describe the wave overtopping occurring at the sea-side edge of the crest.
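As an illustration of how the relations above are used, the sketch below evaluates the modified exponential formula (and its stated maximum) for an invented example dike. The breaker parameter is computed from the deep-water wavelength in the usual way, the reduction factors are set to 1 (smooth slope, no berm, normally incident waves), and the input values are arbitrary; this is a sketch of the formula quoted above, not a substitute for the EurOtop manual.

```python
import math

g = 9.81

def overtopping_discharge(Hm0, Tm10, Rc, slope_tan, gamma=1.0, gamma_b=1.0):
    """Average overtopping q (m^3/s per m) from the modified formula quoted above.

    Hm0: significant wave height at the toe (m); Tm10: spectral period T_{m-1,0} (s);
    Rc: crest freeboard (m); slope_tan: tan(alpha) of the seaward slope;
    gamma, gamma_b: resistance and berm reduction factors (1.0 = smooth slope, no berm).
    """
    L0 = g * Tm10 ** 2 / (2 * math.pi)           # deep-water wavelength
    xi = slope_tan / math.sqrt(Hm0 / L0)         # breaker (Iribarren) parameter
    q_nd = (0.026 / math.sqrt(slope_tan)) * gamma_b * xi * math.exp(
        -(2.5 * Rc / (xi * Hm0 * gamma)) ** 1.3)
    q_nd_max = 0.1 * math.exp(-(1.35 * Rc / (Hm0 * gamma)) ** 1.3)
    return min(q_nd, q_nd_max) * math.sqrt(g * Hm0 ** 3)

# Example: 2 m waves, 6 s period, smooth 1:3 dike slope, for a range of freeboards.
for Rc in (1.0, 2.0, 3.0, 4.0):
    q = overtopping_discharge(Hm0=2.0, Tm10=6.0, Rc=Rc, slope_tan=1 / 3)
    print(f"Rc = {Rc:.1f} m  ->  q = {1000 * q:7.1f} L/s per m")
```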
In scenarios where the crest is impermeable (for example, a road surface or a clay layer), the volume of water overtopping the inland side of the crest would roughly equal that on the seaside. However, in the case of a rock armour breakwater with a more permeable crest, a large part of the overtopping water will seep into the crest, thus providing less overtopping on the inside of it. To analyse this effect, reduction coefficient formula_12 can be used. This factor can be multiplied by 0.5 for a standard crest, with a width of about three rocks. This can result in a significant reduction in overtopping, and thus in the required crest height. If, behind the crest at a lower level, a permeable rock armour layer is installed with width formula_22, the amount of overtopping on the landside of this layer decreases still further. In that case, the reduction term formula_12 (not to be confused with the reduction co-efficient formula_23) can be multiplied by formula_24, in which formula_25 is the crest width. Berm breakwaters. The circumstances surrounding overtopping at berm-type breakwaters differ slightly from those of dikes. Minor wave overtopping may occur as splashes from waves striking individual rocks. However, significant overtopping typically results in a horizontal flow across the crest, similar to what happens with dikes. The primary distinction lies in the wave heights used for designing these structures. Dikes rarely face wave heights exceeding 3 metres, while berm breakwaters are often designed to withstand wave heights of around 5 metres. This difference impacts the overtopping behaviour when dealing with smaller overtopping discharges. Tolerable overtopping. An understanding wave overtopping involves a combination of empirical data, physical modelling, and numerical simulations to predict and mitigate its impacts on coastal structures and safety. Traditionally, permissible average overtopping discharge has been utilised as a standard for designing coastal structures. It is necessary to restrict the average overtopping discharge to guarantee both the structural integrity of the structure, as well as the protection of individuals, vehicles, and properties situated behind it. Design handbooks often stipulate the thresholds for the maximum individual overtopping volumes, necessitating the examination of wave overtopping on a wave-per-wave basis. Often, to ensure a more dependable level of safety for pedestrians and vehicles, or to evaluate the stability of the inner slope of a revetment, it is necessary to consider the peak velocity and thickness of the overtopping flow. The tolerable overtopping is the overtopping which the design accepts may occur during a design storm condition. It is dependent on a number of factors including the intended use of the dike or coastal structure, and the quality of the revetment. Tolerable overtopping volumes are site-specific and depend on various factors, including the size and usage of the receiving area, the dimensions and capacity of drainage ditches, damage versus inundation curves, and return period. For coastal defences safeguarding the lives and well-being of residents, workers, and recreational users, designers and overseeing authorities must also address the direct hazards posed by overtopping. This necessitates evaluating the level of hazard and its likelihood of occurrence, thereby enabling the development of suitable action plans to mitigate risks associated with overtopping events. 
For rubble mound breakwaters (e.g., in harbour breakwaters) and a significant wave height formula_26 greater than 5m on the outside, a heavy rubble mound revetment on the inside is required for overtopping of 10-30 L/s per metre. For overtopping of 5-20 L/s per metre, there is a high risk of damage to the crest. For regular grass, an average overtopping of 5 L/s per metre of dike is considered permissible. For very good grass cover, without special elements or street furniture such as stairs, sign poles, or fences, 10 L/s per metre is allowed. Overtopping tests with a wave overtopping simulator have shown that for an undamaged grass cover, without special elements, 50L/s per metre often causes no damage. The problem is not so much the strength of the grass cover, but the presence of other elements such as gates, stairs and fences. It should be considered that, for example, 5 L/s per metre can occur due to high waves and a high freeboard, or low waves with a low freeboard. In the first case, there are not many overtopping waves, but when one overtops, it creates a high flow velocity on the inner slope. In the second case, there are many overtopping waves, but they create relatively low flow velocities. As a result, the requirements for overtopping over river dikes are different from those for sea dikes. A good sea dike with a continuous grass cover can easily handle 10 L/s per metre without problems, assuming good drainage is provided at the foot of the inner slope. Without adequate drainage, the amount of water that could potentially enter properties at the foot of the inner slope would be unacceptable, which is why such dikes are designed for a lower overtopping amount. Since it has been found that a grass cover does not fail due to the average overtopping, but rather due to the frequent occurrence of high flow velocities, coastal authorities such as Rijkswaterstaat in the Netherlands have decided (since 2015) to no longer test grass slopes on the inner side of the dike for average overtopping discharge, but rather for the frequency of high flow velocities during overtopping. Research has shown that grass roots can contribute to improving the shear strength of soil used in dike construction, providing that the grass is properly maintained. Developing a grass cover takes time and requires a suitable substrate, such as lean and reasonably compacted clay. Firmly compacted clay soil is initially unsuitable for colonisation by grass plants. However, after a frost or winter period, the top layer of such a compacted clay layer is sufficiently open for the establishment of grass. To function properly, grass cover formation must begin well before winter. Research in The Netherlands has found that dikes with a well-compacted and flat clay lining can withstand a limited wave height or limited wave overtopping, such as in the majority of river areas, during the first winter after construction even without a grass cover, for many days without significant damage. If the wave load in the river area is higher, no damage that threatens safety will occur if the clay lining is thick enough (0.8 metres or more) and adequately compacted throughout its entire thickness. An immature grass cover can be temporarily protected against hydraulic loads with stapled geotextile mats. 
For damage to ships in harbours or marinas, the following figures can be used: These values provide guidance on the expected impact of overtopping on ships in marinas or harbours, on nearby buildings and other infrastructure, depending on the significant wave height formula_27 and overtopping rate (in L/s per metre). This information then helps to inform the appropriate design, the required protection measures, and response plans for different scenarios. Wave transmission. When there is water on both sides of a barrier (such as in the case of a harbour dam, breakwater or closure dam), wave overtopping over the dam will also generate waves on the other side of the dam. This is called wave transmission. To determine the amount of wave transmission, it is not necessary to determine the amount of overtopping. The transmission depends only on the wave height on the outer side, the freeboard, and the roughness of the slope. For a smooth slope, the transmission coefficient (the relationship between the wave on the inside of the dam and the incoming wave) is: formula_28 In which "ξ0p" is the Iribarren number based on the peak period of the waves, and "β" is the angle of incidence of the waves. Overtopping simulation. In order to assess the safety and resilience of dikes, as well as the robustness of the grass lining on their crests and landward slopes, a wave overtopping simulator can be employed. The most onerous wave conditions for which a dike is designed occur relatively rarely, so using a wave overtopping simulator enables in-situ replication of anticipated conditions on the dike itself. This allows the responsible organisation overseeing the structure to evaluate its capacity to withstand predicted wave overtopping during specific extreme scenarios. During these tests, the wave overtopping simulator is positioned on the dike's crest and continuously filled with water. The device features valves at its base that can be opened to release varying volumes of water, thereby simulating a wide range of wave overtopping events. This approach helps ensure that the dike's integrity is accurately and effectively assessed. In the case of dikes with grass slopes, another test method is to use a sod puller to determine the tensile strength of the sod, which can then be translated into strength under the load caused by wave overtopping. In addition to simulating wave overtopping, the simulation of wave impacts and wave run-up is possible with a specially developed generator and simulator. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
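A short sketch of the smooth-slope transmission coefficient quoted above, with the Iribarren number computed from the peak period. The example inputs are invented, and clipping K_T to the interval [0, 1] is an added assumption rather than part of the quoted formula.

```python
import math

def transmission_coefficient(Rc, Hm0, Tp, slope_tan, beta_deg=0.0):
    """Wave transmission coefficient K_T for a smooth low-crested slope (formula quoted above).

    Rc: freeboard (m); Hm0: incident significant wave height (m); Tp: peak period (s);
    slope_tan: tan(alpha) of the seaward slope; beta_deg: wave incidence angle (degrees).
    """
    L0p = 9.81 * Tp ** 2 / (2 * math.pi)             # deep-water wavelength from the peak period
    xi_0p = slope_tan / math.sqrt(Hm0 / L0p)         # Iribarren number based on the peak period
    kt = (-0.3 * Rc / Hm0 + 0.75 * (1 - math.exp(-0.5 * xi_0p)))
    kt *= math.cos(math.radians(beta_deg)) ** 1.5
    return min(max(kt, 0.0), 1.0)                    # clip to [0, 1] (added assumption)

for Rc in (-0.5, 0.0, 0.5, 1.0):                     # a negative freeboard means a submerged crest
    kt = transmission_coefficient(Rc, Hm0=2.0, Tp=6.0, slope_tan=1 / 3)
    print(f"Rc = {Rc:+.1f} m  ->  K_T = {kt:.2f}")
```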
[ { "math_id": 0, "text": "Q=a \\cdot \\exp\\left(-b \\frac{R}{\\gamma}\\right)" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "Q = \\frac{q}{\\sqrt{gH_s^2}} \\sqrt{\\frac{h/L_0}{\\tan \\alpha}}" }, { "math_id": 4, "text": "R = \\frac{h_c}{H_s} \\frac{1}{\\xi}" }, { "math_id": 5, "text": "h" }, { "math_id": 6, "text": "h_c" }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "H_s" }, { "math_id": 9, "text": "L_0" }, { "math_id": 10, "text": "\\alpha" }, { "math_id": 11, "text": "\\xi" }, { "math_id": 12, "text": "\\gamma" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "\\frac{q}{\\sqrt{gH^3_{m0}}} = \\frac{0{.}026}{\\sqrt{\\tan\\alpha}}\\gamma_b \n\\xi_{m-1.0}\\cdot \\exp\\left[\n-\\left(2{.}5\\frac{R_c}{\\xi_{m-1.0} H_{m0}\\gamma}\\right)^{1{.}3} \n\\right]" }, { "math_id": 16, "text": "\\frac{q}{\\sqrt{gH^3_{m0}}} = 0{.}1 \\,\\exp\\left[-\\left(1{.}35\\frac{R_c}{ H_{m0}\\gamma} \\right)^{1{.}3} \\right]" }, { "math_id": 17, "text": "V=a[-\\ln(P_v)]^{4/3} " }, { "math_id": 18, "text": "a=0.84T_mq/P_{ov}" }, { "math_id": 19, "text": "P_{ov}=\\exp\\Bigl[-\\Bigl(\\surd-\\ln0.02*(h_c/R_{u2%})\\Bigr)^2\\Bigr]" }, { "math_id": 20, "text": "P_v" }, { "math_id": 21, "text": "P_{ov}" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "\\gamma_b" }, { "math_id": 24, "text": "-0.142 \\frac{x}{B}+0.577" }, { "math_id": 25, "text": "B" }, { "math_id": 26, "text": "H_{m0}" }, { "math_id": 27, "text": "H{m0}" }, { "math_id": 28, "text": "K_T= \\left[ {-0.3 \\frac{R_c}{H_{m0}}+0.75 \\left( {1- exp(-0.5 \\xi_{0p}) }\\right) }\\right] \\cdot ({cos\\beta})^{3/2}" } ]
https://en.wikipedia.org/wiki?curid=73521325
73536266
Fixed-point computation
Computing the fixed point of a function Fixed-point computation refers to the process of computing an exact or approximate fixed point of a given function. In its most common form, the given function formula_0 satisfies the conditions of the Brouwer fixed-point theorem: that is, formula_0 is continuous and maps the unit "d"-cube to itself. The Brouwer fixed-point theorem guarantees that formula_0 has a fixed point, but the proof is not constructive. Various algorithms have been devised for computing an approximate fixed point. Such algorithms are used in economics for computing a market equilibrium, in game theory for computing a Nash equilibrium, and in dynamic system analysis. Definitions. The unit interval is denoted by formula_1, and the unit "d"-dimensional cube is denoted by formula_2. A continuous function formula_0 is defined on formula_2 (from formula_2 to itself). Often, it is assumed that formula_0 is not only continuous but also Lipschitz continuous, that is, for some constant formula_3, formula_4 for all formula_5 in formula_2. A fixed point of formula_0 is a point formula_6 in formula_2 such that formula_7. By the Brouwer fixed-point theorem, any continuous function from formula_2 to itself has a fixed point. But for general functions, it is impossible to compute a fixed point precisely, since it can be an arbitrary real number. Fixed-point computation algorithms look for "approximate" fixed points. There are several criteria for an approximate fixed point. Several common criteria are: the "residual" criterion, in which, given formula_8, one seeks a point formula_6 such that formula_9, where formula_10 denotes a norm on formula_11-dimensional space, so that the residual formula_12 is small; the "absolute" criterion, in which, given formula_13, one seeks a point formula_6 such that formula_14, where formula_15 is some fixed point of formula_0; and the "relative" criterion, in which, given formula_13, one seeks a point formula_6 such that formula_16. For Lipschitz-continuous functions, the absolute criterion is stronger than the residual criterion: If formula_0 is Lipschitz-continuous with constant formula_3, then formula_17 implies formula_18. Since formula_15 is a fixed-point of formula_0, this implies formula_19, so formula_20. Therefore, a δ-absolute fixed-point is also an ε-residual fixed-point with formula_21. The most basic step of a fixed-point computation algorithm is a value query: given any formula_6 in formula_2, the algorithm is provided with an oracle formula_22 to formula_0 that returns the value formula_23. The accuracy of the approximate fixed-point depends upon the error in the oracle formula_24. The function formula_0 is accessible via evaluation queries: for any formula_6, the algorithm can evaluate formula_23. The run-time complexity of an algorithm is usually given by the number of required evaluations. Contractive functions. A Lipschitz-continuous function with constant formula_3 is called contractive if formula_25; it is called weakly-contractive if formula_26. Every contractive function satisfying Brouwer's conditions has a "unique" fixed point. Moreover, fixed-point computation for contractive functions is easier than for general functions. The first algorithm for fixed-point computation was the fixed-point iteration algorithm of Banach. Banach's fixed-point theorem implies that, when fixed-point iteration is applied to a contraction mapping, the error after formula_27 iterations is in formula_28. Therefore, the number of evaluations required for a formula_29-relative fixed-point is approximately formula_30. Sikorski and Wozniakowski showed that Banach's algorithm is optimal when the dimension is large. Specifically, when formula_31, the number of required evaluations of "any" algorithm for formula_29-relative fixed-point is larger than 50% of the number of evaluations required by the iteration algorithm. Note that when formula_3 approaches 1, the number of evaluations approaches infinity.
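To illustrate the Banach iteration just discussed, here is a minimal sketch for an arbitrary contractive map on the unit square; the stopping rule is the residual criterion, and the example map and tolerance are invented.

```python
import numpy as np

def fixed_point_iteration(f, x0, eps=1e-6, max_iter=10_000):
    """Iterate x <- f(x) until the residual |f(x) - x| (sup norm) drops below eps."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        fx = f(x)
        if np.max(np.abs(fx - x)) <= eps:
            return fx, k
        x = fx
    return x, max_iter

# An example contraction of the unit square [0, 1]^2 (Lipschitz constant at most 0.25).
f = lambda x: np.array([0.25 * np.cos(x[1]) + 0.5, 0.25 * np.sin(x[0]) + 0.25])

x_star, evals = fixed_point_iteration(f, [0.0, 0.0])
residual = np.max(np.abs(f(x_star) - x_star))
print(f"approximate fixed point {x_star} after {evals} evaluations (residual {residual:.2e})")
```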
No finite algorithm can compute a formula_29-absolute fixed point for all functions with formula_32. When formula_3 &lt; 1 and "d" = 1, the optimal algorithm is the Fixed Point Envelope (FPE) algorithm of Sikorski and Wozniakowski. It finds a "δ"-relative fixed point using formula_33 queries, and a "δ"-absolute fixed point using formula_34 queries. This is faster than the fixed-point iteration algorithm. When formula_35 but not too large, and formula_26, the optimal algorithm is the interior-ellipsoid algorithm (based on the ellipsoid method). It finds an ε-residual fixed-point using formula_36 evaluations. When formula_25, it finds a formula_29-absolute fixed point using formula_37 evaluations. Shellman and Sikorski presented an algorithm called BEFix (Bisection Envelope Fixed-point) for computing an ε-residual fixed-point of a two-dimensional function with formula_26, using only formula_38 queries. They later presented an improvement called BEDFix (Bisection Envelope Deep-cut Fixed-point), with the same worst-case guarantee but better empirical performance. When formula_25, BEDFix can also compute a formula_29-absolute fixed-point using formula_39 queries. Shellman and Sikorski presented an algorithm called PFix for computing an ε-residual fixed-point of a "d"-dimensional function with "L" ≤ 1, using formula_40 queries. When formula_3 &lt; 1, PFix can be executed with formula_41, and in that case, it computes a δ-absolute fixed-point, using formula_42 queries. It is more efficient than the iteration algorithm when formula_3 is close to 1. The algorithm is recursive: it handles a "d"-dimensional function by recursive calls on ("d"-1)-dimensional functions. Algorithms for differentiable functions. When the function formula_0 is differentiable, and the algorithm can evaluate its derivative (not only formula_0 itself), the Newton method can be used and it is much faster. General functions. For functions with Lipschitz constant formula_3 &gt; 1, computing a fixed-point is much harder. One dimension. For a 1-dimensional function ("d" = 1), a formula_29-absolute fixed-point can be found using formula_43 queries using the bisection method: start with the interval formula_1; at each iteration, let formula_6 be the center of the current interval, and compute formula_23; if formula_44 then recurse on the sub-interval to the right of formula_6; otherwise, recurse on the interval to the left of formula_6. Note that the current interval always contains a fixed point, so after formula_43 queries, any point in the remaining interval is a formula_29-absolute fixed-point of formula_0. Setting formula_45, where formula_3 is the Lipschitz constant, gives an ε-residual fixed-point, using formula_46 queries. Two or more dimensions. For functions in two or more dimensions, the problem is much more challenging. Shellman and Sikorski proved that for any integers "d" ≥ 2 and formula_3 &gt; 1, finding a δ-absolute fixed-point of "d"-dimensional formula_3-Lipschitz functions might require infinitely many evaluations. The proof idea is as follows. For any integer "T" &gt; 1 and any sequence of "T" evaluation queries (possibly adaptive), one can construct two functions that are Lipschitz-continuous with constant formula_3, and yield the same answer to all these queries, but one of them has a unique fixed-point at ("x", 0) and the other has a unique fixed-point at ("x", 1). Any algorithm using "T" evaluations cannot differentiate between these functions, so cannot find a δ-absolute fixed-point.
This is true for any finite integer "T". Several algorithms based on function evaluations have been developed for finding an ε-residual fixed-point. In the worst case, the number of function evaluations required by all these algorithms is exponential in the binary representation of the accuracy, that is, in formula_48. Query complexity. Hirsch, Papadimitriou and Vavasis proved that "any" algorithm based on function evaluations that finds an ε-residual fixed-point of "f" requires formula_49 function evaluations, where formula_50 is the Lipschitz constant of the function formula_12 (note that formula_51). More precisely: for "d" = 2 the number of required evaluations is formula_52, while for "d" ≥ 3 it is at least formula_53 and at most formula_54. The latter result leaves a gap in the exponent. Chen and Deng closed the gap. They proved that, for any "d" ≥ 2 and formula_55 and formula_56, the number of queries required for computing an ε-residual fixed-point is in formula_57. Discrete fixed-point computation. A discrete function is a function defined on a subset of "formula_58" (the "d"-dimensional integer grid). There are several discrete fixed-point theorems, stating conditions under which a discrete function has a fixed point. For example, the Iimura-Murota-Tamura theorem states that (in particular) if formula_0 is a function from a rectangle subset of "formula_58" to itself, and formula_0 is "hypercubic direction-preserving", then formula_0 has a fixed point. Let formula_0 be a direction-preserving function from the integer cube formula_59 to itself. Chen and Deng proved that, for any "d" ≥ 2 and "n" &gt; 48"d", computing such a fixed point requires formula_60 function evaluations. Chen and Deng define a different discrete-fixed-point problem, which they call 2D-BROUWER. It considers a discrete function formula_0 on formula_61 such that, for every "x" on the grid, formula_0("x") - "x" is either (0, 1) or (1, 0) or (-1, -1). The goal is to find a square in the grid, in which all three labels occur. The function formula_0 must map the square formula_61 to itself, so it must map the lines "x" = 0 and "y" = 0 to either (0, 1) or (1, 0); the line "x" = "n" to either (-1, -1) or (0, 1); and the line "y" = "n" to either (-1, -1) or (1,0). The problem can be reduced to 2D-SPERNER (computing a fully-labeled triangle in a triangulation satisfying the conditions of Sperner's lemma), and therefore it is PPAD-complete. This implies that computing an approximate fixed-point is PPAD-complete even for very simple functions. Relation between fixed-point computation and root-finding algorithms. Given a function formula_62 from formula_2 to "R", a root of formula_62 is a point "x" in formula_2 such that formula_62("x")=0. An ε-root of g is a point "x" in formula_2 such that formula_63. Fixed-point computation is a special case of root-finding: given a function formula_0 on formula_2, define formula_64. "x" is a fixed-point of formula_0 if and only if "x" is a root of formula_62, and "x" is an ε-residual fixed-point of formula_0 if and only if "x" is an ε-root of formula_62. Therefore, any root-finding algorithm (an algorithm that computes an approximate root of a function) can be used to find an approximate fixed-point. The opposite is not true: finding an approximate root of a general function may be harder than finding an approximate fixed point. In particular, Sikorski proved that finding an ε-root requires formula_65 function evaluations.
This gives an exponential lower bound even for a one-dimensional function (in contrast, an ε-residual fixed-point of a one-dimensional function can be found using formula_66 queries using the bisection method). Here is a proof sketch. Construct a function formula_62 that is slightly larger than ε everywhere in formula_2 except in some small cube around some point "x"0, where "x"0 is the unique root of formula_62. If formula_62 is Lipschitz continuous with constant formula_3, then the cube around "x"0 can have a side-length of formula_67. Any algorithm that finds an ε-root of formula_62 must check a set of cubes that covers the entire formula_2; the number of such cubes is at least formula_68. However, there are classes of functions for which finding an approximate root is equivalent to finding an approximate fixed point. One example is the class of functions formula_62 such that formula_69 maps formula_2 to itself (that is: formula_69 is in formula_2 for all x in formula_2). This is because, for every such function, the function formula_70 satisfies the conditions of Brouwer's fixed-point theorem. "x" is a fixed-point of formula_0 if and only if "x" is a root of formula_62, and "x" is an ε-residual fixed-point of formula_0 if and only if "x" is an ε-root of formula_62. Chen and Deng show that the discrete variants of these problems are computationally equivalent: both problems require formula_60 function evaluations. Communication complexity. Roughgarden and Weinstein studied the communication complexity of computing an approximate fixed-point. In their model, there are two agents: one of them knows a function formula_0 and the other knows a function formula_62. Both functions are Lipschitz continuous and satisfy Brouwer's conditions. The goal is to compute an approximate fixed point of the composite function formula_71. They show that the deterministic communication complexity is in formula_72.
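The one-dimensional bisection scheme described in the "One dimension" paragraph above can be sketched as follows; the example function is an arbitrary continuous self-map of [0, 1] that is not a contraction.

```python
def bisection_fixed_point(f, delta=1e-6):
    """delta-absolute fixed point of a continuous f: [0, 1] -> [0, 1] by bisection.

    Invariant: f(lo) >= lo and f(hi) <= hi, so by the intermediate value theorem
    the interval [lo, hi] always contains a fixed point.
    """
    lo, hi, evals = 0.0, 1.0, 0
    while hi - lo > 2 * delta:
        mid = 0.5 * (lo + hi)
        evals += 1
        if f(mid) > mid:          # a fixed point exists in [mid, hi]
            lo = mid
        else:                     # a fixed point exists in [lo, mid]
            hi = mid
    return 0.5 * (lo + hi), evals

# A continuous self-map of [0, 1] with Lipschitz constant greater than 1.
f = lambda x: 0.5 * (1 + abs(2 * x - 1)) - 0.4 * x + 0.2 * x * x
x, n = bisection_fixed_point(f)
print(f"x = {x:.7f}, |f(x) - x| = {abs(f(x) - x):.2e}, evaluations = {n}")
```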
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "E := [0, 1]" }, { "math_id": 2, "text": "E^d" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "|f(x)-f(y)| \\leq L\\cdot |x-y|" }, { "math_id": 5, "text": "x,y" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "f(x) = x" }, { "math_id": 8, "text": "\\varepsilon>0" }, { "math_id": 9, "text": "|f(x)-x|\\leq \\varepsilon" }, { "math_id": 10, "text": "|\\cdot|" }, { "math_id": 11, "text": "d" }, { "math_id": 12, "text": "f(x)-x" }, { "math_id": 13, "text": "\\delta>0" }, { "math_id": 14, "text": "|x-x_0| \\leq \\delta" }, { "math_id": 15, "text": "x_0" }, { "math_id": 16, "text": "|x-x_0|/|x_0|\\leq \\delta" }, { "math_id": 17, "text": "|x-x_0|\\leq \\delta" }, { "math_id": 18, "text": "|f(x)-f(x_0)|\\leq L\\cdot \\delta" }, { "math_id": 19, "text": "|f(x)-x_0|\\leq L\\cdot \\delta" }, { "math_id": 20, "text": "|f(x)-x|\\leq (1+L)\\cdot \\delta" }, { "math_id": 21, "text": "\\varepsilon = (1+L)\\cdot \\delta" }, { "math_id": 22, "text": "\\tilde{f}" }, { "math_id": 23, "text": "f(x)" }, { "math_id": 24, "text": "\\tilde{f}(x)" }, { "math_id": 25, "text": "L<1" }, { "math_id": 26, "text": "L\\le 1" }, { "math_id": 27, "text": "t" }, { "math_id": 28, "text": "O(L^t)" }, { "math_id": 29, "text": "\\delta" }, { "math_id": 30, "text": "\\log_L(\\delta) = \\log(\\delta)/\\log(L) = \\log(1/\\delta)/\\log(1/L) " }, { "math_id": 31, "text": "d\\geq \\log(1/\\delta)/\\log(1/L) " }, { "math_id": 32, "text": "L=1" }, { "math_id": 33, "text": "O(\\log(1/\\delta) + \\log \\log(1/(1-L))) " }, { "math_id": 34, "text": "O(\\log(1/\\delta)) " }, { "math_id": 35, "text": "d>1" }, { "math_id": 36, "text": "O(d\\cdot \\log(1/\\varepsilon)) " }, { "math_id": 37, "text": "O(d\\cdot [\\log(1/\\delta) + \\log(1/(1-L))]) " }, { "math_id": 38, "text": "2 \\lceil\\log_2(1/\\varepsilon)\\rceil+1" }, { "math_id": 39, "text": "O(\\log(1/\\varepsilon)+\\log(1/(1-L)))" }, { "math_id": 40, "text": "O(\\log^d(1/\\varepsilon))" }, { "math_id": 41, "text": "\\varepsilon = (1-L)\\cdot \\delta" }, { "math_id": 42, "text": "O(\\log^d(1/[(1-L)\\delta]))" }, { "math_id": 43, "text": "O(\\log(1/\\delta))" }, { "math_id": 44, "text": "f(x) > x" }, { "math_id": 45, "text": "\\delta := \\varepsilon/(L+1)" }, { "math_id": 46, "text": "O(\\log(L/\\varepsilon) = \\log(L) + \\log(1/\\varepsilon))" }, { "math_id": 47, "text": "k > 1/\\varepsilon" }, { "math_id": 48, "text": "\\Omega(1/\\varepsilon)" }, { "math_id": 49, "text": "\\Omega(L'/\\varepsilon)" }, { "math_id": 50, "text": "L'" }, { "math_id": 51, "text": "L-1 \\leq L' \\leq L+1" }, { "math_id": 52, "text": "\\Theta(L'/\\varepsilon)" }, { "math_id": 53, "text": "\\Omega((L'/\\varepsilon)^{d-2})" }, { "math_id": 54, "text": "O((L'/\\varepsilon)^{d})" }, { "math_id": 55, "text": "1/\\varepsilon > 4 d" }, { "math_id": 56, "text": "L'/\\varepsilon > 192 d^3" }, { "math_id": 57, "text": "\\Theta((L'/\\varepsilon)^{d-1})" }, { "math_id": 58, "text": "\\mathbb{Z}^d" }, { "math_id": 59, "text": "\\{1, \\dots, n\\}^d" }, { "math_id": 60, "text": "\\Theta(n^{d-1})" }, { "math_id": 61, "text": "\\{0,\\dots, n\\}^2" }, { "math_id": 62, "text": "g" }, { "math_id": 63, "text": "g(x)\\leq \\varepsilon" }, { "math_id": 64, "text": "g(x) := |f(x)-x|" }, { "math_id": 65, "text": "\\Omega(1/\\varepsilon^d)" }, { "math_id": 66, "text": "O(\\log(1/\\varepsilon))" }, { "math_id": 67, "text": "\\varepsilon/L" }, { "math_id": 68, "text": "(L/\\varepsilon)^d" }, { "math_id": 69, "text": "g(x)+x" }, { "math_id": 70, "text": "f(x) := 
g(x)+x" }, { "math_id": 71, "text": "g\\circ f" }, { "math_id": 72, "text": "\\Omega(2^d)" } ]
https://en.wikipedia.org/wiki?curid=73536266
7353645
Mathematical discussion of rangekeeping
In naval gunnery, when long-range guns became available, an enemy ship would move some distance after the shells were fired. It became necessary to figure out where the enemy ship, the target, was going to be when the shells arrived. The process of keeping track of where the ship was likely to be was called rangekeeping, because the distance to the target—the range—was a very important factor in aiming the guns accurately. As time passed, train (also called bearing), the direction to the target, also became part of rangekeeping, but tradition kept the term alive. Rangekeeping is an excellent example of the application of analog computing to a real-world mathematical modeling problem. Because nations had so much money invested in their capital ships, they were willing to invest enormous amounts of money in the development of rangekeeping hardware to ensure that the guns of these ships could put their projectiles on target. This article presents an overview of the rangekeeping as a mathematical modeling problem. To make this discussion more concrete, the Ford Mk 1 Rangekeeper is used as the focus of this discussion. The Ford Mk 1 Rangekeeper was first deployed on the in 1916 during World War I. This is a relatively well documented rangekeeper that had a long service life. While an early form of mechanical rangekeeper, it does illustrate all the basic principles. The rangekeepers of other nations used similar algorithms for computing gun angles, but often differed dramatically in their operational use. In addition to long range gunnery, the launching of torpedoes also requires a rangekeeping-like function. The US Navy during World War II had the TDC, which was the only World War II-era submarine torpedo fire control system to incorporate a mechanical rangekeeper (other navies depended on manual methods). There were also rangekeeping devices for use with surface ship-launched torpedoes. For a view of rangekeeping outside that of the US Navy, there is a detailed reference that discusses the rangekeeping mathematics associated with torpedo fire control in the Imperial Japanese Navy. The following discussion is patterned after the presentations in World War II US Navy gunnery manuals. Analysis. Coordinate system. US Navy rangekeepers during World War II used a moving coordinate system based on the line of sight (LOS) between the ship firing its gun (known as the "own ship") and the target (known as the "target"). As is shown in Figure 1, the rangekeeper defines the "y axis" as the LOS and the "x axis" as a perpendicular to the LOS with the origin of the two axes centered on the target. An important aspect of the choice of coordinate system is understanding the signs of the various rates. The rate of bearing change is positive in the clockwise direction. The rate of range is positive for increasing target range. General approach. During World War II, tracking a target meant knowing continuously the target's range and bearing. These target parameters were sampled periodically by sailors manning gun directors and radar systems, who then fed the data into a rangekeeper. The rangekeeper performed a linear extrapolation of the target range and bearing as a function of time based on the target information samples. In addition to ship-board target observations, rangekeepers could also take input from spotting aircraft or even manned balloons tethered to the own ship. These spotting platforms could be launched and recovered from large warships, like battleships. 
In general, target observations made by shipboard instruments were preferred for targets at ranges of less than 20,000 yards and aircraft observations were preferred for longer range targets. After World War II, helicopters became available and the need to conduct the dangerous operations of launching and recovering spotting aircraft or balloons was eliminated (see "Iowa"-class battleship for a brief discussion). During World War I, target tracking information was often presented on a sheet of paper. During World War II, the tracking information could be displayed on electronic displays (see "Essex"-class aircraft carrier for a discussion of the common displays). Target range. Early in World War II, the range to the target was measured by optical rangefinders. Though some night operations were conducted using searchlights and star shells, in general optical rangefinders were limited to daytime operation. During the latter part of World War II, radar was used to determine the range to the target. Radar proved to be more accurate than the optical rangefinders (at least under operational conditions) and was the preferred way to determine target range during both night and day. Target speed. Early in World War II, target range and bearing measurements were taken over a period of time and plotted manually on a chart. The speed and course of the target could be computed using the distance the target traveled over an interval of time. During the latter part of World War II, the speed of the target could be measured using radar data. Radar provided accurate bearing rate, range, and radial speed, which was converted to target course and speed. In some cases, such as with submarines, the target speed could be estimated using sonar data. For example, the sonar operator could measure the propeller turn rate acoustically and, knowing the ship's class, compute the ship's speed (see TDC for more information). Target course. The target course was the most difficult piece of target data to obtain. In many cases, instead of measuring target course many systems measured a related quantity called angle on the bow. Angle on the bow is the angle made by the ship's course and the line of sight (see Figure 1). The angle on the bow was usually estimated based on the observational experience of the observer. In some cases, the observers improved their estimation abilities by practicing against ship models mounted on a "lazy Susan". The Imperial Japanese Navy had a unique tool, called "Sokutekiban" (測的盤), that was used to assist observers with measuring angle on the bow. The observer would first use this device to measure the angular width of the target. Knowing the angular width of the target, the range to the target, and the known length of that ship class, the angle on the bow of the target can be computed using equations shown in Figure 2. Human observers were required to determine the angle on the bow. To confuse the human observers, ships often used dazzle camouflage, which consisted of painting lines on a ship in an effort to make determining a target's angle on the bow difficult. While dazzle camouflage was useful against some types of optical rangefinders, this approach was useless against radar and it fell out of favor during World War II. Position prediction. The prediction of the target ship's position at the time of projectile impact is critical because that is the position at which the own ship's guns must be directed. 
During World War II, most rangekeepers performed position prediction using a linear extrapolation of the target's course and speed. While ships are maneuverable, the large ships maneuver slowly and linear extrapolation is a reasonable approach in many cases. During World War I, rangekeepers were often referred to as "clocks" (e.g. see range and bearing clocks in the Dreyer Fire Control Table). These devices were called clocks because they regularly incremented the target range and angle estimates using fixed values. This approach was of limited use because the target bearing changes are a function of range and using a fixed change causes the target bearing prediction to quickly become inaccurate. Range. The target range at the time of projectile impact can be estimated using Equation 1, which is illustrated in Figure 3. where * formula_0 is the range to the target at the time of projectile impact. * formula_1 is the range to the target at the time of gun firing. * formula_2 is the projectile time of flight formula_3 plus system &lt;br&gt; firing delays formula_4, i.e. formula_5. The exact prediction of the target range at the time of projectile impact is difficult because it requires knowing the projectile time of flight, which is a function of the projected target position. While this calculation can be performed using a trial and error approach, this was not a practical approach with the analog computer hardware available during World War II. In the case of the Ford Rangekeeper Mk 1, the time of flight was approximated by assuming the time of flight was linearly proportional to range, as is shown in Equation 2. where * formula_6 is the constant of proportionality between time of flight (TOF) and target range. The assumption of TOF being linearly proportional to range is a crude one and could be improved through the use of more sophisticated means of function evaluation. Range prediction requires knowing the rate of range change. As is shown in Figure 3, the rate of range change can be expressed as shown in Equation 3. where * formula_7 is the own ship speed along the LOS where formula_8. * formula_9 is the target ship speed along the LOS where formula_10. Equation 4 shows the complete equation for the predicted range. Azimuth. The prediction of azimuth is performed similarly to the range prediction. Equation 5 is the fundamental relationship, whose derivation is illustrated in Figure 4. where * formula_11 is the azimuth to the target at the time of gun firing. * formula_12 is the azimuth to the target at the time of projectile impact. The rate of bearing change can be computed using Equation 6, which is illustrated in Figure 4. where * formula_13 is the own ship speed along the x axis, i.e. formula_14. * formula_15 is the target speed along the x axis, i.e. formula_16. Substituting formula_17, Equation 7 shows the final formula for the predicted bearing. Ballistic correction. Firing artillery at targets beyond visual range historically has required computations based on firing tables. 
The impact point of a projectile is a function of many variables:
* Air temperature
* Air density
* Wind
* Range
* Earth rotation
* Projectile, fuze, weapon characteristics
* Muzzle velocity
* Propellant temperature
* Drift
* Parallax between the guns and the rangefinders and radar systems
* Elevation difference between target and artillery piece
The firing tables provide data for an artillery piece firing under standardized conditions and the corrections required to determine the point of impact under actual conditions. There were a number of ways to implement a firing table using cams. Consider Figure 5, for example. In this case, the gun angle as a function of the target's range and the target's relative elevation is represented by the thickness of the cam at a given axial distance and angle. A gun direction officer would input the target range and relative elevation using dials. The pin height then represents the required gun angle. This pin height could be used to drive cams or gears that would make other corrections, such as for propellant temperature and projectile type. The cams used in a rangekeeper needed to be very precisely machined in order to accurately direct the guns. Because these cams were machined to specifications composed of data tables, they became an early application of CNC machine tools. In addition to the target and ballistic corrections, the rangekeeper must also correct for the ship's undulating motion. The warships had a gyroscope with its spin axis vertical. This gyro determined two angles that defined the tilt of the ship's deck with respect to the vertical. Those two angles were fed to the rangekeeper, which applied a correction based on these angles. While the rangekeeper designers spent an enormous amount of time working to minimize the sources of error in the rangekeeper calculations, there were errors and information uncertainties that contributed to projectiles missing their targets on the first shot. The rangekeeper had dials that allowed manual corrections to be incorporated into the rangekeeper firing solution. When artillery spotters would call in a correction, the rangekeeper operators would manually incorporate the correction using these dials. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
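The position-prediction step described above (linear extrapolation along and across the line of sight, with time of flight taken as proportional to range) can be sketched as below. The numbered equations themselves are not reproduced in the text, so the expressions and sign conventions here are a reconstruction from the stated definitions and should be read as illustrative assumptions rather than the Ford Mk 1's exact formulation; the speeds, firing delay and time-of-flight constant are invented.

```python
import math

def predict_target(R_T, bearing_deg, v_own, v_tgt, aob_deg, k_tof=0.0025, t_delay=0.5):
    """Predicted target range and bearing at projectile impact via linear extrapolation.

    R_T: present range (yd); bearing_deg: present target bearing (deg);
    v_own, v_tgt: own-ship and target speeds (yd/s); aob_deg: target angle on the bow (deg);
    k_tof: assumed time of flight per yard of range (s/yd); t_delay: firing delay (s).
    """
    theta_T, aob = math.radians(bearing_deg), math.radians(aob_deg)
    s_Oy, s_Ox = v_own * math.cos(theta_T), v_own * math.sin(theta_T)   # own ship along / across LOS
    s_Ty, s_Tx = v_tgt * math.cos(aob), v_tgt * math.sin(aob)           # target along / across LOS
    t_flight = k_tof * R_T + t_delay            # crude linear time-of-flight approximation
    dR_dt = s_Ty - s_Oy                         # assumed sign convention for the range rate
    dtheta_dt = (s_Tx - s_Ox) / R_T             # bearing rate from the cross-LOS relative speed
    R_pred = R_T + dR_dt * t_flight
    bearing_pred = bearing_deg + math.degrees(dtheta_dt) * t_flight
    return R_pred, bearing_pred, t_flight

R_p, b_p, tof = predict_target(R_T=15000, bearing_deg=30, v_own=10, v_tgt=12, aob_deg=60)
print(f"time of flight = {tof:.1f} s, predicted range = {R_p:.0f} yd, predicted bearing = {b_p:.2f} deg")
```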
[ { "math_id": 0, "text": "R_{TP}\\," }, { "math_id": 1, "text": "R_T\\," }, { "math_id": 2, "text": "t_{TOF'}\\," }, { "math_id": 3, "text": "\\left( t_{TOF}\\right)\\," }, { "math_id": 4, "text": "\\left( t_{Delay}\\right)\\," }, { "math_id": 5, "text": "t_{TOF'}= t_{TOF}+t_{Delay}\\," }, { "math_id": 6, "text": "k_{TOF}\\," }, { "math_id": 7, "text": "s_{Oy}\\," }, { "math_id": 8, "text": "v_{Oy}=\\lVert v_O \\rVert \\cdot \\cos(\\theta_{T})" }, { "math_id": 9, "text": "s_{Ty}\\," }, { "math_id": 10, "text": "v_{Ty}=\\lVert v_T \\rVert \\cdot \\cos(\\theta_{AOB})" }, { "math_id": 11, "text": "\\theta_{T}\\," }, { "math_id": 12, "text": "\\theta_{TP}\\," }, { "math_id": 13, "text": "s_{Ox}\\," }, { "math_id": 14, "text": "s_{Ox}=\\lVert v_{O} \\rVert \\cdot \\sin(\\theta_{T})\\," }, { "math_id": 15, "text": "s_{Tx}\\," }, { "math_id": 16, "text": "s_{Tx}= \\lVert v_{T}\\rVert \\cdot \\sin(\\theta_{AOB})\\," }, { "math_id": 17, "text": "t_{TOF}\\dot= k_{TOF} \\cdot R_T" } ]
https://en.wikipedia.org/wiki?curid=7353645
73541440
K-convexity in Rn
Mathematical concept K-convexity in Rn is a mathematical concept used in multiproduct inventory theory. Formula. Let formula_0 = ("K0", "K1", ..., "Kn") be a vector of (n+1) nonnegative constants and define a function formula_0(.): formula_1 → formula_2 as follows: formula_0(formula_3) = "K0"formula_4("e"formula_3) + formula_5"Ki"formula_4(formula_6), where "e" = (1,1,...,1) ∈ formula_7, formula_8, formula_4(0) = 0 and formula_9 = 1 for all formula_10 > 0. The concept of K-convexity generalizes "K"-convexity introduced by Scarf (1960) to higher dimensional spaces and is useful in multiproduct inventory problems with fixed setup costs. Scarf used "K"-convexity to prove the optimality of the (s, S) policy in the single product case. Several papers are devoted to obtaining optimal policies for multiple product problems with fixed ordering costs. This definition, introduced by Gallego and Sethi (2005), is motivated by the joint replenishment problem, in which a setup cost "K0" is incurred whenever one or more items are ordered and an individual setup cost "Ki" is incurred for each item formula_11 that is ordered. There are some important special cases: (i) The simplest is the case of one product or "n" = 1, where "K0" + "K1" can be considered to be the setup cost. (ii) The joint setup cost arises when "Ki" = 0, formula_11 = 1, 2, . . . , n, and a setup cost of formula_12 is incurred whenever any one or more of the items are ordered. In this case, formula_0 = ("K0", 0, 0, . . . , 0) and formula_0(formula_3) = "K0"formula_4("e"formula_3). (iii) When there is no joint setup cost, i.e., "K0" = 0, and there are only individual setups, we have formula_0(formula_3) = formula_5"Ki"formula_4(formula_6).
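The fixed-cost function formula_0(formula_3) defined above can be evaluated directly. The Python sketch below is only an illustration of that definition; the helper name setup_cost and the tolerance used to test positivity are assumptions, not part of the definition.

```python
def setup_cost(K, x, tol=1e-12):
    """Evaluate K(x) = K0*delta(e.x) + sum_i Ki*delta(x_i), where
    delta(z) = 1 if z > 0 and delta(0) = 0.

    K -- sequence (K0, K1, ..., Kn) of nonnegative constants
    x -- nonnegative order vector (x1, ..., xn)
    """
    delta = lambda z: 1.0 if z > tol else 0.0
    joint = K[0] * delta(sum(x))                  # K0 charged if anything is ordered
    individual = sum(K[i + 1] * delta(x[i]) for i in range(len(x)))
    return joint + individual

# Joint setup 10 and individual setups (3, 5); ordering only the first item
print(setup_cost((10, 3, 5), (4.0, 0.0)))   # 13.0
```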
[ { "math_id": 0, "text": "\\Kappa" }, { "math_id": 1, "text": "\\Re_+^n" }, { "math_id": 2, "text": "\\Re_+^1" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\\delta" }, { "math_id": 5, "text": "\\sum_{i=1}^n" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "\\Re^n" }, { "math_id": 8, "text": "\\Re_+^n = \\{x \\in \\Re^n | x \\geq 0 \\}" }, { "math_id": 9, "text": "\\delta(z)" }, { "math_id": 10, "text": "z" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "\\Kappa_0" } ]
https://en.wikipedia.org/wiki?curid=73541440
73543019
Persistence barcode
Technique in topological data analysis In topological data analysis, a persistence barcode, sometimes shortened to barcode, is an algebraic invariant associated with a filtered chain complex or a persistence module that characterizes the stability of topological features throughout a growing family of spaces. Formally, a persistence barcode consists of a multiset of intervals in the extended real line, where the length of each interval corresponds to the lifetime of a topological feature in a filtration, usually built on a point cloud, a graph, a function, or, more generally, a simplicial complex or a chain complex. Generally, longer intervals in a barcode correspond to more robust features, whereas shorter intervals are more likely to be noise in the data. A persistence barcode is a "complete" invariant that captures all the topological information in a filtration. In algebraic topology, the persistence barcodes were first introduced by Sergey Barannikov in 1994 as the "canonical forms" invariants consisting of a multiset of line segments with ends on two parallel lines, and later, in geometry processing, by Gunnar Carlsson et al. in 2004. Definition. Let formula_0 be a fixed field. Consider a real-valued function on a chain complex formula_1 compatible with the differential, so that formula_2 whenever formula_3 in formula_4. Then for every formula_5 the sublevel set formula_6 is a subcomplex of "K", and the values of formula_7 on the generators in formula_4 define a filtration (which is in practice always finite): formula_8. Then, the filtered complexes classification theorem states that for any filtered chain complex over formula_0, there exists a linear transformation that preserves the filtration and brings the filtered complex into so called canonical form, a canonically defined direct sum of filtered complexes of two types: two-dimensional complexes with trivial homology formula_9 and one-dimensional complexes with trivial differential formula_10. The multiset formula_11 of the intervals formula_12 or formula_13 describing the canonical form, is called the "barcode", and it is the complete invariant of the filtered chain complex. The concept of a persistence module is intimately linked to the notion of a filtered chain complex. A persistence module formula_14 indexed over formula_15 consists of a family of formula_0-vector spaces formula_16 and linear maps formula_17 for each formula_18 such that formula_19 for all formula_20. This construction is not specific to formula_15; indeed, it works identically with any totally-ordered set. A persistence module formula_14 is said to be of "finite type" if it contains a finite number of unique finite-dimensional vector spaces. The latter condition is sometimes referred to as "pointwise finite-dimensional". Let formula_21 be an interval in formula_15. Define a persistence module formula_22 via formula_23, where the linear maps are the identity map inside the interval. The module formula_22 is sometimes referred to as an "interval module." Then for any formula_15-indexed persistence module formula_14 of finite type, there exists a multiset formula_24 of intervals such that formula_25, where the direct sum of persistence modules is carried out index-wise. The multiset formula_24 is called the "barcode" of formula_14, and it is unique up to a reordering of the intervals. 
This result was extended to the case of pointwise finite-dimensional persistence modules indexed over an arbitrary totally-ordered set by William Crawley-Boevey and Magnus Botnan in 2020, building upon known results from the structure theorem for finitely generated modules over a PID, as well as the work of Cary Webb for the case of the integers.
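As a minimal illustration of how a barcode records births and deaths, the following Python sketch computes only the degree-zero barcode of a sublevel-set filtration of a graph, using a union-find structure and the elder rule. The general case uses matrix reduction of the filtered boundary operator over a field, which is not shown here, and the function and variable names are our own.

```python
def h0_barcode(vertex_values, edges):
    """Zero-dimensional persistence barcode of a sublevel-set filtration of a graph.

    vertex_values -- dict vertex -> filtration value (birth of the vertex)
    edges         -- list of (u, v, value) with value >= the values of u and v

    Returns a list of (birth, death) intervals; death is float('inf') for
    components that never merge (one per connected component).
    """
    parent = {v: v for v in vertex_values}
    birth = dict(vertex_values)          # birth time of each component's oldest vertex

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    bars = []
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                      # edge closes a cycle, no H0 change
        # elder rule: the younger of the two components dies at time t
        young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
        bars.append((birth[young], t))
        parent[young] = old
    bars.extend((birth[v], float("inf")) for v in parent if find(v) == v)
    return bars

# Two vertices born at 0 and 1, joined by an edge at 2: bars [1, 2) and [0, inf)
print(h0_barcode({"a": 0.0, "b": 1.0}, [("a", "b", 2.0)]))
```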
[ { "math_id": 0, "text": "\\mathbb F" }, { "math_id": 1, "text": "f:K \\rightarrow \\mathbb{R}" }, { "math_id": 2, "text": "f(\\sigma_i) \\leq f(\\tau)" }, { "math_id": 3, "text": "\\partial\\tau=\\sum_i\\sigma_i" }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": " a \\in \\mathbb{R}" }, { "math_id": 6, "text": "K_a=f^{-1}((-\\infty, a])" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": " \\emptyset = K_0 \\subseteq K_1 \\subseteq \\cdots \\subseteq K_n = K " }, { "math_id": 9, "text": "d(e_{a_j})=e_{a_i}" }, { "math_id": 10, "text": "d(e_{a'_i})=0" }, { "math_id": 11, "text": "\\mathcal B_f " }, { "math_id": 12, "text": "[a_i, a_j)" }, { "math_id": 13, "text": "[a_i', \\infty)" }, { "math_id": 14, "text": "M" }, { "math_id": 15, "text": "\\mathbb R" }, { "math_id": 16, "text": "\\{ M_t \\}_{t \\in \\mathbb R}" }, { "math_id": 17, "text": "\\varphi_{s,t} : M_s \\to M_t" }, { "math_id": 18, "text": "s \\leq t" }, { "math_id": 19, "text": "\\varphi_{s,t} \\circ \\varphi_{r,s} = \\varphi_{r,t}" }, { "math_id": 20, "text": "r \\leq s \\leq t" }, { "math_id": 21, "text": "I" }, { "math_id": 22, "text": "Q(I)" }, { "math_id": 23, "text": "Q(I_s)=\n\\begin{cases}\n0, & \\text{if } s\\notin I;\\\\\n\\mathbb F, & \\text{otherwise}\n\\end{cases}" }, { "math_id": 24, "text": "\\mathcal B_M" }, { "math_id": 25, "text": "M \\cong \\bigoplus_{I \\in \\mathcal B_M}Q(I)" } ]
https://en.wikipedia.org/wiki?curid=73543019
735443
Neumann boundary condition
Mathematics In mathematics, the Neumann (or second-type) boundary condition is a type of boundary condition, named after Carl Neumann. When imposed on an ordinary or a partial differential equation, the condition specifies the values of the derivative applied at the boundary of the domain. It is possible to describe the problem using other boundary conditions: a Dirichlet boundary condition specifies the values of the solution itself (as opposed to its derivative) on the boundary, whereas the Cauchy boundary condition, mixed boundary condition and Robin boundary condition are all different types of combinations of the Neumann and Dirichlet boundary conditions. Examples. ODE. For an ordinary differential equation, for instance, formula_0 the Neumann boundary conditions on the interval ["a","b"] take the form formula_1 where α and β are given numbers. PDE. For a partial differential equation, for instance, formula_2 where ∇2 denotes the Laplace operator, the Neumann boundary conditions on a domain Ω ⊂ R"n" take the form formula_3 where n denotes the (typically exterior) normal to the boundary ∂Ω, and f is a given scalar function. The normal derivative, which shows up on the left side, is defined as formula_4 where ∇"y"(x) represents the gradient vector of "y"(x), n̂ is the unit normal, and ⋅ represents the inner product operator. It becomes clear that the boundary must be sufficiently smooth such that the normal derivative can exist, since, for example, at corner points on the boundary the normal vector is not well defined. Applications. The following applications involve the use of Neumann boundary conditions: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
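A hedged numerical sketch of the ODE example above: the following Python code discretises y'' + y = 0 on [a, b] with a second-order central difference in the interior and first-order one-sided differences for the Neumann data y'(a) = α and y'(b) = β. The grid size and the low-order boundary treatment are arbitrary choices made for brevity, not a recommended scheme.

```python
import numpy as np

def solve_neumann(a=0.0, b=1.0, alpha=1.0, beta=0.0, n=200):
    """Finite-difference sketch of y'' + y = 0 with y'(a) = alpha, y'(b) = beta."""
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(1, n):                         # interior rows: y'' + y = 0
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 + 1.0
    A[0, 0], A[0, 1] = -1.0 / h, 1.0 / h          # (y1 - y0)/h = alpha
    rhs[0] = alpha
    A[n, n - 1], A[n, n] = -1.0 / h, 1.0 / h      # (yn - y_{n-1})/h = beta
    rhs[n] = beta
    return x, np.linalg.solve(A, rhs)

x, y = solve_neumann()
# Exact solution is A*cos(x) + B*sin(x) with B = alpha and A = (alpha*cos(b) - beta)/sin(b);
# compare the numerical y(a) with the exact value A as a rough check.
print(y[0], (1.0 * np.cos(1.0) - 0.0) / np.sin(1.0))
```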
[ { "math_id": 0, "text": "y'' + y = 0," }, { "math_id": 1, "text": "y'(a)= \\alpha, \\quad y'(b) = \\beta," }, { "math_id": 2, "text": "\\nabla^2 y + y = 0," }, { "math_id": 3, "text": "\\frac{\\partial y}{\\partial \\mathbf{n}}(\\mathbf{x}) = f(\\mathbf{x}) \\quad \\forall \\mathbf{x} \\in \\partial \\Omega," }, { "math_id": 4, "text": "\\frac{\\partial y}{\\partial \\mathbf{n}}(\\mathbf{x}) = \\nabla y(\\mathbf{x}) \\cdot \\mathbf{\\hat{n}}(\\mathbf{x})," } ]
https://en.wikipedia.org/wiki?curid=735443
73545085
Symbols of grouping
Symbols used in mathematical expressions In mathematics and related subjects, understanding a mathematical expression depends on an understanding of symbols of grouping, such as parentheses (), square brackets [], and braces {} (see note on terminology below). These same symbols are also used in ways where they are not symbols of grouping. For example, in the expression 3(x+y) the parentheses are symbols of grouping, but in the expression (3, 5) the parentheses may indicate an open interval. The most common symbols of grouping are the parentheses and the square brackets, and the latter are usually used to avoid too many repeated parentheses. For example, to indicate the product of binomials, parentheses are usually used, thus: formula_0. But if one of the binomials itself contains parentheses, as in formula_1 one or more pairs of () may be replaced by [], thus: formula_2. Beyond elementary mathematics, [] are mostly used for other purposes, e.g. to denote a closed interval, or an equivalence class, so they appear rarely for grouping. The usage of the word "brackets" varies from country to country. In the United States, the term denotes [], known elsewhere as "square brackets". In the United Kingdom and many other English-speaking countries, "brackets" means (), known in the US as "parentheses" (singular "parenthesis"). That said, the specific terms "parentheses" and "square brackets" are generally understood everywhere and may be used to avoid ambiguity. The symbol of grouping known as "braces" has two major uses. If two of these symbols are used, one on the left and the mirror image of it on the right, it almost always indicates a set, as in formula_3, the set containing three members, formula_4, formula_5, and formula_6. But if one is used only on the left, it groups two or more simultaneous equations. There are other symbols of grouping. One is the bar above an expression, as in the square root sign in which the bar is a symbol of grouping. For example, √"p"+"q", with the bar extending over "p"+"q", is the square root of the sum. The bar is also a symbol of grouping in repeated decimal digits. A decimal point followed by one or more digits with a bar over them, for example 0.123, represents the repeating decimal 0.123123123... . A superscript is understood to be grouped as long as it continues in the form of a superscript. For example, if an "x" has a superscript of the form "a"+"b", the sum is the exponent. For example, in "x" raised to the power 2+3, it is understood that the 2+3 is grouped, and that the exponent is the sum of 2 and 3. These rules are understood by all mathematicians. The associative law. In most mathematics, the operations of addition and multiplication are associative. The associative law for addition, for example, states that formula_7. This means that once the associative law is stated, the parentheses are unnecessary and are usually omitted. More generally, any sum, of any number of terms, can be written without parentheses and any product, of any number of factors, can be written without parentheses. Hierarchy of operations. The "hierarchy of operations", also called the "order of operations", is a rule that saves needing an excessive number of symbols of grouping. In its simplest form, if a number has a plus sign on one side and a multiplication sign on the other side, the multiplication acts first. If we were to express this idea using symbols of grouping, parentheses would be placed around the factors in a product. Example: 2+3×4 = 2+(3×4) = 2+12 = 14.
In understanding expressions without symbols of grouping, it is useful to think of subtraction as addition of the opposite, and to think of division as multiplication by the reciprocal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
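Programming languages generally follow the same hierarchy of operations, so the convention can be illustrated by letting an interpreter evaluate a few expressions; the snippet below uses Python purely as an illustration.

```python
# Multiplication binds more tightly than addition, exactly as in the text:
print(2 + 3 * 4)        # 14, i.e. 2 + (3*4)
print((2 + 3) * 4)      # 20: grouping symbols override the default hierarchy
# Exponents written as superscripts are grouped; in code the grouping is explicit:
print(2 ** (1 + 2))     # 8, the exponent is the sum 1 + 2
# Subtraction is not associative, which is why grouping matters:
print((10 - 4) - 3, 10 - (4 - 3))   # 3 and 9
```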
[ { "math_id": 0, "text": "(2x+3)(3x+4)" }, { "math_id": 1, "text": "(2(a+b)+3)" }, { "math_id": 2, "text": "[(2(a+b)+3][3x+4]" }, { "math_id": 3, "text": "\\{a, b, c\\}" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "b" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "(a + b) + c = a + (b + c)" } ]
https://en.wikipedia.org/wiki?curid=73545085
735452
McEliece cryptosystem
Asymmetric encryption algorithm developed by Robert McEliece In cryptography, the McEliece cryptosystem is an asymmetric encryption algorithm developed in 1978 by Robert McEliece. It was the first such scheme to use randomization in the encryption process. The algorithm has never gained much acceptance in the cryptographic community, but is a candidate for "post-quantum cryptography", as it is immune to attacks using Shor's algorithm and – more generally – measuring coset states using Fourier sampling. The algorithm is based on the hardness of decoding a general linear code (which is known to be NP-hard). For a description of the private key, an error-correcting code is selected for which an efficient decoding algorithm is known, and that is able to correct formula_0 errors. The original algorithm uses binary Goppa codes (subfield codes of algebraic geometry codes of a genus-0 curve over finite fields of characteristic 2); these codes can be efficiently decoded, thanks to an algorithm due to Patterson. The public key is derived from the private key by disguising the selected code as a general linear code. For this, the code's generator matrix formula_1 is perturbed by two randomly selected invertible matrices formula_2 and formula_3 (see below). Variants of this cryptosystem exist, using different types of codes. Most of them were proven less secure; they were broken by structural decoding. McEliece with Goppa codes has resisted cryptanalysis so far. The most effective attacks known use information-set decoding algorithms. A 2008 paper describes both an attack and a fix. Another paper shows that for quantum computing, key sizes must be increased by a factor of four due to improvements in information set decoding. The McEliece cryptosystem has some advantages over, for example, RSA. The encryption and decryption are faster. For a long time, it was thought that McEliece could not be used to produce signatures. However, a signature scheme can be constructed based on the Niederreiter scheme, the dual variant of the McEliece scheme. One of the main disadvantages of McEliece is that the private and public keys are large matrices. For a standard selection of parameters, the public key is 512 kilobits long. Scheme definition. McEliece consists of three algorithms: a probabilistic key generation algorithm that produces a public and a private key, a probabilistic encryption algorithm, and a deterministic decryption algorithm. All users in a McEliece deployment share a set of common security parameters: formula_4. Key generation. The principle is that Alice chooses a linear code formula_5 from some family of codes for which she knows an efficient decoding algorithm, and makes formula_5 public knowledge but keeps the decoding algorithm secret. Such a decoding algorithm requires not just knowing formula_5, in the sense of knowing an arbitrary generator matrix, but requires one to know the parameters used when specifying formula_5 in the chosen family of codes. For instance, for binary Goppa codes, this information would be the Goppa polynomial and the code locators. Therefore, Alice may publish a suitably obfuscated generator matrix of formula_5. More specifically, the steps are as follows: Message encryption. Suppose Bob wishes to send a message "m" to Alice whose public key is formula_12: Message decryption. Upon receipt of formula_20, Alice performs the following steps to decrypt the message: Proof of message decryption.
Note that formula_26, and that formula_3 is a permutation matrix, thus formula_27 has weight formula_0. The Goppa code generated by formula_1 can correct up to formula_0 errors, and the word formula_28 is at distance at most formula_0 from formula_29. Therefore, the correct code word formula_30 is obtained. Multiplying with the inverse of formula_2 gives formula_31, which is the plaintext message. Key sizes. Because there is a free choice in the matrix formula_2, it is common to express formula_32 in "systematic form" so that the last formula_15 columns correspond to the identity matrix formula_33. This reduces the key size to formula_34. McEliece originally suggested security parameter sizes of formula_35, resulting in a public key size of 524 × (1024 − 524) =. Recent analysis suggests parameter sizes of formula_36 for 80 bits of security when using standard algebraic decoding, or formula_37 when using list decoding for the Goppa code, giving rise to public key sizes of and respectively. For resiliency against quantum computers, sizes of formula_38 with a Goppa code were proposed, giving the size of public key of . In its round 3 submission to the NIST post-quantum standardization process, the highest level of security, level 5, is given for parameter sets 6688128, 6960119, and 8192128. The parameters are formula_39, formula_40, formula_41 respectively. Attacks. An attack consists of an adversary, who knows the public key formula_12 but not the private key, deducing the plaintext from some intercepted ciphertext formula_42. Such attempts should be infeasible. There are two main branches of attacks for McEliece: Brute-force / unstructured attacks. The attacker knows formula_43, the generator matrix of an formula_44 code formula_45 that is combinatorially able to correct formula_0 errors. The attacker may ignore the fact that formula_45 is really the obfuscation of a structured code chosen from a specific family, and instead just use an algorithm for decoding with any linear code. Several such algorithms exist, such as going through each codeword of the code, syndrome decoding, or information set decoding. Decoding a general linear code, however, is known to be NP-hard, and all of the above-mentioned methods have exponential running time. In 2008, Bernstein, Lange, and Peters described a practical attack on the original McEliece cryptosystem, using the information set decoding method by Stern. Using the parameters originally suggested by McEliece, the attack could be carried out in 2^60.55 bit operations. Since the attack is embarrassingly parallel (no communication between nodes is necessary), it can be carried out in days on modest computer clusters. Structural attacks. The attacker may instead attempt to recover the "structure" of formula_5, thereby recovering the efficient decoding algorithm formula_7 or another sufficiently strong, efficient decoding algorithm. The family of codes from which formula_5 is chosen completely determines whether this is possible for the attacker. Many code families have been proposed for McEliece, and most of them have been completely "broken" in the sense that attacks have been found that recover an efficient decoding algorithm, such as Reed-Solomon codes. The originally proposed binary Goppa codes remain one of the few suggested families of codes that have largely resisted attempts at devising structural attacks. Post-quantum encryption candidate.
A variant of this algorithm combined with NTS-KEM was entered into and selected during the third round of the NIST post-quantum encryption competition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
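The structure of the three algorithms can be conveyed by a toy Python sketch in which a (7, 4) Hamming code with t = 1 stands in for the binary Goppa code; this is purely illustrative and offers no security, and the helper functions (rand_invertible, inv_gf2) and the use of a systematic generator matrix are our own choices rather than part of the scheme's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy (7,4) Hamming code with t = 1 standing in for the Goppa code:
# G = [I | A] is a systematic generator matrix, H = [A^T | I] its parity check.
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
H = np.hstack([A.T, np.eye(3, dtype=int)])

def rand_invertible(k):
    """Random k x k matrix invertible over GF(2) (odd integer determinant)."""
    while True:
        S = rng.integers(0, 2, (k, k))
        if round(np.linalg.det(S)) % 2 == 1:
            return S

def inv_gf2(M):
    """Gauss-Jordan matrix inversion over GF(2)."""
    n = len(M)
    aug = np.hstack([M % 2, np.eye(n, dtype=int)])
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r, col])
        aug[[col, piv]] = aug[[piv, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]
    return aug[:, n:]

# Key generation: public key (G_pub, t); private key (S, P, decoder for the code)
S = rand_invertible(4)
P = np.eye(7, dtype=int)[rng.permutation(7)]
G_pub = S @ G @ P % 2

# Encryption: c = m * G_pub + e, with e a random error vector of weight t = 1
m = np.array([1, 0, 1, 1])
e = np.zeros(7, dtype=int)
e[rng.integers(7)] = 1
c = (m @ G_pub + e) % 2

# Decryption: undo P, correct the single error by syndrome decoding, undo S
c_hat = c @ inv_gf2(P) % 2
s = H @ c_hat % 2
if s.any():
    err = next(j for j in range(7) if (H[:, j] == s).all())
    c_hat[err] ^= 1
m_rec = c_hat[:4] @ inv_gf2(S) % 2     # first 4 bits of the codeword equal m*S
assert (m_rec == m).all()
```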
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "n, k, t" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "(n, k)" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "k \\times k" }, { "math_id": 9, "text": "n \\times n" }, { "math_id": 10, "text": "k \\times n" }, { "math_id": 11, "text": "{\\hat G} = SGP" }, { "math_id": 12, "text": "({\\hat G}, t)" }, { "math_id": 13, "text": "(S, P, A)" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "c = m{\\hat G}" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "z" }, { "math_id": 19, "text": "c = c^{\\prime} + z" }, { "math_id": 20, "text": "c" }, { "math_id": 21, "text": "P^{-1}" }, { "math_id": 22, "text": "{\\hat c} = cP^{-1}" }, { "math_id": 23, "text": "{\\hat c}" }, { "math_id": 24, "text": "{\\hat m}" }, { "math_id": 25, "text": "m = {\\hat m}S^{-1}" }, { "math_id": 26, "text": "{\\hat c} = cP^{-1} = m{\\hat G}P^{-1} + zP^{-1} = mSG + zP^{-1}" }, { "math_id": 27, "text": "zP^{-1}" }, { "math_id": 28, "text": "mSG" }, { "math_id": 29, "text": "cP^{-1}" }, { "math_id": 30, "text": "{\\hat m} = mS" }, { "math_id": 31, "text": "m = {\\hat m}S^{-1}= mSS^{-1}" }, { "math_id": 32, "text": "{\\hat G}" }, { "math_id": 33, "text": "{\\hat G} = ({\\tilde G}|I)" }, { "math_id": 34, "text": "(n-k) \\times k" }, { "math_id": 35, "text": "n=1024, k=524, t=50" }, { "math_id": 36, "text": "n=2048, k=1751, t=27" }, { "math_id": 37, "text": "n=1632, k=1269, t=34" }, { "math_id": 38, "text": "n=6960, k=5413, t=119" }, { "math_id": 39, "text": "n=6688, k=128, t= 13" }, { "math_id": 40, "text": "n= 6960, k = 119, t = 13 " }, { "math_id": 41, "text": "n=8192, k =128, t = 13 " }, { "math_id": 42, "text": "y \\in \\mathbb{F}_2^n" }, { "math_id": 43, "text": "\\hat G" }, { "math_id": 44, "text": "(n,k)" }, { "math_id": 45, "text": "\\hat C" } ]
https://en.wikipedia.org/wiki?curid=735452
73546291
Wave run-up
Height that waves reach on a slope Wave run-up is the height to which waves run up the slope of a revetment, bank or dike, regardless of whether the waves are breaking or not. Conversely, "wave run-down" is the height to which waves recede. These heights are always measured vertically (and not along the slope). The wave run-up height, denoted by formula_0, formula_1, or formula_2, is a very important parameter in coastal engineering as, together with the design highest still water level, it determines the required crest height of a dike or revetment. History. The first scientific measurements of wave run-up were carried out by the Lorentz Committee in preparation for the works to close off the Zuiderzee. The Committee measured the wave height and wave run-up at various locations in 1920, but established that state of the art methods for measuring waves in the field during storms were inadequate. As a result, scale tests were also undertaken, but these also proved to be of very limited efficacy due to the fact that only regular waves (idealised, periodic waves with constant amplitude and a fixed time period between successive wave crests, following a sinusoidal pattern) could be modelled at the time. The methods and technology available to the committee at the time did not permit model testing of the more realistic and complex irregular waves (consisting of varying heights, periods and directions), which provide a more accurate representation of the actual conditions faced by coastal structures and shorelines. It was found, however, that the depth in front of the dike is very important for wave run-up and that, at least for the range of observations in the committee's measurements, the slope ratio does not play a major role. Nearly all dikes in the Netherlands at that time had a slope of 1:3. Current knowledge indicates that during storms and on gentle coastal slopes, the significant wave height is approximately half the water depth. This relationship appears to be accurate, and the observation is more pronounced for slopes around 1:3. This research was continued during the Zuiderzee Works, and eventually led to the (old) Delft formula for wave run-up: formula_3 in which: This formula proved to be generally applicable for smooth slopes and relatively steep (storm) waves. Subsequently, it was discovered that longer (swell) waves resulted in higher run-up. To account for this, the wave period was incorporated into the formula using the Iribarren number, formula_7, leading to the development of Hunt's Formula: formula_8 This formula was also valid for regular waves. The Old Delft Formula and Hunt's Formula are identical for waves with a steepness of 1/64, or about 2%. For higher values of formula_7, Hunt's formula has a limit value: formula_9 formula_10 formula_11. van der Meer, TAW and continuing development of formulae. In 1988, van der Meer provided formulae for wave run-up on rubble mound breakwaters, based on tests with rock-armoured straight slopes. He also introduced a notional permeability factor formula_12 for the structure. This factor also accounts for the effect of the pore volume. Defining formula_13 at the run-up level of exceedance probability formula_14, the formula, valid for formula_15 and head-on waves, is: formula_16 The term formula_17 reaches a constant maximum value equal to formula_18 in the case of permeable structures, i.e., formula_19. 
This corresponds to the region of surging waves, where there is no real wave breaking and where wave steepness and slope angle do not influence the run-up. The coefficients formula_20 formula_21 and formula_18 are presented in the table below: The values of the coefficients highlight the considerable variability of the run-up level from one wave to another in irregular seas. For run-up levels on smooth slopes, work in the Netherlands by the Technische Adviescommissie voor de Waterkeringen (English: Technical Advisory Committee on water Defences) in 1974 discussed the reduction in run-up due to different types of surface roughness. Irregular waves. In practical scenarios, waves are irregular, consisting of a combination of waves with varying heights, periods, and directions. These waves are typically analysed using statistical methods and spectral analysis, providing a more accurate representation of the actual conditions faced by coastal structures and shorelines. Consequently, it is not possible to define a single wave run-up value. Instead, a wave run-up with a specific probability of exceedance is used, typically set at 2%. This wave run-up represents the height exceeded by 2% of the waves in a wave field. Research indicates that wave run-up follows a Rayleigh distribution, similar to the waves themselves. A probability of exceedance value has been chosen that is small enough to prevent overtopping waves from causing damage to the inner slope. The 2% value has been adopted internationally and was arbitrarily selected by the Dutch Waterloopkundig Laboratorium shortly before 1940. Considering the function, 1% or 5% could have also been possible. The choice of 2% was based on the duration of experimental designs, as a complete trial could be conducted in half a day. In 1972, Jurjen Battjes, commissioned by the Dutch Technical Advisory Committee for Flood Defences, summarised the available research and provided a solid theoretical foundation. This work led to an improved version of Hunt's Formula, which explicitly included parameters for the angle of incidence of the waves, the effect of a berm, and the slope's roughness. However, the available experimental data on roughness and the berm were insufficient to establish a definitive formula. Subsequent research was conducted in the following years, with an emphasis on wave overtopping as a more indicative factor for dike height than wave run-up. This research ultimately resulted in a Technical Report in 2002 by the Dutch organisation, TAW. The wave run-up formula mentioned in this report remains in use, and the EurOtop manual has adopted it. The scope of validity has been further expanded in the EurOtop manual, featuring modified formulas. Modern wave run-up formulas. The EurOtop manual provides a general formula (Formula 1.4 in the manual) for wave run-up: formula_24 with a maximum value around 3. The Iribarren number formula_25 is then used based on the period determined using the first negative moment of the wave spectrum. Additionally, formula_22 is the reduction coefficient for the factors described below. The following equation is valid: formula_26 in which: A range is provided for Hillblock and Ronataille materials, as their reduction coefficient is dependent on wave height. A similar phenomenon occurs with grass. When subjected to high waves, natural grass becomes very smooth, resulting in a reduction coefficient formula_34. However, for smaller waves — approximately or less — natural grass tends to be much rougher. 
In such cases, one may opt for a reduction coefficient of formula_33 below 1.0. When dealing with short-crested waves, the highest run-up is caused by head-on waves (those with an angle of incidence, formula_35) and equates to the run-up for long-crested waves. For smooth slopes, the run-up decreases slightly with formula_36. The run-down typically ranges between a third and a half of the run-up. For breakwaters and revetments constructed with rock armour, the maximum run-down level may indicate the minimum downward extension of the primary armour, and a potential upper level for introducing a berm with a smaller armour size. Wave run-down. For wave run-down there is a similar formula: formula_37 Flood mark. Following storm events, a layer of floating debris, known as the flood mark or flotsam, often remains on the slope. This tide mark indicates the maximum wave run-up during the preceding storm. As the flood mark is situated near the height of maximum wave run-up and water levels are generally well-documented by nearby tide stations, it is straightforward to calculate the "Ru"2% of the storm by subtracting the observed storm surge level from the flood mark level. In the past, authorities in the Netherlands systematically recorded these observations for most dikes, resulting in a collection of flood mark heights for each dike section. The statistics of flood mark heights can be utilised to determine dike height, which should comprise the design water level plus a safety height (freeboard). The freeboard at the design water level must be equal to the maximum permissible wave run-up. For a dike with an acceptable load exceedance probability per year, such as 1/500 (as with the temporary dike reinforcement in the Oosterschelde), it is necessary to determine the 1/500 wave run-up. This can be calculated if the 1/500 wave height at the toe of the dike is known. However, this value is rarely measured and must be determined using a computational model, such as SWAN. In many instances, this process can be challenging and prone to errors. By analysing flood mark heights, which involves simply plotting the data on logarithmic paper, it is possible to directly obtain values such as the 1/500 wave run-up, and consequently the required safety height. An example of this can be observed in the accompanying photo of the run-up and flood mark lines at a dike along the Bathpolder in Zeeland. The photo shows two flood mark lines, which represent the wave run-up of two subsequent storms (on October 12, 2009, with water levels at and above mean sea level) in the Bathpolder. In the foreground, there is a slope with Haringman blocks, while the background features a slope of Elastocoast. The wave height during these storms was approximately . The wave run-up was above the storm surge level on the Elastocoast, and above the storm surge level on the Haringman blocks. The slope gradient here is 1:4.2. As a Haringman block measures exactly , the run-up can be assessed in this photo. Subsequent analysis reveals that the reduction coefficient "γf" for Haringman blocks here is 1.0, and for Elastocoast, it is 0.8. Wave run-up simulation. To assess the safety of a dike and the durability of its grass cover, particularly on the sea or river side, a wave run-up simulator can be employed. The wave conditions for which a dike is designed are infrequent, and the strength of grass coverings varies. 
These dike conditions can be replicated in-situ using a wave run-up simulator, allowing the manager of the relevant flood defence system to determine if the grass cover is strong enough to withstand expected waves under extreme conditions. During these tests, the wave run-up simulator is placed on the outer slope and continuously filled with water at a constant flow rate. The flaps at the bottom of the simulator can be opened to varying extents, enabling the simulation of different wave run-up volumes. The wave run-up simulator is one method for assessing the strength of the grass cover. Another approach involves using a sod puller, which can determine the tensile strength of a sod and allows an engineer to convert this tensile strength into a resistance to the load caused by wave run-up. In addition to simulating wave run-up, the simulation of wave impacts and wave overtopping can be achieved using specifically designed generators and simulators. Note. Wave run-up should not be confused with wave set-up (an increase in water level due to breaking waves) or with wind setup (storm surge, an increase in water level due to the driving force of wind). References. General reference &lt;templatestyles src="Reflist/styles.css" /&gt;
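The simplified run-up estimate quoted in the section on modern formulas can be turned into a few lines of Python. The sketch below follows only the simplified description given in this article (run-up proportional to 1.7·γ·ξ with a maximum of around 3·γ times the wave height); the deep-water wavelength expression, the cap and the example numbers are assumptions, and the full EurOtop formulation contains further refinements that are not reproduced here.

```python
import math

def runup_2pct(Hm0, Tm10, slope, gamma_b=1.0, gamma_f=1.0, gamma_beta=1.0):
    """Rough sketch of Ru2% = Hm0 * min(1.7 * gamma * xi, ~3 * gamma), with xi
    the Iribarren number based on the spectral period Tm-1,0 and gamma the
    product of the berm, roughness and obliqueness reduction factors."""
    L0 = 9.81 * Tm10**2 / (2.0 * math.pi)          # deep-water wavelength
    xi = math.tan(slope) / math.sqrt(Hm0 / L0)     # Iribarren number
    gamma = gamma_b * gamma_f * gamma_beta
    return Hm0 * min(1.7 * gamma * xi, 3.0 * gamma)

# Example: 1.5 m waves, Tm-1,0 = 5 s, smooth 1:4 slope, head-on attack
print(round(runup_2pct(1.5, 5.0, math.atan(1 / 4)), 2))
```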
[ { "math_id": 0, "text": "R_u" }, { "math_id": 1, "text": "{R_{up}}" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "Ru = 8 H \\tan \\alpha\n" }, { "math_id": 4, "text": "Ru" }, { "math_id": 5, "text": "H" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\\xi" }, { "math_id": 8, "text": "\\frac{Ru}{H}=\\xi=\\frac{\\tan\\alpha}{(H/L_0)^{\\frac{1}{2}}}\n" }, { "math_id": 9, "text": "\\xi > 2{.}5" }, { "math_id": 10, "text": "\\rightarrow" }, { "math_id": 11, "text": "Ru/H = 2{.}5" }, { "math_id": 12, "text": " P " }, { "math_id": 13, "text": " R_{up} " }, { "math_id": 14, "text": " p " }, { "math_id": 15, "text": " 0.1 \\leq P \\leq 0.6 " }, { "math_id": 16, "text": "\\frac{R_{up}}{H_s} =\n\\begin{cases}\na \\xi_m & \\text{for } \\xi_m < 1.5 \\\\\nb \\xi^c_m & \\text{for } \\xi_m > 1.5\n\\end{cases}" }, { "math_id": 17, "text": " \\frac{R_{up}}{H_s} " }, { "math_id": 18, "text": " d " }, { "math_id": 19, "text": " P > 0.4 " }, { "math_id": 20, "text": " a, b, " }, { "math_id": 21, "text": " c " }, { "math_id": 22, "text": "\\gamma" }, { "math_id": 23, "text": "\\beta" }, { "math_id": 24, "text": "\\frac{Ru_{2\\%}}{H_{m0}} = 1.7 \\gamma\\, \\xi_{m-1,0}" }, { "math_id": 25, "text": " \\xi_{m-1,0}" }, { "math_id": 26, "text": "\\gamma = \\gamma_b \\gamma_f \\gamma_\\beta" }, { "math_id": 27, "text": "\\gamma_b" }, { "math_id": 28, "text": "\\gamma_b \\approx \\frac{6}{B/H_{m0}+6}" }, { "math_id": 29, "text": "B" }, { "math_id": 30, "text": "0.6 < \\gamma_b < 1.0" }, { "math_id": 31, "text": "\\gamma_\\beta" }, { "math_id": 32, "text": "\\gamma_\\beta = 1-0.0022\\,\\beta" }, { "math_id": 33, "text": "\\gamma_f" }, { "math_id": 34, "text": "\\gamma_f = 1{.}0" }, { "math_id": 35, "text": " \\beta = 0 " }, { "math_id": 36, "text": " \\beta " }, { "math_id": 37, "text": "\\frac{Rd_{2\\%}}{H_{m0}}=\\tfrac{1}{3} \\gamma_f \\xi_{m-1,0}" } ]
https://en.wikipedia.org/wiki?curid=73546291
7354687
Winkel tripel projection
Pseudoazimuthal compromise map projection The Winkel tripel projection (Winkel III), a modified azimuthal map projection of the world, is one of three projections proposed by German cartographer Oswald Winkel (7 January 1874 – 18 July 1953) in 1921. The projection is the arithmetic mean of the equirectangular projection and the Aitoff projection. The name (German for 'triple') refers to Winkel's goal of minimizing three kinds of distortion: area, direction, and distance. Algorithm. formula_0 where "λ" is the longitude relative to the central meridian of the projection, "φ" is the latitude, "φ"1 is the standard parallel for the equirectangular projection, sinc is the unnormalized cardinal sine function, and formula_1 In his proposal, Winkel set formula_2 A closed-form inverse mapping does not exist, and computing the inverse numerically requires the use of iterative methods. Comparison with other projections. David M. Goldberg and J. Richard Gott III showed that the Winkel tripel fares better against several other projections analyzed against their measures of distortion, producing minimal distance, Tissot indicatrix ellipticity and area errors, and the least skew of any of the projections they studied. By a different metric, Capek's "Q", the Winkel tripel ranked ninth among a hundred map projections of the world, behind the common Eckert IV projection and Robinson projections. In 1998, the Winkel tripel projection replaced the Robinson projection as the standard projection for world maps made by the National Geographic Society. Many educational institutes and textbooks soon followed National Geographic's example in adopting the projection, most of which still utilize it. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
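The algorithm above translates directly into code; the following Python function is a straightforward sketch of the forward mapping only (the inverse, as noted, needs iteration and is not shown). Treating the output as being in units of Earth radii and the handling of the α = 0 singularity are our own choices.

```python
import math

def winkel_tripel(lon, lat, lat1=math.acos(2 / math.pi)):
    """Forward Winkel tripel projection (inputs in radians).  The default
    standard parallel is Winkel's own choice, arccos(2/pi)."""
    alpha = math.acos(math.cos(lat) * math.cos(lon / 2.0))
    sinc = 1.0 if alpha == 0.0 else math.sin(alpha) / alpha   # unnormalized sinc
    x = 0.5 * (lon * math.cos(lat1) + 2.0 * math.cos(lat) * math.sin(lon / 2.0) / sinc)
    y = 0.5 * (lat + math.sin(lat) / sinc)
    return x, y

print(winkel_tripel(math.radians(30), math.radians(45)))
```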
[ { "math_id": 0, "text": "\\begin{align}\n x &= \\frac{1}{2} \\left(\\lambda \\cos \\varphi_1 + \\frac{2 \\cos \\varphi \\sin \\frac{\\lambda}{2}}{\\operatorname{sinc} \\alpha}\\right), \\\\\n y &= \\frac{1}{2} \\left(\\varphi + \\frac{\\sin \\varphi}{\\operatorname{sinc} \\alpha}\\right),\n\\end{align}" }, { "math_id": 1, "text": "\\alpha = \\arccos\\left(\\cos\\varphi \\cos \\frac{\\lambda}{2} \\right)." }, { "math_id": 2, "text": "\\varphi_1 = \\arccos \\frac{2}{\\pi}." } ]
https://en.wikipedia.org/wiki?curid=7354687
7354718
Guy Terjanian
French mathematician Guy Terjanian is a French mathematician who has worked on algebraic number theory. He achieved his Ph.D. under Claude Chevalley in 1970, and at that time published a counterexample to the original form of a conjecture of Emil Artin, which suitably modified had just been proved as the Ax-Kochen theorem. In 1977, he proved that if "p" is an odd prime number, and the natural numbers "x", "y" and "z" satisfy formula_0, then "2p" must divide "x" or "y".
[ { "math_id": 0, "text": "x^{2p} + y^{2p} = z^{2p}" } ]
https://en.wikipedia.org/wiki?curid=7354718
73548203
Integrability of demand
Problem in microeconomics In microeconomic theory, the problem of the integrability of demand functions deals with recovering a utility function (that is, consumer preferences) from a given Walrasian demand function. The "integrability" in the name comes from the fact that demand functions can be shown to satisfy a system of partial differential equations in prices, and solving (integrating) this system is a crucial step in recovering the underlying utility function generating demand. The problem was considered by Paul Samuelson in his book Foundations of Economic Analysis, and conditions for its solution were given by him in a 1950 article. More general conditions for a solution were later given by Leonid Hurwicz and Hirofumi Uzawa. Mathematical formulation. Given consumption space formula_0 and a known Walrasian demand function formula_1, solving the problem of integrability of demand consists in finding a utility function formula_2 such that formula_3 That is, it is essentially "reversing" the consumer's utility maximization problem. Sufficient conditions for solution. There are essentially two steps in solving the integrability problem for a demand function. First, one recovers an expenditure function formula_4 for the consumer. Then, with the properties of expenditure functions, one can construct an at-least-as-good set formula_5 which is equivalent to finding a utility function formula_6. If the demand function formula_7 is homogeneous of degree zero, satisfies Walras' Law, and has a negative semi-definite substitution matrix formula_8, then it is possible to follow those steps to find a utility function formula_9 that generates demand formula_7. Proof: if the first two conditions (homogeneity of degree zero and Walras' Law) are met, then duality between the expenditure minimization problem and the utility maximization problem tells us that formula_10 where formula_11 is the consumer's indirect utility function and formula_12 is the consumer's Hicksian demand function. Fix a utility level formula_13. From Shephard's lemma and the identity above, we have D_p e(p) = x(p, e(p)) (1), where we omit the fixed utility level formula_14 for conciseness. (1) is a system of PDEs in the prices vector formula_15, and Frobenius' theorem can be used to show that if the matrix formula_16 is symmetric, then it has a solution. Notice that the matrix above is simply the substitution matrix formula_17, which we assumed to be symmetric at the outset. So (1) has a solution, and it is (at least theoretically) possible to find an expenditure function formula_18 such that formula_19. For the second step, by definition, formula_20 where formula_21. By the properties of formula_22, it is not too hard to show that formula_23. Doing some algebraic manipulation with the inequality formula_24, one can reconstruct formula_25 in its original form with formula_26. If that is done, one has found a utility function formula_27 that generates consumer demand formula_28. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
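The first (integration) step can be illustrated numerically. The sketch below integrates the system de/dp = x(p, e(p)) along a straight price path with a forward-Euler scheme, starting from a Cobb–Douglas demand function; the choice of demand, path, step count and integration scheme are all assumptions made for this example, and the closed-form value is quoted only as a check.

```python
def recover_expenditure(demand, p_start, p_end, w0, steps=10000):
    """Integrate the Shephard's-lemma system  de/dp_i = x_i(p, e(p))  along a
    straight path from p_start to p_end, with e(p_start) = w0.  This is only a
    sketch of the integration step; path and scheme are arbitrary choices."""
    n = len(p_start)
    e = w0
    p = list(p_start)
    dp = [(p_end[i] - p_start[i]) / steps for i in range(n)]
    for _ in range(steps):
        x = demand(p, e)
        e += sum(x[i] * dp[i] for i in range(n))      # forward-Euler update
        p = [p[i] + dp[i] for i in range(n)]
    return e

# Cobb-Douglas demand x = (a*w/p1, (1-a)*w/p2); expenditure scales as p1^a * p2^(1-a)
a = 0.3
demand = lambda p, w: (a * w / p[0], (1 - a) * w / p[1])
e_num = recover_expenditure(demand, (1.0, 1.0), (2.0, 3.0), w0=10.0)
print(e_num, 10.0 * 2.0**a * 3.0**(1 - a))   # numerical vs analytic value
```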
[ { "math_id": 0, "text": "X " }, { "math_id": 1, "text": " x: \\mathbb{R}_{++}^{L} \\times \\mathbb{R}_{+} \\rightarrow X " }, { "math_id": 2, "text": " u: X \\rightarrow \\mathbb{R} " }, { "math_id": 3, "text": " x(p, w) = \\operatorname{argmax}_{x \\in X} \\{u(x) : p \\cdot x \\leq w\\}" }, { "math_id": 4, "text": "e(p, u)" }, { "math_id": 5, "text": "V_u = \\{x \\in \\mathbb{R}^L_+: u(x) \\geq u\\}" }, { "math_id": 6, "text": "u(x) " }, { "math_id": 7, "text": " x(p, w) " }, { "math_id": 8, "text": " S(p, w)" }, { "math_id": 9, "text": " u(x) " }, { "math_id": 10, "text": "x(p, w) = h(p, v(p, w))" }, { "math_id": 11, "text": "v(p, w) = u(x(p, w))" }, { "math_id": 12, "text": "h(p, u)" }, { "math_id": 13, "text": "u_0 = v(p, w)" }, { "math_id": 14, "text": "u_0 " }, { "math_id": 15, "text": "p" }, { "math_id": 16, "text": "D_p x(p, w) + D_w x(p, w) x(p, w) " }, { "math_id": 17, "text": "S(p, w)" }, { "math_id": 18, "text": " e(p) " }, { "math_id": 19, "text": " p \\cdot x(p, e(p)) = e(p)" }, { "math_id": 20, "text": "e(p) = e(p, u_0) = \\min \\{p \\cdot x : x \\in V_{u_0}\\}" }, { "math_id": 21, "text": " V_{u_0} = \\{x \\in \\mathbb{R}^L_+: u(x) \\geq u_0\\}" }, { "math_id": 22, "text": " e(p, u) " }, { "math_id": 23, "text": " V_{u_0} = \\{x \\in \\mathbb{R}^L_+: p \\cdot x \\geq e(p, u_0)\\}" }, { "math_id": 24, "text": " p \\cdot x \\geq e(p, u_0)" }, { "math_id": 25, "text": " V_{u_0} " }, { "math_id": 26, "text": "u(x) \\geq u_0" }, { "math_id": 27, "text": "u: X \\rightarrow \\mathbb{R} " }, { "math_id": 28, "text": "x(p, w)" } ]
https://en.wikipedia.org/wiki?curid=73548203
735512
K–Ar dating
Radiometric dating method Potassium–argon dating, abbreviated K–Ar dating, is a radiometric dating method used in geochronology and archaeology. It is based on measurement of the product of the radioactive decay of an isotope of potassium (K) into argon (Ar). Potassium is a common element found in many materials, such as feldspars, micas, clay minerals, tephra, and evaporites. In these materials, the decay product 40Ar is able to escape the liquid (molten) rock but starts to accumulate when the rock solidifies (recrystallizes). The amount of argon loss that occurs is a function of the purity of the sample, the composition of the mother material, and a number of other factors. These factors introduce error limits on the upper and lower bounds of dating, so that the final determination of age is reliant on the environmental factors during formation, melting, and exposure to decreased pressure or open air. Time since recrystallization is calculated by measuring the ratio of the amount of 40Ar accumulated to the amount of 40K remaining. The long half-life of 40K allows the method to be used to calculate the absolute age of samples older than a few thousand years. The quickly cooled lavas that make nearly ideal samples for K–Ar dating also preserve a record of the direction and intensity of the local magnetic field as the sample cooled past the Curie temperature of iron. The geomagnetic polarity time scale was calibrated largely using K–Ar dating. Decay series. Potassium naturally occurs in 3 isotopes: 39K (93.2581%), 40K (0.0117%), 41K (6.7302%). 39K and 41K are stable. The 40K isotope is radioactive; it decays with a half-life of about 1.25 billion years to 40Ca and 40Ar. Conversion to stable 40Ca occurs via electron emission (beta decay) in 89.3% of decay events. Conversion to stable 40Ar occurs via electron capture in the remaining 10.7% of decay events. Argon, being a noble gas, is a minor component of most rock samples of geochronological interest: It does not bind with other atoms in a crystal lattice. When 40K decays to 40Ar, the atom typically remains trapped within the lattice because it is larger than the spaces between the other atoms in a mineral crystal. But it can escape into the surrounding region when the right conditions are met, such as changes in pressure or temperature. 40Ar atoms can diffuse through and escape from molten magma because most crystals have melted and the atoms are no longer trapped. Entrained argon – diffused argon that fails to escape from the magma – may again become trapped in crystals when magma cools to become solid rock again. After the recrystallization of magma, more 40K will decay and 40Ar will again accumulate, along with the entrained argon atoms, trapped in the mineral crystals. Measurement of the quantity of 40Ar atoms is used to compute the amount of time that has passed since a rock sample has solidified. Despite 40Ca being the favored daughter nuclide, it is rarely useful in dating because calcium is so common in the crust, with 40Ca being the most abundant isotope. Thus, the amount of calcium originally present is not known and can vary enough to confound measurements of the small increases produced by radioactive decay. Formula.
The ratio of the amount of 40Ar to that of 40K is directly related to the time elapsed since the rock was cool enough to trap the 40Ar by the equation: formula_0, where: t is the time elapsed, t1/2 is the half-life of 40K, Kf is the amount of 40K remaining in the sample, and Arf is the amount of radiogenic 40Ar found in the sample. The scale factor 0.109 corrects for the unmeasured fraction of 40K which decayed into 40Ca; the sum of the measured 40K and the scaled amount of 40Ar gives the amount of 40K which was present at the beginning of the elapsed time period. In practice, each of these values may be expressed as a proportion of the total potassium present, as only relative, not absolute, quantities are required. Obtaining the data. To obtain the content ratio of isotopes 40Ar to 40K in a rock or mineral, the amount of 40Ar is measured by mass spectrometry of the gases released when a rock sample is volatilized in vacuum. The potassium is quantified by flame photometry or atomic absorption spectroscopy. The amount of 40K is rarely measured directly. Rather, the more common 39K is measured and that quantity is then multiplied by the accepted ratio of 40K/39K (i.e., 0.0117%/93.2581%, see above). The amount of 36Ar is also measured to assess how much of the total argon is atmospheric in origin. Assumptions. The following assumptions must be true for computed dates to be accepted as representing the true age of the rock: Both flame photometry and mass spectrometry are destructive tests, so particular care is needed to ensure that the aliquots used are truly representative of the sample. Ar–Ar dating is a similar technique that compares isotopic ratios from the same portion of the sample to avoid this problem. Applications. Due to the long half-life of 40K, the technique is most applicable for dating minerals and rocks more than 100,000 years old. For shorter timescales, it is unlikely that enough 40Ar will have had time to accumulate to be accurately measurable. K–Ar dating was instrumental in the development of the geomagnetic polarity time scale. Although it finds the most utility in geological applications, it plays an important role in archaeology. One archeological application has been in bracketing the age of archeological deposits at Olduvai Gorge by dating lava flows above and below the deposits. It has also been indispensable in other early east African sites with a history of volcanic activity such as Hadar, Ethiopia. The K–Ar method continues to have utility in dating clay mineral diagenesis. In 2017, the successful dating of illite formed by weathering was reported. This finding indirectly led to the dating of the strandflat of Western Norway from where the illite was sampled. Clay minerals are less than 2 μm thick and cannot easily be irradiated for Ar–Ar analysis because 39Ar recoils from the crystal lattice. In 2013, the K–Ar method was used by the Mars Curiosity rover to date a rock on the Martian surface, the first time a rock has been dated from its mineral ingredients while situated on another planet. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
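The age equation above is simple enough to evaluate directly; the following Python helper is only an illustration of that formula, with the half-life constant taken as the commonly quoted value of about 1.248 × 10^9 years and the input quantities assumed to be in consistent relative units.

```python
import math

T_HALF_K40 = 1.248e9   # half-life of 40K in years (commonly quoted value)

def k_ar_age(k40, ar40, branch=0.109):
    """Age from the K-Ar equation quoted above:
    t = (t_half / ln 2) * ln((K_f + Ar_f/0.109) / K_f),
    where k40 and ar40 are the amounts of 40K and radiogenic 40Ar in any
    consistent relative units, and 0.109 is the scale factor used in the text."""
    return T_HALF_K40 / math.log(2) * math.log((k40 + ar40 / branch) / k40)

# A sample in which radiogenic 40Ar amounts to 1% of the remaining 40K
print(f"{k_ar_age(1.0, 0.01):.3e} years")
```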
[ { "math_id": 0, "text": " t = \\frac{t_\\frac{1}{2}}{\\ln(2)} \\ln\\left(\\frac{\\ce{K}_f + \\frac{\\ce{Ar}_f}{0.109}}{\\ce{K}_f}\\right)" } ]
https://en.wikipedia.org/wiki?curid=735512
73558859
Branch number
In cryptography, the branch number is a numerical value that characterizes the amount of diffusion introduced by a vectorial Boolean function F that maps an input vector a to output vector formula_0. For the (usual) case of a linear F the value of the "differential branch number" is produced by: formula_4, where formula_1 denotes the Hamming weight of a vector (the number of its nonzero components), so that formula_2 and formula_3 are the numbers of nonzero components of the input and of the output, respectively. If both a and formula_0 have s components, the result is obviously limited on the high side by the value formula_5 (this "perfect" result is achieved when any single nonzero component in a makes all components of formula_0 non-zero). A high branch number suggests higher resistance to differential cryptanalysis: small variations of the input will produce large changes of the output, and in order to obtain small variations of the output, large changes of the input will be required. The term was introduced by Daemen and Rijmen in the early 2000s and quickly became a typical tool to assess the diffusion properties of the transformations. Mathematics. The branch number concept is not limited to linear transformations; Daemen and Rijmen provided two general metrics: the "differential branch number", formula_7, where formula_6 denotes the bitwise XOR, and the "linear branch number", formula_11, where formula_10 is the element of the linear approximation table for the input selection pattern formula_8 and output selection pattern formula_9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
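For very small functions the differential branch number can be computed by brute force straight from the definition; the Python sketch below does exactly that, using an exhaustive search over input pairs. The componentwise XOR difference, the toy GF(4) arithmetic and the example mappings are our own illustrative choices.

```python
from itertools import product

def bundle_weight(vec):
    """Number of nonzero components (bundles) of a vector."""
    return sum(1 for v in vec if v)

def differential_branch_number(F, alphabet, s):
    """Exhaustive evaluation of B_d(F) = min over a != b of
    W(a XOR b) + W(F(a) XOR F(b)), with componentwise XOR as the difference.
    Only practical for tiny component alphabets and small s."""
    best = s + 1
    inputs = list(product(alphabet, repeat=s))
    for a in inputs:
        for b in inputs:
            if a == b:
                continue
            din = tuple(x ^ y for x, y in zip(a, b))
            dout = tuple(x ^ y for x, y in zip(F(a), F(b)))
            best = min(best, bundle_weight(din) + bundle_weight(dout))
    return best

def gf4_mul(a, b):
    """Multiplication in GF(4) (reduction polynomial x^2 + x + 1), via a table."""
    table = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
    return table[a][b]

# Componentwise identity on two 2-bit bundles: no diffusion, branch number 2.
print(differential_branch_number(lambda v: v, range(4), 2))
# A 2x2 MDS-style mix over GF(4): reaches the maximum value s + 1 = 3.
mix = lambda v: (v[0] ^ v[1], gf4_mul(2, v[0]) ^ gf4_mul(3, v[1]))
print(differential_branch_number(mix, range(4), 2))
```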
[ { "math_id": 0, "text": "F(a)" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "W(a)" }, { "math_id": 3, "text": "W(F(a))" }, { "math_id": 4, "text": "B_d(F) = \\underset {a \\ne 0} {\\min} (W(a) + W(F(a)))" }, { "math_id": 5, "text": "s+1" }, { "math_id": 6, "text": "\\oplus" }, { "math_id": 7, "text": "B_d(F) = \\underset {a \\ne b} {\\min} (W(a \\oplus b) + W(F(a) \\oplus F(b))" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\beta" }, { "math_id": 10, "text": "LAT(\\alpha,\\beta)" }, { "math_id": 11, "text": "B_l(F) = \\underset {\\alpha \\ne 0,\\beta,LAT(\\alpha,\\beta) \\ne 0} {\\min} (W(\\alpha) + W(\\beta))" } ]
https://en.wikipedia.org/wiki?curid=73558859
735611
Network analysis (electrical circuits)
Determining all voltages and currents within an electrical network In electrical engineering and electronics, a "network" is a collection of interconnected components. Network analysis is the process of finding the voltages across, and the currents through, all network components. There are many techniques for calculating these values; however, for the most part, the techniques assume linear components. Except where stated, the methods described in this article are applicable only to "linear" network analysis. Equivalent circuits. A useful procedure in network analysis is to simplify the network by reducing the number of components. This can be done by replacing physical components with other notional components that have the same effect. A particular technique might directly reduce the number of components, for instance by combining impedances in series. On the other hand, it might merely change the form into one in which the components can be reduced in a later operation. For instance, one might transform a voltage generator into a current generator using Norton's theorem in order to be able to later combine the internal resistance of the generator with a parallel impedance load. A resistive circuit is a circuit containing only resistors, ideal current sources, and ideal voltage sources. If the sources are constant (DC) sources, the result is a DC circuit. Analysis of a circuit consists of solving for the voltages and currents present in the circuit. The solution principles outlined here also apply to phasor analysis of AC circuits. Two circuits are said to be equivalent with respect to a pair of terminals if the voltage across the terminals and current through the terminals for one network have the same relationship as the voltage and current at the terminals of the other network. If formula_0 implies formula_1 for all (real) values of "V"1, then with respect to terminals ab and xy, circuit 1 and circuit 2 are equivalent. The above is a sufficient definition for a one-port network. For more than one port, it must be defined that the currents and voltages between all pairs of corresponding ports must bear the same relationship. For instance, star and delta networks are effectively three port networks and hence require three simultaneous equations to fully specify their equivalence. Impedances in series and in parallel. Some two-terminal networks of impedances can eventually be reduced to a single impedance by successive applications of impedances in series or impedances in parallel. Delta-wye transformation. A network of impedances with more than two terminals cannot be reduced to a single impedance equivalent circuit. An n-terminal network can, at best, be reduced to n impedances (at worst formula_5). For a three terminal network, the three impedances can be expressed as a three node delta (Δ) network or four node star (Y) network. These two networks are equivalent and the transformations between them are given below. A general network with an arbitrary number of nodes cannot be reduced to the minimum number of impedances using only series and parallel combinations. In general, Y-Δ and Δ-Y transformations must also be used. For some networks the extension of Y-Δ to star-polygon transformations may also be required. For equivalence, the impedances between any pair of terminals must be the same for both networks, resulting in a set of three simultaneous equations. The equations below are expressed as resistances but apply equally to the general case with impedances.
formula_6 formula_7 General form of network node elimination. The star-to-delta and series-resistor transformations are special cases of the general resistor network node elimination algorithm. Any node connected by N resistors ("R"1 … "RN") to nodes 1 … N can be replaced by formula_8 resistors interconnecting the remaining N nodes. The resistance between any two nodes x, y is given by: formula_9 For a star-to-delta ("N" = 3) this reduces to: formula_10 For a series reduction ("N" = 2) this reduces to: formula_11 For a dangling resistor ("N" = 1) it results in the elimination of the resistor because formula_12. Source transformation. A generator with an internal impedance (i.e. non-ideal generator) can be represented as either an ideal voltage generator or an ideal current generator plus the impedance. These two forms are equivalent and the transformations are given below. If the two networks are equivalent with respect to terminals ab, then V and I must be identical for both networks. Thus, formula_13 or formula_14 Simple networks. Some very simple networks can be analysed without the need to apply the more systematic approaches. Voltage division of series components. Consider n impedances that are connected in series. The voltage formula_15 across any impedance formula_16 is formula_17 Current division of parallel components. Consider n admittances that are connected in parallel. The current formula_18 through any admittance formula_19 is formula_20 for formula_21 formula_22 formula_23 Nodal analysis. Nodal analysis uses the concept of a node voltage and considers the node voltages to be the unknown variables.2-8 - 2-9 For all nodes, except a chosen reference node, the node voltage is defined as the voltage drop from the node to the reference node. Therefore, there are N-1 node voltages for a circuit with N nodes.2-10 In principle, nodal analysis uses Kirchhoff's current law (KCL) at N-1 nodes to get N-1 independent equations. Since equations generated with KCL are in terms of currents going in and out of nodes, these currents, if their values are not known, need to be represented by the unknown variables (node voltages). For some elements (such as resistors and capacitors) getting the element currents in terms of node voltages is trivial. For some common elements where this is not possible, specialized methods are developed. For example, a concept called supernode is used for circuits with independent voltage sources.2-12 - 2-13 Mesh analysis. Mesh — a loop that does not contain an inner loop. Superposition. In this method, the effect of each generator in turn is calculated. All the generators other than the one being considered are removed and either short-circuited in the case of voltage generators or open-circuited in the case of current generators. The total current through or the total voltage across a particular branch is then calculated by summing all the individual currents or voltages. There is an underlying assumption to this method that the total current or voltage is a linear superposition of its parts. Therefore, the method cannot be used if non-linear components are present. Superposition of powers cannot be used to find total power consumed by elements even in linear circuits. Power varies according to the square of total voltage or current and the square of the sum is not generally equal to the sum of the squares. 
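As a concrete illustration, consider a hypothetical loop in which two ideal voltage sources drive a single resistor in series. The short Python sketch below (all values invented for illustration) superposes the partial currents correctly but shows that the partial powers do not add up to the true dissipation.

V1, V2 = 3.0, 2.0      # source voltages (V), illustrative values
R = 2.0                # resistance (ohms)

# Each source considered alone, the other replaced by a short circuit.
I_from_V1 = V1 / R     # 1.5 A
I_from_V2 = V2 / R     # 1.0 A

# Currents superpose ...
I_total = I_from_V1 + I_from_V2          # 2.5 A, the same as (V1 + V2) / R

# ... but powers do not.
P_sum_of_parts = (I_from_V1**2 + I_from_V2**2) * R    # 6.5 W
P_true = I_total**2 * R                               # 12.5 W
print(I_total, P_sum_of_parts, P_true)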
Total power in an element can be found by applying superposition to the voltages and current independently and then calculating power from the total voltage and current. Choice of method. Choice of method is to some extent a matter of taste. If the network is particularly simple or only a specific current or voltage is required then ad-hoc application of some simple equivalent circuits may yield the answer without recourse to the more systematic methods. Transfer function. A transfer function expresses the relationship between an input and an output of a network. For resistive networks, this will always be a simple real number or an expression which boils down to a real number. Resistive networks are represented by a system of simultaneous algebraic equations. However, in the general case of linear networks, the network is represented by a system of simultaneous linear differential equations. In network analysis, rather than use the differential equations directly, it is usual practice to carry out a Laplace transform on them first and then express the result in terms of the Laplace parameter s, which in general is complex. This is described as working in the s-domain. Working with the equations directly would be described as working in the time (or t) domain because the results would be expressed as time varying quantities. The Laplace transform is the mathematical method of transforming between the s-domain and the t-domain. This approach is standard in control theory and is useful for determining stability of a system, for instance, in an amplifier with feedback. Two terminal component transfer functions. For two terminal components the transfer function, or more generally for non-linear elements, the constitutive equation, is the relationship between the current input to the device and the resulting voltage across it. The transfer function, Z(s), will thus have units of impedance, ohms. For the three passive components found in electrical networks, the transfer functions are; For a network to which only steady ac signals are applied, s is replaced with "jω" and the more familiar values from ac network theory result. Finally, for a network to which only steady dc is applied, s is replaced with zero and dc network theory applies. Two port network transfer function. Transfer functions, in general, in control theory are given the symbol H(s). Most commonly in electronics, transfer function is defined as the ratio of output voltage to input voltage and given the symbol A(s), or more commonly (because analysis is invariably done in terms of sine wave response), "A"("jω"), so that; formula_24 The "A" standing for attenuation, or amplification, depending on context. In general, this will be a complex function of "jω", which can be derived from an analysis of the impedances in the network and their individual transfer functions. Sometimes the analyst is only interested in the magnitude of the gain and not the phase angle. In this case the complex numbers can be eliminated from the transfer function and it might then be written as; formula_25 Two port parameters. The concept of a two-port network can be useful in network analysis as a black box approach to analysis. The behaviour of the two-port network in a larger network can be entirely characterised without necessarily stating anything about the internal structure. However, to do this it is necessary to have more information than just the A(jω) described above. 
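Before turning to that fuller two-port description, it may help to see A(jω) evaluated for one concrete network. The Python sketch below uses a hypothetical RC potential divider (a single-pole low-pass filter), built from the standard two-terminal impedances R and 1/(jωC), and prints the gain magnitude and phase at a few frequencies.

import cmath, math

R, C = 1e3, 100e-9                 # hypothetical component values: 1 kohm, 100 nF

def A(f):
    w = 2 * math.pi * f
    Zc = 1 / (1j * w * C)          # capacitor impedance, 1/(jwC)
    return Zc / (R + Zc)           # potential divider ratio Vo/Vi

for f in (100.0, 1591.5, 10e3):    # 1591.5 Hz is roughly the corner frequency 1/(2*pi*R*C)
    a = A(f)
    print(f, 20 * math.log10(abs(a)), math.degrees(cmath.phase(a)))

At the corner frequency the printed magnitude is about -3 dB and the phase about -45 degrees, the familiar single-pole response.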
It can be shown that four such parameters are required to fully characterise the two-port network. These could be the forward transfer function, the input impedance, the reverse transfer function (i.e., the voltage appearing at the input when a voltage is applied to the output) and the output impedance. There are many others (see the main article for a full listing), one of these expresses all four parameters as impedances. It is usual to express the four parameters as a matrix; formula_26 The matrix may be abbreviated to a representative element; formula_27 or just formula_28 These concepts are capable of being extended to networks of more than two ports. However, this is rarely done in reality because, in many practical cases, ports are considered either purely input or purely output. If reverse direction transfer functions are ignored, a multi-port network can always be decomposed into a number of two-port networks. Distributed components. Where a network is composed of discrete components, analysis using two-port networks is a matter of choice, not essential. The network can always alternatively be analysed in terms of its individual component transfer functions. However, if a network contains distributed components, such as in the case of a transmission line, then it is not possible to analyse in terms of individual components since they do not exist. The most common approach to this is to model the line as a two-port network and characterise it using two-port parameters (or something equivalent to them). Another example of this technique is modelling the carriers crossing the base region in a high frequency transistor. The base region has to be modelled as distributed resistance and capacitance rather than lumped components. Image analysis. Transmission lines and certain types of filter design use the image method to determine their transfer parameters. In this method, the behaviour of an infinitely long cascade connected chain of identical networks is considered. The input and output impedances and the forward and reverse transmission functions are then calculated for this infinitely long chain. Although the theoretical values so obtained can never be exactly realised in practice, in many cases they serve as a very good approximation for the behaviour of a finite chain as long as it is not too short. Time-based network analysis with simulation. Most analysis methods calculate the voltage and current values for static networks, which are circuits consisting of memoryless components only but have difficulties with complex dynamic networks. In general, the equations that describe the behaviour of a dynamic circuit are in the form of a differential-algebraic system of equations (DAEs). DAEs are challenging to solve and the methods for doing so are not yet fully understood and developed (as of 2010). Also, there is no general theorem that guarantees solutions to DAEs will exist and be unique. In special cases, the equations of the dynamic circuit will be in the form of an ordinary differential equations (ODE), which are easier to solve, since numerical methods for solving ODEs have a rich history, dating back to the late 1800s. One strategy for adapting ODE solution methods to DAEs is called direct discretization and is the method of choice in circuit simulation. 204-205 Simulation-based methods for time-based network analysis solve a circuit that is posed as an initial value problem (IVP). 
That is, the values of the components with memories (for example, the voltages on capacitors and currents through inductors) are given at an initial point of time t0, and the analysis is done for the time formula_29. 206-207 Since finding numerical results for the infinite number of time points from t0 to tf is not possible, this time period is discretized into discrete time instances, and the numerical solution is found for every instance. The time between the time instances is called the time step and can be fixed throughout the whole simulation or may be adaptive. In an IVP, when finding a solution for time tn+1, the solution for time tn is already known. Then, temporal discretization is used to replace the derivatives with differences, such as formula_30 for the backward Euler method, where hn+1 is the time step. 266 If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as Root-finding algorithms. Comparison to other methods. Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions, which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods. Non-linear networks. Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. These are invariably non-linear, the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship; formula_31 where; There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present. There are several options for dealing with non-linearity depending on the type of circuit and the information the analyst wishes to obtain. Constitutive equations. The diode equation above is an example of an element constitutive equation of the general form, formula_32 This can be thought of as a non-linear resistor. The corresponding constitutive equations for non-linear inductors and capacitors are respectively; formula_33 formula_34 where "f" is any arbitrary function, "φ" is the stored magnetic flux and "q" is the stored charge. Existence, uniqueness and stability. An important consideration in non-linear analysis is the question of uniqueness. For a network composed of linear components there will always be one, and only one, unique solution for a given set of boundary conditions. This is not always the case in non-linear circuits. For instance, a linear resistor with a fixed current applied to it has only one solution for the voltage across it. On the other hand, the non-linear tunnel diode has up to three solutions for the voltage for a given current. That is, a particular solution for the current through the diode is not unique, there may be others, equally valid. In some cases there may not be a solution at all: the question of existence of solutions must be considered. Another important consideration is the question of stability. A particular solution may exist, but it may not be stable, rapidly departing from that point at the slightest stimulation. 
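The possibility of multiple solutions is easy to demonstrate numerically. The Python sketch below uses a stylized cubic current-voltage curve as a stand-in for a tunnel-diode-like characteristic (it is not a model of any real device) and finds every voltage consistent with a given driven current by scanning for sign changes and refining each bracket with bisection, one of the root-finding approaches mentioned above.

# Stylized non-monotonic i(v) curve in arbitrary units; purely illustrative.
def i_of_v(v):
    return v**3 - 1.5 * v**2 + 0.66 * v

def operating_voltages(i_drive, v_min=0.0, v_max=1.2, steps=1200):
    # Bracket sign changes of i(v) - i_drive on a grid, then bisect each bracket.
    f = lambda v: i_of_v(v) - i_drive
    dv = (v_max - v_min) / steps
    solutions = []
    for k in range(steps):
        a, b = v_min + k * dv, v_min + (k + 1) * dv
        if f(a) == 0.0:
            solutions.append(a)
        elif f(a) * f(b) < 0.0:
            for _ in range(50):
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            solutions.append(0.5 * (a + b))
    return solutions

print(operating_voltages(0.08))   # three operating points, near v = 0.2, 0.5 and 0.8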
It can be shown that a network that is absolutely stable for all conditions must have one, and only one, solution for each set of conditions. Methods. Boolean analysis of switching networks. A switching device is one where the non-linearity is utilised to produce two opposite states. CMOS devices in digital circuits, for instance, have their output connected to either the positive or the negative supply rail and are never found at anything in between except during a transient period when the device is switching. Here the non-linearity is designed to be extreme, and the analyst can take advantage of that fact. These kinds of networks can be analysed using Boolean algebra by assigning the two states ("on"/"off", "positive"/"negative" or whatever states are being used) to the Boolean constants "0" and "1". The transients are ignored in this analysis, along with any slight discrepancy between the state of the device and the nominal state assigned to a Boolean value. For instance, Boolean "1" may be assigned to the state of +5V. The output of the device may be +4.5V but the analyst still considers this to be Boolean "1". Device manufacturers will usually specify a range of values in their data sheets that are to be considered undefined (i.e. the result will be unpredictable). The transients are not entirely uninteresting to the analyst. The maximum rate of switching is determined by the speed of transition from one state to the other. Happily for the analyst, for many devices most of the transition occurs in the linear portion of the devices transfer function and linear analysis can be applied to obtain at least an approximate answer. It is mathematically possible to derive Boolean algebras that have more than two states. There is not too much use found for these in electronics, although three-state devices are passingly common. Separation of bias and signal analyses. This technique is used where the operation of the circuit is to be essentially linear, but the devices used to implement it are non-linear. A transistor amplifier is an example of this kind of network. The essence of this technique is to separate the analysis into two parts. Firstly, the dc biases are analysed using some non-linear method. This establishes the quiescent operating point of the circuit. Secondly, the small signal characteristics of the circuit are analysed using linear network analysis. Examples of methods that can be used for both these stages are given below. Graphical method of dc analysis. In a great many circuit designs, the dc bias is fed to a non-linear component via a resistor (or possibly a network of resistors). Since resistors are linear components, it is particularly easy to determine the quiescent operating point of the non-linear device from a graph of its transfer function. The method is as follows: from linear network analysis the output transfer function (that is output voltage against output current) is calculated for the network of resistor(s) and the generator driving them. This will be a straight line (called the load line) and can readily be superimposed on the transfer function plot of the non-linear device. The point where the lines cross is the quiescent operating point. Perhaps the easiest practical method is to calculate the (linear) network open circuit voltage and short circuit current and plot these on the transfer function of the non-linear device. The straight line joining these two point is the transfer function of the network. 
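The same load-line construction is easily carried out numerically rather than graphically. The Python sketch below intersects the load line of a hypothetical Thévenin source with the ideal p-n junction relationship quoted in the non-linear section above (Io being the saturation current and VT the thermal voltage); all parameter values are invented for illustration, and bisection is used because the mismatch function is monotonic.

import math

Vth, Rth = 5.0, 1000.0      # hypothetical open-circuit voltage (V) and source resistance (ohm)
Io, VT = 1e-12, 0.025       # hypothetical saturation current (A) and thermal voltage (V)

def mismatch(v):
    # Load-line current minus diode current; zero at the quiescent operating point.
    return (Vth - v) / Rth - Io * (math.exp(v / VT) - 1.0)

lo, hi = 0.0, 1.0           # mismatch is positive at 0 V and negative by 1 V
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(mid) > 0.0:
        lo = mid
    else:
        hi = mid

v_q = 0.5 * (lo + hi)
i_q = (Vth - v_q) / Rth
print(v_q, i_q)             # roughly 0.56 V and 4.4 mA for these values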
In reality, the designer of the circuit would proceed in the reverse direction to that described. Starting from a plot provided in the manufacturers data sheet for the non-linear device, the designer would choose the desired operating point and then calculate the linear component values required to achieve it. It is still possible to use this method if the device being biased has its bias fed through another device which is itself non-linear, a diode for instance. In this case however, the plot of the network transfer function onto the device being biased would no longer be a straight line and is consequently more tedious to do. Small signal equivalent circuit. This method can be used where the deviation of the input and output signals in a network stay within a substantially linear portion of the non-linear devices transfer function, or else are so small that the curve of the transfer function can be considered linear. Under a set of these specific conditions, the non-linear device can be represented by an equivalent linear network. It must be remembered that this equivalent circuit is entirely notional and only valid for the small signal deviations. It is entirely inapplicable to the dc biasing of the device. For a simple two-terminal device, the small signal equivalent circuit may be no more than two components. A resistance equal to the slope of the v/i curve at the operating point (called the dynamic resistance), and tangent to the curve. A generator, because this tangent will not, in general, pass through the origin. With more terminals, more complicated equivalent circuits are required. A popular form of specifying the small signal equivalent circuit amongst transistor manufacturers is to use the two-port network parameters known as [h] parameters. These are a matrix of four parameters as with the [z] parameters but in the case of the [h] parameters they are a hybrid mixture of impedances, admittances, current gains and voltage gains. In this model the three terminal transistor is considered to be a two port network, one of its terminals being common to both ports. The [h] parameters are quite different depending on which terminal is chosen as the common one. The most important parameter for transistors is usually the forward current gain, h21, in the common emitter configuration. This is designated hfe on data sheets. The small signal equivalent circuit in terms of two-port parameters leads to the concept of dependent generators. That is, the value of a voltage or current generator depends linearly on a voltage or current elsewhere in the circuit. For instance the [z] parameter model leads to dependent voltage generators as shown in this diagram; There will always be dependent generators in a two-port parameter equivalent circuit. This applies to the [h] parameters as well as to the [z] and any other kind. These dependencies must be preserved when developing the equations in a larger linear network analysis. Piecewise linear method. In this method, the transfer function of the non-linear device is broken up into regions. Each of these regions is approximated by a straight line. Thus, the transfer function will be linear up to a particular point where there will be a discontinuity. Past this point the transfer function will again be linear but with a different slope. A well known application of this method is the approximation of the transfer function of a pn junction diode. The transfer function of an ideal diode has been given at the top of this (non-linear) section. 
However, this formula is rarely used in network analysis, a piecewise approximation being used instead. It can be seen that the diode current rapidly diminishes to -Io as the voltage falls. This current, for most purposes, is so small it can be ignored. With increasing voltage, the current increases exponentially. The diode is modelled as an open circuit up to the knee of the exponential curve, then past this point as a resistor equal to the bulk resistance of the semiconducting material. The commonly accepted values for the transition point voltage are 0.7V for silicon devices and 0.3V for germanium devices. An even simpler model of the diode, sometimes used in switching applications, is short circuit for forward voltages and open circuit for reverse voltages. The model of a forward biased pn junction having an approximately constant 0.7V is also a much used approximation for transistor base-emitter junction voltage in amplifier design. The piecewise method is similar to the small signal method in that linear network analysis techniques can only be applied if the signal stays within certain bounds. If the signal crosses a discontinuity point then the model is no longer valid for linear analysis purposes. The model does have the advantage over small signal however, in that it is equally applicable to signal and dc bias. These can therefore both be analysed in the same operations and will be linearly superimposable. Time-varying components. In linear analysis, the components of the network are assumed to be unchanging, but in some circuits this does not apply, such as sweep oscillators, voltage controlled amplifiers, and variable equalisers. In many circumstances the change in component value is periodic. A non-linear component excited with a periodic signal, for instance, can be represented as a periodically varying "linear" component. Sidney Darlington disclosed a method of analysing such periodic time varying circuits. He developed canonical circuit forms which are analogous to the canonical forms of Ronald M. Foster and Wilhelm Cauer used for analysing linear circuits. Vector circuit theory. Generalization of circuit theory based on scalar quantities to vectorial currents is a necessity for newly evolving circuits such as spin circuits. Generalized circuit variables consist of four components: scalar current and vector spin current in x, y, and z directions. The voltages and currents each become vector quantities with conductance described as a 4x4 spin conductance matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_2=V_1" }, { "math_id": 1, "text": "I_2=I_1" }, { "math_id": 2, "text": "Z_\\mathrm{eq} = Z_1 + Z_2 + \\,\\cdots\\, + Z_n ." }, { "math_id": 3, "text": "\\frac{1}{Z_\\mathrm{eq}} = \\frac{1}{Z_1} + \\frac{1}{Z_2} + \\,\\cdots\\, + \\frac{1}{Z_n} ." }, { "math_id": 4, "text": "Z_\\mathrm{eq} = \\frac{Z_1Z_2}{Z_1 + Z_2} ." }, { "math_id": 5, "text": "\\tbinom{n}{2}" }, { "math_id": 6, "text": "\\begin{align}\nR_a &= \\frac{R_\\mathrm{ac}R_\\mathrm{ab}}{R_\\mathrm{ac} + R_\\mathrm{ab} + R_\\mathrm{bc}} \\\\\nR_b &= \\frac{R_\\mathrm{ab}R_\\mathrm{bc}}{R_\\mathrm{ac} + R_\\mathrm{ab} + R_\\mathrm{bc}} \\\\\nR_c &= \\frac{R_\\mathrm{bc}R_\\mathrm{ac}}{R_\\mathrm{ac} + R_\\mathrm{ab} + R_\\mathrm{bc}} \n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\nR_\\mathrm{ac} &= \\frac{R_a R_b + R_b R_c + R_c R_a}{R_b} \\\\\nR_\\mathrm{ab} &= \\frac{R_a R_b + R_b R_c + R_c R_a}{R_c} \\\\\nR_\\mathrm{bc} &= \\frac{R_a R_b + R_b R_c + R_c R_a}{R_a}\n\\end{align}" }, { "math_id": 8, "text": "\\tbinom{N}{2}" }, { "math_id": 9, "text": "R_\\mathrm{xy} = R_x R_y\\sum_{i=1}^N \\frac{1}{R_i}" }, { "math_id": 10, "text": "\\begin{align}\nR_\\mathrm{ab} &= R_a R_b \\left(\\frac 1 R_a+\\frac 1 R_b+\\frac 1 R_c\\right) = \\frac{R_a R_b(R_a R_b + R_a R_c + R_b R_c)}{R_a R_b R_c} \\\\\n&= \\frac{R_a R_b + R_b R_c + R_c R_a}{R_c}\n\\end{align}" }, { "math_id": 11, "text": "R_\\mathrm{ab} = R_a R_b \\left(\\frac 1 R_a+\\frac 1 R_b\\right) = \\frac{R_a R_b(R_a + R_b)}{R_a R_b} = R_a + R_b" }, { "math_id": 12, "text": "\\tbinom{1}{2} = 0" }, { "math_id": 13, "text": "V_\\mathrm{s} = RI_\\mathrm{s}\\,\\!" }, { "math_id": 14, "text": "I_\\mathrm{s} = \\frac{V_\\mathrm{s}}{R}" }, { "math_id": 15, "text": "V_i" }, { "math_id": 16, "text": "Z_i" }, { "math_id": 17, "text": "V_i = Z_iI = \\left( \\frac{Z_i}{Z_1 + Z_2 + \\cdots + Z_n} \\right)V" }, { "math_id": 18, "text": "I_i" }, { "math_id": 19, "text": "Y_i" }, { "math_id": 20, "text": "I_i = Y_iV = \\left( \\frac{Y_i}{Y_1 + Y_2 + \\cdots + Y_n} \\right)I" }, { "math_id": 21, "text": "i = 1,2,...,n." }, { "math_id": 22, "text": "I_1 = \\left( \\frac{Z_2}{Z_1 + Z_2} \\right)I" }, { "math_id": 23, "text": "I_2 = \\left( \\frac{Z_1}{Z_1 + Z_2} \\right)I" }, { "math_id": 24, "text": "A(j\\omega)=\\frac{V_o}{V_i}" }, { "math_id": 25, "text": "A(\\omega)=\\left|{\\frac{V_o}{V_i}}\\right|" }, { "math_id": 26, "text": "\n\\begin{bmatrix}\n V_1 \\\\\n V_0\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n z(j\\omega)_{11} & z(j\\omega)_{12} \\\\\n z(j\\omega)_{21} & z(j\\omega)_{22}\n\\end{bmatrix}\n\\begin{bmatrix}\n I_1 \\\\\n I_0\n\\end{bmatrix}\n" }, { "math_id": 27, "text": " \\left [z(j\\omega) \\right] " }, { "math_id": 28, "text": " \\left [z \\right] " }, { "math_id": 29, "text": "t_0\\leq t\\leq t_f" }, { "math_id": 30, "text": "x'(t_{n+1}) \\approx \\frac{x_{n+1}-x_n}{h_{n+1}}" }, { "math_id": 31, "text": "i = I_o \\left(e^{{v}/{V_T}}-1\\right)" }, { "math_id": 32, "text": "f(v,i) = 0 " }, { "math_id": 33, "text": "f(v, \\varphi) = 0 " }, { "math_id": 34, "text": "f(v, q) = 0 " } ]
https://en.wikipedia.org/wiki?curid=735611
73561616
Wind setup
Rise of water due to wind blowing over the water surface Wind setup, also known as "wind effect" or "storm effect", refers to the rise in water level in seas or lakes caused by winds pushing the water in a specific direction. As the wind moves across the water's surface, it applies a shear stress to the water, prompting the formation of a wind-driven current. When this current encounters a shoreline, the water level along the shore increases, generating a hydrostatic counterforce in equilibrium with the shear force. During a storm, wind setup is a component of the overall storm surge. For instance, in the Netherlands, the wind setup during a storm surge can elevate water levels by approximately 3 metres above the normal tide. In the case of cyclones, the wind setup can reach up to 5 metres. This can result in a significant rise in water levels, particularly when the water is forced into a shallow, funnel-shaped area. Observation. In lakes, water level fluctuations are typically attributed to wind setup. This effect is particularly noticeable in lakes with well-regulated water levels, where the wind setup can be clearly observed. By comparing the observed rise with the wind over the lake, the relationship between wind speed, water depth, and fetch length can be accurately determined. This is especially feasible in lakes where water depth remains fairly consistent, such as the IJsselmeer. At sea, wind setup is usually not directly observable, as the observed water level is a combination of both the tide and the wind setup. To isolate the wind setup, the (calculated) astronomical tide must be subtracted from the observed water level. For example, during the North Sea flood of 1953, the highest water level along the Dutch coast was recorded at the Vlissingen tidal station at 2.79 metres, but this was not the location of the highest wind setup, which was observed at Scheveningen with a measurement of 3.52 metres. Notably, the highest wind setup ever recorded in the Netherlands (3.63 metres) was in Dintelsas, Steenbergen in 1953, a location approximately 40 km from the sea along the Haringvliet estuary. Calculation of wind setup. Based on the equilibrium between the shear stress exerted by the wind on the water and the hydrostatic back pressure, the following equation is used: formula_0 in which: "h" = water depth "x" = distance "u" = wind speed formula_1, Ippen suggests formula_2 = 3.3*10−6 formula_3 = angle of the wind relative to the coast "g" = acceleration of gravity "cw" has a value between 0.8*10−3 and 3.0*10−3 Application at open coasts. For an open coast, the equation becomes: formula_4 in which Δ"h" = wind setup "F" = fetch length, i.e. the distance over which the wind blows across the water. However, this formula is not always applicable, particularly for coasts with complex geometry or strongly varying water depths. In such cases, a more complex approach is needed, which involves solving the differential equation on a one- or two-dimensional grid. This method, combined with real-world data, is used in countries like the Netherlands to predict wind setup along the coast during potential storms. Application at (shallow) lakes. To calculate the wind setup in a lake, the following solution of the differential equation is used: formula_5 In 1966 the Delta Works Committee recommended using a value of 3.8*10−6 for formula_2 under Dutch conditions. However, an analysis of measurement data from the IJsselmeer between 2002 and 2013 led to a more reliable value for formula_2, specifically formula_2 = 2.2*10−6.
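To give a feel for the magnitudes these expressions produce, the short Python sketch below evaluates both the shallow-lake and the open-coast formulas for hypothetical but representative values; the numbers are illustrative only and are not taken from the measurement campaigns discussed here.

import math

g = 9.81                    # gravitational acceleration (m/s^2)
u = 20.0                    # wind speed (m/s), hypothetical
h = 4.5                     # water depth (m), hypothetical
F = 40e3                    # fetch length (m), hypothetical
phi = 0.0                   # wind blowing straight towards the shore

# Shallow lake, with the IJsselmeer value kappa = 2.2e-6 quoted above:
kappa_lake = 2.2e-6
dh_lake = 0.5 * kappa_lake * u**2 / (g * h) * F * math.cos(phi)

# Open coast, with Ippen's suggestion kappa = 3.3e-6:
kappa_coast = 3.3e-6
dh_coast = math.sqrt(2 * kappa_coast * u**2 / g * F * math.cos(phi) + h**2) - h

print(round(dh_lake, 2), round(dh_coast, 2))   # about 0.4 m and 1.07 m

The IJsselmeer analysis mentioned above did more than refit the constant.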
This study also found that the formula underestimated wind setup at higher wind speeds. As a result, it has been suggested to increase the exponent of the wind speed from 2 to 3 and to further adjust formula_2 to formula_2=1.7*10−7. This modified formula can predict the wind setup on the IJsselmeer with an accuracy of approximately 15 centimetres. Note. Wind setup should not be mistaken for wave run-up, which refers to the height which a wave reaches on a slope, or wave setup which is the increase in water level caused by breaking waves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{dh}{dx}=\\frac{\\kappa u^2 \\cos \\phi}{gh}" }, { "math_id": 1, "text": "\\kappa = c_w \\frac{\\rho_{air}}{\\rho_{water}}" }, { "math_id": 2, "text": "\\kappa" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": " \\Delta h = \\sqrt{2 \\kappa \\frac{u^2}{g} F \\cos \\phi + h^2} - h" }, { "math_id": 5, "text": "\\Delta h= 0.5 \\kappa \\frac{u^2}{gh} F \\cos \\phi " } ]
https://en.wikipedia.org/wiki?curid=73561616
735672
Cellular network
Communication network A cellular network or mobile network is a telecommunications network where the link to and from end nodes is wireless and the network is distributed over land areas called cells, each served by at least one fixed-location transceiver (such as a base station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell. When joined together, these cells provide radio coverage over a wide geographic area. This enables numerous portable transceivers (e.g., mobile phones, tablets and laptops equipped with mobile broadband modems, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission. Cellular networks offer a number of desirable features: Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area of Earth. This allows mobile phones and mobile computing devices to be connected to the public switched telephone network and public Internet access. Private cellular networks can be used for research or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports. Concept. In a cellular radio system, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics. These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles although hexagonal cells are conventional. Each of these cells is assigned with multiple frequencies ("f"1 – "f"6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would cause co-channel interference. The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed by Amos Joel of Bell Labs that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level of interference from the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standard frequency-division multiple access (FDMA) system. Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of which frequency approximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. 
The taxi drivers only speak one at a time when invited by the base station operator. This is a form of time-division multiple access (TDMA). History. The history of cellular phone technology began on December 11, 1947 with an internal memo written by Douglas H. Ring, a Bell Labs engineer in which he proposed development of a cellular telephone system by AT&amp;T. The first commercial cellular network, the 1G generation, was launched in Japan by Nippon Telegraph and Telephone (NTT) in 1979, initially in the metropolitan area of Tokyo. Within five years, the NTT network had been expanded to cover the whole population of Japan and became the first nationwide 1G network. It was an analog wireless network. The Bell System had developed cellular technology since 1947, and had cellular networks in operation in Chicago and Dallas prior to 1979, but commercial service was delayed by the breakup of the Bell System, with cellular assets transferred to the Regional Bell Operating Companies. The wireless revolution began in the early 1990s, leading to the transition from analog to digital networks. This was enabled by advances in MOSFET technology. The MOSFET, originally invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, was adapted for cellular networks by the early 1990s, with the wide adoption of power MOSFET, LDMOS (RF amplifier), and RF CMOS (RF circuit) devices leading to the development and proliferation of digital wireless mobile networks. The first commercial digital cellular network, the 2G generation, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators. Cell signal encoding. To distinguish signals from several different transmitters, frequency-division multiple access (FDMA, used by analog and D-AMPS systems), time-division multiple access (TDMA, used by GSM) and code-division multiple access (CDMA, first used for PCS, and the basis of 3G) were developed. With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to provide full-duplex operation. The original AMPS systems had 666 channel pairs, 333 each for the CLEC "A" system and ILEC "B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limits the number of calls that a cell site could handle. FDMA is a familiar technology to telephone companies, which used frequency-division multiplexing to add channels to their point-to-point wireline plants before time-division multiplexing rendered FDM obsolete. With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically uses digital signaling to store and forward bursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice at the receiver. TDMA must introduce latency (time delay) into the audio signal. As long as the latency time is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which used time-division multiplexing to add channels to their point-to-point wireline plants before packet switching rendered FDM obsolete. 
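A little arithmetic shows how TDMA multiplies the channel count of a purely FDMA system. The figures in the Python snippet below are illustrative: the 12.5 MHz block size and the three-slots-per-channel figure for D-AMPS (IS-136) are standard textbook values rather than numbers taken from this article, while the roughly 416 channel pairs per carrier matches the AMPS figure mentioned above.

block_bandwidth_hz = 12.5e6       # spectrum available to one carrier, per direction
channel_spacing_hz = 30e3         # AMPS channel spacing
tdma_slots_per_channel = 3        # D-AMPS voice slots per 30 kHz channel

fdma_channels = int(block_bandwidth_hz // channel_spacing_hz)   # about 416
tdma_voice_channels = fdma_channels * tdma_slots_per_channel    # 1248
print(fdma_channels, tdma_voice_channels)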
The principle of CDMA is based on spread spectrum technology developed for military use during World War II and improved during the Cold War into direct-sequence spread spectrum that was used for early CDMA cellular systems and Wi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed by Bell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems. Other available methods of multiplexing such as MIMO, a more sophisticated version of antenna diversity, combined with active beamforming provides much greater spatial multiplexing ability compared to original AMPS cells, that typically only addressed one to three unique spaces. Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof. Quadrature Amplitude Modulation (QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof. Frequency reuse. The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies, however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power. The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, "D" is calculated as formula_0, where "R" is the cell radius and "N" is the number of cells per cluster. Cells may vary in radius from . The boundaries of the cells can also overlap between adjacent cells and large cells can be divided into smaller cells. The frequency reuse factor is the rate at which the same frequency can be used in the network. It is "1/K" (or "K" according to some books) where "K" is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation). In case of "N" sector antennas on the same base station site, each with different direction, the base station site can serve N different sectors. "N" is typically 3. A reuse pattern of "N/K" denotes a further division in frequency among "N" sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM). If the total available bandwidth is "B", each cell can only use a number of frequency channels corresponding to a bandwidth of "B/K", and each sector can use a bandwidth of "B/NK". Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. 
While "N" is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually. Recently also orthogonal frequency-division multiple access based systems such as LTE are being deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit the inter-cell interference. There are various means of inter-cell interference coordination (ICIC) already defined in the standard. Coordinated scheduling, multi-site MIMO or multi-site beamforming are other examples for inter-cell radio resource management that might be standardized in the future. Directional antennas. Cell towers frequently use a directional signal to improve reception in higher-traffic areas. In the United States, the Federal Communications Commission (FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts of effective radiated power (ERP). Although the original cell towers created an even, omnidirectional signal, were at the centers of the cells and were omnidirectional, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge. Each tower has three sets of directional antennas aimed in three different directions with 120 degrees for each cell (totaling 360 degrees) and receiving/transmitting into three different cells at different frequencies. This provides a minimum of three channels, and three towers for each cell and greatly increases the chances of receiving a usable signal from at least one direction. The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas. Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas. Broadcast messages and paging. Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example in mobile telephony systems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is called paging. The three different paging procedures generally adopted are sequential, parallel and selective paging. The details of the process of paging vary somewhat from network to network, but normally we know a limited number of cells where the phone is located (this group of cells is called a Location Area in the GSM or UMTS system, or Routing Area if a data packet session is involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can be used for information transfer. This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system where it allows for low downlink latency in packet-based connections. In LTE/4G, the Paging procedure is initiated by the MME when data packets need to be delivered to the UE. Paging types supported by the MME are: Movement from cell to cell and handing over. 
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency. In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called the handover or handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues. The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover). Mobile phone network. The most common example of a cellular network is a mobile phone (cell phone) network. A mobile phone is a portable telephone which receives or makes calls through a cell site (base station) or transmitting tower. Radio waves are used to transfer signals to and from the cell phone. Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference. A cellular network is used by the mobile phone operator to achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected to telephone exchanges (or switches), which in turn connect to the public telephone network. In cities, each cell site may have a range of up to approximately , while in rural areas, the range could be as much as . It is possible that in clear open areas, a user may receive signals from a cell site away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach , with limitations on bandwidth and number of simultaneous calls. Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS (analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However, satellite phones are mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite. There are a number of different digital cellular technologies, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and the US. As a consequence, multiple digital standards surfaced in the US, while Europe and many countries converged towards the GSM standard. 
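To connect this back to the handover behaviour described earlier, the toy Python sketch below shows the kind of signal-strength comparison, with a hysteresis margin, that can drive the decision to move a call to a neighbouring cell. The thresholds, sample counts and measurements are invented for illustration and do not correspond to any particular standard.

HYSTERESIS_DB = 3.0        # neighbour must be this much stronger (hypothetical)
REQUIRED_SAMPLES = 3       # ... for this many consecutive measurements

def handover_trigger(serving_dbm, neighbour_dbm):
    # Return the index of the measurement at which handover is triggered, or None.
    streak = 0
    for k, (s, n) in enumerate(zip(serving_dbm, neighbour_dbm)):
        if n >= s + HYSTERESIS_DB:
            streak += 1
            if streak >= REQUIRED_SAMPLES:
                return k
        else:
            streak = 0
    return None

# A phone moving away from its serving cell and towards a neighbour.
serving   = [-71, -74, -78, -83, -88, -93, -97]
neighbour = [-95, -90, -84, -79, -75, -72, -70]
print(handover_trigger(serving, neighbour))   # triggers at index 5

The hysteresis margin and the requirement for several consecutive samples are there to avoid "ping-pong" handovers between two cells of similar strength.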
Structure of the mobile phone cellular network. A simple view of the cellular mobile-radio network consists of the following: This network is the foundation of the GSM system network. There are many functions that are performed by this network in order to make sure customers get the desired service including mobility management, registration, call set-up, and handover. Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell which in turn connects to the Mobile switching center (MSC). The MSC provides a connection to the public switched telephone network (PSTN). The link from a phone to the RBS is called an "uplink" while the other way is termed "downlink". Radio channels effectively use the transmission medium through the use of the following multiplexing and access schemes: frequency-division multiple access (FDMA), time-division multiple access (TDMA), code-division multiple access (CDMA), and space-division multiple access (SDMA). Small cells. Small cells, which have a smaller coverage area than base stations, are categorised as follows: Cellular handover in mobile phone networks. As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call. Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel. With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using a pseudonoise code (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditional cellular technology, there is no one defined point where the phone switches to the new cell. In IS-95 inter-frequency handovers and older analog systems such as NMT it will typically be impossible to test the target channel directly while communicating. In this case, other techniques have to be used such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel followed by the risk of an unexpected return to the old channel. If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal. Cellular frequency choice in mobile phone networks. The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450  MHz NMT, serve very well for countryside coverage. GSM 900 (900 MHz) is suitable for light urban coverage. GSM 1800 (1.8 GHz) starts to be limited by structural walls. UMTS, at 2.1 GHz is quite similar in coverage to GSM 1800. Higher frequencies are a disadvantage when it comes to coverage, but it is a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors. Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is true especially in CDMA based systems. 
The receiver requires a certain signal-to-noise ratio, and the transmitter should not transmit with too much power, so as not to cause interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so the power control algorithm of the transmitter increases the power it transmits to restore the level of received power. As the interference (noise) rises above the received power from the transmitter, and the power of the transmitter cannot be increased anymore, the signal becomes corrupted and eventually unusable. In CDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name, "cell breathing". One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such as Opensignal or CellMapper. In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage. A cellular repeater is used to extend cell coverage into larger areas. They range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs. Cell size. The coverage area of a single cell depends strongly on the operating frequency; in a CDMA2000 network, for example, the lower-frequency bands give cells with far larger radii and coverage areas than the higher-frequency bands. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D=R\\sqrt{3N}" } ]
https://en.wikipedia.org/wiki?curid=735672
7357636
Caldesmon
Mammalian protein found in Homo sapiens Caldesmon is a protein that in humans is encoded by the "CALD1" gene. Caldesmon is a calmodulin-binding protein. Like calponin, caldesmon tonically inhibits the ATPase activity of myosin in smooth muscle. This gene encodes a calmodulin- and actin-binding protein that plays an essential role in the regulation of smooth muscle and nonmuscle contraction. The conserved domain of this protein possesses binding activity toward formula_0-calmodulin, actin, tropomyosin, myosin, and phospholipids. This protein is a potent inhibitor of the actin-tropomyosin-activated myosin MgATPase, and serves as a mediating factor for formula_0-dependent inhibition of smooth muscle contraction. Alternative splicing of this gene results in multiple transcript variants encoding distinct isoforms. Immunochemistry. In diagnostic immunochemistry, caldesmon is a marker for smooth muscle differentiation. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Ca}^{2+}" } ]
https://en.wikipedia.org/wiki?curid=7357636
7358011
Bismuth ferrite
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Bismuth ferrite (BiFeO3, also commonly referred to as BFO in materials science) is an inorganic chemical compound with perovskite structure and one of the most promising multiferroic materials. The room-temperature phase of BiFeO3 is classed as rhombohedral, belonging to the space group R3c. It is synthesized in bulk and thin film form, and both its antiferromagnetic (G-type ordering) Néel temperature (approximately 653 K) and its ferroelectric Curie temperature (approximately 1100 K) are well above room temperature. Ferroelectric polarization occurs along the pseudocubic direction (formula_0) with a magnitude of 90–95 μC/cm2. Sample Preparation. Bismuth ferrite is not a naturally occurring mineral and several synthesis routes to obtain the compound have been developed. Solid state synthesis. In the solid-state reaction method, bismuth oxide (Bi2O3) and iron oxide (Fe2O3) in a 1:1 mole ratio are mixed with a mortar or by ball milling and then fired at elevated temperatures. Preparation of pure stoichiometric BiFeO3 is challenging due to the volatility of bismuth during firing, which leads to the formation of the stable secondary Bi25FeO39 (sillenite) and Bi2Fe4O9 (mullite) phases. Typically a firing temperature of 800 to 880 °C is used for 5 to 60 minutes with rapid subsequent cooling. Excess Bi2O3 has also been used as a measure to compensate for bismuth volatility and to avoid formation of the Bi2Fe4O9 phase. Single crystal growth. Bismuth ferrite melts incongruently, but it can be grown from a bismuth oxide rich flux (e.g. a 4:1:1 mixture of Bi2O3, Fe2O3 and B2O3 at approximately 750-800 °C). High quality single crystals have been important for studying the ferroelectric, antiferromagnetic and magnetoelectric properties of bismuth ferrite. Chemical routes. Wet chemical synthesis routes based on sol-gel chemistry, modified Pechini routes, hydrothermal synthesis and precipitation have been used to prepare phase-pure BiFeO3. The advantage of the chemical routes is the compositional homogeneity of the precursors and the reduced loss of bismuth due to the much lower temperatures needed. In sol-gel routes, an amorphous precursor is calcined at 300-600 °C to remove organic residuals and to promote crystallization of the bismuth ferrite perovskite phase; the disadvantage is that the resulting powder must be sintered at high temperature to make a dense polycrystal. Solution combustion reaction is a low-cost method used to synthesize porous BiFeO3. In this method, a reducing agent (such as glycine, citric acid, urea, etc.) and an oxidizing agent (nitrate ions, nitric acid, etc.) are used to generate the reduction-oxidation (RedOx) reaction. The appearance of the flame, and consequently the temperature of the mixture, depends on the ratio of oxidizing to reducing agents used. Annealing up to 600 °C is sometimes needed to decompose the bismuth oxo-nitrates generated as intermediates. Because of the Fe cations contained in this semiconductor material, Mössbauer spectroscopy is a suitable technique for detecting the presence of a paramagnetic component in the phase. Thin films. The electric and magnetic properties of high-quality epitaxial thin films of bismuth ferrite, reported in 2003, revived scientific interest in bismuth ferrite.
Epitaxial strain induced by single-crystalline substrates with lattice parameters different from those of bismuth ferrite can be used to modify the crystal structure to monoclinic or tetragonal symmetry and to change the ferroelectric, piezoelectric or magnetic properties. Pulsed laser deposition (PLD) is a very common route to epitaxial BiFeO3 films, and SrTiO3 substrates with SrRuO3 electrodes are typically used. Sputtering, molecular-beam epitaxy (MBE), metal organic chemical vapor deposition (MOCVD), atomic layer deposition (ALD), and chemical solution deposition are other methods to prepare epitaxial bismuth ferrite thin films. Apart from its magnetic and electric properties, bismuth ferrite also possesses photovoltaic properties, known as the ferroelectric photovoltaic (FPV) effect. Applications. Being a room-temperature multiferroic material, and owing to its ferroelectric photovoltaic (FPV) effect, bismuth ferrite has several potential applications in the fields of magnetism, spintronics, photovoltaics, etc. Photovoltaics. In the FPV effect, a photocurrent is generated in a ferroelectric material under illumination, and its direction depends on the ferroelectric polarization of that material. The FPV effect has promising potential as an alternative to conventional photovoltaic devices. The main hindrance is that only a very small photocurrent is generated in ferroelectric materials like LiNbO3, owing to their large bandgap and low conductivity. Bismuth ferrite has shown great potential in this respect, since a large photocurrent and an above-bandgap voltage are observed in this material under illumination. Most work using bismuth ferrite as a photovoltaic material has been reported on its thin-film form, but in a few reports researchers have formed bilayer structures with other materials such as polymers, graphene and other semiconductors. In one report, a p-i-n heterojunction was formed with bismuth ferrite nanoparticles together with two oxide-based carrier-transporting layers. In spite of such efforts, the power conversion efficiency obtained from bismuth ferrite is still very low. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\langle 111\\rangle_c" } ]
https://en.wikipedia.org/wiki?curid=7358011
73598108
Integral probability metric
Class of distance functions defined between probability distributions In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance. In addition to theoretical importance, integral probability metrics are widely used in areas of statistics and machine learning. The name "integral probability metric" was given by German statistician Alfred Müller; the distances had also previously been called "metrics with a "ζ"-structure." Definition. Integral probability metrics (IPMs) are distances on the space of distributions over a set formula_0, defined by a class formula_1 of real-valued functions on formula_2 as formula_3 here the notation "P" "f" refers to the expectation of "f" under the distribution "P". The absolute value in the definition is unnecessary, and often omitted, for the usual case where for every formula_4 its negation formula_5 is also in formula_6. The functions "f" being optimized over are sometimes called "critic" functions; if a particular formula_7 achieves the supremum, it is often termed a "witness function" (it "witnesses" the difference in the distributions). These functions try to have large values for samples from "P" and small (likely negative) values for samples from "Q"; this can be thought of as a weaker version of classifiers, and indeed IPMs can be interpreted as the optimal risk of a particular classifier. The choice of formula_6 determines the particular distance; more than one formula_6 can generate the same distance. For any choice of formula_6, formula_8 satisfies all the definitions of a metric except that we may have formula_9 for some "P" ≠ "Q"; this is variously termed a "pseudometric" or a "semimetric" depending on the community. For instance, using the class formula_10 which only contains the zero function, formula_11 is identically zero. formula_8 is a metric if and only if formula_6 separates points on the space of probability distributions, i.e. for any "P" ≠ "Q" there is some formula_4 such that formula_12; most, but not all, common particular cases satisfy this property. Examples. All of these examples are metrics except when noted otherwise. For instance, the classes formula_13 and formula_14 of {0, 1}-valued and [0, 1]-valued functions both yield the total variation distance, while the class formula_15 of indicators of half-lines yields the Kolmogorov metric. Relationship to "f"-divergences. The "f"-divergences are probably the best-known way to measure dissimilarity of probability distributions. It has been shown that the only functions which are both IPMs and "f"-divergences are of the form formula_16, where formula_17 and formula_18 is the total variation distance between distributions. One major difference between "f"-divergences and most IPMs is that when "P" and "Q" have disjoint support, all "f"-divergences take on a constant value; by contrast, IPMs where functions in formula_6 are "smooth" can give "partial credit." For instance, consider the sequence formula_19 of Dirac measures at 1/"n"; this sequence converges in distribution to formula_20, and many IPMs satisfy formula_21, but no nonzero "f"-divergence can satisfy this. That is, many IPMs are continuous in weaker topologies than "f"-divergences. This property is sometimes of substantial importance, although other options also exist, such as considering "f"-divergences between distributions convolved with continuous noise. Estimation from samples. 
Because IPM values between discrete distributions are often sensible, it is often reasonable to estimate formula_11 using a simple "plug-in" estimator: formula_22 where formula_23 and formula_24 are empirical measures of sample sets. These empirical distances can be computed exactly for some classes formula_6; estimation quality varies depending on the distance, but can be minimax-optimal in certain settings. When exact maximization is not available or too expensive, another commonly used scheme is to divide the samples into "training" sets (with empirical measures formula_25 and formula_26) and "test" sets (formula_27 and formula_28), find formula_29 approximately maximizing formula_30, and then use formula_31 as an estimate. This estimator can be consistent, but has a negative bias. In fact, no unbiased estimator can exist for any IPM, although there is for instance an unbiased estimator of the "squared" maximum mean discrepancy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
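As a concrete illustration of the plug-in estimator formula_22 discussed above (an illustrative sketch, not taken from the cited literature; the function names are invented here), the following Python code computes two empirical IPMs for one-dimensional samples. For the class of indicators of half-lines the supremum can be taken exactly over the pooled sample grid, giving the Kolmogorov metric, and for the 1-Lipschitz class the one-dimensional Wasserstein-1 distance equals the area between the two empirical CDFs.

```python
import numpy as np

def kolmogorov_ipm(x, y):
    """Plug-in IPM where the critic class is the set of indicators of half-lines:
    this is the Kolmogorov metric sup_t |F_x(t) - F_y(t)|, computed exactly from
    the two empirical CDFs on the pooled sample grid."""
    grid = np.sort(np.concatenate([x, y]))
    F_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    F_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(F_x - F_y))

def wasserstein1_ipm(x, y):
    """Plug-in IPM for the 1-Lipschitz critic class (Wasserstein-1). In one
    dimension this equals the integral of |F_x - F_y|, which is exact when
    evaluated piecewise on the pooled sample grid."""
    grid = np.sort(np.concatenate([x, y]))
    F_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    F_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.sum(np.abs(F_x - F_y)[:-1] * np.diff(grid))

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, 2000)   # samples from P
q_samples = rng.normal(0.5, 1.0, 2000)   # samples from Q (shifted mean)
print(kolmogorov_ipm(p_samples, q_samples))    # close to 0.2 for these two normals
print(wasserstein1_ipm(p_samples, q_samples))  # close to 0.5, the mean shift
```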
[ { "math_id": 0, "text": "\\mathcal X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "\\mathcal{X}" }, { "math_id": 3, "text": "D_{\\mathcal{F}}(P, Q) = \\sup_{f \\in \\mathcal F} \\big| \\mathbb E_{X \\sim P} f(X) - \\mathbb E_{Y \\sim Q} f(Y) \\big| = \\sup_{f \\in \\mathcal F} \\big| P f - Q f \\big|;" }, { "math_id": 4, "text": "f \\in \\mathcal F" }, { "math_id": 5, "text": "-f" }, { "math_id": 6, "text": "\\mathcal F" }, { "math_id": 7, "text": "f^* \\in \\mathcal F" }, { "math_id": 8, "text": "D_{\\mathcal F}" }, { "math_id": 9, "text": "D_{\\mathcal F}(P, Q) = 0" }, { "math_id": 10, "text": "\\mathcal F = \\{ x \\mapsto 0 \\}" }, { "math_id": 11, "text": "D_{\\mathcal F}(P, Q)" }, { "math_id": 12, "text": "P f \\ne Q f" }, { "math_id": 13, "text": "\\mathcal F = \\{ f : \\mathcal X \\to \\{0, 1\\} \\}" }, { "math_id": 14, "text": "\\mathcal F = \\{ f : \\mathcal X \\to [0, 1] \\}" }, { "math_id": 15, "text": "\\mathcal F = \\{ 1_{(-\\infty, t]} : t \\in \\mathbb R \\}" }, { "math_id": 16, "text": "c \\, \\operatorname{TV}(P, Q)" }, { "math_id": 17, "text": "c \\in [0, \\infty]" }, { "math_id": 18, "text": "\\operatorname{TV}" }, { "math_id": 19, "text": "\\delta_{1/n}" }, { "math_id": 20, "text": "\\delta_0" }, { "math_id": 21, "text": "D_{\\mathcal F}(\\delta_{1/n}, \\delta_0) \\to 0" }, { "math_id": 22, "text": "D_{\\mathcal F}(\\hat P, \\hat Q)" }, { "math_id": 23, "text": "\\hat P" }, { "math_id": 24, "text": "\\hat Q" }, { "math_id": 25, "text": "\\hat P_\\mathit{train}" }, { "math_id": 26, "text": "\\hat Q_\\mathit{train}" }, { "math_id": 27, "text": "\\hat P_\\mathit{test}" }, { "math_id": 28, "text": "\\hat Q_\\mathit{test}" }, { "math_id": 29, "text": "\\hat f" }, { "math_id": 30, "text": "\\big| \\hat P_\\mathit{train} f - \\hat Q_\\mathit{train} f \\big|" }, { "math_id": 31, "text": "\\big| \\hat P_\\mathit{test} \\hat f - \\hat Q_\\mathit{test} \\hat f \\big|" } ]
https://en.wikipedia.org/wiki?curid=73598108
73599200
Sister Beiter conjecture
Conjecture on the coefficients of cyclotomic polynomials In mathematics, the Sister Beiter conjecture is a conjecture about the size of coefficients of ternary cyclotomic polynomials (i.e. where the index is the product of three prime numbers). It is named after Marion Beiter, a Catholic nun who first proposed it in 1968. Background. For formula_0 the maximal coefficient (in absolute value) of the cyclotomic polynomial formula_1 is denoted by formula_2. Let formula_3 be three prime numbers. In this case the cyclotomic polynomial formula_4 is called "ternary". In 1895, A. S. Bang proved that formula_5. This implies the existence of formula_6 such that formula_7. Statement. Sister Beiter conjectured in 1968 that formula_8. This was later disproved, but a "corrected Sister Beiter conjecture" was put forward as formula_9. Status. A preprint from 2023 explains the history in detail and claims to prove this corrected conjecture. Explicitly it claims to prove formula_10 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
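The quantities in the conjecture are easy to explore numerically. The following sketch is illustrative only: it assumes SymPy is available and, for simplicity, restricts to distinct odd primes p, q, r taken in increasing order. It computes formula_2 for small ternary indices and compares each value with the corrected bound formula_9 (which dominates the individual coefficients A(pqr)).

```python
from itertools import combinations
from sympy import cyclotomic_poly, primerange, Poly
from sympy.abc import x

def A(n):
    """Largest absolute value among the coefficients of the n-th cyclotomic polynomial."""
    return max(abs(c) for c in Poly(cyclotomic_poly(n, x), x).all_coeffs())

# Compare A(pqr) with the corrected Sister Beiter bound 2p/3 for small
# distinct odd primes p < q < r.
for p, q, r in combinations(list(primerange(3, 20)), 3):
    a = A(p * q * r)
    print(f"A({p}*{q}*{r}) = {a},  2p/3 = {2 * p / 3:.2f}")
    assert 3 * a <= 2 * p  # the corrected bound holds on these small cases
```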
[ { "math_id": 0, "text": "n\\in\\mathbb{N}_{>0}" }, { "math_id": 1, "text": "\\Phi_n(x)" }, { "math_id": 2, "text": "A(n)" }, { "math_id": 3, "text": "3\\leq p\\leq q\\leq r" }, { "math_id": 4, "text": "\\Phi_{pqr}(x)" }, { "math_id": 5, "text": "A(pqr)\\leq p-1" }, { "math_id": 6, "text": "M(p):=\\max\\limits_{p\\leq q\\leq r\\text{ prime}}A(pqr)" }, { "math_id": 7, "text": "1\\leq M(p)\\leq p-1" }, { "math_id": 8, "text": "M(p)\\leq \\frac{p+1}{2}" }, { "math_id": 9, "text": "M(p)\\leq \\frac{2}{3}p" }, { "math_id": 10, "text": "\nM(p)\\leq\\frac{2}{3}p \\text{ and } \n\\lim\\limits_{p\\rightarrow\\infty}\\frac{M(p)}{p}= \\frac{2}{3}.\n" } ]
https://en.wikipedia.org/wiki?curid=73599200
7359952
Plateau–Rayleigh instability
Fluid breakup of a falling stream In fluid dynamics, the Plateau–Rayleigh instability, often just called the Rayleigh instability, explains why and how a falling stream of fluid breaks up into smaller packets with the same volume but less surface area. It is related to the Rayleigh–Taylor instability and is part of a greater branch of fluid dynamics concerned with fluid thread breakup. This fluid instability is exploited in the design of a particular type of ink jet technology whereby a jet of liquid is perturbed into a steady stream of droplets. The driving force of the Plateau–Rayleigh instability is that liquids, by virtue of their surface tensions, tend to minimize their surface area. A considerable amount of work has been done recently on the final pinching profile by attacking it with self-similar solutions. History. The Plateau–Rayleigh instability is named for Joseph Plateau and Lord Rayleigh. In 1873, Plateau found experimentally that a vertically falling stream of water will break up into drops if its length is greater than about 3.13 to 3.18 times its diameter, which he noted is close to π. Later, Rayleigh showed theoretically that a vertically falling column of non-viscous liquid with a circular cross-section should break up into drops if its length exceeded its circumference, which is indeed π times its diameter. Theory. The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is (for example, in the liquid jet nozzle, there is vibration on the liquid stream due to a friction between the nozzle and the liquid stream). If the perturbations are resolved into sinusoidal components, we find that some components grow with time, while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per unit length) and the radius of the original cylindrical stream. The diagram to the right shows an exaggeration of a single component. By assuming that all possible components exist initially in roughly equal (but minuscule) amplitudes, the size of the final drops can be predicted by determining by wave number which component grows the fastest. As time progresses, it is the component with the maximal growth rate that will come to dominate and will eventually be the one that pinches the stream into drops. Although a thorough understanding of how this happens requires a mathematical development (see references), the diagram can provide a conceptual understanding. Observe the two bands shown girdling the stream—one at a peak and the other at a trough of the wave. At the trough, the radius of the stream is smaller, hence according to the Young–Laplace equation the pressure due to surface tension is increased. Likewise at the peak the radius of the stream is greater and, by the same reasoning, pressure due to surface tension is reduced. If this were the only effect, we would expect that the higher pressure in the trough would squeeze liquid into the lower-pressure region in the peak. In this way we see how the wave grows in amplitude over time. But the Young–Laplace equation is influenced by two separate radius components. In this case one is the radius, already discussed, of the stream itself. The other is the radius of curvature of the wave itself. The fitted arcs in the diagram show these at a peak and at a trough. 
Observe that the radius of curvature at the trough is, in fact, negative, meaning that, according to Young–Laplace, it actually "decreases" the pressure in the trough. Likewise the radius of curvature at the peak is positive and increases the pressure in that region. The effect of these components is opposite the effects of the radius of the stream itself. The two effects, in general, do not exactly cancel. One of them will have greater magnitude than the other, depending upon wave number and the initial radius of the stream. When the wave number is such that the radius of curvature of the wave dominates that of the radius of the stream, such components will decay over time. When the effect of the radius of the stream dominates that of the curvature of the wave, such components grow exponentially with time. When all the maths is done, it is found that unstable components (that is, components that grow over time) are only those where the product of the wave number with the initial radius is less than unity (formula_0). The component that grows the fastest is the one whose wave number satisfies the equation formula_1 Examples. Water dripping from a faucet/tap. A special case of this is the formation of small droplets when water is dripping from a faucet/tap. When a segment of water begins to separate from the faucet, a neck is formed and then stretched. If the diameter of the faucet is big enough, the neck does not get sucked back in, and it undergoes a Plateau–Rayleigh instability and collapses into a small droplet. Urination. Another everyday example of Plateau–Rayleigh instability occurs in urination, particularly standing male urination. The stream of urine experiences instability after about 15 cm (6 inches), breaking into droplets, which causes significant splash-back on impacting a surface. By contrast, if the stream contacts a surface while still in a stable state – such as by urinating directly against a urinal or wall – splash-back is almost completely eliminated. Inkjet printing. Continuous inkjet printers (as opposed to drop-on-demand inkjet printers) generate a cylindrical stream of ink that breaks up into droplets prior to staining printer paper. By adjusting the size of the droplets using tunable temperature or pressure perturbations and imparting electrical charge to the ink, inkjet printers then steer the stream of droplets using electrostatics to form specific patterns on printer paper Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
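As a rough numerical companion to the result quoted above (a back-of-the-envelope sketch rather than a full stability analysis; the 0.5 mm stream radius is an arbitrary example value), the fastest-growing wavelength follows from formula_1, and conserving the volume of one wavelength of the cylinder gives an estimate of the size of the resulting drops.

```python
import numpy as np

def fastest_growing_wavelength(radius):
    """Wavelength of the dominant perturbation, from k * R0 ~ 0.697."""
    k = 0.697 / radius
    return 2.0 * np.pi / k

def estimated_drop_radius(radius):
    """Rough drop size: the liquid in one wavelength of the cylinder
    (volume pi * R0^2 * lambda) is assumed to collapse into one sphere."""
    lam = fastest_growing_wavelength(radius)
    volume = np.pi * radius**2 * lam
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

r0 = 0.5e-3  # 0.5 mm stream radius, chosen only as an example
print(f"dominant wavelength ~ {fastest_growing_wavelength(r0) * 1e3:.2f} mm")  # ~4.5 mm, about 9 stream radii
print(f"estimated drop radius ~ {estimated_drop_radius(r0) * 1e3:.2f} mm")     # ~0.95 mm, about 1.9 stream radii
```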
[ { "math_id": 0, "text": "kR_0 < 1" }, { "math_id": 1, "text": " kR_0 \\simeq 0.697." } ]
https://en.wikipedia.org/wiki?curid=7359952
736018
Consumption function
Economic model relating consumption and disposable income In economics, the consumption function describes a relationship between consumption and disposable income. The concept is believed to have been introduced into macroeconomics by John Maynard Keynes in 1936, who used it to develop the notion of a government spending multiplier. Details. Its simplest form is the "linear consumption function" used frequently in simple Keynesian models: formula_1 where formula_2 is the autonomous consumption that is independent of disposable income; in other words, consumption when disposable income is zero. The term formula_3 is the induced consumption that is influenced by the economy's income level formula_0. The parameter formula_4 is known as the marginal propensity to consume, i.e. the increase in consumption due to an incremental increase in disposable income, since formula_5. Geometrically, formula_4 is the slope of the consumption function. Keynes proposed this model to fit three stylized facts: the marginal propensity to consume lies between zero and one (formula_6); the average propensity to consume formula_7 falls as disposable income formula_8 rises; and disposable income is the primary determinant of consumption. By basing his model on how typical households decide how much to save and spend, Keynes was informally using a microfoundation approach to the macroeconomics of saving. Keynes also took note of the tendency for the marginal propensity to consume to decrease as income increases, i.e. formula_9. If this assumption is used, it results in a nonlinear consumption function with a diminishing slope. Further theories on the shape of the consumption function include James Duesenberry's (1949) relative consumption expenditure, Franco Modigliani and Richard Brumberg's (1954) life-cycle hypothesis, and Milton Friedman's (1957) permanent income hypothesis. Some newer theoretical work following Duesenberry's and based on behavioral economics suggests that a number of behavioral principles can be taken as microeconomic foundations for a behaviorally based aggregate consumption function. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
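A minimal numerical sketch of the linear consumption function formula_1 is given below; the parameter values are purely illustrative and not calibrated to any economy. It also prints the spending multiplier 1/(1 − "b") that arises in the simplest Keynesian cross model (closed economy, no income taxes), the notion the consumption function was used to develop.

```python
def consumption(y_d, a=200.0, b=0.75):
    """Linear consumption function C = a + b * Yd.
    a: autonomous consumption, b: marginal propensity to consume."""
    return a + b * y_d

def simple_multiplier(b):
    """Spending multiplier 1 / (1 - b) of the basic Keynesian cross model."""
    return 1.0 / (1.0 - b)

for y_d in (0.0, 400.0, 800.0):
    c = consumption(y_d)
    apc = c / y_d if y_d > 0 else float("nan")
    print(f"Yd = {y_d:6.0f}  C = {c:6.0f}  average propensity to consume = {apc:.3f}")
# The average propensity to consume C/Yd falls as Yd rises, while the slope b stays 0.75.
print("multiplier 1/(1-b) =", simple_multiplier(0.75))  # 4.0
```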
[ { "math_id": 0, "text": "Y_{d}" }, { "math_id": 1, "text": "C = a + b \\cdot Y_{d}" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "b \\cdot Y_{d}" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": " \\partial C / \\partial Y_{d} = b" }, { "math_id": 6, "text": "b \\in (0,1)" }, { "math_id": 7, "text": "\\frac{C}{Y_d}" }, { "math_id": 8, "text": "Y_d" }, { "math_id": 9, "text": " \\partial^{2} C / \\partial Y_{d}^{2} < 0" } ]
https://en.wikipedia.org/wiki?curid=736018
73605210
Motzkin–Taussky theorem
Theorem on linear operators The Motzkin–Taussky theorem is a result from operator and matrix theory about the representation of a sum of two bounded linear operators (resp. matrices). The theorem was proven by Theodore Motzkin and Olga Taussky-Todd. The theorem is used in perturbation theory, where e.g. operators of the form formula_0 are examined. Statement. Let formula_1 be a finite-dimensional complex vector space. Furthermore, let formula_2 be such that all linear combinations formula_3 are diagonalizable for all formula_4. Then all eigenvalues of formula_5 are of the form formula_6 (i.e. they are linear in formula_7 and formula_8) and formula_9 are independent of the choice of formula_10. Here formula_11 stands for an eigenvalue of formula_12. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
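The statement can be checked numerically in a simple special case. The sketch below is illustrative only: it builds two simultaneously diagonalizable matrices, for which every linear combination is automatically diagonalizable, so the hypothesis of the theorem holds, and it verifies that the eigenvalues of formula_3 come out as the linear expressions formula_6.

```python
import numpy as np

rng = np.random.default_rng(1)

# A common eigenbasis P and two spectra: A and B are then simultaneously
# diagonalizable, so every combination alpha*A + beta*B is diagonalizable.
P = rng.normal(size=(4, 4))
d_a = np.array([1.0, 2.0, -1.0, 3.0])
d_b = np.array([0.5, -2.0, 4.0, 1.0])
P_inv = np.linalg.inv(P)
A = P @ np.diag(d_a) @ P_inv
B = P @ np.diag(d_b) @ P_inv

for alpha, beta in [(1.0, 0.0), (2.0, 3.0), (-1.5, 0.7)]:
    T = alpha * A + beta * B
    eigs = np.sort(np.linalg.eigvals(T).real)
    predicted = np.sort(alpha * d_a + beta * d_b)
    # Eigenvalues of T agree with alpha*lambda_A + beta*lambda_B,
    # paired through the common eigenbasis.
    assert np.allclose(eigs, predicted, atol=1e-6)
    print(alpha, beta, eigs)
```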
[ { "math_id": 0, "text": "T+xT_1" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "A,B\\in B(X)" }, { "math_id": 3, "text": "T=\\alpha A+\\beta B" }, { "math_id": 4, "text": "\\alpha,\\beta\\in \\C" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "\\lambda_{T}=\\alpha\\lambda_{A} + \\beta \\lambda_{B}" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "\\lambda_{A},\\lambda_{B}" }, { "math_id": 10, "text": "\\alpha,\\beta" }, { "math_id": 11, "text": "\\lambda_{A}" }, { "math_id": 12, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=73605210
7360695
Data synchronization
Consistency among data between source and target data stores Data synchronization is the process of establishing consistency between source and target data stores, and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization. Data synchronization can also be useful in encryption for synchronizing public key servers. Data synchronization is needed to update and keep multiple copies of a set of data coherent with one another or to maintain data integrity. For example, database replication is used to keep multiple copies of data synchronized with database servers that store data in different locations. Examples. Examples include file synchronization, mobile device synchronization, public key server synchronization, and database replication. Challenges. Some of the challenges which users may face in data synchronization are the following. Data formats complexity. Data formats tend to grow more complex with time as the organization grows and evolves. This results in the need not only to build interfaces between the two applications (source and target), but also to transform the data while passing them to the target application. ETL (extract, transform, load) tools can be helpful at this stage for managing data format complexities. Real-timeliness. In real-time systems, customers want to see the current status of their order in an e-shop, the current status of a parcel delivery (real-time parcel tracking), the current balance on their account, etc. This shows the need for a real-time system that is itself updated continuously, to enable a smooth manufacturing process in real time, e.g., ordering material when the enterprise is running out of stock, synchronizing customer orders with the manufacturing process, etc. There are many real-life examples where real-time processing gives a successful and competitive advantage. Data security. There are no fixed rules and policies to enforce data security; they may vary depending on the system being used. Even if security is maintained correctly in the source system which captures the data, security and information-access privileges must be enforced on the target systems as well to prevent any potential misuse of the information. This is a serious issue, particularly when it comes to handling secret, confidential and personal information. Because of this sensitivity and confidentiality, data transfers and all intermediate information must be encrypted. Data quality. Data quality is another serious constraint. For better management and to maintain good quality of data, the common practice is to store the data at one location and share it with different people and different systems and/or applications at different locations. This helps prevent inconsistencies in the data. Performance. The data synchronization process involves five different phases, each of which is critical. In the case of large amounts of data, the synchronization process needs to be carefully planned and executed to avoid any negative impact on performance. File-based solutions. There are tools available for file synchronization, version control (CVS, Subversion, etc.), distributed filesystems (Coda, etc.), and mirroring (rsync, etc.), all of which attempt to keep sets of files synchronized. However, only version control and file synchronization tools can deal with modifications to more than one copy of the files. Theoretical models. 
Several theoretical models of data synchronization exist in the research literature, and the problem is also related to the problem of Slepian–Wolf coding in information theory. The models are classified based on how they consider the data to be synchronized. Unordered data. The problem of synchronizing unordered data (also known as the set reconciliation problem) is modeled as an attempt to compute the symmetric difference formula_0 between two remote sets formula_1 and formula_2 of b-bit numbers. Several families of solutions to this problem have been proposed. Ordered data. In this case, two remote strings formula_3 and formula_4 need to be reconciled. Typically, it is assumed that these strings differ by up to a fixed number of edits (i.e. character insertions, deletions, or modifications). Then data synchronization is the process of reducing the edit distance between formula_3 and formula_4, up to the ideal distance of zero. This is applied in all filesystem-based synchronization (where the data is ordered). Many practical applications of this are discussed or referenced above. It is sometimes possible to transform the problem to one of unordered data through a process known as shingling (splitting the strings into "shingles"). Error handling. In fault-tolerant systems, distributed databases must be able to cope with the loss or corruption of (part of) their data. The first step is usually replication, which involves making multiple copies of the data and keeping them all up to date as changes are made. However, it is then necessary to decide which copy to rely on when loss or corruption of an instance occurs. The simplest approach is to have a single master instance that is the sole source of truth. Changes to it are replicated to other instances, and one of those instances becomes the new master when the old master fails. Paxos and Raft are more complex protocols that exist to solve problems with transient effects during failover, such as two instances thinking they are the master at the same time. Secret sharing is useful if failures of whole nodes are very common. This moves synchronization from an explicit recovery process to being part of each read, where a read of some data requires retrieving encoded data from several different nodes. If corrupt or out-of-date data may be present on some nodes, this approach may also benefit from the use of an error correction code. DHTs and blockchains try to solve the problem of synchronization between many nodes (hundreds to billions). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
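A toy sketch of the unordered-data (set reconciliation) setting described above is given below. It is purely illustrative: both sides simply exchange per-element fingerprints and then fetch what they are missing, which recovers the symmetric difference formula_0 but communicates far more than the efficient reconciliation protocols in the literature, whose cost scales with the size of the difference rather than with the sets themselves.

```python
import hashlib

def digest(value):
    """Short stand-in for a cryptographic fingerprint of one element."""
    return hashlib.sha256(value.to_bytes(8, "big")).hexdigest()[:16]

def reconcile(set_a, set_b):
    """Naive reconciliation: A and B exchange fingerprints of their elements,
    then each side fetches the elements it is missing. Together the fetched
    elements form the symmetric difference of the two sets."""
    hashes_a = {digest(v): v for v in set_a}
    hashes_b = {digest(v): v for v in set_b}
    missing_on_b = [hashes_a[h] for h in hashes_a.keys() - hashes_b.keys()]
    missing_on_a = [hashes_b[h] for h in hashes_b.keys() - hashes_a.keys()]
    return set(missing_on_a) | set(missing_on_b)

A = {1, 2, 3, 5, 8, 13}
B = {2, 3, 5, 7, 11, 13}
print(sorted(reconcile(A, B)))             # [1, 7, 8, 11]
print(sorted(A.symmetric_difference(B)))   # same result, computed directly
```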
[ { "math_id": 0, "text": "S_A \\oplus S_B = (S_A - S_B) \\cup (S_B - S_A)" }, { "math_id": 1, "text": "S_A" }, { "math_id": 2, "text": "S_B" }, { "math_id": 3, "text": "\\sigma_A" }, { "math_id": 4, "text": "\\sigma_B" } ]
https://en.wikipedia.org/wiki?curid=7360695
7363
Complexity
Properties of systems that cannot be simply described or modeled Complexity characterizes the behavior of a system or model whose components interact in multiple ways and follow local rules, leading to non-linearity, randomness, collective dynamics, hierarchy, and emergence. The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts. The study of these complex linkages at various scales is the main goal of complex systems theory. The intuitive criterion of complexity can be formulated as follows: a system would be more complex if more parts could be distinguished, and if more connections between them existed. As of 2010, a number of approaches to characterizing complexity have been used in science; Zayed "et al." reflect many of these. Neil Johnson states that "even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples..." Ultimately Johnson adopts the definition of "complexity science" as "the study of the phenomena which emerge from a collection of interacting objects". Overview. Definitions of complexity often depend on the concept of a "system" – a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime. Many definitions tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements. However, what one sees as complex and what one sees as simple is relative and changes with time. Warren Weaver posited in 1948 two forms of complexity: disorganized complexity, and organized complexity. Phenomena of 'disorganized complexity' are treated using probability theory and statistical mechanics, while 'organized complexity' deals with phenomena that escape such approaches and confront "dealing simultaneously with a sizable number of factors which are interrelated into an organic whole". Weaver's 1948 paper has influenced subsequent thinking about complexity. The approaches that embody concepts of systems, multiple elements, multiple relational regimes, and state spaces might be summarized as implying that complexity arises from the number of distinguishable relational regimes (and their associated state spaces) in a defined system. Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or mathematical expression, as later set out herein. Disorganized vs. organized. One of the problems in addressing complexity issues has been formalizing the intuitive conceptual distinction between the large number of variances in relationships extant in random collections, and the sometimes large, but smaller, number of relationships between elements in systems where constraints (related to correlation of otherwise independent elements) simultaneously reduce the variations from element independence and create distinguishable regimes of more-uniform, or correlated, relationships, or interactions. Weaver perceived and addressed this problem, in at least a preliminary way, in drawing a distinction between "disorganized complexity" and "organized complexity". In Weaver's view, disorganized complexity results from the particular system having a very large number of parts, say millions of parts, or many more. 
Though the interactions of the parts in a "disorganized complexity" situation can be seen as largely random, the properties of the system as a whole can be understood by using probability and statistical methods. A prime example of disorganized complexity is a gas in a container, with the gas molecules as the parts. Some would suggest that a system of disorganized complexity may be compared with the (relative) simplicity of planetary orbits – the latter can be predicted by applying Newton's laws of motion. Of course, most real-world systems, including planetary orbits, eventually become theoretically unpredictable even using Newtonian dynamics; as discovered by modern chaos theory. Organized complexity, in Weaver's view, resides in nothing else than the non-random, or correlated, interaction between the parts. These correlated relationships create a differentiated structure that can, as a system, interact with other systems. The coordinated system manifests properties not carried or dictated by individual parts. The organized aspect of this form of complexity in regards to other systems, rather than the subject system, can be said to "emerge," without any "guiding hand". The number of parts does not have to be very large for a particular system to have emergent properties. A system of organized complexity may be understood in its properties (behavior among the properties) through modeling and simulation, particularly modeling and simulation with computers. An example of organized complexity is a city neighborhood as a living mechanism, with the neighborhood people among the system's parts. Sources and factors. There are generally rules which can be invoked to explain the origin of complexity in a given system. The source of disorganized complexity is the large number of parts in the system of interest, and the lack of correlation between elements in the system. In the case of self-organizing living systems, usefully organized complexity comes from beneficially mutated organisms being selected to survive by their environment for their differential reproductive ability or at least success over inanimate matter or less organized complex organisms. See e.g. Robert Ulanowicz's treatment of ecosystems. Complexity of an object or system is a relative property. For instance, for many functions (problems), such a computational complexity as time of computation is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random Access Machines allow one to even more decrease time complexity (Greenlaw and Hoover 1998: 226), while inductive Turing machines can decrease even the complexity class of a function, language or set (Burgin 2005). This shows that tools of activity can be an important factor of complexity. Varied meanings. In several scientific fields, "complexity" has a precise meaning: Other fields introduce less precisely defined notions of complexity: Study. Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. From one perspective, that which is somehow complex – displaying variation without being random – is most worthy of interest given the rewards found in the depths of exploration. The use of the term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions. 
This means that complex is the opposite of independent, while complicated is the opposite of simple. While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains or social systems. One such interdisciplinary group of fields is relational order theories. Topics. Behaviour. The behavior of a complex system is often said to be due to emergence and self-organization. Chaos theory has investigated the sensitivity of systems to variations in initial conditions as one cause of complex behaviour. Mechanisms. Recent developments in artificial life, evolutionary computation and genetic algorithms have led to an increasing emphasis on complexity and complex adaptive systems. Simulations. In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology. Systems. Systems theory has long been concerned with the study of complex systems (in recent times, "complexity theory" and "complex systems" have also been used as names of the field). These systems are present in the research of a variety of disciplines, including biology, economics, social studies and technology. Recently, complexity has become a natural domain of interest of real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and difficult to model. In specific circumstances, they may exhibit low-dimensional behaviour. Data. In information theory, algorithmic information theory is concerned with the complexity of strings of data. Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress a string (a codec could be theoretically created in any arbitrary language, including one in which the very small command "X" could cause the computer to output a very complicated string like "18995316"), any two Turing-complete languages can be implemented in each other, meaning that the length of two encodings in different languages will vary by at most the length of the "translation" language – which will end up being negligible for sufficiently large data strings. These algorithmic measures of complexity tend to assign high values to random noise. However, under a certain understanding of complexity, arguably the most intuitive one, random noise is meaningless and so not complex at all. Information entropy is also sometimes used in information theory as indicative of complexity, but entropy is also high for randomness. In the case of complex systems, information fluctuation complexity was designed so as not to measure randomness as complex and has been useful in many applications. More recently, a complexity metric was developed for images that can avoid measuring noise as complex by using the minimum description length principle. Classification Problems. There has also been interest in measuring the complexity of classification problems in supervised machine learning. This can be useful in meta-learning to determine for which data sets filtering (or removing suspected noisy instances from the training set) is the most beneficial and could be expanded to other areas. 
For binary classification, such measures can consider the overlaps in feature values from differing classes, the separability of the classes, and measures of geometry, topology, and density of manifolds. For non-binary classification problems, instance hardness is a bottom-up approach that first seeks to identify instances that are likely to be misclassified (assumed to be the most complex). The characteristics of such instances are then measured using supervised measures such as the number of disagreeing neighbors or the likelihood of the assigned class label given the input features. In molecular recognition. A recent study based on molecular simulations and compliance constants describes molecular recognition as a phenomenon of organisation. Even for small molecules like carbohydrates, the recognition process cannot be predicted or designed even assuming that each individual hydrogen bond's strength is exactly known. The law of requisite complexity. Drawing from the law of requisite variety, Boisot and McKelvey formulated the ‘Law of Requisite Complexity’, which holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts. Positive, appropriate and negative complexity. The application in project management of the Law of Requisite Complexity, as proposed by Stefan Morcov, is the analysis of positive, appropriate and negative complexity. In project management. Project complexity is the property of a project which makes it difficult to understand, foresee, and keep under control its overall behavior, even when given reasonably complete information about the project system. In systems engineering. Maik Maurer considers complexity as a reality in engineering. He proposed a methodology for managing complexity in systems engineering: 1. Define the system. 2. Identify the type of complexity. 3. Determine the strategy. 4. Determine the method. 5. Model the system. 6. Implement the method. Applications. Computational complexity theory is the study of the complexity of problems – that is, the difficulty of solving them. Problems can be classified by complexity class according to the time it takes for an algorithm – usually a computer program – to solve them as a function of the problem size. Some problems are difficult to solve, while others are easy. For example, some difficult problems need algorithms that take an exponential amount of time in terms of the size of the problem to solve. Take the travelling salesman problem, for example. It can be solved, as denoted in Big O notation, in time formula_0 (where "n" is the size of the network to visit – the number of cities the travelling salesman must visit exactly once). As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially. Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. These problems might require large amounts of time or an inordinate amount of space. Computational complexity may be approached from many different aspects. Computational complexity can be investigated on the basis of time, memory or other resources used to solve the problem. A short dynamic-programming sketch illustrating the formula_0 bound for the travelling salesman problem is given below. 
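The following sketch (an illustration of the bound, not tied to any particular source; the example distance matrix is arbitrary) implements the classic Held–Karp dynamic program for the travelling salesman problem. It fills a table indexed by subsets of cities and end cities, which is where the formula_0 running time comes from.

```python
import itertools

def held_karp(dist):
    """Exact travelling-salesman tour length by dynamic programming over subsets.
    The table has O(2^n * n) entries and each entry takes O(n) time to fill,
    giving the O(n^2 * 2^n) behaviour mentioned above."""
    n = len(dist)
    # best[(S, j)] = length of the shortest path that starts at city 0,
    # visits exactly the cities in S (0 not in S), and ends at j (j in S).
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    everything = frozenset(range(1, n))
    return min(best[(everything, j)] + dist[j][0] for j in range(1, n))

# A small symmetric example with 4 cities (distances chosen arbitrarily).
example = [[0, 2, 9, 10],
           [2, 0, 6, 4],
           [9, 6, 0, 3],
           [10, 4, 3, 0]]
print(held_karp(example))  # 18 for this matrix
```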
Time and space are two of the most important and popular considerations when problems of complexity are analyzed. There exists a certain class of problems that, although they are solvable in principle, require so much time or space that it is not practical to attempt to solve them. These problems are called intractable. There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity discussed so far, which are called horizontal complexity. Emerging applications in other fields. The concept of complexity is being increasingly used in the study of cosmology, big history, and cultural evolution with increasing granularity, as well as increasing quantification. Application in cosmology. Eric Chaisson has advanced a cosmological complexity metric which he terms Energy Rate Density. This approach has been expanded in various works, most recently applied to measuring the evolving complexity of nation-states and their growing cities. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "O(n^2 2^n)" } ]
https://en.wikipedia.org/wiki?curid=7363
73634
Glossary of mathematical symbols
Meanings of symbols used in mathematics A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other sorts of mathematical objects. As the number of these sorts has remarkably increased in modern mathematics, the Greek alphabet and some Hebrew letters are also used. In mathematical formulas, the standard typeface is italic type for Latin letters and lower-case Greek letters, and upright type for upper case Greek letters. For having more symbols, other typefaces are also used, mainly boldface formula_0, script typeface formula_1 (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur formula_2, and blackboard bold formula_3 (the other letters are rarely used in this face, or their use is unconventional). The use of Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable (mathematics) and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as formula_4 and formula_5. These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others by deforming letter forms, as in the cases of formula_6 and formula_7. Others, such as + and =, were specially designed for mathematics. Arithmetic operators. &lt;templatestyles src="Glossary/styles.css" /&gt; Equality, equivalence and similarity. &lt;templatestyles src="Glossary/styles.css" /&gt; Comparison. &lt;templatestyles src="Glossary/styles.css" /&gt; Set theory. &lt;templatestyles src="Glossary/styles.css" /&gt; Basic logic. Several logical symbols are widely used in all mathematics, and are listed here. For symbols that are used only in mathematical logic, or are rarely used, see List of logic symbols. &lt;templatestyles src="Glossary/styles.css" /&gt; Blackboard bold. The blackboard bold typeface is widely used for denoting the basic number systems. These systems are often also denoted by the corresponding uppercase bold letter. A clear advantage of blackboard bold is that these symbols cannot be confused with anything else. This allows using them in any area of mathematics, without having to recall their definition. For example, if one encounters formula_9 in combinatorics, one should immediately know that this denotes the real numbers, although combinatorics does not study the real numbers (but it uses them for many proofs). &lt;templatestyles src="Glossary/styles.css" /&gt; Calculus. &lt;templatestyles src="Glossary/styles.css" /&gt; Linear and multilinear algebra. &lt;templatestyles src="Glossary/styles.css" /&gt; Advanced group theory. 
&lt;templatestyles src="Glossary/styles.css" /&gt; Infinite numbers. &lt;templatestyles src="Glossary/styles.css" /&gt; Brackets. Many sorts of brackets are used in mathematics. Their meanings depend not only on their shapes, but also on the nature and the arrangement of what is delimited by them, and sometimes what appears between or before them. For this reason, in the entry titles, the symbol □ is used as a placeholder for schematizing the syntax that underlies the meaning. Parentheses. &lt;templatestyles src="Glossary/styles.css" /&gt; Square brackets. &lt;templatestyles src="Glossary/styles.css" /&gt; Braces. &lt;templatestyles src="Glossary/styles.css" /&gt; Other brackets. &lt;templatestyles src="Glossary/styles.css" /&gt; Symbols that do not belong to formulas. In this section, the symbols that are listed are used as punctuation marks in mathematical reasoning, or as abbreviations of natural language phrases. They are generally not used inside a formula. Some were used in classical logic for indicating the logical dependence between sentences written in plain language. Except for the first two, they are normally not used in printed mathematical texts since, for readability, it is generally recommended to have at least one word between two formulas. However, they are still used on a blackboard for indicating relationships between formulas. &lt;templatestyles src="Glossary/styles.css" /&gt; Miscellaneous. &lt;templatestyles src="Glossary/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf {a,A,b,B},\\ldots" }, { "math_id": 1, "text": "\\mathcal {A,B},\\ldots" }, { "math_id": 2, "text": "\\mathfrak {a,A,b,B},\\ldots" }, { "math_id": 3, "text": "\\mathbb {N, Z, Q, R, C, H, F}_q" }, { "math_id": 4, "text": "\\textstyle\\prod{}" }, { "math_id": 5, "text": "\\textstyle\\sum{}" }, { "math_id": 6, "text": "\\in" }, { "math_id": 7, "text": "\\forall" }, { "math_id": 8, "text": "\\Box" }, { "math_id": 9, "text": "\\mathbb R" } ]
https://en.wikipedia.org/wiki?curid=73634
73635681
Hyperchaos
A hyperchaotic system is a dynamical system with a bounded attractor set, on which there are at least two positive Lyapunov exponents. Since, on an attractor, the sum of Lyapunov exponents is non-positive, there must be at least one negative Lyapunov exponent. If the system has continuous time, then the Lyapunov exponent along the trajectory is zero, and so the minimal number of dimensions in which continuous-time hyperchaos can occur is 4. Similarly, discrete-time hyperchaos requires at least 3 dimensions. Mathematical examples. The first two hyperchaotic systems were proposed in 1979. One is a discrete-time system (the "folded-towel map"): formula_0 Another is a continuous-time system: formula_1 More examples are found in the literature. Experimental examples. Only a few experimental hyperchaotic behaviors have been identified. Examples include an electronic circuit, an NMR laser, a semiconductor system, and a chemical system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
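The discrete-time example above (the folded-towel map, formula_0) can be explored numerically. The sketch below is illustrative only: the Jacobian is approximated by finite differences rather than written out analytically, the initial condition is simply assumed to lie in the basin of the attractor, and the Lyapunov spectrum is estimated with the standard QR re-orthonormalization method. A hyperchaotic attractor should show two positive exponents.

```python
import numpy as np

def folded_towel(v):
    x, y, z = v
    return np.array([
        3.8 * x * (1 - x) - 0.05 * (y + 0.35) * (1 - 2 * z),
        0.1 * ((y + 0.35) * (1 - 2 * z) - 1) * (1 - 1.9 * x),
        3.78 * z * (1 - z) + 0.2 * y,
    ])

def numerical_jacobian(f, v, eps=1e-7):
    """Central-difference approximation of the Jacobian of f at v."""
    n = len(v)
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (f(v + d) - f(v - d)) / (2 * eps)
    return J

def lyapunov_spectrum(f, v0, n_steps=5000, n_transient=500):
    """Estimate the Lyapunov exponents of a map via QR re-orthonormalization."""
    v = np.array(v0, dtype=float)
    for _ in range(n_transient):
        v = f(v)
    Q = np.eye(len(v))
    sums = np.zeros(len(v))
    for _ in range(n_steps):
        Q, R = np.linalg.qr(numerical_jacobian(f, v) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        v = f(v)
        if not np.all(np.isfinite(v)) or np.max(np.abs(v)) > 10:
            raise RuntimeError("orbit escaped; pick an initial condition closer to the attractor")
    return sums / n_steps

# The initial condition is a guess assumed to lie in the attractor's basin.
print(lyapunov_spectrum(folded_towel, [0.1, 0.2, 0.3]))
# Two of the three estimated exponents should be positive, the signature of hyperchaos.
```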
[ { "math_id": 0, "text": "\\begin{aligned}\n& x_{t+1}=3.8 x_t\\left(1-x_t\\right)-0.05\\left(y_t+0.35\\right)\\left(1-2 z_t\\right), \\\\\n& y_{t+1}=0.1\\left[\\left(y_t+0.35\\right)\\left(1-2 z_t\\right)-1\\right]\\left(1-1.9 x_t\\right), \\\\\n& z_{t+1}=3.78 z_t\\left(1-z_t\\right)+0.2 y_t .\n\\end{aligned}" }, { "math_id": 1, "text": "\\begin{array}{ll}\n\\dot{x}=-y-z, & \\dot{y}=x+0.25 y+w, \\\\\n\\dot{z}=3+x z, & \\dot{w}=-0.5 z+0.05 w .\n\\end{array}" } ]
https://en.wikipedia.org/wiki?curid=73635681
73636093
Hurwitz space
Moduli spaces of ramified covers In mathematics, in particular algebraic geometry, Hurwitz spaces are moduli spaces of ramified covers of the projective line, and they are related to the moduli of curves. Their rational points are of interest for the study of the inverse Galois problem, and as such they have been extensively studied by arithmetic geometers. More precisely, Hurwitz spaces classify isomorphism classes of Galois covers with a given automorphism group formula_0 and a specified number of branch points. The monodromy conjugacy classes at each branch point are also commonly fixed. These spaces were introduced by Adolf Hurwitz, who (with Alfred Clebsch and Jacob Lüroth) showed the connectedness of the Hurwitz spaces in the case of simply branched covers (i.e., the case where formula_0 is a symmetric group and the monodromy classes are the conjugacy class of transpositions). Motivation. Let formula_0 be a finite group. The inverse Galois problem for formula_0 asks whether there exists a finite Galois extension formula_1 whose Galois group is isomorphic to formula_0. By Hilbert's irreducibility theorem, a positive answer to this question may be deduced from the existence, instead, of a finite Galois extension formula_2 with Galois group formula_0. In other words, one may try to find a connected ramified cover of the projective line formula_3 over formula_4 whose automorphism group is formula_0. If one requires that this cover be geometrically connected, that is formula_5, then this stronger form of the inverse Galois problem is called the regular inverse Galois problem. A motivation for constructing a moduli space of formula_0-covers (i.e., connected covers of formula_6 whose automorphism group is formula_0) is to transform the regular inverse Galois problem into a problem of Diophantine geometry: if (geometric) points of the moduli spaces correspond to formula_0-covers (or extensions of formula_7 with Galois group formula_0), then it is expected that rational points are related to regular extensions of formula_8 with Galois group formula_0. This geometric approach, pioneered by John G. Thompson, Michael D. Fried, Gunter Malle and Wolfgang Matzat, has been key to the realization of 25 of the 26 sporadic groups as Galois groups over formula_4; the only remaining sporadic group left to realize is the Mathieu group M23. Definitions. Configuration spaces. Let formula_0 be a finite group and formula_9 be a fixed integer. A configuration is an unordered list of formula_9 distinct points of formula_10. Configurations form a topological space: the configuration space formula_11 of formula_9 points. This space is the analytification (see GAGA) of an algebraic scheme formula_12, which is the open subvariety of formula_13 obtained by removing the closed subset corresponding to the vanishing of the discriminant. The fundamental group of the (topological) configuration space formula_11 is the Artin braid group formula_14, generated by elementary braids formula_15 subject to the braid relations (formula_16 and formula_17 commute if formula_18, and formula_19). The configuration space has the homotopy type of an Eilenberg–MacLane space formula_20. "G"-covers and monodromy conjugacy classes. 
A formula_0-cover of formula_21 ramified at a configuration formula_22 is a triple formula_23 where formula_24 is a connected topological space, formula_25 is a covering map, and formula_26 is an isomorphism formula_27, satisfying the additional requirement that formula_28 does not factor through any formula_29 where formula_30 is a configuration with less than formula_9 points. An isomorphism class of formula_0-covers is determined by the monodromy morphism, which is an equivalence class of group morphisms formula_31 under the conjugacy action of formula_0. One may choose a generating set of the fundamental group formula_32 consisting of homotopy classes of loops formula_33, each rotating once counterclockwise around each branch point, and satisfying the relation formula_34. Such a choice induces a correspondence between formula_0-covers and equivalence classes of tuples formula_35 satisfying formula_36 and such that formula_37 generate formula_0, under the conjugacy action of formula_0: here, formula_38 is the image of the loop formula_39 under the monodromy morphism. The conjugacy classes of formula_0 containing the elements formula_37 do not depend on the choice of the generating loops. They are the monodromy conjugacy classes of a given formula_0-cover. We denote by formula_40 the set of formula_9-tuples formula_41 of elements of formula_0 satisfying formula_36 and generating formula_0. If formula_42 is a list of conjugacy classes of formula_0, then formula_43 is the set of such tuples with the additional constraint formula_44. Hurwitz spaces. Topologically, the Hurwitz space classifying formula_0-covers with formula_9 branch points is an unramified cover of the configuration space formula_11 whose fiber above a configuration formula_22 is in bijection, via the choice of a generating set of loops in formula_32, with the quotient formula_45 of formula_40 by the conjugacy action of formula_0. Two points in the fiber are in the same connected component if they are represented by tuples which are in the same orbit for the action of the braid group formula_14 induced by the following formula:formula_46 This topological space may be constructed as the Borel construction formula_47: its homotopy type is given by formula_48, where formula_49 is the universal cover formula_50 of the configuration space formula_51, and the action of the braid group formula_14 on formula_45 is as above. Using GAGA results, one shows that space is the analyfication of a complex scheme, and that scheme is shown to be obtained via extension of scalars of a formula_52-scheme formula_53 by a descent criterion of Weil. The scheme formula_53 is an étale cover of the algebraic configuration space formula_12. However, it is not a "fine" moduli space in general. In what follows, we assume that formula_0 is centerless, in which case formula_53 is a fine moduli space. Then, for any field formula_54 of characteristic relatively prime to formula_55, formula_54-points of formula_56 correspond bijectively to geometrically connected formula_0-covers of formula_57 (i.e., regular Galois extensions of formula_58 with Galois group formula_0) which are unramified outside formula_9 points. The absolute Galois group of formula_4 acts on the formula_59-points of the scheme formula_56, and the fixed points of this action are precisely its formula_4-points, which in this case correspond to regular extensions of formula_8 with Galois group formula_0, unramified outside formula_9 places. Applications. The rigidity method. 
If conjugacy classes formula_60 are given, the list formula_60 is rigid when there is a tuple formula_61 "unique up to conjugacy" such that formula_36 and formula_37 generate formula_0 — in other words, formula_62 is a singleton (see also rigid group). The conjugacy classes formula_63 are rational if for any element formula_44 and any integer formula_64 relatively prime to the order of formula_38, the element formula_65 belongs to formula_66. Assume formula_0 is a centerless group, and fix a rigid list of rational conjugacy classes formula_42. Since the classes formula_67 are rational, the action of the absolute Galois group formula_68 on a formula_0-cover with monodromy conjugacy classes formula_69 is (another) formula_0-cover with monodromy conjugacy classes formula_69 (this is an application of Fried's "branch cycle lemma"). As a consequence, one may define a subscheme formula_70 of formula_53 consisting of formula_0-covers whose monodromy conjugacy classes are formula_67. Take a configuration formula_22. If the points of this configuration are not globally rational, then the action of formula_71 on formula_0-covers ramified at formula_22 will not preserve the ramification locus. However, if formula_72 is a configuration defined over formula_4 (for example, all points of the configuration are in formula_73), then a formula_0-cover branched at formula_22 is mapped by an element of formula_71 to another formula_0-cover branched at formula_22, i.e. another element of the fiber. The fiber of formula_74 above formula_22 is in bijection with formula_62, which is a singleton by the rigidity hypothesis. Hence, the single point in the fiber is necessarily invariant under the formula_71-action, and it defines a formula_0-cover defined over formula_4. This proves a theorem due to Thompson: if there exists a rigid list of rational conjugacy classes of formula_0, and formula_75, then formula_0 is a Galois group over formula_4. This has been applied to the Monster group, for which a rigid triple of conjugacy classes formula_76 (with elements of respective orders 2, 3, and 29) exists. Thompson's proof does not explicitly use Hurwitz spaces (this rereading is due to Fried), but more sophisticated variants of the rigidity method (used for other sporadic groups) are best understood using moduli spaces. These methods involve defining a curve inside a Hurwitz space — obtained by fixing all branch points except one — and then applying standard methods used to find rational points on algebraic curves, notably the computation of their genus using the Riemann-Hurwitz formula. Statistics of extensions of function fields over finite fields. Several conjectures concern the asymptotical distribution of field extensions of a given base field as the discriminant gets larger. Such conjectures include the Cohen-Lenstra heuristics and the Malle conjecture. When the base field is a function field over a finite field formula_77, where formula_78 and formula_79 does not divide the order of the group formula_0, the count of extensions of formula_77 with Galois group formula_0 is linked with the count of formula_80-points on Hurwitz spaces. This approach was highlighted by works of Jordan Ellenberg, Akshay Venkatesh, Craig Westerland and TriThang Tran. 
Their strategy to count formula_80-points on Hurwitz spaces, for large values of formula_81, is to compute the homology of the Hurwitz spaces, which reduces to purely topological questions (approached with combinatorial means), and to use the Grothendieck trace formula and Deligne's estimations of eigenvalues of Frobenius (as explained in the article about Weil conjectures). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
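As a small, concrete illustration of the braid group action on tuples described above, the Python sketch below models group elements as permutations (taking formula_0 = S3) and applies one braid generator to a tuple that multiplies to the identity and generates the group; the choice of group, the sample tuple, and the function names are illustrative only, and conjugation is taken in the convention that leaves the product of the tuple unchanged.

```python
# A minimal sketch, assuming G = S_3, of the braid move sigma_i on a tuple
# (g_1, ..., g_n): the pair (g_i, g_{i+1}) becomes (g_i g_{i+1} g_i^{-1}, g_i),
# which leaves the product of the tuple unchanged.  Permutations are stored as
# tuples p with p[x] the image of x.

def compose(p, q):
    """Product p*q acting as (p*q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, px in enumerate(p):
        inv[px] = x
    return tuple(inv)

def braid_generator(tup, i):
    """Apply sigma_i to the tuple (0-indexed position i)."""
    g = list(tup)
    g[i], g[i + 1] = compose(compose(g[i], g[i + 1]), inverse(g[i])), g[i]
    return tuple(g)

def product(tup):
    out = tuple(range(3))            # identity of S_3
    for g in tup:
        out = compose(out, g)
    return out

# The transpositions (0 1), (1 2) and the 3-cycle (0 2 1): their product is the
# identity and they generate S_3, so the tuple satisfies the Hurwitz conditions.
t = ((1, 0, 2), (0, 2, 1), (2, 0, 1))
assert product(t) == (0, 1, 2)
moved = braid_generator(t, 0)
assert product(moved) == (0, 1, 2)   # the braid move preserves the product
print(t, "->", moved)
```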
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "F \\mid \\Q" }, { "math_id": 2, "text": "F \\mid \\Q(T)" }, { "math_id": 3, "text": "\\mathbb{P}^1_{\\Q}" }, { "math_id": 4, "text": "\\Q" }, { "math_id": 5, "text": "F \\cap \\bar\\Q = \\Q" }, { "math_id": 6, "text": "\\mathbb{P}^1" }, { "math_id": 7, "text": "\\bar\\Q(T)" }, { "math_id": 8, "text": "\\Q(T)" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\mathbb{A}^1(\\C)" }, { "math_id": 11, "text": "\\operatorname{Conf}_n" }, { "math_id": 12, "text": "\\mathcal{U}_n" }, { "math_id": 13, "text": "\\mathbb{A}^n" }, { "math_id": 14, "text": "B_n" }, { "math_id": 15, "text": "\\sigma_1, \\ldots, \\sigma_{n-1}" }, { "math_id": 16, "text": "\\sigma_i" }, { "math_id": 17, "text": "\\sigma_j" }, { "math_id": 18, "text": "|i-j|>1" }, { "math_id": 19, "text": "\\sigma_i \\sigma_{i+1} \\sigma_i = \\sigma_{i+1} \\sigma_i \\sigma_{i+1}" }, { "math_id": 20, "text": "K(B_n, 1)" }, { "math_id": 21, "text": "\\mathbb{P}^1_{\\C}" }, { "math_id": 22, "text": "\\mathbf{t}" }, { "math_id": 23, "text": "(Y, p, f)" }, { "math_id": 24, "text": "Y" }, { "math_id": 25, "text": "p : Y \\to \\mathbb{P}^1 \\smallsetminus \\mathbf{t}" }, { "math_id": 26, "text": "f" }, { "math_id": 27, "text": "\\operatorname{Aut}(Y, p) \\simeq G" }, { "math_id": 28, "text": "p : Y \\to \\mathbb{P}^1\\smallsetminus\\mathbf{t}" }, { "math_id": 29, "text": "Y \\to \\mathbb{P}^1\\smallsetminus\\mathbf{t'}" }, { "math_id": 30, "text": "\\mathbf{t'}" }, { "math_id": 31, "text": "\\pi_1(\\mathbb{P}^1 \\smallsetminus \\mathbf{t},\\, \\infty) \\to G" }, { "math_id": 32, "text": "\\pi_1(\\mathbb{P}^1 \\smallsetminus \\mathbf{t},\\, \\infty)" }, { "math_id": 33, "text": "\\gamma_1, \\ldots, \\gamma_n" }, { "math_id": 34, "text": "\\gamma_1 \\cdots \\gamma_n = 1" }, { "math_id": 35, "text": "(g_1, \\ldots, g_n) \\in G^n" }, { "math_id": 36, "text": "g_1 \\cdots g_n = 1" }, { "math_id": 37, "text": "g_1, \\ldots, g_n" }, { "math_id": 38, "text": "g_i" }, { "math_id": 39, "text": "\\gamma_i" }, { "math_id": 40, "text": "V_n" }, { "math_id": 41, "text": "(g_1, \\ldots, g_n)" }, { "math_id": 42, "text": "\\mathbf{c} = (c_1, \\ldots, c_n)" }, { "math_id": 43, "text": "V_n^{\\mathbf{c}}" }, { "math_id": 44, "text": "g_i \\in c_i" }, { "math_id": 45, "text": "V_n/G" }, { "math_id": 46, "text": "\\sigma_i.(g_1,\\ldots,g_n) = (g_1,\\ldots,g_{i+1}^{g_i}, g_i, \\ldots,g_n)." 
}, { "math_id": 47, "text": "G \\,\\backslash\\!\\!\\backslash\\, V_n \\,/\\!\\!/\\, B_n" }, { "math_id": 48, "text": "\\widetilde{\\operatorname{Conf}}_n \\underset{B_n}{\\times} (V_n/G)" }, { "math_id": 49, "text": "\\widetilde{\\operatorname{Conf}}_n" }, { "math_id": 50, "text": "EB_n" }, { "math_id": 51, "text": "\\operatorname{Conf}_n \\cong BB_n" }, { "math_id": 52, "text": "\\Z\\left[\\frac{1}{|G|}\\right]" }, { "math_id": 53, "text": "\\mathcal{H}_{G, n}" }, { "math_id": 54, "text": "K" }, { "math_id": 55, "text": "|G|" }, { "math_id": 56, "text": "\\mathcal{H}_{G,n}" }, { "math_id": 57, "text": "\\mathbb{P}^1_K" }, { "math_id": 58, "text": "K(T)" }, { "math_id": 59, "text": "\\bar\\Q" }, { "math_id": 60, "text": "(c_1, \\ldots, c_n)" }, { "math_id": 61, "text": "(g_1, \\ldots, g_n) \\in c_1 \\times \\cdots \\times c_n" }, { "math_id": 62, "text": "V_n^{\\mathbf{c}}/G" }, { "math_id": 63, "text": "c_1,\\ldots,c_n" }, { "math_id": 64, "text": "k" }, { "math_id": 65, "text": "g_i^k" }, { "math_id": 66, "text": "c_i" }, { "math_id": 67, "text": "c_1, \\ldots, c_n" }, { "math_id": 68, "text": "G_{\\Q}=\\operatorname{Gal}(\\bar\\Q\\mid\\Q)" }, { "math_id": 69, "text": "c_1,\\ldots,c_n " }, { "math_id": 70, "text": "\\mathcal{H}^{\\mathbf{c}}_{G, n}" }, { "math_id": 71, "text": "G_{\\Q}" }, { "math_id": 72, "text": "\\mathbf{t} \\in \\mathcal{U}_n(\\Q)" }, { "math_id": 73, "text": "\\mathbf{A}^1(\\Q)" }, { "math_id": 74, "text": "\\mathcal{H}^{\\mathbf{c}}_{G, n} \\to \\mathcal{U}_n" }, { "math_id": 75, "text": "Z(G)=1" }, { "math_id": 76, "text": "(c_1, c_2, c_3)" }, { "math_id": 77, "text": "\\mathbb{F}_q(T)" }, { "math_id": 78, "text": "q = p^r" }, { "math_id": 79, "text": "p" }, { "math_id": 80, "text": "\\mathbb{F}_q" }, { "math_id": 81, "text": "q" } ]
https://en.wikipedia.org/wiki?curid=73636093
7363661
Sergey Stechkin
Soviet mathematician (1920–1995) Sergey Borisovich Stechkin (6 September 1920 – 22 November 1995) was a prominent Soviet mathematician who worked in the theory of functions (especially approximation theory) and number theory. Biography. Sergey Stechkin was born on 6 September 1920 in Moscow. His father (Boris Stechkin) was a Soviet turbojet engine designer and academician. His great uncle, N.Ye. Zhukovsky, was the founding father of modern aero- and hydrodynamics. His maternal grandfather, N.A. Shilov, was a notable chemist. His paternal grandfather was Sergey Solomin, a science fiction author. Stechkin attended school 58 and then attempted to matriculate at Moscow State University. He was turned down, likely because the Soviet regime viewed his father as a political dissident at the time. He matriculated at Gorky State University instead. A year later, he was nevertheless able to transfer to the Mechanics and Mathematics department at Moscow State University, where he studied mathematics and was a student of D. E. Menshov. Stechkin received his PhD in 1948 with a dissertation titled "On the order of best approximations of continuous functions". Later he worked as a mathematician at the Steklov Institute of Mathematics in Moscow. He was the founder and first director of the department of the Institute in Yekaterinburg. Later this department became the Institute of Mechanics and Mathematics at the Ural branch of the Russian Academy of Sciences. Stechkin founded the mathematical journal “Mathematical Notes” and served as its editor-in-chief for more than 20 years. Stechkin served as professor of mathematics at Moscow State University, and his honors include the Chebyshev Award of the Russian Academy of Sciences in 1993. He died in 1995 in Moscow from age-related chronic illness. His contributions to mathematics include the generalization of Jackson's inequality for all formula_0 spaces. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L_p" } ]
https://en.wikipedia.org/wiki?curid=7363661
73645787
Diffuse correlation spectrometry
Medical imaging and optical technique Diffuse correlation spectroscopy (DCS) is a type of medical imaging and optical technique that utilizes near-infrared light to directly and non-invasively measure tissue blood flow. The imaging modality was created by David Boas and Arjun Yodh in 1995. Blood flow is one of the most important factors affecting the delivery of oxygen and other nutrients to tissues. Abnormal blood flow is associated with many diseases such as stroke and cancer. Tumors from cancer can generate abnormal tumor blood flow compared to the surrounding tissue. Current treatments attempt to decrease blood flow to cancer cells. Therefore, there is an urgent need for a way to measure blood flow. However, blood flow is difficult to measure because the sensitivity and stability of the measurement depend on the magnitude of flow, the location, and the diameter of individual vessels. Current imaging modalities used to measure blood flow include Doppler ultrasound, PET, and MRI. Doppler ultrasound is limited to large vessels. PET requires arterial blood sampling and exposure to ionizing radiation. MRI cannot be used for patients with pacemakers and those with metal implants. Altogether, these imaging modalities have large and costly instrumentation and are not conducive to continuous measurements. With these considerations in mind, the first methodology used to measure blood flow is near-infrared spectroscopy (NIRS). It is based on a well-known spectral window that exists in the near-infrared (NIR, 700-900 nm) where tissue absorption is relatively low, so that light can penetrate into deep/thick volumes of tissue, up to several centimeters. It provides a fast and portable alternative for measuring deep tissue hemodynamics. However, it has poor spatial resolution and is a ‘static’ method. This means that it measures the relatively slow variation in tissue absorption and scattering. In other words, it measures the changes in the amount of scattering rather than the motion of the scatterers. This led to the ‘dynamic’ NIRS technique, or diffuse correlation spectroscopy. It measures the motion of the scatterers while also maintaining the advantages of NIRS. The primary moving scatterers are red blood cells. The main advantages of this method are no ionizing radiation, no contrast agents, high temporal resolution, and large penetration depth. The utility of DCS technology has been demonstrated in tumors, brains, and skeletal muscles. The general approach with DCS is that the temporal statistics of the fluctuations of the scattered light within a speckle area or pixel are monitored. Then, the electric field temporal autocorrelation function is measured. Using a model for photon propagation through tissues, the measured autocorrelation signal is used to determine the motion of blood flow. Mathematical principles. Diffuse correlation spectrometry is an extension of single-scattering dynamic light scattering (DLS). Single-scattering theory becomes inadequate as multiple scattering effects take place in thick biological tissues. Therefore, each scattering event contributes to the decay of the correlation function. The fields from individual photon paths are assumed to be uncorrelated; therefore, the total field autocorrelation function can be expressed as the weighted sum of the field autocorrelation functions from each photon path.
The physical effect that makes the blood flow measurement possible is that the temporal electric field autocorrelation function, shown in equation 1, diffuses through tissue in a manner similar to the light fluence rate. formula_0 In a highly scattering medium, the photon fluence rate obeys the time-dependent diffusion equation, shown in equation 2; the variables follow standard optical imaging notation. formula_1 The blood flow measurement can therefore be governed by the diffusion equation. Many tissue optical properties that affect diffusion, such as the tissue absorption coefficient and the reduced scattering coefficient, are the same for the temporal autocorrelation. Using the same set of approximations, the temporal field autocorrelation function obeys a formally similar diffusion equation, shown in equation 3. formula_2 The mean-square particle displacement has been found to be reasonably well approximated as an “effective” Brownian motion, i.e., "DB" represents the effective diffusion coefficient of the moving scatterers. In order to estimate relative blood flow from DCS data, the measured intensity autocorrelation functions are fitted to solutions of equation 3. There is currently no established explanation of why Brownian-motion correlation curves work so effectively; the approach remains empirical. The quantity α"DB" (units of cm2/s) has been found to correlate well with other blood flow measurement modalities and is used to measure blood flow; it is therefore called the blood flow index (BFI). To calculate the relative blood flow (rBF), the equation is shown in equation 4, where BFI0 is the DCS blood flow measurement at a baseline. formula_3 Instrumentation and data acquisition. The instrumentation needed to conduct the data acquisition includes a multimode optical fiber, single-mode or few-mode fibers, photon-counting avalanche photodiodes (APDs), a multi-tau correlator board, and a computer. The first step of data acquisition is probing the tissue with multimode optical fibers that deliver long-coherence-length laser light to the tissue. The second step is collecting photons emitted from the tissue surface with single-mode or few-mode fibers. In the third step, the APDs, which act as the system's detectors, detect the photons from the single-mode or few-mode fibers and produce transistor-transistor logic (binary) outputs. These outputs are fed into the multi-tau correlator board, which calculates the temporal intensity auto-correlation functions of the detected signal. The functions are then transferred to the computer, where they are fitted to the diffusion equation from the previous section in order to determine the optical properties of the tissue as well as properties of the scatterers (red blood cells), such as the blood flow index. Application Example. A clinical application of DCS is the diagnosis of cancers. An example of this is measuring red blood cell flow in breast tumors. In this experiment, both healthy patients and patients with breast tumors were recruited. Researchers scanned the tumor with a hand-held optical probe with four sources and detectors spaced 2.5 cm apart. Then, the resultant correlation functions were fitted to the solution of the correlation diffusion equation to obtain the blood flow index. The average relative blood flow was reported at each position. Blood flow increased in both horizontal and vertical scans as the probe crossed over the tumor.
These findings were consistent with previous Doppler ultrasound and PET results. Advantages, limitations, and future directions. Diffuse correlation spectrometry measures the motion of scatters or red blood cells in tissue by analyzing the intensity of autocorrelation functions. There are many advantages to this method. The first advantage is that DCS can be used for patients of all ages. This is significant as some modalities such as MRI are difficult to use for certain populations. The second advantage is that DCS instrumentation is easy to assemble and requires only one wavelength that can be chosen. The third advantage is that the theoretical concepts of DCS can be adapted to other blood flow imaging techniques. However, there are limitations associated with DCS.  First, the reason for why the dynamics of RBCs are so well approximated by a Brownian motion flow model is still not clear. Second, motion artifacts are common and can generate signals that can mislead physiological interpretation. Third, on the instrumentation side, the low SNR levels due to small fibers and tissues are challenging. Next steps for DCS include using this modality as a bedside monitor of cerebral perfusion. Furthermore, DCS should be used to increase our understanding of early brain development. The ability to monitor neurovascular responses will enable the use of more complex stimulation paradigms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
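To make the fitting step concrete, the following sketch (synthetic data, not from any study cited here) fits field autocorrelation curves to a deliberately simplified single-exponential decay whose rate is treated as proportional to the blood flow index, and then forms the relative blood flow of equation 4; a real DCS analysis fits the full solution of the correlation diffusion equation (equation 3) for the measurement geometry, so the model, lag times, and noise level below are placeholders.

```python
# A sketch under simplifying assumptions: the field autocorrelation is modelled
# as a single exponential whose decay rate gamma is proportional to the blood
# flow index, so the ratio of fitted rates reproduces rBF of equation 4.
import numpy as np
from scipy.optimize import curve_fit

def g1_model(tau, gamma):
    return np.exp(-gamma * tau)

tau = np.logspace(-7, -3, 100)                       # lag times in seconds
rng = np.random.default_rng(0)
g1_baseline = g1_model(tau, 1.0e5) + 0.01 * rng.standard_normal(tau.size)
g1_activated = g1_model(tau, 2.0e5) + 0.01 * rng.standard_normal(tau.size)

gamma_baseline, _ = curve_fit(g1_model, tau, g1_baseline, p0=[1e5])
gamma_activated, _ = curve_fit(g1_model, tau, g1_activated, p0=[1e5])

print("relative blood flow ~", gamma_activated[0] / gamma_baseline[0])  # ~2
```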
[ { "math_id": 0, "text": "\\langle E^*(r,t)\\cdot E(t,\\tau) \\rangle (1)" }, { "math_id": 1, "text": "\\nabla \\cdot (D\\nabla\\phi(r,t)) - \\nu\\mu_a\\phi(r,t) + \\nu S(r,t) = {\\partial \\phi(r,t) \\over\\partial t} (2)" }, { "math_id": 2, "text": "[\\nabla \\cdot D(r) \\nabla - \\nu \\mu_a(r) - \\frac{\\alpha}{3} \\nu \\mu_s^' k_o^2 \\langle \\Delta r^2(\\tau)\\rangle]G_1(r,\\tau) = -\\nu S(r)~(3)" }, { "math_id": 3, "text": "rBF = \\frac{BFI}{BFI_0} ~ (4)" } ]
https://en.wikipedia.org/wiki?curid=73645787
7364629
Dutching
In gambling, Dutching is sharing the risk of losing across a number of runners by backing more than one selection in a race or event. One needs to calculate the correct stake to place on each selection so that the return is the same if any of them wins. Although not foolproof, because handicapping is still involved, there have been successful bettors throughout history who have applied this system. This is not to be confused with what constitutes a Dutch book which is when a bookmaker goes overbroke (the opposite to overround). It is thought the strategy behind Dutching was originally conceived and employed by Arthur Flegenheimer (also known as Dutch Schultz) alongside various rackets he had running at the racetrack. The system has since taken his name. The strategy can pay dividends when gamblers successfully reduce the potential winners of an event to a select few from the field or when information about runners not expected to perform well does not reach the market (so as to affect the odds), making it profitable to back the rest of the field. Dutching can also be used to reduce the price of the commission you would pay at a betting exchange by dutching at two bookmakers (normally Asian style) instead. Profitability. A Dutch or an arb is profitable if the sum of the reciprocals of the decimal odds of each selection is less than 1, and each bet is sized such that the payout in each outcome are the same. Additionally, the profitability of a Dutch/arb can be expressed as 1-R, where R is the sum of the reciprocals. In practice, bookmakers will always ensure that R is comfortably greater than 1, to generate a profit for themselves and to negate the effect of any slight arbitrage possibilities between different bookmakers. Worked examples. The simplest form of market to Dutch is two-way, such as a tennis match or the number of goals scored in a game of football, but any number of runners can be dutched. These examples are based on betting on goals scored in a football game. Example 1 - an unprofitable two-way arbitrage. formula_0 This would give a loss of formula_1, so the odds are not profitable. Example 2 - a profitable two-way arbitrage. In the same situation as above, another bookmaker (Bookmaker 3) is offering odds of 1.95 on the Under 2.5 outcome (unlikely). formula_2 Therefore, this would give a profit of formula_3 on the total stakes. In this instance, betting $100 on Over 2.5 and $100formula_4$107.69 on Under 2.5 would cost you $207.69. If Over 2.5 wins, you are awarded $100formula_5$210, while if Under 2.5 wins you are also awarded $107.69formula_6 $210, resulting in a guaranteed profit of $210 - $207.69 = $2.31. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
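A short script can make the stake-sizing arithmetic explicit. The sketch below (an illustration, not part of the original examples beyond reusing their odds) splits a total stake across any number of decimal-odds selections in proportion to their reciprocals, which equalizes the payout across outcomes, and reports the profit implied by the sum of reciprocals R.

```python
# Equal-payout Dutching across decimal odds; the odds and total stake reuse
# Example 2 above, while the function and variable names are this sketch's own.

def dutch(odds, total_stake):
    reciprocals = [1.0 / o for o in odds]
    r = sum(reciprocals)                      # profitable only if r < 1
    stakes = [total_stake * rec / r for rec in reciprocals]
    payout = stakes[0] * odds[0]              # the same for every selection
    return stakes, payout, payout - total_stake

stakes, payout, profit = dutch([2.10, 1.95], total_stake=207.69)
print(stakes)            # ~[100.00, 107.69]
print(payout, profit)    # ~210.00 and ~+2.31, matching the worked example
```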
[ { "math_id": 0, "text": "\\begin{aligned}\n\\text{Sum of Reciprocals} &= \\frac{1}{\\text{Decimal Odds at Bookmaker 1}} + \\frac{1}{\\text{Decimal Odds at Bookmaker 2}} \\\\\n &= \\frac{1}{2.1} + \\frac{1}{1.8} = 0.476 + 0.556 = 1.032\n\\end{aligned}" }, { "math_id": 1, "text": "1 - 1.032 = -0.032 = -3.2\\%" }, { "math_id": 2, "text": "\\begin{aligned}\n\\text{Sum of Reciprocals} &= \\frac{1}{\\text{Decimal Odds at Bookmaker 1}} + \\frac{1}{\\text{Decimal Odds at Bookmaker 3}} \\\\\n &= \\frac{1}{2.1} + \\frac{1}{1.95} = 0.476 + 0.513 = 0.989\n\\end{aligned}" }, { "math_id": 3, "text": "1 - 0.989 = 0.011 = +1.1\\%" }, { "math_id": 4, "text": "\\times \\frac{2.1}{1.95}=" }, { "math_id": 5, "text": "\\times 2.1=" }, { "math_id": 6, "text": "\\times 1.95\\approx" } ]
https://en.wikipedia.org/wiki?curid=7364629
73647497
Three-photon adaptive optics microscopy
Fluorescence imaging technology Three-photon adaptive optics microscopy (3PAOM) is a technology that implements adaptive optics to correct wavefront aberrations produced by three-photon microscopy. This technique allows for significantly improved performance when compared to traditional confocal microscopy (also known as single-photon microscopy) and two-photon microscopy. Concept. Three-photon excitation microscopy (3PEF) was first performed in 1964 by S. Singh and L. T. Bradley at the National Research Council in Ottawa, Canada. This technology was further advanced in 1996 by Stefan Hell and others, who demonstrated the possibility of applying three-photon excitation microscopy to scanning fluorescence microscopy, paving the way for later applications of 3PEF. Adaptive optics was first theorized by American astronomer Horace W. Babcock in 1953, who surmised that it could be used to improve the quality of astronomical images. However, insufficient computational power was available at the time to make the technology practical, which came into widespread use only during the 1990s. While adaptive optics have found widespread use in astronomy and retinal imaging, their application in microscopy has been a more recent advancement as of 2017. In three-photon excitation, the target fluorophore absorbs three photons of roughly one-third of the fluorophore's excitation energy almost simultaneously (all photons arrive within approximately 1 femtosecond of each other). This permits 3PEF to have useful depths significantly greater than other fluorescent imaging techniques, but greater imaging depths produce more significant wavefront aberrations. Adaptive optics can be used to correct these aberrations, thereby maintaining image clarity at greater penetration depths. Advantages. Adaptive optics-corrected three-photon microscopy has the potential to improve deep tissue imaging significantly. Three-photon microscopy has vastly improved penetration depth compared to two-photon microscopy, and thanks to the correction made by adaptive optics, image quality can be preserved even at high penetration depths.  As a result, 3PEF also gives reduced degradation of the signal-to-background ratio with depth when compared with two-photon microscopy. Three-photon microscopy is also more resistant to out-of-focus light and is less prone to causing photobleaching due to the lower per-photon energy than two-photon microscopy or confocal microscopy. Development. In the early 2010s, deep tissue imaging was first performed using three-photon fluorescence microscopy. In 2013, Nicholas G. Horton were able to image a mouse brain with an excitation window of 1700 nm. In 2017, Christopher J. Rowlands increased useful penetration depth significantly by employing wide-field three-photon excitation. While these advancements drastically improved penetration depth over two-photon microscopy, image resolution was limited by the aberrations introduced by travel through tissue. In 2021, Lina Streich overcame this limitation by combining the effects of indirect adaptive optics with three-photon excitation to increase useful penetration depth up to 1.4 mm in a mouse brain. In 2022, David Sinefeld et al. further improved both resolution and penetration depth by applying adaptive optics techniques through a spatial light modulator. As of 2023, development is ongoing, with researchers investigating techniques of maintaining high resolution at even greater penetration depths. Wavefront Correction. 
In order to remove the aberration in the received wavefront, a wavefront correction is needed. Light from the tissue is incident upon a deformable mirror, which adjusts the wavefront through a feedback control system. A three-point parabolic approximation is applied to optimize the phase. For each Zernike order, both negative and positive phase patterns are used (formula_0 and formula_1). The signals measured with the positive and negative patterns are taken, together with the original signal (to which no phase has been applied), to calculate the multiplication constant according to the three-point parabolic approximation equation. The correction weight can thus be calculated for three-photon microscopy: formula_2 The "i"th Zernike pattern, scaled by the calculated correction weight, is then added to the phase currently applied on the deformable mirror. A full sequence of corrections is complete when there is no measurable improvement in the signal. Signal improvement can be quantified in a number of different ways, including total fluorescence signal, signal-to-noise ratio, and contrast. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
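The following Python sketch illustrates one way to carry out the three-point parabolic step described above for a single Zernike mode: the cube root of each measured signal is taken (three-photon fluorescence scales with the cube of the excitation intensity) and the vertex of the parabola through the three points gives the correction weight. The signal values are made up, and the sign conventions of the closed-form expression in the article may differ from this numerical version.

```python
# Three-point parabolic step for one Zernike mode (illustrative values).
import numpy as np

def correction_weight(s_minus, s_zero, s_plus, alpha):
    amps = np.array([-alpha, 0.0, alpha])       # applied phase amplitudes
    vals = np.cbrt([s_minus, s_zero, s_plus])   # cube root of the 3P signal
    a, b, _ = np.polyfit(amps, vals, 2)         # exact parabola through 3 points
    return -b / (2.0 * a)                       # vertex = optimal amplitude

print(correction_weight(s_minus=80.0, s_zero=120.0, s_plus=100.0, alpha=0.5))
```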
[ { "math_id": 0, "text": "+\\alpha" }, { "math_id": 1, "text": "-\\alpha" }, { "math_id": 2, "text": "C_i=\\frac{\\alpha}{2}\\frac{\\sqrt[3]{S_{i+}}-\\sqrt[3]{S_{i-}}}{-\\sqrt[3]{S_{i+}}+\\sqrt[3]{S_{i-}}-2\\sqrt[3]{S_{i0}}}" } ]
https://en.wikipedia.org/wiki?curid=73647497
7364791
ADIC
ADIC may refer to: formula_0-adic may refer to: Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title ADIC.
[ { "math_id": 0, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=7364791
73654860
Hyperdimensional computing
Computational approach Hyperdimensional computing (HDC) is an approach to computation, particularly artificial intelligence. HDC is motivated by the observation that the cerebellar cortex operates on high-dimensional data representations. In HDC, information is thereby represented as a hyperdimensional (long) vector called a hypervector, i.e. an array of numbers. A hyperdimensional vector (hypervector) could include thousands of numbers that represent a point in a space of thousands of dimensions. Vector Symbolic Architectures is an older name for the same broad approach. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Process. Data is mapped from the input space to sparse HD space under an encoding function φ : X → H. HD representations are stored in data structures that are subject to corruption by noise/hardware failures. Noisy/corrupted HD representations can still serve as input for learning, classification, etc. They can also be decoded to recover the input data. H is typically restricted to range-limited integers (−v to v). This is analogous to the learning process conducted by the fruit fly olfactory system. The input is a roughly 50-dimensional vector corresponding to odor receptor neuron types. The HD representation uses ~2,000 dimensions. Transparency. HDC algebra reveals the logic of how and why systems make decisions, unlike artificial neural networks. Physical world objects can be mapped to hypervectors, to be processed by the algebra. Performance. HDC is suitable for "in-memory computing systems", which compute and hold data on a single chip, avoiding data transfer delays. Analog devices operate at low voltages. They are energy-efficient, but prone to error-generating noise. HDC can tolerate such errors. Various teams have developed low-power HDC hardware accelerators. Nanoscale memristive devices can be exploited to perform computation. An in-memory hyperdimensional computing system can implement operations on two memristive crossbar engines together with peripheral digital CMOS circuits. Experiments using 760,000 phase-change memory devices performing analog in-memory computing achieved accuracy comparable to software implementations. Errors. HDC is robust to errors such as an individual bit error (a 0 flips to 1 or vice versa) missed by error-correcting mechanisms. Eliminating such error-correcting mechanisms can save up to 25% of compute cost. This is possible because such errors leave the result "close" to the correct vector. Reasoning using vectors is not compromised. HDC is at least 10x more error tolerant than traditional artificial neural networks, which are already orders of magnitude more tolerant than traditional computing. Example. A simple example considers images containing black circles and white squares. Hypervectors can represent SHAPE and COLOR variables and hold the corresponding values: CIRCLE, SQUARE, BLACK and WHITE. Bound hypervectors can hold the pairs BLACK and CIRCLE, etc. Orthogonality. High-dimensional space allows many mutually orthogonal vectors. However, if vectors are instead allowed to be "nearly orthogonal", the number of distinct vectors in high-dimensional space is vastly larger. HDC uses the concept of distributed representations, in which an object/observation is represented by a pattern of values across many dimensions rather than a single constant. Operations. HDC can combine hypervectors into new hypervectors using well-defined vector space operations.
Groups, rings, and fields over hypervectors become the underlying computing structures with addition, multiplication, permutation, mapping, and inverse as primitive computing operations. All computational tasks are performed in high-dimensional space using simple operations like element-wise additions and dot products. Binding creates ordered point tuples and is also a function ⊗ : H × H → H. The input is two points in H, while the output is a dissimilar point. Multiplying the SHAPE vector with CIRCLE "binds" the two, representing the idea “SHAPE is CIRCLE”. This vector is "nearly orthogonal" to SHAPE and CIRCLE. The components are recoverable from the vector (e.g., answer the question "is the shape a circle?"). Addition creates a vector that combines concepts. For example, adding “SHAPE is CIRCLE” to “COLOR is RED,” creates a vector that represents a red circle. Permutation rearranges the vector elements. For example, permuting a three-dimensional vector with values labeled "x", "y" and "z", can interchange "x" to "y", "y" to "z", and "z" to "x". Events represented by hypervectors A and B can be added, forming one vector, but that would sacrifice the event sequence. Combining addition with permutation preserves the order; the event sequence can be retrieved by reversing the operations. Bundling combines a set of elements in H as function ⊕ : H ×H → H. The input is two points in H and the output is a third point that is similar to both. History. Vector symbolic architectures (VSA) provided a systematic approach to high-dimensional symbol representations to support operations such as establishing relationships. Early examples include holographic reduced representations, binary spatter codes, and matrix binding of additive terms. HD computing advanced these models, particularly emphasizing hardware efficiency. In 2018, Eric Weiss showed how to fully represent an image as a hypervector. A vector could contain information about all the objects in the image, including properties such as color, position, and size. In 2023, Abbas Rahimi et al., used HDC with neural networks to solve Raven's progressive matrices. In 2023, Mike Heddes et Al. under the supervision of Professors Givargis, Nicolau and Veidenbaum created a hyper-dimensional computing library that is built on top of PyTorch. Applications. Image recognition. HDC algorithms can replicate tasks long completed by deep neural networks, such as classifying images. Classifying an annotated set of handwritten digits uses an algorithm to analyze the features of each image, yielding a hypervector per image. The algorithm then adds the hypervectors for all labeled images of e.g., zero, to create a prototypical hypervector for the concept of zero and repeats this for the other digits. Classifying an unlabeled image involves creating a hypervector for it and comparing it to the reference hypervectors. This comparison identifies the digit that the new image most resembles. Given labeled example set formula_0 is the class of a particular "xi". Given query xq ∈ X the most similar prototype can be found with formula_1. The similarity metric ρ is typically the dot-product. Reasoning. Hypervectors can also be used for reasoning. Raven's progressive matrices presents images of objects in a grid. One position in the grid is blank. The test is to choose from candidate images the one that best fits. A dictionary of hypervectors represents individual objects. Each hypervector represents an object concept with its attributes. 
For each test image, a neural network generates a binary hypervector (values are +1 or −1) that is as close as possible to some set of dictionary hypervectors. The generated hypervector thus describes all the objects and their attributes in the image. Another algorithm creates probability distributions for the number of objects in each image and their characteristics. These probability distributions describe the likely characteristics of both the context and candidate images. They too are transformed into hypervectors, then algebra predicts the most likely candidate image to fill the slot. This approach achieved 88% accuracy on one problem set, beating neural network–only solutions that were 61% accurate. For 3-by-3 grids, the system was 250x faster than a method that used symbolic logic to reason, because of the size of the associated rulebook. Other. Other applications include bio-signal processing, natural language processing, and robotics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
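A toy NumPy example of the binding, bundling, and similarity operations described above is given below; the dimensionality, the sign-based bundling rule, and the variable names are illustrative choices rather than the conventions of any particular HDC library.

```python
# Toy bipolar hypervectors: binding by element-wise multiplication, bundling by
# the sign of the sum, similarity by a normalized dot product.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000

def hv():                 # a random hypervector; any two are nearly orthogonal
    return rng.choice([-1, 1], size=d)

def bind(a, b):           # binding
    return a * b

def bundle(*vs):          # bundling (element-wise majority; ties become 0)
    return np.sign(np.sum(vs, axis=0)).astype(int)

def sim(a, b):            # ~0 for unrelated hypervectors, ~1 for identical ones
    return float(a @ b) / d

SHAPE, COLOR, CIRCLE, RED = hv(), hv(), hv(), hv()
record = bundle(bind(SHAPE, CIRCLE), bind(COLOR, RED))   # "a red circle"

query = bind(record, SHAPE)          # unbinding with SHAPE
print(sim(query, CIRCLE))            # clearly positive (~0.5 here)
print(sim(query, RED))               # ~0: RED is not the value of SHAPE
```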
[ { "math_id": 0, "text": "S = \\{(x_{i}, y_{i})\\}_{i=1}^N, \\ {\\scriptstyle\\text{where}} \\ x_{i} \\in X \\ {\\scriptstyle\\text{and}} \\ y_{i} \\in \\{c_{i}\\}_{i=1}^K" }, { "math_id": 1, "text": "k^* = _{k \\in 1,...,K}^{argmax} \\ p(\\phi(x_{q})), \\phi(c_{k}))" } ]
https://en.wikipedia.org/wiki?curid=73654860
73655306
Dual-axis optical coherence tomography
Optical imaging modality Dual-axis optical coherence tomography (DA-OCT) is an imaging modality that is based on the principles of optical coherence tomography (OCT). These techniques are largely used for medical imaging. OCT is non-invasive and non-contact. It allows for real-time, in situ imaging and provides high image resolution. OCT is analogous to ultrasound but relies on light waves (typically near-infrared), which makes it faster than ultrasound. In general, OCT has proven to be compact and portable. It is compatible with arterial catheters and endoscopes, which helps diagnose diseases within long internal cavities, including the esophagus (Barrett's disease) and coronary arteries (cardiovascular disease). The biggest limitation with traditional OCT is that it relies on detecting ballistic (non-scattered) photons, which can have a mean free path of only 100 microns, or singly backscattered photons. This strongly restricts depth penetration in highly-scattering biological tissue. It causes unsatisfactory signal-to-noise ratio (SNR) at deep regions. To overcome this issue, DA-OCT uses angled source and detection components and a tunable lens to create an enhanced depth of focus and improve depth penetration in biological tissue. Design. Dual-axis architecture. DA-OCT applies a dual-axis architecture to a spectral-domain OCT system. The objective is to improve the depth of view within biological tissue. Dual-axis architecture with coherence imaging was introduced in the early 2010s. Prior to the development of DA-OCT, the dual-axis design was commonly used with multiple-scattering multispectral low coherence interferometry (ms2/LCI), a technique that also analyzes multiply scattered light to take depth-resolved images from optical scattering media. For this architecture, the light source and detector are tilted at equal and opposite angles to create a dual-axis. The slight scattering angle increases the chance of collecting more photons being scattered within the tissue. The greater the angle of the source and the detector, the deeper the focal zone. But there is also a problem: the greater the angle, the smaller the focal zone. Even though the chance of detecting a diffused photon increases, the size of the region has decreased. Tunable lens. To fix the decreasing focal zone size problem, a tunable lens is used. The tunable lens allows dynamic focusing, where the focal zone can be scanned at various tissue depths. The data from different scans are stitched into a single image using an algorithm similar to one used in Gabor-domain optical coherence microscopy. This forms an enhanced depth of focus, allowing for greater penetration depth within turbid media. Instrument setup. Light from a broadband supercontinuum laser is filtered to a range of 1240 to 1390 nm and directed into a fiber coupler. The fiber coupler implements an interferometer, the hallmark of OCT, which splits the input light into sample and reference arms. The dual-axis architecture was added to the sample arm, angling the both light coming from the laser source and the light directed at the detector. By changing the angle, it increases the chance of gathering more light scattered at random angles deep in the media. DA-OCT also uses a micro-electromechanical system (MEMS) mirror for faster beam scanning. This helps decrease the integration time since DA-OCT has to gather scans at multiple depths to form a single image. Experimental applications. 
For both DA-OCT and OCT, the research group imaged the samples with the tunable lens and without the tunable lens. In their results, they referred to DA-OCT with the tunable lens as DA-DOF+ and DA-OCT without the tunable lens as just DA-OCT. (DOF+ indicates "enhanced depth of focus".) The group referred to on-axis OCT with the tunable lens as On-Axis OCT DOF+. They referred to on-axis OCT without the tunable lens as OCT or On-Axis OCT. For quantitative experiments, contrast-to-noise ratio (CNR) was used as the main metric to determine image quality. They typically imaged a needle inside the scattering media, so CNR was expressed by: formula_0 where μs is the mean pixel count of the needle profile, μm is the mean pixel count of the surrounding media, σs and σm are the corresponding standard deviations. Imaging of scattering media. Wax's research group developed Intralipid-based hydrogel phantoms, which were imaged with DA-OCT, On-Axis OCT, and DA-DOF+. To mimic highly forward scattering biological tissue, one hydrogel phantom had a reduced scattering coefficient of 1.6 mm-1 and an anisotropy of 0.9. The other hydrogel phantom had a near-zero anisotropy value to act as the control. A needle was placed in both hydrogel phantoms to be imaged. In the high anisotropy case, there was no improvement in the CNR of DA-OCT compared to On-Axis OCT. Comparing DA-DOF+ to On-Axis OCT, there was a 17% increase in CNR. In the low anisotropy case, there was no significant increase in CNR of DA-OCT over On-Axis OCT, but there was a 31% increase for DA-DOF+ over On-Axis OCT. In-vivo imaging. Wax's research group also observed a needle's CNR profile at different depths (~0 mm, 1.3 mm,  2.5 mm) within mouse skin. They imaged with On-Axis OCT, DA-OCT, On-Axis OCT DOF+, and DA-DOF+. For larger depths (&gt;1 mm), DA-OCT and DA-DOF+ produced a better CNR than On-Axis OCT and On-Axis OCT DOF+. For example, the group found a 195% increase with DA-OCT versus On-Axis OCT, and a 169% increase with DA-DOF+ versus On-Axis OCT DOF+. The DA-OCT and DA-DOF+ did not show strong CNR at shallower depths compared to On-Axis OCT and On-Axis OCT DOF+ because the needle surface was located too far from the system's focal zone. In all cases, the modes with enhanced depth of focus (DOF+) had a significantly better CNR than the corresponding modes without the tunable lens. Overall, the trends match the group's conclusions: DA-OCT DOF+ provides the best CNR at greater depths. Ex-vivo imaging. The research group led by Wax conducted a couple of qualitative studies. Firstly, they examined ex-vivo porcine ear skin using DA-OCT and traditional OCT. The epidermis appears brighter in the DA-OCT image, whereas it blends into the dermis layer in the traditional OCT image. DA-OCT detected a stronger signal from the photons than traditional OCT detected. Also, the epidermis layer appears thicker in the DA-OCT image meaning that more multiply-scattered photons were detected with DA-OCT compared to traditional OCT. The group compared DA-OCT images of injured rat skin to histopathology slides of the same samples. According to the histopathology slides, the base of the rat skin is healthy (the control), while the middle and tip indicate injury and structural damage. The DA-OCT images match these conclusions. For the healthy base, the DA-OCT image shows homogeneous backscattering intensity. For the middle and tip, the DA-OCT images show regions of inhomogeneous backscattering, which are indicative of tissue necrosis.
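As a simple numerical illustration of the contrast-to-noise ratio defined above, the following sketch computes CNR from synthetic pixel values for a needle region and the surrounding medium; the numbers are made up and are not taken from the study's images.

```python
# CNR from synthetic pixel counts for a needle region and the surrounding medium.
import numpy as np

rng = np.random.default_rng(1)
needle = rng.normal(loc=180.0, scale=20.0, size=500)    # needle pixels
medium = rng.normal(loc=100.0, scale=25.0, size=5000)   # background pixels

cnr = abs(needle.mean() - medium.mean()) / np.sqrt(needle.var() + medium.var())
print(f"CNR = {cnr:.2f}")   # ~2.5 for these made-up statistics
```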
[ { "math_id": 0, "text": "CNR = {\\left\\vert \\mu_s - \\mu_m \\right\\vert \\over \\sqrt{\\sigma_s^2 + \\sigma_m^2}}" } ]
https://en.wikipedia.org/wiki?curid=73655306
73656557
Lipid A phosphoethanolamine transferase
Enzyme Lipid A phosphoethanolamine transferase (EC 2.7.8.43, lipid A PEA transferase, "LptA", formerly EC 2.7.4.30) is an enzyme that modifies Lipid A by linkage to a phosphoethanolamine moiety. Doing so at some positions reduces the affinity for colistin and related polymyxins, resulting in reduced activity of the antimicrobial. This type of resistance is known as target modification. This type of enzyme is of special medical note, as it offers resistance to a last-resort antibiotic. The modifications also provide cross-resistance to host immunity factors, specifically antimicrobial peptides and lysozyme. EC 2.7.8.43 catalyzes one of the following three reactions: Enzyme databases may list a very long list of synonyms for this enzyme. Many of these names, such as "mcr-1", do not refer to this type of enzyme in general, but only to a specific member of the family. There are many non-mobile (chromosomal) versions of this enzyme scattered all around the evolutionary tree, but "mcr-1" was notable because it was found on a plasmid, and is therefore capable of horizontal gene transfer. Only one family of proteins is currently known to perform the activity described by the EC number. Structure. The enzyme is composed of two domains. The N-terminal part (about 1/3 of the length) is a transmembrane domain, while the rest is catalytic. Both domains contribute to the phosphoethanolamine substrate cavity. The C-terminal domain binds zinc as a cofactor. Function. Polymyxins and other cationic antimicrobial peptides attach to the LPS cell walls of bacteria by virtue of the highly negatively-charged groups in LPS such as Lipid A and Kdo. Modification of LPS with positively-charged PEA shields these sites from binding. Not all members of this family perform the same reaction, contrary to the EC classification framework. For example, "E. coli" naturally has three related genes all from this family, "EptA" through "C", all with different preferences for where to attach PEA. Addition of PEA can happen on Lipid A (this EC entry), on Kdo (EC 2.7.8.42), or on Heptose 1 (no EC number), the latter two being parts of the core oligosaccharide. In the case of "EptC", addition of PEA to Heptose compacts the LPS by forming a network of hydrogen bonds. Regulation. In chromosomal versions of this enzyme, the gene is regulated by a two-component regulatory system termed "PmrAB" or "BasRS". The sensor histidine kinase "PmrB" (also called "BasS") activates the DNA-binding response regulator "PmrA" (also called "BasR"). The sensor triggers in a variety of dangerous situations, such as the presence of metal ions or ingestion by a phagocyte, helping the bacterium build a stronger cell wall to survive. The "PhoPQ" system, which detects similar situations and the presence of antimicrobial peptides, can also cross-trigger "PmrA" via a "PmrD" connector. Antibiotic resistance can occur when this system, or its upstream signals, mutates to become constitutively active. In plasmid versions, the gene is simply constitutively activated by an upstream promoter. The extra metabolic resources diverted mean that the resistant trait is disadvantageous, by about 3%, in environments without antibiotic or antimicrobial peptide threats. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=73656557
73657212
Daniel Atkinson (biochemist)
American biochemist (1921–2024) Daniel Edward Atkinson (April 8, 1921 – February 2, 2024) was an American biochemist who worked at UCLA for 40 years from 1952 until his retirement in 1992, though he continued his scientific work as Emeritus Professor. He is best known for the concept of energy charge. Education. Atkinson was an undergraduate at the University of Nebraska, and obtained a Ph.D. at Iowa State University, where he investigated the synthesis of aromatic amino acids and effects of "p"-fluorophenylalanine in "Lactobacillus arabinosus", under the supervision of Sidney Fox. Career. After a post-doctoral period at the California Institute of Technology, followed by one as a research scientist at Argonne National laboratories, Atkinson moved to UCLA in 1952 as the second biochemist in the department. In his first work at UCLA he studied the bacterium "Hydrogenomonas facilis", beginning with a description of the purification of hydrogenase. Atkinson remained at UCLA for the remainder of his career, where he undertook numerous studies of metabolic regulation. Of these the best known is his introduction of the concept of energy charge. Energy charge. Atkinson and Walton introduced the concept of energy charge, later discussed more fully, as a way to rationalize the dependence of metabolic processes on the proportions of the adenylates. For pairs of metabolites, such as the reduced and oxidized forms of NAD, a straightforward ratio of concentrations is sufficient, but the case of the adenylates is more complicated, as there are three components to be considered, AMP, ADP and ATP. The three adenylates are related by the reaction catalysed by adenylate kinase: &lt;chem&gt;ATP + AMP &lt;=&gt; 2 ADP&lt;/chem&gt; and on the basis of this equation Atkinson proposed the following ratio as a measure of the metabolic state of a cell: formula_0 Metabolic regulation. Atkinson's work on the energy charge was part of a broader interest in metabolic regulation and its mechanisms, and he contributed numerous influential publications in this field. In addition to general articles on metabolic regulation he also worked on specific enzymes, such as isocitrate dehydrogenase and glutaminase, and on the role of urea synthesis in vertebrates. "Cellular Energy Metabolism and its Regulation". Atkinson's influential book on energy metabolism set out the concepts and understanding of metabolic regulation that had developed over the preceding decades (most notably by him), in particular explaining the role of ratios of metabolite concentrations, including the energy charge, in regulating enzyme properties. Later life and death. Atkinson spent his last years living in Corvallis, Oregon. He died on February 2, 2024, at the age of 102. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
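A one-line calculation makes the energy charge formula concrete; the adenylate concentrations below are illustrative values, not measurements.

```python
# Adenylate energy charge for illustrative concentrations (in mM).
def energy_charge(atp, adp, amp):
    return (atp + 0.5 * adp) / (atp + adp + amp)

print(energy_charge(atp=3.0, adp=1.0, amp=0.5))   # (3.0 + 0.5) / 4.5 ~ 0.78
```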
[ { "math_id": 0, "text": "\\mathrm{Energy} \\text{ } \\mathrm{charge} = \\frac{[\\mathrm{ATP}] + 0.5[\\mathrm{ADP}]}{[\\mathrm{ATP}] + [\\mathrm{ADP}] + [\\mathrm{AMP}]}" } ]
https://en.wikipedia.org/wiki?curid=73657212
736618
Reeh–Schlieder theorem
Theorem in axiomatic quantum field theory The Reeh–Schlieder theorem is a result in relativistic local quantum field theory published by Helmut Reeh and Siegfried Schlieder in 1961. The theorem states that the vacuum state formula_0 is a cyclic vector for the field algebra formula_1 corresponding to any open set formula_2 in Minkowski space. That is, any state formula_3 can be approximated to arbitrary precision by acting on the vacuum with an operator selected from the local algebra, even for formula_3 that contain excitations arbitrarily far away in space. In this sense, states created by applying elements of the local algebra to the vacuum state are not localized to the region formula_2. For practical purposes, however, local operators still generate quasi-local states. More precisely, the long range effects of the operators of the local algebra will diminish rapidly with distance, as seen by the cluster properties of the Wightman functions. And with increasing distance, creating a unit vector localized outside the region requires operators of ever increasing operator norm. This theorem is also cited in connection with quantum entanglement. But it is subject to some doubt whether the Reeh–Schlieder theorem can usefully be seen as the quantum field theory analog to quantum entanglement, since the exponentially-increasing energy needed for long range actions will prohibit any macroscopic effects. However, Benni Reznik showed that vacuum entanglement can be distilled into EPR pairs used in quantum information tasks. It is known that the Reeh–Schlieder property applies not just to the vacuum but in fact to any state with bounded energy. If some finite number "N" of space-like separated regions is chosen, the multipartite entanglement can be analyzed in the typical quantum information setting of "N" abstract quantum systems, each with a Hilbert space possessing a countable basis, and the corresponding structure has been called "superentanglement".
[ { "math_id": 0, "text": "\\vert \\Omega \\rangle" }, { "math_id": 1, "text": "\\mathcal{A}(\\mathcal{O})" }, { "math_id": 2, "text": "\\mathcal{O}" }, { "math_id": 3, "text": "\\vert \\psi \\rangle" } ]
https://en.wikipedia.org/wiki?curid=736618
73671575
Spatial frequency domain imaging
Non-invasive imaging technique Spatial Frequency Domain Imaging (SFDI) is a non-invasive optical imaging method that uses spatially modulated light to extract quantitative information about tissue properties. Its large field of view coupled with its quantitative approach to imaging has made it a novel imaging modality, with many use cases in murine pre-clinical trials. Its clinical relevance in human medical practice so far has been limited, but there are currently outstanding clinical trials in their recruitment phase for the use of the technology. Methodology. In spatial frequency domain imaging, the projector utilizes either visible or near-infrared light as its source. The source projector is positioned obliquely to the field of view being imaged. The camera which receives the output is positioned perpendicular to the field of view. In regard to the properties of the light, it can be represented as a function of wavelength, spatial frequency, and angle of incidence (λ,fx,θ). The light is projected onto the medium, where the transmitted and reflected light is then received by the camera. In order to qualify as SFDI imaging, at least two spatial frequencies must be used. Processing. SFDI produces raw output in DC and AC modes, with the DC mode usually being a 0 mm-1 representation, while the AC mode is the raw output obtained at a higher spatial frequency. From there, demodulation, surface correction, and calibration take place. The mapping and demodulation of the image is based on a LUT (look-up table) derived from photon Monte Carlo simulations. Diffuse reflectance is calculated using the following two equations: (1) formula_0 (2) formula_1 Using the calculated diffuse reflectance and derived LUT, single-pixel demodulation is used to map the reduced scattering coefficient and absorption coefficient at every pixel in the image. From there, insight-dependent processing can reveal quantitative markers such as scattering amplitude or scattering power, which can serve as preferred lenses for image analysis. This is possible due to the known chromophore extinction coefficients of deoxygenated and oxygenated hemoglobin. Uses. Current uses of the technology have mostly centered around preclinical studies of tumor-infected mice. There have been many such studies intending to show the proficiency of the technology in evaluating optical property change in tumors over time. These studies have employed anticancer drug treatments including CPA and DC101 on tumor-infected mice and have demonstrated the ability to reveal the efficacy of cytotoxic and anti-angiogenic therapies. This is particularly significant as the development of in vivo, non-invasive evaluation techniques means there is at the very least research potential for the technology, and in the best case, treatment potential. In terms of clinical trials involving humans, SFDI has been used to examine burn wounds, nonmelanoma skin cancer, and skin photodamage, but has not been used in in vivo cancer studies. Advantages &amp; Limitations. One advantage of SFDI is its quantitative approach to optical property analysis, favoring numerical insights over qualitative ones. As with many optical imaging technologies, it is non-invasive and does not pose any significant risks to the patient. The large field of view coupled with the high spatial resolution means that a large area can be precisely imaged.
The source properties of the light are also very adaptable, with multi-wavelength, multi-frequency, and multi-phase combination possibilities. Some of the limitations of SFDI include its limited depth, which is a persistent problem in optical imaging. The imaging and processing techniques can also become labor-intensive and expensive under certain settings, reducing the feasible range of applications. There has also been limited clinical implementation so far, and new clinical trials demonstrating use cases in the cancer research space will be needed for SFDI to prove its usefulness. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
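The sketch below is a schematic NumPy rendering of equations (1) and (2): given the exit position and per-layer path lengths of each photon from a Monte Carlo simulation, the demodulated diffuse reflectance at spatial frequency fx is obtained as the magnitude of the mean complex photon weight. The photon arrays here are random stand-ins for real Monte Carlo output, and the absorption coefficients are arbitrary.

```python
# Equations (1)-(2) as a vectorized mean of complex photon weights; the photon
# exit positions and per-layer path lengths are random placeholders for real
# Monte Carlo output.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(scale=0.3, size=n)           # exit positions x_n (cm)
d1 = rng.exponential(scale=0.5, size=n)     # path length in layer 1 (cm)
d2 = rng.exponential(scale=1.0, size=n)     # path length in layer 2 (cm)

def diffuse_reflectance(fx, mua1=0.1, mua2=0.2):
    w = np.exp(-mua1 * d1 - mua2 * d2) * np.exp(-2j * np.pi * fx * x)
    return abs(np.mean(w))                  # demodulated amplitude at fx

for fx in (0.0, 0.1, 0.2):                  # at least two spatial frequencies
    print(fx, round(diffuse_reflectance(fx), 4))
```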
[ { "math_id": 0, "text": "W_n = exp(-\\mu_{a,1}d_{1,n} - \\mu_{a,2}d_{2,n})exp(-2\\pi i f_x x_n) " }, { "math_id": 1, "text": "R_d(f_x)= \\frac{1}{N}\\sum\\limits_{n=1}^N W_n" } ]
https://en.wikipedia.org/wiki?curid=73671575
736803
Expected utility hypothesis
Concept in economics The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds this postulate to model aggregate social behaviour. The expected utility hypothesis states an agent chooses between risky prospects by comparing expected utility values (i.e. the weighted sum of adding the respective utility values of payoffs multiplied by their probabilities). The summarised formula for expected utility is formula_0 where formula_1 is the probability that outcome indexed by formula_2 with payoff formula_3 is realized, and function "u" expresses the utility of each respective payoff. Graphically the curvature of the u function captures the agent's risk attitude. Standard utility functions represent ordinal preferences. The expected utility hypothesis imposes limitations on the utility function and makes utility cardinal (though still not comparable across individuals). Although the expected utility hypothesis is standard in economic modelling, it has been found to be violated in psychological experiments. For many years, psychologists and economic theorists have been developing new theories to explain these deficiencies. These include prospect theory, rank-dependent expected utility and cumulative prospect theory, and bounded rationality. Justification. Bernoulli's formulation. Nicolaus Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians to develop expected utility theory as a solution. Bernoulli's paper was the first formalization of marginal utility, which has broad application in economics in addition to expected utility theory. He used this concept to formalize the idea that the same amount of additional money was less useful to an already-wealthy person than it would be to a poor person. The theory can also more accurately describe more realistic scenarios (where expected values are finite) than expected value alone. He proposed that a nonlinear function of utility of an outcome should be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. Bernoulli further proposed that it was not the goal of the gambler to maximize his expected gain but to instead maximize the logarithm of his gain. Daniel Bernoulli drew attention to psychological and behavioral components behind the individual's decision-making process and proposed that the utility of wealth has a diminishing marginal utility. For example, as someone gets wealthier, an extra dollar or an additional good is perceived as less valuable. In other words, desirability related with a financial gain depends not only on the gain itself but also on the wealth of the person. Bernoulli suggested that people maximize "moral expectation" rather than expected monetary value. Bernoulli made a clear distinction between expected value and expected utility. Instead of using the weighted outcomes, he used the weighted utility multiplied by probabilities. He proved that the utility function used in real life is finite, even when its expected value is infinite. Ramsey-theoretic approach to subjective probability. 
In 1926, Frank Ramsey introduced the Ramsey's Representation Theorem. This representation theorem for expected utility assumed that preferences are defined over a set of bets where each option has a different yield. Ramsey believed that we always choose decisions to receive the best expected outcome according to our personal preferences. This implies that if we are able to understand the priorities and personal preferences of an individual we can anticipate what choices they are going to take. In this model, he defined numerical utilities for each option to exploit the richness of the space of prices. The outcome of each preference is exclusive of each other. For example, if you study, then you can not see your friends, however you will get a good grade in your course. In this scenario, we analyze personal preferences and beliefs and will be able to predict which option a person might choose (e.g. if someone prioritizes their social life over academic results, they will go out with their friends). Assuming that the decisions of a person are rational, according to this theorem, we should be able to know the beliefs and utilities from a person just by looking at the choices they make (which is wrong). Ramsey defines a proposition as "ethically neutral" when two possible outcomes have an equal value. In other words, if the probability can be defined in terms of a preference, each proposition should have in order to be indifferent between both options. Ramsey shows that formula_4 Savage's subjective expected utility representation. In the 1950s, Leonard Jimmie Savage, an American statistician, derived a framework for comprehending expected utility. At that point, it was considered the first and most thorough foundation to understanding the concept. Savage's framework involved proving that expected utility could be used to make an optimal choice among several acts through seven axioms. In his book, The Foundations of Statistics, Savage integrated a normative account of decision making under risk (when probabilities are known) and under uncertainty (when probabilities are not objectively known). Savage concluded that people have neutral attitudes towards uncertainty and that observation is enough to predict the probabilities of uncertain events. A crucial methodological aspect of Savage's framework is its focus on observable choices. Cognitive processes and other psychological aspects of decision making matter only to the extent that they have directly measurable implications on choice. The theory of subjective expected utility combines two concepts: first, a personal utility function, and second, a personal probability distribution (usually based on Bayesian probability theory). This theoretical model has been known for its clear and elegant structure and its considered by some researchers to be "the most brilliant axiomatic theory of utility ever developed". Instead of assuming the probability of an event, Savage defines it in terms of preferences over acts. Savage used the states (something a person doesn't control) to calculate the probability of an event. On the other hand, he used utility and intrinsic preferences to predict the outcome of the event. Savage assumed that each act and state are sufficient to uniquely determine an outcome. However, this assumption breaks in cases where an individual does not have enough information about the event. Additionally, he believed that outcomes must have the same utility regardless of state. 
For that reason, it is essential to correctly identify which statement is considered an outcome. For example, if someone says "I got the job", this affirmation is not considered an outcome, since the utility of the statement will be different for each person depending on intrinsic factors such as financial necessity or judgment about the company. For that reason, no state can rule out the performance of an act. Only when the state and the act are evaluated simultaneously does it become possible to determine an outcome with certainty. Savage's representation theorem. The Savage representation theorem (Savage, 1954): A preference ≽ satisfies P1–P7 if and only if there is a finitely additive probability measure P and a function u : C → R such that for every pair of acts "f" and "g": "f" ≽ "g" ⇔ ∫Ω "u"("f"("ω")) "dP" ≥ ∫Ω "u"("g"("ω")) "dP". The key ingredients in Savage's theory are: Von Neumann–Morgenstern utility theorem. The von Neumann–Morgenstern axioms. There are four axioms of the expected utility theory that define a "rational" decision maker: completeness; transitivity; independence of irrelevant alternatives; and continuity. "Completeness" assumes that an individual has well-defined preferences and can always decide between any two alternatives. This means that the individual prefers formula_5 to formula_6, formula_6 to formula_5, or is indifferent between formula_5 and formula_6. "Transitivity" assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently. "Independence of irrelevant alternatives" pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial axiom. "Continuity" assumes that when there are three lotteries (formula_9 and formula_10) and the individual prefers formula_5 to formula_6 and formula_6 to formula_10, then there should be a possible combination of formula_5 and formula_10 in which the individual is then indifferent between this mix and the lottery formula_6. If all these axioms are satisfied, then the individual is said to be rational and the preferences can be represented by a utility function, i.e. one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference formula_19 amounts to choosing the lottery with the highest expected utility. This result is called the von Neumann–Morgenstern utility representation theorem. In other words, if an individual's behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also referred to as von Neumann–Morgenstern (vNM) utility functions. This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value, but rather the highest expected utility. The expected utility maximizing individual makes decisions rationally based on the axioms of the theory. 
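As a concrete illustration of choosing the lottery with the highest expected utility, the following sketch is not part of the source: it compares a sure payoff with a risky gamble under a logarithmic utility function, where the particular payoffs, probabilities, and the choice of u are assumptions made purely for the example.

```python
# A minimal sketch: expected-utility comparison of two lotteries under an
# assumed (logarithmic, hence risk-averse) utility function.
import math

def expected_utility(lottery, u):
    """lottery: list of (probability, payoff) pairs; u: utility function."""
    return sum(p * u(x) for p, x in lottery)

u = lambda wealth: math.log(wealth)       # concave utility -> risk aversion

safe = [(1.0, 100)]                       # 100 for sure (expected value 100)
risky = [(0.5, 50), (0.5, 160)]           # higher expected value (105)

for name, lottery in (("safe", safe), ("risky", risky)):
    ev = sum(p * x for p, x in lottery)
    eu = expected_utility(lottery, u)
    print(f"{name}: expected value = {ev:.1f}, expected utility = {eu:.3f}")

# A vNM-rational agent with this u picks the lottery with the higher expected
# utility (here the safe one), not the one with the higher expected value.
```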
The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen "ordinal revolution" of the 1930s, and it revived the idea of cardinal utility in economic theory. However, while in this context the "utility function" is cardinal, in that implied behavior would be altered by a non-linear monotonic transformation of utility, the "expected utility function" is ordinal because any monotonic increasing transformation of expected utility gives the same behavior. Examples of von Neumann–Morgenstern utility functions. The utility function formula_20 was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one, and is still sometimes assumed in economic analyses. The utility function formula_21 exhibits constant absolute risk aversion, and for this reason is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function formula_22 gives exactly the same preferences orderings as does formula_23; thus it is irrelevant that the values of formula_23 and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities. The class of constant relative risk aversion utility functions contains three categories. Bernoulli's utility function formula_24 has relative risk aversion equal to 1. The functions formula_25 for formula_26 have relative risk aversion equal to formula_27. And the functions formula_28 for formula_29 have relative risk aversion equal to formula_30 See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA). Formula for expected utility. When the entity formula_31 whose value formula_32 affects a person's utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is formula_33 where the left side is the subjective valuation of the gamble as a whole, formula_34 is the "i"th possible outcome, formula_35 is its valuation, and formula_36 is its probability. There could be either a finite set of possible values formula_37 in which case the right side of this equation has a finite number of terms; or there could be an infinite set of discrete values, in which case the right side has an infinite number of terms. When formula_31 can take on any of a continuous range of values, the expected utility is given by formula_38 where formula_39 is the probability density function of formula_40 Measuring risk in the expected utility context. Often people refer to "risk" in the sense of a potentially quantifiable entity. In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed, or in the unlikely case in which the utility function has a quadratic form. However, David E. Bell proposed a measure of risk which follows naturally from a certain class of von Neumann–Morgenstern utility functions. Let utility of wealth be given by formula_41 for individual-specific positive parameters "a" and "b". 
Then expected utility is given by formula_42 Thus the risk measure is formula_43, which differs between two individuals if they have different values of the parameter formula_44 allowing different people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of "a") may choose different portfolios because they may have different values of "b". See also Entropic risk measure. For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters with one representing the expected value of the variable in question and the other representing its risk. Risk aversion. The expected utility theory takes into account that individuals may be risk-averse, meaning that the individual would refuse a fair gamble (a fair gamble has an expected value of zero). Risk aversion implies that their utility functions are concave and show diminishing marginal wealth utility. The risk attitude is directly related to the curvature of the utility function: risk neutral individuals have linear utility functions, while risk seeking individuals have convex utility functions and risk averse individuals have concave utility functions. The degree of risk aversion can be measured by the curvature of the utility function. Since the risk attitudes are unchanged under affine transformations of "u", the second derivative "u"" is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt measure of absolute risk aversion: formula_45 where formula_46 is wealth. The Arrow–Pratt measure of relative risk aversion is: formula_47 Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where RRA(w) is constant, and the CARA (constant absolute risk aversion) functions, where ARA(w) is constant. They are often used in economics for simplification. A decision that maximizes expected utility also maximizes the probability of the decision's consequences being preferable to some uncertain threshold. In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold. The St. Petersburg paradox. The St. Petersburg paradox presented by Nicolaus Bernoulli illustrates that decision making based on expected value of monetary payoffs leads to absurd conclusions. When a probability distribution function has an infinite expected value, a person who only cares about expected values of a gamble would pay an arbitrarily large finite amount to take this gamble. However, this experiment demonstrated that there is no upper bound on the potential rewards from very low probability events. In the hypothetical setup, a person flips a coin repeatedly. The participant's prize is determined by the number of times the coin lands on heads consecutively. For every time the coin comes up heads (1/2 probability), the participant's prize is doubled. The game ends when the participant flips the coin and it comes out tails. 
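A short numerical illustration, not from the source, of why the game's expected value diverges while Bernoulli's logarithmic utility assigns it a finite expected utility; the convention that the payoff is 2^k when the first tails appears on flip k is an assumption of the example.

```python
# Partial sums for the St. Petersburg game: payoff 2**k with probability 2**(-k).
import math

def partial_sums(n_terms):
    ev, eu = 0.0, 0.0
    for k in range(1, n_terms + 1):
        p, payoff = 0.5**k, 2.0**k
        ev += p * payoff            # each term adds exactly 1: the sum diverges
        eu += p * math.log(payoff)  # log-utility terms shrink geometrically
    return ev, eu

for n in (10, 20, 40):
    ev, eu = partial_sums(n)
    print(f"{n} terms: expected value = {ev:.0f}, expected log-utility = {eu:.4f}")

# The expected value grows without bound (1 per term), while the expected
# log-utility converges to 2*ln(2) ~ 1.386 -- Bernoulli's resolution of the paradox.
```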
A player who only cares about expected value of the payoff should be willing to pay any finite amount of money to play because this entry cost will always be less than the expected, infinite, value of the game. However, in reality, people do not do this. Only a few of the participants were willing to pay a maximum of $25 to enter the game because many of them were risk averse and unwilling to bet on a very small possibility at a very high price. Criticism. In the early days of the calculus of probability, classic utilitarians believed that the option which has the greatest utility will produce more pleasure or happiness for the agent and therefore must be chosen. The main problem with the expected value theory is that there might not be a unique correct way to quantify utility or to identify the best trade-offs. For example, some of the trade-offs may be intangible or qualitative. Rather than monetary incentives, other desirable ends can also be included in utility such as pleasure, knowledge, friendship, etc. Originally the total utility of the consumer was the sum of independent utilities of the goods. However, the expected value theory was dropped as it was considered too static and deterministic. The classical counter example to the expected value theory (where everyone makes the same "correct" choice) is the St. Petersburg Paradox. In empirical applications, a number of violations of expected utility theory have been shown to be systematic and these falsifications have deepened understanding of how people actually decide. Daniel Kahneman and Amos Tversky in 1979 presented their prospect theory which showed empirically how preferences of individuals are inconsistent among the same choices, depending on the framing of the choices, i.e. how they are presented. Like any mathematical model, expected utility theory is a simplification of reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that expected utility theory is a reliable guide to human behavior or optimal practice. The mathematical clarity of expected utility theory has helped scientists design experiments to test its adequacy, and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has produced deviations from expected utility theory to account for the empirical facts. Other critics argue that applying expected utility to economic and policy decisions has engendered inappropriate valuations, particularly in scenarios in which monetary units are used to scale the utility of nonmonetary outcomes, such as deaths. Conservatism in updating beliefs. Psychologists have discovered systematic violations of probability calculations and behavior by humans. This has been evidenced with examples such as the Monty Hall problem, where it was demonstrated that people do not revise their degrees of belief in line with experimentally observed probabilities and also that probabilities cannot be applied to single cases. On the other hand, in updating probability distributions using evidence, a standard method uses conditional probability, namely the rule of Bayes. An experiment on belief revision has suggested that humans change their beliefs faster when using Bayesian methods than when using informal judgment. According to the empirical results, there has been almost no recognition in decision theory of the distinction between the problem of justifying its theoretical claims regarding the properties of rational belief and desire. 
One of the main reasons is that people's basic tastes and preferences for losses cannot be represented with utility as they change under different scenarios. Irrational deviations. Behavioral finance has produced several generalized expected utility theories to account for instances where people's choices deviate from those predicted by expected utility theory. These deviations are described as "irrational" because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved. Particular theories include prospect theory, rank-dependent expected utility and cumulative prospect theory; these are nevertheless considered insufficient to predict preferences and the expected utility. Additionally, experiments have shown systematic violations and generalizations based on the results of Savage and von Neumann–Morgenstern. This is because preferences and utility functions constructed under different contexts are significantly different. This is demonstrated by the contrast of individual preferences under the insurance and lottery contexts, which shows the degree of indeterminacy of the expected utility theory. In practice there will be many situations where the probabilities are unknown, and one is operating under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus one must make assumptions about the probabilities, but then the expected values of various decisions can be very sensitive to the assumptions. This is particularly a problem when the expectation is dominated by rare extreme events, as in a long-tailed distribution. Alternative decision techniques are robust to uncertainty of probability of outcomes, either not depending on probabilities of outcomes and only requiring scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions. Bayesian approaches to probability treat it as a degree of belief and thus they do not draw a distinction between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e. where the uncertain probabilities are modelled as distributions whose parameters are themselves drawn from a higher-level distribution (hyperpriors). Preference reversals over uncertain outcomes. Starting with studies such as Lichtenstein &amp; Slovic (1971), it was discovered that subjects sometimes exhibit signs of preference reversals with regard to their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value "p bets" (lotteries with a high chance of winning a low prize) lower than "$ bets" (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the "p bets" over "$ bets". Many studies have examined this "preference reversal", from both an experimental (e.g., Plott &amp; Grether, 1979) and theoretical (e.g., Holt, 1986) standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions. Recommendations. There are three components in the psychology field that are seen as crucial to the development of a more accurate descriptive theory of decision under risks. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "U(p)=\\sum u(x_k)p_k " }, { "math_id": 1, "text": "p_k" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "x_k" }, { "math_id": 4, "text": " P(E) = (1-U(m))(U(b)-U(w)) " }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "B" }, { "math_id": 7, "text": "A \\succeq B" }, { "math_id": 8, "text": "A \\preceq B" }, { "math_id": 9, "text": "A, B" }, { "math_id": 10, "text": "C" }, { "math_id": 11, "text": " B \\succeq C" }, { "math_id": 12, "text": " A \\succeq C" }, { "math_id": 13, "text": "A \\succeq B" }, { "math_id": 14, "text": "tA+(1-t)C \\succeq t B+(1-t)C," }, { "math_id": 15, "text": "t \\in [0, 1]" }, { "math_id": 16, "text": "A \\succeq B \\succeq C" }, { "math_id": 17, "text": "pA+(1-p)C" }, { "math_id": 18, "text": "p\\in [0,1]" }, { "math_id": 19, "text": "\\succeq" }, { "math_id": 20, "text": "u(w)=\\log(w)" }, { "math_id": 21, "text": " u(w)= -e^{-aw}" }, { "math_id": 22, "text": "K-e^{-aw}" }, { "math_id": 23, "text": "-e^{-aw}" }, { "math_id": 24, "text": " u(w) = \\log(w)" }, { "math_id": 25, "text": " u(w) = w^{\\alpha}" }, { "math_id": 26, "text": "\\alpha \\in (0,1)" }, { "math_id": 27, "text": "1-\\alpha\\in (0,1)" }, { "math_id": 28, "text": " u(w) = -w^{\\alpha}" }, { "math_id": 29, "text": "\\alpha < 0" }, { "math_id": 30, "text": "1-\\alpha >1." }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": " x_i" }, { "math_id": 33, "text": "\\operatorname E[u(x)]=p_1 \\cdot u(x_1)+p_2 \\cdot u(x_2)+\\cdots" }, { "math_id": 34, "text": "x_i" }, { "math_id": 35, "text": "u(x_i)" }, { "math_id": 36, "text": "p_i" }, { "math_id": 37, "text": "x_i," }, { "math_id": 38, "text": "\\operatorname E[u(x)] = \\int_{-\\infty}^\\infty u(x)f(x) \\, dx," }, { "math_id": 39, "text": "f(x)" }, { "math_id": 40, "text": "x." }, { "math_id": 41, "text": " u(w)= w-be^{-aw}" }, { "math_id": 42, "text": "\n\\begin{align}\n\\operatorname{E}[u(w)]&=\\operatorname{E}[w]-b\\operatorname{E}[e^{-aw}]\\\\\n &=\\operatorname{E}[w]-b\\operatorname{E}[e^{-a\\operatorname{E}[w]-a(w-\\operatorname{E}[w])}]\\\\\n &=\\operatorname{E}[w]-be^{-a\\operatorname{E}[w]}\\operatorname{E}[e^{-a(w-\\operatorname{E}[w])}]\\\\\n &= \\text{expected wealth} - b \\cdot e^{-a\\cdot \\text{expected wealth}}\\cdot \\text{risk}.\n\\end{align}\n" }, { "math_id": 43, "text": "\\operatorname{E}(e^{-a(w-\\operatorname{E}w)})" }, { "math_id": 44, "text": "a," }, { "math_id": 45, "text": "\\mathit{ARA}(w) =-\\frac{u''(w)}{u'(w)}," }, { "math_id": 46, "text": "w" }, { "math_id": 47, "text": "\\mathit{RRA}(w) =-\\frac{wu''(w)}{u'(w)}" } ]
https://en.wikipedia.org/wiki?curid=736803
73682080
General equation of heat transfer
Entropy production in Newtonian fluids In fluid dynamics, the general equation of heat transfer is a nonlinear partial differential equation describing specific entropy production in a Newtonian fluid subject to thermal conduction and viscous forces: formula_0 where formula_1 is the specific entropy, formula_2 is the fluid's density, formula_3 is the fluid's temperature, formula_4 is the material derivative, formula_5 is the thermal conductivity, formula_6 is the dynamic viscosity, formula_7 is the second Lamé parameter, formula_8 is the flow velocity, formula_9 is the del operator used to characterize the gradient and divergence, and formula_10 is the Kronecker delta. If the flow velocity is negligible, the general equation of heat transfer reduces to the standard heat equation. It may also be extended to rotating, stratified flows, such as those encountered in geophysical fluid dynamics. Derivation. Extension of the ideal fluid energy equation. For a viscous, Newtonian fluid, the governing equations for mass conservation and momentum conservation are the continuity equation and the Navier-Stokes equations:formula_11where formula_12 is the pressure and formula_13 is the viscous stress tensor, with the components of the viscous stress tensor given by:formula_14The energy of a unit volume of the fluid is the sum of the kinetic energy formula_15 and the internal energy formula_16, where formula_17 is the specific internal energy. In an ideal fluid, as described by the Euler equations, the conservation of energy is defined by the equation:formula_18where formula_19 is the specific enthalpy. However, for conservation of energy to hold in a viscous fluid subject to thermal conduction, the energy flux due to advection formula_20 must be supplemented by a heat flux given by Fourier's law formula_21 and a flux due to internal friction formula_22. Then the general equation for conservation of energy is:formula_23 Equation for entropy production. Note that the thermodynamic relations for the internal energy and enthalpy are given by:formula_24We may also obtain an equation for the kinetic energy by taking the dot product of the Navier-Stokes equation with the flow velocity formula_8 to yield:formula_25The second term on the righthand side may be expanded to read:formula_26With the aid of the thermodynamic relation for enthalpy and the last result, we may then put the kinetic energy equation into the form:formula_27Now expanding the time derivative of the total energy, we have:formula_28Then by expanding each of these terms, we find that:formula_29And collecting terms, we are left with:formula_30Now adding the divergence of the heat flux due to thermal conduction to each side, we have that:formula_31However, we know that by the conservation of energy on the lefthand side is equal to zero, leaving us with:formula_32The product of the viscous stress tensor and the velocity gradient can be expanded as:formula_33Thus leading to the final form of the equation for specific entropy production:formula_34In the case where thermal conduction and viscous forces are absent, the equation for entropy production collapses to formula_35 - showing that ideal fluid flow is isentropic. Application. This equation is derived in Section 49, at the opening of the chapter on "Thermal Conduction in Fluids" in the sixth volume of L.D. Landau and E.M. Lifshitz's "Course of Theoretical Physics". 
It might be used to measure the heat transfer and air flow in a domestic refrigerator, to do a harmonic analysis of regenerators, or to understand the physics of glaciers.
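The algebraic step that converts the viscous-stress term into the explicitly non-negative dissipation form above can be checked symbolically. The following sketch is not part of the source; it verifies the identity for one illustrative velocity field, and that field is an arbitrary choice made only for the example.

```python
import sympy as sp

x, y, z, mu, zeta, gamma = sp.symbols('x y z mu zeta gamma', real=True)
coords = (x, y, z)

# An illustrative velocity field: plane shear plus a uniform expansion.
v = sp.Matrix([gamma * y + x, y, z])

grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[i], coords[j]))   # dv_i/dx_j
div_v = sum(sp.diff(v[i], coords[i]) for i in range(3))
delta = sp.eye(3)

# Viscous stress tensor of a Newtonian fluid (traceless part + bulk part).
sigma = mu * (grad_v + grad_v.T - sp.Rational(2, 3) * div_v * delta) + zeta * div_v * delta

# Left side: sigma_ij * dv_i/dx_j, as it appears before the expansion.
lhs = sum(sigma[i, j] * grad_v[i, j] for i in range(3) for j in range(3))

# Right side: the dissipation form used in the general equation of heat transfer.
dev = grad_v + grad_v.T - sp.Rational(2, 3) * div_v * delta
rhs = mu / 2 * sum(dev[i, j]**2 for i in range(3) for j in range(3)) + zeta * div_v**2

print(sp.simplify(lhs - rhs))  # prints 0: the two expressions agree for this field
```

The identity holds for any velocity field; the script only spot-checks it for the field chosen above.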
[ { "math_id": 0, "text": " \\underbrace{\\rho T{Ds\\over{Dt}}}_{\\text{Heat Gain}} = \\underbrace{\\nabla\\cdot (\\kappa\\nabla T)}_{\\text{Thermal Conduction}} + \\underbrace{{\\mu\\over{2}}\\left( {\\partial v_{i}\\over{\\partial x_{j}}} + {\\partial v_{j}\\over{\\partial x_{i}}} - {2\\over{3}}\\delta_{ij}\\nabla\\cdot {\\bf v} \\right)^{2} + \\zeta(\\nabla\\cdot {\\bf v})^{2}}_{\\text{Viscous Dissipation}} " }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "D/Dt" }, { "math_id": 5, "text": "\\kappa" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "\\zeta" }, { "math_id": 8, "text": "{\\bf v}" }, { "math_id": 9, "text": "\\nabla" }, { "math_id": 10, "text": "\\delta_{ij}" }, { "math_id": 11, "text": "\\begin{aligned}\n{\\partial \\rho\\over{\\partial t}} &= -\\nabla\\cdot (\\rho {\\bf v}) \\\\\n\\rho {D{\\bf v}\\over{Dt}} &= -\\nabla p + \\nabla \\cdot \\sigma\n\\end{aligned}" }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "\\sigma" }, { "math_id": 14, "text": "\\sigma_{ij} = \\mu\\left( {\\partial v_{i}\\over{\\partial x_{j}}} + {\\partial v_{j}\\over{\\partial x_{i}}} - {2\\over{3}}\\delta_{ij}\\nabla\\cdot {\\bf v} \\right) + \\zeta \\delta_{ij}\\nabla\\cdot {\\bf v} " }, { "math_id": 15, "text": "\\rho v^{2}/2 \\equiv \\rho k" }, { "math_id": 16, "text": "\\rho\\varepsilon" }, { "math_id": 17, "text": "\\varepsilon" }, { "math_id": 18, "text": "{\\partial\\over{\\partial t}}\\left[ \\rho (k+\\varepsilon) \\right] + \\nabla\\cdot \\left[ \\rho {\\bf v}(k+ h) \\right] = 0 " }, { "math_id": 19, "text": "h" }, { "math_id": 20, "text": "\\rho {\\bf v}(k+h)" }, { "math_id": 21, "text": "{\\bf q} = -\\kappa\\nabla T" }, { "math_id": 22, "text": "-\\sigma\\cdot {\\bf v}" }, { "math_id": 23, "text": "{\\partial\\over{\\partial t}}\\left[ \\rho (k+\\varepsilon) \\right] + \\nabla\\cdot \\left[ \\rho {\\bf v}(k+ h) - \\kappa\\nabla T - \\sigma\\cdot {\\bf v} \\right] = 0 " }, { "math_id": 24, "text": "\\begin{aligned}\n\\rho d\\varepsilon &= \\rho Tds + {p\\over{\\rho}}d\\rho \\\\\n\\rho dh &= \\rho Tds + dp\n\\end{aligned}" }, { "math_id": 25, "text": "\\rho {Dk\\over{Dt}} = -{\\bf v}\\cdot \\nabla p + v_{i}{\\partial\\sigma_{ij}\\over{\\partial x_{j}}} " }, { "math_id": 26, "text": "\\begin{aligned}\nv_{i} {\\partial \\sigma_{ij}\\over{\\partial x_{j}}} &= {\\partial\\over{\\partial x_{j}}}\\left(\\sigma_{ij}v_{i} \\right ) - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} \\\\\n&\\equiv \\nabla\\cdot (\\sigma \\cdot {\\bf v}) - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}}\n\\end{aligned} " }, { "math_id": 27, "text": "\\rho {Dk\\over{Dt}} = -\\rho {\\bf v}\\cdot \\nabla h + \\rho T {\\bf v}\\cdot \\nabla s + \\nabla\\cdot (\\sigma \\cdot {\\bf v}) - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} " }, { "math_id": 28, "text": "{\\partial\\over{\\partial t}}\\left[ \\rho (k+\\varepsilon) \\right] = \\rho {\\partial k\\over{\\partial t}} + \\rho {\\partial\\varepsilon\\over{\\partial t}} + (k+\\varepsilon) {\\partial \\rho\\over{\\partial t}} " }, { "math_id": 29, "text": "\\begin{aligned}\n\\rho {\\partial k\\over{\\partial t}} &= -\\rho {\\bf v}\\cdot\\nabla k - \\rho {\\bf v}\\cdot\\nabla h + \\rho T{\\bf v}\\cdot \\nabla s + \\nabla\\cdot(\\sigma\\cdot {\\bf v}) - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} \\\\\n\\rho {\\partial\\varepsilon\\over{\\partial t}} &= \\rho T {\\partial s\\over{\\partial t}} - {p\\over{\\rho}}\\nabla\\cdot(\\rho {\\bf v}) 
\\\\\n(k+\\varepsilon){\\partial\\rho\\over{\\partial t}} &= -(k+\\varepsilon)\\nabla\\cdot (\\rho {\\bf v})\n\\end{aligned} " }, { "math_id": 30, "text": "{\\partial\\over{\\partial t}}\\left[\\rho(k+\\varepsilon) \\right ] + \\nabla \\cdot\\left[\\rho {\\bf v}(k+h) - \\sigma\\cdot {\\bf v} \\right ] = \\rho T {Ds\\over{Dt}} - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} " }, { "math_id": 31, "text": "{\\partial\\over{\\partial t}}\\left[\\rho(k+\\varepsilon) \\right ] + \\nabla \\cdot\\left[\\rho {\\bf v}(k+h) - \\kappa\\nabla T - \\sigma\\cdot {\\bf v} \\right ] = \\rho T {Ds\\over{Dt}} - \\nabla\\cdot(\\kappa\\nabla T) - \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} " }, { "math_id": 32, "text": "\\rho T {Ds\\over{Dt}} = \\nabla\\cdot(\\kappa\\nabla T) + \\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} " }, { "math_id": 33, "text": "\\begin{aligned}\n\\sigma_{ij}{\\partial v_{i}\\over{\\partial x_{j}}} &= \\mu\\left( {\\partial v_{i}\\over{\\partial x_{j}}} + {\\partial v_{j}\\over{\\partial x_{i}}} - {2\\over{3}}\\delta_{ij}\\nabla\\cdot {\\bf v} \\right){\\partial v_{i}\\over{\\partial x_{j}}} + \\zeta \\delta_{ij}{\\partial v_{i}\\over{\\partial x_{j}}}\\nabla\\cdot {\\bf v} \\\\\n&= {\\mu\\over{2}}\\left( {\\partial v_{i}\\over{\\partial x_{j}}} + {\\partial v_{j}\\over{\\partial x_{i}}} - {2\\over{3}}\\delta_{ij}\\nabla\\cdot {\\bf v} \\right)^{2} + \\zeta(\\nabla \\cdot {\\bf v})^{2}\n\\end{aligned} " }, { "math_id": 34, "text": "\\rho T {Ds\\over{Dt}} = \\nabla\\cdot(\\kappa\\nabla T) + {\\mu\\over{2}}\\left( {\\partial v_{i}\\over{\\partial x_{j}}} + {\\partial v_{j}\\over{\\partial x_{i}}} - {2\\over{3}}\\delta_{ij}\\nabla\\cdot {\\bf v} \\right)^{2} + \\zeta(\\nabla \\cdot {\\bf v})^{2} " }, { "math_id": 35, "text": "Ds/Dt=0" } ]
https://en.wikipedia.org/wiki?curid=73682080
73688310
Weinstein's neighbourhood theorem
In symplectic geometry, a branch of mathematics, Weinstein's neighbourhood theorem refers to a few distinct but related theorems, involving the neighbourhoods of submanifolds in symplectic manifolds and generalising the classical Darboux's theorem. They were proved by Alan Weinstein in 1971. Darboux-Moser-Weinstein theorem. This statement is a direct generalisation of Darboux's theorem, which is recovered by taking formula_0 to be a point. Let formula_1 be a smooth manifold of dimension formula_2, and formula_3 and formula_4 two symplectic forms on formula_1. Consider a compact submanifold formula_5 such that formula_6. Then there exist two open neighbourhoods formula_7 and formula_8 of formula_0 in formula_1, and a diffeomorphism formula_9, such that formula_10 and formula_11. Its proof employs Moser's trick. Generalisation: equivariant Darboux theorem. The statement (and the proof) of the Darboux-Moser-Weinstein theorem can be generalised in the presence of a symplectic action of a Lie group. Let formula_1 be a smooth manifold of dimension formula_2, and formula_3 and formula_4 two symplectic forms on formula_1. Let also formula_12 be a compact Lie group acting on formula_1 and leaving both formula_3 and formula_4 invariant. Consider a compact and formula_12-invariant submanifold formula_5 such that formula_6. Then there exist two formula_12-invariant open neighbourhoods formula_7 and formula_8 of formula_0 in formula_1, and a formula_12-equivariant diffeomorphism formula_9, such that formula_10 and formula_11. In particular, taking again formula_0 as a point, one obtains an equivariant version of the classical Darboux theorem. Weinstein's Lagrangian neighbourhood theorem. Let formula_1 be a smooth manifold of dimension formula_2, and formula_3 and formula_4 two symplectic forms on formula_1. Consider a compact submanifold formula_13 of dimension formula_14 which is a Lagrangian submanifold of both formula_15 and formula_16, i.e. formula_17. Then there exist two open neighbourhoods formula_7 and formula_8 of formula_18 in formula_1, and a diffeomorphism formula_9, such that formula_10 and formula_19. This statement is proved using the Darboux-Moser-Weinstein theorem, taking formula_20 to be a Lagrangian submanifold, together with a version of the Whitney Extension Theorem for smooth manifolds. Generalisation: Coisotropic Embedding Theorem. Weinstein's result can be generalised by weakening the assumption that formula_18 is Lagrangian. Let formula_1 be a smooth manifold of dimension formula_2, and formula_3 and formula_4 two symplectic forms on formula_1. Consider a compact submanifold formula_13 of dimension formula_21 which is a coisotropic submanifold of both formula_15 and formula_16, and such that formula_6. Then there exist two open neighbourhoods formula_7 and formula_8 of formula_18 in formula_1, and a diffeomorphism formula_9, such that formula_10 and formula_19. Weinstein's tubular neighbourhood theorem. While Darboux's theorem identifies locally a symplectic manifold formula_1 with formula_22, Weinstein's theorem identifies locally a Lagrangian formula_18 with the zero section of formula_22. More precisely: Let formula_23 be a symplectic manifold and formula_18 a Lagrangian submanifold. Then there exist a neighbourhood formula_24 of formula_18 in formula_1, a neighbourhood formula_25 of the zero section formula_26 in formula_22, and a symplectomorphism formula_27, such that formula_28 sends formula_18 to formula_26. Proof. This statement relies on Weinstein's Lagrangian neighbourhood theorem, as well as on the standard tubular neighbourhood theorem.
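For orientation, the local model underlying the tubular neighbourhood statement can be written down explicitly. The following display is not part of the source; it records the standard canonical symplectic structure on formula_22 in local coordinates (sign conventions vary between authors), with respect to which the zero section is Lagrangian.

```latex
% Canonical symplectic structure on the cotangent bundle T^*L, in local
% coordinates (q_1,\dots,q_n) on L and fibre coordinates (p_1,\dots,p_n):
\lambda_{\mathrm{can}} = \sum_{i=1}^{n} p_i \, dq_i ,
\qquad
\omega_{\mathrm{can}} = -\, d\lambda_{\mathrm{can}} = \sum_{i=1}^{n} dq_i \wedge dp_i .
% The zero section \{p = 0\} is Lagrangian: \omega_{\mathrm{can}} restricts to zero
% on it and it has half the dimension of T^*L. Weinstein's tubular neighbourhood
% theorem says that, near itself, every Lagrangian submanifold looks like this
% zero section inside its own cotangent bundle.
```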
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "2n" }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "\\omega_2" }, { "math_id": 5, "text": "i: X \\hookrightarrow M" }, { "math_id": 6, "text": "i^* \\omega_1 = i^* \\omega_2" }, { "math_id": 7, "text": "U_1" }, { "math_id": 8, "text": "U_2" }, { "math_id": 9, "text": "f: U_1 \\to U_2" }, { "math_id": 10, "text": "f^* \\omega_2 = \\omega_1" }, { "math_id": 11, "text": "f |_X = \\mathrm{id}_X" }, { "math_id": 12, "text": "G" }, { "math_id": 13, "text": "i: L \\hookrightarrow M" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "(M,\\omega_1)" }, { "math_id": 16, "text": "(M, \\omega_2)" }, { "math_id": 17, "text": "i^* \\omega_1 = i^* \\omega_2 = 0" }, { "math_id": 18, "text": "L" }, { "math_id": 19, "text": "f |_L = \\mathrm{id}_L" }, { "math_id": 20, "text": "X = L" }, { "math_id": 21, "text": "k" }, { "math_id": 22, "text": "T^*L" }, { "math_id": 23, "text": "(M,\\omega)" }, { "math_id": 24, "text": "U" }, { "math_id": 25, "text": "V" }, { "math_id": 26, "text": "L_0" }, { "math_id": 27, "text": "f: U \\to V" }, { "math_id": 28, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=73688310
73693034
Quantum random circuits
Quantum random circuits Quantum random circuits (QRC) is a concept of incorporating an element of randomness into the local unitary operations and measurements of a quantum circuit. The idea is similar to that of random matrix theory, which is to use the QRC to obtain almost exact results of non-integrable, hard-to-solve problems by averaging over an ensemble of outcomes. This incorporation of randomness into the circuits has many possible advantages, some of which are (i) the validation of quantum computers, which is the method that Google used when they claimed quantum supremacy in 2019, and (ii) understanding the universal structure of non-equilibrium and thermalization processes in quantum many-body dynamics. Quantum Random Circuits. The constituents of some general quantum circuits would be qubits, unitary gates, and measurements. The time evolution of the quantum circuits is discrete in time formula_0, and the states are evolved step by step in time by the application of unitary operators formula_1 under which a pure state evolves according to formula_2 (note that unitary operators can entangle states). Thus, the time evolution from a starting time, say formula_3, to some time formula_4 would be given by formula_5 where for each step, the unitary operator is represented by a tensor product of local unitary gates formula_6 where the formula_7 index specifies the lattice integer which connects a pair of qubits, and formula_8 is the time step. Figure 1 shows a time-space diagram of a quantum circuit, indicating the local interactions at each time step. In the language of quantum information theory, the number of qubits formula_9 is the circuit's width, and we define its depth formula_10 as the number of layers of unitary gates. Hence, for the configuration in Figure 1, formula_11 and formula_12. Another way to interpret the circuit is to look at it as a tensor network in which each purple box is a local gate formula_13 operating on two qubits and the total contraction of qubit indices at the start formula_3 and the end at time formula_4 on the lattice integers would give the full unitary time evolution formula_14. Thus, the propagation amplitude from some initial state given by the indices formula_15 to a final state with the indices formula_16 is formula_17 On the other hand, measurements disentangle the qubits. The measurements used are called projective measurements, defined as observations that leave the measured degrees of freedom in an eigenstate of the measured operator. Measurements in quantum mechanics are stochastic by nature, which means that circuits with the same exact structure (qubits and gates) would give different outcomes on different runs; see Figure 2. This stochastic nature, though, should be differentiated from the randomness built into the circuit. Let formula_18 be the outcome set of some random measurement, then different measurements on a fixed set of unitary gates would yield distinct formula_19 records. See the schematic diagram in Figure 2, which sketches a tree diagram with each branch representing a possible outcome of the measurements shown on the circuit. Notice that each measurement results in a different formula_19, which behaves somewhat like a random walk. If our system is just a single qubit, then each measurement causes a jump on the Bloch sphere. However, in the many-body case, the situation is complicated due to correlations between different qubits. Applications. Near-term quantum computers validation. 
As we are currently in the Noisy Intermediate-Scale Quantum (NISQ) era, which means that our current quantum computers are not fault tolerant and are not large enough to reach supremacy, we are looking for tasks that have two features: The needed tasks must be feasible on a quantum computer but classically resource-consuming in terms of, for example, time. For instance, this task could be a system that is solvable in a short time using a classical computer; however, as the system's complexity increases (larger size or dimensions), the computation time would not increase linearly. In that case, a state-of-the-art classical computer would take an unreasonable amount of time (years); meanwhile, a quantum computer is believed to give an exponential reduction in the needed time of computation. Research aimed at finding such a task has focused on sampling problems. One of the theoretically compelling methods that would provide such a task is Boson Sampling, as it shows strong complexity-theoretic evidence. However, researchers faced experimental difficulties in achieving the desired results using this sampling method. Another method is random circuit sampling, in which the main task is to sample the output of a random quantum circuit. Results have shown that this approach would be more experimentally feasible with the recent developments of superconducting qubits and has strong complexity-theoretic evidence. In Google's claim of quantum supremacy, they used their Sycamore processor, which took about 200 seconds to sample one instance of a quantum circuit a million times, whereas a state-of-the-art classical supercomputer was estimated to take 10,000 years. Non-equilibrium and thermalization of quantum many-body dynamics. One of the pressing questions in many-body dynamics is how entanglement spreads with time through, for example, a quantum quench, that is, an initially prepared system evolving unitarily in time after a sudden change in the parameters of the initial Hamiltonian. The answer to such a question forms a fundamental part of thermalization and would provide a numerical tool to simulate quantum dynamics. Quantum random circuits would serve as a playground to experiment on and understand such processes. Results using QRC methods have shown that there is a universal structure behind noisy entanglement growth.
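A brick-wall circuit of the kind described above is straightforward to simulate for a small number of qubits. The following sketch is not from the source: it applies Haar-random two-qubit gates in alternating layers to an 8-qubit state vector and prints the half-chain entanglement entropy after each layer; the qubit count, depth, and seed are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    """Sample a Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the distribution is Haar

def apply_two_qubit_gate(state, gate, i, n):
    """Apply a 4x4 gate to neighbouring qubits (i, i+1) of an n-qubit state vector."""
    psi = state.reshape((2**i, 4, 2**(n - i - 2)))
    psi = np.einsum('ab,ibj->iaj', gate, psi)
    return psi.reshape(-1)

def half_chain_entropy(state, n):
    """Von Neumann entanglement entropy of the left half of the chain."""
    m = state.reshape((2**(n // 2), 2**(n - n // 2)))
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

n, depth = 8, 20
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # all qubits start in |0>

for t in range(depth):
    # brick-wall pattern: even bonds on even layers, odd bonds on odd layers
    for i in range(t % 2, n - 1, 2):
        state = apply_two_qubit_gate(state, haar_unitary(4), i, n)
    print(f"layer {t + 1}: entanglement entropy = {half_chain_entropy(state, n):.3f}")
```

Averaging such runs over many random gate realizations is what allows statements about the universal growth of entanglement to be extracted from the ensemble.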
[ { "math_id": 0, "text": "t \\in \\mathbb{Z}" }, { "math_id": 1, "text": "U_t \\equiv U(t; t-1)" }, { "math_id": 2, "text": "|\\psi(t)\\rangle=U_t |\\psi(t-1)\\rangle" }, { "math_id": 3, "text": "t=0" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "U(t; 0) = U_t U_{t-1} \\cdots U_3 U_2 U_1" }, { "math_id": 6, "text": "u_{\\tau, x}" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "\\tau" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "d" }, { "math_id": 11, "text": "n = 8" }, { "math_id": 12, "text": "d = 4\n" }, { "math_id": 13, "text": "u_{\\tau,x}" }, { "math_id": 14, "text": "U(t; 0)" }, { "math_id": 15, "text": "\\left\\{ a_1 a_2 \\cdots a_L \\right\\}" }, { "math_id": 16, "text": "\\left\\{ b_1 b_2 \\cdots b_L \\right\\}" }, { "math_id": 17, "text": "\\langle a_1 a_2 \\cdots a_L | U(t; 0) | b_1 b_2 \\cdots b_L \\rangle." }, { "math_id": 18, "text": "\\textbf{m} = \\left\\{ m_1, m_2, \\cdots, m_M \\right\\}" }, { "math_id": 19, "text": "\\textbf{m}" } ]
https://en.wikipedia.org/wiki?curid=73693034
737087
Toric variety
In algebraic geometry, a toric variety or torus embedding is an algebraic variety containing an algebraic torus as an open dense subset, such that the action of the torus on itself extends to the whole variety. Some authors also require it to be normal. Toric varieties form an important and rich class of examples in algebraic geometry, which often provide a testing ground for theorems. The geometry of a toric variety is fully determined by the combinatorics of its associated fan, which often makes computations far more tractable. For a certain special, but still quite general class of toric varieties, this information is also encoded in a polytope, which creates a powerful connection of the subject with convex geometry. Familiar examples of toric varieties are affine space, projective spaces, products of projective spaces and bundles over projective space. Toric varieties from tori. The original motivation to study toric varieties was to study torus embeddings. Given the algebraic torus "T", the group of characters Hom("T",Cx) forms a lattice. Given a collection of points "A", a subset of this lattice, each point determines a map to C and thus the collection determines a map to C|A|. By taking the Zariski closure of the image of such a map, one obtains an affine variety. If the collection of lattice points "A" generates the character lattice, this variety is a torus embedding. In similar fashion one may produce a parametrized projective toric variety, by taking the projective closure of the above map, viewing it as a map into an affine patch of projective space. Given a projective toric variety, observe that we may probe its geometry by one-parameter subgroups. Each one parameter subgroup, determined by a point in the lattice, dual to the character lattice, is a punctured curve inside the projective toric variety. Since the variety is compact, this punctured curve has a unique limit point. Thus, by partitioning the one-parameter subgroup lattice by the limit points of punctured curves, we obtain a lattice fan, a collection of polyhedral rational cones. The cones of highest dimension correspond precisely to the torus fixed points, the limits of these punctured curves. The toric variety of a fan. Suppose that "N" is a finite-rank free abelian group. A strongly convex rational polyhedral cone in "N" is a convex cone (of the real vector space of "N") with apex at the origin, generated by a finite number of vectors of "N", that contains no line through the origin. These will be called "cones" for short. For each cone σ its affine toric variety "U"σ is the spectrum of the monoid algebra of the dual cone. A fan is a collection of cones closed under taking intersections and faces. The toric variety of a fan is given by taking the affine toric varieties of its cones and gluing them together by identifying "U"σ with an open subvariety of "U"τ whenever σ is a face of τ. Conversely, every fan of strongly convex rational cones has an associated toric variety. The fan associated with a toric variety condenses some important data about the variety. For example, a variety is smooth if every cone in its fan can be generated by a subset of a basis for the free abelian group "N". Morphisms of toric varieties. Suppose that Δ1 and Δ2 are fans in lattices "N"1 and "N"2. If "f" is a linear map from "N"1 to "N"2 such that the image of every cone of Δ1 is contained in a cone of Δ2, then "f" induces a morphism "f"* between the corresponding toric varieties. 
This map "f"* is proper if and only if the preimage of |Δ2| under the map "f" is |Δ1|, where |Δ| is the underlying space of a fan Δ given by the union of its cones. Resolution of singularities. A toric variety is nonsingular if its cones of maximal dimension are generated by a basis of the lattice. This implies that every toric variety has a resolution of singularities given by another toric variety, which can be constructed by subdividing the maximal cones into cones of nonsingular toric varieties. The toric variety of a convex polytope. The fan of a rational convex polytope in "N" consists of the cones over its proper faces. The toric variety of the polytope is the toric variety of its fan. A variation of this construction is to take a rational polytope in the dual of "N" and take the toric variety of its polar set in "N". The toric variety has a map to the polytope in the dual of "N" whose fibers are topological tori. For example, the complex projective plane CP2 may be represented by three complex coordinates satisfying formula_0 where the sum has been chosen to account for the real rescaling part of the projective map, and the coordinates must be moreover identified by the following U(1) action: formula_1 The approach of toric geometry is to write formula_2 The coordinates formula_3 are non-negative, and they parameterize a triangle because formula_4 that is, formula_5 The triangle is the toric base of the complex projective plane. The generic fiber is a two-torus parameterized by the phases of formula_6; the phase of formula_7 can be chosen real and positive by the formula_8 symmetry. However, the two-torus degenerates into three different circles on the boundary of the triangle i.e. at formula_9 or formula_10 or formula_11 because the phase of formula_12 becomes inconsequential, respectively. The precise orientation of the circles within the torus is usually depicted by the slope of the line intervals (the sides of the triangle, in this case). Relation to mirror symmetry. The idea of toric varieties is useful for mirror symmetry because an interpretation of certain data of a fan as data of a polytope leads to a combinatorial construction of mirror manifolds.
[ { "math_id": 0, "text": "|z_1|^2+|z_2|^2+|z_3|^2 = 1 , \\,\\!" }, { "math_id": 1, "text": "(z_1,z_2,z_3)\\approx e^{i\\phi} (z_1,z_2,z_3) . \\,\\!" }, { "math_id": 2, "text": "(x,y,z) = (|z_1|^2,|z_2|^2,|z_3|^2) . \\,\\!" }, { "math_id": 3, "text": "x,y,z" }, { "math_id": 4, "text": "x+y+z=1 ; \\,\\! " }, { "math_id": 5, "text": "\\quad z=1-x-y . \\,\\!" }, { "math_id": 6, "text": "z_1,z_2" }, { "math_id": 7, "text": "z_3" }, { "math_id": 8, "text": "U(1)" }, { "math_id": 9, "text": "x=0" }, { "math_id": 10, "text": "y=0" }, { "math_id": 11, "text": "z=0" }, { "math_id": 12, "text": "z_1,z_2,z_3" } ]
https://en.wikipedia.org/wiki?curid=737087
7371308
Graphical timeline from Big Bang to Heat Death
Visual representation of the universe's past, present, and future This is the timeline of the Universe from the Big Bang to the Heat Death scenario. The different eras of the universe are shown. The heat death will occur in around 1.7×10^106 years, if protons decay. Timelines. If protons decay: If protons do not decay: Usually the logarithmic scale is used for such timelines, but it compresses the most interesting Stelliferous Era too much, as this example shows. Therefore, a double-logarithmic scale "s" ("s*100" in the graphics) is used instead. The minimum of it is only 1, not 0 as needed, and the negative outputs for inputs smaller than 10 are useless. Therefore, the time from 0.1 to 10 years is collapsed to a single point 0, but that does not matter in this case because nothing special happens in the history of the universe during that time. formula_0 The seconds in the timescale have been converted to years by formula_1 using the Julian year.
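The piecewise scale formula_0 and the Julian-year conversion formula_1 are simple enough to implement directly; the following sketch is not part of the source and only restates them in code, with two sample inputs chosen for illustration.

```python
import math

def double_log_scale(year: float) -> float:
    """Double-logarithmic scale s used on the timeline (formula_0 above)."""
    if year > 10:
        return math.log10(math.log10(year))
    if year >= 0.1:
        return 0.0  # the interval from 0.1 to 10 years is collapsed to the point 0
    return -math.log10(-math.log10(year))

def seconds_to_years(seconds: float) -> float:
    """Convert seconds to years using the Julian year of 31,557,600 s (formula_1)."""
    return seconds / 31_557_600

print(double_log_scale(13.8e9))   # present age of the universe: s is about 1.006
print(double_log_scale(1.7e106))  # heat death if protons decay: s is about 2.026
```

See also. &lt;templatestyles src="Div col/styles.css"/&gt;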
[ { "math_id": 0, "text": " s =\n \\begin{cases}\n \\log_{10} \\log_{10} year & \\mbox{if } year > 10 \\mbox{ , corresponding to } year = 10^{10^{s}} \\\\\n 0 & \\mbox{if } 0.1 \\le year \\le 10 \\\\\n -\\log_{10} (-\\log_{10} year) & \\mbox{if } year < 0.1 \\mbox{ , corresponding to } year = 10^{-10^{-s}}\n \\end{cases}\n" }, { "math_id": 1, "text": "second / 31 557 600" } ]
https://en.wikipedia.org/wiki?curid=7371308
73713333
Neural scaling law
Law in machine learning In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, and training cost. Introduction. In general, a neural model can be characterized by 4 parameters: size of the model, size of the training dataset, cost of training, error rate after training. Each of these four variables can be precisely defined into a real number, and they are empirically found to be related by simple statistical laws, called "scaling laws". These are usually written as formula_0 (number of parameters, dataset size, computing cost, loss). Size of the model. In most cases, the size of the model is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models. In sparse models, during every inference, only a fraction of the parameters are used. In comparison, most other kinds of neural networks, such as Transformer networks, always use all their parameters during every inference. Size of the training dataset. The size of the training dataset is usually quantified by the number of data points it contains. Larger training datasets are typically preferred as they provide a richer and more diverse source of information for the model to learn from. This in turn can lead to improved generalization performance when the model is applied to unseen data. However, increasing the size of the training dataset also increases the computational resources and time required for model training. With the "pretrain, then finetune" method used in most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes would have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset. In some cases, a small amount of high quality data suffices for finetuning, and more data does not improve performance. Cost of training. The cost of training is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required to train the model). It's important to note that the cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware like GPUs or TPUs. The cost of training a neural model is a function of several factors including the size of the model, the size of the training dataset, the complexity of the training algorithm, and the computational resources available. In particular, doubling the training dataset does not necessarily double the cost of training, because one may train the model for several times over the same dataset (each being an "epoch"). Performance. The performance of a neural model is evaluated based on its ability to accurately predict the output given the input data. Common metrics for evaluating model performance include: Performance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set. Examples. (Hestness, Narang, et al, 2017). The 2017 paper is a common reference point for neural scaling laws fitted by statistical analysis on experimental data. 
Previous works before the 2000s, as cited in the paper, were either theoretical or orders of magnitude smaller in scale. Whereas previous works generally found the scaling exponent to scale like formula_1, with formula_2, the paper found that formula_3. Of the factors they varied, only task can change the exponent formula_4. Changing the architecture, optimizers, regularizers, and loss functions would only change the proportionality factor, not the exponent. For example, for the same task, one architecture might have formula_5 while another might have formula_6. They also found that for a given architecture, the number of parameters necessary to reach the lowest levels of loss, given a fixed dataset size, grows like formula_7 for another exponent formula_8. They studied machine translation with LSTM (formula_9), generative language modelling with LSTM (formula_10), ImageNet classification with ResNet (formula_11), and speech recognition (formula_12). (Henighan, Kaplan, et al, 2020). A 2020 analysis studied statistical relations between formula_13 over a wide range of values and found similar scaling laws, over the range of formula_14, formula_15, and over multiple modalities (text, video, image, text to image, etc.). In particular, the scaling laws it found are (Table 1 of ): The scaling law of formula_37 was confirmed during the training of GPT-3 (Figure 3.1). Chinchilla scaling (Hoffmann, et al, 2022). One particular scaling law ("Chinchilla scaling") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have: formula_38 where the variables are formula_26, the cost of training the model in FLOPs; formula_22, the number of parameters in the model; formula_17, the number of tokens in the training set; and formula_39, the average negative log-likelihood loss per token achieved by the trained model on the test dataset (here formula_40 represents the loss of an ideal generative process on the test data, formula_41 captures the underperformance due to the model having only formula_22 parameters, and formula_42 captures the underperformance due to the model being trained on only formula_17 tokens); and the statistical parameters are formula_43, meaning that it costs 6 FLOPs per parameter to train on one token, and formula_44. Besiroglu et al. claim that the statistical estimation is slightly off, and that the parameters should be formula_45. The statistical laws were fitted over experimental data with formula_46. Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed formula_26, we can uniquely solve for the choice of all 4 variables that minimizes formula_39. This provides us with the optimal formula_47 for any fixed formula_26: formula_48 Plugging in the numerical values, we obtain the "Chinchilla efficient" model size and training dataset size, as well as the test loss achievable: formula_49 Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on. There are other estimates for "Chinchilla efficient" model size and training dataset size. The above is based on a statistical model of formula_50. One can also directly fit a statistical law for formula_47 without going through the detour, for which one obtains: formula_51 or as tabulated: In simpler terms, the Chinchilla scaling law for training Transformer language models suggests that when given an increased budget (in FLOPs), to achieve compute-optimal, the number of model parameters (N) and the number of tokens for training the model (D) should scale in approximately equal proportions. This conclusion differs from the previous scaling law for neural language models, which states that N should be scaled faster than D. The discrepancy arises from setting different cycle lengths for cosine learning rate schedulers. In estimating the Chinchilla scaling, the authors set the cycle length to be the same as the training steps, as experimental results indicate that larger cycles overestimate the loss of the models.
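The compute-optimal allocation quoted above (formula_48, with the fitted constants in formula_44 and the approximation formula_43) can be evaluated directly. The following sketch is not from the source; the example budget passed to the function is an arbitrary choice for illustration.

```python
# Chinchilla compute-optimal allocation, using the fitted constants quoted above
# (A, B, alpha, beta, L_0) and the approximation C = 6 * N * D.
A, B, alpha, beta, L0 = 406.4, 410.7, 0.34, 0.28, 1.69

def chinchilla_optimal(C_flops: float):
    """Return (N_opt, D_opt, predicted loss) for a training budget C in FLOPs."""
    G = (alpha * A / (beta * B)) ** (1.0 / (alpha + beta))
    a, b = beta / (alpha + beta), alpha / (alpha + beta)
    N = G * (C_flops / 6.0) ** a          # optimal parameter count
    D = (1.0 / G) * (C_flops / 6.0) ** b  # optimal number of training tokens
    L = A / N**alpha + B / D**beta + L0   # loss predicted by the fitted law
    return N, D, L

# Example: an assumed budget of 1e23 FLOPs.
N, D, L = chinchilla_optimal(1e23)
print(f"N ~ {N:.3g} parameters, D ~ {D:.3g} tokens, predicted loss ~ {L:.3f}")
```

Beyond Chinchilla scaling. 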
As Chinchilla scaling has been the reference point for many large-scale training runs, there has been a concurrent effort to go "beyond Chinchilla scaling", meaning to modify some of the training pipeline in order to obtain the same loss with less effort, or to deliberately train for longer than what is "Chinchilla optimal". Usually, the goal is to make the scaling law exponent larger, which means the same loss can be reached with much less compute. For instance, filtering data can make the scaling law exponent larger. Another strand of research studies how to deal with limited data, as according to Chinchilla scaling laws, the training dataset size for the largest language models already approaches what is available on the internet. One study found that augmenting the dataset with a mix of "denoising objectives" constructed from the dataset improves performance. Another study considers optimal scaling when all available data is already exhausted (such as in rare languages), so one must train for multiple epochs over the same dataset (whereas Chinchilla scaling requires only one epoch). The Phi series of small language models were trained on textbook-like data generated by large language models, for which data is limited only by the amount of compute available. Chinchilla optimality was defined as "optimal for training compute", whereas in actual production-quality models, there will be a lot of inference after training is complete. "Overtraining" during training means better performance during inference. LLaMA models were overtrained for this reason. Subsequent studies discovered scaling laws in the overtraining regime, for dataset sizes up to 32x more than Chinchilla-optimal. Broken Neural Scaling Laws (BNSL). A 2022 analysis found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form: formula_52 in which formula_20 refers to the quantity being scaled (i.e. formula_26, formula_22, formula_17, number of training steps, number of inference steps, or model input size) and formula_53 refers to the "downstream" (or upstream) performance evaluation metric of interest (e.g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parameters formula_54 are found by statistical fitting. On a log–log plot, when formula_55 is not too large and formula_56 is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; the formula_57 transitions between the segments are called "breaks", hence the name "Broken Neural Scaling Laws (BNSL)". The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent). 
The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include ResNets, Transformers, MLPs, Recurrent Neural Networks, Convolutional Neural Networks, Graph Neural Networks, U-Nets, Encoder-Decoder (and encoder-only) (and Decoder-only) Models, Ensembles (and Non-Ensembles), MoE (Mixture of Experts) (and Non-MoE) Models, and Sparse Pruned (and Non-Sparse Unpruned) Models. Other examples. Vision transformers. Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers, with parameter counts formula_58, on image sets of sizes formula_59, using compute budgets formula_60 (in units of TPUv3-core-days). After training the model, it is finetuned on the ImageNet training set. Let formula_39 be the error probability of the finetuned model classifying the ImageNet test set. They found formula_61. Neural machine translation. Ghorbani, Behrooz et al. studied scaling laws for neural machine translation (specifically, English as source, and German as target) in encoder-decoder Transformer models, trained until convergence on the same datasets (thus they did not fit scaling laws for compute cost formula_26 or dataset size formula_17). They varied the total parameter count formula_62, split between the encoder and decoder as formula_63 with formula_64. Among their findings, the test loss is well fitted by a law of the form formula_65 with statistical parameters formula_66, and the optimal division of parameters between encoder and decoder satisfies formula_67; they further found that the behavior of the fitted law, in particular the limiting loss formula_68, depends on whether the source or the target side of the training data consists of natural rather than translated sentences. The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit. Another study trained Transformers for machine translation with sizes formula_69 on dataset sizes formula_70. They found the Kaplan et al (2020) scaling law applied to machine translation: formula_71. They also found the BLEU score scaling as formula_72. Transfer learning. Hernandez, Danny et al. studied scaling laws for transfer learning in language models. They trained a family of Transformers in three ways: pretraining on English text, pretraining on a mix of English text and non-Python code, and training on Python code from scratch. The idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter count formula_22, and after being finetuned on formula_73 Python tokens, it achieves some loss formula_39. We say that its "transferred token count" is formula_74, if another model with the same formula_22 achieves the same formula_39 after training on formula_75 Python tokens. They found formula_76 for pretraining on English text, and formula_77 for pretraining on English and non-Python code. See also. <templatestyles src="Div col/styles.css"/>
[ { "math_id": 0, "text": "N, D, C, L" }, { "math_id": 1, "text": "L \\propto D^{-\\alpha} " }, { "math_id": 2, "text": "\\alpha \\in \\{0.5, 1, 2\\} " }, { "math_id": 3, "text": "\\alpha \\in [0.07, 0.35]" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "L = 1000 D^{-0.3} " }, { "math_id": 6, "text": "L = 500 D^{-0.3}" }, { "math_id": 7, "text": "N \\propto D^{\\beta}" }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "\\alpha \\sim 0.13 " }, { "math_id": 10, "text": "\\alpha \\in [0.06, 0.09], \\beta \\approx 0.7" }, { "math_id": 11, "text": "\\alpha \\in [0.3, 0.5], \\beta \\approx 0.6" }, { "math_id": 12, "text": "\\alpha \\approx 0.3" }, { "math_id": 13, "text": "C, N, D, L" }, { "math_id": 14, "text": "N \\in [10^3, 10^9]" }, { "math_id": 15, "text": "C\\in [10^{12}, 10^{21}]" }, { "math_id": 16, "text": "C, N" }, { "math_id": 17, "text": "D" }, { "math_id": 18, "text": "D = C/6N" }, { "math_id": 19, "text": "L = L_0 + \\left( \\frac{x_0}{x}\\right)^\\alpha" }, { "math_id": 20, "text": "x" }, { "math_id": 21, "text": "L_0, x_0, \\alpha" }, { "math_id": 22, "text": "N" }, { "math_id": 23, "text": "0.037 " }, { "math_id": 24, "text": "0.24" }, { "math_id": 25, "text": "\\alpha = 0.34 " }, { "math_id": 26, "text": "C" }, { "math_id": 27, "text": "0.048 " }, { "math_id": 28, "text": "0.19" }, { "math_id": 29, "text": "\\beta = 0.28 " }, { "math_id": 30, "text": "N_{opt}(C) =\\left(\\frac{C}{5\\times 10^{-12}\\text{petaFLOP-day}}\\right)^{0.7} = 9.0\\times 10^{-7} C^{0.7}" }, { "math_id": 31, "text": "9.0 \\times 10^{-7}" }, { "math_id": 32, "text": "0.7" }, { "math_id": 33, "text": "0.64" }, { "math_id": 34, "text": "0.75" }, { "math_id": 35, "text": "\\approx 0.5" }, { "math_id": 36, "text": "D_{opt}(C) \\propto N_{opt}(C)^{0.4} \\propto C^{0.28}" }, { "math_id": 37, "text": "L = L_0 + (C_0/C)^{0.048}" }, { "math_id": 38, "text": "\\begin{cases}\nC = C_0 ND\\\\\nL = \\frac{A}{N^\\alpha} + \\frac{B}{D^{\\beta}} + L_0\n\\end{cases}" }, { "math_id": 39, "text": "L" }, { "math_id": 40, "text": "L_0" }, { "math_id": 41, "text": "\\frac{A}{N^\\alpha}" }, { "math_id": 42, "text": "\\frac{B}{D^\\beta}" }, { "math_id": 43, "text": " C_0 = 6" }, { "math_id": 44, "text": "\\alpha = 0.34, \\beta = 0.28, A = 406.4, B = 410.7, L_0 = 1.69" }, { "math_id": 45, "text": "\\alpha = 0.35, \\beta = 0.37, A = 482.01, B = 2085.43, L_0 = 1.82" }, { "math_id": 46, "text": "N\\in [7\\times 10^7, 1.6 \\times 10^{10}], D \\in [5\\times 10^9, 5\\times 10^{11}], C \\in [10^{18}, 10^{24}]" }, { "math_id": 47, "text": "D_{opt}(C), N_{opt}(C)" }, { "math_id": 48, "text": "N_{o p t}(C)=G\\left(\\frac{C}{6}\\right)^a, \\quad D_{o p t}(C)=G^{-1}\\left(\\frac{C}{6}\\right)^b, \\quad \\text { where } \\quad G=\\left(\\frac{\\alpha A}{\\beta B}\\right)^{\\frac{1}{\\alpha+\\beta}}, \\quad a=\\frac{\\beta}{\\alpha+\\beta} \\text {, and } b=\\frac{\\alpha}{\\alpha+\\beta} \\text {. }" }, { "math_id": 49, "text": "\\begin{cases}\nN_{opt}(C) = 0.6 \\; C^{0.45} \\\\\nD_{opt}(C) = 0.3 \\; C^{0.55} \\\\\nL_{opt}(C) = 1070 \\; C^{-0.154} + 1.7\n\\end{cases}" }, { "math_id": 50, "text": "L = \\frac{A}{N^\\alpha} + \\frac{B}{D^{\\beta}} + L_0" }, { "math_id": 51, "text": "\\begin{cases}\nN_{opt}(C) = 0.1 \\; C^{0.5}\\\\\nD_{opt}(C) = 1.7 \\; C^{0.5}\n\\end{cases}" }, { "math_id": 52, "text": "y = a + \\bigg(bx^{-c_0}\\bigg) \\prod_{i=1}^n \\left(1 + \\left(\\frac{x}{d_i}\\right)^{1/f_i}\\right)^{-c_i * f_i}" }, { "math_id": 53, "text": "y" }, { "math_id": 54, "text": "a, b, c_0, c_1 ... c_n, d_1 ... 
d_n, f_1 ... f_n" }, { "math_id": 55, "text": "f_i" }, { "math_id": 56, "text": "a" }, { "math_id": 57, "text": "n" }, { "math_id": 58, "text": "N\\in [5\\times 10^6, 2\\times 10^9]" }, { "math_id": 59, "text": "D \\in [3\\times 10^{7}, 3\\times 10^{9}]" }, { "math_id": 60, "text": "C\\in [0.2, 10^4] " }, { "math_id": 61, "text": "\\min_{N, D} L = 0.09 + \\frac {0.26}{(C + 0.01)^{0.35}}" }, { "math_id": 62, "text": "N \\in [10^8, 3.5 \\times 10^9]" }, { "math_id": 63, "text": "N_E, N_D" }, { "math_id": 64, "text": "N = N_E + N_D" }, { "math_id": 65, "text": "L\\left(N_e, N_d\\right)=\\alpha\\left(\\frac{\\bar{N}_e}{N_e}\\right)^{p_e}\\left(\\frac{\\bar{N}_d}{N_d}\\right)^{p_d}+L_{\\infty}" }, { "math_id": 66, "text": "\\alpha, p_e, p_d, L_{\\infty}, \\bar N_e, \\bar N_d" }, { "math_id": 67, "text": "N_d/N \\approx 0.55" }, { "math_id": 68, "text": "L_\\infty" }, { "math_id": 69, "text": "N \\in [4 \\times 10^5 , 5.6 \\times 10^7]" }, { "math_id": 70, "text": "D \\in [6\\times 10^5, 6 \\times 10^9]" }, { "math_id": 71, "text": "L(N, D)=\\left[\\left(\\frac{N_C}{N}\\right)^{\\frac{\\alpha_N}{\\alpha_D}}+\\frac{D_C}{D}\\right]^{\\alpha_D}" }, { "math_id": 72, "text": "BLEU \\approx C e^{-kL}" }, { "math_id": 73, "text": "D_F" }, { "math_id": 74, "text": "D_T" }, { "math_id": 75, "text": "D_F + D_T" }, { "math_id": 76, "text": "D_T=1.9 e 4\\left(D_F\\right)^{.18}(N)^{.38}" }, { "math_id": 77, "text": "D_T=2.1 e 5\\left(D_F\\right)^{.096}(N)^{.38}" } ]
https://en.wikipedia.org/wiki?curid=73713333
737155
Complex projective plane
In mathematics, the complex projective plane, usually denoted P2(C) or CP2, is the two-dimensional complex projective space. It is a complex manifold of complex dimension 2, described by three complex coordinates formula_0 where, however, the triples differing by an overall rescaling are identified: formula_1 That is, these are homogeneous coordinates in the traditional sense of projective geometry. Topology. The Betti numbers of the complex projective plane are 1, 0, 1, 0, 1, 0, 0, ... The middle dimension 2 is accounted for by the homology class of the complex projective line, or Riemann sphere, lying in the plane. The nontrivial homotopy groups of the complex projective plane are formula_2. The fundamental group is trivial and all other higher homotopy groups are those of the 5-sphere, i.e. torsion. Algebraic geometry. In birational geometry, a complex rational surface is any algebraic surface birationally equivalent to the complex projective plane. It is known that any non-singular rational variety is obtained from the plane by a sequence of blowing up transformations and their inverses ('blowing down') of curves, which must be of a very particular type. As a special case, a non-singular complex quadric in P3 is obtained from the plane by blowing up two points to curves, and then blowing down the line through these two points; the inverse of this transformation can be seen by taking a point "P" on the quadric "Q", blowing it up, and projecting onto a general plane in P3 by drawing lines through "P". The group of birational automorphisms of the complex projective plane is the Cremona group. Differential geometry. As a Riemannian manifold, the complex projective plane is a 4-dimensional manifold whose sectional curvature is quarter-pinched, but not strictly so. That is, it attains "both" bounds and thus evades being a sphere, as the sphere theorem would otherwise require. The rival normalisations are for the curvature to be pinched between 1/4 and 1; alternatively, between 1 and 4. With respect to the former normalisation, the imbedded surface defined by the complex projective line has Gaussian curvature 1. With respect to the latter normalisation, the imbedded real projective plane has Gaussian curvature 1. An explicit demonstration of the Riemann and Ricci tensors is given in the "n"=2 subsection of the article on the Fubini-Study metric.
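As a concrete illustration of the identification of rescaled triples, the following Python sketch (an assumed, non-standard helper) normalizes a coordinate triple so that equivalent triples produce the same representative:
<syntaxhighlight lang="python">
# Two triples represent the same point of CP^2 exactly when their canonical
# representatives agree; here the last nonzero coordinate is scaled to 1.
def normalize(z1, z2, z3):
    coords = (z1, z2, z3)
    if not any(coords):
        raise ValueError("(0, 0, 0) does not define a point of CP^2")
    pivot = next(c for c in reversed(coords) if c != 0)
    return tuple(c / pivot for c in coords)

# normalize(1 + 2j, 3j, 2) and normalize(2 + 4j, 6j, 4) give the same
# representative, since the two triples differ by the rescaling lambda = 2.
</syntaxhighlight>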
[ { "math_id": 0, "text": "(Z_1,Z_2,Z_3) \\in \\mathbf{C}^3,\\qquad (Z_1,Z_2,Z_3)\\neq (0,0,0)" }, { "math_id": 1, "text": "(Z_1,Z_2,Z_3) \\equiv (\\lambda Z_1,\\lambda Z_2, \\lambda Z_3);\\quad \\lambda\\in \\mathbf{C},\\qquad \\lambda \\neq 0." }, { "math_id": 2, "text": "\\pi_2=\\pi_5=\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=737155
737164
Large diffeomorphism
Class of diffeomorphism In mathematics and theoretical physics, a large diffeomorphism is an equivalence class of diffeomorphisms under the equivalence relation where diffeomorphisms that can be continuously connected to each other are in the same equivalence class. For example, a two-dimensional real torus has an SL(2,Z) group of large diffeomorphisms by which the one-cycles formula_0 of the torus are transformed into their integer linear combinations. This group of large diffeomorphisms is called the modular group. More generally, for a surface "S", the structure of self-homeomorphisms up to homotopy is known as the mapping class group. It is known (for compact, orientable "S") that this is isomorphic with the outer automorphism group of the fundamental group of "S". This is consistent with the genus 1 case, stated above, if one takes into account that then the fundamental group is "Z"2, on which the modular group acts as automorphisms (as a subgroup of index 2 in all automorphisms, since the orientation may also be reversed, by a transformation with determinant −1).
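A minimal sketch of this action in code, with an integer homology class m·a + n·b represented by its coefficient pair (m, n) (an illustrative convention, not standard notation):
<syntaxhighlight lang="python">
# Action of an SL(2, Z) element on the one-cycles of the torus.
def act(matrix, cycle):
    (p, q), (r, s) = matrix
    assert p * s - q * r == 1, "matrix must lie in SL(2, Z)"
    m, n = cycle
    return (p * m + q * n, r * m + s * n)

# The standard generator ((1, 1), (0, 1)) sends the cycle b = (0, 1) to
# a + b = (1, 1), a transformation that cannot be realized by any
# diffeomorphism isotopic to the identity.
</syntaxhighlight>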
[ { "math_id": 0, "text": "a,b" } ]
https://en.wikipedia.org/wiki?curid=737164
7371807
Superpattern
In the mathematical study of permutations and permutation patterns, a superpattern or universal permutation is a permutation that contains all of the patterns of a given length. More specifically, a "k"-superpattern contains all possible patterns of length "k". Definitions and example. If π is a permutation of length "n", represented as a sequence of the numbers from 1 to "n" in some order, and "s" = "s"1, "s"2, ..., "s""k" is a subsequence of π of length "k", then "s" corresponds to a unique "pattern", a permutation of length "k" whose elements are in the same order as "s". That is, for each pair "i" and "j" of indexes, the "i"-th element of the pattern for "s" should be less than the "j"-th element if and only if the "i"-th element of "s" is less than the "j"-th element. Equivalently, the pattern is order-isomorphic to the subsequence. For instance, if π is the permutation 25314, then it has ten subsequences of length three, forming the following patterns: the subsequences 2,5,3 and 2,5,4 form the pattern 132; 2,5,1 and 2,3,1 form 231; 2,3,4 forms 123; 2,1,4 and 3,1,4 form 213; 5,3,4 and 5,1,4 form 312; and 5,3,1 forms 321. A permutation π is called a "k"-superpattern if its patterns of length "k" include all of the length-"k" permutations. For instance, the length-3 patterns of 25314 include all six of the length-3 permutations, so 25314 is a 3-superpattern. No 3-superpattern can be shorter, because any two subsequences that form the two patterns 123 and 321 can only intersect in a single position, so five symbols are required just to cover these two patterns. Length bounds. Arratia (1999) introduced the problem of determining the length of the shortest possible "k"-superpattern. He observed that there exists a superpattern of length "k"2 (given by the lexicographic ordering on the coordinate vectors of points in a square grid) and also observed that, for a superpattern of length "n", it must be the case that it has at least as many subsequences as there are patterns. That is, it must be true that formula_0, from which it follows by Stirling's approximation that "n" ≥ "k"2/"e"2, where "e" ≈ 2.71828 is Euler's number. This lower bound was later improved very slightly by Chroman, Kwan, and Singhal (2021), who increased it to 1.000076"k"2/"e"2, disproving Arratia's conjecture that the "k"2/"e"2 lower bound was tight. The upper bound of "k"2 on superpattern length proven by Arratia is not tight. After intermediate improvements, Miller (2009) proved that there is a "k"-superpattern of length at most "k"("k" + 1)/2 for every "k". This bound was later improved by Engen and Vatter (2021), who lowered it to ⌈("k"2 + 1)/2⌉. Eriksson et al. conjectured that the true length of the shortest "k"-superpattern is asymptotic to "k"2/2. However, this is in contradiction with a conjecture of Alon on random superpatterns described below. Random superpatterns. Researchers have also studied the length needed for a sequence generated by a random process to become a superpattern. Arratia observes that, because the longest increasing subsequence of a random permutation has length (with high probability) approximately 2√"n", it follows that a random permutation must have length at least "k"2/4 to have high probability of being a "k"-superpattern: permutations shorter than this will likely not contain the identity pattern. He attributes to Alon the conjecture that, for any ε > 0, with high probability, random permutations of length "k"2/(4 − ε) will be "k"-superpatterns. References. <templatestyles src="Reflist/styles.css" />
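The definitions above can be checked by brute force for small cases. The following Python sketch (illustrative function names; practical only for small "k") verifies, for example, that 25314 is a 3-superpattern:
<syntaxhighlight lang="python">
from itertools import combinations, permutations

def pattern_of(subseq):
    """Return the pattern realized by a subsequence of distinct values, as a tuple of ranks."""
    ranks = sorted(subseq)
    return tuple(ranks.index(v) + 1 for v in subseq)

def is_superpattern(perm, k):
    """True if every permutation of length k occurs as the pattern of some subsequence of perm."""
    found = {pattern_of(s) for s in combinations(perm, k)}
    return all(p in found for p in permutations(range(1, k + 1)))

# is_superpattern((2, 5, 3, 1, 4), 3) returns True, matching the 25314 example above;
# no permutation of length 4 passes this test, since a 3-superpattern needs length >= 5.
</syntaxhighlight>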
[ { "math_id": 0, "text": "\\tbinom{n}{k}\\ge k!" } ]
https://en.wikipedia.org/wiki?curid=7371807
7373748
Nonoblique correction
In four-fermion scattering processes of particle physics, a nonoblique correction, also called a direct correction, refers to a particular type of radiative correction in the electroweak sector of the Standard Model. These corrections are being studied at the CERN LEP collider. Together with the oblique corrections, "nonoblique corrections" can be used to constrain models of physics beyond the Standard Model. Classes. There are three classes of radiative corrections to these processes: vacuum polarization corrections, vertex corrections, and box corrections. The vertex and box corrections, which depend on the identity of the initial and final state fermions, are referred to as the non-oblique corrections. The vacuum polarization corrections are referred to as oblique corrections, since they only affect the mixing and propagation of the gauge bosons and they do not depend on which type of fermions appear in the initial or final states. Examples. An example of a vertex correction is the nonuniversality (flavor dependence) of the couplings of the quarks and leptons to the charged and neutral weak currents. Another example is the anomalous magnetic dipole moment. In order to affect the nonoblique corrections, particles must couple directly to the external fermions. Such couplings are expected to be suppressed in most cases, with one exception being the formula_0 vertex.
[ { "math_id": 0, "text": "Z b \\bar{b}" } ]
https://en.wikipedia.org/wiki?curid=7373748
7374132
De Branges space
In mathematics, a de Branges space (sometimes written De Branges space) is a concept in functional analysis and is constructed from a de Branges function. The concept is named after Louis de Branges who proved numerous results regarding these spaces, especially as Hilbert spaces, and used those results to prove the Bieberbach conjecture. De Branges functions. A Hermite-Biehler function, also known as a de Branges function, is an entire function "E" from formula_0 to formula_0 that satisfies the inequality formula_1, for all "z" in the upper half of the complex plane formula_2. Definition 1. Given a Hermite-Biehler function "E", the de Branges space "B"("E") is defined as the set of all entire functions "F" such that formula_3, where formula_4 and formula_5 denotes the Hardy space on the upper half of the complex plane. Definition 2. A de Branges space can also be defined as the set of all entire functions "F" satisfying the two conditions formula_6 and formula_7. Definition 3. There exists also an axiomatic description, useful in operator theory. As Hilbert spaces. Given a de Branges space "B"("E"), define the scalar product: formula_8 A de Branges space with such a scalar product can be proven to be a Hilbert space.
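As a simple illustration of the Hermite-Biehler inequality, one can check numerically that, for instance, the entire function "E"("z") = "z" + "i" satisfies it at sample points of the upper half-plane (a hedged sketch, not a proof; the example function is an assumption made for illustration):
<syntaxhighlight lang="python">
# Numerical spot-check of |E(z)| > |E(conj(z))| on the upper half-plane.
def satisfies_hb_at(E, z):
    return abs(E(z)) > abs(E(z.conjugate()))

E = lambda z: z + 1j
print(all(satisfies_hb_at(E, complex(x, y))
          for x in (-2.0, 0.0, 3.0) for y in (0.5, 1.0, 4.0)))   # True
</syntaxhighlight>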
[ { "math_id": 0, "text": "\\Complex" }, { "math_id": 1, "text": "|E(z)| > |E(\\bar z)|" }, { "math_id": 2, "text": "\\Complex^+ = \\{z \\in \\Complex \\mid \\operatorname{Im}(z) > 0\\}" }, { "math_id": 3, "text": "F/E,F^{\\#}/E \\in H_2(\\Complex^+)" }, { "math_id": 4, "text": "F^{\\#}(z) = \\overline{F(\\bar z)}" }, { "math_id": 5, "text": "H_2(\\Complex^+)" }, { "math_id": 6, "text": "\\int_{\\Reals} |(F/E)(\\lambda)|^2 d\\lambda < \\infty " }, { "math_id": 7, "text": "|(F/E)(z)|,|(F^{\\#}/E)(z)| \\leq C_F(\\operatorname{Im}(z))^{(-1/2)}, \\forall z \\in \\Complex^+" }, { "math_id": 8, "text": "[F,G]=\\frac{1}{\\pi} \\int_{\\Reals} \\overline{F(\\lambda)} G(\\lambda) \\frac{d\\lambda}{|E(\\lambda)|^2}." } ]
https://en.wikipedia.org/wiki?curid=7374132
73741430
Landau-Mignotte bound
Bound on the coefficients of a factor polynomial In algebra, a Landau-Mignotte bound (sometimes only referred to as Mignotte's bound) is one of a family of inequalities concerning a univariate integer polynomial "f"("x") and one of its factors "h"("x"). A basic version states that the coefficients of "h"("x") are bounded independently of "h"("x") by an exponential expression involving only the degree and coefficients of "f"("x"), i.e. only depending on "f"("x"). It has applications in computer algebra where these bounds can give a priori estimates on the run time and complexity of algorithms. Basic version. For formula_0 such that formula_1 divides formula_2, denote by formula_3 resp. formula_4 the sum of the absolute values of the coefficients of formula_1 resp. formula_2 and let formula_5 be the degree of formula_2; then formula_6 Notation. formula_7 will be univariate complex polynomials which later will be restricted to be integer polynomials, i.e. in formula_8. Explicitly, formula_9 Here formula_10 are the degrees and the leading coefficients are formula_11. Define norms by considering the coefficients as vectors, explicitly formula_12 By the fundamental theorem of algebra formula_13 has formula_5 roots formula_14 (with multiplicity). Set the Mahler measure of formula_13 to be formula_15 Similarly define formula_16, formula_17, etc. Landau's inequality and other basic properties. Landau proved in 1905 a key inequality linking the Mahler measure of a polynomial to its Euclidean norm: formula_18 In general, norms obey the following inequalities formula_19 The Mahler measure satisfies formula_20 which for non-trivial integer polynomials implies formula_21. See also Lehmer's conjecture. The Mahler measure is multiplicative, i.e. if formula_22 then formula_23 Mignotte's bound. Mignotte used Landau's inequality in 1974 to prove a basic version of the following bounds in the notation introduced above. For complex polynomials in formula_24, if formula_25 divides formula_13 then formula_26 and individual coefficients obey the inequalities formula_27 If additionally formula_13 and formula_25 are integer polynomials in formula_8 then formula_28 and if formula_13 is additionally monic then even formula_29. In these cases one can simplify by omitting the fraction. Including products in the analysis, we have the following theorem. Let formula_30 such that formula_31 divides formula_13; then formula_32 formula_33 formula_34 formula_35 formula_36 Using Stirling's formula applied to binomial coefficients, we get asymptotically a slight improvement when using binomial coefficients formula_37 From the bounds on the individual coefficients one can deduce the following related bound. If formula_38 is reducible then it has a non-trivial factor formula_25 of degree formula_39 such that formula_40 Combining this with Stirling's formula to replace the binomial coefficients leads to more explicit versions. While the upper bounds that are independent of formula_25 and only depend on formula_13 are of great theoretical interest and aesthetic appeal, in practical applications one usually has information about the degree formula_41 of formula_25. This is why the sharper bounds that additionally depend on formula_41 are often more relevant. Sharpness of bounds. Cyclotomic polynomials. For formula_42 the cyclotomic polynomial formula_43 is an irreducible divisor of degree formula_44, Euler's totient function. In this case formula_45 and it is customary to denote formula_46. 
A result of Vaugn states that for infinitely many positive integers formula_5 we have formula_47 a superpolynomial bound in the degree formula_5. Comparing with Mignotte's bound and using Stirling's formula as well as bounds for Euler's totient function, we get for infinitely many formula_5 formula_48 This leaves a gap between Mignotte's upper bound and what is known to be attained through cyclotomic polynomials. Cyclotomic polynomials cannot close this gap, by a result of Bateman which states that for every formula_49 and all sufficiently large positive integers formula_5 we have formula_50 Also note that, despite the superpolynomial growth of Vaugn's lower bound, in practice the coefficients of formula_43 observed for concrete cyclotomic polynomials are far smaller than Mignotte's bound. A family of polynomials with exponential growth in the coefficients of its factors. Abbot gives the following example related to cyclotomic polynomials. Set formula_51 and consider for positive integers formula_52 formula_53 Note that the degrees are formula_54 resp. formula_55. Abbot shows that asymptotically for large formula_52 we have formula_56 Using Mignotte's bound in the version formula_57 we compare formula_58 Ignoring the root terms leads to formula_59 Abbot claims that an exhaustive search in low degrees suggests that this family of factorizations is close to extremal. While there is still an exponential gap between the example and Mignotte's bound, the example shows that exponential growth is the right order for such a general bound. Note that Abbot also compares Mignotte's bound with other types of bounds and gives examples where Mignotte's bound is best and examples where other bounds are better. Also note that, while the cyclotomic polynomials formula_43 from the previous section are irreducible factors, the factors formula_60 have many factors themselves. Abbot speculates: "The examples [...] compel any ideal “irreducible single factor bound” to grow with degree, though the rate of growth appears to be much slower than for single factor bounds valid for any (suitably scaled) factorization in formula_24. This suggests that such an ideal single factor bound could be very much smaller than the currently known ones." Generalizations. Usually the Mignotte bounds are only stated for complex or integer polynomials. They are equally valid for any subring formula_61, in particular when considering only monic polynomials for which formula_29. Any abstract number field and its ring of integers can be considered a subring of formula_62, however there can be multiple embeddings which are inequivalent with respect to absolute values. The Mignotte bounds are abstract and general enough that they hold independent of the chosen embedding. This may be taken as a hint that they are not as tight as possible in principle, as can indeed be seen from competing bounds that are sometimes better. Applications. In computer algebra, when doing effective computations with integer polynomials, the following strategy is often applied. One reduces a polynomial formula_13 modulo a suitable prime number formula_63 to get formula_64, solves a related problem over formula_65 instead of formula_66, which is often simpler, and finally uses Hensel lifting to transfer the result for formula_64 back to formula_13. Hensel lifting is an iterative process and it is in general not clear when to stop it. 
The Landau-Mignotte bounds can supply additional a priori information that makes it possible to give explicit bounds on how often Hensel lifting has to be iterated to recover the solution for formula_13 from a solution for formula_64. In particular this can be applied to factoring integer polynomials or to computing the gcd of integer polynomials. Although effective, this approach may not be the most efficient, as can be seen in the case of factoring. References. <templatestyles src="Reflist/styles.css" />
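As a small worked example of the basic version stated at the top, the following Python sketch evaluates the bound and compares it with an explicit factor, reusing the polynomials from Abbot's family above:
<syntaxhighlight lang="python">
# Basic version: if h divides f in Z[x], then ||h||_1 <= 2**deg(f) * ||f||_1.
def mignotte_bound_one_norm(f_coeffs):
    """f_coeffs lists the coefficients of f; returns the bound on ||h||_1 for any factor h."""
    degree = len(f_coeffs) - 1
    return 2 ** degree * sum(abs(c) for c in f_coeffs)

# Example: F(x) = -x**6 + 1 has the factor H(x) = x**3 + 2*x**2 + 2*x + 1 with
# ||H||_1 = 6, while mignotte_bound_one_norm([1, 0, 0, 0, 0, 0, -1]) == 2**6 * 2 == 128.
</syntaxhighlight>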
[ { "math_id": 0, "text": "f(x),h(x)\\in\\mathbb{Z}[x]" }, { "math_id": 1, "text": "h(x)" }, { "math_id": 2, "text": "f(x)" }, { "math_id": 3, "text": "\\|h\\|_{1}" }, { "math_id": 4, "text": "\\|f\\|_{1}" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\|h\\|_{1}\\leq 2^{n}\\|f\\|_{1}" }, { "math_id": 7, "text": "f,g,h\\in\\mathbb{C}[x]" }, { "math_id": 8, "text": "\\mathbb{Z}[x]" }, { "math_id": 9, "text": "\nf=\\sum\\limits_{i=0}^nf_ix^i,\\ \\ \\ \ng=\\sum\\limits_{i=0}^mg_ix^i,\\ \\ \\ \nh=\\sum\\limits_{i=0}^kh_ix^i.\n" }, { "math_id": 10, "text": "n,m,k" }, { "math_id": 11, "text": "f_n,g_m,h_k" }, { "math_id": 12, "text": "\n\\|f\\|_{\\infty}=\\max_{0\\leq i\\leq n}|f_i|,\\ \\ \\ \n\\|f\\|_{2}=\\left(\\sum\\limits_{i=0}^n|f_i|^2\\right)^{1/2},\\ \\ \\ \n\\|f\\|_{1}=\\sum\\limits_{i=0}^n|f_i|.\n" }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "z_1,z_2,\\ldots,z_n" }, { "math_id": 15, "text": "M(f)=|f_n|\\prod\\limits_{i=1}^n\\max\\{1,|z_i|\\}." }, { "math_id": 16, "text": "\\|g\\|_{2}" }, { "math_id": 17, "text": "M(h)" }, { "math_id": 18, "text": "M(f)\\leq\\|f\\|_{2}" }, { "math_id": 19, "text": "\\|f\\|_{\\infty}\\leq\\|f\\|_{2}\\leq\\|f\\|_{1}\\leq\\sqrt{n+1}\\|f\\|_{2}\\leq(n+1)\\|f\\|_{\\infty}." }, { "math_id": 20, "text": "M(f)\\geq|f_n|" }, { "math_id": 21, "text": "M(f)\\geq 1" }, { "math_id": 22, "text": "f=gh" }, { "math_id": 23, "text": "M(f)=M(g)M(h)." }, { "math_id": 24, "text": "\\mathbb{C}[x]" }, { "math_id": 25, "text": "h" }, { "math_id": 26, "text": "\n\\|h\\|_{1} \\leq\n2^{k}M(h) \\leq \n2^k\\frac{|h_k|}{|f_n|}\\|f\\|_2 \\leq \n2^n\\frac{|h_k|}{|f_n|}\\|f\\|_2\n" }, { "math_id": 27, "text": "\n|h_i| \\leq\n\\binom{k}{i}M(h) \\leq \n\\binom{k}{i}\\frac{|h_k|}{|f_n|}\\|f\\|_2 \\leq \n\\binom{n}{i}\\frac{|h_k|}{|f_n|}\\|f\\|_2\n" }, { "math_id": 28, "text": "0<\\frac{|h_k|}{|f_n|}\\leq 1" }, { "math_id": 29, "text": "\\frac{|h_k|}{|f_n|}=1" }, { "math_id": 30, "text": "f,g,h\\in\\mathbb{Z}[x]" }, { "math_id": 31, "text": "gh" }, { "math_id": 32, "text": "\n\\|g\\|_{\\infty}\\|h\\|_{\\infty} \\leq\n\\|g\\|_{2}\\|h\\|_{2} \\leq\n\\|g\\|_{1}\\|h\\|_{1} \\leq\n2^{m+k}\\|f\\|_2 \\leq \n2^{n}\\sqrt{n+1}\\|f\\|_{\\infty},\n" }, { "math_id": 33, "text": "\n\\|h\\|_{\\infty} \\leq\n\\|h\\|_{2} \\leq\n\\|h\\|_{1} \\leq\n2^{k}\\|f\\|_2 \\leq \n2^n\\|f\\|_2 \\leq \n2^n\\|f\\|_1,\n" }, { "math_id": 34, "text": "\n\\|h\\|_{\\infty} \\leq\n\\|h\\|_{2} \\leq\n\\|h\\|_{1} \\leq\n2^{k}\\|f\\|_2 \\leq \n2^{k}\\sqrt{n+1}\\|f\\|_{\\infty} \\leq \n2^{n}\\sqrt{n+1}\\|f\\|_{\\infty},\n" }, { "math_id": 35, "text": "\n|h_i| \\leq\n\\binom{k}{i}M(h) \\leq \n\\binom{k}{i}\\|f\\|_2 \\leq \n\\binom{n}{i}\\|f\\|_2,\n" }, { "math_id": 36, "text": "\n\\|h\\|_{\\infty} \\leq \n\\binom{k}{\\lfloor k/2\\rfloor}\\|f\\|_2 \\leq \n\\binom{n}{\\lfloor n/2\\rfloor}\\|f\\|_2 \\leq \n\\binom{n}{\\lfloor n/2\\rfloor}\\|f\\|_1.\n" }, { "math_id": 37, "text": "\n\\|h\\|_{\\infty} \\leq \n\\binom{n}{\\lfloor n/2\\rfloor}\\|f\\|_2 \\approx\n2^n\\sqrt{\\frac{2}{\\pi n}}\\|f\\|_2.\n" }, { "math_id": 38, "text": "f\\in\\mathbb{Z}[x]" }, { "math_id": 39, "text": "k\\leq\\lfloor n/2\\rfloor" }, { "math_id": 40, "text": "\n\\|h\\|_{\\infty} \\leq \n\\binom{\\lfloor n/2\\rfloor}{\\lfloor n/4\\rfloor}\\|f\\|_2 \\leq \n\\binom{\\lfloor n/2\\rfloor}{\\lfloor n/4\\rfloor}\\|f\\|_1.\n" }, { "math_id": 41, "text": "k" }, { "math_id": 42, "text": "f=x^n-1" }, { "math_id": 43, "text": "h=\\Phi_n(x)" }, { "math_id": 44, "text": "k=\\varphi(n)" }, { "math_id": 45, "text": "\\|f\\|_{2}=\\sqrt{2}" }, { 
"math_id": 46, "text": "\\|h\\|_{\\infty}=A(n) " }, { "math_id": 47, "text": "\\|h\\|_{\\infty} =A(n) > e^{\\left(n^{(\\log 2)/(\\log\\log n)}\\right)}," }, { "math_id": 48, "text": "\ne^{\\left(n^{(\\log 2)/(\\log\\log n)}\\right)}<\n\\|h\\|_{\\infty} \\leq \n\\binom{k}{\\lfloor k/2\\rfloor}\\|f\\|_2 =\n\\binom{\\varphi(n)}{\\lfloor \\varphi(n)/2\\rfloor}\\sqrt{2} \\approx\n2^{\\varphi(n)}\\sqrt{\\frac{2}{\\pi \\varphi(n)}}\\sqrt{2}\\geq\n2^{e^{-\\gamma}n/(\\log\\log n)}\\frac{2}{\\sqrt{\\pi e^{-\\gamma}n/(\\log\\log n)}}. \n" }, { "math_id": 49, "text": "\\varepsilon>0" }, { "math_id": 50, "text": "\\|h\\|_{\\infty} =A(n) < e^{\\left(n^{(\\log 2+\\varepsilon)/(\\log\\log n)}\\right)}." }, { "math_id": 51, "text": "\nH(x)=(x+1)(x^2+x+1)=x^3+2x^2+2x+1, \\ \\ \\ \nF(x)=H(x)\\cdot H(-x)=-x^6+1\n" }, { "math_id": 52, "text": "j" }, { "math_id": 53, "text": "\nh=h_j=H(x)^j, \\ \\ \\ \nf=f_j=F(x)^j.\n" }, { "math_id": 54, "text": "k=3j" }, { "math_id": 55, "text": "n=6j" }, { "math_id": 56, "text": "\n\\|h_j\\|_{\\infty}\\geq 6^j\\frac{1}{3j+1}, \\ \\ \\ \n\\|f_j\\|_{\\infty}\\approx 2^j\\sqrt{\\frac{2}{\\pi j}}.\n" }, { "math_id": 57, "text": "\n\\|h\\|_{\\infty} \\leq\n2^{k}\\sqrt{n+1}\\|f\\|_{\\infty}\n" }, { "math_id": 58, "text": "\n3^{n/6}\\sqrt{\\frac{\\pi}{3n}} \\approx\n\\frac{6^j\\frac{1}{3j+1}}{2^j\\sqrt{\\frac{2}{\\pi j}}} \\lesssim\n\\frac{\\|h\\|_{\\infty}}{\\|f\\|_{\\infty}} \\leq\n2^{k}\\sqrt{n+1}=\n2^{n/2}\\sqrt{n+1}\n" }, { "math_id": 59, "text": "\n1.2009^n \\approx\n\\sqrt[6]{3}^n \\lesssim\n\\frac{\\|h\\|_{\\infty}}{\\|f\\|_{\\infty}} \\lesssim\n\\sqrt{2}^n\\approx \n1.4142^n.\n" }, { "math_id": 60, "text": "h=h_j=H(x)^j=(x+1)^j(x^2+x+1)^j" }, { "math_id": 61, "text": "R\\subset\\mathbb{C}" }, { "math_id": 62, "text": "\\mathbb{C}" }, { "math_id": 63, "text": "p" }, { "math_id": 64, "text": "f_p" }, { "math_id": 65, "text": "\\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 66, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=73741430
73741832
Jargonness
Jargon and science communication Jargonness is a piecewise mathematical function mapping the frequencies of a word's appearance in scientific and contemporary English corpora to a parameter quantifying the word's association with scientific jargon - the "jargonness" of that word. It is expressed mathematically as:formula_0 In the above equation, formula_1 stands for the frequency of a word's appearance in a general English-language corpus and formula_2 stands for its frequency in a scientific corpus. Method of use. Both the frequencies (formula_1 and formula_2) must be determined and then substituted in the above equation to calculate the word's jargonness. In case a word has no mention in the general English corpus, 3 is taken as its jargonness, as suggested by the second part of the equation. Since the logarithm in the first part of the equation is a common logarithm (base 10), this convention amounts to assuming that the word is a thousand times more likely to appear in a scientific text than in a non-scientific one. Examples of corpora. Several general-English and scientific corpora have been employed to determine the frequencies mentioned above. References. <templatestyles src="Reflist/styles.css" />
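The piecewise definition above translates directly into code. A minimal Python sketch (the argument names are illustrative):
<syntaxhighlight lang="python">
import math

def jargonness(f_g, f_s):
    """f_g: frequency in the general English corpus; f_s: frequency in the scientific corpus."""
    if f_g > 0:
        return math.log10(f_s / f_g)
    return 3  # the word does not appear in the general corpus at all

# A word that is 100 times more frequent in the scientific corpus than in the
# general corpus has jargonness log10(100) = 2.
</syntaxhighlight>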
[ { "math_id": 0, "text": "jargonness = \\begin{cases} log \\left ( \\frac{f_s}{f_g} \\right ), & f_g > 0 \\\\ 3, & f_g = 0 \\end{cases}" }, { "math_id": 1, "text": "f_g" }, { "math_id": 2, "text": "f_s" } ]
https://en.wikipedia.org/wiki?curid=73741832
73752293
Rubidium selenide
<templatestyles src="Chembox/styles.css"/> Chemical compound Rubidium selenide is an inorganic compound composed of selenium and rubidium. It is a selenide with the chemical formula Rb2Se. Rubidium selenide is used together with caesium selenide in photovoltaic cells. Preparation. Rubidium selenide can be prepared by reacting mercury selenide and metallic rubidium. It can also be synthesized from the elements in liquid ammonia. Hydrogen selenide can also be dissolved in an aqueous solution of rubidium hydroxide to eventually form rubidium selenide. This method is similar to the method for preparing rubidium sulfide, because they are both chalcogenide compounds. RbOH + H2Se → RbHSe + H2O RbHSe + RbOH → Rb2Se + H2O Crystal structure. Rubidium selenide has a cubic crystal structure, which belongs to the antifluorite structure type, with space group formula_0 and lattice parameter a = 801.0 pm. The unit cell contains 4 formula units. References. <templatestyles src="Reflist/styles.css" />
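As a worked example of the structural data, the lattice parameter and cell contents given above imply an X-ray density of roughly 3.2 g/cm3. A back-of-the-envelope Python sketch (the atomic weights and the Avogadro constant are standard values, not taken from this article):
<syntaxhighlight lang="python">
# Density implied by the antifluorite cell: a = 801.0 pm, 4 formula units per cell.
A_CUBE_CM3 = (801.0e-10) ** 3          # cell volume in cm^3 (801.0 pm = 801.0e-10 cm)
MOLAR_MASS = 2 * 85.468 + 78.971       # g/mol for Rb2Se, from standard atomic weights
N_A = 6.02214e23                       # Avogadro constant

density = 4 * MOLAR_MASS / (N_A * A_CUBE_CM3)   # about 3.2 g/cm^3
</syntaxhighlight>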
[ { "math_id": 0, "text": "Fm\\bar{3}m" } ]
https://en.wikipedia.org/wiki?curid=73752293
73752551
Fundamental sequence (set theory)
In set theory, a mathematical discipline, a fundamental sequence is a cofinal sequence of ordinals all below a given limit ordinal. Depending on author, fundamental sequences may be restricted to ω-sequences only or permit fundamental sequences of length formula_0. The formula_1 element of the fundamental sequence of formula_2 is commonly denoted formula_3, although it may be denoted formula_4 or formula_5. Additionally, some authors may allow fundamental sequences to be defined on successor ordinals. The term dates back to (at the latest) Veblen's construction of normal functions formula_6, while the concept dates back to Hardy's 1904 attempt to construct a set of cardinality formula_7. Definition. Given an ordinal formula_2, a fundamental sequence for formula_2 is a sequence formula_8 such that formula_9 and formula_10. An additional restriction may be that the sequence of ordinals must be strictly increasing. Examples. The following is a common assignment of fundamental sequences to all limit ordinals less than formula_11: formula_12; formula_13 whenever formula_2 is a limit ordinal; and formula_14, where formula_15. This is very similar to the system used in the Wainer hierarchy. Usage. Fundamental sequences arise in some settings of definitions of large countable ordinals, definitions of hierarchies of fast-growing functions, and proof theory. Bachmann defined a hierarchy of functions formula_16 in 1950, providing a system of names for ordinals up to what is now known as the Bachmann–Howard ordinal, by defining fundamental sequences for namable ordinals below formula_17. This system was subsequently simplified by Feferman and Aczel to reduce the reliance on fundamental sequences. The fast-growing hierarchy, Hardy hierarchy, and slow-growing hierarchy of functions are all defined via a chosen system of fundamental sequences up to a given ordinal. The fast-growing hierarchy is closely related to the Hardy hierarchy, which is used in proof theory along with the slow-growing hierarchy to majorize the provably computable functions of a given theory. Additional conditions. A system of fundamental sequences up to formula_2 is said to have the Bachmann property if for all ordinals formula_18 in the domain of the system and for all formula_19, formula_20. If a system of fundamental sequences has the Bachmann property, all the functions in its associated fast-growing hierarchy are monotone, and formula_21 eventually dominates formula_22 when formula_23. References. <templatestyles src="Reflist/styles.css" />
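The assignment above can be made completely explicit. The following Python sketch implements the three rules for ordinals below formula_11 written in Cantor normal form (the list-of-exponents representation is an assumption made for illustration):
<syntaxhighlight lang="python">
# An ordinal below epsilon_0 is a list of exponents (each itself such a list) in
# non-increasing order: [] is 0, [[]] is 1, [[[]]] is omega, [[[]], []] is omega + 1.
def is_limit(a):
    return bool(a) and a[-1] != []          # the last term is w^e with e > 0

def fundamental(a, n):
    """Return a[n], the n-th element (n = 0, 1, 2, ...) of the fundamental sequence of a limit ordinal a."""
    head, e = a[:-1], a[-1]                 # a = head + w^e with e > 0
    if is_limit(e):
        return head + [fundamental(e, n)]   # w^e[n] = w^(e[n])
    return head + [e[:-1]] * (n + 1)        # w^(g+1)[n] = w^g * (n + 1)

# fundamental([[[]]], 2) == [[], [], []]      (omega[2] = 3)
# fundamental([[[[]]]], 2) == [[[], [], []]]  (omega^omega[2] = omega^3)
</syntaxhighlight>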
[ { "math_id": 0, "text": "\\mathrm{\\omega}_1" }, { "math_id": 1, "text": "n^{\\text{th}}" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\alpha[n]" }, { "math_id": 4, "text": "\\alpha_n" }, { "math_id": 5, "text": "\\{\\alpha\\}(n)" }, { "math_id": 6, "text": "\\varphi_\\alpha" }, { "math_id": 7, "text": "\\aleph_1" }, { "math_id": 8, "text": "(\\alpha[n])_{n\\in\\mathbb N}" }, { "math_id": 9, "text": "\\forall(n\\in\\mathbb N)(\\alpha[n]<\\alpha)" }, { "math_id": 10, "text": "\\textrm{sup}\\{\\alpha[n]\\mid n\\in\\mathbb N\\}=\\alpha" }, { "math_id": 11, "text": "\\varepsilon_0" }, { "math_id": 12, "text": "\\omega^{\\alpha+1}[n]=\\omega^\\alpha\\cdot(n+1)" }, { "math_id": 13, "text": "\\omega^\\alpha[n]=\\omega^{\\alpha[n]}" }, { "math_id": 14, "text": "(\\omega^{\\alpha_1}+\\ldots+\\omega^{\\alpha_k})[n]=\\omega^{\\alpha_1}+\\ldots+(\\omega^{\\alpha_k}[n])" }, { "math_id": 15, "text": "\\alpha_1 \\geq \\dots \\geq \\alpha_k" }, { "math_id": 16, "text": "\\phi_\\alpha" }, { "math_id": 17, "text": "\\omega_1" }, { "math_id": 18, "text": "\\alpha,\\beta" }, { "math_id": 19, "text": "n\\in\\mathbb N" }, { "math_id": 20, "text": "\\alpha[n]<\\beta<\\alpha\\implies\\alpha[n]<\\beta[0]" }, { "math_id": 21, "text": "f_\\beta" }, { "math_id": 22, "text": "f_\\alpha" }, { "math_id": 23, "text": "\\alpha<\\beta" } ]
https://en.wikipedia.org/wiki?curid=73752551
73755426
Null infinity
Boundary region of asymptotically flat spacetimes in general relativity In theoretical physics, null infinity is a region at the boundary of asymptotically flat spacetimes. In general relativity, straight paths in spacetime, called geodesics, may be space-like, time-like, or light-like (also called null). The distinction between these paths stems from whether the spacetime interval of the path is positive (corresponding to space-like), negative (corresponding to time-like), or zero (corresponding to null). Light-like paths correspond to physical phenomena which propagate through space at the speed of light, such as electromagnetic radiation and gravitational radiation. The boundary of a flat spacetime is known as conformal infinity, and can be thought of as the end points of all geodesics as they go off to infinity. The region of null infinity corresponds to the terminus of all null geodesics in a flat Minkowski space. The different regions of conformal infinity are most often visualized on a Penrose diagram, where they make up the boundary of the diagram. There are two distinct regions of null infinity, called past and future null infinity, which can be denoted using a script 'I' as formula_0 and formula_1. These two regions are often referred to as 'scri-plus' and 'scri-minus' respectively. Geometrically, each of these regions actually has the structure of a topologically cylindrical three dimensional region. The study of null infinity originated from the need to describe the global properties of spacetime. While early methods in general relativity focused on the local structure built around local frames of reference, work beginning in the 1960s turned to global descriptions of general relativity, analyzing the structure of spacetime as a whole. The study of null infinity as such began with Roger Penrose's work analyzing black hole spacetimes. Null infinity is a useful mathematical tool for analyzing behavior in asymptotically flat spaces when limits of null paths need to be taken. For instance, black hole spacetimes are asymptotically flat, and null infinity can be used to characterize radiation in the limit that it travels outward away from the black hole. Null infinity can also be considered in the context of spacetimes which are not necessarily asymptotically flat, such as in the FLRW cosmology. Conformal compactification in Minkowski spacetime. The metric for a flat Minkowski spacetime in spherical coordinates is formula_2. Conformal compactification induces a transformation which preserves angles, but changes the local structure of the metric and adds the boundary of the manifold, thus making it compact. For a given metric formula_3, a conformal compactification scales the entire metric by some conformal factor, formula_4, so that all of the points at infinity are brought to a finite coordinate value. Typically, the radial and time coordinates are transformed into null coordinates formula_5 and formula_6. These are then transformed as formula_7 and formula_8 in order to use the properties of the inverse tangent function to map infinity to a finite value. The typical time and space coordinates may be introduced as formula_9 and formula_10. After these coordinate transformations, a conformal factor is introduced, leading to a new unphysical metric for Minkowski space: formula_11. This is the metric represented on a Penrose diagram. 
Unlike the original metric, this metric describes a manifold with a boundary, given by the restrictions on formula_12 and formula_13. There are two null surfaces on this boundary, corresponding to past and future null infinity. Specifically, future null infinity consists of all points where formula_14 and formula_15, and past null infinity consists of all points where formula_16 and formula_15. From the coordinate restrictions, null infinity is a three dimensional null surface, with a cylindrical topology formula_17. The construction given here is specific to the flat metric of Minkowski space. However, such a construction generalizes to other asymptotically flat spaces as well. In such scenarios, null infinity still exists as a three dimensional null surface at the boundary of the spacetime manifold, but the manifold's overall structure might be different. For instance, in Minkowski space, all null geodesics begin at past null infinity and end at future null infinity. However, in the Schwarzschild black hole spacetime, the black hole event horizon leads to two possibilities: geodesics may end at null infinity, but may also end at the black hole's future singularity. The presence of null infinity (along with the other regions of conformal infinity) guarantees geodesic completion on the spacetime manifold, where every geodesic either terminates at a true singularity or intersects the boundary at infinity. Other physical applications. The symmetries of null infinity are characteristically different from those of the typical regions of spacetime. While the symmetries of a flat Minkowski spacetime are given by the Poincaré group, the symmetries of null infinity are instead given by the Bondi–Metzner–Sachs (BMS) group. The work by Bondi, Metzner, and Sachs characterized gravitational radiation using analyses related to null infinity, whereas previous work such as the ADM framework dealt with characterizations of spacelike infinity. In recent years, interest has grown in studying gravitons on the null infinity boundary. Using the BMS group, quanta on null infinity can be characterized as massless spin-2 particles, consistent with the quanta of general relativity being gravitons.
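A minimal numerical sketch of the compactifying coordinate change described above for the Minkowski case (purely illustrative):
<syntaxhighlight lang="python">
import math

# (t, r) -> (T, R) via u = t + r, v = t - r, p = arctan(u), q = arctan(v),
# T = p + q, R = p - q, as in the section on conformal compactification.
def compactify(t, r):
    p = math.atan(t + r)
    q = math.atan(t - r)
    return p + q, p - q

# An outgoing light ray t = r + c approaches T = pi - R as r grows, i.e. it
# terminates on future null infinity; e.g. compactify(1e9 + 1.0, 1e9) is very
# close to (pi/2 + atan(1), pi/2 - atan(1)).
</syntaxhighlight>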
[ { "math_id": 0, "text": "\\mathcal{I}^+" }, { "math_id": 1, "text": "\\mathcal{I}^-" }, { "math_id": 2, "text": "ds^2=-dt^2+dr^2+r^2d\\Omega^2" }, { "math_id": 3, "text": "g_{ij}" }, { "math_id": 4, "text": "\\overline{g_{ij}}=\\Omega^2 g_{ij}" }, { "math_id": 5, "text": "u= t+r" }, { "math_id": 6, "text": "v = t-r" }, { "math_id": 7, "text": "p = \\tan^{-1}u" }, { "math_id": 8, "text": "q = \\tan^{-1}v" }, { "math_id": 9, "text": "T = p +q" }, { "math_id": 10, "text": "R = p-q" }, { "math_id": 11, "text": "ds^2 = - dT^2 + dR^2 + (\\sin^2 R) d\\Omega^2" }, { "math_id": 12, "text": "R" }, { "math_id": 13, "text": "T" }, { "math_id": 14, "text": "T= \\pi -R" }, { "math_id": 15, "text": "0<R<\\pi" }, { "math_id": 16, "text": "T = R - \\pi" }, { "math_id": 17, "text": "\\mathbb{R}\\times S^2" } ]
https://en.wikipedia.org/wiki?curid=73755426
7376
Cosmic microwave background
Trace radiation from the early universe The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The "surface of last scattering" refers to a shell at the right distance in space so photons are now received that were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peak detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters. Features. The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. Intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of . Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature. The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation. 
The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo. The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision and such measurements are critical for cosmological theories. In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarization at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 less strong than the temperature anisotropy; it supplements the temperature data as they are correlated. The B-mode signal is even weaker but may contain additional cosmological data. The anisotropy is related to the physical origin of the polarization. Excitation of an electron by linearly polarized light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has quadrupole anisotropy, residual polarization will be seen. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late times. The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (10⁹) the number density of matter in the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun. The energy density of the CMB is , about 411 photons/cm3. History. Early speculations. In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K. Discovery. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey, had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. 
The antenna was constructed in 1959 to support Project Echo, the National Aeronautics and Space Administration's passive communications satellites, which used large earth-orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery. Cosmic origin. The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the solar system, from galaxies, from intergalactic plasma, and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity versus frequency, or spectrum, needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed to be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin. Progress on theory. In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe. Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name "relic radiation", calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. COBE. After a lull in the 1970s caused in part by the many experimental difficulties in measuring the CMB at high precision, increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy. The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed predictions encouraged larger and more ambitious experiments. The NASA Cosmic Background Explorer (COBE) satellite, which orbited Earth in 1989–1996, detected and quantified the large scale anisotropies at the limit of its detection capabilities. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery. Precision cosmology. Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the two decades. 
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation. Observations after COBE. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. Wilkinson Microwave Anisotropy Probe. In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission was released in five installments, the last being the nine-year summary. The results are broadly consistent with Lambda CDM models based on 6 free parameters and fitting into Big Bang cosmology with cosmic inflation. Planck Surveyor. A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment, which has produced the most precise measurements at small angular scales to date, and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. 
The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant was measured to be 67.74 ± 0.46 (km/s)/Mpc. Theoretical models. The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to about 2.726 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called "the surface of last scattering". This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6 × 10−5 of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature. Predictions based on the Big Bang model. 
In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature "T"r of the CMB as a function of redshift, "z", can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): "T"r = 2.725 K × (1 + "z") The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.&lt;ref name="hep-ph/0309057"&gt;&lt;/ref&gt; Primary anisotropy. The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures. 
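As a quick numerical check of the redshift–temperature relation "T"r = 2.725 K × (1 + "z") quoted above, the short Python sketch below (illustrative only; z ≈ 1089 is the commonly quoted last-scattering redshift, consistent with the factor of 1,089 mentioned earlier) recovers the roughly 3,000 K decoupling temperature:

T0 = 2.725            # present-day CMB temperature, in kelvin
z_decoupling = 1089   # commonly quoted redshift of last scattering

# T_r = T0 * (1 + z): color temperature of the radiation at decoupling.
T_decoupling = T0 * (1 + z_decoupling)
print(T_decoupling)   # about 2970 K, consistent with the roughly 3,000 K quoted above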
The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called "adiabatic" and "isocurvature". A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales ("ℓ" values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, when the treatment of the primordial plasma as a fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in the expanding universe, and the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling even while some Compton scattering is still occurring. These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the "photon visibility function" (PVF). This function is defined so that, denoting the PVF by "P"("t"), the probability that a CMB photon last scattered between time "t" and "t" + "dt" is given by "P"("t") "dt". The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which "P"("t") has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old. Late time anisotropy. Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. 
In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: anisotropies on small angular scales are erased by the scattering, much as details appear washed out when an object is viewed through fog, and the scattering of photons off free electrons (Thomson scattering) induces polarization anisotropies on large angular scales that are correlated with the large-angle temperature anisotropy. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields. Alternative theories. The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists". However, there are challenges to the standard Big Bang framework for explaining CMB data. In particular, standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data. As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, formula_0. This value of formula_0 is one of the best results of experimental cosmology and the steady state model can predict it. However, alternative models have their own set of problems, and they have offered only post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for and challenges to the standard explanation. Polarization. The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl mode). This is in analogy to electrostatics, in which the electric field ("E"-field) has a vanishing curl and the magnetic field ("B"-field) has a vanishing divergence. E-modes. The E-modes arise from Thomson scattering in a heterogeneous plasma. E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). B-modes. B-modes are expected to be an order of magnitude weaker than the E-modes. B-modes are not produced by standard scalar-type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the big bang. However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal. Primordial gravitational waves. 
Models of "slow-roll" cosmic inflation in the early universe predicts primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation and their strength can confirm and exclude different models of inflation. Claims that this characteristic pattern of B-mode polarization had been measured by BICEP2 instrument were later attributed to cosmic dust due to new results of the Planck experiment. Gravitational lensing. The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Multipole analysis. The CMB angular anisotropies are usually presented in terms of power per multipole. The angular the map of temperature across the sky, formula_1 is written as coefficients of spherical harmonics, formula_2 where the formula_3 term measures the strength of the angular oscillation in formula_4, and "ℓ" is the multipole number while "m" is the azimuthal number. The azimuthal variation is not significant and is removed by applying the angular correlation function, giving power spectrum term formula_5 Increasing values of "ℓ" correspond to higher multipole moments of CMB, meaning more rapid variation with angle. CMBR monopole term ("ℓ" = 0). The monopole term, "ℓ" = 0, is the constant isotropic mean temperature of the CMB, "T""γ" = with one standard deviation confidence. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite. CMBR dipole anisotropy ("ℓ" = 1). CMB dipole represents the largest anisotropy, which is in the first spherical harmonic ("ℓ" = 1), a cosine function. The amplitude of CMB dipole is around . The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude depends on the time due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation of this term is 1 year, which fits the observation done by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at in the direction of galactic longitude "ℓ" =, "b" =. The dipole is now used to calibrate mapping studies. Multipole ("ℓ" ≥ 2). The temperature variation in the CMB temperature maps at higher multipoles, or "ℓ" ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around "z" ⋍ 1100. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot dense environment, electrons and protons could not form any neutral atoms. 
The baryons in such an early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon-baryon plasma. Shortly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down, and these fluctuations were "frozen into" the CMB maps we observe today. Data analysis challenges. Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galactic emission has to be removed or masked, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques. Anomalies. With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions.&lt;ref name="arXiv:astro-ph/0511666"&gt;&lt;/ref&gt;&lt;ref name="arXiv:astro-ph/0503213"&gt;&lt;/ref&gt; The most longstanding of these is the low-"ℓ" multipole controversy. Even in the COBE map, it was observed that the quadrupole ("ℓ" = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole ("ℓ" = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. 
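The size of the motion-induced dipole that must be subtracted, mentioned at the start of this section, can be estimated directly from the values quoted in the multipole section above. A minimal Python sketch (variable names are illustrative; the solar velocity is the commonly quoted value):

T0 = 2.725        # mean CMB temperature, in kelvin
v_sun = 370e3     # solar velocity relative to the CMB rest frame, in m/s
c = 2.998e8       # speed of light, in m/s

# To lowest order in v/c, motion through the CMB produces a dipole of amplitude T0 * v / c.
dipole_mK = T0 * v_sun / c * 1000
print(dipole_mK)  # about 3.4 mK, matching the dipole amplitude quoted above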
A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: WMAP chief scientist Charles L. Bennett suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things." Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle. Future evolution. Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight, and perhaps later by the background radiation fields of processes that may take place in the far future of the universe such as proton decay, evaporation of black holes, and positronium decay. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T_0" }, { "math_id": 1, "text": "T(\\theta,\\varphi)," }, { "math_id": 2, "text": "T(\\theta,\\varphi) = \\sum_{\\ell m} a_{\\ell m} Y_{\\ell m}(\\theta,\\varphi)" }, { "math_id": 3, "text": "a_{\\ell m}" }, { "math_id": 4, "text": "Y_{\\ell m}(\\theta,\\varphi)" }, { "math_id": 5, "text": "C_{\\ell}\\equiv \\langle |a_{\\ell m}|^2 \\rangle." } ]
https://en.wikipedia.org/wiki?curid=7376
73761875
Lamé's theorem
Theorem related to the Euclidean algorithm. Lamé's Theorem is the result of Gabriel Lamé's analysis of the complexity of the Euclidean algorithm. Using Fibonacci numbers, he proved in 1844 that when looking for the greatest common divisor (GCD) of two integers "a" and "b", the algorithm finishes in at most 5"k" steps, where "k" is the number of digits (decimal) of the smaller number "b". Statement. The number of division steps in the Euclidean algorithm with entries formula_0 and formula_1 is less than formula_2 times the number of decimal digits of formula_3. Proof. Let formula_4 be two positive integers. Applying the Euclidean algorithm to them provides two sequences formula_5 and formula_6 of positive integers such that, setting formula_7 formula_8 and formula_9 one has formula_10 for formula_11 and formula_12 The number n is called the "number of steps" of the Euclidean algorithm, since it is the number of Euclidean divisions that are performed. The Fibonacci numbers are defined by formula_13 formula_14 and formula_15 for formula_16 The above relations show that formula_17 and formula_18 By induction, formula_19 So, if the Euclidean algorithm requires n steps, one has formula_20 One has formula_21 for every integer formula_22, where formula_23 is the golden ratio. This can be proved by induction, starting with formula_24 formula_25 and continuing by using that formula_26 formula_27 So, if n is the number of steps of the Euclidean algorithm, one has formula_28 and thus formula_29 using formula_30 If k is the number of decimal digits of formula_31, one has formula_32 and formula_33 So, formula_34 and, as both members of the inequality are integers, formula_35 which is exactly what Lamé's theorem asserts. As a side result of this proof, one gets that the pairs of integers formula_36 that give the maximum number of steps of the Euclidean algorithm (for a given size of formula_31) are the pairs of consecutive Fibonacci numbers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
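The bound is easy to check numerically. The following Python sketch (illustrative; the function name is arbitrary and not from any standard library) counts the division steps of the Euclidean algorithm, verifies that the count never exceeds five times the number of decimal digits of the smaller entry, and shows that consecutive Fibonacci numbers come closest to the bound:

import random

def euclid_steps(u, v):
    """Count the Euclidean divisions performed when computing gcd(u, v)."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

# Lamé's bound: the step count is at most 5 times the number of
# decimal digits of the smaller argument.
for _ in range(1000):
    u = random.randint(1, 10**12)
    v = random.randint(1, u)
    k = len(str(v))                      # decimal digits of min(u, v)
    assert euclid_steps(u, v) <= 5 * k

# Worst case: consecutive Fibonacci numbers.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b                      # now a = F_31, b = F_32
print(euclid_steps(b, a), 5 * len(str(a)))   # prints 30 and 35

Random pairs typically use far fewer steps than the bound allows, while the Fibonacci pairs come within a few steps of it, in line with the side result noted above.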
[ { "math_id": 0, "text": "u\\,\\!" }, { "math_id": 1, "text": "v\\,\\!" }, { "math_id": 2, "text": "5" }, { "math_id": 3, "text": "\\min(u,v)\\,\\!" }, { "math_id": 4, "text": "u>v" }, { "math_id": 5, "text": "(q_1,\\ldots, q_{n})" }, { "math_id": 6, "text": "(v_2,\\ldots, v_{n})" }, { "math_id": 7, "text": "v_0=u," }, { "math_id": 8, "text": "v_1=v" }, { "math_id": 9, "text": "v_{n+1}=0," }, { "math_id": 10, "text": "v_{i-1}=q_iv_i +v_{i+1}" }, { "math_id": 11, "text": "i=1,\\ldots, n," }, { "math_id": 12, "text": "u>v>v_2>\\cdots >v_{n}>0." }, { "math_id": 13, "text": "F_0=0," }, { "math_id": 14, "text": "F_1=1," }, { "math_id": 15, "text": "F_{n+1}=F_n+F_{n-1}" }, { "math_id": 16, "text": "n>0." }, { "math_id": 17, "text": "v_{n}\\ge 1=F_2," }, { "math_id": 18, "text": "v_{n-1}\\ge 2=F_3." }, { "math_id": 19, "text": "\\begin{align}\nv_{n-i-1}&=q_{n-i}v_{n-i}+v_{n-i+1}\\\\\n &\\ge v_{n-i}+v_{n-i+1}\\\\\n &\\ge F_{i+2}+F_{i+1} =F_{i+3}.\n\\end{align}" }, { "math_id": 20, "text": "v\\ge F_{n+1}." }, { "math_id": 21, "text": "F_{k}\\ge \\varphi^{k-2}" }, { "math_id": 22, "text": "k>2" }, { "math_id": 23, "text": "\\varphi=\\frac {1+\\sqrt 5}2" }, { "math_id": 24, "text": "F_2 =\\varphi^0=1," }, { "math_id": 25, "text": "F_3=2> \\varphi, " }, { "math_id": 26, "text": "\\varphi^2=\\varphi +1:" }, { "math_id": 27, "text": "\\begin{align}\nF_{k+1}&=F_k+F_{k-1}\\\\\n&\\ge \\varphi^{k-2}+\\varphi^{k-3}\\\\\n&=\\varphi^{k-3}(1+\\varphi)\\\\\n&=\\varphi^{k-1}.\n\\end{align}" }, { "math_id": 28, "text": "v\\ge \\varphi^{n-1}," }, { "math_id": 29, "text": "n-1 \\le \\frac {\\log_{10} v}{\\log_{10} \\varphi}< 5\\log_{10} v," }, { "math_id": 30, "text": "\\frac 1{\\log_{10} \\varphi}<5." }, { "math_id": 31, "text": "v" }, { "math_id": 32, "text": "v<10^k" }, { "math_id": 33, "text": "\\log_{10} v<k." }, { "math_id": 34, "text": "n-1<5k," }, { "math_id": 35, "text": "n\\le 5k," }, { "math_id": 36, "text": "(u,v)" } ]
https://en.wikipedia.org/wiki?curid=73761875
7377730
Beta-binomial distribution
Discrete probability distribution In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. The beta-binomial distribution is the binomial distribution in which the probability of success at each of "n" trials is not fixed but randomly drawn from a beta distribution. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial-type distributed data. The beta-binomial is a one-dimensional version of the Dirichlet-multinomial distribution as the binomial and beta distributions are univariate versions of the multinomial and Dirichlet distributions respectively. The special case where "α" and "β" are integers is also known as the negative hypergeometric distribution. Motivation and derivation. As a compound distribution. The Beta distribution is a conjugate distribution of the binomial distribution. This fact leads to an analytically tractable compound distribution where one can think of the formula_0 parameter in the binomial distribution as being randomly drawn from a beta distribution. Suppose we were interested in predicting the number of heads, formula_1, in formula_2 future trials. This is given by formula_3 Using the properties of the beta function, this can alternatively be written formula_4 As an urn model. The beta-binomial distribution can also be motivated via an urn model for positive integer values of "α" and "β", known as the Pólya urn model. Specifically, imagine an urn containing "α" red balls and "β" black balls, where random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, then two black balls are returned to the urn. If this is repeated "n" times, then the probability of observing "x" red balls follows a beta-binomial distribution with parameters "n", "α" and "β". By contrast, if the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a binomial distribution, and if the random draws are made without replacement, the distribution follows a hypergeometric distribution. Moments and properties. The first three raw moments are formula_5 and the kurtosis is formula_6 Letting formula_7 we note, suggestively, that the mean can be written as formula_8 and the variance as formula_9 where formula_10. The parameter formula_11 is known as the "intra class" or "intra cluster" correlation. It is this positive correlation which gives rise to overdispersion. Note that when formula_12, no information is available to distinguish between the beta and binomial variation, and the two models have equal variances. Factorial moments. The "r"-th factorial moment of a Beta-binomial random variable "X" is formula_13. Point estimates. Method of moments. The method of moments estimates can be obtained by noting the first and second moments of the beta-binomial and setting those equal to the sample moments formula_14 and formula_15. We find formula_16 These estimates can be nonsensically negative, which is evidence that the data are either equidispersed or underdispersed relative to the binomial distribution. In this case, the binomial distribution and the hypergeometric distribution are alternative candidates, respectively. Maximum likelihood estimation. 
While closed-form maximum likelihood estimates are impractical, the pdf consists of common functions (gamma and/or beta functions), so the estimates can easily be found via direct numerical optimization. Maximum likelihood estimates from empirical data can be computed using general methods for fitting multinomial Pólya distributions, methods for which are described in (Minka 2003). The R package VGAM, through the function vglm, facilitates the fitting of GLM-type models with responses distributed according to the beta-binomial distribution via maximum likelihood. There is no requirement that n is fixed throughout the observations. Example: Sex ratio heterogeneity. The data consist of the number of male children among the first 12 children of family size 13 in 6115 families taken from hospital records in 19th century Saxony (Sokal and Rohlf, p. 59 from Lindsey). The 13th child is ignored to blunt the effect of families non-randomly stopping when a desired gender is reached. The first two sample moments are formula_17 and therefore the method of moments estimates are formula_18 The maximum likelihood estimates can be found numerically: formula_19 and the maximized log-likelihood is formula_20 from which we find the AIC formula_21 The AIC for the competing binomial model is AIC = 25070.34, and thus we see that the beta-binomial model provides a superior fit to the data, i.e., there is evidence for overdispersion. Trivers and Willard postulate a theoretical justification for heterogeneity in gender-proneness among mammalian offspring. The superior fit is evident especially among the tails. Role in Bayesian statistics. The beta-binomial distribution plays a prominent role in the Bayesian estimation of a Bernoulli success probability formula_22, which we wish to estimate based on data. Let formula_23 be a sample of independent and identically distributed Bernoulli random variables formula_24. Suppose that our knowledge of formula_22 is, in Bayesian fashion, uncertain and is modeled by the prior distribution formula_25. If formula_26 then, through compounding, the prior predictive distribution is formula_27. After observing formula_28 the posterior distribution for formula_22 is formula_29 where formula_30 is a normalizing constant. We recognize the posterior distribution as a formula_31. Thus, again through compounding, we find that the posterior predictive distribution of a sum of a future sample of size formula_32 of formula_33 random variables is formula_34. Generating random variates. To draw a beta-binomial random variate formula_35 simply draw formula_36 and then draw formula_37.
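A minimal Python sketch of the estimation steps above, assuming NumPy and SciPy are available; the counts are the standard Geissler tallies of Saxony families with 0 to 12 boys among their first 12 children, which reproduce the sample moments quoted above, and all variable names are illustrative:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

n = 12
# Number of families with k = 0, 1, ..., 12 boys (6115 families in total).
counts = np.array([3, 24, 104, 286, 670, 1033, 1343, 1112, 829, 478, 181, 45, 7])
k = np.arange(n + 1)
N = counts.sum()

# Sample moments.
m1 = (k * counts).sum() / N         # about 6.23
m2 = (k**2 * counts).sum() / N      # about 42.31

# Method-of-moments estimates.
denom = n * (m2 / m1 - m1 - 1) + m1
alpha_mm = (n * m1 - m2) / denom                # about 34.1
beta_mm = (n - m1) * (n - m2 / m1) / denom      # about 31.6

# Maximum likelihood by direct numerical optimization of the log-likelihood.
def neg_loglik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(counts * betabinom.logpmf(k, n, a, b)).sum()

mle = minimize(neg_loglik, x0=[alpha_mm, beta_mm], method="Nelder-Mead")
print(alpha_mm, beta_mm, mle.x, -mle.fun)

# Drawing beta-binomial variates by compounding, as described above.
rng = np.random.default_rng(0)
p = rng.beta(mle.x[0], mle.x[1], size=10)
x = rng.binomial(n, p)

With these counts the method-of-moments values land near 34.1 and 31.6, and the numerical maximum likelihood estimates and maximized log-likelihood should land close to the figures quoted above.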
[ { "math_id": 0, "text": " p " }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\n \\begin{align} f(x\\mid n,\\alpha,\\beta) & = \\int_0^1 \\mathrm{Bin}(x|n,p)\\mathrm{Beta}(p\\mid \\alpha, \\beta) \\, dp \\\\[6pt]\n & = {n\\choose x}\\frac{1}{\\mathrm{B}(\\alpha,\\beta)}\n \\int_0^1 p^{x+\\alpha-1}(1-p)^{n-x+\\beta-1} \\, dp \\\\[6pt]\n & = {n\\choose x}\\frac{\\mathrm{B}(x+\\alpha,n-x+\\beta)} {\\mathrm{B}(\\alpha,\\beta)}. \n \\end{align}\n" }, { "math_id": 4, "text": "\n f(x\\mid n,\\alpha,\\beta) = \\frac{\\Gamma(n+1)\\Gamma(x+\\alpha)\\Gamma(n-x+\\beta)}{\\Gamma(n+\\alpha+\\beta)\\Gamma(x+1)\\Gamma(n-x+1)} \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}\n\n" }, { "math_id": 5, "text": " \n \\begin{align} \n \\mu_1 & =\\frac{n\\alpha}{\\alpha+\\beta} \\\\[8pt]\n \\mu_2 & =\\frac{n\\alpha[n(1+\\alpha)+\\beta]}{(\\alpha+\\beta)(1+\\alpha+\\beta)}\\\\[8pt]\n \\mu_3 & =\\frac{n\\alpha[n^{2}(1+\\alpha)(2+\\alpha)+3n(1+\\alpha)\\beta+\\beta(\\beta-\\alpha)]}{(\\alpha+\\beta)(1+\\alpha+\\beta)(2+\\alpha+\\beta)}\n \\end{align}\n" }, { "math_id": 6, "text": " \n \\beta_2 = \\frac{(\\alpha + \\beta)^2 (1+\\alpha+\\beta)}{n \\alpha \\beta( \\alpha + \\beta + 2)(\\alpha + \\beta + 3)(\\alpha + \\beta + n) } \\left[ (\\alpha + \\beta)(\\alpha + \\beta - 1 + 6n) + 3 \\alpha\\beta(n - 2) + 6n^2 -\\frac{3\\alpha\\beta n(6-n)}{\\alpha + \\beta} - \\frac{18\\alpha\\beta n^{2}}{(\\alpha+\\beta)^2} \\right].\n" }, { "math_id": 7, "text": "p=\\frac{\\alpha}{\\alpha+\\beta} \\!" }, { "math_id": 8, "text": "\n\\mu = \\frac{n\\alpha}{\\alpha+\\beta}=np\n\\!" }, { "math_id": 9, "text": "\n\\sigma^2 = \\frac{n\\alpha\\beta(\\alpha+\\beta+n)}{(\\alpha+\\beta)^2(\\alpha+\\beta+1)}\n = np(1-p) \\frac{\\alpha + \\beta + n}{\\alpha + \\beta + 1} = np(1-p)[1+(n-1)\\rho]\n\\!" }, { "math_id": 10, "text": "\\rho= \\tfrac{1}{\\alpha+\\beta+1}\\!" }, { "math_id": 11, "text": "\\rho \\; \\!" 
}, { "math_id": 12, "text": "n=1" }, { "math_id": 13, "text": "\\operatorname{E}\\bigl[(X)_r\\bigr] = \\frac{n!}{(n-r)!}\\frac{B(\\alpha+r,\\beta)}{B(\\alpha,\\beta)} =\n(n)_r \\frac{B(\\alpha+r,\\beta)}{B(\\alpha,\\beta)} " }, { "math_id": 14, "text": "m_1" }, { "math_id": 15, "text": "m_2" }, { "math_id": 16, "text": "\n \\begin{align} \n \\widehat{\\alpha} & =\\frac{nm_1-m_2}{n(\\frac{m_2}{m_1}-m_1-1)+m_1} \\\\[5pt]\n \\widehat{\\beta} & =\\frac{(n-m_1)(n-\\frac{m_2}{m_1})}{n(\\frac{m_2}{m_1}-m_1 - 1)+m_1}.\n \\end{align}\n" }, { "math_id": 17, "text": " \n \\begin{align} \n m_1 & = 6.23\\\\\n m_2 & = 42.31 \\\\\n n & = 12\n \\end{align}\n" }, { "math_id": 18, "text": " \n \\begin{align} \n \\widehat{\\alpha} & = 34.1350\\\\\n \\widehat{\\beta} & = 31.6085.\n \\end{align}\n" }, { "math_id": 19, "text": " \n \\begin{align} \n \\widehat\\alpha_\\mathrm{mle} & = 34.09558\\\\\n \\widehat\\beta_\\mathrm{mle} & = 31.5715\n \\end{align}\n" }, { "math_id": 20, "text": "\n \\log \\mathcal{L} = -12492.9\n" }, { "math_id": 21, "text": "\n \\mathit{AIC}=24989.74.\n" }, { "math_id": 22, "text": "p" }, { "math_id": 23, "text": "\\mathbf{X}=\\{X_1, X_2, \\cdots X_{n_1}\\} " }, { "math_id": 24, "text": "X_i \\sim \\text{Bernoulli}(p)" }, { "math_id": 25, "text": "p \\sim \\text{Beta}(\\alpha,\\beta)" }, { "math_id": 26, "text": "Y_1=\\sum_{i=1}^{n_1} X_i" }, { "math_id": 27, "text": "\nY_1 \\sim \\text{BetaBin}(n_1, \\alpha,\\beta)\n" }, { "math_id": 28, "text": "Y_1" }, { "math_id": 29, "text": "\n\\begin{align}\nf(p|\\mathbf{X},\\alpha,\\beta) & \\propto \\left(\\prod_{i=1}^{n_1} p^{x_i}(1-p)^{1-x_i} \\right)p^{\\alpha-1}(1-p)^{\\beta-1}\\\\\n & = Cp^{\\sum x_i +\\alpha-1}(1-p)^{n_1 -\\sum x_i +\\beta-1} \\\\\n& = Cp^{y_1 +\\alpha-1}(1-p)^{n_1-y_1 +\\beta-1} \n\\end{align}\n" }, { "math_id": 30, "text": "C" }, { "math_id": 31, "text": "\\mathrm{Beta}(y_1+\\alpha,n_1-y_1+\\beta)" }, { "math_id": 32, "text": "n_2" }, { "math_id": 33, "text": "\\mathrm{Bernoulli}(p)" }, { "math_id": 34, "text": "\nY_2 \\sim \\mathrm{BetaBin}(n_2, y_1+\\alpha, n_1-y_1+\\beta)\n" }, { "math_id": 35, "text": "X \\sim \\mathrm{BetaBin}(n, \\alpha,\\beta)" }, { "math_id": 36, "text": "p \\sim \\mathrm{Beta}(\\alpha,\\beta) " }, { "math_id": 37, "text": "X \\sim \\mathrm{B}(n,p)" }, { "math_id": 38, "text": "\\mathrm{BetaBin}(1, \\alpha, \\beta) \\sim \\mathrm{Bernoulli}(p)\\," }, { "math_id": 39, "text": "p=\\frac{\\alpha}{\\alpha+\\beta}\\," }, { "math_id": 40, "text": "\\mathrm{BetaBin}(n, 1, 1) \\sim U(0,n)\\," }, { "math_id": 41, "text": "U(a,b)\\," }, { "math_id": 42, "text": " \\lim_{s \\rightarrow \\infty} \\mathrm{BetaBin}(n, ps, (1-p)s) \\sim \\mathrm{B}(n,p)\\," }, { "math_id": 43, "text": "s=\\alpha+\\beta\\," }, { "math_id": 44, "text": "\\mathrm{B}(n,p)\\," }, { "math_id": 45, "text": "\\lim_{n \\rightarrow \\infty} \\mathrm{BetaBin}(n, \\alpha, \\frac{np}{(1-p)}) \\sim \\mathrm{NB}(\\alpha,p)\\," }, { "math_id": 46, "text": "\\mathrm{NB}(\\alpha,p)\\," } ]
https://en.wikipedia.org/wiki?curid=7377730
7378088
Aberth method
The Aberth method, or Aberth–Ehrlich method or Ehrlich–Aberth method, named after Oliver Aberth and Louis W. Ehrlich, is a root-finding algorithm developed in 1967 for simultaneous approximation of all the roots of a univariate polynomial. This method converges cubically, an improvement over the Durand–Kerner method, another algorithm for approximating all roots at once, which converges quadratically. (However, both algorithms converge linearly at multiple zeros.) This method is used in MPSolve, which is the reference software for approximating all roots of a polynomial to an arbitrary precision. Description. Let formula_0 be a univariate polynomial of degree "formula_1" with real or complex coefficients. Then there exist complex numbers formula_2, the roots of "formula_3", that give the factorization: formula_4 Although those numbers are unknown, upper and lower bounds for their absolute values are computable from the coefficients of the polynomial. Now one can pick "formula_1" distinct numbers in the complex plane—randomly or evenly distributed—such that their absolute values are within the same bounds. (Also, if the zeros are symmetrical, the starting points must not be exactly symmetrical along the same axis, as this can prevent convergence.) A set of such numbers is called an initial approximation of the set of roots of "formula_3". This approximation can be iteratively improved using the following procedure. Let formula_5 be the current approximations of the zeros of "formula_3". Then offset numbers formula_6 are computed as formula_7 where formula_8 is the polynomial derivative of "formula_9" evaluated at the point "formula_10". The next set of approximations of roots of formula_3 is then formula_11. One can measure the quality of the current approximation by the values of the polynomial or by the size of the offsets. Conceptually, this method uses an electrostatic analogy, modeling the approximated zeros as movable negative point charges, which converge toward the true zeros, represented by fixed positive point charges. A direct application of Newton's method to each approximated zero will often cause multiple starting points to incorrectly converge to the same root. The Aberth method avoids this by also modeling the repulsive effect the movable charges have on each other. In this way, when a movable charge has converged on a zero, the two charges will cancel out, so that other movable charges are no longer attracted to that location, encouraging them to converge to other "unoccupied" zeros. (Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.) Inside the formula of the Aberth method one can find elements of Newton's method and the Durand–Kerner method. Details for an efficient implementation, especially on the choice of good initial approximations, can be found in Bini (1996). The updates of the roots may be executed as a simultaneous Jacobi-like iteration where first all new approximations are computed from the old approximations or as a sequential Gauss–Seidel-like iteration that uses each new approximation from the time it is computed. A very similar method is the Newton–Maehly method. It computes the zeros one after another, but instead of an explicit deflation it divides by the already acquired linear factors on the fly. The Aberth method is like the Newton–Maehly method for computing the last root while pretending you have already found the other ones. Derivation from Newton's method. 
The iteration formula is the univariate Newton iteration for the function formula_12 If the values formula_13 are already close to the roots of formula_3, then the rational function formula_14 is almost linear with a dominant root close to formula_10 and poles at formula_15 that direct the Newton iteration away from the roots of "p(x)" that are close to them. That is, the corresponding basins of attraction get rather small, while the root close to formula_10 has a wide region of attraction. The Newton step formula_16 in the univariate case is the reciprocal of the logarithmic derivative formula_17 Thus, the new approximation is computed as formula_18 which is the update formula of the Aberth–Ehrlich method.
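The procedure described above translates directly into code. A minimal Python sketch (illustrative only; it uses a Jacobi-like simultaneous update, a simple Cauchy-type bound to place the starting points, and no safeguards against division by zero or multiple roots):

import cmath

def aberth(coeffs, tol=1e-12, max_iter=100):
    """Approximate all roots of p(x) = coeffs[0]*x**n + ... + coeffs[n]."""
    n = len(coeffs) - 1

    def p(x):                       # Horner evaluation of p
        result = 0
        for c in coeffs:
            result = result * x + c
        return result

    def dp(x):                      # Horner evaluation of p'
        result = 0
        for i, c in enumerate(coeffs[:-1]):
            result = result * x + c * (n - i)
        return result

    # Starting points on a circle inside the Cauchy bound, with the angles
    # offset so that they do not share the symmetry of the zeros.
    bound = 1 + max(abs(c / coeffs[0]) for c in coeffs[1:])
    roots = [0.5 * bound * cmath.exp(2j * cmath.pi * (k + 0.25) / n)
             for k in range(n)]

    for _ in range(max_iter):
        offsets = []
        for k, zk in enumerate(roots):
            ratio = p(zk) / dp(zk)                                 # Newton correction p/p'
            repulsion = sum(1 / (zk - zj) for j, zj in enumerate(roots) if j != k)
            offsets.append(ratio / (1 - ratio * repulsion))        # offset w_k from the formula above
        roots = [zk - wk for zk, wk in zip(roots, offsets)]
        if max(abs(w) for w in offsets) < tol:
            break
    return roots

# Example: x**3 - 6*x**2 + 11*x - 6 = (x - 1)(x - 2)(x - 3).
print(sorted(r.real for r in aberth([1, -6, 11, -6])))   # approximately [1.0, 2.0, 3.0]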
[ { "math_id": 0, "text": " p(x)=p_nx^n+p_{n-1}x^{n-1}+\\cdots+p_1x+p_0 " }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "z^*_1,\\,z^*_2,\\dots,z^*_n" }, { "math_id": 3, "text": "p(x)" }, { "math_id": 4, "text": "p(x)=p_n\\cdot(x-z^*_1)\\cdot(x-z^*_2)\\cdots(x-z^*_n)." }, { "math_id": 5, "text": "z_1,\\dots,z_n\\in\\mathbb C" }, { "math_id": 6, "text": "w_1,\\dots,w_n\\in\\mathbb C" }, { "math_id": 7, "text": "w_k=\\frac{\\frac{p(z_k)}{p'(z_k)}}{1-\\frac{p(z_k)}{p'(z_k)}\\cdot \\sum_{j\\ne k}\\frac1{z_k-z_j}}," }, { "math_id": 8, "text": "p'(z_k)" }, { "math_id": 9, "text": "p" }, { "math_id": 10, "text": "z_k" }, { "math_id": 11, "text": " z_1-w_1,\\dots,z_n-w_n " }, { "math_id": 12, "text": "F(x)=\\frac{p(x)}{\\prod_{j=1;\\,j\\ne k}^n(x-z_j)}" }, { "math_id": 13, "text": "z_1,\\dots,z_n" }, { "math_id": 14, "text": "F(x)" }, { "math_id": 15, "text": "z_1,\\dots,z_{k-1},z_{k+1},\\dots,z_n" }, { "math_id": 16, "text": "\\tfrac{F(x)}{F'(x)}" }, { "math_id": 17, "text": "\\begin{align}\n \\frac{F'(x)}{F(x)}\n &= \\frac{d}{dx}\\ln|F(x)|\\\\\n &= \\frac{d}{dx}\\big(\\ln|p(x)|-\\sum_{j=1;\\,j\\ne k}^n\\ln|x-z_j|\\big)\\\\\n &= \\frac{p'(x)}{p(x)}-\\sum_{j=1;\\,j\\ne k}^n\\frac1{x-z_j}\n\\end{align}\n" }, { "math_id": 18, "text": "z_k'=z_k-\\frac{F(z_k)}{F'(z_k)}=z_k-\\frac1{\\frac{p'(z_k)}{p(z_k)}-\\sum_{j=1;\\,j\\ne k}^n\\frac1{z_k-z_j}}\\,," } ]
https://en.wikipedia.org/wiki?curid=7378088
73781308
Wood–Anderson seismometer
Instrument for measuring strength of earthquakes The Wood–Anderson seismometer (also known as the Wood–Anderson seismograph) is a torsion seismometer developed in the United States by Harry O. Wood and John August Anderson in the 1920s to record local earthquakes in southern California. It photographically records the horizontal motion. The seismometer uses a pendulum of 0.8 g, its period is 0.8 seconds, its magnification is 2,800 times, and its damping constant is 0.8. Charles Francis Richter developed the Richter magnitude scale using the Wood–Anderson seismometer. Overview. In 1908, geologist Grove K. Gilbert paid Harry Wood $1,000 to draft a map of potentially active faults in northern California, and several years later Andrew Lawson assigned Wood to oversee the University's seismometers, where attention was focused on local earthquakes as well as the distant events that were used (especially by European scientists like Beno Gutenberg) to study the attributes of the Earth's interior. Seismometers that were in use up until that time had been developed and optimized for detecting the long-period seismic waves from distant earthquakes and did not detect local events well. Wood left Berkeley in 1912 and spent several years researching volcano seismology in Hawaii, where he made contact with Arthur L. Day, the director of the Carnegie Institution's geophysical laboratory, who was also conducting volcanological research there. Day would serve as Wood's mentor; Wood took his advice and went to work at the Bureau of Standards in Washington, D.C., where he developed a relationship with George Ellery Hale, the director of Carnegie's Mount Wilson Observatory in Pasadena. In March 1921, the Carnegie Institution accepted a proposal from Wood to provide financing for a long-duration program of seismological research in Southern California. As a researcher for the Institute, Wood worked in a partnership with John A. Anderson (an instrument designer and astrophysicist from the Mount Wilson Observatory) to pursue the development of a seismometer that could record the short-period waves from local earthquakes. Their instrument would require the ability to measure seismic waves with periods from 0.5 to 2.0 seconds, which were considerably shorter than what the existing units were able to detect. In September 1923, with the successful completion of what became known as the Wood–Anderson torsion seismometer, the focus became establishing a network of the instruments throughout the region that would be able to pinpoint earthquake epicenters and eventually allow mapping of the corresponding fault zones. Wood suggested that the Carnegie Institute establish a small network of the units at five locations throughout the region (Pasadena, Mount Wilson, Riverside, Santa Catalina Island, and Fallbrook) and the Institute agreed to move forward with the proposal. Richter magnitude scale. Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi-Forel scale. ("Size" is used in the sense of the quantity of energy released, not the size of the area affected by shaking, though higher-energy earthquakes do tend to affect a wider area, depending on the local geology.) In 1883 John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1889 E. 
von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo. In the 1920s Harry Wood and John Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institute, a network of seismographs stretching across Southern California. He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves. In 1931 Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method and then, using data collected by his colleague Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes. To produce a practical method of assigning an absolute measure of magnitude required additional developments. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each step represents a tenfold increase of amplitude, similar to the magnitude scale used by astronomers for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km. The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 km) a maximum amplitude of 1 micron (1 μm, or 0.001 millimeters) on a seismogram recorded by a Wood–Anderson torsion seismometer. Finally, Richter calculated a table of distance corrections, in that for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology. When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale. "Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol ML, to distinguish it from two other scales they had developed, the surface wave magnitude (MS) and body wave magnitude (MB) scales. The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs (adjustments are included to compensate for the variation in the distance between the various seismographs and the epicenter of the earthquake). The original formula is: formula_0 where A is the maximum excursion of the Wood–Anderson seismograph, the empirical function A0 depends only on the epicentral distance of the station, formula_1. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the ML value. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Div col/styles.css"/&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
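The defining formula lends itself to a small worked example. The Python sketch below (illustrative only) uses just the calibration anchor implied by the definition above, with the amplitude expressed in microns, rather than Richter's full distance-correction table:

import math

def local_magnitude(amplitude_microns, minus_log_a0=0.0):
    """Richter local magnitude ML = log10(A) - log10(A0(delta)).

    amplitude_microns: maximum Wood-Anderson trace amplitude, in microns.
    minus_log_a0: empirical distance correction -log10(A0(delta)); with the
        amplitude in microns the calibration above fixes it to 0.0 at the
        100 km reference distance, and values for other distances come from
        Richter's empirical table, which is not reproduced here.
    """
    return math.log10(amplitude_microns) + minus_log_a0

print(local_magnitude(1))       # 1 micron at 100 km -> ML = 0.0
print(local_magnitude(1000))    # 1 mm at 100 km     -> ML = 3.0

As noted above, a network estimate would then average the station values after applying station-specific corrections.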
[ { "math_id": 0, "text": "M_\\mathrm{L} = \\log_{10} A - \\log_{10} A_\\mathrm{0}(\\delta) = \\log_{10} [A / A_\\mathrm{0}(\\delta)],\\ " }, { "math_id": 1, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=73781308