Dataset columns: id (string, 2–8 characters), title (string, 1–130 characters), text (string, 0–252k characters), formulas (list, 1–823 items), url (string, 38–44 characters).
6053993
Distributive category
In mathematics, a category is distributive if it has finite products and finite coproducts such that for every choice of objects formula_0, the canonical map formula_1 is an isomorphism, and for all objects formula_2, the canonical map formula_3 is an isomorphism (where 0 denotes the initial object). Equivalently, a category is distributive if for every object formula_2 the endofunctor formula_4 defined by formula_5 preserves coproducts up to isomorphisms formula_6. It follows that formula_6 and the aforementioned canonical maps are equal for each choice of objects. In particular, if the functor formula_4 has a right adjoint (i.e., if the category is cartesian closed), it necessarily preserves all colimits, and thus any cartesian closed category with finite coproducts (i.e., any bicartesian closed category) is distributive. Example. The category of sets is distributive. Let A, B, and C be sets. Then formula_7 where formula_8 denotes the coproduct in Set, namely the disjoint union, and formula_9 denotes a bijection. In the case where A, B, and C are finite sets, this result reflects the distributive property: the above sets each have cardinality formula_10. The categories Grp and Ab are not distributive, even though they have both products and coproducts. An even simpler category that has both products and coproducts but is not distributive is the category of pointed sets. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "A,B,C" }, { "math_id": 1, "text": "[\\mathit{id}_A \\times\\iota_1, \\mathit{id}_A \\times\\iota_2] : A\\!\\times\\!B \\,+ A\\!\\times\\!C \\to A\\!\\times\\!(B+C)" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "0 \\to A\\times 0" }, { "math_id": 4, "text": "A \\times -" }, { "math_id": 5, "text": "B\\mapsto A\\times B" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\begin{align}\n A\\times (B\\amalg C) \n &= \\{(a,d) \\mid a \\in A\\text{ and }d \\in B \\amalg C\\} \\\\\n &\\cong \\{(a,d) \\mid a \\in A\\text{ and }d \\in B\\} \\amalg \\{(a,d) \\mid a \\in A\\text{ and }d \\in C\\} \\\\\n &= (A \\times B) \\amalg (A \\times C)\n\\end{align}" }, { "math_id": 8, "text": "\\amalg" }, { "math_id": 9, "text": "\\cong" }, { "math_id": 10, "text": "|A|\\cdot (|B|+|C|)=|A|\\cdot|B| + |A|\\cdot|C|" } ]
https://en.wikipedia.org/wiki?curid=6053993
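As an illustration of the distributivity isomorphism in Set described above, the following sketch (added here, not part of the article; the set choices are arbitrary) builds the canonical map for small finite sets, modelling the disjoint union as tagged pairs, and checks that it is a bijection with the expected cardinality.

```python
from itertools import product

# Model the coproduct (disjoint union) in Set as tagged pairs.
def coproduct(xs, ys):
    return [(0, x) for x in xs] + [(1, y) for y in ys]

A, B, C = [1, 2], ["b"], ["c1", "c2"]

domain = coproduct(list(product(A, B)), list(product(A, C)))   # (A x B) + (A x C)
codomain = list(product(A, coproduct(B, C)))                   # A x (B + C)

# The canonical map [id_A x i1, id_A x i2] sends the tagged pair (tag, (a, x))
# to (a, (tag, x)), i.e. it injects the second component x into B + C.
canonical = {(tag, (a, x)): (a, (tag, x)) for (tag, (a, x)) in domain}

assert set(canonical.values()) == set(codomain)      # surjective onto A x (B + C)
assert len(set(canonical.values())) == len(domain)   # injective
# Cardinalities agree with |A| * (|B| + |C|) = |A|*|B| + |A|*|C|.
print(len(domain), len(codomain), len(A) * (len(B) + len(C)))
```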
60544754
Markov chain central limit theorem
In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory, but the quantity in the role taken by the variance in the classic CLT has a more complicated definition. See also the general form of Bienaymé's identity. Statement. Suppose that: formula_0 is a Markov chain with a stationary probability distribution, the initial distribution of the process (that is, the distribution of formula_1) is this stationary distribution, so that formula_2 are identically distributed, and formula_3 is a measurable real-valued function with formula_4. Now let formula_5 Then as formula_6 we have formula_7 where the decorated arrow indicates convergence in distribution. Monte Carlo Setting. The Markov chain central limit theorem can be guaranteed for functionals of general state space Markov chains under certain conditions. In particular, this can be done with a focus on Monte Carlo settings. An example of the application in an MCMC (Markov chain Monte Carlo) setting is the following: Consider a simple hard spheres model on a grid. Suppose formula_8. A proper configuration on formula_9 consists of coloring each point either black or white in such a way that no two adjacent points are white. Let formula_10 denote the set of all proper configurations on formula_9, formula_11 be the total number of proper configurations and π be the uniform distribution on formula_10 so that each proper configuration is equally likely. Suppose our goal is to calculate the typical number of white points in a proper configuration; that is, if formula_12 is the number of white points in formula_13 then we want the value of formula_14 If formula_15 and formula_16 are even moderately large then we will have to resort to an approximation to formula_17. Consider the following Markov chain on formula_10. Fix formula_18 and set formula_19 where formula_20 is an arbitrary proper configuration. Randomly choose a point formula_21 and independently draw formula_22. If formula_23 and all of the adjacent points are black then color formula_24 white, leaving all other points alone. Otherwise, color formula_24 black and leave all other points alone. Call the resulting configuration formula_25. Continuing in this fashion yields a Harris ergodic Markov chain formula_26 having formula_27 as its invariant distribution. It is now a simple matter to estimate formula_28 with formula_29. Also, since formula_10 is finite (albeit potentially large), it is well known that the chain will converge exponentially fast to formula_27, which implies that a CLT holds for formula_30. Implications. Not taking into account the additional terms in the variance which stem from correlations (e.g. serial correlations in Markov chain Monte Carlo simulations) can result in the problem of pseudoreplication when computing e.g. the confidence intervals for the sample mean. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " X_1,X_2,X_3,\\ldots " }, { "math_id": 1, "text": " X_1" }, { "math_id": 2, "text": " X_1,X_2,X_3,\\ldots" }, { "math_id": 3, "text": " g" }, { "math_id": 4, "text": " \\operatorname{var}(g(X_1)) <+\\infty." }, { "math_id": 5, "text": "\n\\begin{align}\n\\mu & = \\operatorname E(g(X_1)), \\\\\n\\widehat\\mu_n & = \\frac 1 n \\sum_{k=1}^n g(X_k)\\\\\n\\sigma^2 & := \\lim_{n\\to \\infty} \\operatorname{var}(\\sqrt{n}\\widehat\\mu_n) = \\lim_{n\\to \\infty} n \\operatorname{var}(\\widehat\\mu_n) = \\operatorname{var}(g(X_1)) + 2\\sum_{k=1}^\\infty \\operatorname{cov}( g(X_1), g(X_{1+k})).\n\\end{align}\n" }, { "math_id": 6, "text": " n \\to\\infty," }, { "math_id": 7, "text": "\n\\sqrt{n} (\\hat{\\mu}_n - \\mu) \\ \\xrightarrow{\\mathcal{D}} \\ \\text{Normal}(0, \\sigma^2),\n" }, { "math_id": 8, "text": "X = \\{1, \\ldots, n_1\\} \\times \\{1, \\ldots, n_2 \\} \\subseteq Z^2" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "\\chi" }, { "math_id": 11, "text": "N_{\\chi}(n_1, n_2)" }, { "math_id": 12, "text": "W(x)" }, { "math_id": 13, "text": "x \\in \\chi" }, { "math_id": 14, "text": "E_{\\pi}W=\\sum_{x \\in \\chi}\\frac{W(x)}{N_\\chi\\bigl(n_1,n_2\\bigr)}" }, { "math_id": 15, "text": "n_1" }, { "math_id": 16, "text": "n_2" }, { "math_id": 17, "text": "E_{\\pi}W" }, { "math_id": 18, "text": "p \\in (0, 1)" }, { "math_id": 19, "text": "X_1 = x_1" }, { "math_id": 20, "text": "x_1 \\in \\chi" }, { "math_id": 21, "text": "(x, y) \\in X" }, { "math_id": 22, "text": "U \\sim \\mathrm{Uniform}(0, 1)" }, { "math_id": 23, "text": "u \\le p" }, { "math_id": 24, "text": "(x, y)" }, { "math_id": 25, "text": "X_1" }, { "math_id": 26, "text": "\\{X_1 , X_2 , X_3 , \\ldots\\}" }, { "math_id": 27, "text": "\\pi" }, { "math_id": 28, "text": "E_{\\pi} W" }, { "math_id": 29, "text": "\\overline{w_n}=\\sum_{i=1}^{n} W(X_i)/n" }, { "math_id": 30, "text": "\\overline{w_n}" } ]
https://en.wikipedia.org/wiki?curid=60544754
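A minimal simulation sketch of the hard spheres chain described above, not taken from the article: the grid size, the value of p, the run length and the batch count are illustrative choices, and no burn-in is discarded. The asymptotic variance appearing in the Markov chain CLT is estimated with the common batch-means method.

```python
import random

# A configuration is a 0/1 grid: 1 = white, 0 = black; start from the all-black
# proper configuration.
n1, n2, p = 5, 5, 0.5
state = [[0] * n2 for _ in range(n1)]

def all_neighbors_black(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n1 and 0 <= b < n2 and state[a][b] == 1:
            return False
    return True

def step():
    # One update: pick a point uniformly, draw U ~ Uniform(0, 1), recolor as in the text.
    i, j = random.randrange(n1), random.randrange(n2)
    u = random.random()
    state[i][j] = 1 if (u <= p and all_neighbors_black(i, j)) else 0

def white_count():
    return sum(map(sum, state))

random.seed(0)
n_iter, n_batches = 100_000, 100
batch_len = n_iter // n_batches
batch_means = []
for _ in range(n_batches):
    total = 0
    for _ in range(batch_len):
        step()
        total += white_count()
    batch_means.append(total / batch_len)

w_bar = sum(batch_means) / n_batches                    # estimate of E_pi W
sigma2_hat = batch_len * sum((m - w_bar) ** 2 for m in batch_means) / (n_batches - 1)
print(f"estimated E_pi W = {w_bar:.3f}, CLT standard error = {(sigma2_hat / n_iter) ** 0.5:.3f}")
```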
60545283
Ordered exponential field
Ordered field with a function generalizing the exponential function In mathematics, an ordered exponential field is an ordered field together with a function which generalises the idea of exponential functions on the ordered field of real numbers. Definition. An exponential formula_0 on an ordered field formula_1 is a strictly increasing isomorphism of the additive group of formula_1 onto the multiplicative group of positive elements of formula_1. The ordered field formula_2 together with the additional function formula_3 is called an ordered exponential field. Formally exponential fields. A formally exponential field, also called an exponentially closed field, is an ordered field that can be equipped with an exponential formula_0. For any formally exponential field formula_1, one can choose an exponential formula_0 on formula_1 such that formula_8 for some natural number formula_9. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "K\\," }, { "math_id": 3, "text": "E\\," }, { "math_id": 4, "text": " a^x" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\mathbf{No}" }, { "math_id": 7, "text": "\\mathbb{T}^{LE}" }, { "math_id": 8, "text": "1+1/n<E(1)<n" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "E\\left(\\frac{1}{n}E^{-1}(a)\\right)^n=E(E^{-1}(a))=a" }, { "math_id": 11, "text": "a>0" }, { "math_id": 12, "text": "E(x)=a^x\\," }, { "math_id": 13, "text": " 1<a\\in K" }, { "math_id": 14, "text": "E(\\sqrt{2})=a^\\sqrt{2}" }, { "math_id": 15, "text": " 1<a " }, { "math_id": 16, "text": "E_2\\colon K\\rightarrow K^+" }, { "math_id": 17, "text": "E_2(x+y)=E_2(x)E_2(y)" }, { "math_id": 18, "text": "E_2(1)=2" }, { "math_id": 19, "text": "E_2" } ]
https://en.wikipedia.org/wiki?curid=60545283
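For concreteness (an example added here, not part of the article), the ordered field of real numbers with the usual exponential map shows the definition in action:

```latex
% E(x) = e^x is a strictly increasing isomorphism of (R, +, <) onto (R_{>0}, \cdot, <):
\[
  E(x+y) = e^{x+y} = e^x e^y = E(x)E(y), \qquad x < y \iff e^x < e^y, \qquad e^x > 0 .
\]
% With this E the normalising condition quoted above, 1 + 1/n < E(1) < n, holds for n = 3,
% since 1 + 1/3 < e < 3, so (R, e^x) is an ordered exponential field.
```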
60546
Unique factorization domain
Type of integral domain In mathematics, a unique factorization domain (UFD) (also sometimes called a factorial ring following the terminology of Bourbaki) is a ring in which a statement analogous to the fundamental theorem of arithmetic holds. Specifically, a UFD is an integral domain (a nontrivial commutative ring in which the product of any two non-zero elements is non-zero) in which every non-zero non-unit element can be written as a product of irreducible elements, uniquely up to order and units. Important examples of UFDs are the integers and polynomial rings in one or more variables with coefficients coming from the integers or from a field. Unique factorization domains appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields Definition. Formally, a unique factorization domain is defined to be an integral domain "R" in which every non-zero element "x" of "R" can be written as a product of a unit "u" and zero or more irreducible elements "p""i" of "R": "x" = "u" "p"1 "p"2 ⋅⋅⋅ "p""n" with "n" ≥ 0 and this representation is unique in the following sense: If "q"1, ..., "q""m" are irreducible elements of "R" and "w" is a unit such that "x" = "w" "q"1 "q"2 ⋅⋅⋅ "q""m" with "m" ≥ 0, then "m" = "n", and there exists a bijective map "φ" : {1, ..., "n"} → {1, ..., "m"} such that "p""i" is associated to "q""φ"("i") for "i" ∈ {1, ..., "n"}. Examples. Most rings familiar from elementary mathematics are UFDs: Properties. Some concepts defined for integers can be generalized to UFDs: Equivalent conditions for a ring to be a UFD. A Noetherian integral domain is a UFD if and only if every height 1 prime ideal is principal (a proof is given at the end). Also, a Dedekind domain is a UFD if and only if its ideal class group is trivial. In this case, it is in fact a principal ideal domain. In general, for an integral domain "A", the following conditions are equivalent: In practice, (2) and (3) are the most useful conditions to check. For example, it follows immediately from (2) that a PID is a UFD, since every prime ideal is generated by a prime element in a PID. For another example, consider a Noetherian integral domain in which every height one prime ideal is principal. Since every prime ideal has finite height, it contains a height one prime ideal (induction on height) that is principal. By (2), the ring is a UFD. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}\\left[e^{\\frac{2 \\pi i}{n}}\\right]" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\\mathbb Z[\\sqrt{-5}]" }, { "math_id": 3, "text": "a+b\\sqrt{-5}" }, { "math_id": 4, "text": "\\left(1+\\sqrt{-5}\\right)\\left(1-\\sqrt{-5}\\right)" }, { "math_id": 5, "text": "1+\\sqrt{-5}" }, { "math_id": 6, "text": "1-\\sqrt{-5}" }, { "math_id": 7, "text": " \\mathbb Q[\\sqrt{-d}]" }, { "math_id": 8, "text": "\\sin \\pi z = \\pi z \\prod_{n=1}^{\\infty} \\left(1-{{z^2}\\over{n^2}}\\right)." } ]
https://en.wikipedia.org/wiki?curid=60546
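As a small computational illustration (a sketch added here, not from the article), the classical non-example Z[√−5] shows how unique factorization can fail: 6 factors both as 2·3 and as (1+√−5)(1−√−5), and the multiplicative norm a²+5b² shows that refining these factorizations further would require an element of norm 2 or 3, which does not exist.

```python
# Elements a + b*sqrt(-5) of Z[sqrt(-5)] are represented as integer pairs (a, b).

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)    # (a + b s)(c + d s) with s^2 = -5

def norm(x):
    a, b = x
    return a * a + 5 * b * b                     # multiplicative: N(xy) = N(x) N(y)

# 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5))
assert mul((2, 0), (3, 0)) == mul((1, 1), (1, -1)) == (6, 0)

# The four factors have norms 4, 9, 6, 6. A proper (non-unit) common refinement would
# require an element of norm 2 or 3, but a^2 + 5b^2 never takes those values:
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])          # [4, 9, 6, 6]
assert not any(norm((a, b)) in (2, 3) for a in range(-4, 5) for b in range(-4, 5))
```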
6054639
Jensen's formula
Mathematical formula in complex analysis In the mathematical field known as complex analysis, Jensen's formula, introduced by Johan Jensen (1899), relates the average magnitude of an analytic function on a circle with the number of its zeros inside the circle. It forms an important statement in the study of entire functions. Formal statement. Suppose that formula_0 is an analytic function in a region in the complex plane formula_1 which contains the closed disk formula_2 of radius formula_3 about the origin, formula_4 are the zeros of formula_0 in the interior of formula_2 (repeated according to their respective multiplicity), and that formula_5. Jensen's formula states that formula_6 This formula establishes a connection between the moduli of the zeros of formula_0 in the interior of formula_2 and the average of formula_7 on the boundary circle formula_8, and can be seen as a generalisation of the mean value property of harmonic functions. Namely, if formula_0 has no zeros in formula_2, then Jensen's formula reduces to formula_9 which is the mean-value property of the harmonic function formula_10. An equivalent statement of Jensen's formula that is frequently used is formula_11 where formula_12 denotes the number of zeros of formula_0 in the disc of radius formula_13 centered at the origin. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof It suffices to prove the case for formula_14. Applications. Jensen's formula can be used to estimate the number of zeros of an analytic function in a circle. Namely, if formula_0 is a function analytic in a disk of radius formula_32 centered at formula_33 and if formula_34 is bounded by formula_35 on the boundary of that disk, then the number of zeros of formula_0 in a circle of radius formula_36 centered at the same point formula_33 does not exceed formula_37 Jensen's formula is an important statement in the study of value distribution of entire and meromorphic functions. In particular, it is the starting point of Nevanlinna theory, and it often appears in proofs of Hadamard factorization theorem, which requires an estimate on the number of zeros of an entire function. Jensen's formula is also used to prove a generalization of Paley-Wiener theorem for quasi-analytic functions with formula_38. In the field of control theory (in particular: spectral factorization methods) this generalization is often referred to as the Paley–Wiener condition. Generalizations. Jensen's formula may be generalized for functions which are merely meromorphic on formula_2. Namely, assume that formula_39 where formula_18 and formula_40 are analytic functions in formula_2 having zeros at formula_41 and formula_42 respectively, then Jensen's formula for meromorphic functions states that formula_43 Jensen's formula is a consequence of the more general Poisson–Jensen formula, which in turn follows from Jensen's formula by applying a Möbius transformation to formula_44. It was introduced and named by Rolf Nevanlinna. If formula_0 is a function which is analytic in the unit disk, with zeros formula_4 located in the interior of the unit disk, then for every formula_45 in the unit disk the Poisson–Jensen formula states that formula_46 Here, formula_47 is the Poisson kernel on the unit disk. If the function formula_0 has no zeros in the unit disk, the Poisson-Jensen formula reduces to formula_48 which is the Poisson formula for the harmonic function formula_10. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "\\mathbb{C}" }, { "math_id": 2, "text": "\\mathbb{D}_r" }, { "math_id": 3, "text": "r>0" }, { "math_id": 4, "text": "a_1,a_2,\\ldots,a_n" }, { "math_id": 5, "text": "f(0)\\neq 0" }, { "math_id": 6, "text": "\\log |f(0)| = -\\sum_{k=1}^n \\log \\left( \\frac{r}{|a_k|}\\right) + \\frac{1}{2\\pi} \\int_0^{2\\pi} \\log|f(re^{i\\theta})| \\, d\\theta." }, { "math_id": 7, "text": "\\log|f(z)|" }, { "math_id": 8, "text": "|z|=r" }, { "math_id": 9, "text": "\\log |f(0)| = \\frac{1}{2\\pi} \\int_0^{2\\pi} \\log|f(re^{i\\theta})| \\, d\\theta," }, { "math_id": 10, "text": "\\log |f(z)|" }, { "math_id": 11, "text": "\\frac{1}{2\\pi} \\int_0^{2\\pi} \\log |f(re^{i\\theta})| \\, d\\theta - \\log |f(0)| = \\int_0^r \\frac{n(t)}{t} \\, dt\n" }, { "math_id": 12, "text": "n(t)" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "r=1" }, { "math_id": 15, "text": "g(z) = \\frac{f(z)}{\\prod_k (z-e^{i\\theta_k})}" }, { "math_id": 16, "text": "e^{i\\theta_k}" }, { "math_id": 17, "text": "\\int_0^{2\\pi} \\ln|e^{i\\theta} -e^{i\\theta_k}| d\\theta = 2\\int_0^\\pi \\ln(2\\sin \\theta)d\\theta= 0," }, { "math_id": 18, "text": "g" }, { "math_id": 19, "text": "F(z) := \\frac{f(z)}{\\prod_{k=1}^n(z-a_k)}" }, { "math_id": 20, "text": "F" }, { "math_id": 21, "text": "B(0, 1+\\epsilon)" }, { "math_id": 22, "text": "B(0, 1)" }, { "math_id": 23, "text": "\\log |F| = Re(\\log F)" }, { "math_id": 24, "text": "\\log |F(0)|=\\frac{1}{2 \\pi} \\int_0^{2 \\pi} \\log |F(e^{i \\theta})|\\, d \\theta\n\t = \\frac{1}{2\\pi} \\int_0^{2\\pi} \\log|f(e^{i\\theta})| \\, d\\theta -\\sum_{k=1}^n \\frac{1}{2\\pi} \\int_0^{2\\pi} \\log |e^{i\\theta} - a_k| \\, d\\theta." }, { "math_id": 25, "text": "\\int_0^{2\\pi} \\log|e^{i\\theta} - a_k| \\, d\\theta" }, { "math_id": 26, "text": "\\int_0^{2\\pi} \\log|e^{i\\theta} - a_k| \\, d\\theta\n\t = \\int_0^{2\\pi} \\log|1 - a_ke^{-i\\theta}| \\, d\\theta = Re \\int_0^{2\\pi} \\log(1 - a_ke^{-i\\theta}) \\, d\\theta." }, { "math_id": 27, "text": "\\int_0^{2\\pi} \\log(1 - a_ke^{-i\\theta}) \\, d\\theta" }, { "math_id": 28, "text": "\\log (1-z)/z" }, { "math_id": 29, "text": "|a_k| < 1" }, { "math_id": 30, "text": "\\log(1-z)/z" }, { "math_id": 31, "text": "B(0, |a_k|)" }, { "math_id": 32, "text": "R" }, { "math_id": 33, "text": "z_0" }, { "math_id": 34, "text": "| f |" }, { "math_id": 35, "text": "M" }, { "math_id": 36, "text": "r<R" }, { "math_id": 37, "text": " \n\\frac{1}{\\log (R/r)} \\log \\frac{M}{|f(z_0)|}.\n" }, { "math_id": 38, "text": "r \\rightarrow 1" }, { "math_id": 39, "text": "f(z)=z^l \\frac{g(z)}{h(z)}," }, { "math_id": 40, "text": "h" }, { "math_id": 41, "text": "a_1,\\ldots,a_n \\in \\mathbb{D}_r\\setminus\\{0\\}" }, { "math_id": 42, "text": "b_1,\\ldots,b_m \\in \\mathbb{D}_r\\setminus\\{0\\}" }, { "math_id": 43, "text": "\\log \\left|\\frac{g(0)}{h(0)}\\right| = \\log \\left |r^{m-n-l} \\frac{a_1\\ldots a_n}{b_1\\ldots b_m}\\right| + \\frac{1}{2\\pi} \\int_0^{2\\pi} \\log|f(re^{i\\theta})| \\, d\\theta." }, { "math_id": 44, "text": "z" }, { "math_id": 45, "text": "z_0=r_0e^{i\\varphi_0}" }, { "math_id": 46, "text": "\\log |f(z_0)| = \\sum_{k=1}^n \\log \\left|\\frac{z_0-a_k}{1-\\bar {a}_k z_0} \\right| + \\frac{1}{2\\pi} \\int_0^{2\\pi} P_{r_0}(\\varphi_0-\\theta) \\log |f(e^{i\\theta})| \\, d\\theta." 
}, { "math_id": 47, "text": " P_{r}(\\omega)= \\sum_{n\\in\\mathbb{Z}} r^{|n|} e^{i n\\omega} " }, { "math_id": 48, "text": "\\log |f(z_0)| = \\frac{1}{2\\pi} \\int_0^{2\\pi} P_{r_0}(\\varphi_0-\\theta) \\log |f(e^{i\\theta})| \\, d\\theta," } ]
https://en.wikipedia.org/wiki?curid=6054639
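A quick numerical sanity check of Jensen's formula (an illustration added here, not part of the article; the polynomial and quadrature size are arbitrary choices): for a polynomial with known zeros and f(0) ≠ 0, the boundary average of log|f| computed by simple quadrature should equal log|f(0)| plus the contribution of the zeros inside the disk.

```python
import cmath, math

# f(z) = (z - 0.5)(z - 0.25i)(z - 2); the zeros 0.5 and 0.25i lie inside the unit disk.
zeros = [0.5, 0.25j, 2.0]
r = 1.0

def f(z):
    out = 1.0
    for a in zeros:
        out *= z - a
    return out

# Left-hand side of Jensen's formula: log|f(0)|.
lhs = math.log(abs(f(0)))

# Right-hand side: -sum over zeros inside the disk of log(r/|a_k|) plus the average of
# log|f| over the boundary circle, approximated by an equally spaced rectangle rule.
N = 4096
boundary_avg = sum(
    math.log(abs(f(r * cmath.exp(2j * math.pi * k / N)))) for k in range(N)
) / N
rhs = -sum(math.log(r / abs(a)) for a in zeros if abs(a) < r) + boundary_avg

print(lhs, rhs)              # the two sides agree to high precision
assert abs(lhs - rhs) < 1e-6
```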
6054681
Finite difference method
Class of numerical techniques In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time domain (if applicable) are discretized, or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points. Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of linear equations that can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently, which, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis. Today, FDMs are one of the most common approaches to the numerical solution of PDE, along with finite element methods. Derive difference quotient from Taylor's polynomial. For an "n"-times differentiable function, by Taylor's theorem the Taylor series expansion is given as formula_0 where "n"! denotes the factorial of "n", and "R""n"("x") is a remainder term, denoting the difference between the Taylor polynomial of degree "n" and the original function. The following is the process to derive an approximation for the first derivative of the function "f" by first truncating the Taylor polynomial plus remainder: formula_1 Dividing across by "h" gives: formula_2 Solving for formula_3: formula_4 Assuming that formula_5 is sufficiently small, the approximation of the first derivative of "f" is: formula_6 This is similar to the definition of the derivative, which is: formula_7 except that the limit towards zero is not taken (the finite difference is what gives the method its name). Accuracy and order. The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (no round-off). To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid (see image). This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner. An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity formula_8 if formula_9 refers to the exact value and formula_10 to the numerical approximation. The remainder term of the Taylor polynomial can be used to analyze local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for formula_11, which is formula_12 the dominant term of the local truncation error can be discovered. 
For example, again using the forward-difference formula for the first derivative, knowing that formula_13, formula_14 and with some algebraic manipulation, this leads to formula_15 and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is: formula_16 In this case, the local truncation error is proportional to the step sizes. The quality and duration of simulated FDM solution depends on the discretization equation selection and the step sizes (time and space steps). The data quality and simulation duration increase significantly with smaller step size. Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice. However, time steps which are too large may create instabilities and affect the data quality. The von Neumann and Courant-Friedrichs-Lewy criteria are often evaluated to determine the numerical model stability. Example: ordinary differential equation. For example, consider the ordinary differential equation formula_17 The Euler method for solving this equation uses the finite difference quotient formula_18 to approximate the differential equation by first substituting it for u'(x) then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get formula_19 The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation. Example: The heat equation. Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions formula_20 One way to numerically solve this equation is to approximate all the derivatives by finite differences. First partition the domain in space using a mesh formula_21 and in time using a mesh formula_22. Assume a uniform partition both in space and in time, so the difference between two consecutive space points will be "h" and between two consecutive time points will be "k". The points formula_23 will represent the numerical approximation of formula_24 Explicit method. Using a forward difference at time formula_25 and a second-order central difference for the space derivative at position formula_26 (FTCS) gives the recurrence equation: formula_27 This is an explicit method for solving the one-dimensional heat equation. One can obtain formula_28 from the other values this way: formula_29 where formula_30 So, with this recurrence relation, and knowing the values at time "n", one can obtain the corresponding values at time "n"+1. formula_31 and formula_32 must be replaced by the boundary conditions, in this example they are both 0. This explicit method is known to be numerically stable and convergent whenever formula_33. The numerical errors are proportional to the time step and the square of the space step: formula_34 Implicit method. Using the backward difference at time formula_35 and a second-order central difference for the space derivative at position formula_26 (The Backward Time, Centered Space Method "BTCS") gives the recurrence equation: formula_36 This is an implicit method for solving the one-dimensional heat equation. 
One can obtain formula_28 from solving a system of linear equations: formula_37 The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method as it requires solving a system of numerical equations on each time step. The errors are linear over the time step and quadratic over the space step: formula_38 Crank–Nicolson method. Finally, using the central difference at time formula_39 and a second-order central difference for the space derivative at position formula_26 ("CTCS") gives the recurrence equation: formula_40 This formula is known as the Crank–Nicolson method. One can obtain formula_28 from solving a system of linear equations: formula_41 The scheme is always numerically stable and convergent but usually more numerically intensive as it requires solving a system of numerical equations on each time step. The errors are quadratic over both the time step and the space step: formula_42 Comparison. To summarize, usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. For larger time steps, the implicit scheme works better since it is less computationally demanding. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. Here is an example. The figures below present the solutions given by the above methods to approximate the heat equation formula_43 with the boundary condition formula_44 The exact solution is formula_45 Example: The Laplace operator. The (continuous) Laplace operator in formula_46-dimensions is given by formula_47. The discrete Laplace operator formula_48 depends on the dimension formula_46. In 1D the Laplace operator is approximated as formula_49 This approximation is usually expressed via the following stencil formula_50 and which represents a symmetric, tridiagonal matrix. For an equidistant grid one gets a Toeplitz matrix. The 2D case shows all the characteristics of the more general n-dimensional case. Each second partial derivative needs to be approximated similar to the 1D case formula_51 which is usually given by the following stencil formula_52 Consistency. Consistency of the above-mentioned approximation can be shown for highly regular functions, such as formula_53. The statement is formula_54 To prove this, one needs to substitute Taylor Series expansions up to order 3 into the discrete Laplace operator. Properties. Subharmonic. Similar to continuous subharmonic functions one can define "subharmonic functions" for finite-difference approximations formula_55 formula_56 Mean value. One can define a general stencil of "positive type" via formula_57 If formula_58 is (discrete) subharmonic then the following" mean value property" holds formula_59 where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type. A similar mean value property also holds for the continuous case. Maximum principle. For a (discrete) subharmonic function formula_58 the following holds formula_60 where formula_61 are discretizations of the continuous domain formula_62, respectively the boundary formula_63. A similar maximum principle also holds for the continuous case. The SBP-SAT method. The SBP-SAT ("summation by parts - simultaneous approximation term") method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high order finite differences. 
The method is based on finite differences where the differentiation operators exhibit summation-by-parts properties. Typically, these operators consist of differentiation matrices with central difference stencils in the interior with carefully chosen one-sided boundary stencils designed to mimic integration-by-parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are "pulled" towards the desired conditions rather than exactly fulfilled. If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODE's will exhibit similar energy behavior as the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth order Runge-Kutta method, is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher order finite difference methods, in contrast to for example the injection method, which typically will not be stable if high order differentiation operators are used. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(x_0 + h) = f(x_0) + \\frac{f'(x_0)}{1!}h + \\frac{f^{(2)}(x_0)}{2!}h^2 + \\cdots + \\frac{f^{(n)}(x_0)}{n!}h^n + R_n(x)," }, { "math_id": 1, "text": "f(x_0 + h) = f(x_0) + f'(x_0)h + R_1(x)." }, { "math_id": 2, "text": "{f(x_0+h)\\over h} = {f(x_0)\\over h} + f'(x_0)+{R_1(x)\\over h} " }, { "math_id": 3, "text": " f'(x_0) " }, { "math_id": 4, "text": "f'(x_0) = {f(x_0 +h)-f(x_0)\\over h} - {R_1(x)\\over h}." }, { "math_id": 5, "text": "R_1(x)" }, { "math_id": 6, "text": "f'(x_0)\\approx {f(x_0+h)-f(x_0)\\over h}." }, { "math_id": 7, "text": "f'(x_0)=\\lim_{h\\to 0}\\frac{f(x_0+h)-f(x_0)}{h}." }, { "math_id": 8, "text": "f'(x_i) - f'_i" }, { "math_id": 9, "text": "f'(x_i)" }, { "math_id": 10, "text": "f'_i" }, { "math_id": 11, "text": "f(x_0 + h)" }, { "math_id": 12, "text": "\n R_n(x_0 + h) = \\frac{f^{(n+1)}(\\xi)}{(n+1)!} (h)^{n+1} \\, , \\quad x_0 < \\xi < x_0 + h,\n" }, { "math_id": 13, "text": "f(x_i)=f(x_0+i h)" }, { "math_id": 14, "text": " f(x_0 + i h) = f(x_0) + f'(x_0)i h + \\frac{f''(\\xi)}{2!} (i h)^{2}, " }, { "math_id": 15, "text": " \\frac{f(x_0 + i h) - f(x_0)}{i h} = f'(x_0) + \\frac{f''(\\xi)}{2!} i h, " }, { "math_id": 16, "text": " \\frac{f(x_0 + i h) - f(x_0)}{i h} = f'(x_0) + O(h). " }, { "math_id": 17, "text": " u'(x) = 3u(x) + 2. " }, { "math_id": 18, "text": "\\frac{u(x+h) - u(x)}{h} \\approx u'(x)" }, { "math_id": 19, "text": " u(x+h) \\approx u(x) + h(3u(x)+2). " }, { "math_id": 20, "text": " \\begin{cases}\nU_t = U_{xx} \\\\\nU(0,t) = U(1,t) = 0 & \\text{(boundary condition)} \\\\\nU(x,0) = U_0(x) & \\text{(initial condition)}\n\\end{cases} " }, { "math_id": 21, "text": " x_0, \\dots, x_J " }, { "math_id": 22, "text": " t_0, \\dots, t_N " }, { "math_id": 23, "text": " u(x_j,t_n) = u_{j}^n " }, { "math_id": 24, "text": " u(x_j, t_n). " }, { "math_id": 25, "text": " t_n " }, { "math_id": 26, "text": " x_j " }, { "math_id": 27, "text": " \\frac{u_{j}^{n+1} - u_{j}^{n}}{k} = \\frac{u_{j+1}^n - 2u_{j}^n + u_{j-1}^n}{h^2}. " }, { "math_id": 28, "text": " u_j^{n+1} " }, { "math_id": 29, "text": " u_{j}^{n+1} = (1-2r)u_{j}^{n} + ru_{j-1}^{n} + ru_{j+1}^{n} " }, { "math_id": 30, "text": " r=k/h^2. " }, { "math_id": 31, "text": " u_0^n " }, { "math_id": 32, "text": " u_J^n " }, { "math_id": 33, "text": " r\\le 1/2 " }, { "math_id": 34, "text": " \\Delta u = O(k)+O(h^2) " }, { "math_id": 35, "text": " t_{n+1} " }, { "math_id": 36, "text": " \\frac{u_{j}^{n+1} - u_{j}^{n}}{k} =\\frac{u_{j+1}^{n+1} - 2u_{j}^{n+1} + u_{j-1}^{n+1}}{h^2}. " }, { "math_id": 37, "text": " (1+2r)u_j^{n+1} - r u_{j-1}^{n+1} - ru_{j+1}^{n+1}= u_j^n " }, { "math_id": 38, "text": " \\Delta u = O(k)+O(h^2). " }, { "math_id": 39, "text": " t_{n+1/2} " }, { "math_id": 40, "text": " \\frac{u_j^{n+1} - u_j^{n}}{k} = \\frac{1}{2} \\left(\\frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}+\\frac{u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}}{h^2}\\right). " }, { "math_id": 41, "text": " (2+2r)u_j^{n+1} - ru_{j-1}^{n+1} - ru_{j+1}^{n+1}= (2-2r)u_j^n + ru_{j-1}^n + ru_{j+1}^n " }, { "math_id": 42, "text": " \\Delta u = O(k^2)+O(h^2). " }, { "math_id": 43, "text": "U_t = \\alpha U_{xx}, \\quad \\alpha = \\frac{1}{\\pi^2}," }, { "math_id": 44, "text": "U(0, t) = U(1, t) = 0." }, { "math_id": 45, "text": "U(x, t) = \\frac{1}{\\pi^2}e^{-t}\\sin(\\pi x)." 
}, { "math_id": 46, "text": " n " }, { "math_id": 47, "text": " \\Delta u(x) = \\sum_{i=1}^n \\partial_i^2 u(x) " }, { "math_id": 48, "text": " \\Delta_h u " }, { "math_id": 49, "text": "\n \\Delta u(x) = u''(x)\n \\approx \\frac{u(x-h)-2u(x)+u(x+h)}{h^2 }\n =: \\Delta_h u(x) \\,.\n" }, { "math_id": 50, "text": "\n \\Delta_h = \\frac{1}{h^2} \n \\begin{bmatrix}\n 1 & -2 & 1\n \\end{bmatrix}\n" }, { "math_id": 51, "text": "\n \\begin{align}\n \\Delta u(x,y) &= u_{xx}(x,y)+u_{yy}(x,y) \\\\\n &\\approx \\frac{u(x-h,y)-2u(x,y)+u(x+h,y) }{h^2}\n + \\frac{u(x,y-h) -2u(x,y) +u(x,y+h)}{h^2} \\\\\n &= \\frac{u(x-h,y)+u(x+h,y) -4u(x,y)+u(x,y-h)+u(x,y+h)}{h^2} \\\\\n &=: \\Delta_h u(x, y) \\,,\n \\end{align}\n" }, { "math_id": 52, "text": "\n \\Delta_h = \n \\frac{1}{h^2} \n \\begin{bmatrix}\n & 1 \\\\\n 1 & -4 & 1 \\\\\n & 1 \n \\end{bmatrix}\n \\,.\n" }, { "math_id": 53, "text": " u \\in C^4(\\Omega) " }, { "math_id": 54, "text": "\n \\Delta u - \\Delta_h u = \\mathcal{O}(h^2) \\,.\n" }, { "math_id": 55, "text": "u_h" }, { "math_id": 56, "text": "\n -\\Delta_h u_h \\leq 0 \\,.\n" }, { "math_id": 57, "text": " \n \\begin{bmatrix}\n & \\alpha_N \\\\\n \\alpha_W & -\\alpha_C & \\alpha_E \\\\\n & \\alpha_S\n \\end{bmatrix}\n \\,, \\quad \\alpha_i >0\\,, \\quad \\alpha_C = \\sum_{i\\in \\{N,E,S,W\\}} \\alpha_i \\,.\n" }, { "math_id": 58, "text": " u_h " }, { "math_id": 59, "text": "\n u_h(x_C) \\leq \\frac{ \\sum_{i\\in \\{N,E,S,W\\}} \\alpha_i u_h(x_i) }{ \\sum_{i\\in \\{N,E,S,W\\}} \\alpha_i } \\,,\n" }, { "math_id": 60, "text": "\n \\max_{\\Omega_h} u_h \\leq \\max_{\\partial \\Omega_h} u_h \\,,\n" }, { "math_id": 61, "text": " \\Omega_h, \\partial\\Omega_h " }, { "math_id": 62, "text": " \\Omega " }, { "math_id": 63, "text": " \\partial \\Omega " } ]
https://en.wikipedia.org/wiki?curid=6054681
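The explicit (FTCS) scheme above is easy to try directly on the article's worked heat-equation example with α = 1/π² and exact solution (1/π²)e^(−t) sin(πx). The sketch below is illustrative: the grid sizes and the choice r = 0.4 are arbitrary, but the stability bound r ≤ 1/2 quoted above is respected.

```python
import math

# Explicit (FTCS) scheme for u_t = alpha * u_xx on [0, 1] with u(0, t) = u(1, t) = 0,
# alpha = 1/pi^2, and exact solution u(x, t) = (1/pi^2) e^(-t) sin(pi x).
alpha = 1.0 / math.pi ** 2
J = 50                                 # number of space intervals
h = 1.0 / J
k = 0.4 * h ** 2 / alpha               # time step chosen so that r = 0.4
r = alpha * k / h ** 2                 # must stay <= 1/2 for stability

x = [j * h for j in range(J + 1)]
u = [math.sin(math.pi * xj) / math.pi ** 2 for xj in x]     # initial condition u(x, 0)

t = 0.0
while t < 1.0:
    u = [0.0] + [
        (1 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])      # FTCS update of interior points
        for j in range(1, J)
    ] + [0.0]
    t += k

exact = [math.exp(-t) * math.sin(math.pi * xj) / math.pi ** 2 for xj in x]
err = max(abs(a - b) for a, b in zip(u, exact))
print(f"r = {r:.2f}, max error at t = {t:.3f}: {err:.2e}")  # truncation error is O(k) + O(h^2)
```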
60549
Orrery
Mechanical model of the Solar System An orrery is a mechanical model of the Solar System that illustrates or predicts the relative positions and motions of the planets and moons, usually according to the heliocentric model. It may also represent the relative sizes of these bodies; however, since accurate scaling is often not practical due to the actual large ratio differences, a scaled-down approximation may be used instead. The Greeks had working planetaria, but the first modern example was produced c. 1712 by John Rowley. He named it orrery for Charles Boyle, 4th Earl of Orrery (in County Cork, Ireland). The plaque on it reads "Orrery invented by Graham 1700 improved by Rowley and presented by him to John [sic] Earl of Orrery after whom it was named by at the suggestion of Richard Steele." They are typically driven by a clockwork mechanism with a globe representing the Sun at the centre, and with a planet at the end of each of the arms. History. Ancient. The Antikythera mechanism, discovered in 1901 in a wreck off the Greek island of Antikythera in the Mediterranean Sea, exhibited the diurnal motions of the Sun, Moon, and the five planets known to the ancient Greeks. It has been dated between 205 to 87 BC. The mechanism is considered one of the first orreries. It was geocentric and used as a mechanical calculator to calculate astronomical positions. Cicero, the Roman philosopher and politician writing in the first century BC, has references describing planetary mechanical models. According to him, the Greek polymaths Thales and Posidonius both constructed a device modeling celestial motion. Early Modern. In 1348, Giovanni Dondi built the first known clock driven mechanism of the system. It displays the ecliptic position of the Moon, Sun, Mercury, Venus, Mars, Jupiter and Saturn according to the complicated geocentric Ptolemaic planetary theories. The clock itself is lost, but Dondi left a complete description of its astronomic gear trains. As late as 1650, P. Schirleus built a geocentric planetarium with the Sun as a planet, and with Mercury and Venus revolving around the Sun as its moons. At the court of William IV, Landgrave of Hesse-Kassel two complicated astronomic clocks were built in 1561 and 1563–1568. These use four sides to show the ecliptical positions of the Sun, Mercury, Venus, Mars, Jupiter, Saturn, the Moon, Sun and Dragon (Nodes of the Moon) according to Ptolemy, a calendar, the sunrise and sunset, and an automated celestial sphere with an animated Sun symbol which, for the first time on a celestial globe, shows the real position of the Sun, including the equation of time. The clocks are now on display in Kassel at the Astronomisch-Physikalisches Kabinett and in Dresden at the Mathematisch-Physikalischer Salon. In "De revolutionibus orbium coelestium", published in Nuremberg in 1543, Nicolaus Copernicus challenged the Western teaching of a geocentric universe in which the Sun revolved daily around the Earth. He observed that some Greek philosophers such as Aristarchus of Samos had proposed a heliocentric universe. This simplified the apparent epicyclic motions of the planets, making it feasible to represent the planets' paths as simple circles. This could be modeled by the use of gears. Tycho Brahe's improved instruments made precise observations of the skies (1576–1601), and from these Johannes Kepler (1621) deduced that planets orbited the Sun in ellipses. In 1687 Isaac Newton explained the cause of elliptic motion in his theory of gravitation. Modern. 
There is an orrery built by clock makers George Graham and Thomas Tompion dated c. 1710 in the History of Science Museum, Oxford. Graham gave the first model, or its design, to the celebrated instrument maker John Rowley of London to make a copy for Prince Eugene of Savoy. Rowley was commissioned to make another copy for his patron Charles Boyle, 4th Earl of Orrery, from which the device took its name in English. This model was presented to Charles' son John, later the 5th Earl of Cork and 5th Earl of Orrery. Independently, Christiaan Huygens published in 1703 details of a heliocentric planetary machine which he had built while living in Paris between 1665 and 1681. He calculated the gear trains needed to represent a year of 365.242 days, and used that to produce the cycles of the principal planets. Joseph Wright's painting "A Philosopher giving a Lecture on the Orrery" (c. 1766), which hangs in the Derby Museum and Art Gallery, depicts a group listening to a lecture by a natural philosopher. The Sun in a brass orrery provides the only light in the room. The orrery depicted in the painting has rings, which give it an appearance similar to that of an armillary sphere. The demonstration was thereby able to depict eclipses. To put this in chronological context, in 1762 John Harrison's marine chronometer first enabled accurate measurement of longitude. In 1766, astronomer Johann Daniel Titius first demonstrated that the mean distance of each planet from the Sun could be represented by the following progression: formula_0 That is, 0.4, 0.7, 1.0, 1.6, 2.8, ... The numbers refer to astronomical units, the mean distance between Sun and Earth, which is 1.496 × 10^8 km (93 × 10^6 miles). The Derby Orrery does not show mean distance, but demonstrates the relative planetary movements. The Eisinga Planetarium was built from 1774 to 1781 by Eise Eisinga in his home in Franeker, in the Netherlands. It displays the planets across the width of a room's ceiling, and has been in operation almost continually since it was created. This orrery is a planetarium in both senses of the word: a complex machine showing planetary orbits, and a theatre for depicting the planets' movement. Eisinga's house was bought by the Dutch Royal family, who gave him a pension. In 1764, Benjamin Martin devised a new type of planetary model, in which the planets were carried on brass arms leading from a series of concentric or coaxial tubes. With this construction it was difficult to make the planets revolve, and to get the moons to turn around the planets. Martin suggested that the conventional orrery should consist of three parts: the planetarium where the planets revolved around the Sun, the tellurion (also "tellurian" or "tellurium") which showed the inclined axis of the Earth and how it revolved around the Sun, and the lunarium which showed the eccentric rotations of the Moon around the Earth. In one orrery, these three motions could be mounted on a common table, separately using the central spindle as a prime mover.
An orrery is used to demonstrate the motion of the planets, while a mechanical device used to predict eclipses and transits is called an astrarium. An orrery should properly include the Sun, the Earth and the Moon (plus optionally other planets). A model that only includes the Earth, the Moon, and the Sun is called a tellurion or tellurium, and one which only includes the Earth and the Moon is a lunarium. A jovilabe is a model of Jupiter and its moons. A planetarium will show the orbital period of each planet and the "rotation rate", as shown in the table above. A tellurion will show the Earth with the Moon revolving around the Sun. It will use the angle of "inclination of the equator" from the table above to show how it rotates around its own axis. It will show the Earth's Moon, rotating around the Earth. A lunarium is designed to show the complex motions of the Moon as it revolves around the Earth. Orreries are usually not built to scale. Human orreries, where humans move about as the planets, have also been constructed, but most are temporary. There is a permanent human orrery at Armagh Observatory in Northern Ireland, which has the six ancient planets, Ceres, and comets Halley and Encke. Uranus and beyond are also shown, but in a fairly limited way. Another is at Sky's the Limit Observatory and Nature Center in Twentynine Palms, California; it is a true to scale (20 billion to one), true to position (accurate to within four days) human orrery. The first four planets are relatively close to one another, but the next four require a certain amount of hiking in order to visit them. A census of all permanent human orreries has been initiated by the French group F-HOU with a new effort to study their impact for education in schools. A map of known human orreries is available. A normal mechanical clock could be used to produce an extremely simple orrery to demonstrate the principle, with the Sun in the centre, Earth on the minute hand and Jupiter on the hour hand; Earth would make 12 revolutions around the Sun for every 1 revolution of Jupiter. As Jupiter's actual year is 11.86 Earth years long, the model would lose accuracy rapidly. Projection. Many planetariums have a projection orrery, which projects onto the dome of the planetarium a Sun with either dots or small images of the planets. These usually are limited to the planets from Mercury to Saturn, although some include Uranus. The light sources for the planets are projected onto mirrors which are geared to a motor which drives the images on the dome. Typically the Earth will circle the Sun in one minute, while the other planets will complete an orbit in time periods proportional to their actual motion. Thus Venus, which takes 224.7 days to orbit the Sun, will take 37 seconds to complete an orbit on an orrery, and Jupiter will take 11 minutes, 52 seconds. Some planetariums have taken advantage of this to use orreries to simulate planets and their moons. Thus Mercury orbits the Sun in 0.24 of an Earth year, while Phobos and Deimos orbit Mars in a similar 4:1 time ratio. Planetarium operators wishing to show this have placed a red cap on the Sun (to make it resemble Mars) and turned off all the planets but Mercury and Earth. Similar approximations can be used to show Pluto and its five moons. Notable examples. Shoemaker John Fulton of Fenwick, Ayrshire, built three between 1823 and 1833. The last is in Glasgow's Kelvingrove Art Gallery and Museum. 
The Eisinga Planetarium built by a wool carder named Eise Eisinga in his own living room, in the small city of Franeker in Friesland, is in fact an orrery. It was constructed between 1774 and 1781. The base of the model faces down from the ceiling of the room, with most of the mechanical works in the space above the ceiling. It is driven by a pendulum clock, which has 9 weights or ponds. The planets move around the model in real time. An innovative concept is to have people play the role of the moving planets and other Solar System objects. Such a model, called a human orrery, has been laid out at the Armagh Observatory. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{4+0}{10},\\frac{4+3}{10},\\frac{4+6}{10},\\frac{4+12}{10},\\frac{4+24}{10},..." } ]
https://en.wikipedia.org/wiki?curid=60549
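Two of the numerical remarks in this article are easy to reproduce with a short calculation (a sketch added here, not from the article): the Titius progression of mean distances and the timing of a projection orrery in which the Earth completes its orbit in one minute. Only periods quoted in the text are used; 365.25 days per Earth year is assumed.

```python
# Titius progression of mean planetary distances, in astronomical units.
titius = [(4 + n) / 10 for n in (0, 3, 6, 12, 24)]
print(titius)                                    # [0.4, 0.7, 1.0, 1.6, 2.8]

earth_year_days = 365.25
scale_seconds = 60.0                             # Earth's projected orbit takes one minute
periods_days = {
    "Mercury": 0.24 * earth_year_days,           # "0.24 of an Earth year"
    "Venus": 224.7,                              # "224.7 days"
    "Earth": earth_year_days,
    "Jupiter": 11.86 * earth_year_days,          # "11.86 Earth years"
}
for name, days in periods_days.items():
    seconds = days / earth_year_days * scale_seconds
    print(f"{name:8s} projected orbit: {seconds:6.1f} s")
# Venus comes out near 37 s and Jupiter near 11 min 52 s, matching the figures above.
```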
60559003
Hurwitz scheme
In algebraic geometry, the Hurwitz scheme formula_0 is the scheme parametrizing pairs (formula_1) where "C" is a smooth curve of genus "g" and π has degree "d". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{H}_{d, g}" }, { "math_id": 1, "text": "C, \\pi: C \\to \\mathbf{P}^1" } ]
https://en.wikipedia.org/wiki?curid=60559003
6056440
Cerebroside-sulfatase
Cerebroside-sulfatase (EC 3.1.6.8, arylsulfatase A, cerebroside sulfate sulfatase) is an enzyme with systematic name cerebroside-3-sulfate 3-sulfohydrolase. This enzyme catalyses the following chemical reaction: a cerebroside 3-sulfate + H2O formula_0 a cerebroside + sulfate. This enzyme hydrolyses galactose-3-sulfate residues in a number of lipids. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=6056440
60567021
Binary regression
In statistics, specifically regression analysis, a binary regression estimates a relationship between one or more explanatory variables and a single output binary variable. Generally the probability of the two alternatives is modeled, instead of simply outputting a single value, as in linear regression. Binary regression is usually analyzed as a special case of binomial regression, with a single outcome (formula_0), and one of the two alternatives considered as "success" and coded as 1: the value is the count of successes in 1 trial, either 0 or 1. The most common binary regression models are the logit model (logistic regression) and the probit model (probit regression). Applications. Binary regression is principally applied either for prediction (binary classification), or for estimating the association between the explanatory variables and the output. In economics, binary regressions are used to model binary choice. Interpretations. Binary regression models can be interpreted as latent variable models, together with a measurement model; or as probabilistic models, directly modeling the probability. Latent variable model. The latent variable interpretation has traditionally been used in bioassay, yielding the probit model, where normal variance and a cutoff are assumed. The latent variable interpretation is also used in item response theory (IRT). Formally, the latent variable interpretation posits that the outcome "y" is related to a vector of explanatory variables "x" by formula_1 where formula_2 and formula_3, "β" is a vector of parameters and "G" is a probability distribution. This model can be applied in many economic contexts. For instance, the outcome can be the decision of a manager whether to invest in a program, formula_4 is the expected net discounted cash flow and "x" is a vector of variables which can affect the cash flow of this program. Then the manager will invest only when she expects the net discounted cash flow to be positive. Often, the error term formula_5 is assumed to follow a normal distribution conditional on the explanatory variables "x". This generates the standard probit model. Probabilistic model. The simplest direct probabilistic model is the logit model, which models the log-odds as a linear function of the explanatory variable or variables. The logit model is "simplest" in the sense of generalized linear models (GLIM): the log-odds are the natural parameter for the exponential family of the Bernoulli distribution, and thus it is the simplest to use for computations. Another direct probabilistic model is the linear probability model, which models the probability itself as a linear function of the explanatory variables. A drawback of the linear probability model is that, for some values of the explanatory variables, the model will predict probabilities less than zero or greater than one. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "n = 1" }, { "math_id": 1, "text": "y=1 [y^*>0]" }, { "math_id": 2, "text": "y^*=x\\beta +\\varepsilon " }, { "math_id": 3, "text": "\\varepsilon \\mid x\\sim G" }, { "math_id": 4, "text": "y^*" }, { "math_id": 5, "text": "\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=60567021
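A minimal simulation of the latent-variable (probit) reading described above, added here as a sketch: drawing y* = xβ + ε with standard normal errors and thresholding at zero reproduces P(y = 1 | x) = Φ(xβ). The coefficient β and the grid of x values are arbitrary illustrative choices.

```python
import math, random

beta = 0.8

def Phi(z):                                    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)
n = 200_000
for x in (-1.0, 0.0, 0.5, 1.5):
    # y = 1[y* > 0] with y* = x*beta + eps, eps ~ N(0, 1)
    hits = sum(1 for _ in range(n) if x * beta + random.gauss(0.0, 1.0) > 0)
    print(f"x = {x:4.1f}: simulated P(y=1|x) = {hits / n:.3f}, Phi(x*beta) = {Phi(x * beta):.3f}")
```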
60569
Rectifier
Electrical device that converts AC to DC A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction. The reverse operation (converting DC to AC) is performed by an inverter. The process is known as "rectification", since it "straightens" the direction of current. Physically, rectifiers take a number of forms, including vacuum tube diodes, wet chemical cells, mercury-arc valves, stacks of copper and selenium oxide plates, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically, even synchronous electromechanical switches and motor-generator sets have been used. Early radio receivers, called crystal radios, used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or "crystal detector". Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct current power transmission systems. Rectification may serve in roles other than to generate direct current for use as a source of power. As noted, rectifiers can serve as detectors of radio signals. In gas heating systems flame rectification is used to detect the presence of a flame. Depending on the type of alternating current supply and the arrangement of the rectifier circuit, the output voltage may require additional smoothing to produce a uniform steady voltage. Many applications of rectifiers, such as power supplies for radio, television and computer equipment, require a "steady" constant DC voltage (as would be produced by a battery). In these applications the output of the rectifier is smoothed by an electronic filter, which may be a capacitor, choke, or set of capacitors, chokes and resistors, possibly followed by a voltage regulator to produce a steady voltage. More complex circuitry that performs the opposite function, that is converting DC to AC, is called an inverter. Rectifier devices. Before the development of silicon semiconductor rectifiers, vacuum tube thermionic diodes and copper oxide- or selenium-based metal rectifier stacks were used. The first vacuum tube diodes designed for rectifier application in power supply circuits were introduced in April 1915 by Saul Dushman of General Electric. With the introduction of semiconductor electronics, vacuum tube rectifiers became obsolete, except for some enthusiasts of vacuum tube audio equipment. For power rectification from very low to very high current, semiconductor diodes of various types (junction diodes, Schottky diodes, etc.) are widely used. Other devices that have control electrodes as well as acting as unidirectional current valves are used where more than simple rectification is required—e.g., where variable output voltage is needed. High-power rectifiers, such as those used in high-voltage direct current power transmission, employ silicon semiconductor devices of various types. These are thyristors or other controlled switching solid-state switches, which effectively function as diodes to pass current in only one direction. Rectifier circuits. Rectifier circuits may be single-phase or multi-phase. Most low power rectifiers for domestic equipment are single-phase, but three-phase rectification is very important for industrial applications and for the transmission of energy as DC (HVDC). Single-phase rectifiers. Half-wave rectification. 
In half-wave rectification of a single-phase supply, either the positive or negative half of the AC wave is passed, while the other half is blocked. Because only one half of the input waveform reaches the output, mean voltage is lower. Half-wave rectification requires a single diode in a single-phase supply, or three in a three-phase supply. Rectifiers yield a unidirectional but pulsating direct current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much more filtering is needed to eliminate harmonics of the AC frequency from the output. The no-load output DC voltage of an ideal half-wave rectifier for a sinusoidal input voltage is: &lt;math&gt;\begin{align} V_\mathrm {rms} &amp;= \frac{V_\mathrm {peak}}{2}\\[8pt] \end{align}&lt;/math&gt; where: "V"dc, "V"av – the DC or average output voltage, "V"peak, the peak value of the phase input voltages, "V"rms, the root mean square (RMS) value of output voltage. Full-wave rectification. A full-wave rectifier converts the whole of the input waveform to one of constant polarity (positive or negative) at its output. Mathematically, this corresponds to the absolute value function. Full-wave rectification converts both polarities of the input waveform to pulsating DC (direct current), and yields a higher average output voltage. Two diodes and a center-tapped transformer, or four diodes in a bridge configuration and any AC source (including a transformer without center tap), are needed. Single semiconductor diodes, double diodes with a common cathode or common anode, and four- or six-diode bridges are manufactured as single components. For single-phase AC, if the transformer is center-tapped, then two diodes back-to-back (cathode-to-cathode or anode-to-anode, depending on output polarity required) can form a full-wave rectifier. Twice as many turns are required on the transformer secondary to obtain the same output voltage than for a bridge rectifier, but the power rating is unchanged. The average and RMS no-load output voltages of an ideal single-phase full-wave rectifier are: formula_0 Very common double-diode rectifier vacuum tubes contained a single common cathode and two anodes inside a single envelope, achieving full-wave rectification with positive output. The 5U4 and the 80/5Y3 (4 pin)/(octal) were popular examples of this configuration. Three-phase rectifiers. Single-phase rectifiers are commonly used for power supplies for domestic equipment. However, for most industrial and high-power applications, three-phase rectifier circuits are the norm. As with single-phase rectifiers, three-phase rectifiers can take the form of a half-wave circuit, a full-wave circuit using a center-tapped transformer, or a full-wave bridge circuit. Thyristors are commonly used in place of diodes to create a circuit that can regulate the output voltage. Many devices that provide direct current actually 'generate' three-phase AC. For example, an automobile alternator contains nine diodes, six of which function as a full-wave rectifier for battery charging. Three-phase, half-wave circuit. An uncontrolled three-phase, half-wave midpoint circuit requires three diodes, one connected to each phase. This is the simplest type of three-phase rectifier but suffers from relatively high harmonic distortion on both the AC and DC connections. 
This type of rectifier is said to have a pulse-number of three, since the output voltage on the DC side contains three distinct pulses per cycle of the grid frequency: The peak values formula_1 of this three-pulse DC voltage are calculated from the RMS value formula_2 of the input phase voltage (line to neutral voltage, 120 V in North America, 230 V within Europe at mains operation): formula_3. The average no-load output voltage formula_4 results from the integral under the graph of a positive half-wave with the period duration of formula_5 (from 30° to 150°): formula_6 Three-phase, full-wave circuit using center-tapped transformer. If the AC supply is fed via a transformer with a center tap, a rectifier circuit with improved harmonic performance can be obtained. This rectifier now requires six diodes, one connected to each end of each transformer secondary winding. This circuit has a pulse-number of six, and in effect, can be thought of as a six-phase, half-wave circuit. Before solid state devices became available, the half-wave circuit, and the full-wave circuit using a center-tapped transformer, were very commonly used in industrial rectifiers using mercury-arc valves. This was because the three or six AC supply inputs could be fed to a corresponding number of anode electrodes on a single tank, sharing a common cathode. With the advent of diodes and thyristors, these circuits have become less popular and the three-phase bridge circuit has become the most common circuit. Three-phase bridge rectifier uncontrolled. For an uncontrolled three-phase bridge rectifier, six diodes are used, and the circuit again has a pulse number of six. For this reason, it is also commonly referred to as a six-pulse bridge. The B6 circuit can be seen simplified as a series connection of two three-pulse center circuits. For low-power applications, double diodes in series, with the anode of the first diode connected to the cathode of the second, are manufactured as a single component for this purpose. Some commercially available double diodes have all four terminals available so the user can configure them for single-phase split supply use, half a bridge, or three-phase rectifier. For higher-power applications, a single discrete device is usually used for each of the six arms of the bridge. For the very highest powers, each arm of the bridge may consist of tens or hundreds of separate devices in parallel (where very high current is needed, for example in aluminium smelting) or in series (where very high voltages are needed, for example in high-voltage direct current power transmission). The pulsating DC voltage results from the differences of the instantaneous positive and negative phase voltages formula_2, phase-shifted by 30°: The ideal, no-load average output voltage formula_7 of the B6 circuit results from the integral under the graph of a DC voltage pulse with the period duration of formula_8 (from 60° to 120°) with the peak value formula_9: formula_10 formula_11 If the three-phase bridge rectifier is operated symmetrically (as positive and negative supply voltage), the center point of the rectifier on the output side (or the so-called isolated reference potential) opposite the center point of the transformer (or the neutral conductor) has a potential difference in the form of a triangular common-mode voltage. For this reason, these two centers must never be connected to each other, otherwise short-circuit currents would flow. 
The ground of the three-phase bridge rectifier in symmetrical operation is thus decoupled from the neutral conductor or the earth of the mains voltage. Powered by a transformer, earthing of the center point of the bridge is possible, provided that the secondary winding of the transformer is electrically isolated from the mains voltage and the star point of the secondary winding is not on earth. In this case, however, (negligible) leakage currents flow through the transformer windings. The common-mode voltage is formed out of the respective average values of the differences between the positive and negative phase voltages, which form the pulsating DC voltage. The peak value of the delta voltage formula_12 amounts to 1/4 of the peak value of the phase input voltage formula_1 and is calculated with formula_1 minus half of the DC voltage at 60° of the period: formula_13 The RMS value of the common-mode voltage is calculated from the form factor for triangular oscillations: formula_14 If the circuit is operated asymmetrically (as a simple supply voltage with just one positive pole), both the positive and negative poles (or the isolated reference potential) are pulsating opposite the center (or the ground) of the input voltage analogously to the positive and negative waveforms of the phase voltages. However, the differences in the phase voltages result in the six-pulse DC voltage (over the duration of a period). The strict separation of the transformer center from the negative pole (otherwise short-circuit currents will flow) or a possible grounding of the negative pole when powered by an isolating transformer apply correspondingly to the symmetrical operation. Three-phase bridge rectifier controlled. The controlled three-phase bridge rectifier uses thyristors in place of diodes. The output voltage is reduced by the factor cos(α): formula_15 Or, expressed in terms of the line to line input voltage: formula_16 where: "V"LLpeak is the peak value of the line to line input voltages, "V"peak is the peak value of the phase (line to neutral) input voltages, and "α" is the firing angle of the thyristor (0 if diodes are used to perform rectification). The above equations are only valid when no current is drawn from the AC supply or in the theoretical case when the AC supply connections have no inductance. In practice, the supply inductance causes a reduction of DC output voltage with increasing load, typically in the range 10–20% at full load. The effect of supply inductance is to slow down the transfer process (called commutation) from one phase to the next. As a result, at each transition between a pair of devices, there is a period of overlap during which three (rather than two) devices in the bridge are conducting simultaneously. The overlap angle is usually referred to by the symbol μ (or u), and may be 20–30° at full load. With supply inductance taken into account, the output voltage of the rectifier is reduced to formula_17 The overlap angle "μ" is directly related to the DC current, and the above equation may be re-expressed as formula_18 where: "L"c is the commutating inductance per phase, and "I"d is the direct current. Twelve-pulse bridge. Although better than single-phase rectifiers or three-phase half-wave rectifiers, six-pulse rectifier circuits still produce considerable harmonic distortion on both the AC and DC connections. For very high-power rectifiers the twelve-pulse bridge connection is usually used.
A twelve-pulse bridge consists of two six-pulse bridge circuits connected in series, with their AC connections fed from a supply transformer that produces a 30° phase shift between the two bridges. This cancels many of the characteristic harmonics the six-pulse bridges produce. The 30-degree phase shift is usually achieved by using a transformer with two sets of secondary windings, one in star (wye) connection and one in delta connection. Voltage-multiplying rectifiers. The simple half-wave rectifier can be built in two electrical configurations with the diodes pointing in opposite directions, one version connects the negative terminal of the output direct to the AC supply and the other connects the positive terminal of the output direct to the AC supply. By combining both of these with separate output smoothing it is possible to get an output voltage of nearly double the peak AC input voltage. This also provides a tap in the middle, which allows use of such a circuit as a split rail power supply. A variant of this is to use two capacitors in series for the output smoothing on a bridge rectifier then place a switch between the midpoint of those capacitors and one of the AC input terminals. With the switch open, this circuit acts like a normal bridge rectifier. With the switch closed, it acts like a voltage doubling rectifier. In other words, this makes it easy to derive a voltage of roughly 320 V (±15%, approx.) DC from any 120 V or 230 V mains supply in the world, this can then be fed into a relatively simple switched-mode power supply. However, for a given desired ripple, the value of both capacitors must be twice the value of the single one required for a normal bridge rectifier; when the switch is closed each one must filter the output of a half-wave rectifier, and when the switch is open the two capacitors are connected in series with an equivalent value of half one of them. In a Cockcroft-Walton voltage multiplier, stages of capacitors and diodes are cascaded to amplify a low AC voltage to a high DC voltage. These circuits are capable of producing a DC output voltage potential up to about ten times the peak AC input voltage, in practice limited by current capacity and voltage regulation issues. Diode voltage multipliers, frequently used as a trailing boost stage or primary high voltage (HV) source, are used in HV laser power supplies, powering devices such as cathode ray tubes (CRT) (like those used in CRT based television, radar and sonar displays), photon amplifying devices found in image intensifying and photo multiplier tubes (PMT), and magnetron based radio frequency (RF) devices used in radar transmitters and microwave ovens. Before the introduction of semiconductor electronics, transformerless vacuum tube receivers powered directly from AC power sometimes used voltage doublers to generate roughly 300 VDC from a 100–120 V power line. Quantification of rectifiers. Several ratios are used to quantify the function and performance of rectifiers or their output, including transformer utilization factor (TUF), conversion ratio ("η"), ripple factor, form factor, and peak factor. The two primary measures are DC voltage (or offset) and peak-peak ripple voltage, which are constituent components of the output voltage. Conversion ratio. Conversion ratio (also called "rectification ratio", and confusingly, "efficiency") "η" is defined as the ratio of DC output power to the input power from the AC supply. 
Even with ideal rectifiers, the ratio is less than 100% because some of the output power is AC power rather than DC which manifests as ripple superimposed on the DC waveform. The ratio can be improved with the use of smoothing circuits which reduce the ripple and hence reduce the AC content of the output. Conversion ratio is reduced by losses in transformer windings and power dissipation in the rectifier element itself. This ratio is of little practical significance because a rectifier is almost always followed by a filter to increase DC voltage and reduce ripple. In some three-phase and multi-phase applications the conversion ratio is high enough that smoothing circuitry is unnecessary. In other circuits, like filament heater circuits in vacuum tube electronics where the load is almost entirely resistive, smoothing circuitry may be omitted because resistors dissipate both AC and DC power, so no power is lost. For a half-wave rectifier the ratio is very modest. formula_19 (the divisors are 2 rather than √2 because no power is delivered on the negative half-cycle) formula_20 Thus maximum conversion ratio for a half-wave rectifier is, formula_21 Similarly, for a full-wave rectifier, formula_22 formula_23 formula_24 Three-phase rectifiers, especially three-phase full-wave rectifiers, have much greater conversion ratios because the ripple is intrinsically smaller. For a three-phase half-wave rectifier, formula_25 formula_26 For a three-phase full-wave rectifier, formula_27 formula_28 Transformer utilization ratio. The transformer utilization factor (TUF) of a rectifier circuit is defined as the ratio of the DC power available at the input resistor to the AC rating of the output coil of a transformer. formula_29 The formula_30 rating of the transformer can be defined as: formula_31 Rectifier voltage drop. See also: A real rectifier characteristically drops part of the input voltage (a voltage drop, for silicon devices, of typically 0.7 volts plus an equivalent resistance, in general non-linear)—and at high frequencies, distorts waveforms in other ways. Unlike an ideal rectifier, it dissipates some power. An aspect of most rectification is a loss from the peak input voltage to the peak output voltage, caused by the built-in voltage drop across the diodes (around 0.7 V for ordinary silicon p–n junction diodes and 0.3 V for Schottky diodes). Half-wave rectification and full-wave rectification using a center-tapped secondary produces a peak voltage loss of one diode drop. Bridge rectification has a loss of two diode drops. This reduces output voltage, and limits the available output voltage if a very low alternating voltage must be rectified. As the diodes do not conduct below this voltage, the circuit only passes current through for a portion of each half-cycle, causing short segments of zero voltage (where instantaneous input voltage is below one or two diode drops) to appear between each "hump". Peak loss is very important for low voltage rectifiers (for example, 12 V or less) but is insignificant in high-voltage applications such as HVDC power transmission systems. Harmonic distortion. Non-linear loads like rectifiers produce current harmonics of the source frequency on the AC side and voltage harmonics of the source frequency on the DC side, due to switching behavior. Rectifier output smoothing. While half-wave and full-wave rectification deliver unidirectional current, neither produces a constant voltage. 
There is a large AC ripple voltage component at the source frequency for a half-wave rectifier, and twice the source frequency for a full-wave rectifier. Ripple voltage is usually specified peak-to-peak. Producing steady DC from a rectified AC supply requires a smoothing circuit or filter. In its simplest form this can be just a capacitor (functioning as both a smoothing capacitor as well as a reservoir, buffer or bulk capacitor), choke, resistor, Zener diode and resistor, or voltage regulator placed at the output of the rectifier. In practice, most smoothing filters utilize multiple components to efficiently reduce ripple voltage to a level tolerable by the circuit. The filter capacitor releases its stored energy during the part of the AC cycle when the AC source does not supply any power, that is, when the AC source changes its direction of flow of current. Performance with low impedance source. The above diagram shows the voltage waveforms of the reservoir performance when supplied from a voltage source with near zero impedance, such as a mains supply. Both voltages start from zero at time t=0 at the far left of the image, then the capacitor voltage follows the rectified AC voltage as it increases, the capacitor is charged and current is supplied to the load. At the end of the mains quarter cycle, the capacitor is charged to the peak value Vp of the rectifier voltage. Following this, the rectifier input voltage starts to decrease to its minimum value Vmin as it enters the next quarter cycle. This initiates the discharge of the capacitor through the load while the capacitor holds up the output voltage to the load. The size of the capacitor C is determined by the amount of ripple r that can be tolerated, where r=(Vp-Vmin)/Vp. These circuits are very frequently fed from transformers, which may have significant internal impedance in the form of resistance and/or reactance. Transformer internal impedance modifies the reservoir capacitor waveform, changes the peak voltage, and introduces regulation issues. Capacitor input filter. For a given load, sizing of a smoothing capacitor is a tradeoff between reducing ripple voltage and increasing ripple current. The peak current is set by the rate of rise of the supply voltage on the rising edge of the incoming sine-wave, reduced by the resistance of the transformer windings. High ripple currents increase I2R losses (in the form of heat) in the capacitor, rectifier and transformer windings, and may exceed the ampacity of the components or VA rating of the transformer. Vacuum tube rectifiers specify the maximum capacitance of the input capacitor, and SS diode rectifiers also have current limitations. Capacitors for this application need low ESR, or ripple current may overheat them. To limit ripple voltage to a specified value the required capacitor size is proportional to the load current and inversely proportional to the supply frequency and the number of output peaks of the rectifier per input cycle. Full-wave rectified output requires a smaller capacitor because it is double the frequency of half-wave rectified output. To reduce ripple to a satisfactory limit with just a single capacitor would often require a capacitor of impractical size. This is because the ripple current rating of a capacitor does not increase linearly with size and there may also be height limitations. For high current applications banks of capacitors are used instead. Choke input filter. It is also possible to put the rectified waveform into a choke-input filter. 
The advantage of this circuit is that the current waveform is smoother: current is drawn over the entire cycle, instead of being drawn in pulses at the peaks of AC voltage each half-cycle as in a capacitor input filter. The disadvantage is that the voltage output is much lower – the average of an AC half-cycle rather than the peak; this is about 90% of the RMS voltage versus formula_32 times the RMS voltage (unloaded) for a capacitor input filter. Offsetting this is superior voltage regulation and higher available current, which reduce peak voltage and ripple current demands on power supply components. Inductors require cores of iron or other magnetic materials, and add weight and size. Their use in power supplies for electronic equipment has therefore dwindled in favour of semiconductor circuits such as voltage regulators. Resistor as input filter. In cases where ripple voltage is insignificant, like battery chargers, the input filter may be a single series resistor to adjust the output voltage to that required by the circuit. A resistor reduces both output voltage and ripple voltage proportionately. A disadvantage of a resistor input filter is that it consumes power in the form of waste heat that is not available to the load, so it is employed only in low current circuits. Higher order and cascade filters. To further reduce ripple, the initial filter element may be followed by additional alternating series and shunt filter components, or by a voltage regulator. Series filter components may be resistors or chokes; shunt elements may be resistors or capacitors. The filter may raise DC voltage as well as reduce ripple. Filters are often constructed from pairs of series/shunt components called RC (series resistor, shunt capacitor) or LC (series choke, shunt capacitor) sections. Two common filter geometries are known as Pi (capacitor, choke, capacitor) and T (choke, capacitor, choke) filters. Sometimes the series elements are resistors - because resistors are smaller and cheaper - when a lower DC output is desirable or permissible. Another kind of special filter geometry is a series resonant choke or tuned choke filter. Unlike the other filter geometries which are low-pass filters, a resonant choke filter is a band-stop filter: it is a parallel combination of choke and capacitor which resonates at the frequency of the ripple voltage, presenting a very high impedance to the ripple. It may be followed by a shunt capacitor to complete the filter. Voltage regulators. A more usual alternative to additional filter components, if the DC load requires very low ripple voltage, is to follow the input filter with a voltage regulator. A voltage regulator operates on a different principle than a filter, which is essentially a voltage divider that shunts voltage at the ripple frequency away from the load. Rather, a regulator increases or decreases current supplied to the load in order to maintain a constant output voltage. A simple passive shunt voltage regulator may consist of a series resistor to drop source voltage to the required level and a Zener diode shunt with reverse voltage equal to the set voltage. When input voltage rises, the diode dumps current to maintain the set output voltage. This kind of regulator is usually employed only in low voltage, low current circuits because Zener diodes have both voltage and current limitations. It is also very inefficient, because it dumps excess current, which is not available to the load. 
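As a concrete illustration of the simple passive shunt regulator just described, the short sketch below sizes the series resistor and checks the worst-case Zener dissipation. The input range, the 5.1 V Zener voltage, the minimum Zener current and the load figures are assumed example values, and the sizing rules are standard design rules of thumb rather than formulas taken from this article.

```python
# Illustrative sizing of a simple Zener shunt regulator (series resistor + Zener diode).
# All numbers below are assumed example values, not specifications from the text.

v_in_min, v_in_max = 11.0, 13.0   # unregulated (rectified, smoothed) input range, volts
v_z = 5.1                         # Zener (set) output voltage, volts
i_load_max = 0.020                # maximum load current, amps
i_load_min = 0.0                  # minimum load current (load disconnected), amps
i_z_min = 0.005                   # minimum Zener current to stay in regulation, amps

# Series resistor: must still supply the load and keep the Zener regulating
# at the lowest input voltage and highest load current.
r_series = (v_in_min - v_z) / (i_load_max + i_z_min)

# Worst-case Zener dissipation: highest input voltage and lightest load,
# when the diode "dumps" the excess current, as described above.
i_total_max = (v_in_max - v_z) / r_series
p_z_max = v_z * (i_total_max - i_load_min)
p_r_max = (v_in_max - v_z) ** 2 / r_series

print(f"series resistor      ~ {r_series:.0f} ohm")
print(f"Zener dissipation    ~ {p_z_max * 1000:.0f} mW worst case")
print(f"resistor dissipation ~ {p_r_max * 1000:.0f} mW worst case")
```

The numbers also show why the approach suits only low-voltage, low-current loads: the resistor and diode dissipate the full pass current even when the load draws nothing.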
A more efficient alternative to a shunt voltage regulator is an active voltage regulator circuit. An active regulator employs reactive components to store and discharge energy, so that most or all current supplied by the rectifier is passed to the load. It may also use negative and positive feedback in conjunction with at least one voltage amplifying component like a transistor to maintain output voltage when source voltage drops. The input filter must prevent the troughs of the ripple dropping below the minimum voltage required by the regulator to produce the required output voltage. The regulator serves both to significantly reduce the ripple and to deal with variations in supply and load characteristics. Applications. The primary application of rectifiers is to derive DC power from an AC supply (AC to DC converter). Rectifiers are used inside the power supplies of virtually all electronic equipment. AC/DC power supplies may be broadly divided into linear power supplies and switched-mode power supplies. In such power supplies, the rectifier will be in series following the transformer and be followed by a smoothing filter and possibly a voltage regulator. Converting DC power from one voltage to another is much more complicated. One method of DC-to-DC conversion first converts power to AC (using a device called an inverter), then uses a transformer to change the voltage, and finally rectifies power back to DC. A frequency of typically several tens of kilohertz is used, as this requires much smaller inductance than at lower frequencies and obviates the use of heavy, bulky, and expensive iron-cored transformers. Another method of converting DC voltages uses a charge pump, using rapid switching to change the connections of capacitors; this technique is generally limited to supplies up to a couple of watts, owing to the size of capacitors required. Rectifiers are also used for detection of amplitude modulated radio signals. The signal may be amplified before detection. If not, a very low voltage drop diode or a diode biased with a fixed voltage must be used. When using a rectifier for demodulation the capacitor and load resistance must be carefully matched: too low a capacitance makes the high frequency carrier pass to the output, and too high makes the capacitor just charge and stay charged. Rectifiers supply polarized voltage for welding. In such circuits control of the output current is required; this is sometimes achieved by replacing some of the diodes in a bridge rectifier with thyristors, effectively diodes whose voltage output can be regulated by switching on and off with phase-fired controllers. Thyristors are used in various classes of railway rolling stock systems so that fine control of the traction motors can be achieved. Gate turn-off thyristors are used to produce alternating current from a DC supply, for example on the Eurostar Trains to power the three-phase traction motors. Rectification technologies. Electromechanical. Before about 1905 when tube-type rectifiers were developed, power conversion devices were purely electro-mechanical in design. Mechanical rectifiers used some form of rotation or resonant vibration driven by electromagnets, which operated a switch or commutator to reverse the current. These mechanical rectifiers were noisy and had high maintenance requirements, including lubrication and replacement of moving parts due to wear. Opening mechanical contacts under load resulted in electrical arcs and sparks that heated and eroded the contacts. 
They also were not able to handle AC frequencies above several thousand cycles per second. Synchronous rectifier. To convert alternating into direct current in electric locomotives, a synchronous rectifier may be used. It consists of a synchronous motor driving a set of heavy-duty electrical contacts. The motor spins in time with the AC frequency and periodically reverses the connections to the load at an instant when the sinusoidal current goes through a zero-crossing. The contacts do not have to "switch" a large current, but they must be able to "carry" a large current to supply the locomotive's DC traction motors. Vibrating rectifier. These consisted of a resonant reed, vibrated by an alternating magnetic field created by an AC electromagnet, with contacts that reversed the direction of the current on the negative half cycles. They were used in low power devices, such as battery chargers, to rectify the low voltage produced by a step-down transformer. Another use was in battery power supplies for portable vacuum tube radios, to provide the high DC voltage for the tubes. These operated as a mechanical version of modern solid state switching inverters, with a transformer to step the battery voltage up, and a set of vibrator contacts on the transformer core, operated by its magnetic field, to repeatedly break the DC battery current to create a pulsing AC to power the transformer. Then a second set of rectifier contacts on the vibrator rectified the high AC voltage from the transformer secondary to DC. Motor-generator set. A "motor-generator set", or the similar "rotary converter", is not strictly a rectifier as it does not actually "rectify" current, but rather "generates" DC from an AC source. In an "M-G set", the shaft of an AC motor is mechanically coupled to that of a DC generator. The DC generator produces multiphase alternating currents in its armature windings, which a commutator on the armature shaft converts into a direct current output; or a homopolar generator produces a direct current without the need for a commutator. M-G sets are useful for producing DC for railway traction motors, industrial motors and other high-current applications, and were common in many high-power DC uses (for example, carbon-arc lamp projectors for outdoor theaters) before high-power semiconductors became widely available. Electrolytic. The electrolytic rectifier was a device from the early twentieth century that is no longer used. A home-made version is illustrated in the 1913 book "The Boy Mechanic" but it would be suitable for use only at very low voltages because of the low breakdown voltage and the risk of electric shock. A more complex device of this kind was patented by G. W. Carpenter in 1928 (US Patent 1671970). When two different metals are suspended in an electrolyte solution, direct current flowing one way through the solution sees less resistance than in the other direction. Electrolytic rectifiers most commonly used an aluminum anode and a lead or steel cathode, suspended in a solution of triammonium orthophosphate. The rectification action is due to a thin coating of aluminium hydroxide on the aluminum electrode, formed by first applying a strong current to the cell to build up the coating. The rectification process is temperature-sensitive, and for best efficiency should not operate above 86 °F (30 °C). There is also a breakdown voltage where the coating is penetrated and the cell is short-circuited. 
Electrochemical methods are often more fragile than mechanical methods, and can be sensitive to usage variations, which can drastically change or completely disrupt the rectification processes. Similar electrolytic devices were used as lightning arresters around the same era by suspending many aluminium cones in a tank of triammonium orthophosphate solution. Unlike the rectifier above, only aluminium electrodes were used, and used on A.C., there was no polarization and thus no rectifier action, but the chemistry was similar. The modern electrolytic capacitor, an essential component of most rectifier circuit configurations was also developed from the electrolytic rectifier. Plasma type. The development of vacuum tube technology in the early 20th century resulted in the invention of various tube-type rectifiers, which largely replaced the noisy, inefficient mechanical rectifiers. Mercury-arc. A rectifier used in high-voltage direct current (HVDC) power transmission systems and industrial processing between about 1909 to 1975 is a "mercury-arc rectifier" or "mercury-arc valve". The device is enclosed in a bulbous glass vessel or large metal tub. One electrode, the cathode, is submerged in a pool of liquid mercury at the bottom of the vessel and one or more high purity graphite electrodes, called anodes, are suspended above the pool. There may be several auxiliary electrodes to aid in starting and maintaining the arc. When an electric arc is established between the cathode pool and suspended anodes, a stream of electrons flows from the cathode to the anodes through the ionized mercury, but not the other way (in principle, this is a higher-power counterpart to flame rectification, which uses the same one-way current transmission properties of the plasma naturally present in a flame). These devices can be used at power levels of hundreds of kilowatts, and may be built to handle one to six phases of AC current. Mercury-arc rectifiers have been replaced by silicon semiconductor rectifiers and high-power thyristor circuits in the mid-1970s. The most powerful mercury-arc rectifiers ever built were installed in the Manitoba Hydro Nelson River Bipole HVDC project, with a combined rating of more than 1 GW and 450 kV. Argon gas electron tube. The General Electric Tungar rectifier was a mercury vapor (ex.:5B24) or argon (ex.:328) gas-filled electron tube device with a tungsten filament cathode and a carbon button anode. It operated similarly to the thermionic vacuum tube diode, but the gas in the tube ionized during forward conduction, giving it a much lower forward voltage drop so it could rectify lower voltages. It was used for battery chargers and similar applications from the 1920s until lower-cost metal rectifiers, and later semiconductor diodes, supplanted it. These were made up to a few hundred volts and a few amperes rating, and in some sizes strongly resembled an incandescent lamp with an additional electrode. The 0Z4 was a gas-filled rectifier tube commonly used in vacuum tube car radios in the 1940s and 1950s. It was a conventional full-wave rectifier tube with two anodes and one cathode, but was unique in that it had no filament (thus the "0" in its type number). The electrodes were shaped such that the reverse breakdown voltage was much higher than the forward breakdown voltage. Once the breakdown voltage was exceeded, the 0Z4 switched to a low-resistance state with a forward voltage drop of about 24 V. Diode vacuum tube (valve). 
The thermionic vacuum tube diode, originally called the Fleming valve, was invented by John Ambrose Fleming in 1904 as a detector for radio waves in radio receivers, and evolved into a general rectifier. It consisted of an evacuated glass bulb with a filament heated by a separate current, and a metal plate anode. The filament emitted electrons by thermionic emission (the Edison effect), discovered by Thomas Edison in 1884, and a positive voltage on the plate caused a current of electrons through the tube from filament to plate. Since only the filament produced electrons, the tube would only conduct current in one direction, allowing the tube to rectify an alternating current. Thermionic diode rectifiers were widely used in power supplies in vacuum tube consumer electronic products, such as phonographs, radios, and televisions, for example the All American Five radio receiver, to provide the high DC plate voltage needed by other vacuum tubes. "Full-wave" versions with two separate plates were popular because they could be used with a center-tapped transformer to make a full-wave rectifier. Vacuum tube rectifiers were made for very high voltages, such as the high voltage power supply for the cathode ray tube of television receivers, and the kenotron used for power supply in X-ray equipment. However, compared to modern semiconductor diodes, vacuum tube rectifiers have high internal resistance due to space charge and therefore high voltage drops, causing high power dissipation and low efficiency. They are rarely able to handle currents exceeding 250 mA owing to the limits of plate power dissipation, and cannot be used for low voltage applications, such as battery chargers. Another limitation of the vacuum tube rectifier is that the heater power supply often requires special arrangements to insulate it from the high voltages of the rectifier circuit. Solid state. Crystal detector. The crystal detector, the earliest type of semiconductor diode, was used as a detector in some of the earliest radio receivers, called crystal radios, to rectify the radio carrier wave and extract the modulation which produced the sound in the earphones. Invented by Jagadish Chandra Bose and G. W. Pickard around 1902, it was a significant improvement over earlier detectors such as the coherer. One popular type of crystal detector, often called a "cat's whisker detector", consists of a crystal of some semiconducting mineral, usually galena (lead sulfide), with a light springy wire touching its surface. Its fragility and limited current capability made it unsuitable for power supply applications. It was used widely in radios until the 1920s when vacuum tubes replaced it. In the 1930s, researchers miniaturized and improved the crystal detector for use at microwave frequencies, developing the first semiconductor diodes. Selenium and copper oxide rectifiers. Once common until replaced by more compact and less costly silicon solid-state rectifiers in the 1970s, these units used stacks of oxide-coated metal plates and took advantage of the semiconductor properties of selenium or copper oxide. While selenium rectifiers were lighter in weight and used less power than comparable vacuum tube rectifiers, they had the disadvantage of finite life expectancy, increasing resistance with age, and were only suitable to use at low frequencies. Both selenium and copper oxide rectifiers have somewhat better tolerance of momentary voltage transients than silicon rectifiers. 
Typically these rectifiers were made up of stacks of metal plates or washers, held together by a central bolt, with the number of stacks determined by voltage; each cell was rated for about 20 V. An automotive battery charger rectifier might have only one cell: the high-voltage power supply for a vacuum tube might have dozens of stacked plates. Current density in an air-cooled selenium stack was about 600 mA per square inch of active area (about 90 mA per square centimeter). Silicon and germanium diodes. Silicon diodes are the most widely used rectifiers for lower voltages and powers, and have largely replaced other rectifiers. Due to their substantially lower forward voltage (0.3V versus 0.7V for silicon diodes) germanium diodes have an inherent advantage over silicon diodes in low voltage circuits. High power: thyristors (SCRs) and newer silicon-based voltage sourced converters. In high-power applications, from 1975 to 2000, most mercury valve arc-rectifiers were replaced by stacks of very high power thyristors, silicon devices with two extra layers of semiconductor, in comparison to a simple diode. In medium-power transmission applications, even more complex and sophisticated voltage sourced converter (VSC) silicon semiconductor rectifier systems, such as insulated gate bipolar transistors (IGBT) and gate turn-off thyristors (GTO), have made smaller high voltage DC power transmission systems economical. All of these devices function as rectifiers. As of 2009[ [update]] it was expected that these high-power silicon "self-commutating switches", in particular IGBTs and a variant thyristor (related to the GTO) called the integrated gate-commutated thyristor (IGCT), would be scaled-up in power rating to the point that they would eventually replace simple thyristor-based AC rectification systems for the highest power-transmission DC applications. Active rectifier. Active rectification is a technique for improving the efficiency of rectification by replacing diodes with actively controlled switches such as transistors, usually power MOSFETs or power BJTs. Whereas normal semiconductor diodes have a roughly fixed voltage drop of around 0.5 to 1 volts, active rectifiers behave as resistances, and can have arbitrarily low voltage drop. Historically, vibrator-driven switches or motor-driven commutators have also been used for mechanical rectifiers and synchronous rectification. Active rectification has many applications. It is frequently used for arrays of photovoltaic panels to avoid reverse current flow that can cause overheating with partial shading while giving minimum power loss. Current research. A major area of research is to develop higher frequency rectifiers, that can rectify into terahertz and light frequencies. These devices are used in optical heterodyne detection, which has myriad applications in optical fiber communication and atomic clocks. Another prospective application for such devices is to directly rectify light waves picked up by tiny antennas, called nantennas, to produce DC electric power. It is thought that arrays of antennas could be a more efficient means of producing solar power than solar cells. A related area of research is to develop smaller rectifiers, because a smaller device has a higher cutoff frequency. Research projects are attempting to develop a unimolecular rectifier, a single organic molecule that would function as a rectifier. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
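To make the quantitative relationships in the rectifier-circuit and smoothing sections above concrete, the sketch below evaluates the ideal no-load average output voltages of the single-phase and three-phase circuits, the cos α and commutation terms of the controlled six-pulse bridge, and a first-order reservoir-capacitor estimate. The 230 V / 50 Hz input, the 1 mH commutating inductance, the load figures and the C ≈ I / (p·f·ΔV) approximation are illustrative assumptions; only the voltage relationships themselves are restated from the text.

```python
import math

# Ideal (no-load) rectifier output voltages, following the relationships given
# in the circuit sections above. Example inputs are illustrative assumptions:
# 230 V RMS line-to-neutral, 50 Hz mains.
V_LN = 230.0                      # RMS phase (line-to-neutral) input voltage
V_peak = math.sqrt(2) * V_LN      # peak phase voltage
f_line = 50.0                     # supply frequency, Hz

# Single-phase circuits
v_half_wave_avg = V_peak / math.pi          # half-wave: V_av = V_peak / pi
v_full_wave_avg = 2 * V_peak / math.pi      # full-wave: V_av = 2 V_peak / pi
v_full_wave_rms = V_peak / math.sqrt(2)

# Three-phase circuits
v_three_pulse_avg = 3 * math.sqrt(3) * V_peak / (2 * math.pi)   # half-wave, 3-pulse (~1.17 V_LN)
v_six_pulse_avg = 3 * math.sqrt(3) * V_peak / math.pi           # bridge, 6-pulse (~2.34 V_LN)

# Controlled (thyristor) six-pulse bridge: cos(alpha) factor and the
# commutation term 6 f Lc Id quoted above for loaded operation.
def controlled_bridge_avg(alpha_deg, L_c=0.0, I_d=0.0, f=f_line):
    V_LL_peak = math.sqrt(3) * V_peak
    return 3 * V_LL_peak / math.pi * math.cos(math.radians(alpha_deg)) - 6 * f * L_c * I_d

# First-order reservoir-capacitor estimate: ripple is proportional to load current
# and inversely proportional to capacitance, supply frequency and the number of
# output peaks per input cycle (an approximation, not an exact formula).
def reservoir_capacitance(i_load, v_ripple, pulses_per_cycle, f=f_line):
    return i_load / (pulses_per_cycle * f * v_ripple)

print(f"half-wave average        : {v_half_wave_avg:6.1f} V")
print(f"full-wave average        : {v_full_wave_avg:6.1f} V")
print(f"3-pulse average          : {v_three_pulse_avg:6.1f} V")
print(f"6-pulse average          : {v_six_pulse_avg:6.1f} V")
print(f"controlled, alpha = 30   : {controlled_bridge_avg(30):6.1f} V")
print(f"  with 1 mH, 100 A load  : {controlled_bridge_avg(30, L_c=1e-3, I_d=100):6.1f} V")
print(f"full-wave reservoir for 1 A, 1 V ripple: {reservoir_capacitance(1.0, 1.0, 2) * 1e6:.0f} uF")
```

Running it reproduces the ~1.17·V_LN and ~2.34·V_LN figures quoted above for the three-pulse and six-pulse circuits.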
[ { "math_id": 0, "text": "\\begin{align}\nV_\\mathrm{dc}=V_\\mathrm {av}&=\\frac{2 \\cdot V_\\mathrm{peak}}{\\pi}\\\\[8pt]\nV_\\mathrm {rms}&=\\frac {V_\\mathrm{peak}}{\\sqrt 2}\n\\end{align}" }, { "math_id": 1, "text": "V_\\mathrm{peak}" }, { "math_id": 2, "text": "V_\\mathrm{LN}" }, { "math_id": 3, "text": "V_\\mathrm{peak} = \\sqrt 2 \\cdot V_{\\mathrm{LN}}" }, { "math_id": 4, "text": "V_\\mathrm {av}" }, { "math_id": 5, "text": "\\frac{2}{3} \\pi" }, { "math_id": 6, "text": "\n\\begin{align}\nV_\\mathrm{dc} = {} & V_\\mathrm {av} = \\frac{1}{\\frac{2}{3} \\pi} \\int_{30^\\circ}^{150^\\circ} V_\\mathrm{peak} \\sin\\varphi \\, \\mathrm d\\varphi = \\frac{3 V_\\mathrm{peak}}{2 \\pi} \\cdot \\left(-\\cos 150^\\circ + \\cos 30^\\circ \\right) \\\\[8pt]\n= {} & \\frac{3 V_\\mathrm{peak}}{2 \\pi} \\cdot \\Biggl[ -\\left(-\\frac{\\sqrt3}{2} \\right)+\\frac{\\sqrt3}{2} \\Biggl] = \\frac{3\\sqrt3 \\cdot V_\\mathrm{peak}}{2 \\pi} \\\\[8pt]\n\\Longrightarrow {} & V_\\mathrm{dc} = V_\\mathrm {av} = \\frac{3 \\sqrt3 \\cdot \\sqrt 2 \\cdot V_\\mathrm{LN}}{2 \\pi} \\\\[8pt]\n\\Longrightarrow {} & V_\\mathrm{av} = \\frac{3 \\sqrt 6 \\cdot V_\\mathrm{LN}}{2 \\pi} \\approx 1.17 V_\\mathrm{LN}\n\\end{align}\n" }, { "math_id": 7, "text": "V_\\mathrm{av}" }, { "math_id": 8, "text": "\\frac{1}{3} \\pi" }, { "math_id": 9, "text": "\\hat v_{\\mathrm{DC}} = \\sqrt 3 \\cdot V_\\mathrm{peak}" }, { "math_id": 10, "text": "\n\\begin{align}\nV_\\mathrm{dc} = {} & V_\\mathrm {av} = \\frac{1}{\\frac{1}{3} \\pi} \\int_{60^\\circ}^{120^\\circ} \\sqrt 3 \\cdot V_\\mathrm{peak} \\cdot \\sin\\varphi \\cdot \\mathrm d\\varphi \\\\[8pt]\n= {} & \\frac{3 \\sqrt 3 \\cdot V_\\mathrm{peak}}{\\pi} \\cdot \\left(-\\cos 120^\\circ + \\cos 60^\\circ \\right) \\\\[8pt]\n= {} & \\frac{3\\sqrt 3 \\cdot V_\\mathrm{peak}}{\\pi} \\cdot \\Biggl[-\\left(-\\frac{1}{2} \\right)+\\frac{1}{2} \\Biggl] = \\frac{3\\sqrt3 \\cdot V_\\mathrm{peak}}{\\pi}\n\\end{align}\n" }, { "math_id": 11, "text": " \\Longrightarrow V_\\mathrm{dc}=V_\\mathrm {av}= \\frac{3\\sqrt{3} \\cdot \\sqrt 2 \\cdot V_\\mathrm{LN}}{\\pi} \\Longrightarrow V_\\mathrm {av} = \\frac{3\\sqrt 6 \\cdot V_\\mathrm{LN}}{\\pi} \\approx 2.34 V_\\mathrm{LN}" }, { "math_id": 12, "text": "\\hat v_{\\mathrm{common-mode}}" }, { "math_id": 13, "text": "\n\\begin{align}\n\\hat v_{\\text{common mode}} = {} & V_\\mathrm{peak} - \\frac{\\sqrt 3 \\cdot V_\\mathrm{peak} \\cdot \\sin 60^\\circ}{2} \\\\[8pt]\n= {} & V_\\mathrm{peak} \\cdot \\Biggl( 1- \\frac{\\sqrt 3 \\cdot \\sin 60^\\circ}{2} \\Biggl) = V_\\mathrm{peak} \\cdot 0.25\n\\end{align}\n" }, { "math_id": 14, "text": "V_{\\text{common mode}} = \\frac{\\hat v_{\\text{common mode}}}{\\sqrt 3}" }, { "math_id": 15, "text": "V_\\mathrm{dc}=V_\\mathrm {av}=\\frac{3\\sqrt 3 \\cdot V_\\mathrm{peak}}{\\pi} \\cdot \\cos \\alpha" }, { "math_id": 16, "text": "V_\\mathrm{dc}=V_\\mathrm {av}=\\frac{3 V_\\mathrm {LLpeak}}{\\pi} \\cdot \\cos \\alpha" }, { "math_id": 17, "text": "V_\\mathrm{dc} = V_\\mathrm {av}=\\frac{3 V_\\mathrm {LLpeak}}{\\pi} \\cdot \\cos (\\alpha + \\mu). 
" }, { "math_id": 18, "text": " V_\\mathrm{dc}=V_\\mathrm {av}=\\frac{3 V_\\mathrm {LLpeak}}{\\pi} \\cdot \\cos(\\alpha) - 6 f L_\\mathrm {c} I_\\mathrm {d} " }, { "math_id": 19, "text": "P_\\mathrm {AC} = {V_\\mathrm{peak} \\over 2} \\cdot {I_\\mathrm {peak} \\over 2}" }, { "math_id": 20, "text": "P_\\mathrm {DC} = {V_\\mathrm{peak} \\over \\pi} \\cdot {I_\\mathrm {peak} \\over \\pi}" }, { "math_id": 21, "text": "\\eta = {P_\\mathrm {DC} \\over P_\\mathrm {AC}} \\approx 40.5\\% " }, { "math_id": 22, "text": "P_\\mathrm {AC} = {V_\\mathrm{peak} \\over \\sqrt 2} \\cdot {I_\\mathrm {peak} \\over \\sqrt 2}" }, { "math_id": 23, "text": "P_\\mathrm {DC} = {2 \\cdot V_\\mathrm{peak} \\over \\pi} \\cdot {2 \\cdot I_\\mathrm {peak} \\over \\pi}" }, { "math_id": 24, "text": "\\eta = {P_\\mathrm {DC} \\over P_\\mathrm {AC}} \\approx 81.0\\% " }, { "math_id": 25, "text": "P_\\mathrm {AC} = 3 \\cdot {V_\\mathrm{peak} \\over 2} \\cdot {I_\\mathrm {peak} \\over 2}" }, { "math_id": 26, "text": "P_\\mathrm {DC} = \\frac{3\\sqrt{3} \\cdot V_\\mathrm{peak}}{2 \\pi} \\cdot \\frac{3\\sqrt3 \\cdot I_\\mathrm{peak}}{2 \\pi}" }, { "math_id": 27, "text": "P_\\mathrm {AC} = 3 \\cdot {V_\\mathrm{peak} \\over \\sqrt 2} \\cdot {I_\\mathrm {peak} \\over \\sqrt 2}" }, { "math_id": 28, "text": "P_\\mathrm {DC} = \\frac{3\\sqrt3 \\cdot V_\\mathrm{peak}} \\pi \\cdot \\frac{3\\sqrt3 \\cdot I_\\mathrm{peak}} \\pi" }, { "math_id": 29, "text": "\n\\text{T.U.F} = \\frac{P_\\text{odc}}{\\text{VA rating of transformer}}\n" }, { "math_id": 30, "text": "VA" }, { "math_id": 31, "text": "\nVA = V_{\\mathrm{rms}} \\dot I_{\\mathrm{rms}} (\\text{For secondary coil.})\n" }, { "math_id": 32, "text": "\\sqrt 2" } ]
https://en.wikipedia.org/wiki?curid=60569
6057100
Berlekamp's algorithm
In mathematics, particularly computational algebra, Berlekamp's algorithm is a well-known method for factoring polynomials over finite fields (also known as "Galois fields"). The algorithm consists mainly of matrix reduction and polynomial GCD computations. It was invented by Elwyn Berlekamp in 1967. It was the dominant algorithm for solving the problem until the Cantor–Zassenhaus algorithm of 1981. It is currently implemented in many well-known computer algebra systems. Overview. Berlekamp's algorithm takes as input a square-free polynomial formula_0 (i.e. one with no repeated factors) of degree formula_1 with coefficients in a finite field formula_2 and gives as output a polynomial formula_3 with coefficients in the same field such that formula_3 divides formula_0. The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of formula_0 into powers of irreducible polynomials (recalling that the ring of polynomials over a finite field is a unique factorization domain). All possible factors of formula_0 are contained within the factor ring formula_4 The algorithm focuses on polynomials formula_5 which satisfy the congruence: formula_6 These polynomials form a subalgebra of R (which can be considered as an formula_1-dimensional vector space over formula_2), called the "Berlekamp subalgebra". The Berlekamp subalgebra is of interest because the polynomials formula_3 it contains satisfy formula_7 In general, not every GCD in the above product will be a non-trivial factor of formula_0, but some are, providing the factors we seek. Berlekamp's algorithm finds polynomials formula_3 suitable for use with the above result by computing a basis for the Berlekamp subalgebra. This is achieved via the observation that Berlekamp subalgebra is in fact the kernel of a certain formula_8 matrix over formula_2, which is derived from the so-called Berlekamp matrix of the polynomial, denoted formula_9. If formula_10 then formula_11 is the coefficient of the formula_12-th power term in the reduction of formula_13 modulo formula_0, i.e.: formula_14 With a certain polynomial formula_5, say: formula_15 we may associate the row vector: formula_16 It is relatively straightforward to see that the row vector formula_17 corresponds, in the same way, to the reduction of formula_18 modulo formula_0. Consequently, a polynomial formula_5 is in the Berlekamp subalgebra if and only if formula_19 (where formula_20 is the formula_8 identity matrix), i.e. if and only if it is in the null space of formula_21. By computing the matrix formula_21 and reducing it to reduced row echelon form and then easily reading off a basis for the null space, we may find a basis for the Berlekamp subalgebra and hence construct polynomials formula_3 in it. We then need to successively compute GCDs of the form above until we find a non-trivial factor. Since the ring of polynomials over a field is a Euclidean domain, we may compute these GCDs using the Euclidean algorithm. Conceptual algebraic explanation. With some abstract algebra, the idea behind Berlekamp's algorithm becomes conceptually clear. We represent a finite field formula_22, where formula_23 for some prime formula_24, as formula_25. We can assume that formula_26 is square free, by taking all possible pth roots and then computing the gcd with its derivative. Now, suppose that formula_27 is the factorization into irreducibles. Then we have a ring isomorphism, formula_28, given by the Chinese remainder theorem. 
The crucial observation is that the Frobenius automorphism formula_29 commutes with formula_30, so that if we denote formula_31, then formula_30 restricts to an isomorphism formula_32. By finite field theory, formula_33 is always the prime subfield of that field extension. Thus, formula_34 has formula_35 elements if and only if formula_36 is irreducible. Moreover, we can use the fact that the Frobenius automorphism is formula_37-linear to calculate the fixed set. That is, we note that formula_34 is a formula_37-subspace, and an explicit basis for it can be calculated in the polynomial ring formula_38 by computing formula_39 and establishing the linear equations on the coefficients of formula_40 polynomials that are satisfied iff it is fixed by Frobenius. We note that at this point we have an efficiently computable irreducibility criterion, and the remaining analysis shows how to use this to find factors. The algorithm now breaks down into two cases, according to the size of the prime. Given any formula_41, there is some formula_42 and a pair of indices formula_43 with formula_44 and formula_45, so formula_46 shares a non-trivial factor with formula_36 that can be extracted with a gcd computation; when formula_47 is small, every possible formula_48 can simply be tried. When the prime is large (and hence odd), one instead uses the fact that a random non-zero element of formula_49 is a square with probability formula_50, and that the map formula_51 sends the set of non-zero squares to formula_52 and the set of non-squares to formula_53: taking a random element formula_54, with good probability formula_55 has a non-trivial factor in common with formula_36, which again a gcd computation extracts. For further details one can consult. Applications. One important application of Berlekamp's algorithm is in computing discrete logarithms over finite fields formula_56, where formula_57 is prime and formula_58. Computing discrete logarithms is an important problem in public key cryptography and error-control coding. For a finite field, the fastest known method is the index calculus method, which involves the factorisation of field elements. If we represent the field formula_56 in the usual way - that is, as polynomials over the base field formula_59, reduced modulo an irreducible polynomial of degree formula_1 - then this is simply polynomial factorisation, as provided by Berlekamp's algorithm. Implementation in computer algebra systems. Berlekamp's algorithm may be accessed in the PARI/GP package using the factormod command, and the WolframAlpha website. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
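The matrix and null-space construction described in the overview can be illustrated with a small self-contained sketch. To keep it short it works over a prime field F_p (so q = p), represents polynomials as coefficient lists, and factors a toy polynomial over F_3; the function names and the example are illustrative assumptions, not code from PARI/GP or any other system mentioned above.

```python
# A minimal sketch of Berlekamp's algorithm over a prime field F_p (q = p).
# Polynomials are coefficient lists, lowest degree first.

def poly_trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b, p):
    # Quotient and remainder of a by b over F_p (b must have a non-zero leading coefficient).
    a, n, m = a[:], len(a) - 1, len(b) - 1
    inv = pow(b[-1], -1, p)
    q = [0] * max(n - m + 1, 1)
    for i in range(n - m, -1, -1):
        c = (a[i + m] * inv) % p
        q[i] = c
        for j in range(m + 1):
            a[i + j] = (a[i + j] - c * b[j]) % p
    return poly_trim(q), poly_trim(a)

def poly_gcd(a, b, p):
    while poly_trim(b[:]) != [0]:
        a, b = b, poly_divmod(a, b, p)[1]
    inv = pow(a[-1], -1, p)
    return [(c * inv) % p for c in a]     # returned monic

def berlekamp_matrix(f, p):
    # Row i holds the coefficients of x^(i*p) reduced modulo f; a real implementation
    # would compute this by repeated squaring rather than dividing a long monomial.
    n = len(f) - 1
    rows = []
    for i in range(n):
        r = poly_divmod([0] * (i * p) + [1], f, p)[1]
        rows.append([r[j] if j < len(r) else 0 for j in range(n)])
    return rows

def null_space_basis(Q, p):
    # Basis of the row vectors g with g(Q - I) = 0: Gaussian elimination over F_p
    # applied to the transpose of (Q - I).
    n = len(Q)
    A = [[(Q[j][i] - (1 if i == j else 0)) % p for j in range(n)] for i in range(n)]
    where, row = [-1] * n, 0
    for col in range(n):
        piv = next((r for r in range(row, n) if A[r][col]), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        inv = pow(A[row][col], -1, p)
        A[row] = [(x * inv) % p for x in A[row]]
        for r in range(n):
            if r != row and A[r][col]:
                c = A[r][col]
                A[r] = [(x - c * y) % p for x, y in zip(A[r], A[row])]
        where[col], row = row, row + 1
    basis = []
    for col in range(n):
        if where[col] == -1:              # free column -> one basis vector
            v = [0] * n
            v[col] = 1
            for c2 in range(n):
                if where[c2] != -1:
                    v[c2] = (-A[where[c2]][col]) % p
            basis.append(v)
    return basis

def berlekamp_split(f, p):
    # Return one non-trivial monic factor of the square-free polynomial f, or f itself
    # if f is irreducible, using gcd(f, g - s) for g in the Berlekamp subalgebra.
    for g in null_space_basis(berlekamp_matrix(f, p), p):
        if sum(g[1:]) == 0:
            continue                      # constant polynomial: cannot split f
        for s in range(p):
            d = poly_gcd(f, poly_trim([(g[0] - s) % p] + g[1:]), p)
            if 0 < len(d) - 1 < len(f) - 1:
                return d
    return f

# Toy example (assumed): x^3 + x^2 + x + 1 = (x + 1)(x^2 + 1) over F_3.
print(berlekamp_split([1, 1, 1, 1], 3))   # prints [1, 1], i.e. the factor x + 1
```

Dividing out the returned factor and applying the same split to it and to the cofactor recursively yields the full decomposition into irreducibles, as the overview describes.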
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\mathbb{F}_q" }, { "math_id": 3, "text": "g(x)" }, { "math_id": 4, "text": "R = \\frac{\\mathbb{F}_q[x]}{\\langle f(x) \\rangle}." }, { "math_id": 5, "text": "g(x) \\in R" }, { "math_id": 6, "text": "g(x)^q \\equiv g(x) \\pmod{f(x)}.\\," }, { "math_id": 7, "text": "f(x) = \\prod_{s \\in \\mathbb{F}_q} \\gcd(f(x),g(x)-s)." }, { "math_id": 8, "text": "n \\times n" }, { "math_id": 9, "text": "\\mathcal{Q}" }, { "math_id": 10, "text": "\\mathcal{Q}=[q_{i,j}]" }, { "math_id": 11, "text": "q_{i,j}" }, { "math_id": 12, "text": "j" }, { "math_id": 13, "text": "x^{iq}" }, { "math_id": 14, "text": "x^{iq} \\equiv q_{i,n-1}x^{n-1} + q_{i,n-2}x^{n-2} + \\ldots + q_{i,0} \\pmod{f(x)}.\\," }, { "math_id": 15, "text": "g(x) = g_{n-1}x^{n-1}+g_{n-2}x^{n-2} + \\ldots + g_0,\\," }, { "math_id": 16, "text": "g = (g_0, g_1, \\ldots, g_{n-1}).\\," }, { "math_id": 17, "text": "g\\mathcal{Q}" }, { "math_id": 18, "text": "g(x)^q" }, { "math_id": 19, "text": "g(\\mathcal{Q}-I)=0" }, { "math_id": 20, "text": "I" }, { "math_id": 21, "text": "\\mathcal{Q}-I" }, { "math_id": 22, "text": " \\mathbb{F}_q " }, { "math_id": 23, "text": " q = p^m " }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": " \\mathbb{F}_p[y]/(g(y)) " }, { "math_id": 26, "text": " f(x) \\in \\mathbb{F}_q[x] " }, { "math_id": 27, "text": " f(x) = f_1(x) \\ldots f_n(x) " }, { "math_id": 28, "text": " \\sigma: \\mathbb{F}_q[x]/(f(x)) \\to \\prod_i \\mathbb{F}_q[x]/(f_i(x)) " }, { "math_id": 29, "text": " x \\to x^p " }, { "math_id": 30, "text": " \\sigma " }, { "math_id": 31, "text": " \\text{Fix}_p(R) = \\{ f \\in R : f^p = f \\} " }, { "math_id": 32, "text": " \\text{Fix}_p( \\mathbb{F}_q[x]/(f(x)) ) \\to \\prod_{i = 1}^n \\text{Fix}_p( \\mathbb{F}_q[x]/(f_i(x)) ) " }, { "math_id": 33, "text": " \\text{Fix}_p( \\mathbb{F}_q[x]/(f_i(x)) )" }, { "math_id": 34, "text": " \\text{Fix}_p( \\mathbb{F}_q[x]/(f(x)) ) " }, { "math_id": 35, "text": " p " }, { "math_id": 36, "text": " f(x) " }, { "math_id": 37, "text": " \\mathbb{F}_p " }, { "math_id": 38, "text": " \\mathbb{F}_p[x,y]/(f,g) " }, { "math_id": 39, "text": " (x^i y^j)^p " }, { "math_id": 40, "text": " x,y " }, { "math_id": 41, "text": " g \\in \\text{Fix}_p( \\mathbb{F}_q[x]/(f(x)) ) \\setminus \\mathbb{F}_p " }, { "math_id": 42, "text": " a \\in \\mathbb{F}_p " }, { "math_id": 43, "text": " i,j " }, { "math_id": 44, "text": " g - a = 0 \\mod f_i" }, { "math_id": 45, "text": " g - a \\not = 0 \\mod f_j " }, { "math_id": 46, "text": " g - a " }, { "math_id": 47, "text": " p " }, { "math_id": 48, "text": " a " }, { "math_id": 49, "text": " \\mathbb{F}_p^* " }, { "math_id": 50, "text": " 1/2 " }, { "math_id": 51, "text": " x \\to x^{ \\frac{ p -1}{2}} " }, { "math_id": 52, "text": " 1 " }, { "math_id": 53, "text": " -1 " }, { "math_id": 54, "text": " g \\in \\text{Fix}_p( \\mathbb{F}_q[x]/f(x)) " }, { "math_id": 55, "text": " g^{ \\frac{ p - 1}{2}} - 1 " }, { "math_id": 56, "text": "\\mathbb{F}_{p^n}" }, { "math_id": 57, "text": "p" }, { "math_id": 58, "text": "n\\geq 2" }, { "math_id": 59, "text": "\\mathbb{F}_{p}" } ]
https://en.wikipedia.org/wiki?curid=6057100
6057201
Steroid sulfatase
Protein-coding gene in the species Homo sapiens Steroid sulfatase (STS), or steryl-sulfatase (EC 3.1.6.2), formerly known as arylsulfatase C, is a sulfatase enzyme involved in the metabolism of steroids. It is encoded by the "STS" gene. Reactions. This enzyme catalyses the following chemical reaction 3β-hydroxyandrost-5-en-17-one 3-sulfate + H2O formula_0 3β-hydroxyandrost-5-en-17-one + sulfate Also acts on some related steryl sulfates. Function. The protein encoded by this gene catalyzes the conversion of sulfated steroid precursors to the free steroid. This includes DHEA sulfate, estrone sulfate, pregnenolone sulfate, and cholesterol sulfate, all to their unconjugated forms (DHEA, estrone, pregnenolone, and cholesterol, respectively). The encoded protein is found in the endoplasmic reticulum, where it is present as a homodimer. Clinical significance. A congenital deficiency in the enzyme is associated with X-linked ichthyosis, a scaly-skin disease affecting roughly 1 in every 2,000 to 6,000 males. The excessive skin scaling and hyperkeratosis is caused by a lack of breakdown and thus accumulation of cholesterol sulfate, a steroid that stabilizes cell membranes and adds cohesion, in the outer layers of the skin. Genetic deletions including STS are associated with an increased risk of developmental and mood disorders (and associated traits), and of atrial fibrillation or atrial flutter in males. Both steroid sulfatase deficiency and common genetic risk variants within STS may confer increased atrial fibrillation risk. Cardiac arrhythmia in STS deficiency may be related to abnormal development of the interventricular septum or interatrial septum. Blood-clotting abnormalities may occur more frequently in males with XLI and female carriers. Knockdown of STS gene expression in human skin cell cultures affects pathways associated with skin function, brain and heart development, and blood-clotting that may be relevant for explaining the skin condition and increased likelihood of ADHD/autism, cardiac arrhythmias and disorders of hemostasis in XLI. Steroid sulfates like DHEA sulfate and estrone sulfate serve as large biologically inert reservoirs for conversion into androgens and estrogens, respectively, and hence are of significance for androgen- and estrogen-dependent conditions like prostate cancer, breast cancer, endometriosis, and others. A number of clinical trials have been performed with inhibitors of the enzyme that have demonstrated clinical benefit, particularly in oncology and so far up to Phase II. The non-steroidal drug Irosustat has been the most studied to date. Inhibitors. Inhibitors of STS include irosustat, estrone sulfamate (EMATE), estradiol sulfamate (E2MATE), and danazol. The most potent inhibitors are based around the aryl sulfamate pharmacophore and it is thought that such compounds irreversibly modify the active site formylglycine residue of steroid sulfatase. Names. Steryl-sulfatase is also known as "arylsulfatase", "steroid sulfatase", "sterol sulfatase", "dehydroepiandrosterone sulfate sulfatase", "arylsulfatase C", "steroid 3-sulfatase", "steroid sulfate sulfohydrolase", "dehydroepiandrosterone sulfatase", "pregnenolone sulfatase", "phenolic steroid sulfatase", "3-beta-hydroxysteroid sulfate sulfatase", as well as by its systematic name "steryl-sulfate sulfohydrolase". References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=6057201
605727
Kronecker's theorem
In mathematics, Kronecker's theorem is a theorem about diophantine approximation, introduced by Leopold Kronecker (1884). Kronecker's approximation theorem was first proved by L. Kronecker at the end of the 19th century. Since the latter half of the 20th century, it has also been related to the idea of the n-torus and the Mahler measure. In terms of physical systems, it has the consequence that planets in circular orbits moving uniformly around a star will, over time, assume all alignments, unless there is an exact dependency between their orbital periods. Statement. Kronecker's theorem is a result in diophantine approximations applying to several real numbers "xi", for 1 ≤ "i" ≤ "n", that generalises Dirichlet's approximation theorem to multiple variables. The classical Kronecker approximation theorem is formulated as follows. "Given real "n"-tuples formula_0 and formula_1, the condition: " formula_2 "holds if and only if for any formula_3 with" formula_4 "the number formula_5 is also an integer." In plainer language, the first condition states that the tuple formula_6 can be approximated arbitrarily well by linear combinations of the formula_7s (with integer coefficients) and integer vectors. For the case of formula_8 and formula_9, Kronecker's approximation theorem can be stated as follows. For any formula_10 with formula_11 irrational and formula_12, there exist integers formula_13 and formula_14 with formula_15, such that formula_16 Relation to tori. In the case of "N" numbers, taken as a single "N"-tuple and point "P" of the torus "T" = "RN/ZN", the closure of the subgroup &lt;"P"&gt; generated by "P" will be finite, or some torus "T′" contained in "T". The original Kronecker's theorem (Leopold Kronecker, 1884) stated that the necessary condition for "T′" = "T", which is that the numbers "xi" together with 1 should be linearly independent over the rational numbers, is also sufficient. Here it is easy to see that if some linear combination of the "xi" and 1 with non-zero rational number coefficients is zero, then the coefficients may be taken as integers, and a character χ of the group "T" other than the trivial character takes the value 1 on "P". By Pontryagin duality we have "T′" contained in the kernel of χ, and therefore not equal to "T". In fact a thorough use of Pontryagin duality here shows that the whole Kronecker theorem describes the closure of &lt;"P"&gt; as the intersection of the kernels of the χ with χ("P") = 1. This gives an (antitone) Galois connection between monogenic closed subgroups of "T" (those with a single generator, in the topological sense), and sets of characters with kernel containing a given point. Not all closed subgroups occur as monogenic; for example a subgroup that has a torus of dimension ≥ 1 as connected component of the identity element, and that is not connected, cannot be such a subgroup. The theorem leaves open the question of how well (uniformly) the multiples "mP" of "P" fill up the closure. In the one-dimensional case, the distribution is uniform by the equidistribution theorem.
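For the one-dimensional special case stated above, a brute-force search makes the theorem concrete. The sketch below looks for integers p and q with q > 0 such that |αq − p − β| < ε; the example values α = √2, β = 0.3, ε = 10⁻⁴, the search bound and the function name are illustrative assumptions only.

```python
import math

def kronecker_approx(alpha, beta, eps, q_max=10**6):
    """Search for integers p, q (q > 0) with |alpha*q - p - beta| < eps.

    A brute-force illustration of the one-dimensional case of Kronecker's
    approximation theorem; alpha is assumed irrational.
    """
    for q in range(1, q_max + 1):
        p = round(alpha * q - beta)          # best integer choice of p for this q
        if abs(alpha * q - p - beta) < eps:
            return p, q
    return None

alpha, beta, eps = math.sqrt(2), 0.3, 1e-4
result = kronecker_approx(alpha, beta, eps)
if result:
    p, q = result
    print(f"q = {q}, p = {p}, |alpha*q - p - beta| = {abs(alpha * q - p - beta):.2e}")
```

Because α is irrational, the theorem guarantees such p, q exist for every ε > 0, so the search terminates once q_max is taken large enough; by the equidistribution remark above, a hit is typically found after on the order of 1/ε values of q.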
[ { "math_id": 0, "text": "\\alpha_i=(\\alpha_{i 1},\\dots,\\alpha_{i n})\\in\\mathbb{R}^n, i=1,\\dots,m " }, { "math_id": 1, "text": "\\beta=(\\beta_1,\\dots,\\beta_n)\\in \\mathbb{R}^n" }, { "math_id": 2, "text": "\\forall \\epsilon > 0 \\, \\exists q_i, p_j \\in \\mathbb Z : \\biggl| \\sum^m_{i=1}q_i\\alpha_{ij}-p_j-\\beta_j\\biggr|<\\epsilon, 1\\le j\\le n" }, { "math_id": 3, "text": "r_1,\\dots,r_n\\in\\mathbb{Z},\\ i=1,\\dots,m" }, { "math_id": 4, "text": "\\sum^n_{j=1}\\alpha_{ij}r_j\\in\\mathbb{Z}, \\ \\ i=1,\\dots,m\\ ," }, { "math_id": 5, "text": "\\sum^n_{j=1}\\beta_jr_j" }, { "math_id": 6, "text": "\\beta = (\\beta_1, \\ldots, \\beta_n)" }, { "math_id": 7, "text": "\\alpha_i" }, { "math_id": 8, "text": "m=1" }, { "math_id": 9, "text": "n=1" }, { "math_id": 10, "text": "\\alpha, \\beta, \\epsilon \\in \\mathbb{R}" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "\\epsilon > 0" }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "q>0" }, { "math_id": 16, "text": "|\\alpha q - p - \\beta| < \\epsilon." } ]
https://en.wikipedia.org/wiki?curid=605727
6057273
Sulfatase
Class of enzymes which break up sulfate esters by hydrolysis In biochemistry, sulfatases EC 3.1.6.- are a class of enzymes of the esterase class that catalyze the hydrolysis of sulfate esters into an alcohol and a bisulfate: formula_0 These may be found on a range of substrates, including steroids, carbohydrates and proteins. Sulfate esters may be formed from various alcohols and amines. In the latter case the resultant "N"-sulfates can also be termed sulfamates. Sulfatases play important roles in the cycling of sulfur in the environment, in the degradation of sulfated glycosaminoglycans and glycolipids in the lysosome, and in remodelling sulfated glycosaminoglycans in the extracellular space. Together with sulfotransferases, sulfatases form the major catalytic machinery for the synthesis and breakage of sulfate esters. Occurrence and importance. Sulfatases are found in lower and higher organisms. In higher organisms they are found in intracellular and extracellular spaces. Steroid sulfatase is distributed in a wide range of tissues throughout the body, enabling sulfated steroids synthesized in the adrenals and gonads to be desulfated following distribution through the circulation system. Many sulfatases are localized in the lysosome, an acidic digestive organelle found within the cell. Lysosomal sulfatases cleave a range of sulfated carbohydrates including sulfated glycosaminoglycans and glycolipids. Genetic defects in sulfatase activity can arise through mutations in individual sulfatases and result in certain lysosomal storage disorders with a spectrum of phenotypes ranging from defects in physical and intellectual development. Three-dimensional structure. The following sulfatases have been shown to be structurally related based on their sequence homology: Human proteins containing this domain. ARSA; ARSB; ARSD; ARSF; ARSG; ARSH; ARSI; ARSJ; ARSK; ARSL; GALNS; GNS; IDS; PIGG; SGSH; STS; SULF1; SULF2; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{R-OSO3 + H2O} \\quad \\xrightarrow[\\text{sulfatase}]{} \\quad \\ce{R-OH + HSO4-}" } ]
https://en.wikipedia.org/wiki?curid=6057273
60574741
Striation (fatigue)
Striations are marks produced on the fracture surface that show the incremental growth of a fatigue crack. A striation marks the position of the crack tip at the time it was made. The term "striation" generally refers to "ductile striations" which are rounded bands on the fracture surface separated by depressions or fissures and can have the same appearance on both sides of the mating surfaces of the fatigue crack. Although some research has suggested that many loading cycles are required to form a single striation, it is now generally thought that each striation is the result of a single loading cycle. The presence of striations is used in failure analysis as an indication that a fatigue crack has been growing. Striations are generally not seen when a crack is small even though it is growing by fatigue, but will begin to appear as the crack becomes larger. Not all periodic marks on the fracture surface are striations. The size of a striation for a particular material is typically related to the magnitude of the loading characterised by stress intensity factor range, the mean stress and the environment. The width of a striation is indicative of the overall crack growth rate but can be locally faster or slower on the fracture surface. Striation features. The study of the fracture surface is known as fractography. Images of the crack can be used to reveal features and understand the mechanisms of crack growth. While striations are fairly straight, they tend to curve at the ends allowing the direction of crack growth to be determined from an image. Striations generally form at different levels in metals and are separated by a "tear band" between them. Tear bands are approximately parallel to the direction of crack growth and produce what is known as a "river pattern", so called, because it looks like the diverging pattern seen with river flows. The source of the river pattern converges to a single point that is typically the origin of the fatigue failure. Striations can appear on both sides of the mating fracture surface. There is some dispute as to whether striations produced on both sides of the fracture surface match peak-to-peak or peak-to-valley. The shape of striations may also be different on each side of the fracture surface. Striations do not occur uniformly over all of the fracture surface and many areas of a fatigue crack may be devoid of striations. Striations are most often observed in metals but also occur in plastics such as Poly(methyl_methacrylate). Small striations can be seen with the aid of a scanning electron microscope. Once the size of a striation is over 500 nm (resolving wavelength of light), they can be seen with an optical microscope. The first image of striations was taken by Zapffe and Worden in 1951 using an optical microscope. The width of a striation indicates the local rate of crack growth and is typical of the overall rate of growth over the fracture surface. The rate of growth can be predicted with a crack growth equation such as the Paris-Erdogan equation. Defects such as inclusions and grain boundaries may locally slow down the rate of growth. "Variable amplitude" loads produce striations of different widths and the study of these striation patterns has been used to understand fatigue. Although various cycle counting methods can be used to extract the equivalent constant amplitude cycles from a variable amplitude sequence, the striation pattern differs from the cycles extracted using the rainflow counting method. 
The height of a striation has been related to the "stress ratio" formula_0 of the applied loading cycle, where formula_1 and is thus a function of the minimum formula_2 and maximum formula_3 stress intensity of the applied loading cycle. The striation profile depends on the degree of loading and unloading in each cycle. The unloading part of the cycle causing plastic deformation on the surface of the striation. Crack extension only occurs from the rising part of the load cycle. Striation-like features. Other periodic marks on the fracture surface can be mistaken for striations. Marker bands. Variable amplitude loading causes cracks to change the plane of growth and this effect can be used to create "marker bands" on the fracture surface. When a number of constant amplitude cycles are applied they may produce a plateau of growth on the fracture surface. Marker bands (also known as "progression marks" or "beach marks") may be produced and readily identified on the fracture surface even though the magnitude of the loads may too small to produce individual striations. In addition, marker bands may also be produced by large loads (also known as overloads) producing a region of "fast fracture" on the crack surface. Fast fracture can produce a region of rapid extension before blunting of the crack tip stops the growth and further growth occurs during fatigue. Fast fracture occurs through a process of microvoid coalescence where failures initiate around inter-metallic particles. The F111 aircraft was subjected to periodic proof testing to ensure any cracks present were smaller than a certain critical size. These loads left marks on the fracture surface that could be identified, allowing the rate of intermediate growth occurring in service to be measured. Marks also occur from a change in the environment where oil or corrosive environments can deposit or from excessive heat exposure and colour the fracture surface up to the current position of the crack tip. Marker bands may be used to measure the instantaneous rate of growth of the applied loading cycles. By applying a repeated sequence separated by loads that produce a distinctive pattern the growth from each segment of loading can be measured using a microscope in a technique called "quantitative fractography", the rate of growth for loading segments of "constant amplitude" or "variable amplitude" loading can be directly measured from the fracture surface. Tyre tracks. "Tyre tracks" are the marks on the fracture surface produced by something making an impression onto the surface from the repeated opening and closing of the crack faces. This can be produced by either a particle that becomes trapped between the crack faces or the faces themselves shifting and directly contacting the opposite surface. Coarse striations. "Coarse striations" are a general rumpling of the fracture surface and do not correspond to a single loading cycle and are therefore not considered to be true striations. They are produced instead of regular striations when there is insufficient atmospheric moisture to form hydrogen on the surface of the crack tip in aluminium alloys, thereby preventing the slip planes activation. The wrinkles in the surface cross over and so do not represent the position of the crack tip. Striation formation in aluminium. Environmental influence. Striations are often produced in high strength aluminium alloys. 
In these alloys, the presence of "water vapour" is necessary to produce ductile striations, although too much water vapour will produce "brittle striations" also known as "cleavage striations". Brittle striations are flatter and larger than ductile striations produced with the same load. There is sufficient water vapour present in the atmosphere to generate ductile striations. Cracks growing internally are isolated from the atmosphere and grow in a vacuum. When water vapour deposits onto the freshly exposed aluminium fracture surface, it dissociates into hydroxides and atomic hydrogen. Hydrogen interacts with the crack tip affecting the appearance and size of the striations. The growth rate increases typically by an order of magnitude, with the presence of water vapour. The mechanism is thought to be hydrogen embrittlement as a result of hydrogen being absorbed into the plastic zone at the crack tip. When an internal crack breaks through to the surface, the rate of crack growth and the fracture surface appearance will change due to the presence of water vapour. Coarse striations occur when a fatigue crack grows in a vacuum such as when growing from an internal flaw. Cracking plane. In aluminium (a face-centred cubic material), cracks grow close to low index planes such as the {100} and the {110} planes (see Miller Index). Both of these planes bisect a pair of slip planes. Crack growth involving a single slip plane is term "Stage I" growth and crack growth involving two slip planes is termed "Stage II" growth. Striations are typically only observed in Stage II growth. Brittle striations are typically formed on {100} planes. Models of striation formation. There have been many models developed to explain the process of how a striation is formed and their resultant shape. Some of the significant models are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
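Since striation spacing is indicative of the crack growth increment per cycle, the Paris-Erdogan relation mentioned above gives a rough way to estimate it. The following Python sketch is only illustrative: the constants C and m are assumed order-of-magnitude values of the kind often quoted for aluminium alloys, not data taken from a specific source.

```python
def paris_growth_per_cycle(delta_K, C=1e-11, m=3.0):
    """Paris-Erdogan law: da/dN = C * (delta_K)**m.

    With delta_K in MPa*sqrt(m) and the illustrative constants above,
    the result is in metres per cycle.  Under constant-amplitude loading
    this increment is comparable to the striation spacing.
    """
    return C * delta_K ** m

if __name__ == "__main__":
    for dK in (5.0, 10.0, 20.0):   # stress-intensity ranges, MPa*sqrt(m)
        print(f"delta_K = {dK:5.1f} -> da/dN ~ {paris_growth_per_cycle(dK):.1e} m/cycle")
```

With these assumed constants, a stress-intensity range of 10 MPa·√m gives roughly 10 nm per cycle, a spacing that would need a scanning electron microscope to resolve.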
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "R = K_\\text{min}/K_\\text{max}" }, { "math_id": 2, "text": "K_\\text{min}" }, { "math_id": 3, "text": "K_\\text{max}" } ]
https://en.wikipedia.org/wiki?curid=60574741
6057493
Superoperator
In physics, a linear operator acting on a vector space of linear operators In physics, a superoperator is a linear operator acting on a vector space of linear operators. Sometimes the term refers more specially to a completely positive map which also preserves or does not increase the trace of its argument. This specialized meaning is used extensively in the field of quantum computing, especially quantum programming, as they characterise mappings between density matrices. The use of the super- prefix here is in no way related to its other use in mathematical physics. That is to say superoperators have no connection to supersymmetry and superalgebra which are extensions of the usual mathematical concepts defined by extending the ring of numbers to include Grassmann numbers. Since superoperators are themselves operators the use of the super- prefix is used to distinguish them from the operators upon which they act. Left/Right Multiplication. Fix a choice of basis for the underlying Hilbert space formula_0. Defining the left and right multiplication superoperators by formula_1 and formula_2 respectively one can express the commutator as formula_3 Next we vectorize the matrix formula_4 which is the mapping formula_5 where formula_6 denotes a vector in the Fock-Liouville space. The matrix representation of formula_7 is then calculated by using the same mapping formula_8 indicating that formula_9. Similarly one can show that formula_10. These representations allows us to calculate things like eigenvalues associated to superoperators. These eigenvalues are particularly useful in the field of open quantum systems, where the real parts of the Lindblad superoperator's eigenvalues will indicate whether a quantum system will relax or not. Example von Neumann Equation. In quantum mechanics the Schrödinger Equation, formula_11 expresses the time evolution of the state vector formula_12 by the action of the Hamiltonian formula_13 which is an operator mapping state vectors to state vectors. In the more general formulation of John von Neumann, statistical states and ensembles are expressed by density operators rather than state vectors. In this context the time evolution of the density operator is expressed via the von Neumann equation in which density operator is acted upon by a superoperator formula_14 mapping operators to operators. It is defined by taking the commutator with respect to the Hamiltonian operator: formula_15 where formula_16 As commutator brackets are used extensively in QM this explicit superoperator presentation of the Hamiltonian's action is typically omitted. Example Derivatives of Functions on the Space of Operators. When considering an operator valued function of operators formula_17 as for example when we define the quantum mechanical Hamiltonian of a particle as a function of the position and momentum operators, we may (for whatever reason) define an “Operator Derivative” formula_18 as a superoperator mapping an operator to an operator. For example, if formula_19 then its operator derivative is the superoperator defined by: formula_20 This “operator derivative” is simply the Jacobian matrix of the function (of operators) where one simply treats the operator input and output as vectors and expands the space of operators in some basis. The Jacobian matrix is then an operator (at one higher level of abstraction) acting on that vector space (of operators). See also. Lindblad superoperator References. &lt;templatestyles src="Reflist/styles.css" /&gt;
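The left/right-multiplication picture above is easy to check numerically. The sketch below, assuming NumPy and the row-major vectorization |ρ⟩⟩ defined in the text, verifies that A⊗I and I⊗Aᵀ reproduce Aρ, ρA and hence the commutator; the dimension and the random matrices are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                     # Hilbert-space dimension (arbitrary)
A   = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
I = np.eye(d)

def vec(M):
    """Row-major vectorization: |rho>> = sum_ij rho_ij |i> (x) |j>."""
    return M.reshape(-1)

L_A = np.kron(A, I)      # matrix of the left-multiplication superoperator
R_A = np.kron(I, A.T)    # matrix of the right-multiplication superoperator

assert np.allclose(L_A @ vec(rho), vec(A @ rho))                     # L(A)[rho] = A rho
assert np.allclose(R_A @ vec(rho), vec(rho @ A))                     # R(A)[rho] = rho A
assert np.allclose((L_A - R_A) @ vec(rho), vec(A @ rho - rho @ A))   # commutator
print("superoperator identities verified for d =", d)
```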
[ { "math_id": 0, "text": "\\{| i\\rangle\\}_i" }, { "math_id": 1, "text": "\\mathcal{L}(A)[\\rho] = A\\rho" }, { "math_id": 2, "text": "\\mathcal{R}(A)[\\rho] = \\rho A" }, { "math_id": 3, "text": "[A,\\rho] = \\mathcal{L}(A)[\\rho] - \\mathcal{R}(A)[\\rho]." }, { "math_id": 4, "text": "\\rho " }, { "math_id": 5, "text": "\\rho = \\sum_{i,j} \\rho_{ij} | i\\rangle \\langle j | \\to |\\rho\\rangle\\!\\rangle = \\sum_{i,j} \\rho_{ij} | i\\rangle\\otimes | j \\rangle," }, { "math_id": 6, "text": "|\\cdot\\rangle\\!\\rangle" }, { "math_id": 7, "text": "\\mathcal{L}(A)" }, { "math_id": 8, "text": " A\\rho = \\sum_{i,j} \\rho_{ij}A | i\\rangle\\langle j| \\to \\sum_{i,j} \\rho_{ij}(A| i\\rangle)\\otimes| j \\rangle = \\sum_{i,j} \\rho_{ij}(A\\otimes I) (| i\\rangle\\otimes| j \\rangle) = (A\\otimes I)|\\rho\\rangle\\!\\rangle = \\mathcal{L}(A)[\\rho], " }, { "math_id": 9, "text": "\\mathcal{L}(A) = A \\otimes I " }, { "math_id": 10, "text": "\\mathcal{R}(A) = (I\\otimes A^T) " }, { "math_id": 11, "text": "i \\hbar \\frac{\\partial}{\\partial t}\\Psi = \\hat H \\Psi" }, { "math_id": 12, "text": "\\psi" }, { "math_id": 13, "text": "\\hat{H}" }, { "math_id": 14, "text": "\\mathcal{H}" }, { "math_id": 15, "text": "i \\hbar \\frac{\\partial}{\\partial t}\\rho = \\mathcal{H}[\\rho]" }, { "math_id": 16, "text": "\\mathcal{H}[\\rho] = [\\hat{H},\\rho] \\equiv \\hat{H}\\rho - \\rho\\hat{H}" }, { "math_id": 17, "text": "\\hat{H} = \\hat{H}(\\hat{P})" }, { "math_id": 18, "text": " \\frac{\\Delta \\hat{H}}{\\Delta \\hat{P}} " }, { "math_id": 19, "text": " H(P) = P^3 = PPP" }, { "math_id": 20, "text": " \\frac{\\Delta H}{\\Delta P}[X] = X P^2 + PXP + P^2X" } ]
https://en.wikipedia.org/wiki?curid=6057493
6057975
Depletion (accounting)
Reduction of the quantity of natural resources Depletion is an accounting and tax concept used most often in the mining, timber, and petroleum industries. It is similar to depreciation in that it is a cost recovery system for accounting and tax reporting: "The depletion deduction" allows an owner or operator to account for the reduction of a product's reserves. Types of depletion. For tax purposes, the two types of depletion are percentage depletion and cost depletion. For mineral property, the method leading to the largest deduction is generally used. For standing timber, use of the cost depletion method is required. Depletion, for both accounting purposes and United States tax purposes, is a method of recording the gradual expense or use of natural resources over time. Depletion is the using up of natural resources by mining, quarrying, drilling, or felling. According to the IRS Newswire, over 50 percent of oil and gas extraction businesses use cost depletion to figure their depletion deduction. Mineral property includes oil and gas wells, mines, and other natural resource deposits (including geothermal deposits). For that purpose, property is each separate interest businesses own in each mineral deposit in each separate tract or parcel of land. Businesses can treat two or more separate interests as one property or as separate properties. Percentage depletion. To figure percentage depletion, a certain percentage, specified for each mineral, is multiplied by gross income from the property during the tax year. The rates to be used and other conditions and qualifications for oil and gas wells are discussed below under "Independent Producers and Royalty Owners" and under "Natural Gas Wells". Rates and other rules for percentage depletion of other specific minerals are found later in "Mines and Geothermal Deposits". Cost depletion. Cost depletion is an accounting method by which costs of natural resources are allocated to depletion over the period that make up the life of the asset. Cost depletion is computed by estimating the total quantity of mineral or other resources acquired and assigning a proportionate amount of the total resource cost to the quantity extracted in the period. For example, assume Big Texas Oil, Co. had discovered a large reserve of oil and estimates that the oil well will produce 200,000 barrels of oil. If the company invests $100,000 to extract the oil and extracts 10,000 barrels the first year, the depletion deduction is $5,000 ($100,000 X 10,000/200,000). Cost depletion for tax purposes may be completely different from cost depletion for accounting purposes: formula_0 CD = Cost Depletion. S = Units sold in the current year R = Reserves on hand at the end of the current year AB = Adjusted basis of the property at the end of the current year Accounting. Adjusted basis is the basis at end of year adjusted for prior years depletion in cost or percentage. It automatically allows for adjustments to the basis during the taxable year. By using the units remaining at the end of the year, the adjustment allows for revised estimates of the reserves. Depletion is based upon sales and not production. Units are considered sold in the year the proceeds are taxable under the taxpayer's accounting method. Reserves. Reserves generally include proven developed reserves and "probable" or "prospective" reserves if there is reasonable evidence to have believed that such quantities existed at that time. Example. 
Suppose producer X has capitalized costs of $40,000 on property A, originally consisting of the lease bonus, capitalized exploration costs, and some capitalized carrying costs. The lease has been producing for several years, and during this time X has claimed $10,000 of allowable depletion. In 2009, X's share of production sold was 40,000 barrels, and an engineer's report indicated that 160,000 barrels could be recovered after December 31, 2009. The cost depletion for this lease would be calculated as follows: Cost depletion = S/(R+S) × AB or AB/(R+S) × S, so CD = 40,000/(40,000 + 160,000) × ($40,000 − $10,000) = 40,000/200,000 × $30,000 = $6,000. References. <templatestyles src="Reflist/styles.css" />
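The cost depletion calculation in the example above reduces to a one-line formula; the short Python sketch below simply encodes CD = AB × S/(R + S) and reproduces the $6,000 figure (the function and argument names are arbitrary).

```python
def cost_depletion(units_sold, reserves_remaining, adjusted_basis):
    """Cost depletion: CD = AB * S / (R + S).

    S  = units sold in the current year
    R  = reserves on hand at the end of the current year
    AB = adjusted basis of the property at the end of the current year
    """
    return adjusted_basis * units_sold / (reserves_remaining + units_sold)

if __name__ == "__main__":
    # Producer X: 40,000 barrels sold, 160,000 barrels still recoverable,
    # adjusted basis of $40,000 less $10,000 prior depletion = $30,000.
    print(f"${cost_depletion(40_000, 160_000, 40_000 - 10_000):,.0f}")  # -> $6,000
```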
[ { "math_id": 0, "text": "CD = S/(R+S) \\times AB = AB/(R+S) \\times S " } ]
https://en.wikipedia.org/wiki?curid=6057975
6058
Collagen
Most abundant structural protein in animals Collagen () is the main structural protein in the extracellular matrix of a body's various connective tissues. As the main component of connective tissue, it is the most abundant protein in mammals. 25% to 35% of a mammalian body's protein content is collagen. Amino acids are bound together to form a triple helix of elongated fibril known as a collagen helix. The collagen helix is mostly found in connective tissue such as cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis, and Vitamin E improves the production of collagen. Depending upon the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes 1% to 2% of muscle tissue and accounts for 6% of the weight to skeletal muscle. The fibroblast is the most common cell creating collagen in a body. Gelatin, which is used in food and industry, is collagen that was irreversibly hydrolyzed using heat, basic solutions, or weak acids. Etymology. The name "collagen" comes from the Greek κόλλα ("kólla"), meaning "glue", and suffix -γέν, "-gen", denoting "producing". Human types. Over 90% of the collagen in the human body is type I collagen. However, as of 2011, 28 types of human collagen have been identified, described, and divided into several groups according to the structure they form. All of the types contain at least one triple helix. The number of types shows collagen's diverse functionality. The five most common types are: In human biology. Cardiac. The collagenous cardiac skeleton which includes the four heart valve rings, is histologically, elastically and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease. Bone grafts. As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting as it has a triple helical structure, making it a very strong molecule. 
It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple helical structure of collagen prevents it from being broken down by enzymes, it enables adhesiveness of cells and it is important for the proper assembly of the extracellular matrix. Tissue regeneration. Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally. Reconstructive surgical uses. Collagens are widely employed in the construction of artificial skin substitutes used in the management of severe burns and wounds. These collagens may be derived from bovine, equine, porcine, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances. Wound healing. Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. Wound deterioration, followed sometimes by procedures such as amputation, can thus be avoided. Collagen is a natural product and is thus used as a natural wound dressing and has properties that artificial wound dressings do not have. It is resistant against bacteria, which is of vital importance in a wound dressing. It helps to keep the wound sterile, because of its natural ability to fight infection. When collagen is used as a burn dressing, healthy granulation tissue is able to form very quickly over the burn, helping it to heal rapidly. Throughout the four phases of wound healing, collagen performs the following functions: Basic research. Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models. Biology. The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in the amino acid sequence of collagen are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline. The average amino acid composition for fish and mammal skin is given. Synthesis. First, a three-dimensional stranded structure is assembled with amino acids glycine and proline as its principal components. This is not yet collagen but is its precursor: procollagen. Procollagen is then modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and the formation of a triple helix structure to collagen. Because the hydroxylase enzymes performing these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by two different enzymes: prolyl 4-hydroxylase and lysyl hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. 
The synthesis of collagen occurs inside and outside of a cell. The formation of collagen which results in fibrillary collagen (most common form) is discussed here. Meshwork collagen, which is often involved in the formation of filtration systems, is another common form of collagen. All types of collagens are triple helices, and the differences lie in the make-up of their alpha peptides created in step 2. Amino acids. Collagen has an unusual amino acid composition and sequence: Cortisol stimulates degradation of (skin) collagen into amino acids. Collagen I formation. Most collagen forms in a similar manner, but the following process is typical for type I: Molecular structure. A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline. A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for the 1/3 of the sequence, this means approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed due to the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important with respect to stabilization of the collagen helix as this allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin. Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding. Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. 
For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals leads to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications. The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern. There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure "in situ". These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions. Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. 
40 nm gaps between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, which is hydroxylapatite (approximately) Ca10(OH)2(PO4)6. Type I collagen gives bone its tensile strength. Associated disorders. Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production. In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma. Diseases. One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level. Osteogenesis imperfecta – Caused by a mutation in "type 1 collagen", dominant autosomal disorder, results in weak bones and irregular connective tissue, some cases can be mild while others can be lethal. Mild cases have lowered levels of collagen type 1 while severe cases have structural defects in collagen. Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in "type 2 collagen", further research is being conducted to confirm this. Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in "collagen type 3". Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder, those with the condition have problems with their kidneys and eyes, loss of hearing can also develop during the childhood or adolescent years. Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link. Characteristics. Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called "collagen fibers" are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic. Mechanical Properties. Collagen is a complex hierarchical material with mechanical properties that vary significantly across different scales. 
On the molecular scale, atomistic and course-grained modeling simulations, as well as numerous experimental methods, have led to several estimates of the Young’s modulus of collagen at the molecular level. Only above a certain strain rate is there a strong relationship between elastic modulus and strain rate, possibly due to the large number of atoms in a collagen molecule. The length of the molecule is also important, where longer molecules have lower tensile strengths than shorter ones due to short molecules having a large proportion of hydrogen bonds being broken and reformed. On the fibrillar scale, collagen has a lower modulus compared to the molecular scale, and varies depending on geometry, scale of observation, deformation state, and hydration level. By increasing the crosslink density from zero to 3 per molecule, the maximum stress the fibril can support increases from 0.5 GPa to 6 GPa. Limited tests have been done on the tensile strength of the collagen fiber, but generally it has been shown to have a lower Young’s modulus compared to fibrils. When studying the mechanical properties of collagen, tendon is often chosen as the ideal material because it is close to a pure and aligned collagen structure. However, at the macro, tissue scale, the vast number of structures that collagen fibers and fibrils can be arranged into results in highly variable properties. For example, tendon has primarily parallel fibers, whereas skin consists of a net of wavy fibers, resulting in a much higher strength and lower ductility in tendon compared to skin. The mechanical properties of collagen at multiple hierarchical levels is given. Collagen is known to be a viscoelastic solid. When the collagen fiber is modeled as two Kelvin-Voigt models in series, each consisting of a spring and a dashpot in parallel, the strain in the fiber can be modeled according to the following equation: formula_0 where α, β, and γ are defined materials properties, εD is fibrillar strain, and εT is total strain. Uses. Collagen has a wide variety of applications, from food to medical. In the medical industry, it is used in cosmetic surgery and burn surgery. In the food sector, one use example is in casings for sausages. If collagen is subject to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, containing a different secondary structure to the normal collagen polyproline II (PPII) of random coils. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process. From the Greek for glue, "kolla", the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. 
Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia. Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs. Cosmetics. Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen cremes are also widely sold even though collagen cannot penetrate the skin because its fibers are too large. Collagen is a vital protein in skin, hair, nails, and other tissues. Its production decreases with age and factors like sun damage and smoking. Collagen supplements, derived from sources like fish and cattle, are marketed to improve skin, hair, and nails. Studies show some skin benefits, but these supplements often contain other beneficial ingredients, making it unclear if collagen alone is effective. There's minimal evidence supporting collagen's benefits for hair and nails. Overall, the effectiveness of oral collagen supplements is not well-proven, and focusing on a healthy lifestyle and proven skincare methods like sun protection is recommended. History. The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer, producing several competing models, although correctly dealing with the conformation of each individual peptide chain. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century. The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has been long known to be hexagonal. As with its monomeric structure, several conflicting models propose either that the packing arrangement of collagen molecules is 'sheet-like', or is microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed "microfibril". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
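The Kelvin-Voigt fibre relation quoted in the mechanical-properties section above can be explored numerically. The sketch below integrates the stated derivative of fibrillar strain with respect to total strain using the trapezoidal rule; the values used for α, β, γ, the strain rate and the strain range are purely illustrative assumptions, not measured collagen parameters.

```python
import numpy as np

def fibril_strain(total_strain, strain_rate, alpha, beta, gamma, n=1000):
    """Integrate d(eps_D)/d(eps_T) = alpha + (beta - alpha)*exp(-gamma*eps_T/rate)
    over total strain eps_T to obtain the fibrillar strain eps_D (trapezoidal rule)."""
    eps_T = np.linspace(0.0, total_strain, n)
    deriv = alpha + (beta - alpha) * np.exp(-gamma * eps_T / strain_rate)
    eps_D = np.concatenate(([0.0], np.cumsum(0.5 * (deriv[1:] + deriv[:-1]) * np.diff(eps_T))))
    return eps_T, eps_D

if __name__ == "__main__":
    # Illustrative parameters only: beta sets the instantaneous strain partition
    # taken up by the fibril, alpha the long-time one, gamma the relaxation rate.
    eps_T, eps_D = fibril_strain(total_strain=0.05, strain_rate=0.01,
                                 alpha=0.4, beta=0.9, gamma=1.0)
    print(f"fibrillar strain at 5% total strain: {eps_D[-1]:.4f}")
```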
[ { "math_id": 0, "text": "\\frac{d\\epsilon_D}{d\\epsilon_T}=\\alpha + (\\beta - \\alpha) exp[-\\gamma\\frac{\\epsilon_T}{\\dot{\\epsilon_T}}]" } ]
https://en.wikipedia.org/wiki?curid=6058
60582221
Magnetic diffusion
Type of motion of magnetic fields Magnetic diffusion refers to the motion of magnetic fields, typically in the presence of a conducting solid or fluid such as a plasma. The motion of magnetic fields is described by the magnetic diffusion equation and is due primarily to induction and diffusion of magnetic fields through the material. The magnetic diffusion equation is a partial differential equation commonly used in physics. Understanding the phenomenon is essential to magnetohydrodynamics and has important consequences in astrophysics, geophysics, and electrical engineering. Equation. The magnetic diffusion equation (also referred to as the "induction equation") is formula_0 where formula_1 is the permeability of free space and formula_2 is the electrical conductivity of the material, which is assumed to be constant. formula_3 denotes the (non-relativistic) velocity of the plasma. The first term on the right hand side accounts for effects from induction of the plasma, while the second accounts for diffusion. The latter acts as a dissipation term, resulting in a loss of magnetic field energy to heat. The relative importance of the two terms is characterized by the magnetic Reynolds number, formula_4. In the case of a non-uniform conductivity the magnetic diffusion equation is formula_5 however, it becomes significantly harder to solve. Derivation. Starting from the generalized Ohm's law: formula_6 and the curl equations for small displacement currents (i.e. low frequencies) formula_7 formula_8 substitute formula_9 into the Ampere-Maxwell law to get formula_10 Taking the curl of the above equation and substituting into Faraday's law, formula_11 This expression can be simplified further by writing it in terms of the "i"-th component of formula_12 and the Levi-Cevita tensor formula_13: formula_14 Using the identity formula_15 and recalling formula_16, the cross products can be eliminated: formula_17 Written in vector form, the final expression is formula_18 where formula_19 is the material derivative. This can be rearranged into a more useful form using vector calculus identities and formula_20: formula_21 In the case formula_22, this becomes a diffusion equation for the magnetic field, formula_23 where formula_24 is the magnetic diffusivity. Limiting Cases. In some cases it is possible to neglect one of the terms in the magnetic diffusion equation. This is done by estimating the magnetic Reynolds number formula_25 where formula_26 is the diffusivity, formula_27 is the magnitude of the plasma's velocity and formula_28 is a characteristic length of the plasma. Relation to Skin Effect. At low frequencies, the skin depth formula_29 for the penetration of an AC electromagnetic field into a conductor is: formula_30 Comparing with the formula for formula_31, the skin depth is the diffusion length of the field over one period of oscillation: formula_32 Examples and Visualization. For the limit formula_33, the magnetic field lines become "frozen in" to the motion of the conducting fluid. A simple example illustrating this behavior has a sinusoidally-varying shear flow formula_34 with a uniform initial magnetic field formula_35. The equation for this limit, formula_36, has the solution formula_37 As can be seen in the figure to the right, the fluid drags the magnetic field lines so that they obtain the sinusoidal character of the flow field. For the limit formula_38, the magnetic diffusion equation formula_39 is just a vector-valued form of the heat equation. 
For a localized initial magnetic field (e.g. Gaussian distribution) within a conducting material, the maxima and minima will asymptotically decay to a value consistent with Laplace's equation for the given boundary conditions. This behavior is illustrated in the figure below. Diffusion Times for Stationary Conductors. For stationary conductors formula_40 with simple geometries a time constant called magnetic diffusion time can be derived. Different one-dimensional equations apply for conducting slabs and conducting cylinders with constant magnetic permeability. Also, different diffusion time equations can be derived for nonlinear saturable materials such as steel. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
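The diffusivity, skin depth and magnetic Reynolds number introduced above translate directly into a few lines of code. In the sketch below the copper conductivity, the 50 Hz frequency and the flow velocity and length scale are nominal, illustrative values.

```python
import math

MU_0 = 4e-7 * math.pi                 # permeability of free space, H/m

def magnetic_diffusivity(sigma):
    """eta = 1 / (mu_0 * sigma), in m^2/s."""
    return 1.0 / (MU_0 * sigma)

def skin_depth(sigma, freq_hz):
    """delta = sqrt(2 / (mu_0 * sigma * omega)) for an AC field of frequency f."""
    return math.sqrt(2.0 / (MU_0 * sigma * 2.0 * math.pi * freq_hz))

def magnetic_reynolds(v, L, sigma):
    """R_m = v * L / eta, comparing the induction and diffusion terms."""
    return v * L / magnetic_diffusivity(sigma)

if __name__ == "__main__":
    sigma_cu = 5.8e7                  # nominal conductivity of copper, S/m
    print(f"eta(copper)          ~ {magnetic_diffusivity(sigma_cu):.2e} m^2/s")
    print(f"skin depth at 50 Hz  ~ {skin_depth(sigma_cu, 50.0) * 1e3:.1f} mm")
    print(f"R_m (v=1 m/s, L=1 m) ~ {magnetic_reynolds(1.0, 1.0, sigma_cu):.1e}")
```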
[ { "math_id": 0, "text": "\\frac{\\partial \\vec{B}}{\\partial t} = \\nabla \\times \\left[\\vec{v} \\times \\vec{B}\\right] + \\frac{1}{\\mu_0 \\sigma}\\nabla^2 \\vec{B}" }, { "math_id": 1, "text": " \\mu_0 " }, { "math_id": 2, "text": " \\sigma " }, { "math_id": 3, "text": " \\vec{v} " }, { "math_id": 4, "text": " R_m " }, { "math_id": 5, "text": "\\frac{\\partial \\vec{B}}{\\partial t} = \\nabla \\times \\left[\\vec{v} \\times \\vec{B}\\right] - \\frac{1}{\\mu_0} \\nabla \\times \\left[\\frac{1}{\\sigma} \\nabla \\times \\vec{B} \\right] " }, { "math_id": 6, "text": "\\vec{J} = \\sigma \\left(\\vec{E}+\\vec{v}\\times\\vec{B} \\right)" }, { "math_id": 7, "text": "\\nabla\\times\\vec{B} = \\mu_0 \\vec{J} + \\epsilon_0 \\mu_0 \\frac{\\partial \\vec{E}}{\\partial t} \\approx \\mu_0 \\vec{J}" }, { "math_id": 8, "text": "\\nabla\\times\\vec{E} = -\\frac{\\partial \\vec{B}}{\\partial t}" }, { "math_id": 9, "text": "\\vec{J}" }, { "math_id": 10, "text": "\\frac{1}{\\mu_0 \\sigma} \\nabla\\times\\vec{B} = \\vec{E} + \\vec{v}\\times\\vec{B} \\quad\\Rightarrow\\quad \\vec{E} = \\frac{1}{\\mu_0 \\sigma}\\nabla\\times\\vec{B}-\\vec{v}\\times\\vec{B}." }, { "math_id": 11, "text": "\\nabla\\times\\vec{E} = \\nabla\\times\\left(\\frac{1}{\\mu_0 \\sigma}\\nabla\\times\\vec{B} - \\vec{v}\\times\\vec{B}\\right) = -\\frac{\\partial \\vec{B}}{\\partial t}." }, { "math_id": 12, "text": "\\vec{B}" }, { "math_id": 13, "text": "\\varepsilon_{ijk}" }, { "math_id": 14, "text": "\\begin{align}\n-\\frac{\\partial B_i}{\\partial t} & = \\varepsilon_{ijk} \\partial_j \\left( \\frac{1}{\\mu_0 \\sigma}\\varepsilon_{klm}\\partial_l B_m - \\varepsilon_{klm}v_l B_m \\right)\\\\\n& = \\varepsilon_{kij} \\varepsilon_{klm} \\left(\\frac{1}{\\mu_0 \\sigma}\\partial_j\\partial_l B_m - \\left(v_l \\partial_j B_m + B_m \\partial_j v_l \\right)\\right)\n\\end{align}" }, { "math_id": 15, "text": "\\varepsilon_{kij} \\varepsilon_{klm}= \\delta_{il}\\delta_{jm}-\\delta_{im}\\delta_{jl}" }, { "math_id": 16, "text": "\\partial_j B_j = 0" }, { "math_id": 17, "text": "\\begin{align}\n-\\frac{\\partial B_i}{\\partial t} & = \\frac{1}{\\mu_0 \\sigma}\\left(\\partial_i\\partial_j B_j - \\partial_j \\partial_j B_i\\right) - \\left(v_i \\partial_j B_j - v_j \\partial_j B_i\\right) - \\left(B_j \\partial_j v_i - B_i \\partial_j v_j\\right) \\\\\n& = -\\frac{1}{\\mu_0 \\sigma}\\partial_j \\partial_j B_i + v_j \\partial_j B_i - \\left(B_j \\partial_j v_i - B_i \\partial_j v_j\\right)\n\\end{align}" }, { "math_id": 18, "text": "\\frac{\\partial \\vec{B}}{\\partial t}+\\left(\\vec{v}\\cdot\\nabla\\right)\\vec{B} = \\frac{D\\vec{B}}{Dt} = \\left(\\vec{B}\\cdot\\nabla\\right)\\vec{v}-\\vec{B}\\left(\\nabla\\cdot\\vec{v}\\right)+\\frac{1}{\\mu_0 \\sigma}\\nabla^2 \\vec{B}" }, { "math_id": 19, "text": "\\frac{D}{Dt}=\\frac{\\partial}{\\partial t}+\\vec{v}\\cdot\\nabla" }, { "math_id": 20, "text": " \\nabla \\cdot \\vec{B}=0 " }, { "math_id": 21, "text": "\\frac{\\partial \\vec{B}}{\\partial t}= \\nabla \\times [\\vec{v} \\times \\vec{B}] + \\frac{1}{\\mu_0 \\sigma}\\nabla^2 \\vec{B}" }, { "math_id": 22, "text": "\\vec{v}=0" }, { "math_id": 23, "text": "\\frac{\\partial \\vec{B}}{\\partial t} = \\frac{1}{\\mu_0 \\sigma}\\nabla^2 \\vec{B} = \\eta\\nabla^2 \\vec{B}" }, { "math_id": 24, "text": "\\eta = \\frac{1}{\\mu_0 \\sigma}" }, { "math_id": 25, "text": " R_m = \\frac{v L}{\\eta} " }, { "math_id": 26, "text": " \\eta " }, { "math_id": 27, "text": " v " }, { "math_id": 28, "text": " L " }, { "math_id": 29, "text": "\\delta" }, { "math_id": 30, 
"text": "\\delta = \\sqrt{\\frac{2}{\\mu \\sigma \\omega}}" }, { "math_id": 31, "text": "\\eta" }, { "math_id": 32, "text": "\\delta = \\sqrt{\\frac{2\\eta}{\\omega}} = \\sqrt{\\frac{\\eta T}{\\pi}}" }, { "math_id": 33, "text": "R_m \\gg 1" }, { "math_id": 34, "text": "\\vec{v} = v_0\\sin(k y)\\hat{x}" }, { "math_id": 35, "text": "\\vec{B}\\left(\\vec{r},0\\right) = B_0\\hat{y}" }, { "math_id": 36, "text": "\\frac{\\partial \\vec{B}}{\\partial t} = \\nabla \\times [\\vec{v} \\times \\vec{B}] " }, { "math_id": 37, "text": "\\vec{B}\\left(\\vec{r},t\\right) = B_0 k v_0 t\\cos(k y)\\hat{x}+B_0\\hat{y}" }, { "math_id": 38, "text": "R_m \\ll 1" }, { "math_id": 39, "text": "\\frac{\\partial \\vec{B}}{\\partial t} = \\frac{1}{\\mu_0 \\sigma} \\nabla^2 \\vec{B}" }, { "math_id": 40, "text": "(R_m=0)" } ]
https://en.wikipedia.org/wiki?curid=60582221
6058229
Tetradecagon
Polygon with 14 edges In geometry, a tetradecagon or tetrakaidecagon or 14-gon is a fourteen-sided polygon. Regular tetradecagon. A "regular tetradecagon" has Schläfli symbol {14} and can be constructed as a quasiregular truncated heptagon, t{7}, which alternates two types of edges. The area of a regular tetradecagon of side length "a" is given by formula_0 Construction. As 14 = 2 × 7, a regular tetradecagon cannot be constructed using a compass and straightedge. However, it is constructible using neusis with use of the angle trisector, or with a marked ruler, as shown in the following two examples. Symmetry. The "regular tetradecagon" has Dih14 symmetry, order 28. There are 3 subgroup dihedral symmetries: Dih7, Dih2, and Dih1, and 4 cyclic group symmetries: Z14, Z7, Z2, and Z1. These 8 symmetries can be seen in 10 distinct symmetries on the tetradecagon, a larger number because the lines of reflections can either pass through vertices or edges. John Conway labels these by a letter and group order. Full symmetry of the regular form is r28 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines path through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g14 subgroup has no degrees of freedom but can be seen as directed edges. The highest symmetry irregular tetradecagons are d14, an isogonal tetradecagon constructed by seven mirrors which can alternate long and short edges, and p14, an isotoxal tetradecagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular tetradecagon. Dissection. Coxeter states that every zonogon (a 2"m"-gon whose opposite sides are parallel and of equal length) can be dissected into "m"("m"-1)/2 parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the "regular tetradecagon", "m"=7, and it can be divided into 21: 3 sets of 7 rhombs. This decomposition is based on a Petrie polygon projection of a 7-cube, with 21 of 672 faces. The list OEIS:  defines the number of solutions as 24698, including up to 14-fold rotations and chiral forms in reflection. Numismatic use. The regular tetradecagon is used as the shape of some commemorative gold and silver Malaysian coins, the number of sides representing the 14 states of the Malaysian Federation. Related figures. A tetradecagram is a 14-sided star polygon, represented by symbol {14/n}. There are two regular star polygons: {14/3} and {14/5}, using the same vertices, but connecting every third or fifth points. There are also three compounds: {14/2} is reduced to 2{7} as two heptagons, while {14/4} and {14/6} are reduced to 2{7/2} and 2{7/3} as two different heptagrams, and finally {14/7} is reduced to seven digons. A notable application of a fourteen-pointed star is in the flag of Malaysia, which incorporates a yellow {14/6} tetradecagram in the top-right corner, representing the unity of the thirteen states with the federal government. Deeper truncations of the regular heptagon and heptagrams can produce isogonal (vertex-transitive) intermediate tetradecagram forms with equally spaced vertices and two edge lengths. 
Other truncations can form double covering polygons 2{p/q}, namely: t{7/6}={14/6}=2{7/3}, t{7/4}={14/4}=2{7/2}, and t{7/2}={14/2}=2{7}. Isotoxal forms. An isotoxal polygon can be labeled as {pα} with outermost internal angle α, and a star polygon {("p"/"q")α}, where "q" is a winding number, gcd("p","q")=1, and "q"<"p". Isotoxal tetradecagons have "p"=7, and since 7 is prime all solutions, q=1..6, are polygons. Petrie polygons. Regular skew tetradecagons exist as Petrie polygons for many higher-dimensional polytopes, shown in skew orthogonal projections. References. <templatestyles src="Reflist/styles.css" />
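As a quick numerical sanity check of the area formula quoted in the lead, the short Python snippet below (an illustrative aside, not part of the original article; the function name is arbitrary) evaluates A = (14/4)·a²·cot(π/14) for a unit side length.

```python
import math

def regular_tetradecagon_area(a: float) -> float:
    """Area of a regular 14-gon with side length a: (14/4) * a^2 * cot(pi/14)."""
    return (14 / 4) * a * a / math.tan(math.pi / 14)

# For a = 1 this evaluates to about 15.3345, matching the approximation above.
print(regular_tetradecagon_area(1.0))
```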
[ { "math_id": 0, "text": "A = \\frac{14}{4}a^2\\cot\\frac{\\pi}{14} \\approx 15.3345a^2" } ]
https://en.wikipedia.org/wiki?curid=6058229
60583225
Spatial cloaking
Spatial cloaking is a privacy mechanism that is used to satisfy specific privacy requirements by blurring users’ exact locations into cloaked regions. This technique is usually integrated into applications in various environments to minimize the disclosure of private information when users request location-based service. Since the database server does not receive the accurate location information, a set including the satisfying solution would be sent back to the user. General privacy requirements include K-anonymity, maximum area, and minimum area. Background. With the emergence and popularity of location-based services, people are getting more personalized services, such as getting the names and locations of nearby restaurants and gas stations. Receiving these services requires users to send their positions either directly or indirectly to the service provider. A user's location information could be shared more than 5000 times in two weeks. Therefore, this convenience also exposes users’ privacy to certain risks, since the attackers may illegally identify the users’ locations and even further exploit their personal information. Continuously tracking users' location has not only been identified as a technical issue, but also a privacy concern as well. It has been realized that Quasi-identifiers, which refer to a set of information attributes, can be used to re-identify the user when linked with some external information. For example, the social security number could be used to identify a specific user by adversaries, and the combined disclosure of birth date, zip code, and gender can uniquely identify a user. Thus, multiple solutions have been proposed to preserve and enhance users’ privacy when using location-based services. Among all the proposed mechanisms, spatial cloaking is one of those which has been widely accepted and revised, thus having been integrated into many practical applications. Location privacy. Location privacy is usually considered falling into the category of information privacy, though there is little consensus on the definition of location privacy. There are often three aspects of location information: identity, location (spatial information), and time (temporal information). Identity usually refers to a user's name, email address, or any characteristic which makes a user distinguishable. For example, Pokémon Go requires a consistent user identity, since users are required to log in. Spatial information is considered as the main approach to determine a location. Temporal information can be separated into real-time and non-real time and is usually described as a time stamp with a place. If a link is established between them, then the location privacy is considered violated. Accessing personal location data has been raised as a severe privacy concern, even with personal permission. Therefore, privacy-aware management of location information has been identified as an essential challenge, which is designed to provide privacy protection against abuse of location information. The overall idea of preserving location privacy is to introduce enough noise and quantization to reduce the chances of successful attacks. Spatial crowdsourcing uses devices that has GPS (global positioning system) and collects information. Data retrieved includes location data that can be used to analyze maps and local spatial characteristics. In recent years, researchers have been making a connection between social aspects and technological aspects regarding location information. 
For example, if co-location information is considered as the data which potential attackers would get and take into consideration, the location privacy is decreased by more than 60%. Also, by a constant report of a user's location information, a movement profile could be constructed for this specific user based on statistical analysis, and a large amount of information could be exploited and generated from this profile such as user's office location, medical records, financial status, and political views. Therefore, more and more researchers have taken account of the social influence in their algorithms, since this socially networked information is accessible to the public and might be used by potential attackers. History. In order to meet user's requirements for location privacy in the process of data transportation, researchers have been exploring and investigating models to address the disclosure of private information. The secure-multi-party model is constructed based on the idea of sharing accurate information among n parties. Each party has access to a particular segment of the precise information and at the same time being prevented from acquiring the other shares of the data. However, the computation problem is introduced in the process, since a large amount of data processing is required to satisfy the requirement. The minimal information sharing model is introduced to use cryptographic techniques to perform join and intersection operations. However, the inflexibility of this model to fit into other queries makes it hard to be satisfying to most practical applications. The untrusted third-party model is adopted in peer-to-peer environments. The most popular model right now is the trusted third-party model. Some of the practical applications have already adopted the idea of a trusted third party into their services to preserve privacy. For example, Anonymizer is integrated into various websites, which could give anonymous surfing service to its users. Also, when purchasing through PayPal, users are not required to provide their credit card information. Therefore, by introducing a trusted-third-party, users’ private information is not directly exposed to the service providers. Approaches for preserving location information. The promising approach of preserving location privacy is to report data on users' behavior and at the same time protect identity and location privacy. Several methods have been investigated to enhance the performances of location-preserving techniques, such as location perturbation and the report of landmark objects. Location perturbation. The idea of location perturbation is to replace the exact location information with a coarser grained spatial range, and thus uncertainty would be introduced when the adversaries try to match the user to either a known location identity or external observation of location identity. Location perturbation is usually satisfied by using spatial cloaking, temporal cloaking, or location obfuscation. Spatial and temporal cloaking refers to the wrong or imprecise location and time reported to the service providers, instead of the exact information. For example, location privacy could be enhanced by increasing the time between location reports, since higher report frequencies makes reidentification more possible to happen through data mining. There are other cases when the report of location information is delayed until the visit of K users is identified in that region. 
However, this approach could affect the service reported by the service providers since the data they received are not accurate. The accuracy and timelessness issues are usually discussed in this approach. Also, some attacks have been recognized based on the idea of cloaking and break user privacy. Landmark objects. Based on the idea of landmark objects, a particular landmark or a significant object is reported to the service provider, instead of a region. Avoid location tracking. In order to avoid location tracking, usually less or no location information would be reported to the service provider. For example, when requesting weather, a zip code instead of a tracked location would be accurate enough for the quality of the service received. Environment. Centralized scheme. A centralized scheme is constructed based on a central location anonymizer (anonymizing server) and is considered as an intermediate between the user and the service provider. Generally, the responsibilities of a location anonymizer include tracking users' exact location, blurring user specific location information into cloaked areas and communicate with the service provider. For example, one of the methods to achieve this is by replacing the correct network addresses with fake-IDs before the information are forward to the service provider. Sometimes user identity is hidden, while still allowing the service provider to authenticate the user and possibly charge the user for the service. These steps are usually achieved through spatial cloaking or path confusion. Except in some cases where the correct location information are sent for high service quality, the exact location information or temporal information are usually modified to preserve user privacy. Serving as an intermediate between the user and location-based server, location anonymizer generally conducts the following activities: The location anonymizer could also be considered as a trusted-third party since it is trusted by the user with the accurate location information and private profile stored in the location anonymizer. However, this could also expose users’ privacy into great risks at the same time. First, since the anonymizer keeps tracking users' information and has access to the users’ exact location and profile information, it is usually the target of most attackers and thus under higher risks Second, the extent to which users trust the location anonymizers could be essential. If a fully trusted third party is integrated into the algorithm, user location information would be reported continuously to the location anonymizer, which may cause privacy issues if the anonymizer is compromised. Third, the location anonymizer may lead to a performance bottleneck when a large number of requests are presented and required to be cloaked. This is because the location anonymizer is responsible for maintaining the number of users in a region in order to provide an acceptable level of service quality. Distributed scheme (decentralized scheme). In a distributed environment, users anonymize their location information through fixed communication infrastructures, such as base stations. Usually, a certification server is introduced in a distributed scheme where users are registered. Before participating in this system, users are required to obtain a certificate which means that they are trusted. 
Therefore, every time after user request a location-based service and before the exact location information is forward to the server, the auxiliary users registered in this system collaborate to hide the precise location of the user. The number of assistant users involved in cloaking this region is based on K-anonymity, which is usually set be the specific user. In the cases where there are not enough users nearby, S-proximity is generally adopted to generate a high number of paired user identities and location information for the actual user to be indistinguishable in the specific area. The other profiles and location information sent to the service provider are sometimes also referred to as dummies. However, the complexity of the data structure which is used to anonymize the location could result in difficulties when applying this mechanism to highly dynamic location-based mobile applications. Also, the issue of large computation and communication is posed to the environment. Peer-to-peer environment. A peer-to-peer (P2P) environment relies on the direct communication and information exchange between devices in a community where users could only communicate through P2P multi-hop routing without fixed communication infrastructures. The P2P environment aims to extend the scope of cellular coverage in a sparse environment. In this environment, peers have to trust each other and work together, since their location information would be reported to each other when a cloaked area is constructed to achieve the desired K-anonymity during the requesting for location-based services. Researchers have been discussing some privacy requirements and security requirements which would make the privacy-preserving techniques appropriate for the peer-to-peer environment. For example, authentication and authorization are required to secure and identify the user and thus making authorized users distinguishable from unauthorized users. Confidentiality and integrity make sure that only those who are authorized have access to the data transmitted between peers, and the transmitted information cannot be modified. Some of the drawbacks identified in a peer-to-peer environment are the communication costs, not enough users and threats of potential malicious users hiding in the community. Mobile environments. Mobile devices have been considered as an essential tool for communication, and mobile computing has thus become a research interest in recent years. From online purchase to online banking, mobile devices have frequently been connected to service providers for online activities, and at the same time sending and receiving information. Generally, mobile users can receive very personal services from anywhere at any time through location-based services. In mobile devices, Global Positioning System (GPS) is the most commonly used component to provide location information. Besides that, Global System for Mobile Communications (GSM) and WiFi signals could also help with estimating locations. There are generally two types of privacy concerns in mobile environments, data privacy and contextual privacy. Usually, location privacy and identity privacy are included in the discussion of contextual privacy in a mobile environment, while the data transferred between various mobile devices is discussed under data privacy. In the process of requesting location-based services and exchanging location data, both the quality of data transferred and the safety of information exchanged could be potentially exposed to malicious people. 
Privacy requirements. No matter what the specific privacy-preserving solution is integrated to cloak a particular region in which the service requester stays. It is usually constructed from several angles to satisfy different privacy requirements better. These standards are either adjusted by the users or are decided by the application designers. Some of the privacy parameters include K-anonymity, entropy, minimum area, and maximum area. K-anonymity. The concept of K-anonymity was first introduced in relational data privacy to guarantee the usefulness of the data and the privacy of users, when data holders want to release their data. K-anonymity usually refers to the requirement that the information of the user should be indistinguishable from a minimum of formula_0people in the same region, with k being any real number. Thus, the disclosed location scope would be expected to keep expanding until formula_1users could be identified in the region and these formula_1people form an anonymity set. Usually, the higher the K-anonymity, the stricter the requirements, the higher the level of anonymity. If K-anonymity is satisfied, then the possibility of identifying the exact user would be around formula_2 which subjects to different algorithms, and therefore the location privacy would be effectively preserved. Usually, if the cloaking region is designed to be more significant when the algorithm is constructed, the chances of identifying the exact service requester would be much lower even though the precise location of the user is exposed to the service providers, let alone the attackers' abilities to run complex machine learning or advanced analysis techniques. Some approaches have also been discussed to introduce more ambiguity to the system, such as historical K-anonymity, p-sensitivity, and l-diversity. The idea of historical K-anonymity is proposed to guarantee the moving objects by making sure that there are at least formula_0users who share the same historical requests, which requires the anonymizer to track not only the current movement of the user but also the sequence location of the user. Therefore, even user's historical location points are disclosed, the adversaries could not distinguish the specific user from a group of potential users. P-sensitivity is used to ensure that the critical attributes such as the identity information have at least formula_3different values within formula_1users. Moreover, l-diversity aims to guarantee the user is unidentifiable from l different physical locations. However, setting a large K value would also requires additional spatial and temporal cloaking which leads to a low resolution of information, which in turn could lead to degraded quality of service. Minimum area size. Minimum area size refers to the smallest region expanded from the exact location point which satisfies the specific privacy requirements. Usually, the higher the privacy requirements, the bigger the area is required to increase the complicity of distinguishing the exact location of users. Also, the idea of minimum area is particularly important in dense areas when K-anonymity might not be efficient to provide the guaranteed privacy-preserving performance. For example, if the requestor is in a shopping mall which has a promising discount, there might be a lot of people around him or her, and thus this could be considered a very dense environment. 
Under such a situation, a large K-anonymity such as L=100 would only correspond to a small region, since it does not require a large area to include 100 people near the user. This might result in an inefficient cloaked area since the space where the user could potentially reside is smaller compared with the situation of the same level of K-anonymity, yet people are more scattered from each other. Maximum area size. Since there is a tradeoff relationship between quality of service and privacy requirements in most location-based services, sometimes a maximum area size is also required. This is because a sizable cloaked area might introduce too much inaccuracy to the service received by the user, since increasing the reported cloaked area also increases the possible satisfying results to the user's request. These solutions would match the specific requirements of the user, yet are not necessarily applicable to the users’ exact location. Applications. The cloaked region generated by the method of spatial cloaking could fit into multiple environments, such as snapshot location, continuous location, spatial networks, and wireless sensor networks. Sometimes, the algorithms which generate a cloaked area are designed to fit into various frameworks without changing the original coordinate. In fact, with the specification of the algorithms and well-establishment of most generally adopted mechanisms, more privacy-preserving techniques are designed specifically for the desired environment to fit into different privacy requirements better. Geosocial applications. Geosocial applications are generally designed to provide a social interaction based on location information. Some of the services include collaborative network services and games, discount coupons, local friend recommendation for dining and shopping, and social rendezvous. For example, Motion Based allows users to share exercise path with others. Foursquare was one of the earliest location-based applications to enable location sharing among friends. Moreover, SCVNGR was a location-based platform where users could earn points by going to places. Despite the privacy requirements such as K-anonymity, maximum area size, and minimum area size, there are other requirements regarding the privacy preserved in geosocial applications. For example, location and user unlinkability require that the service provider should not be able to identify the user who conducts the same request twice or the correspondence between a given cloaked area and its real-time location. Also, the location data privacy requires that the service provider should not have access to the content of data in a specific location. For example, LoX is mainly designed to satisfy these privacy requirements of geosocial applications. Location-based services. With the popularity and development of global positioning system (GPS) and wireless communication, location-based information services have been in high growth in recent years. It has already been developed and deployed in both the academia and the practical sphere. Many practical applications have integrated the idea and techniques of location-based services, such as mobile social networks, finding places of interest (POI), augmented reality (AR) games, awareness of location-based advertising, transportation service, location tracking, and location-aware services. 
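To make the privacy requirements described above concrete, the following Python sketch shows one simple way a centralized location anonymizer could grow a square region around a requester until the K-anonymity and minimum-area constraints hold, while refusing to exceed the maximum area. The expansion strategy, data layout, class and function names, and parameter values are illustrative assumptions rather than any specific published algorithm.

```python
from dataclasses import dataclass

@dataclass
class CloakingPolicy:
    k: int            # K-anonymity: at least k users must share the region
    min_area: float   # minimum allowed region area
    max_area: float   # maximum allowed region area (quality-of-service bound)

def cloak(user_xy, other_users, policy, step=50.0):
    """Grow a square cloaked region around the requester until the privacy
    requirements hold, or give up once the maximum area would be exceeded."""
    ux, uy = user_xy
    half = max(step, (policy.min_area ** 0.5) / 2)   # start at the minimum area
    while (2 * half) ** 2 <= policy.max_area:
        inside = sum(1 for (x, y) in other_users
                     if abs(x - ux) <= half and abs(y - uy) <= half)
        if inside + 1 >= policy.k:                   # +1 counts the requester
            return (ux - half, uy - half, ux + half, uy + half)
        half += step                                 # otherwise keep expanding
    return None  # K-anonymity cannot be met within the maximum area

# Example policy: 10-anonymity, 1 km^2 floor, 25 km^2 ceiling (units in metres).
policy = CloakingPolicy(k=10, min_area=1e6, max_area=25e6)
```

In a deployed system the anonymizer would also avoid centring the region exactly on the requester, since a predictable centre can itself leak the true location.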
These services usually require the service providers to analyze the received location information based on their algorithms and a database to come up with an optimum solution, and then report it back to the requesting user. Usually, the location-based services are requested either through snapshot queries or continuous queries. Snapshot queries generally require the report of an exact location at a specific time, such as “where is the nearest gas station?” while continuous queries need the tracking of location during a period of time, such as “constantly reporting the nearby gas stations.” With the advancement of global positioning systems and the development of wireless communication which are introduced in the extensive use of location-based applications, high risks have been placed on user privacy. Both the service providers and users are under the dangers of being attacked and information being abused. It has been reported that some GPS devices have been used to exploit personal information and stalk personal locations. Sometimes, only reporting location information would already indicate much private information. One of the attacks specific to location-based services is the space or time correlated inference attacks, in which the visited location is correlated with the particular time, and this could lead to the disclosure of private life and private business. Some of the popular location-based services include: Continuous location-based service Continuous location-based services require a constant report of location information to the service providers. During the process of requesting a continuous location-based service, pressure has been recognized on privacy leakage issues. Since the a series of cloaked areas are reported, with the advancing technological performances, a correlation could be generated between the blurred regions. Therefore, many types of research have been conducted addressing the location privacy issues in continuous location-based services. Snapshot location-based services While snapshot location generally refers to the linear relation between the specific location point and a point in the temporal coordinate. Some mechanisms have been proposed to either address the privacy-preserving issues in both of the two environments simultaneously or concentrate on fulfilling each privacy requirement respectively. For example, a privacy grid called a dynamic grid system is proposed to fit into both snapshot and continuous location-based service environments. Other privacy mechanisms. The existing privacy solutions generally fall into two categories: data privacy and context privacy. Besides addressing the issues in location privacy, these mechanisms might be applied to other scenarios. For example, tools such as cryptography, anonymity, obfuscation and caching have been proposed, discussed, and tested to better preserve user privacy. These mechanisms usually try to solve location privacy issues from different angles and thus fit into different situations. Concerns. Even though the effectiveness of spatial cloaking has been widely accepted and the idea of spatial cloaking has been integrated into multiple designs, there are still some concerns towards it. First, the two schemes of spatial cloaking both have their limitations. 
For example, in the centralized scheme, although users' other private information including identity has been cloaked, the location itself would be able to release sensitive information, especially when a specific user requests service for multiple times with the same pseudonym. In a decentralized scheme, there are issues with large computation and not enough peers in a region. Second, the ability of attackers requires a more in-depth consideration and investigation according to the advancement of technology such as machine learning and its connection with social relations, particularly the share of information online. Third, the credibility of a trusted-third-party has also been identified as one of the issues. There is a large number of software published on app markets every day, and some of them have not undergone a strict examination. Software bugs, configuration errors at the trusted-third-party and malicious administrators could expose private user data under high risks. Based on a study from 2010, two-thirds of all the trusted-third-party applications in the Android market are considered to be suspicious towards sensitive information. Fourth, location privacy has been recognized as a personalized requirement and is sensitive to various contexts. Customizing privacy parameters has been exploring in recent years since different people have different expectations on the amount of privacy preserved and sometimes the default settings do not fully satisfy user needs. Considering that there is often a trade-off relation between privacy and personalization and personalization usually leads to better service, people would have different preferences. In the situations where users can change the default configurations, accepting the default instead of customizing seems to be a more popular choice. Also, people's attitudes towards disclosing their location information could vary based on the service's usefulness, privacy safeguards, and the disclosed quantity etc. In most situations, people are weighing the price of privacy sharing and the benefits they received. Fifth, there are many protection mechanism proposed in literature yet few of them have been practically integrated into commercial applications. Since there is little analysis regarding the implementation of location privacy-preserving mechanisms, there is still a large gap between theory and privacy. Attack. During the process of exchanging data, the three main parties—the user, the server, and the networks—can be attacked by adversaries. The knowledge held by adversaries which could be used to carry out location attacks includes observed location information, precise location information, and context knowledge. The techniques of machine learning and big data have also led to an emerging trend in location privacy, and the popularity of smart devices has led to an increasing number of attacks. Some of the adopted approaches include the virus, the Trojan applications, and several cyber-attacks. Man-in-the-middle attacks usually occur in the mobile environment which assumes that all the information going through the transferring process from user to the service provider could be under attacks and might be manipulated further by attackers revealing more personal information. Cross-servicing attacks usually take place when users are using poorly protected wireless connectivity, especially in public places. 
Video-based attacks are more prevalent in mobile devices usually due to the use of Bluetooth, camera, and video capacities, since there are malicious software applications secretly recording users’ behavior data and reporting that information to a remote device. Stealthy Video Capture is one of the intentionally designed applications which spies an unconscious user and further report the information. Sensor sniffing attacks usually refer to the cases where intentionally designed applications are installed on a device. Under this situation, even adversaries do not have physical contact with the mobile device, users’ personal information would still under risks of being disclosed. In a localization attack, contextual knowledge is combined with observed location information to disclose a precise location. The contextual knowledge can also be combined with precise location information to carry out identity attacks. Integrating learning algorithms and other deep learning methods are posing a huge challenge to location privacy, along with the massive amount of data online. For example, current deep learning methods can come up with predictions about geolocations based on the personal photos from social networks and performs types of object detection based on their abilities to analyze millions of photos and videos. Regulations and policies. Policy approaches have also been discussed in recent years which intend to revise relevant guidelines or propose new regulations to better manage location-based service applications. The current technology state does not have a sufficiently aligned policies and legal environment, and there are efforts from both academia and industry trying to address this issue. Two uniformly accepted and well- established requirements are the users' awareness of location privacy policies in a specific service and their consents of sending their personal location to a service provider. Besides these two approaches, researchers have also been focusing on guarding the app markets, since an insecure app market would expose unaware users to several privacy risks. For example, there have been identified much malware in the Android app market, which are designed to carry cyber attacks on Android devices. Without effective and clear guidelines to regulate location information, it would generate both ethical and lawful problems. Therefore, many guidelines have been discussed in years recently, to monitor the use of location information. European data protection guideline. European data protection guideline was recently revised to include and specify the privacy of an individual's data and personally identifiable information (PIIs). These adjustments intend to make a safe yet effective service environment. Specifically, location privacy is enhanced by making sure that the users are fully aware and consented on the location information which would be sent to the service providers. Another important adjustment is that a complete responsibility would be given to the service providers when users’ private information is being processed. European Union's Directive. The European Union's "Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data" specifies that the limited data transfer to non-EU countries which are with "an adequate level of privacy protection". 
The notion of "explicit consent" is also introduced in the Directive, which stated that except for legal and contractual purpose, personal data might only be processed if the user has unambiguously given his or her consent. European Union's "Directive 2002/58/EC on privacy and electronic communication" explicitly defines location information, user consent requirements and corporate disposal requirement which helps to regulate and protect European citizens' location privacy. Under the situation when data are unlinkable to the user, the legal frameworks such as the EU Directive has no restriction on the collection of anonymous data. The electronic communications privacy act of 1986. The electronic communications privacy act discusses the legal framework of privacy protection and gives standards of law enforcement access to electronic records and communications. It is also very influential in deciding electronic surveillance issues. Global system for mobile communication association (GSMA). GSMA published a new privacy guideline, and some mobile companies in Europe have signed it and started to implement it so that users would have a better understanding of the information recorded and analyzed when using location-based services. Also, GSMA has recommended the operating companies to inform their customers about people who have access to the users’ private information. Cases. Corporate examples. Even though many privacy preserving mechanisms have not been integrated into common use due to effectiveness, efficiency, and practicality, some location-based service providers have started to address privacy issues in their applications. For example, Twitter enables its users to customize location accuracy. Locations posted in Glympse will automatically expire. Also, SocialRadar allows its users to choose to be anonymous or invisible when using this application. Google. It has been stated that Google does not meet the European Union’s data privacy law and thus increasing attention has been placed on the advocation of guidelines and policies regarding data privacy. Facebook. It has been arguing that less than a week after Facebook uses its “Places” feature, the content of that location information has been exploited by thieves and are used to conduct a home invasion. Court cases. United States v. Knotts case. In this case, the police used a beeper to keep track of the suspect's vehicle. After using the beeper alone to track the suspect, the officers secured a search warrant and confirmed that the suspect was producing illicit drugs in the van. The suspect tried to suppress the evidence based on the tracking device used during the monitoring process, but the court denied this. The court concluded that “A person traveling in an automobile on a public thouroughfare ["sic"] has no reasonable expectation of privacy in his movement from one place to another.” Nevertheless, the court reserved the discussion of whether twenty-four-hour surveillance would constitute a search. However, the cases using GPS and other tracking devices are different with this case, since GPS tracking can be conducted without human interaction, while the beeper is considered as a method to increase police's sensory perception through maintaining visual contact of the suspect. Police presence is required when using beepers yet is not needed when using GPS to conduct surveillance. Therefore, law enforcement agents are required to secure a warrant before obtaining vehicle's location information with the GPS tracking devices. 
United States v. Jones. In this case (https://www.oyez.org/cases/2011/10-1259), the police had a search warrant to install Global Positioning System on a respondent wife's car, while the actual installation was on the 11th day in Maryland, instead of the authorized installation district and beyond the approved ten days. The District Court ruled that the data recorded on public roads admissible since the respondent Jones had no reasonable exception of privacy in public streets, yet the D.C. Circuit reversed this through the violation of the Fourth Amendment of unwarranted use of GPS device. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k-1\n" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "1/k" }, { "math_id": 3, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=60583225
60584394
Intersective modifier
In linguistics, an intersective modifier is an expression which modifies another by delivering the intersection of their denotations. One example is the English adjective "blue", whose intersectivity can be seen in the fact that being a "blue pig" entails being both blue and a pig. By contrast, the English adjective "former" is non-intersective since a "former president" is neither former nor a president. When a modifier is intersective, its contribution to the sentence's truth conditions does not depend on the particular expression it modifies. This means that one can test whether a modifier is intersective by seeing whether it gives rise to valid reasoning patterns such as the following. With a non-intersective modifier such as "skillful", the equivalent deduction would not be valid. Modifiers can be ambiguous, having both intersective and nonintersective interpretations. For instance, the example below has an intersective reading on which Oleg is both beautiful and a dancer, but it also has a merely subsective reading on which Oleg dances beautifully but need not himself be beautiful. On a textbook semantics for modification, an intersective modifier denotes the set of individuals which have the property in question. When the modifier modifies a modifiee which also denotes a set of individuals, the resulting phrase denotes the intersection of their denotations. Such meanings can be composed by introducing an interpretation rule "Predicate Modification" which hard-codes intersectivity. Alternatively, this mode of composition can be delivered by standard "Function Application" if the modifier is given a higher semantic type, either lexically or by applying a type shifter. References. <templatestyles src="Reflist/styles.css" />
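The set-based treatment of intersective modification described above can be made concrete with a small extensional model in Python; the toy individuals and predicate extensions below are invented purely for illustration.

```python
# Denotations of one-place predicates modelled as sets of individuals.
blue = {"pig1", "car1", "ball1"}   # [[blue]]
pig = {"pig1", "pig2"}             # [[pig]]

# Predicate Modification: [[blue pig]] is the intersection of the two sets.
blue_pig = blue & pig
print(blue_pig)  # {'pig1'} -- a blue pig is both blue and a pig

# The same result via a higher-typed modifier composed by Function Application:
# [[blue]] shifted to a function from predicates to predicates (type <<e,t>,<e,t>>).
def blue_shifted(P):
    return lambda x: x in blue and P(x)

blue_pig_fa = {x for x in blue | pig if blue_shifted(lambda y: y in pig)(x)}
print(blue_pig_fa == blue_pig)  # True -- both modes of composition agree
```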
[ { "math_id": 0, "text": "[\\![ \\text{blue} ]\\!] = \\{x \\, | \\, x \\text{ is blue }\\}" }, { "math_id": 1, "text": "[\\![ \\text{blue pig} ]\\!] = \\{x \\, | \\, x \\text{ is blue }\\} \\cap \\{x \\, | \\, x \\text{ is a pig }\\}" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": "[\\![\\beta]\\!], [\\![\\gamma]\\!] \\in \\mathcal{D}_{\\langle e, t \\rangle}" }, { "math_id": 6, "text": "[\\![\\alpha]\\!] = \\lambda x_e \\, . \\, x \\in [\\![\\beta]\\!]\\cap[\\![\\gamma]\\!]" } ]
https://en.wikipedia.org/wiki?curid=60584394
6059135
Doppler echocardiography
Medical imaging technique of the heart Doppler echocardiography is a procedure that uses Doppler ultrasonography to examine the heart. An echocardiogram uses high frequency sound waves to create an image of the heart while the use of Doppler technology allows determination of the speed and direction of blood flow by utilizing the Doppler effect. An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, any abnormal communications between the left and right side of the heart, any leaking of blood through the valves (valvular regurgitation), calculation of the cardiac output and calculation of E/A ratio (a measure of diastolic dysfunction). Contrast-enhanced ultrasound-using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements. An advantage of Doppler echocardiography is that it can be used to measure blood flow within the heart without invasive procedures such as cardiac catheterization. In addition, with slightly different filter/gain settings, the method can measure tissue velocities by tissue Doppler echocardiography. The combination of flow and tissue velocities can be used for estimating left ventricular filling pressure, although only under certain conditions. Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives). However, the calculation result will end up identical. This procedure is frequently used to examine children's hearts for heart disease because there is no age or size requirement. 2D Doppler imaging. Unlike 1D Doppler imaging, which can only provide one-dimensional velocity and has dependency on the beam to flow angle, 2D velocity estimation using Doppler ultrasound is able to generate velocity vectors with axial and lateral velocity components. 2D velocity is useful even if complex flow conditions such as stenosis and bifurcation exist. There are two major methods of 2D velocity estimation using ultrasound: Speckle tracking and crossed beam Vector Doppler, which are based on measuring the time shifts and phase shifts respectively. Vector Doppler. Vector Doppler is a natural extension of the traditional 1D Doppler imaging based on phase shift. The phase shift is found by taking the autocorrelation between echoes from two consecutive firings. The main idea of Vector Doppler is to divide the transducer into three apertures: one at the center as the transmit aperture and two on each side as the receive apertures. The phase shifts measured from left and right apertures are combined to give the axial and lateral velocity components. The positions and the relative angles between apertures need to be tuned according to the depth of the vessel and the lateral position of the region of interest. Speckle tracking. Speckle tracking, which is a well-established method in video compression and other applications, can be used to estimate blood flow in ultrasound systems. The basic idea of speckle tracking is to find the best match of a certain speckle from one frame within a search region in subsequent frames. 
The decorrelation between frames is one of the major factors degrading its performance. The decorrelation is mainly caused by the different velocity of pixels within a speckle, as they do not move as a block. This is less severe when measuring the flow at the center, where the changing rate of the velocity is the lowest. The flow at the center usually has the largest velocity magnitude, called "peak velocity". It is the most needed information in some cases, such as diagnosing stenosis. There are mainly three methods of finding the best match: SAD (Sum of absolute difference), SSD (Sum of squared difference) and Cross correlation. Assume formula_0 is a pixel in the kernel and formula_1 is the mapped pixel shifted by formula_2 in the search region. SAD is calculated as: formula_3 SSD is calculated as: formula_4 Normalized cross correlation coefficient is calculated as: formula_5 where formula_6 and formula_7 are the average values of formula_0 and formula_8 respectively. The formula_2 pair that gives the lowest D for SAD and SSD, or the largest ρ for the cross correlation, is selected as the estimation of the movement. The velocity is then calculated as the movement divided by the time difference between the frames. Usually, the median or average of multiple estimations is taken to give more accurate result. Sub pixel accuracy. In ultrasound systems, lateral resolution is usually much lower than the axial resolution. The poor lateral resolution in the B-mode image also results in poor lateral resolution in flow estimation. Therefore, sub pixel resolution is needed to improve the accuracy of the estimation in the lateral dimension. In the meantime, we could reduce the sampling frequency along the axial dimension to save computations and memories if the sub pixel movement is estimated accurately enough. There are generally two kinds of methods to obtain the sub pixel accuracy: interpolation methods, such as parabolic fit, and phase based methods in which the peak lag is found when the phase of the analytic cross correlation function crosses zero. Interpolation method (parabolic fit). As shown in the right figure, parabolic fit can help find the real peak of the cross correlation function. The equation for parabolic fit in 1D is: formula_9 where formula_10 is the cross correlation function and formula_11 is the originally found peak. formula_12 is then used to find the displacement of scatterers after interpolation. For the 2D scenario, this is done in both the axial and lateral dimensions. Some other techniques can be used to improve the accuracy and robustness of the interpolation method, including parabolic fit with bias compensation and matched filter interpolation. Phase based method. The main idea of this method is to generate synthetic lateral phase and use it to find the phase that crosses zero at the peak lag. The right figure illustrates the procedure of creating the synthetic lateral phase, as a first step. Basically, the lateral spectrum is split in two to generate two spectra with nonzero center frequencies. The cross correlation is done for both the up signal and down signal, creating formula_13 and formula_14 respectively. The lateral correlation function and axial correlation function are then calculated as follows: formula_15 where formula_16 is the complex conjugate of formula_14. They have the same magnitude, and the integer peak is found using traditional cross correlation methods. 
After the integer peak is located, a 3 by 3 region surrounding the peak is then extracted with its phase information. For both the lateral and axial dimensions, the zero crossings of a one-dimensional correlation function at the other dimension’s lags are found, and a linear least squares fitted line is created accordingly. The intersection of the two lines gives the estimate of the 2D displacement. Comparison between vector Doppler and speckle tracking. Both methods could be used for 2D Velocity Vector Imaging, but Speckle Tracking would be easier to extend to 3D. Also, in Vector Doppler, the depth and resolution of the region of interest are limited by the aperture size and the maximum angle between the transmit and receive apertures, while Speckle Tracking has the flexibility of alternating the size of the kernel and search region to adapt to different resolution requirement. However, vector Doppler is less computationally complex than speckle tracking. Volumetric flow estimation. Velocity estimation from conventional Doppler requires knowledge of the beam-to-flow angle (inclination angle) to produce reasonable results for regular flows and does a poor job of estimating complex flow patterns, such as those due to stenosis and/or bifurcation. Volumetric flow estimation requires integrating velocity across the vessel cross-section, with assumptions about the vessel geometry, further complicating flow estimates. 2D Doppler data can be used to calculate the volumetric flow in certain integration planes. The integration plane is chosen to be perpendicular to the beam, and Doppler power (generated from power Doppler mode of Doppler ultrasound) can be used to differentiate between the components that are inside and outside the vessel. This method does not require prior knowledge of the Doppler angle, flow profile and vessel geometry. Promise of 3D. Until recently, ultrasound images have been 2D views and have relied on highly-trained specialists to properly orient the probe and select the position within the body to image with only few and complex visual cues. The complete measurement of 3D velocity vectors makes many post-processing techniques possible. Not only is the volumetric flow across any plane measurable, but also, other physical information such as stress and pressure can be calculated based on the 3D velocity field. However, it is quite challenging to measure the complex blood flow to give velocity vectors, due to the fast acquisition rate and the massive computations needed for it. Plane wave technique is thus promising as it can generate very high frame rate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
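To illustrate the block-matching ideas from the speckle-tracking section above, the NumPy sketch below estimates an integer 2D displacement with the sum-of-absolute-differences (SAD) criterion and then applies the one-dimensional parabolic fit described there as a sub-pixel correction along the axial direction. The kernel size, search range, and synthetic data are arbitrary demonstration choices, not clinical parameters.

```python
import numpy as np

def sad_displacement(frame0, frame1, top, left, ksize=16, search=8):
    """Integer-pixel displacement of a ksize x ksize kernel by minimising SAD."""
    kernel = frame0[top:top + ksize, left:left + ksize].astype(float)
    best, best_cost, costs = (0, 0), np.inf, {}
    for a in range(-search, search + 1):        # candidate axial shift (rows)
        for b in range(-search, search + 1):    # candidate lateral shift (cols)
            cand = frame1[top + a:top + a + ksize,
                          left + b:left + b + ksize].astype(float)
            if cand.shape != kernel.shape:      # shifted window fell off the frame
                continue
            cost = np.abs(kernel - cand).sum()  # SAD criterion
            costs[(a, b)] = cost
            if cost < best_cost:
                best, best_cost = (a, b), cost
    return best, costs

def parabolic_refine(costs, peak):
    """Sub-pixel axial correction from the costs at (peak-1, peak, peak+1)."""
    a, b = peak
    c_m, c_0, c_p = costs.get((a - 1, b)), costs[(a, b)], costs.get((a + 1, b))
    if c_m is None or c_p is None or (c_p - 2 * c_0 + c_m) == 0:
        return 0.0
    return -(c_p - c_m) / (2 * (c_p - 2 * c_0 + c_m))

# Synthetic speckle pattern shifted by (3, -2) pixels between consecutive frames.
rng = np.random.default_rng(0)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))
peak, costs = sad_displacement(frame0, frame1, top=40, left=40)
print(peak, parabolic_refine(costs, peak))  # integer peak (3, -2); correction near 0
```

Dividing the estimated displacement by the inter-frame interval then gives the velocity estimate, as described above.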
[ { "math_id": 0, "text": "X_0 (i,j)" }, { "math_id": 1, "text": "X_1 (i+\\alpha,j+\\beta)" }, { "math_id": 2, "text": "(\\alpha,\\beta)" }, { "math_id": 3, "text": "D(\\alpha,\\beta)=\\sum_{i=1} \\sum_{j=1} |X_0 (i,j)-X_1 (i+\\alpha,j+\\beta)|" }, { "math_id": 4, "text": "D(\\alpha,\\beta)=\\sum_{i=1} \\sum_{j=1} (X_0 (i,j)-X_1 (i+\\alpha,j+\\beta))^{2}" }, { "math_id": 5, "text": "\\rho(\\alpha,\\beta)=\\frac{\\sum_{i=1} \\sum_{j=1}(X_0 (i,j)-\\bar{X_0})(X_1 (i+\\alpha,j+\\beta)-\\bar{X_1})}{\\sqrt{(\\sum_{i=1} \\sum_{j=1} (X_0 (i,j)-\\bar{X_0})^{2})(\\sum_{i=1} \\sum_{j=1} (X_1 (i+\\alpha,j+\\beta)-\\bar{X_1})^{2})}}" }, { "math_id": 6, "text": "\\bar{X_0}" }, { "math_id": 7, "text": "\\bar{X_1}" }, { "math_id": 8, "text": "X_1 (i,j)" }, { "math_id": 9, "text": "k_{int}=k_s-\\frac{(R_{12} (k_s+1)-R_{12} (k_s-1))}{2(R_{12} (k_s+1)-2R_{12} (k_s )+R_{12} (k_s-1))} " }, { "math_id": 10, "text": "R_{12}" }, { "math_id": 11, "text": "k_s" }, { "math_id": 12, "text": "k_{int}" }, { "math_id": 13, "text": "R_{up}" }, { "math_id": 14, "text": "R_{down}" }, { "math_id": 15, "text": "R_{lateral}=R_{up}*R_{down}^{*}; R_{axial}=R_{up}*R_{down}" }, { "math_id": 16, "text": "R_{down}^{*}" } ]
https://en.wikipedia.org/wiki?curid=6059135
6059689
Cantor–Zassenhaus algorithm
Algorithm for factoring polynomials over finite fields In computational algebra, the Cantor–Zassenhaus algorithm is a method for factoring polynomials over finite fields (also called Galois fields). The algorithm consists mainly of exponentiation and polynomial GCD computations. It was invented by David G. Cantor and Hans Zassenhaus in 1981. It is arguably the dominant algorithm for solving the problem, having replaced the earlier Berlekamp's algorithm of 1967. It is currently implemented in many computer algebra systems. Overview. Background. The Cantor–Zassenhaus algorithm takes as input a square-free polynomial formula_0 (i.e. one with no repeated factors) of degree "n" with coefficients in a finite field formula_1 whose irreducible polynomial factors are all of equal degree (algorithms exist for efficiently factoring arbitrary polynomials into a product of polynomials satisfying these conditions, for instance, formula_2 is a squarefree polynomial with the same factors as formula_0, so that the Cantor–Zassenhaus algorithm can be used to factor arbitrary polynomials). It gives as output a polynomial formula_3 with coefficients in the same field such that formula_3 divides formula_0. The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of formula_0 into powers of irreducible polynomials (recalling that the ring of polynomials over any field is a unique factorisation domain). All possible factors of formula_0 are contained within the factor ring formula_4. If we suppose that formula_0 has irreducible factors formula_5, all of degree "d", then this factor ring is isomorphic to the direct product of factor rings formula_6. The isomorphism from "R" to "S", say formula_7, maps a polynomial formula_8 to the "s"-tuple of its reductions modulo each of the formula_9, i.e. if: formula_10 then formula_11. It is important to note the following at this point, as it shall be of critical importance later in the algorithm: Since the formula_9 are each irreducible, each of the factor rings in this direct sum is in fact a field. These fields each have order formula_12. Core result. The core result underlying the Cantor–Zassenhaus algorithm is the following: If formula_13 is a polynomial satisfying: formula_14 formula_15 where formula_16 is the reduction of formula_17 modulo formula_9 as before, and if any two of the following three sets are non-empty: formula_18 formula_19 formula_20 then there exist the following non-trivial factors of formula_0: formula_21 formula_22 formula_23 Algorithm. The Cantor–Zassenhaus algorithm computes polynomials of the same type as formula_17 above using the isomorphism discussed in the Background section. It proceeds as follows, in the case where the field formula_1 is of odd characteristic (the process can be generalised to characteristic 2 fields in a fairly straightforward way). Select a random polynomial formula_24 such that formula_25. Set formula_26 and compute formula_27. Since formula_7 is an isomorphism, we have (using our now-established notation): formula_28 Now, each formula_29 is an element of a field of order formula_12, as noted earlier. The multiplicative subgroup of this field has order formula_30 and so, unless formula_31, we have formula_32 for each "i" and hence formula_33 for each "i". If formula_31, then of course formula_34. Hence formula_27 is a polynomial of the same type as formula_17 above.
Further, since formula_35, at least two of the sets formula_36 and "C" are non-empty and by computing the above GCDs we may obtain non-trivial factors. Since the ring of polynomials over a field is a Euclidean domain, we may compute these GCDs using the Euclidean algorithm. Applications. One important application of the Cantor–Zassenhaus algorithm is in computing discrete logarithms over finite fields of prime-power order. Computing discrete logarithms is an important problem in public key cryptography. For a field of prime-power order, the fastest known method is the index calculus method, which involves the factorisation of field elements. If we represent the prime-power order field in the usual way – that is, as polynomials over the prime order base field, reduced modulo an irreducible polynomial of appropriate degree – then this is simply polynomial factorisation, as provided by the Cantor–Zassenhaus algorithm. Implementation in computer algebra systems. The Cantor–Zassenhaus algorithm is implemented in the PARI/GP computer algebra system as the factorcantor() function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
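A minimal Python sketch of the equal-degree splitting step described above is shown below, using SymPy for polynomial arithmetic over GF(q); here q is taken to be an odd prime and d = 1 purely for illustration, the example polynomial is invented, and the helper name equal_degree_split is not part of any library (in particular, this is not the PARI/GP routine mentioned above).

```python
import random
from sympy import symbols, Poly

x = symbols('x')

def equal_degree_split(f, q, d):
    """Return one non-trivial factor of f, assumed square-free over GF(q)
    (q an odd prime) with all irreducible factors of equal degree d."""
    n = f.degree()
    m = (q**d - 1) // 2
    one = Poly(1, x, modulus=q)
    while True:
        # Random b(x) of degree < n; skip constants (we need b != 0, +/-1).
        b = Poly(sum(random.randrange(q) * x**i for i in range(n)), x, modulus=q)
        if b.degree() < 1:
            continue
        g = f.gcd(b)                      # lucky case: b already shares a factor
        if 0 < g.degree() < n:
            return g
        r, base, e = one, b.rem(f), m     # compute b^m mod f by square-and-multiply
        while e:
            if e & 1:
                r = (r * base).rem(f)
            base = (base * base).rem(f)
            e >>= 1
        g = f.gcd(r - one)                # gcd(f, b^m - 1); splits f with good probability
        if 0 < g.degree() < n:
            return g

# Example: f(x) = (x - 1)(x - 2)(x - 3) over GF(7); every factor has degree 1.
f = Poly((x - 1) * (x - 2) * (x - 3), x, modulus=7)
g = equal_degree_split(f, q=7, d=1)
print(g, f.quo(g))  # one factor and its cofactor; recurse to split them further
```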
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "\\mathbb{F}_q" }, { "math_id": 2, "text": "f(x)/\\gcd(f(x),f'(x))" }, { "math_id": 3, "text": "g(x)" }, { "math_id": 4, "text": "R = \\frac{\\mathbb{F}_q[x]}{\\langle f(x) \\rangle}" }, { "math_id": 5, "text": "p_1(x), p_2(x), \\ldots, p_s(x)" }, { "math_id": 6, "text": "S = \\prod_{i=1}^s \\frac{\\mathbb{F}_q[x]}{\\langle p_i(x) \\rangle}" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "g(x) \\in R" }, { "math_id": 9, "text": "p_i(x)" }, { "math_id": 10, "text": "\n\\begin{align}\ng(x) & {} \\equiv g_1(x) \\pmod{p_1(x)}, \\\\\ng(x) & {} \\equiv g_2(x) \\pmod{p_2(x)}, \\\\\n& {} \\ \\ \\vdots \\\\\ng(x) & {} \\equiv g_s(x) \\pmod{p_s(x)},\n\\end{align}\n" }, { "math_id": 11, "text": "\\phi(g(x) + \\langle f(x) \\rangle) = (g_1(x) + \\langle p_1(x) \\rangle, \\ldots, g_s(x) + \\langle p_s(x) \\rangle)" }, { "math_id": 12, "text": "q^d" }, { "math_id": 13, "text": "a(x) \\in R" }, { "math_id": 14, "text": "a(x) \\neq 0, \\pm 1 " }, { "math_id": 15, "text": "a_i(x) \\in \\{0,-1,1\\}\\text{ for }i=1,2,\\ldots, s," }, { "math_id": 16, "text": "a_i(x)" }, { "math_id": 17, "text": "a(x)" }, { "math_id": 18, "text": "A = \\{ i \\mid a_i(x) = 0 \\}, " }, { "math_id": 19, "text": "B = \\{ i \\mid a_i(x) = -1 \\}, " }, { "math_id": 20, "text": "C = \\{ i \\mid a_i(x) = 1 \\}, " }, { "math_id": 21, "text": "\\gcd(f(x),a(x)) = \\prod_{i \\in A} p_i(x)," }, { "math_id": 22, "text": "\\gcd(f(x),a(x)+1) = \\prod_{i \\in B} p_i(x)," }, { "math_id": 23, "text": "\\gcd(f(x),a(x)-1) = \\prod_{i \\in C} p_i(x)." }, { "math_id": 24, "text": "b(x) \\in R" }, { "math_id": 25, "text": "b(x) \\neq 0, \\pm 1 " }, { "math_id": 26, "text": "m=(q^d-1)/2" }, { "math_id": 27, "text": "b(x)^m" }, { "math_id": 28, "text": "\\phi(b(x)^m) = (b_1^m(x) + \\langle p_1(x) \\rangle, \\ldots, b^m_s(x) + \\langle p_s(x) \\rangle)." }, { "math_id": 29, "text": "b_i(x) + \\langle p_i(x)\\rangle" }, { "math_id": 30, "text": "q^d-1" }, { "math_id": 31, "text": "b_i(x)=0" }, { "math_id": 32, "text": "b_i(x)^{q^d-1}=1" }, { "math_id": 33, "text": "b_i(x)^m = \\pm 1" }, { "math_id": 34, "text": "b_i(x)^m=0" }, { "math_id": 35, "text": "b(x) \\neq 0, \\pm1" }, { "math_id": 36, "text": "A,B" } ]
https://en.wikipedia.org/wiki?curid=6059689
606011
Dilated cardiomyopathy
Condition involving an enlarged, ineffective heart Medical condition Dilated cardiomyopathy (DCM) is a condition in which the heart becomes enlarged and cannot pump blood effectively. Symptoms vary from none to feeling tired, leg swelling, and shortness of breath. It may also result in chest pain or fainting. Complications can include heart failure, heart valve disease, or an irregular heartbeat. Causes include genetics, alcohol, cocaine, certain toxins, complications of pregnancy, and certain infections. Coronary artery disease and high blood pressure may play a role, but are not the primary cause. In many cases the cause remains unclear. It is a type of cardiomyopathy, a group of diseases that primarily affects the heart muscle. The diagnosis may be supported by an electrocardiogram, chest X-ray, or echocardiogram. In those with heart failure, treatment may include medications in the ACE inhibitor, beta blocker, and diuretic families. A low salt diet may also be helpful. In those with certain types of irregular heartbeat, blood thinners or an implantable cardioverter defibrillator may be recommended. Cardiac resynchronization therapy (CRT) may be necessary. If other measures are not effective a heart transplant may be an option in some. About 1 per 2,500 people is affected. It occurs more frequently in men than women. Onset is most often in middle age. Five-year survival rate is about 50%. It can also occur in children and is the most common type of cardiomyopathy in this age group. Signs and symptoms. Dilated cardiomyopathy develops insidiously, and may not initially cause symptoms significant enough to impact on quality of life. Nevertheless, many people experience significant symptoms. These might include: A person who has dilated cardiomyopathy may have an enlarged heart, with pulmonary edema and an elevated jugular venous pressure and a low pulse pressure. Signs of mitral and tricuspid regurgitation may be present. Causes. Although in many cases no cause is apparent, dilated cardiomyopathy is probably the result of damage to the myocardium produced by a variety of toxic, metabolic, or infectious agents. In many cases the cause remains unclear. It may be due to fibrous change of the myocardium from a previous myocardial infarction. Or, it may be the late sequelae of acute viral myocarditis, such as with Coxsackie B virus and other enteroviruses possibly mediated through an immunologic mechanism. Specific autoantibodies are detectable in some cases. Other causes include: Recent studies have shown that those subjects with an extremely high occurrence (several thousands a day) of premature ventricular contractions (extrasystole) can develop dilated cardiomyopathy. In these cases, if the extrasystole are reduced or removed (for example, via ablation therapy) the cardiomyopathy usually regresses. Genetics. About 25–35% of affected individuals have familial forms of the disease, with most mutations affecting genes encoding cytoskeletal proteins, while some affect other proteins involved in contraction. The disease is genetically heterogeneous, but the most common form of its transmission is an autosomal dominant pattern. Autosomal recessive (as found, for example, in Alström syndrome), X-linked (as in Duchenne muscular dystrophy), and mitochondrial inheritance of the disease is also found. Some relatives of those affected by dilated cardiomyopathy have preclinical, asymptomatic heart-muscle changes. 
Other cytoskeletal proteins involved in DCM include α-cardiac actin, desmin, and the nuclear lamins A and C. Mitochondrial deletions and mutations presumably cause DCM by altering myocardial ATP generation. Nuclear coding variations for mitochondrial complex II have also shown pathogenicity for dilated cardiomyopathy, designated 1GG for SDHA. Kayvanpour et al. performed 2016 a meta-analysis with the largest dataset available on genotype-phenotype associations in DCM and mutations in lamin (LMNA), phospholamban (PLN), RNA Binding Motif Protein 20 (RBM20), Cardiac Myosin Binding Protein C (MYBPC3), Myosin Heavy Chain 7 (MYH7), Cardiac Troponin T 2 (TNNT2), and Cardiac Troponin I (TNNI3). They also reviewed recent studies investigating genotype-phenotype associations in DCM patients with titin (TTN) mutations. LMNA and PLN mutation carriers showed a high prevalence of cardiac transplantation and ventricular arrhythmia. Dysrhythmias and sudden cardiac death (SCD) was shown to occur even before the manifestation of DCM and heart failure symptoms in LMNA mutation carriers. Pathophysiology. The progression of heart failure is associated with left ventricular remodeling, which manifests as gradual increases in left ventricular end-diastolic and end-systolic volumes, wall thinning, and a change in chamber geometry to a more spherical, less elongated shape. This process is usually associated with a continuous decline in ejection fraction. The concept of cardiac remodeling was initially developed to describe changes that occur in the days and months following myocardial infarction. Compensation effects. As DCM progresses, two compensatory mechanisms are activated in response to impaired myocyte contractility and reduced stroke volume: These responses initially compensate for decreased cardiac output and maintain those with DCM as asymptomatic. Eventually, however, these mechanisms become detrimental, intravascular volume becomes too great, and progressive dilatation leads to heart failure symptoms. Computational models. Cardiac dilatation is a transversely isotropic, irreversible process resulting from excess strains on the myocardium. A computation model of volumetric, isotropic, and cardiac wall growth predicts the relationship between cardiac strains (e.g. volume overload after myocardial infarction) and dilation using the following governing equations: formula_0 where formula_1 is elastic volume stretch that is reversible and formula_2 is irreversible, isotropic volume growth described by: formula_3 where formula_4 is a vector, which points along a cardiomyocyte's long axis and formula_5 is the cardiomyocyte stretch due to growth. The total cardiomyocyte growth is given by: formula_6 The above model reveals a gradual dilation of the myocardium, especially the ventricular myocardium, to support the blood volume overload in the chambers. Dilation manifests itself in an increase in total cardiac mass and cardiac diameter. Cardiomyocytes reach their maximum length of 150 formula_7m in the endocardium and 130 formula_7m in the epicardium by the addition of sarcomeres. Due to the increase in diameter, the dilated heart appears spherical in shape, as opposed the elliptical shape of a healthy human heart. In addition, the ventricular walls maintain the same thickness, characteristic of pathophysiological cardiac dilation. Valvular effects. As the ventricles enlarge, both the mitral and tricuspid valves may lose their ability to come together properly. 
This loss of coaptation may lead to mitral and tricuspid regurgitation. As a result, those with DCM are at increased risk of atrial fibrillation. Furthermore, stroke volume is decreased and a greater volume load is placed on the ventricle, thus increasing heart failure symptoms. Diagnosis. Generalized enlargement of the heart is seen upon normal chest X-ray. Pleural effusion may also be noticed, which is due to pulmonary venous hypertension. The electrocardiogram often shows sinus tachycardia or atrial fibrillation, ventricular arrhythmias, left atrial enlargement, and sometimes intraventricular conduction defects and low voltage. When left bundle-branch block (LBBB) is accompanied by right axis deviation (RAD), the rare combination is considered to be highly suggestive of dilated or congestive cardiomyopathy. Echocardiogram shows left ventricular dilatation with normal or thinned walls and reduced ejection fraction. Cardiac catheterization and coronary angiography are often performed to exclude ischemic heart disease. Genetic testing can be important, since one study has shown that gene mutations in the TTN gene (which codes for a protein called titin) are responsible for "approximately 25% of familial cases of idiopathic dilated cardiomyopathy and 18% of sporadic cases." The results of the genetic testing can help the doctors and patients understand the underlying cause of the dilated cardiomyopathy. Genetic test results can also help guide decisions on whether a patient's relatives should undergo genetic testing (to see if they have the same genetic mutation) and cardiac testing to screen for early findings of dilated cardiomyopathy. Cardiac magnetic resonance imaging (cardiac MRI) may also provide helpful diagnostic information in patients with dilated cardiomyopathy. Treatment. Medical therapy. Drug therapy can slow down progression and in some cases even improve the heart condition. Standard therapy may include salt restriction, ACE inhibitors, diuretics, and beta blockers. Anticoagulants may also be used for antithrombotic therapy. There is some evidence for the benefits of coenzyme Q10 in treating heart failure. Electrical treatment. Artificial pacemakers may be used in patients with intraventricular conduction delay, and implantable cardioverter-defibrillators in those at risk of arrhythmia. These forms of treatment have been shown to prevent sudden cardiac death, improve symptoms, and reduce hospitalization in patients with systolic heart failure. In addition, an implantable cardioverter-defibrillator should be considered as a therapeutic option for the primary prevention of sudden cardiac death in patients with a confirmed LMNA mutation responsible for dilated cardiomyopathy disease phenotype and clinical risk factors. A novel risk score calculator has been developed that allows calculation of risk of sustained ventricular arrhythmia in the next 5 years in patients with DCM. https://www.ikard.pl/SVA/ Surgical treatment. In patients with advanced disease who are refractory to medical therapy, heart transplantation may be considered. For these people, 1-year survival approaches 90% and over 50% survive greater than 20 years. Epidemiology. Although the disease is more common in African-Americans than in Caucasians, it may occur in any patient population. Research directions. Therapies that support reverse remodeling have been investigated, and this may suggests a new approach to the prognosis of cardiomyopathies (see ventricular remodeling). Animals. 
In some types of animals, both hereditary and acquired versions of dilated cardiomyopathy have been documented. Dogs. Dilated cardiomyopathy is a heritable disease in some dog breeds, including the Boxer, Dobermann, Great Dane, Irish Wolfhound, and St Bernard. Treatment is based on medication, including ACE inhibitors, loop diuretics, and phosphodiesterase inhibitors. An acquired form of dilated cardiomyopathy linked to certain diets was described in 2019 by researchers at the University of California, Davis School of Veterinary Medicine, who published a report on the development of dilated cardiomyopathy in dog breeds lacking the genetic predisposition, particularly Golden Retrievers. The diets associated with DCM were described as "BEG" (boutique, exotic-ingredient, and/or grain-free) dog foods, as well as legume-rich diets. For treating diet-related DCM, a change of food together with taurine and carnitine supplementation may be indicated, even if the dog does not have a documented taurine or carnitine deficiency, although the cost of carnitine supplementation may be viewed as prohibitive by some. Cats. Dilated cardiomyopathy is also a disease affecting some cat breeds, including the Oriental Shorthair, Burmese, Persian, and Abyssinian. In cats, taurine deficiency is the most common cause of dilated cardiomyopathy. In contrast to these hereditary forms, non-hereditary DCM used to be common in the general cat population before the addition of taurine to commercial cat food. Other animals. There is also a high incidence of heritable dilated cardiomyopathy in captive golden hamsters ("Mesocricetus auratus"), due in no small part to their being highly inbred. The incidence is high enough that several strains of golden hamster have been developed to serve as animal models in clinical testing for human forms of the disease. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
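A minimal numerical sketch can make the growth kinematics quoted in the Computational models section above more concrete. The Python fragment below is an illustration only: the fiber direction, the stretch values and the use of NumPy are assumptions chosen for the example, not parameters taken from the cited model.

```python
import numpy as np

def growth_tensor(f0, lam):
    """Return I + (lam - 1) * f0 (x) f0, a stretch of magnitude lam along the unit fiber f0."""
    f0 = np.asarray(f0, dtype=float)
    f0 = f0 / np.linalg.norm(f0)               # f0: cardiomyocyte long-axis direction
    return np.eye(3) + (lam - 1.0) * np.outer(f0, f0)

# Illustrative (assumed) numbers: total fiber stretch 1.25 from volume overload,
# of which 1.15 is irreversible growth, i.e. serially added sarcomeres.
lam_total, lam_g = 1.25, 1.15
lam_e = lam_total / lam_g                      # total stretch = elastic part * growth part
f0 = [1.0, 0.0, 0.0]

F   = growth_tensor(f0, lam_total)             # total deformation gradient
F_g = growth_tensor(f0, lam_g)                 # irreversible growth part F^g
F_e = F @ np.linalg.inv(F_g)                   # F = F^e . F^g  =>  F^e = F . (F^g)^-1

print(np.allclose(F_e, growth_tensor(f0, lam_e)))   # True: elastic part is lam_e along f0
```

Because both tensors are stretches along the same fiber direction, the recovered elastic part is itself a stretch of magnitude lam_total/lam_g along that fiber, which the final check confirms.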
[ { "math_id": 0, "text": "F = F^e \\cdot F^g\\," }, { "math_id": 1, "text": "F^e" }, { "math_id": 2, "text": "F^g" }, { "math_id": 3, "text": "F^g = \\mathbb{I}+[\\lambda^{g}-1]f_{0}\\otimes f_{0} \\," }, { "math_id": 4, "text": "f_{0}" }, { "math_id": 5, "text": " \\lambda^g " }, { "math_id": 6, "text": "\\lambda = \\lambda^e \\cdot F\\lambda^g\\," }, { "math_id": 7, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=606011
606123
Photoevaporation
Photoevaporation is the process where energetic radiation ionises gas and causes it to disperse away from the ionising source. The term is typically used in an astrophysical context where ultraviolet radiation from hot stars acts on clouds of material such as molecular clouds, protoplanetary disks, or planetary atmospheres. Molecular clouds. One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within. Evaporating gaseous globules (EGGs). Evaporating gaseous globules or EGGs were first discovered in the Eagle Nebula. These small cometary globules are being photoevaporated by the stars in the nearby cluster. EGGs are places of ongoing star-formation. Planetary atmospheres. A planet can be stripped of its atmosphere (or parts of the atmosphere) by high-energy photons and other electromagnetic radiation. If a photon interacts with an atmospheric molecule, the molecule is accelerated and its temperature increased. If sufficient energy is provided, the molecule or atom may reach the escape velocity of the planet and "evaporate" into space. The lower the mass number of the gas, the higher the velocity obtained by interaction with a photon. Thus hydrogen is the gas which is most prone to photoevaporation. Photoevaporation is the likely cause of the small planet radius gap. Examples of exoplanets with an evaporating atmosphere are HD 209458 b, HD 189733 b and Gliese 3470 b. Material from a possible evaporating planet around WD J0914+1914 might be responsible for the gaseous disk around this white dwarf. Protoplanetary disks. Protoplanetary disks can be dispersed by stellar wind and by heating due to incident electromagnetic radiation. The radiation interacts with matter and thus accelerates it outwards. This effect is only noticeable when the radiation is sufficiently strong, such as that coming from nearby O- and B-type stars, or when the central protostar commences nuclear fusion. The disk is composed of gas and dust. The gas, consisting mostly of light elements such as hydrogen and helium, is the component mainly affected, causing the ratio of dust to gas to increase. Radiation from the central star excites particles in the accretion disk. The irradiation of the disk gives rise to a stability length scale known as the gravitational radius (formula_0). Outside the gravitational radius, particles can become sufficiently excited to escape the gravity of the disk and evaporate. After 10^6–10^7 years, the viscous accretion rates fall below the photoevaporation rates at formula_0. A gap then opens around formula_0, and the inner disk either drains onto the central star or spreads to formula_0 and evaporates. An inner hole extending to formula_0 is produced. Once an inner hole forms, the outer disk is very rapidly cleared. The formula for the gravitational radius of the disk is formula_1 where formula_2 is the ratio of specific heats (= 5/3 for a monatomic gas), formula_3 the universal gravitational constant, formula_4 the mass of the central star, formula_5 the mass of the Sun, formula_6 the mean weight of the gas, formula_7 the Boltzmann constant, formula_8 the temperature of the gas and AU the Astronomical Unit. If we denote the coefficient in the above equation by the Greek letter formula_9, then formula_10,
where formula_11 is the number of degrees of freedom and we have used the formula: formula_12. For an atom, such as a hydrogen atom, formula_13, because an atom can move in three different, orthogonal directions. Consequently, formula_14. If the hydrogen atom is ionized, i.e., it is a proton, and is in a strong magnetic field, then formula_15, because the proton can move along the magnetic field and rotate around the field lines. In this case, formula_16. A diatomic molecule, e.g., a hydrogen molecule, has formula_17 and formula_18. For a non-linear triatomic molecule, such as water, formula_19 and formula_20. If formula_11 becomes very large, then formula_9 approaches zero. This is summarised in Table 1, where we see that different gases may have different gravitational radii. Table 1: Gravitational radius coefficient as a function of the degrees of freedom: for f = 2, κ = 0.25; for f = 3, κ = 0.2; for f = 5, κ ≈ 0.143; for f = 6, κ = 0.125; and κ → 0 as f → ∞. Because of this effect, the presence of massive stars in a star-forming region is thought to have a great effect on planet formation from the disk around a young stellar object, though it is not yet clear if this effect decelerates or accelerates it. Regions containing protoplanetary disks with clear signs of external photoevaporation. The most famous region containing photoevaporated protoplanetary disks is the Orion Nebula. The disks there were discovered with the Hubble Space Telescope and were called bright proplyds; the term has since been used for other regions to describe photoevaporation of protoplanetary disks. There might even be a planetary-mass object in the Orion Nebula that is being photoevaporated by "θ" 1 Ori C. HST has since observed other young star clusters and found bright proplyds in the Lagoon Nebula, the Trifid Nebula, Pismis 24 and NGC 1977. After the launch of the Spitzer Space Telescope, additional observations revealed dusty cometary tails around young cluster members in NGC 2244, IC 1396 and NGC 2264. These dusty tails are also explained by photoevaporation of the proto-planetary disk. Later, similar cometary tails were found with Spitzer in W5. This study concluded that the tails have a likely lifetime of 5 Myrs or less. Additional tails were found with Spitzer in NGC 1977, NGC 6193 and Collinder 69. Other bright proplyd candidates were found in the Carina Nebula with the CTIO 4m and near Sagittarius A* with the VLA. Follow-up observations of a proplyd candidate in the Carina Nebula with Hubble revealed that it is likely an evaporating gaseous globule. Objects in NGC 3603 and later in Cygnus OB2 were proposed as intermediate massive versions of the bright proplyds found in the Orion Nebula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
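The dependence of the gravitational radius on the stellar mass, the gas temperature and the gas species can be reproduced with a short calculation. The sketch below is illustrative only: it assumes SI values for the physical constants and, in order to recover the quoted prefactor of roughly 2.15 AU, a mean particle mass equal to the hydrogen-atom mass together with a monatomic gas (three degrees of freedom); these are assumptions of the example rather than statements taken from the sources above.

```python
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.381e-23        # Boltzmann constant, J K^-1
M_sun = 1.989e30         # solar mass, kg
m_H   = 1.673e-27        # hydrogen-atom mass, kg
AU    = 1.496e11         # astronomical unit, m

def kappa(f):
    """Coefficient (gamma - 1)/(2*gamma) = 1/(2 + f) for f degrees of freedom."""
    return 1.0 / (2.0 + f)

def gravitational_radius(M, T, mu, f=3):
    """r_g = kappa * G * M * mu / (k_B * T), in metres."""
    return kappa(f) * G * M * mu / (k_B * T)

# Solar-mass star, 10^4 K gas, atomic hydrogen (mu = m_H), monatomic gas (f = 3):
r_g = gravitational_radius(M_sun, 1.0e4, m_H)
print(f"r_g = {r_g / AU:.2f} AU")              # about 2.15 AU, as in the approximate formula

# Different gases have different coefficients, hence different gravitational radii:
for f in (2, 3, 5, 6):
    print(f"f = {f}: kappa = {kappa(f):.3f}")
```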
[ { "math_id": 0, "text": "r_g" }, { "math_id": 1, "text": " r_g = \\frac{\\left(\\gamma - 1\\right)}{2\\gamma}\\frac{GM\\mu}{k_B T}\n\\approx 2.15 \\frac{\\left(M/M_\\odot\\right)}{\\left(T/10^4 \\ {\\rm K} \\right)} \\ {\\rm AU},\\!" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "M_\\odot" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "k_B" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "\\kappa" }, { "math_id": 10, "text": "\\kappa = \\frac{(\\gamma - 1)}{2 \\gamma} = \\frac{1}{(2+f)}" }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "\\gamma = 1 + \\frac{2}{f}" }, { "math_id": 13, "text": "f = 3" }, { "math_id": 14, "text": "\\kappa = 0.2 " }, { "math_id": 15, "text": "f = 2 " }, { "math_id": 16, "text": "\\kappa = 0.25 " }, { "math_id": 17, "text": "f = 5 " }, { "math_id": 18, "text": "\\kappa = 1/7 \\approx 0.143 " }, { "math_id": 19, "text": "f = 6 " }, { "math_id": 20, "text": "\\kappa = 0.125 " } ]
https://en.wikipedia.org/wiki?curid=606123
60623303
Planar SAT
Boolean satisfiability problem restricted to a planar incidence graph In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula—whose incidence graph consisting of variables and clauses can be embedded on a plane—can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called "satisfiable". On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is "unsatisfiable". For example, the formula ""a" AND NOT "b" is satisfiable because one can find the values "a" = TRUE and "b" = FALSE, which make ("a" AND NOT "b") = TRUE. In contrast, "a" AND NOT "a"" is unsatisfiable. Like 3SAT, PLANAR-SAT is NP-complete, and is commonly used in reductions. Definition. Every 3SAT problem can be converted to an incidence graph in the following manner: For every variable formula_0, the graph has one corresponding node formula_0, and for every clause formula_1, the graph has one corresponding node formula_2 An edge formula_3 is created between variable formula_0 and clause formula_1 whenever formula_0 or formula_4 is in formula_1. Positive and negative literals are distinguished using edge colorings. The formula is satisfiable if and only if there is a way to assign TRUE or FALSE to each variable node such that every clause node is connected to at least one TRUE by a positive edge or FALSE by a negative edge. A planar graph is a graph that can be drawn on the plane in a way such that no two of its edges cross each other. Planar 3SAT is a subset of 3SAT in which the incidence graph of the variables and clauses of a Boolean formula is planar. It is important because it is a restricted variant, and is still NP-complete. Many problems (for example games and puzzles) cannot represent non-planar graphs. Hence, Planar 3SAT provides a way to prove those problems to be NP-hard. Proof of NP-completeness. The following proof sketch follows the proof of D. Lichtenstein. Trivially, PLANAR 3SAT is in NP. It is thus sufficient to show that it is NP-hard via reduction from 3SAT. This proof makes use of the fact that formula_5 is equivalent to formula_6 and that formula_7 is equivalent to formula_8. First, draw the incidence graph of the 3SAT formula. Since no two variables or clauses are connected, the resulting graph will be bipartite. Suppose the resulting graph is not planar. For every crossing of edges ("a", "c"1) and ("b", "c"2), introduce nine new variables "a"1, "b"1, "α", "β", "γ", "δ", "ξ", "a"2, "b"2, and replace every crossing of edges with a crossover gadget shown in the diagram. It consists of the following new clauses: formula_9 If the edge ("a", "c"1) is inverted in the original graph, ("a"1, "c"1) should be inverted in the crossover gadget. Similarly if the edge ("b", "c"2) is inverted in the original, ("b"1, "c"2) should be inverted. One can easily show that these clauses are satisfiable if and only if formula_10 and formula_11. This algorithm shows that it is possible to convert each crossing into its planar equivalent using only a constant amount of new additions. Since the number of crossings is polynomial in terms of the number of clauses and variables, the reduction is polynomial. Reductions. Logic puzzles. 
Reduction from Planar SAT is a commonly used method in NP-completeness proofs of logic puzzles. Examples of these include Fillomino, Nurikabe, Shakashaka, Tatamibari, and Tentai Show. These proofs involve constructing gadgets that can simulate wires carrying signals (Boolean values), input and output gates, signal splitters, NOT gates and AND (or OR) gates in order to represent the planar embedding of any Boolean circuit. Since the circuits are planar, crossings of wires do not need to be considered. Flat folding of fixed-angle chains. This is the problem of deciding whether a polygonal chain with fixed edge lengths and angles has a planar configuration without crossings. It has been proven to be strongly NP-hard via a reduction from planar monotone rectilinear 3SAT. Minimum edge-length partition. This is the problem of partitioning a polygon into simpler polygons such that the total length of all edges used in the partition is as small as possible. When the figure is a hole-free rectilinear polygon that is to be partitioned into rectangles, the problem can be solved in polynomial time. But if it contains holes (even degenerate holes consisting of single points), the problem is NP-hard, by reduction from Planar SAT. The same holds if the figure is an arbitrary polygon to be partitioned into convex figures. A related problem is minimum-weight triangulation: finding a triangulation of minimal total edge length. The decision version of this problem has been proven NP-complete via a reduction from a variant of Planar 1-in-3SAT. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
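The crossover gadget from the NP-completeness proof above can be checked mechanically: for every choice of the external values, its clauses are satisfiable exactly when formula_10 and formula_11 hold. The self-contained Python sketch below brute-forces this claim; the variable names mirror those in the proof, and the script is an illustration added here rather than part of the published reduction.

```python
from itertools import product

def gadget_satisfied(a, b, a1, b1, a2, b2, al, be, ga, de, xi):
    """True if every clause of the crossover gadget is satisfied by this assignment."""
    clauses = [
        (not a2 or not b2 or al), (a2 or not al), (b2 or not al),    # a2 AND b2         <-> alpha
        (not a2 or b1 or be), (a2 or not be), (not b1 or not be),    # a2 AND NOT b1     <-> beta
        (a1 or b1 or ga), (not a1 or not ga), (not b1 or not ga),    # NOT a1 AND NOT b1 <-> gamma
        (a1 or not b2 or de), (not a1 or not de), (b2 or not de),    # NOT a1 AND b2     <-> delta
        (al or be or xi), (ga or de or not xi),                      # alpha OR beta OR gamma OR delta
        (not al or not be), (not be or not ga),
        (not ga or not de), (not de or not al),
        (a2 or not a), (a or not a2), (b2 or not b), (b or not b2),  # a <-> a2, b <-> b2
    ]
    return all(clauses)

# For every choice of the external values a, b, a1, b1, the internal variables can be
# completed to a satisfying assignment exactly when a == a1 and b == b1.
for a, b, a1, b1 in product([False, True], repeat=4):
    satisfiable = any(
        gadget_satisfied(a, b, a1, b1, *internal)
        for internal in product([False, True], repeat=7)
    )
    assert satisfiable == (a == a1 and b == b1)
print("crossover gadget enforces a <-> a1 and b <-> b1")
```

Since the gadget involves only eleven Boolean variables, the exhaustive check over all assignments finishes immediately.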
[ { "math_id": 0, "text": "v_i" }, { "math_id": 1, "text": "c_j" }, { "math_id": 2, "text": "c_j." }, { "math_id": 3, "text": "(v_i, c_j)" }, { "math_id": 4, "text": "\\lnot v_i" }, { "math_id": 5, "text": "(\\lnot a \\lor \\lnot b \\lor c) \\land (a \\lor \\lnot c) \\land (b \\lor \\lnot c)" }, { "math_id": 6, "text": "(a \\land b) \\leftrightarrow c" }, { "math_id": 7, "text": "(a \\lor \\lnot b) \\land (\\lnot a \\lor b)" }, { "math_id": 8, "text": "a \\leftrightarrow b" }, { "math_id": 9, "text": "\n\\begin{array}{ll}\n(\\lnot a_2 \\lor \\lnot b_2 \\lor \\alpha) \\land (a_2 \\lor \\lnot \\alpha) \\land (b_2 \\lor \\lnot \\alpha), &\\quad \\text{i.e.,} \\quad a_2 \\land b_2 \\leftrightarrow \\alpha \\\\\n(\\lnot a_2 \\lor b_1 \\lor \\beta) \\land (a_2 \\lor \\lnot \\beta) \\land (\\lnot b_1 \\lor \\lnot \\beta), &\\quad \\text{i.e.,} \\quad a_2 \\land \\lnot b_1 \\leftrightarrow \\beta \\\\\n(a_1 \\lor b_1 \\lor \\gamma) \\land (\\lnot a_1 \\lor \\lnot \\gamma) \\land (\\lnot b_1 \\lor \\lnot \\gamma), &\\quad \\text{i.e.,} \\quad \\lnot a_1 \\land \\lnot b_1 \\leftrightarrow \\gamma \\\\\n(a_1 \\lor \\lnot b_2 \\lor \\delta) \\land (\\lnot a_1 \\lor \\lnot \\delta) \\land (b_2 \\lor \\lnot \\delta), &\\quad \\text{i.e.,} \\quad \\lnot a_1 \\land b_2 \\leftrightarrow \\delta \\\\\n(\\alpha \\lor \\beta \\lor \\xi) \\land (\\gamma \\lor \\delta \\lor \\lnot \\xi), &\\quad \\text{i.e.,} \\quad \\alpha \\lor \\beta \\lor \\gamma \\lor \\delta \\\\\n(\\lnot\\alpha \\lor \\lnot\\beta) \\land (\\lnot\\beta \\lor \\lnot\\gamma) \\land (\\lnot\\gamma \\lor \\lnot\\delta) \\land (\\lnot\\delta \\lor \\lnot\\alpha), &\\\\\n(a_2 \\lor \\lnot a) \\land (a \\lor \\lnot a_2) \\land (b_2 \\lor \\lnot b) \\land (b \\lor \\lnot b_2), &\\quad \\text{i.e.,} \\quad a \\leftrightarrow a_2, ~ b \\leftrightarrow b_2 \\\\\n\\end{array}\n" }, { "math_id": 10, "text": "a \\leftrightarrow a_1" }, { "math_id": 11, "text": "b \\leftrightarrow b_1" } ]
https://en.wikipedia.org/wiki?curid=60623303
60628425
IM 67118
Old Babylonian clay tablet about a problem in geometry IM 67118, also known as Db2-146, is an Old Babylonian clay tablet in the collection of the Iraq Museum that contains the solution to a problem in plane geometry concerning a rectangle with given area and diagonal. In the last part of the text, the solution is proved correct using the Pythagorean theorem. The steps of the solution are believed to represent cut-and-paste geometry operations involving a diagram from which, it has been suggested, ancient Mesopotamians might, at an earlier time, have derived the Pythagorean theorem. Description. The tablet was excavated in 1962 at Tell edh-Dhiba'i, an Old Babylonian settlement near modern Baghdad that was once part of the kingdom of Eshnunna, and was published by Taha Baqir in the same year. It dates to approximately 1770 BCE (according to the middle chronology), during the reign of Ibal-pi-el II, who ruled Eshnunna at the same time that Hammurabi ruled Babylon. The tablet measures 11.5×6.8×3.3 cm (4½" x 2¾" x 1¼"). Its language is Akkadian, written in cuneiform script. There are 19 lines of text on the tablet's obverse and six on its reverse. The reverse also contains a diagram consisting of the rectangle of the problem and one of its diagonals. Along that diagonal is written its length in sexagesimal notation; the area of the rectangle is written in the triangular region below the diagonal. Problem and its solution. In modern mathematical language, the problem posed on the tablet is the following: a rectangle has area "A" = 0.75 and diagonal "c" = 1.25. What are the lengths "a" and "b" of the sides of the rectangle? The solution can be understood as proceeding in two stages: in stage 1, the quantity formula_0 is computed to be 0.25. In stage 2, the well-attested Old Babylonian method of completing the square is used to solve what is effectively the system of equations "b" − "a" = 0.25, "ab" = 0.75. Geometrically this is the problem of computing the lengths of the sides of a rectangle whose area "A" and side-length difference "b"−"a" are known, which was a recurring problem in Old Babylonian mathematics. In this case it is found that "b" = 1 and "a" = 0.75. The solution method suggests that whoever devised the solution was using the property "c"2 − 2"A" = "c"2 − 2"ab" = ("b" − "a")2. It must be emphasized, however, that the modern notation for equations and the practice of representing parameters and unknowns by letters were unheard of in ancient times. It is now widely accepted as a result of Jens Høyrup's extensive analysis of the vocabulary of Old Babylonian mathematics, that underlying the procedures in texts such as IM 67118 was a set of standard cut-and-paste geometric operations, not a symbolic algebra. From the vocabulary of the solution Høyrup concludes that "c"2, the square of the diagonal, is to be understood as a geometric square, from which an area equal to 2"A" is to be "cut off", that is, removed, leaving a square with side "b" − "a". Høyrup suggests that the square on the diagonal was possibly formed by making four copies of the rectangle, each rotated by 90°, and that the area 2"A" was the area of the four right triangles contained in the square on the diagonal. The remainder is the small square in the center of the figure. 
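Stated numerically, the two stages of the solution and the final verification amount to only a few operations. The short computation below is a modern paraphrase added for illustration, not a reconstruction of the Babylonian cut-and-paste procedure; the decimal constants correspond to the sexagesimal values on the tablet (0;45 for the area, 1;15 for the diagonal, 0;15 for the side difference, 0;52,30 for the side of the completed square).

```python
from math import sqrt

A, c = 0.75, 1.25                  # given area (0;45) and diagonal (1;15)

# Stage 1: the side-length difference, since c^2 - 2ab = (b - a)^2.
diff = sqrt(c**2 - 2*A)            # 0.25 (0;15)

# Stage 2: completing the square for  b - a = 0.25,  ab = 0.75.
half = diff / 2                    # 0.125 (0;07,30)
side = sqrt(A + half**2)           # 0.875 (0;52,30), side of the completed square
b, a = side + half, side - half    # 1 and 0.75

# Verification, as in the last part of the text: recompute the diagonal and the area.
assert (a, b) == (0.75, 1.0)
assert sqrt(a**2 + b**2) == c and a*b == A
```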
The geometric procedure for computing the lengths of the sides of a rectangle of given area "A" and side-length difference "b" − "a" was to transform the rectangle into a gnomon of area "A" by cutting off a rectangular piece of dimensions "a×"½("b" − "a") and pasting this piece onto the side of the rectangle. The gnomon was then completed to a square by adding a smaller square of side ½("b" − "a") to it. In this problem, the side of the completed square is computed to be formula_1. The quantity ½("b" − "a")=0.125 is then added to the horizontal side of the square and subtracted from the vertical side. The resulting line segments are the sides of the desired rectangle. One difficulty in reconstructing Old Babylonian geometric diagrams is that known tablets never include diagrams in solutions—even in geometric solutions where explicit constructions are described in text—although diagrams are often included in formulations of problems. Høyrup argues that the cut-and-paste geometry would have been performed in some medium other than clay, perhaps in sand or on a "dust abacus", at least in the early stages of a scribe's training before mental facility with geometric calculation had been developed. Friberg does describe some tablets containing drawings of "figures within figures", including MS 2192, in which the band separating two concentric equilateral triangles is divided into three trapezoids. He writes, ""The idea of computing the area of a triangular band as the area of a chain of trapezoids is a variation on the idea of computing the area of a square band as the area of a chain of four rectangles." This is a simple idea, and it is likely that it was known by Old Babylonian mathematicians, although no cuneiform mathematical text has yet been found where this idea enters in an explicit way." He argues that this idea is implicit in the text of IM 67118. He also invites a comparison with the diagram of YBC 7329, in which two concentric squares are shown. The band separating the squares is not subdivided into four rectangles on this tablet, but the numerical value of the area of one of the rectangles area does appear next to the figure. Checking the solution. The solution "b" = 1, "a" = 0.75 is proved correct by computing the areas of squares with the corresponding side-lengths, adding these areas, and computing the side-length of the square with the resulting area, that is, by taking the square root. This is an application of the Pythagorean theorem, formula_2, and the result agrees with the given value, "c" = 1.25. That the area is also correct is verified by computing the product, "ab". Translation. The following translation is given by Britton, Proust, and Shnider and is based on the translation of Høyrup, which in turn is based on the hand copy and transliteration of Baqir, with some small corrections. Babylonian sexagesimal numbers are translated into decimal notation with base-60 digits separated by commas. Hence 1,15 means 1 + 15/60 = 5/4 = 1.25. Note that there was no "sexagesimal point" in the Babylonian system, so the overall power of 60 multiplying a number had to be inferred from context. The translation is "conformal", which, as described by Eleanor Robson, "involves consistently translating Babylonian technical terms with existing English words or neologisms which match the original meanings as closely as possible"; it also preserves Akkadian word order. 
Old Babylonian mathematics used different words for multiplication depending on the underlying geometric context and similarly for the other arithmetic operations. Obverse Reverse The problem statement is given in lines 1–3, stage 1 of the solution in lines 3–9, stage 2 of the solution in lines 9–16, and verification of the solution in lines 16–24. Note that "1,15 your diagonal, its counterpart lay down: make them hold" means to form a square by laying down perpendicular copies of the diagonal, the "equalside" is the side of a square, or the square root of its area, "may your head hold" means to remember, and "your hand" may refer to "a pad or a device for computation". Relation to other texts. Problem 2 on the tablet MS 3971 in the Schøyen collection, published by Friberg, is identical to the problem on IM 67118. The solution is very similar but proceeds by adding 2"A" to "c"2, rather than subtracting it. The side of the resulting square equals "b" + "a" = 1.75 in this case. The system of equations "b" + "a" = 1.75, "ab" = 0.75 is again solved by completing the square. MS 3971 contains no diagram and does not perform the verification step. Its language is "terse" and uses many Sumerian logograms in comparison with the "verbose" IM 67118, which is in syllabic Akkadian. Friberg believes this text comes from Uruk, in southern Iraq, and dates it before 1795 BCE. Friberg points out a similar problem in a 3rd-century BCE Egyptian Demotic papyrus, "P. Cairo", problems 34 and 35, published by Parker in 1972. Friberg also sees a possible connection to A.A. Vaiman's explanation of an entry in the Old Babylonian table of constants TMS 3, which reads, "57 36, constant of the šàr". Vaiman notes that the cuneiform sign for šàr resembles a chain of four right triangles arranged in a square, as in the proposed figure. The area of such a chain is 24/25 (equal to 57 36 in sexagesimal) if one assumes 3-4-5 right triangles with hypotenuse normalized to length 1. Høyrup writes that the problem of IM 67118 "turns up, solved in precisely the same way, in a Hebrew manual from 1116 ce". Significance. Although the problem on IM 67118 is concerned with a specific rectangle, whose sides and diagonal form a scaled version of the 3-4-5 right triangle, the language of the solution is general, usually specifying the functional role of each number as it is used. In the later part of the text, an abstract formulation is seen in places, making no reference to particular values ("the length make hold", "Your length to the width raise."). Høyrup sees in this "an unmistakeable trace of the 'Pythagorean rule' in abstract formulation". The manner of discovery of the Pythagorean rule is unknown, but some scholars see a possible path in the method of solution used on IM 67118. The observation that subtracting 2"A" from "c"2 yields ("b" − "a")2 need only be augmented by a geometric rearrangement of areas corresponding to "a"2, "b"2, and −2"A" = −2"ab" to obtain rearrangement proof of the rule, one which is well known in modern times and which is also suggested in the third century CE in Zhao Shuang's commentary on the ancient Chinese "Zhoubi Suanjing" ("Gnomon of the Zhou"). The formulation of the solution in MS 3971, problem 2, having no subtracted areas, provides a possibly even more straightforward derivation. 
Høyrup proposes the hypothesis, based in part on similarities among word problems that reappear over a broad range of times and places and on the language and numerical content of such problems, that much of the scribal Old Babylonian mathematical material was imported from the practical surveyor tradition, where solving riddle problems was used as a badge of professional skill. Høyrup believes that this surveyor culture survived the demise of Old Babylonian scribal culture that resulted from the Hittite conquest of Mesopotamia in the early 16th century BCE and that it influenced the mathematics of ancient Greece, of Babylon during the Seleucid period, of the Islamic empire, and of medieval Europe. Among the problems Høyrup ascribes to this practical surveyor tradition are several rectangle problems requiring completing the square, including the problem of IM 67118. On the basis that no third-millennium BCE references to the Pythagorean rule are known, and that the formulation of IM 67118 is already adapted to the scribal culture, Høyrup writes, ""To judge from this evidence alone" it is therefore likely that the Pythagorean rule was discovered within the lay surveyors' environment, possibly as a spin-off from the problem treated in Db2-146, somewhere between 2300 and 1825 BC." Thus the rule named after Pythagoras, who was born about 570 BCE and died c.495 BCE, is shown to have been discovered about 12 centuries before his birth. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{c^2-2A}" }, { "math_id": 1, "text": "\\sqrt{A+\\tfrac{1}{4}(b-a)^2}=\\sqrt{0.75+0.015625}=0.875" }, { "math_id": 2, "text": "c=\\sqrt{a^2+b^2}" } ]
https://en.wikipedia.org/wiki?curid=60628425
6063370
Torsten Carleman
Swedish mathematician Torsten Carleman (8 July 1892, Visseltofta, Osby Municipality – 11 January 1949, Stockholm), born Tage Gillis Torsten Carleman, was a Swedish mathematician, known for his results in classical analysis and its applications. As the director of the Mittag-Leffler Institute for more than two decades, Carleman was the most influential mathematician in Sweden. Work. The dissertation of Carleman under Erik Albert Holmgren, as well as his work in the early 1920s, was devoted to singular integral equations. He developed the spectral theory of integral operators with "Carleman kernels", that is, kernels "K"("x", "y") such that "K"("y", "x") = "K"("x", "y") for almost every ("x", "y"), and formula_0 for almost every "x". In the mid-1920s, Carleman developed the theory of quasi-analytic functions. He proved the necessary and sufficient condition for quasi-analyticity, now called the Denjoy–Carleman theorem. As a corollary, he obtained a sufficient condition for the determinacy of the moment problem. As one of the steps in the proof of the Denjoy–Carleman theorem, he introduced the Carleman inequality formula_1 valid for any sequence of non-negative real numbers "a""k". At about the same time, he established the "Carleman formulae" in complex analysis, which reconstruct an analytic function in a domain from its values on a subset of the boundary. He also proved a generalisation of Jensen's formula, now called the Jensen–Carleman formula. In the 1930s, independently of John von Neumann, he discovered the mean ergodic theorem. Later, he worked in the theory of partial differential equations, where he introduced the "Carleman estimates", and found a way to study the spectral asymptotics of Schrödinger operators. In 1932, following the work of Henri Poincaré, Erik Ivar Fredholm, and Bernard Koopman, he devised the "Carleman embedding" (also called "Carleman linearization"), a way to embed a finite-dimensional system of nonlinear differential equations d"u"/d"t" = P("u") for "u": R → R"k", where the components of P are polynomials in "u", into an infinite-dimensional system of linear differential equations. In 1933 Carleman published a short proof of what is now called the Denjoy–Carleman–Ahlfors theorem. This theorem states that the number of asymptotic values attained by an entire function of order ρ along curves in the complex plane going outwards toward infinite absolute value is less than or equal to 2ρ. In 1935, Torsten Carleman introduced a generalisation of the Fourier transform, which foreshadowed the work of Mikio Sato on hyperfunctions; his notes were later published. He considered the functions "f" of at most polynomial growth, and showed that every such function can be decomposed as "f" = "f"+ + "f"−, where "f"+ and "f"− are analytic in the upper and lower half planes, respectively, and that this representation is essentially unique. Then he defined the Fourier transform of ("f"+, "f"−) as another such pair ("g"+, "g"−). Though conceptually different, the definition coincides with the one given later by Laurent Schwartz for tempered distributions. Carleman's definition gave rise to numerous extensions. Returning to mathematical physics in the 1930s, Carleman gave the first proof of global existence for Boltzmann's equation in the kinetic theory of gases (his result applies to the space-homogeneous case). The results were published posthumously. Carleman supervised the Ph.D.
theses of Ulf Hellsten, Karl Persson (Dagerholm), Åke Pleijel and (jointly with Fritz Carlson) of Hans Rådström. Life. Carleman was born in Visseltofta to Alma Linnéa Jungbeck and Karl Johan Carleman, a school teacher. He studied at Växjö Cathedral School, graduating in 1910. He continued his studies at Uppsala University, being one of the active members of the Uppsala Mathematical Society. Kjellberg recalls: He was a genius! My older friends in Uppsala used to tell me about the wonderful years they had had when Carleman was there. He was the most active speaker in the Uppsala Mathematical Society and a well-trained gymnast. When people left the seminar crossing the Fyris River, he walked on his hands on the railing of the bridge. From 1917 he was a docent at Uppsala University, and from 1923 a full professor at Lund University. In 1924 he was appointed professor at Stockholm University. He was elected a member of the Royal Swedish Academy of Sciences in 1926 and of the Finnish Society of Sciences and Letters in 1934. From 1927, he was director of the Mittag-Leffler Institute and editor of Acta Mathematica. From 1929 to 1946 Carleman was married to Anna-Lisa Lemming (1885–1954), the half-sister of the athlete Eric Lemming, who won four gold medals and three bronze at the Olympic Games. During this period he was also known as a recognized fascist, anti-semite and xenophobe. His interaction with William Feller before the latter's departure to the United States was not particularly pleasant; at one point Carleman was reported to have expressed the opinion that "Jews and foreigners should be executed". Carlson remembers Carleman as: "secluded and taciturn, who looked at life and people with a bitter humour. In his heart, he was inclined to kindliness towards those around him, and strove to assist them swiftly." Towards the end of his life, he remarked to his students that "professors ought to be shot at the age of fifty." During the last decades of his life, Carleman abused alcohol, according to Norbert Wiener and William Feller. His final years were plagued by neuralgia. At the end of 1948, he developed the liver disease jaundice; he died from complications of the disease. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
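The Carleman inequality displayed in the Work section lends itself to a quick numerical sanity check. The Python fragment below is purely illustrative and is not drawn from any of the cited works; it evaluates both sides for a randomly generated finite sequence of positive terms, for which the truncated sums also satisfy the inequality.

```python
import math, random

def carleman_sides(a):
    """Return (sum of geometric means, e * sum of terms) for positive terms a_1..a_N."""
    lhs, log_prod = 0.0, 0.0
    for n, a_n in enumerate(a, start=1):
        log_prod += math.log(a_n)               # running log of a_1 * ... * a_n
        lhs += math.exp(log_prod / n)           # (a_1 ... a_n)^(1/n)
    return lhs, math.e * sum(a)

random.seed(0)
a = [random.uniform(0.01, 2.0) for _ in range(1000)]
lhs, rhs = carleman_sides(a)
print(lhs <= rhs, round(lhs, 2), round(rhs, 2))  # the inequality holds, with room to spare
```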
[ { "math_id": 0, "text": " \\int | K(x, y) |^2 dy < \\infty " }, { "math_id": 1, "text": " \\sum_{n=1}^\\infty \\left(a_1 a_2 \\cdots a_n\\right)^{1/n} \\le e \\sum_{n=1}^\\infty a_n," } ]
https://en.wikipedia.org/wiki?curid=6063370
60634470
Resonance escape probability
The probability that a high-energy neutron is not captured In nuclear physics, resonance escape probability formula_0 is the probability that a neutron will slow down from fission energy to thermal energies without being captured by a nuclear resonance. A resonance absorption of a neutron in a nucleus does not produce nuclear fission. The probability of resonance absorption is called the resonance factor formula_1, and the sum of the two factors is formula_2. Generally, the higher the neutron energy, the lower the probability of absorption, but for some energies, called resonance energies, the resonance factor is very high. These energies depend on the properties of heavy nuclei. Resonance escape probability is strongly influenced by the heterogeneous geometry of a reactor, because fast neutrons resulting from fission can leave the fuel and slow to thermal energies in a moderator, skipping over resonance energies before reentering the fuel. Resonance escape probability appears in the four factor formula and the six factor formula. To compute it, neutron transport theory is used. Resonant neutron absorption. The nucleus can capture a neutron only if the kinetic energy of the neutron is close to the energy of one of the energy levels of the new nucleus formed as a result of capture. The capture cross section of such a neutron by the nucleus increases sharply. The energy at which the neutron-nucleus interaction cross section reaches a maximum is called the resonance energy. The resonance energy range is divided into two parts, the region of resolved and the region of unresolved resonances. The first region occupies the energy interval from 1 eV to "E"gr. In this region, the energy resolution of the instruments is sufficient to distinguish any resonance peak. Starting from the energy "E"gr, the distance between resonance peaks becomes smaller than the energy resolution, so the resonance peaks can no longer be separated. For heavy elements, the boundary energy "E"gr ≈ 1 keV. In thermal neutron reactors, the main resonant neutron absorber is Uranium-238. For 238U, several resonance neutron energies "E"r, the maximum absorption cross sections σ"a,r" at the peak, and the widths Γ of these resonances have been tabulated. Effective resonance integral. Let us assume that the resonant neutrons move in an infinite system consisting of a moderator and 238U. When colliding with the moderator nuclei, the neutrons are scattered, and when colliding with the 238U nuclei, they are absorbed. The former collisions favor the slowing down and removal of resonant neutrons from the dangerous energy region, while the latter lead to their loss. The probability of avoiding resonance capture (the coefficient φ) is related to the density of nuclei "N"S and the moderating power of the medium ξΣ"S" by the relationship below, formula_3 The value "J"eff is called the "effective resonance integral". It characterizes the absorption of neutrons by a single nucleus in the resonance region and is measured in barns. The use of the effective resonance integral simplifies quantitative calculations of resonance absorption without detailed consideration of the neutron interactions during slowing down. The effective resonance integral is usually determined experimentally. It depends on the concentration of 238U and the mutual arrangement of uranium and the moderator. Homogeneous mixtures. 
In a homogeneous mixture of moderator and 238U, the effective resonance integral is given to good accuracy by the empirical relation "J"eff = 3.9·(σ3"S"·"N"3/"N"8)^0.415 barns, where "N"3/"N"8 is the ratio of moderator to 238U nuclei in the homogeneous mixture and σ3"S" is the microscopic scattering cross section of the moderator. As can be seen from the formula, the effective resonance integral decreases with increasing 238U concentration. The more 238U nuclei in the mixture, the less likely it is that any single nucleus will absorb one of the slowing-down neutrons. The effect of absorption in some 238U nuclei on absorption in others is called "resonance level shielding". It increases with increasing concentration of resonance absorbers. As an example, we can calculate the effective resonance integral in a homogeneous natural uranium-graphite mixture with the ratio "N"3/"N"8 = 215. The scattering cross section of graphite is σC"S" = 4.7 barns, so formula_4 barns. Heterogeneous mixtures. In a homogeneous environment, all 238U nuclei are under the same conditions with respect to the resonant neutron flux. In a heterogeneous environment uranium is separated from the moderator, which significantly affects the resonant neutron absorption. Firstly, some of the resonant neutrons become thermal neutrons in the moderator without colliding with uranium nuclei; secondly, resonant neutrons hitting the surface of the fuel elements are almost all absorbed by the thin surface layer. The inner 238U nuclei are shielded by the surface nuclei and participate less in the resonant neutron absorption, and the shielding increases with the increase of the fuel element diameter "d". Therefore, the effective 238U resonance integral in a heterogeneous reactor depends on the fuel element diameter "d": formula_5 The constant "a" characterizes the absorption of resonance neutrons by surface nuclei and the constant "b" the absorption by inner 238U nuclei. For each type of nuclear fuel (natural uranium, uranium dioxide, etc.) the constants "a" and "b" are measured experimentally. For natural uranium rods "a" = 4.15 and "b" = 12.35, so that formula_6 For a rod of natural uranium with diameter "d" = 3 cm this gives formula_7 barns. Comparison of the last two examples shows that the separation of uranium and moderator noticeably decreases neutron absorption in the resonance region. Moderator influence. Coefficient φ depends on the ratio formula_8 which reflects the competition of two processes in the resonance region: absorption of neutrons and their deceleration. The cross section "Σ", by definition, is analogous to the macroscopic absorption cross section with the microscopic cross section replaced by the effective resonance integral "J"eff. It also characterizes the loss of slowing-down neutrons in the resonance region. As the 238U concentration increases, the absorption of resonant neutrons increases and hence fewer neutrons are slowed down to thermal energies. The resonance absorption is influenced by the slowing down of neutrons. Collisions with the moderator nuclei take neutrons out of the resonance region and are more intense the greater the moderating power formula_9. So, for the same concentration of 238U, the probability of avoiding resonance capture in a uranium-water medium is greater than in a uranium-carbon medium. Let us calculate the probability of avoiding resonance capture in homogeneous and heterogeneous natural uranium-graphite media. In both media the ratio of carbon to 238U nuclei is "N"C/"N"8 = 215. The diameter of the uranium rod is "d" = 3 cm. 
Taking into account that ξC = 0.159 and σC"S" = 4.7 barns, we calculate the following: formula_10 barn^−1. Calculating the coefficients φ in the homogeneous and heterogeneous mixtures, we get: φhom = e^(−0.00625·68) = e^(−0.425) ≈ 0.65, φhet = e^(−0.00625·11.3) = e^(−0.0705) ≈ 0.93. The transition from a homogeneous to a heterogeneous medium slightly reduces the thermal neutron absorption in uranium. However, this loss is considerably outweighed by the decrease in resonance neutron absorption, and the propagation properties of the medium improve. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
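The two worked examples above can be re-evaluated with a few lines of code. The sketch below simply recomputes the quoted formulas with the same input values; small differences from the rounded figures in the text (for example 68.9 versus 69 barns for the homogeneous integral) are rounding artifacts, and the script is an illustration added here rather than part of the cited material.

```python
from math import exp, sqrt

# Natural uranium-graphite lattice from the worked examples above.
N_C_over_N8 = 215         # moderator-to-238U nuclei ratio
sigma_s_C   = 4.7         # graphite scattering cross section, barns
xi_C        = 0.159       # mean logarithmic energy decrement of carbon

# Effective resonance integrals, in barns.
J_hom = 3.9 * (N_C_over_N8 * sigma_s_C) ** 0.415   # ~69   (homogeneous mixture)
J_het = 4.15 + 12.35 / sqrt(3.0)                   # ~11.3 (3 cm natural-uranium rod)

# Coefficient N_8 / (xi * sigma_s * N_C), in barn^-1.
coeff = 1.0 / (xi_C * sigma_s_C * N_C_over_N8)     # ~0.00625

for label, J in (("homogeneous", J_hom), ("heterogeneous", J_het)):
    print(f"{label}: J_eff = {J:.1f} barns, phi = {exp(-coeff * J):.2f}")
# phi is about 0.65 for the homogeneous mixture and about 0.93 for the heterogeneous lattice.
```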
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "p + \\psi = 1" }, { "math_id": 3, "text": "\\varphi = e^{- \\frac {N_S}{\\xi\\Sigma_S}J_\\mathrm{eff} }." }, { "math_id": 4, "text": "J_\\mathrm{eff} = 3,9\\cdot (215 \\cdot 4,7)^{0,415} = 69" }, { "math_id": 5, "text": "J_\\mathrm{eff}=a+\\frac {b}{\\sqrt{d}}." }, { "math_id": 6, "text": "J_\\mathrm{eff}=4,15+\\frac {12,35}{\\sqrt{d}}," }, { "math_id": 7, "text": "J_\\mathrm{eff}=4,15+\\frac {12,35}{\\sqrt{3}} \\approx 11,3" }, { "math_id": 8, "text": "\\frac{N_8 J_\\mathrm{eff}}{\\xi\\Sigma_S} = \\frac{\\Sigma}{\\xi\\Sigma_S}," }, { "math_id": 9, "text": "\\xi\\Sigma_S" }, { "math_id": 10, "text": "\\frac{N_8}{\\xi\\sigma^C_SN_C} = \\frac{1}{0,159 \\cdot 4,7 \\cdot 215} = 0,00625" } ]
https://en.wikipedia.org/wiki?curid=60634470
60638834
Topological polymers
Topological polymers may refer to a polymeric molecule that possesses unique spatial features, such as linear, branched, or cyclic architectures. It could also refer to polymer networks that exhibit distinct topologies owing to special crosslinkers. When self-assembling or crosslinking in a certain way, polymeric species with simple topological identity could also demonstrate complicated topological structures in a larger spatial scale. Topological structures, along with the chemical composition, determine the macroscopic physical properties of polymeric materials. Definition. Topological polymers, or polymer topology, could refer to a single polymeric chain with topological information or a polymer network with special junctions or connections. When the topology of a polymeric chain or network is investigated, the exact chemical composition is usually neglected, but the way of junctions and connections is more considered. Various topological structures, on one hand, could potentially change the interactions (van der Waals interaction, hydrogen bonding, etc.) between each of the polymer chain. On the other hand, topology also determines the hierarchical structures within a polymer network, from a microscopic level (&lt;1 nm) to a macroscopic level (10-100 nm), which eventually affords polymeric materials with completely different physical properties, such as mechanical property, glass transition temperature, gelation concentration. Topological polymer classification. In early 1950s, Paul J. Flory was the pioneer who developed theories to explain topology within a polymer network, and the structure-property relationships between the topology and the mechanical property, like elasticity, was initially established afterwards. Later in 1980s, Bertrand Duplantier developed theories to describe any polymer network topologies using statistical mechanics, which could help to derive topology-dependent critical exponents in a polymer network. In early 2000s, Yasuyuki Tezuka and coworkers were the first ones that systematically described a single molecular chain with topological information. Adapted from Y. Tezuka and coworker's description of a topological polymer chain with more generalized rules, the topology notation rules are to be introduced first, followed by three classical classifications, including linear, branched and cyclic polymer topologies, and they are classified in a table reorganized and redrawn from Y. Tezuka and coworker (Copyright, 2001 by American Chemical Society). A general polymer chain could be generalized into an undirected graph with nodes (vertices or points) and edges (lines or links) based on graph theory. In a graph theory topology, two sets of nodes are present, termini and junctions. The quantity ‘degree’ represents the number of edges linked to each node, if the degree of a certain node is larger than 3 (including 3), the node is a junction, while the degree of a node is 1, the node is a terminus. There are no nodes with a degree of 2 since they could be generalized into their adjacent nodes. As for a certain polymer, as long as the topology is fixed, a specific topology notation could be generated using the following rules: A general polymer chain notation could be expressed as: formula_0 i. For branched topology, a main chain is first selected, and the degree of each junction nodes along the chain should be noted as formula_13connected by a hyphen. 
If there is a side chain on any of the main chain nodes, formula_14 should be noted in a bracket following the main chain notation. ii. For monocyclic topology, the outward branches should first be identified, with the number of branches at each of the junctions noted as formula_13 connected by a hyphen. Then the topology of each branch should be identified using the rule in i as formula_14, given in a bracket following the formula_13 notations. iii. For multicyclic topology, superscript letters (formula_15, formula_16, formula_17 and so on) are used to describe internal connections within an existing ring. Linear. Linear topology is a special topological structure that exclusively has two nodes as the termini without any junction nodes. High-density polyethylene (HDPE) could be regarded as a linear polymer chain with a very small amount of branching; the linear topology is listed below: Linear chains capable of forming intra-chain interactions can fold into a wide range of circuit topologies. Examples include biopolymers such as proteins and nucleic acids. Branched. When side chains are introduced into a linear polymer chain, a branched topology forms. Linear polymers are special cases of branched polymers with zero junction nodes, but the two are cataloged separately to distinguish their special macroscopic properties. Branched polymers with the same molecular weight usually demonstrate different physical properties, because branching generally decreases the van der Waals interactions between the polymer chains. Several well-known branched polymers have been synthesized, such as star-shaped polymers, comb polymers and dendrimers. Selected branched topologies have been listed below: Cyclic. Cyclic structures are of interest topologically because there are no termini in this topology, and the physical properties can be dramatically different as a result of the absence of free chain ends. Monocyclic. Monocyclic topology is a topological structure with only one cycle in the polymer chain, and it can be coupled with outward branching structures. Selected monocyclic topologies have been listed below: Bicyclic. Bicyclic topology refers to a structure in which two cycles, connected internally or externally, are present in a polymer chain. Selected bicyclic topologies are listed below: Polycyclic. Similar to monocyclic and bicyclic topologies, polycyclic topologies possess more cycles in a polymer chain and are more synthetically challenging. Selected polycyclic (tricyclic) topologies are listed below: Polymer network topology. Unlike single-chain polymeric species, polymer network topology is more complicated as a result of the amorphous character of the network, so a simple notation is usually not feasible. To analyze the topology of a network, the crosslinkers, including branched crosslinkers and cyclic crosslinkers, are considered. Branched crosslinking. Branched crosslinkers are entities that do not form cyclic topologies, and they can be understood in terms of the branched topological polymer chains described above. The 'degree' of branching gives the theoretical number of polymer strands at the junctions of the crosslinker, also known as the branch functionality ("f"). Combining monomers with different degrees of branch functionality can generate various topological networks with distinct elastic properties. Meanwhile, amphiphilic polymers, such as block copolymers, when forming micelle structures, can also be treated as branched crosslinkers with a high degree of branch functionality. Cyclic crosslinking. 
Branched crosslinkers should in principle form branched polymer network, but in practice, they could also generate loops and cycles. Cyclic crosslinkers are more sophisticated and show multiple possibilities. Loops or cycles could form in a smaller scale between two polymer chains or in a larger scale among multiple polymer strands. Besides, bicyclic topology is likely to form if two loops are catenated or linked internally or externally. Special cyclic crosslinking is more attractive within rotaxanes or catenanes since cycles are already present in those molecules. The characterization of cyclic topologies within a polymer network, compared to branched crosslinker, is relatively harder to perform. Conventional techniques such as rheology and tensile strength analysis are used to offer semiquantitative insights into the polymer topologies. Recently, the development of multiple quantum nuclear magnetic resonance (NMR) and network disassembly spectrometry (NDS) techniques provides quantitative characterizations of loops or cycles in a polymer network. Topological polymer/network synthesis. Topological polymer single chain. The synthesis of branched polymers (grafted polymer, comb polymer, star-shape polymer and dendrimer) has been well developed using well-known polymerization methodology such as cationic/anionic polymerization. Unlike branched polymer chain synthesis, the synthesis of cyclic polymer is more challenging. General cyclic species involve the combination between two fragments or among several fragments. Electrostatic self-assembly and covalent fixation is one of the most effective strategies to synthesize cyclic topological polymer. The reaction is driven by the electrostatic interactions between telechelic polytetrahydrofurans with cyclic ammonium salt and pluricaboxylate counterions. Upon dilution, the anions and cations could self-assemble into a cyclic structure, followed by a covalent fixation by heat or other external stimuli to undergo ring-opening reaction and close the chain into a cycle. Topological polymer network. Polymer networks intrinsically have various spatial features due to their amorphous property within a three-dimensional network. There are generally two ways to introduce spatially unique entities into a polymer network: Examples. The topology of a polymer chain or a polymer network is crucial in determining the macroscopic properties of a polymeric material, especially mechanical properties like elasticity and physical properties involving phase transitions. To date, several polymers with topological interest have been developed, which have been used for many applications, such as mechanical elastomer, energy, and so on. Below are some of the representative topological polymers or polymer networks. Interpenetrating polymer. Interpenetration polymers are polymer networks involving two and more polymer strands which are spatially intertwining with each other to form unique spatial topologies. Dendrimer. Dendrimer is a special branched polymers with a larger fraction of terminal nodes compared to the junction nodes and could be used for applications in drug delivery or catalysis. Polyrotaxane. Polyrotaxane is a polymer chain or a polymer network with mechanical interlock structures between ring-like molecules and polymer chain, where both the rings and the linear polymer chain could serve as the crosslinker to form a polymer network.
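As a small illustration of the graph-theoretic classification used in the notation rules above, the following Python sketch computes node degrees from an edge list and separates termini (degree 1) from junctions (degree 3 or more). The edge-list representation and the three-arm star example are illustrative assumptions, not taken from the cited work.

from collections import defaultdict

def classify_nodes(edges):
    # Degree of each node in an undirected polymer graph given as (u, v) edge pairs.
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    termini = sorted(n for n, d in degree.items() if d == 1)
    junctions = sorted(n for n, d in degree.items() if d >= 3)
    return termini, junctions

# A three-arm star: one junction (node 0) bonded to three linear arms.
star = [(0, 1), (0, 2), (0, 3)]
print(classify_nodes(star))   # ([1, 2, 3], [0])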
[ { "math_id": 0, "text": "P_m(x,y)[s_1(s_{11},s_{12},..),s_2(s_{21},s_{22},...),...]" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "II" }, { "math_id": 5, "text": "III" }, { "math_id": 6, "text": "IV" }, { "math_id": 7, "text": "V" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "m=x+y" }, { "math_id": 12, "text": "P_m(x,y)" }, { "math_id": 13, "text": "s_i" }, { "math_id": 14, "text": "s_{ij}" }, { "math_id": 15, "text": "a" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=60638834
60643388
Hertz vector
Formulation of electromagnetic potentials Hertz vectors, or the Hertz vector potentials, are an alternative formulation of the electromagnetic potentials. They are most often introduced in electromagnetic theory textbooks as practice problems for students to solve. There are multiple cases where they have a practical use, including antennas and waveguides. Though they are sometimes used in such practice problems, they are still rarely mentioned in most electromagnetic theory courses, and when they are they are often not practiced in a manner that demonstrates when they may be useful or provide a simpler method to solving a problem than more commonly practiced methods. Overview. Hertz vectors can be advantageous when solving for the electric and magnetic fields in certain scenarios, as they provide an alternative way to define the scalar potential formula_0 and the vector potential formula_1 which are used to find the fields as is commonly done. Considering cases of electric and magnetic polarization separately for simplicity, each can be defined in terms of the scalar and vector potentials which then allows for the electric and magnetic fields to be found. For cases of just electric polarization the following relations are used. And for cases of solely magnetic polarization they are defined as: To apply these, the polarizations need to be defined so that the form of the Hertz vectors can be obtained. Considering the case of simple electric polarization provides the path to finding this form via the wave equation. Assuming the space is uniform and non-conducting, and the charge and current distributions are given by formula_2, define a vector formula_3 such that formula_4 and formula_5. Using these to solve for the formula_6 vectors is similar to how the auxiliary fields formula_7 and formula_8 can be found, however here the Hertz vectors treat the electric and magnetic polarizations as sources. The Hertz vector potentials from these sources, formula_9 for the electric Hertz potential, and formula_10 for the magnetic Hertz potential can be derived using the wave equation for each. This is simply done by applying the d'Alembert operator formula_11 to both vectors, keeping in mind that formula_12, and the result is non-zero due to the polarizations that are present. This provides a direct pathway between easily determined properties such as current density formula_13 to fields via the Hertz vectors and their relations to the scalar and vector potentials. These wave equations yield the following solutions for the Hertz vectors: where formula_14 and formula_15 should be evaluated at the retarded time formula_16. The electric and magnetic fields can then be found using the Hertz vectors. For simplicity in observing the relationship between polarization, the Hertz vectors, and the fields, only one source of polarization (electric or magnetic) will be considered at a time. In the absence of any magnetic polarization, the formula_9 vector is used to find the fields as follows: Similarly, in the case of only magnetic polarization being present, the fields are determined via the previously stated relations to the scalar and vector potentials. For the case of both electric and magnetic polarization being present, the fields become Examples. Oscillating dipole. Consider a one dimensional, uniformly oscillating current. The current is aligned along the "z"-axis in some length of conducting material ℓ with an oscillation frequency formula_17. 
We will define the polarization vector where t is evaluated at the retarded time formula_18. Inserting this into the electric Hertz vector equation knowing that the length ℓ is small and the polarization is in one dimension it can be approximated in spherical coordinates as follows Continuing directly to taking the divergence quickly becomes messy due to the formula_19 denominator. This is readily resolved by using Legendre Polynomials for expanding a formula_20 potential: It is important to note that in the above equation, formula_21 and formula_22 are vectors, while formula_23 and formula_24 are the lengths of those vectors. formula_25 is the angle between the vectors formula_21 and formula_22. The Hertz vector is now written as follows. Taking the divergence Then the gradient of the result Finally finding the second partial with respect to time Allows for finding the electric field Simulation. Using the appropriate conversions to Cartesian coordinates, this field can be simulated in a 3D grid. Viewing the X-Y plane at the origin shows the two-lobed field in one plane we expect from a dipole, and it oscillates in time. The image below shows the shape of this field and how the polarity reverses in time due to the cosine term, however it does not currently show the amplitude change due to the time varying strength of the current. Regardless, its shape alone shows the effectiveness of using the electric Hertz vector in this scenario. This approach is significantly more straightforward than finding the electric field in terms of charges within the infinitely thin wire, especially as they vary with time. This is just one of several examples of when the use of Hertz vectors is advantageous compared to more common methods. Current loop. Consider a small loop of area formula_26 carrying a time varying current formula_27. With current flow, a magnetic field perpendicular to the direction of flow as a result of the right hand rule will be present. Due to this field being generated in a loop, it is expected that the field would look similar to that of an electric dipole. This can be proven quickly using Hertz vectors. First the magnetic polarization is determined by its relation to magnetic moment formula_28. The magnetic moment of a current loop is defined as formula_29, so if the loop lies in the x-y plane and has the previously defined time-varying current, the magnetic moment is formula_30. Inserting this into formula_31, and then into Equation (10), the magnetic Hertz vector is found in a simple form. As in the electric dipole example, the Legendre polynomials can be used to simplify the derivatives necessary to obtain formula_32 and formula_33. The electric field is then found through Due to the dependence on formula_23, it is significantly simpler to express the Hertz vector in spherical coordinates by transforming from the sole formula_34 component vector to the formula_35 and formula_36 components. Simulation. This field was simulated using Python by converting the spherical component to x and y components. The result is as expected. Due to the changing current, there is a time dependent magnetic field which induces an electric field. Due to the shape, the field appears as if it were a dipole.
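Since the article notes that the dipole field was simulated in Python, the sketch below shows one way such a simulation could look: it evaluates the standard far-field θ-component of the electric field of a "z"-directed oscillating dipole (proportional to sin θ / r at the retarded time) on a grid in a plane containing the dipole axis, and converts it to Cartesian components for plotting. The grid size, frequency and amplitude are arbitrary illustrative values, and this is a sketch of the general approach rather than the authors' original script.

import numpy as np

c = 3.0e8                      # speed of light (m/s); illustrative parameters below
omega = 2 * np.pi * 1.0e8      # oscillation frequency (rad/s)

x = np.linspace(-30.0, 30.0, 201)
z = np.linspace(-30.0, 30.0, 201)
X, Z = np.meshgrid(x, z)
R = np.hypot(X, Z) + 1e-9      # distance from the dipole at the origin
sin_t = np.abs(X) / R          # sin(theta), theta measured from the dipole (z) axis
cos_t = Z / R

def e_theta(t):
    # Far-field form E_theta ~ sin(theta)/r * cos(omega * (t - r/c)),
    # i.e. the field evaluated at the retarded time, as in the derivation above.
    return sin_t / R * np.cos(omega * (t - R / c))

# Convert the spherical theta-component to Cartesian (x, z) components.
E = e_theta(0.0)
Ex = E * cos_t * np.sign(X)
Ez = -E * sin_t
print(Ex.shape, Ez.shape)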
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "\\mathbf{A}" }, { "math_id": 2, "text": "\\rho(\\mathbf{r},t), \\mathbf{J}(\\mathbf{r},t)" }, { "math_id": 3, "text": "\\mathbf{P}=\\mathbf{P}(\\mathbf{r},t)" }, { "math_id": 4, "text": "\\rho = -\\nabla \\cdot \\mathbf{P} " }, { "math_id": 5, "text": "\\mathbf{J}= \\frac{\\partial\\mathbf{P}}{\\partial t}" }, { "math_id": 6, "text": "\\mathbf{\\Pi}" }, { "math_id": 7, "text": "\\mathbf{D}" }, { "math_id": 8, "text": "\\mathbf{H}" }, { "math_id": 9, "text": "\\mathbf{\\Pi}_e" }, { "math_id": 10, "text": "\\mathbf{\\Pi}_m" }, { "math_id": 11, "text": "\\Box=\\left(\\nabla^2-\\frac{1}{c^2}\\frac{\\partial^2}{\\partial t^2}\\right)" }, { "math_id": 12, "text": "c^2 = \\left(\\mu \\epsilon\\right)^{-1}" }, { "math_id": 13, "text": "\\mathbf{J}" }, { "math_id": 14, "text": "\\left[\\mathbf{P}\\left(\\mathbf{r}'\\right)\\right]" }, { "math_id": 15, "text": "\\left[\\mathbf{M}\\left(\\mathbf{r}'\\right)\\right]" }, { "math_id": 16, "text": "|\\mathbf{r}-\\mathbf{r'}|/v" }, { "math_id": 17, "text": "\\omega" }, { "math_id": 18, "text": "t' = t-|\\mathbf{r}-\\mathbf{r}'|/v" }, { "math_id": 19, "text": "|\\mathbf{r}-\\mathbf{r'}|" }, { "math_id": 20, "text": "1/r" }, { "math_id": 21, "text": "\\mathbf{x}" }, { "math_id": 22, "text": "\\mathbf{x}'" }, { "math_id": 23, "text": "r" }, { "math_id": 24, "text": "r'" }, { "math_id": 25, "text": "\\gamma" }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": "I \\sin\\left(\\omega t\\right)" }, { "math_id": 28, "text": "\\mathbf{M}=\\frac{d\\mathbf{m}}{dV}" }, { "math_id": 29, "text": "\\mathbf{m}=IA\\mathbf{\\hat{n}}" }, { "math_id": 30, "text": "\\mathbf{m}=IA \\sin\\left(\\omega t\\right)\\mathbf{\\hat{z}}" }, { "math_id": 31, "text": "\\mathbf{M}" }, { "math_id": 32, "text": "\\mathbf{E}" }, { "math_id": 33, "text": "\\mathbf{B}" }, { "math_id": 34, "text": "\\mathbf{\\hat{z}}" }, { "math_id": 35, "text": "\\mathbf{\\hat{r}}" }, { "math_id": 36, "text": "\\mathbf{\\hat{\\theta}}" } ]
https://en.wikipedia.org/wiki?curid=60643388
6064424
Spatial descriptive statistics
Spatial descriptive statistics is the intersection of spatial statistics and descriptive statistics; these methods are used for a variety of purposes in geography, particularly in quantitative data analyses involving Geographic Information Systems (GIS). Types of spatial data. The simplest forms of spatial data are "gridded data", in which a scalar quantity is measured for each point in a regular grid of points, and "point sets", in which a set of coordinates (e.g. of points in the plane) is observed. An example of gridded data would be a satellite image of forest density that has been digitized on a grid. An example of a point set would be the latitude/longitude coordinates of all elm trees in a particular plot of land. More complicated forms of data include marked point sets and spatial time series. Measures of spatial central tendency. The coordinate-wise mean of a point set is the centroid, which solves the same variational problem in the plane (or higher-dimensional Euclidean space) that the familiar average solves on the real line — that is, the centroid has the smallest possible average squared distance to all points in the set. Measures of spatial dispersion. Dispersion captures the degree to which points in a point set are separated from each other. For most applications, spatial dispersion should be quantified in a way that is invariant to rotations and reflections. Several simple measures of spatial dispersion for a point set can be defined using the covariance matrix of the coordinates of the points. The trace, the determinant, and the largest eigenvalue of the covariance matrix can be used as measures of spatial dispersion. A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors. Measures of spatial homogeneity. A homogeneous set of points in the plane is a set that is distributed such that approximately the same number of points occurs in any circular region of a given area. A set of points that lacks homogeneity may be "spatially clustered" at a certain spatial scale. A simple probability model for spatially homogeneous points is the Poisson process in the plane with constant intensity function. Ripley's "K" and "L" functions. Ripley's "K" and "L" functions introduced by Brian D. Ripley are closely related descriptive statistics for detecting deviations from spatial homogeneity. The "K" function (technically its sample-based estimate) is defined as formula_0 where "d""ij" is the Euclidean distance between the "i"th and "j"th points in a data set of "n" points, t is the search radius, λ is the average density of points (generally estimated as "n"/"A", where "A" is the area of the region containing all points) and "I" is the indicator function (i.e. 1 if its operand is true, 0 otherwise). In 2 dimensions, if the points are approximately homogeneous, formula_1 should be approximately equal to π"t"2. For data analysis, the variance stabilized Ripley "K" function called the "L" function is generally used. The sample version of the "L" function is defined as formula_2 For approximately homogeneous data, the "L" function has expected value "t" and its variance is approximately constant in "t". A common plot is a graph of formula_3 against "t", which will approximately follow the horizontal zero-axis with constant dispersion if the data follow a homogeneous Poisson process. Using Ripley's "K" function it can be determined whether points have a random, dispersed or clustered distribution pattern at a certain scale.
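The following Python sketch computes the sample "K" and "L" functions defined above for a planar point set, without the edge corrections that practical analyses usually add; the function names and the uniform test data are illustrative choices.

import numpy as np

def ripley_k(points, t, area):
    # Sample Ripley K at search radius t for an n x 2 array of coordinates.
    points = np.asarray(points, dtype=float)
    n = len(points)
    lam = n / area                      # estimated intensity
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)         # exclude i == j pairs
    count = np.sum(d < t)               # ordered pairs within distance t
    return count / (n * lam)

def ripley_l(points, t, area):
    # Variance-stabilized L function: L(t) = sqrt(K(t) / pi).
    return np.sqrt(ripley_k(points, t, area) / np.pi)

# Homogeneous example on the unit square: t - L(t) should stay near 0.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))
for t in (0.05, 0.10, 0.15):
    print(t, t - ripley_l(pts, t, area=1.0))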
[ { "math_id": 0, "text": "\n\\widehat{K}(t) = \\lambda^{-1} \\sum_{i\\ne j} \\frac{I(d_{ij}<t)} n,\n" }, { "math_id": 1, "text": "\\widehat{K}(t)" }, { "math_id": 2, "text": "\n\\widehat{L}(t) = \\left( \\frac{\\widehat{K}(t)} \\pi \\right)^{1/2}.\n" }, { "math_id": 3, "text": "t - \\widehat{L}(t)" } ]
https://en.wikipedia.org/wiki?curid=6064424
60646773
ℓ-adic sheaf
In algebraic geometry, an ℓ-adic sheaf on a Noetherian scheme "X" is an inverse system consisting of formula_0-modules formula_1 in the étale topology and formula_2 inducing formula_3. Bhatt–Scholze's pro-étale topology gives an alternative approach. Motivation. The development of étale cohomology as a whole was fueled by the desire to produce a 'topological' theory of cohomology for algebraic varieties, i.e. a Weil cohomology theory that works in any characteristic. An essential feature of such a theory is that it admits coefficients in a field of characteristic 0. However, constant étale sheaves with no torsion have no interesting cohomology. For example, if formula_4 is a smooth variety over a field formula_5, then formula_6 for all positive formula_7. On the other hand, the constant sheaves formula_8 do produce the 'correct' cohomology, as long as formula_9 is invertible in the ground field formula_5. So one takes a prime formula_10 for which this is true and defines formula_10-adic cohomology as formula_11. This definition, however, is not completely satisfactory: as in the classical case of topological spaces, one might want to consider cohomology with coefficients in a local system of formula_12-vector spaces, and there should be a category equivalence between such local systems and continuous formula_12-representations of the étale fundamental group. Another problem with the definition above is that it behaves well only when formula_5 is separably closed. In this case, all the groups occurring in the inverse limit are finitely generated and taking the limit is exact. But if formula_5 is for example a number field, the cohomology groups formula_13 will often be infinite and the limit not exact, which causes issues with functoriality. For instance, there is in general no Hochschild-Serre spectral sequence relating formula_14 to the Galois cohomology of formula_15. These considerations lead one to consider the category of inverse systems of sheaves as described above. One then has the desired equivalence of categories with representations of the fundamental group (for formula_16-local systems, and when formula_4 is normal for formula_17-systems as well), and the issue in the last paragraph is resolved by so-called continuous étale cohomology, where one takes the derived functor of the composite functor of taking the limit over global sections of the system. Constructible and lisse ℓ-adic sheaves. An ℓ-adic sheaf formula_18 is said to be constructible if each formula_1 is a constructible sheaf, and lisse if, in addition, each formula_1 is locally constant. Some authors (e.g., those of SGA 4½) assume an ℓ-adic sheaf to be constructible. Given a connected scheme "X" with a geometric point "x", SGA 1 defines the étale fundamental group formula_19 of "X" at "x" to be the group classifying finite Galois coverings of "X". Then the category of lisse ℓ-adic sheaves on "X" is equivalent to the category of continuous representations of formula_19 on finite free formula_20-modules. This is an analog of the correspondence between local systems and continuous representations of the fundamental group in algebraic topology (because of this, a lisse ℓ-adic sheaf is sometimes also called a local system). ℓ-adic cohomology. An ℓ-adic cohomology group is an inverse limit of étale cohomology groups with certain torsion coefficients. The "derived category" of constructible ℓ-adic sheaves. 
In a way similar to that for ℓ-adic cohomology, the derived category of constructible formula_21-sheaves is defined essentially as formula_22 writes "in daily life, one pretends (without getting into much trouble) that formula_23 is simply the full subcategory of some hypothetical derived category formula_24 ..." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}/\\ell^n" }, { "math_id": 1, "text": "F_n" }, { "math_id": 2, "text": "F_{n+1} \\to F_n" }, { "math_id": 3, "text": "F_{n+1} \\otimes_{\\mathbb{Z}/\\ell^{n+1}} \\mathbb{Z}/\\ell^n \\overset{\\simeq}\\to F_n" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "H^i(X_\\text{ét},\\mathbb{Q})=0" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "\\mathbb{Z}/m" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "\\ell" }, { "math_id": 11, "text": "H^i(X_\\text{ét}, \\mathbb{Z}_\\ell):= \\varprojlim_n H^i(X_\\text{ét}, \\mathbb{Z}/\\ell^n)\\text{, and } H^i(X_\\text{ét}, \\mathbb{Q}_\\ell):= \\varprojlim_n H^i(X_\\text{ét}, \\mathbb{Z}/\\ell^n)\\otimes \\mathbb Q" }, { "math_id": 12, "text": "\\mathbb{Q}_\\ell" }, { "math_id": 13, "text": "H^i(X_\\text{ét}, \\mathbb{Z}/\\ell^n)" }, { "math_id": 14, "text": "H^i(X_\\text{ét}, \\mathbb{Z}_\\ell)" }, { "math_id": 15, "text": "H^i((X_{k^\\text{sep}})_\\text{ét}, \\mathbb{Z}_\\ell)" }, { "math_id": 16, "text": "\\mathbb Z_\\ell" }, { "math_id": 17, "text": "\\Q_\\ell" }, { "math_id": 18, "text": "\\{ F_n \\}_{\\ge 0}" }, { "math_id": 19, "text": "\\pi^{\\text{ét}}_1(X, x)" }, { "math_id": 20, "text": "\\mathbb{Z}_l" }, { "math_id": 21, "text": "\\overline{\\mathbb{Q}}_\\ell" }, { "math_id": 22, "text": "D^b_c(X, \\overline{\\mathbb{Q}}_\\ell) := (\\varprojlim_n D^b_c(X, \\mathbb{Z}/\\ell^n)) \\otimes_{\\mathbb{Z}_\\ell} \\overline{\\mathbb{Q}}_\\ell." }, { "math_id": 23, "text": "D^b_c(X, \\overline{\\mathbb{Q}}_\\ell)" }, { "math_id": 24, "text": "D(X, \\overline{\\mathbb{Q}}_\\ell)" } ]
https://en.wikipedia.org/wiki?curid=60646773
60649045
Re-Pair
Lossless, but memory-consuming, data compression algorithm Re-Pair (short for recursive pairing) is a grammar-based compression algorithm that, given an input text, builds a straight-line program, i.e. a context-free grammar generating a single string: the input text. In order to perform the compression in linear time, it consumes an amount of memory that is approximately five times the size of its input. The grammar is built by recursively replacing the most frequent pair of characters occurring in the text. Once there is no pair of characters occurring twice, the resulting string is used as the axiom of the grammar. Therefore, the output grammar is such that all rules but the axiom have two symbols on the right-hand side. How it works. Re-Pair was first introduced by N. J. Larsson and A. Moffat in 1999. In their paper the algorithm is presented together with a detailed description of the data structures required to implement it with linear time and space complexity. The experiments showed that Re-Pair achieves high compression ratios and offers good performance for decompression. However, the major drawback of the algorithm is its memory consumption, which is approximately 5 times the size of the input. Such memory usage is required in order to perform the compression in linear time, but makes the algorithm impractical for compressing large files. The image on the right shows how the algorithm compresses the string formula_0. During the first iteration, the pair formula_1, which occurs three times in formula_2, is replaced by a new symbol formula_3. On the second iteration, the most frequent pair in the string formula_4, which is formula_5, is replaced by a new symbol formula_6. Thus, at the end of the second iteration, the remaining string is formula_7. In the next two iterations, the pairs formula_8 and formula_9 are replaced by symbols formula_10 and formula_11 respectively. Finally, the string formula_12 contains no repeated pair and therefore it is used as the axiom of the output grammar. Data structures. In order to achieve linear time complexity, Re-Pair requires the following data structures. Since the hash table and the priority queue refer to the same elements (pairs), they can be implemented by a common data structure called PAIR, with pointers for the hash table (h_next) and the priority queue (p_next and p_prev). Furthermore, each PAIR points to the beginning of the first (f_pos) and the last (b_pos) occurrences of the string represented by the PAIR in the sequence. The following picture shows an overview of this data structure. The following two pictures show an example of how these data structures look after the initialization and after applying one step of the pairing process (pointers to NULL are not displayed): Encoding the grammar. Once the grammar has been built for a given input string, in order to achieve effective compression, this grammar has to be encoded efficiently. One of the simplest methods for encoding the grammar is implicit encoding, which consists of invoking the function codice_0, described below, sequentially on all the axiom's symbols. Intuitively, rules are encoded as they are visited in a depth-first traversal of the grammar. The first time a rule is visited, its right-hand side is encoded recursively and a new code is assigned to the rule. From that point on, whenever the rule is reached, the assigned value is written. num_rules_encoded = 256 // By default, the extended ASCII charset are the terminals of the grammar. 
writeSymbol(symbol s) {
    bitslen = log(num_rules_encoded); // Initially 8, to describe any extended ASCII character
    write s in binary using bitslen bits
}

void encodeCFG_rec(symbol s) {
    if (s is non-terminal and this is the first time symbol s appears) {
        take rule s → X Y;
        write bit 1;
        encodeCFG_rec(X);
        encodeCFG_rec(Y);
        assign to symbol s value ++num_rules_encoded;
    } else {
        write bit 0;
        writeSymbol(terminal/value assigned)
    }
}

void encodeCFG(symbol s) {
    encodeCFG_rec(s);
    write bit 1;
}

Another possibility is to separate the rules of the grammar into generations such that a rule formula_19 belongs to generation formula_13 iff at least one of formula_20 or formula_21 belongs to generation formula_22 and the other belongs to generation formula_23 with formula_24. Then these generations are encoded subsequently starting from generation formula_25. This was the method proposed originally when Re-Pair was first introduced. However, most implementations of Re-Pair use the implicit encoding method due to its simplicity and good performance. Furthermore, it allows on-the-fly decompression. Versions. A number of different implementations of Re-Pair exist. Each of these versions aims at improving one specific aspect of the algorithm, such as reducing the runtime, reducing the space consumption or increasing the compression ratio. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
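To make the pairing phase concrete, here is a deliberately naive Python sketch that repeatedly replaces the most frequent adjacent pair with a fresh nonterminal. It runs in quadratic time and ignores the hash table, priority queue and PAIR records that the linear-time implementation described above relies on, so it is only an illustration of the idea.

from collections import Counter

def repair(text):
    # Naive Re-Pair: returns (axiom sequence, grammar), where the grammar maps each
    # fresh nonterminal to the pair it replaces. Quadratic time; for illustration only.
    seq = list(text)
    grammar = {}
    next_id = 1
    while True:
        counts = Counter(zip(seq, seq[1:]))   # pair counts (runs like "aaa" are over-counted slightly)
        if not counts:
            break
        pair, freq = counts.most_common(1)[0]
        if freq < 2:
            break                             # no pair occurs twice: seq becomes the axiom
        new = "R%d" % next_id
        next_id += 1
        grammar[new] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, grammar

axiom, rules = repair("xabcabcy123123zabc")
print("axiom:", axiom)
print("rules:", rules)

On the example string from the article this reproduces the same sequence of replacements (ab, then the new symbol followed by c, then 12, then the result followed by 3).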
[ { "math_id": 0, "text": "w=xabcabcy123123zabc" }, { "math_id": 1, "text": "ab" }, { "math_id": 2, "text": "w" }, { "math_id": 3, "text": "R_1" }, { "math_id": 4, "text": "w=xR_1cR_1cy123123zR_1c" }, { "math_id": 5, "text": "R_1c" }, { "math_id": 6, "text": "R_2" }, { "math_id": 7, "text": "w=xR_2R_2y123123zR_2" }, { "math_id": 8, "text": "12" }, { "math_id": 9, "text": "R_{3}3" }, { "math_id": 10, "text": "R_3" }, { "math_id": 11, "text": "R_4" }, { "math_id": 12, "text": "w=xR_2R_2yR_4R_4zR_2" }, { "math_id": 13, "text": "i" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "m" }, { "math_id": 16, "text": "w[i]" }, { "math_id": 17, "text": "w[k]" }, { "math_id": 18, "text": "w[m]" }, { "math_id": 19, "text": "X \\to YZ" }, { "math_id": 20, "text": "Y" }, { "math_id": 21, "text": "Z" }, { "math_id": 22, "text": "i-1" }, { "math_id": 23, "text": "j" }, { "math_id": 24, "text": "j \\leq i-1" }, { "math_id": 25, "text": "0" } ]
https://en.wikipedia.org/wiki?curid=60649045
60654680
Nash-Williams theorem
Theorem in graph theory describing number of edge-disjoint spanning trees a graph can have In graph theory, the Nash-Williams theorem is a tree-packing theorem that describes how many edge-disjoint spanning trees (and more generally forests) a graph can have: A graph "G" has "t" edge-disjoint spanning trees iff for every partition formula_0 where formula_1 there are at least "t"("k" − 1) crossing edges (Tutte 1961, Nash-Williams 1961). For this article, we will say that such a graph has arboricity "t" or is "t"-arboric. (The actual definition of arboricity is slightly different and applies to forests rather than trees.) Related tree-packing properties. A "k"-arboric graph is necessarily "k"-edge connected. The converse is not true. As a corollary of NW, every 2"k"-edge connected graph is "k"-arboric. Both NW and Menger's theorem characterize when a graph has "k" edge-disjoint paths between two vertices. Nash-Williams theorem for forests. In 1964, Nash-Williams generalized the above result to forests: G can be partitioned into "t" edge-disjoint forests iff for every formula_2, the induced subgraph "G"["U"] has at most formula_3 edges. A proof is given here. This is how people usually define what it means for a graph to be "t"-arboric. In other words, for every subgraph "S" = "G"["U"], we have formula_4. It is tight in that there is a subgraph "S" that saturates the inequality (or else we can choose a smaller "t"). This leads to the following formula formula_5, also referred to as the NW formula. The general problem is to ask when a graph can be covered by edge-disjoint subgraphs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
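To make the NW formula concrete, the following Python sketch evaluates formula_5 by brute force over all induced subgraphs of a small graph; the exponential enumeration and the complete-graph example are illustrative choices only.

from itertools import combinations
from math import ceil

def nw_bound(vertices, edges):
    # ceil of the max over vertex subsets S (|S| >= 2) of |E(S)| / (|S| - 1),
    # computed by brute force; only feasible for very small graphs.
    best = 0
    for r in range(2, len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            e = sum(1 for u, v in edges if u in s and v in s)
            best = max(best, ceil(e / (len(s) - 1)))
    return best

# Complete graph K4: 6 edges on 4 vertices, and 6 / (4 - 1) = 2, so K4 is 2-arboric.
V = [0, 1, 2, 3]
E = [(u, v) for u, v in combinations(V, 2)]
print(nw_bound(V, E))   # 2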
[ { "math_id": 0, "text": "V_1, \\ldots, V_k \\subset V(G)" }, { "math_id": 1, "text": "V_i \\neq \\emptyset" }, { "math_id": 2, "text": "U \\subset V(G)" }, { "math_id": 3, "text": "t(|U|-1)" }, { "math_id": 4, "text": "t \\geq \\lceil E(S) / (V(S) - 1) \\rceil" }, { "math_id": 5, "text": "t = \\lceil \\max_{S \\subset G} \\frac{E(S)}{V(S) - 1} \\rceil" } ]
https://en.wikipedia.org/wiki?curid=60654680
60656372
Deletion–contraction formula
Formula in graph theory In graph theory, a deletion-contraction formula / recursion is any formula of the following recursive form: formula_0 Here "G" is a graph, "f" is a function on graphs, "e" is any edge of "G", "G" \ "e" denotes edge deletion, and "G" / "e" denotes contraction. Tutte refers to such a function as a W-function. The formula is sometimes referred to as the fundamental reduction theorem. In this article we abbreviate to DC. R. M. Foster had already observed that the chromatic polynomial is one such function, and Tutte began to discover more, including a function "f" = "t"("G") counting the number of spanning trees of a graph (also see Kirchhoff's theorem). It was later found that the flow polynomial is yet another; and soon Tutte discovered an entire class of functions called Tutte polynomials (originally referred to as dichromates) that satisfy DC. Examples. Spanning trees. The number of spanning trees formula_1 satisfies DC. Proof. formula_2 denotes the number of spanning trees not including "e", whereas formula_3 the number including "e". To see the second, if "T" is a spanning tree of "G" then contracting "e" produces another spanning tree of formula_4. Conversely, if we have a spanning tree "T" of formula_4, then expanding the edge "e" gives two disconnected trees; adding "e" connects the two and gives a spanning tree of "G". Laplacian characteristic polynomials. By Kirchhoff's theorem, the number of spanning trees in a graph is counted by a cofactor of the Laplacian matrix. However, the Laplacian characteristic polynomial does not satisfy DC. By studying Laplacians with vertex weights, one can find a deletion-contraction relation between the scaled vertex-weighted Laplacian characteristic polynomials. Chromatic polynomials. The chromatic polynomial formula_5 counting the number of "k"-colorings of "G" does not satisfy DC, but a slightly modified formula (which can be made equivalent): formula_6 Proof. If "e" = "uv", then a "k"-coloring of "G" is the same as a "k"-coloring of "G" \ "e" where "u" and "v" have different colors. There are formula_7 total "G" \ "e" colorings. We need now subtract the ones where "u" and "v" are colored similarly. But such colorings correspond to the "k"-colorings of formula_8 where "u" and "v" are merged. This above property can be used to show that the chromatic polynomial formula_5 is indeed a polynomial in "k". We can do this via induction on the number of edges and noting that in the base case where there are no edges, there are formula_9 possible colorings (which is a polynomial in "k").
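The recursion for the chromatic polynomial can be turned directly into code. The following Python sketch counts proper "k"-colorings of a simple graph with the deletion-contraction recurrence and the base case formula_9 for edgeless graphs; it runs in exponential time, and the graph representation is an illustrative choice.

def n_colorings(vertices, edges, k):
    # Proper k-colorings via P(G) = P(G \ e) - P(G / e); base case k ** |V| when no edges remain.
    # vertices: frozenset of labels; edges: frozenset of 2-element frozensets (simple graph).
    if not edges:
        return k ** len(vertices)
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # Contract e: identify v with u; parallel edges merge, which does not change proper colorings.
    contracted = frozenset(frozenset(u if w == v else w for w in f) for f in deleted)
    return (n_colorings(vertices, deleted, k)
            - n_colorings(vertices - {v}, contracted, k))

# Triangle: P(C3, k) = k(k-1)(k-2), so there are 6 proper 3-colorings.
V = frozenset({1, 2, 3})
E = frozenset({frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
print(n_colorings(V, E, 3))   # 6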
[ { "math_id": 0, "text": "f(G) = f(G \\setminus e) + f(G / e)." }, { "math_id": 1, "text": "t(G)" }, { "math_id": 2, "text": "t(G \\setminus e)" }, { "math_id": 3, "text": "t(G/e)" }, { "math_id": 4, "text": "G/e" }, { "math_id": 5, "text": "\\chi_G(k)" }, { "math_id": 6, "text": "\\chi_G(k) = \\chi_{G - e}(k) - \\chi_{G / e}(k)." }, { "math_id": 7, "text": "\\chi_{G\\setminus e}(k)" }, { "math_id": 8, "text": "\\chi_{G/e}(k)" }, { "math_id": 9, "text": "k^{|V(G)|}" } ]
https://en.wikipedia.org/wiki?curid=60656372
60657382
Random cluster model
In statistical mechanics, probability theory, graph theory, etc. the random cluster model is a random graph that generalizes and unifies the Ising model, Potts model, and percolation model. It is used to study random combinatorial structures, electrical networks, etc. It is also referred to as the RC model or sometimes the FK representation after its founders Cees Fortuin and Piet Kasteleyn. The random cluster model has a critical limit, described by a conformal field theory. Definition. Let formula_0 be a graph, and formula_1 be a bond configuration on the graph that maps each edge to a value of either 0 or 1. We say that a bond is "closed" on edge formula_2 if formula_3, and open if formula_4. If we let formula_5 be the set of open bonds, then an open cluster or FK cluster is any connected component in formula_6 union the set of vertices. Note that an open cluster can be a single vertex (if that vertex is not incident to any open bonds). Suppose an edge is open independently with probability formula_7 and closed otherwise, then this is just the standard Bernoulli percolation process. The probability measure of a configuration formula_8 is given as formula_9 The RC model is a generalization of percolation, where each cluster is weighted by a factor of formula_10. Given a configuration formula_8, we let formula_11 be the number of open clusters, or alternatively the number of connected components formed by the open bonds. Then for any formula_12, the probability measure of a configuration formula_8 is given as formula_13 "Z" is the partition function, or the sum over the unnormalized weights of all configurations, formula_14 The partition function of the RC model is a specialization of the Tutte polynomial, which itself is a specialization of the multivariate Tutte polynomial. Special values of "q". The parameter formula_10 of the random cluster model can take arbitrary complex values. This includes the following special cases: Edwards-Sokal representation. The Edwards-Sokal (ES) representation of the Potts model is named after Robert G. Edwards and Alan D. Sokal. It provides a unified representation of the Potts and random cluster models in terms of a joint distribution of spin and bond configurations. Let formula_21 be a graph, with the number of vertices being formula_22 and the number of edges being formula_23. We denote a spin configuration as formula_24 and a bond configuration as formula_25. The joint measure of formula_26 is given as formula_27 where formula_28 is the uniform measure, formula_29 is the product measure with density formula_30, and formula_31 is an appropriate normalizing constant. Importantly, the indicator function formula_32 of the set formula_33 enforces the constraint that a bond can only be open on an edge if the adjacent spins are of the same state, also known as the SW rule. The statistics of the Potts spins can be recovered from the cluster statistics (and vice versa), thanks to the following features of the ES representation: Frustration. There are several complications of the ES representation once frustration is present in the spin model (e.g. the Ising model with both ferromagnetic and anti-ferromagnetic couplings in the same lattice). In particular, there is no longer a correspondence between the spin statistics and the cluster statistics, and the correlation length of the RC model will be greater than the correlation length of the spin model. This is the reason behind the inefficiency of the SW algorithm for simulating frustrated systems. 
Two-dimensional case. If the underlying graph formula_43 is a planar graph, there is a duality between the random cluster models on formula_43 and on the dual graph formula_44. At the level of the partition function, the duality reads formula_45 On a self-dual graph such as the square lattice, a phase transition can only occur at the self-dual coupling formula_46. The random cluster model on a planar graph can be reformulated as a loop model on the corresponding medial graph. For a configuration formula_8 of the random cluster model, the corresponding loop configuration is the set of self-avoiding loops that separate the clusters from the dual clusters. In the transfer matrix approach, the loop model is written in terms of a Temperley-Lieb algebra with the parameter formula_47. In two dimensions, the random cluster model is therefore closely related to the O(n) model, which is also a loop model. In two dimensions, the critical random cluster model is described by a conformal field theory with the central charge formula_48 Known exact results include the conformal dimensions of the fields that detect whether a point belongs to an FK cluster or a spin cluster. In terms of Kac indices, these conformal dimensions are respectively formula_49 and formula_50, corresponding to the fractal dimensions formula_51 and formula_52 of the clusters. History and applications. RC models were introduced in 1969 by Fortuin and Kasteleyn, mainly to solve combinatorial problems. After their founders, it is sometimes referred to as FK models. In 1971 they used it to obtain the FKG inequality. Post 1987, interest in the model and applications in statistical physics reignited. It became the inspiration for the Swendsen–Wang algorithm describing the time-evolution of Potts models. Michael Aizenman and coauthors used it to study the phase boundaries in 1D Ising and Potts models. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
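For a concrete reading of the measure, the following Python sketch computes the unnormalized random-cluster weight of a single bond configuration, counting open clusters with a small union-find, and then sums the weights over all configurations of a tiny graph to obtain the partition function by brute force. The 4-cycle example and the parameter values are arbitrary illustrative choices.

from itertools import product

def open_clusters(n_vertices, edges, omega):
    # Connected components formed by the open bonds (isolated vertices count as clusters).
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v), bit in zip(edges, omega):
        if bit:
            parent[find(u)] = find(v)
    return len({find(x) for x in range(n_vertices)})

def rc_weight(n_vertices, edges, omega, p, q):
    # Unnormalized weight q**C(omega) * p**(#open) * (1 - p)**(#closed).
    n_open = sum(omega)
    c = open_clusters(n_vertices, edges, omega)
    return (q ** c) * (p ** n_open) * ((1 - p) ** (len(edges) - n_open))

# Brute-force partition function Z on a 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p, q = 0.5, 2.0
Z = sum(rc_weight(4, edges, w, p, q) for w in product((0, 1), repeat=len(edges)))
print(Z)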
[ { "math_id": 0, "text": "G = (V,E)" }, { "math_id": 1, "text": "\\omega: E \\to \\{0,1\\}" }, { "math_id": 2, "text": "e\\in E" }, { "math_id": 3, "text": "\\omega(e)=0" }, { "math_id": 4, "text": "\\omega(e)=1" }, { "math_id": 5, "text": "A(\\omega) = \\{e\\in E : \\omega(e)=1 \\}" }, { "math_id": 6, "text": "A(\\omega)" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "\\omega" }, { "math_id": 9, "text": "\\mu(\\omega) = \\prod_{e \\in E} p^{\\omega(e)}(1-p)^{1-\\omega(e)}." }, { "math_id": 10, "text": "q" }, { "math_id": 11, "text": "C(\\omega)" }, { "math_id": 12, "text": "q>0" }, { "math_id": 13, "text": "\\mu(\\omega) = \\frac{1}{Z} q^{C(\\omega)}\\prod_{e \\in E} p^{\\omega(e)}(1-p)^{1-\\omega(e)}. " }, { "math_id": 14, "text": "Z = \\sum_{\\omega \\in \\Omega} \\left\\{q^{C(\\omega)}\\prod_{e \\in E(G)} p^{\\omega(e)}(1-p)^{1-\\omega(e)} \\right\\}. " }, { "math_id": 15, "text": "q\\to 0" }, { "math_id": 16, "text": "q < 1" }, { "math_id": 17, "text": "q=1" }, { "math_id": 18, "text": "Z=1" }, { "math_id": 19, "text": "q=2" }, { "math_id": 20, "text": "q\\in \\mathbb{Z}^{+}" }, { "math_id": 21, "text": "G = (V, E)" }, { "math_id": 22, "text": "n = |V|" }, { "math_id": 23, "text": "m = |E|" }, { "math_id": 24, "text": "\\sigma\\in \\mathbb{Z}_q^n" }, { "math_id": 25, "text": "\\omega\\in \\{0,1\\}^m" }, { "math_id": 26, "text": "(\\sigma,\\omega)" }, { "math_id": 27, "text": " \\mu(\\sigma,\\omega) = Z^{-1}\\psi(\\sigma)\\phi_p(\\omega)1_A(\\sigma,\\omega), " }, { "math_id": 28, "text": "\\psi" }, { "math_id": 29, "text": "\\phi_p" }, { "math_id": 30, "text": "p = 1-e^{-\\beta}" }, { "math_id": 31, "text": " Z " }, { "math_id": 32, "text": "1_A" }, { "math_id": 33, "text": " A = \\{ (\\sigma,\\omega) : \\sigma_i = \\sigma_j \\text{ for any edge } (i,j) \\text{ where } \\omega = 1 \\} " }, { "math_id": 34, "text": " \\mu(\\sigma) " }, { "math_id": 35, "text": " \\beta " }, { "math_id": 36, "text": " \\phi_{p, q}(\\omega) " }, { "math_id": 37, "text": " \\mu(\\sigma \\,|\\, \\omega) " }, { "math_id": 38, "text": " \\phi_{p,q}(\\omega \\,|\\, \\sigma) " }, { "math_id": 39, "text": " G " }, { "math_id": 40, "text": " (i,j) " }, { "math_id": 41, "text": " \\sigma_i \\text{ and } \\sigma_j " }, { "math_id": 42, "text": "\\phi_{p,q}(i \\leftrightarrow j) = \\langle \\sigma_i\\sigma_j \\rangle" }, { "math_id": 43, "text": "G" }, { "math_id": 44, "text": "G^*" }, { "math_id": 45, "text": "\n\\tilde{Z}_G(q,v) = q^{|V|-|E|-1}v^{|E|} \\tilde{Z}_{G^*}\\left(q, \\frac{q}{v}\\right) \\qquad \\text{with} \\qquad v = \\frac{p}{1-p}\\quad \\text{and}\\quad \\tilde{Z}_G(q,v) = (1-p)^{-|E|}Z_G(q,v)\n" }, { "math_id": 46, "text": "v_\\text{self-dual}=\\sqrt{q}" }, { "math_id": 47, "text": "\\delta = q" }, { "math_id": 48, "text": " \nc = 13 - 6\\beta^2 - 6\\beta^{-2} \\qquad \\text{with} \\qquad q = 4\\cos^2(\\pi\\beta^2)\\ . \n" }, { "math_id": 49, "text": "2h_{0,\\frac12}" }, { "math_id": 50, "text": "2h_{\\frac12,0}" }, { "math_id": 51, "text": "2-2h_{0,\\frac12}" }, { "math_id": 52, "text": "2-2h_{\\frac12,0}" } ]
https://en.wikipedia.org/wiki?curid=60657382
60663582
Hamiltonian simulation
Hamiltonian simulation (also referred to as quantum simulation) is a problem in quantum information science that attempts to find the computational complexity and quantum algorithms needed for simulating quantum systems. Hamiltonian simulation is a problem that demands algorithms which implement the evolution of a quantum state efficiently. The Hamiltonian simulation problem was proposed by Richard Feynman in 1982, who suggested a quantum computer as a possible solution, since the cost of simulating general Hamiltonians classically appears to grow exponentially with the system size. Problem statement. In the Hamiltonian simulation problem, given a Hamiltonian formula_0 (a formula_1 Hermitian matrix acting on formula_2 qubits), a time formula_3 and a maximum simulation error formula_4, the goal is to find an algorithm that approximates formula_5 such that formula_6, where formula_7 is the ideal evolution and formula_8 is the spectral norm. A special case of the Hamiltonian simulation problem is the local Hamiltonian simulation problem. This is when formula_0 is a k-local Hamiltonian on formula_2 qubits where formula_9 and formula_10 acts non-trivially on at most formula_11 qubits instead of formula_2 qubits. The local Hamiltonian simulation problem is important because most Hamiltonians that occur in nature are k-local. Techniques. Product formulas. Also known as Trotter formulas or Trotter–Suzuki decompositions, product formulas simulate the sum of terms of a Hamiltonian by simulating each one separately for a small time slice. If formula_12, then formula_13 for a large formula_14, where formula_14 is the number of time steps to simulate for. The larger the formula_14, the more accurate the simulation. If the Hamiltonian is represented as a sparse matrix, the distributed edge coloring algorithm can be used to decompose it into a sum of terms, which can then be simulated by a Trotter–Suzuki algorithm. Taylor series. formula_15 by the Taylor series expansion. This says that during the evolution of a quantum state, the Hamiltonian is applied over and over again to the system with a varying number of repetitions. The first term is the identity matrix, so the system doesn't change when it is applied, but in the second term the Hamiltonian is applied once. For practical implementations, the series has to be truncated to formula_16, where the bigger the formula_17, the more accurate the simulation. This truncated expansion is then implemented via the linear combination of unitaries (LCU) technique for Hamiltonian simulation. Namely, one decomposes the Hamiltonian formula_18 such that each formula_19 is unitary (for instance, the Pauli operators always provide such a basis), and so each formula_20 is also a linear combination of unitaries. Quantum walk. In the quantum walk approach, a unitary operation whose spectrum is related to the Hamiltonian is implemented, and then the quantum phase estimation algorithm is used to adjust the eigenvalues. This makes it unnecessary to decompose the Hamiltonian into a sum of terms as the Trotter–Suzuki methods do. Quantum signal processing. The quantum signal processing algorithm works by transducing the eigenvalues of the Hamiltonian into an ancilla qubit, transforming the eigenvalues with single-qubit rotations and finally projecting the ancilla. It has been proved to be optimal in query complexity when it comes to Hamiltonian simulation. Complexity. The table of the complexities of the Hamiltonian simulation algorithms mentioned above. 
The Hamiltonian simulation can be studied in two ways. This depends on how the Hamiltonian is given. If it is given explicitly, then gate complexity matters more than query complexity. If the Hamiltonian is described as an Oracle (black box) then the number of queries to the oracle is more important than the gate count of the circuit. The following table shows the gate and query complexity of the previously mentioned techniques. Where formula_21 is the largest entry of formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
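As a numerical illustration of the product-formula idea, the following Python sketch compares the first-order Trotter product (e^{-iAt/r} e^{-iBt/r})^r with the exact evolution e^{-iHt} for a toy two-qubit Hamiltonian H = A + B, measuring the error in the spectral norm; the Hamiltonian, time and step counts are arbitrary illustrative choices, and SciPy's matrix exponential is used for both sides.

import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy two-qubit Hamiltonian H = A + B with non-commuting terms (arbitrary example).
A = np.kron(Z, Z)
B = 0.7 * np.kron(X, I2) + 0.3 * np.kron(I2, X)
H = A + B
t = 1.0

exact = expm(-1j * H * t)
for r in (1, 4, 16, 64):
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    trotter = np.linalg.matrix_power(step, r)
    err = np.linalg.norm(trotter - exact, 2)   # spectral-norm error, as in the problem statement
    print(r, err)

The printed error shrinks roughly in proportion to 1/r, in line with the statement that a larger number of time steps gives a more accurate simulation.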
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "2^n \\times 2^n" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": " U " }, { "math_id": 6, "text": "||U - e^{-iHt} || \\leq \\epsilon " }, { "math_id": 7, "text": "e^{-iHt}" }, { "math_id": 8, "text": "||\\cdot||" }, { "math_id": 9, "text": "H = \\sum_{j \\mathop =1}^m H_j " }, { "math_id": 10, "text": "H_j" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "H = A + B + C " }, { "math_id": 13, "text": " U = e^{-i(A + B + C)t} = (e^{-iAt/r}e^{-iBt/r}e^{-iCt/r})^r" }, { "math_id": 14, "text": "r" }, { "math_id": 15, "text": " e^{-iHt} = \\sum_{n \\mathop = 0}^ \\infty \\frac{(-iHt)^{n}}{n!} = I - iHt - \\frac{H^{2}t^{2}}{2} + \\frac{iH^{3}t^{3}}{6} + \\cdots " }, { "math_id": 16, "text": " \\left( \\sum_{n \\mathop = 0}^N \\frac{(-iHt)^{n}}{n!} \\right)" }, { "math_id": 17, "text": "N" }, { "math_id": 18, "text": " H = \\sum_{\\ell=1}^L \\alpha_\\ell H_\\ell " }, { "math_id": 19, "text": " H_\\ell " }, { "math_id": 20, "text": " H^n = \\sum_{\\ell_1,\\ldots,\\ell_n=1}^L \\alpha_{\\ell_1} \\cdots \\alpha_{\\ell_n} H_{\\ell_1} \\cdots H_{\\ell_n} " }, { "math_id": 21, "text": "||H||_{\\rm max}" } ]
https://en.wikipedia.org/wiki?curid=60663582
606719
Tits group
Finite simple group; sometimes classed as sporadic In group theory, the Tits group 2"F"4(2)′, named for Jacques Tits, is a finite simple group of order 2^11 · 3^3 · 5^2 · 13 = 17,971,200. This is the only simple group that is a derivative of a group of Lie type that is not a group of Lie type in any series from exceptional isomorphisms. It is sometimes considered a 27th sporadic group. History and properties. The Ree groups 2"F"4(2^(2"n"+1)) were constructed by Ree (1961), who showed that they are simple if "n" ≥ 1. The first member 2"F"4(2) of this series is not simple. It was studied by Jacques Tits (1964), who showed that it is almost simple, its derived subgroup 2"F"4(2)′ of index 2 being a new simple group, now called the Tits group. The group 2"F"4(2) is a group of Lie type and has a BN pair, but the Tits group itself does not have a BN pair. The Tits group is a member of the infinite family 2"F"4(2^(2"n"+1))′ of commutator groups of the Ree groups, and thus by definition not sporadic. But because it is also not strictly a group of Lie type, it is sometimes regarded as a 27th sporadic group. The Schur multiplier of the Tits group is trivial and its outer automorphism group has order 2, with the full automorphism group being the group 2"F"4(2). The Tits group occurs as a maximal subgroup of the Fischer group Fi22. The group 2"F"4(2) also occurs as a maximal subgroup of the Rudvalis group, as the point stabilizer of the rank-3 permutation action on 4060 = 1 + 1755 + 2304 points. The Tits group is one of the simple N-groups, and was overlooked in John G. Thompson's first announcement of the classification of simple "N"-groups, as it had not been discovered at the time. It is also one of the thin finite groups. The Tits group was characterized in various ways by Parrott (1972, 1973) and others. Maximal subgroups. The 8 classes of maximal subgroups of the Tits group were found independently by two authors; they are as follows: L3(3):2 (two classes, fused by an outer automorphism; these subgroups fix points of rank 4 permutation representations), 2.[2^8].5.4 (the centralizer of an involution), L2(25), 2^2.[2^8].S3, A6.2^2 (two classes, fused by an outer automorphism), and 5^2:4A4. Presentation. The Tits group can be defined in terms of generators and relations by formula_0 where ["a", "b"] is the commutator "a"^−1"b"^−1"ab". It has an outer automorphism obtained by sending ("a", "b") to ("a", "b"("ba")^5"b"("ba")^5).
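A quick arithmetic check of the order stated above, in Python:

order = 2**11 * 3**3 * 5**2 * 13
print(order)                 # 17971200
assert order == 17_971_200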
[ { "math_id": 0, "text": "a^2 = b^3 = (ab)^{13} = [a, b]^5 = [a, bab]^4 = ((ab)^4 ab^{-1})^6 = 1, \\," } ]
https://en.wikipedia.org/wiki?curid=606719
60672870
Surface growth
Dynamical study of growth of a surface In mathematics and physics, surface growth refers to models used in the dynamical study of the growth of a surface, usually by means of a stochastic differential equation of a field. Examples. Popular growth models are studied for their fractal properties, scaling behavior, critical exponents, universality classes, and relations to chaos theory, dynamical systems, and non-equilibrium / disordered / complex systems. Popular tools include statistical mechanics, the renormalization group, rough path theory, etc. Kinetic Monte Carlo surface growth model. Kinetic Monte Carlo (KMC) is a form of computer simulation in which atoms and molecules are allowed to interact at given rates that can be controlled based on known physics. This simulation method is typically used in the microelectronics industry to study crystal surface growth, and it can provide accurate models of surface morphology under different growth conditions on time scales typically ranging from microseconds to hours. Experimental methods such as scanning electron microscopy (SEM), X-ray diffraction, and transmission electron microscopy (TEM), and other computer simulation methods such as molecular dynamics (MD) and Monte Carlo simulation (MC), are also widely used. How KMC surface growth works. 1. Absorption process. First, the model tries to predict where an atom would land on a surface and its rate under particular environmental conditions, such as temperature and vapor pressure. In order to land on a surface, atoms have to overcome the so-called activation energy barrier. The frequency of passing through the activation barrier can be calculated by the Arrhenius equation: formula_0 where A is the thermal frequency of molecular vibration, formula_1 is the activation energy, k is the Boltzmann constant and T is the absolute temperature. 2. Desorption process. When atoms land on a surface, there are two possibilities. First, they may diffuse on the surface and find other atoms to make a cluster, which will be discussed below. Second, they may come off the surface in the so-called desorption process. Desorption is described exactly as in the absorption process, with the exception of a different activation energy barrier: formula_2 For example, if all positions on the surface of the crystal are energetically equivalent, the rate of growth can be calculated from the Turnbull formula: formula_3 where formula_4 is the rate of growth, ΔG = E_in − E_out, A_out and A_0,out are the frequencies with which any given molecule on the surface goes into or out of the crystal, h is the height of the molecule in the growth direction, and C_0 is the concentration of the molecules in direct distance from the surface. 3. Diffusion process on surface. The diffusion process can also be described with an Arrhenius equation: formula_5 where D is the diffusion coefficient and E_d is the diffusion activation energy. All three processes strongly depend on the surface morphology at a given time. For example, atoms tend to land at the edges of a group of connected atoms, the so-called island, rather than on a flat surface, since this reduces the total energy. When atoms diffuse and connect to an island, each atom tends to diffuse no further, because the activation energy to detach itself from the island is much higher. Moreover, if an atom lands on top of an island, it does not diffuse fast enough and tends to move down the steps, enlarging the island. Simulation methods. 
Because of limited computing power, specialized simulation models have been developed for various purposes depending on the time scale: a) Electronic scale simulations (density functional theory, ab-initio molecular dynamics): sub-atomic length scales on femtosecond time scales; b) Atomic scale simulations (MD): nanometer to micrometer length scales on nanosecond time scales; c) Film scale simulations (KMC): micrometer length scales on microsecond-to-hour time scales; d) Reactor scale simulations (phase field model): meter length scales on time scales of years. Multiscale modeling techniques have also been developed to deal with overlapping time scales. How to use growth conditions in KMC. Growing a smooth, defect-free surface requires a suitable combination of physical conditions throughout the process. Such conditions are bond strength, temperature, surface diffusion and supersaturation (or impingement) rate. Using the KMC surface growth method, the following pictures describe the final surface structure under different conditions. 1. Bond strength and temperature. Bond strength and temperature certainly play important roles in the crystal growth process. For high bond strength, when atoms land on a surface, they tend to stay close to atomic surface clusters, which reduces the total energy. This behavior results in many isolated cluster formations with a variety of sizes, yielding a rough surface. Temperature, on the other hand, controls the height of the energy barrier. Conclusion: high bond strength and low temperature are preferred to grow a smooth surface. 2. Surface and bulk diffusion effect. Thermodynamically, a smooth surface is the lowest-energy configuration, which has the smallest surface area. However, a kinetic process such as surface and bulk diffusion is required to create a perfectly flat surface. Conclusion: enhancing surface and bulk diffusion will help create a smoother surface. 3. Supersaturation level. Conclusion: a low impingement rate helps create a smoother surface. 4. Morphology at different combinations of conditions. With control of all growth conditions such as temperature, bond strength, diffusion, and saturation level, a desired morphology can be formed by choosing the right parameters. The following demonstrates how to obtain some interesting surface features: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
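The following Python sketch is a minimal one-dimensional solid-on-solid KMC toy built from the ingredients above: deposition at a constant rate and surface hops with Arrhenius rates whose barrier grows with the number of lateral neighbors. All parameter values, the barrier form and the lattice size are illustrative assumptions rather than values from any cited study.

import numpy as np

rng = np.random.default_rng(1)

L = 50                  # lattice columns (periodic boundary)
F = 1.0                 # deposition rate per column
nu0 = 1.0e4             # attempt frequency for surface hops
E_s, E_n = 0.1, 0.2     # substrate and per-neighbor barrier contributions (arbitrary units)
kT = 0.08               # thermal energy in the same units

h = np.zeros(L, dtype=int)   # column heights

def hop_rates(h):
    left, right = np.roll(h, 1), np.roll(h, -1)
    n = (left >= h).astype(int) + (right >= h).astype(int)   # lateral nearest neighbors
    return nu0 * np.exp(-(E_s + E_n * n) / kT)               # Arrhenius form, as above

t = 0.0
for _ in range(20000):
    rates = np.concatenate((np.full(L, F), hop_rates(h)))
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # stochastic KMC time increment
    k = rng.choice(2 * L, p=rates / total)     # pick an event with probability proportional to its rate
    if k < L:
        h[k] += 1                              # deposition on column k
    else:
        i = k - L                              # surface hop from column i to a random neighbor
        if h[i] > 0:
            j = (i + rng.choice((-1, 1))) % L
            h[i] -= 1
            h[j] += 1

print(t, h.mean(), h.std())   # elapsed time, mean film height, surface roughness

Raising kT or lowering E_n in this toy makes hops away from islands more likely, which smooths the surface, in line with the qualitative conclusions above.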
[ { "math_id": 0, "text": "A_{in} = A_{0,in}\\exp \\left(-\\frac{E_{a,in}}{kT}\\right)" }, { "math_id": 1, "text": "E_{a}" }, { "math_id": 2, "text": "A_{out} = A_{0,out}\\exp \\left(-\\frac{E_{a,out}}{kT}\\right)" }, { "math_id": 3, "text": "V_c = hC_0(A_{out}-A_{0,out}) = hC_0\\exp\\left(-\\frac{E_{a,in}}{kT}\\right)\\cdot\\left(1-\\exp\\left(-\\frac{\\Delta G}{kT}\\right)\\right)" }, { "math_id": 4, "text": "V_{c}" }, { "math_id": 5, "text": "D = D_0\\exp \\left(-\\frac{E_d}{kT}\\right)" } ]
https://en.wikipedia.org/wiki?curid=60672870
60677420
NIMPLY gate
Digital logic gate The NIMPLY gate is a digital logic gate that implements material nonimplication. Symbols. A right-facing arrow with a line through it (formula_0) can be used to denote NIMPLY in algebraic expressions. Logically, it is equivalent to material nonimplication and to the logical expression A ∧ ¬B. Usage. The NIMPLY gate is often used in synthetic biology and genetic circuits. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
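A short Python snippet tabulating the gate's behavior as A ∧ ¬B:

def nimply(a, b):
    # Material nonimplication: true only when a is true and b is false.
    return a and not b

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(nimply(a, b)))
# Prints the truth table rows: 0 0 0, 0 1 0, 1 0 1, 1 1 0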
[ { "math_id": 0, "text": "\\nrightarrow" } ]
https://en.wikipedia.org/wiki?curid=60677420
606803
Blower door
A blower door is a machine used to perform a building air leakage test. It can also be used to measure airflow between building zones, to test ductwork airtightness and to help physically locate air leakage sites in the building envelope. There are three primary components to a blower door: a calibrated, variable-speed blower or fan, capable of inducing a range of airflows sufficient to pressurize and depressurize a variety of building sizes; a pressure measurement instrument, called a manometer, to simultaneously measure the pressure differential induced across the face of the fan and across the building envelope, as a result of fan airflow; and a mounting system, used to mount the fan in a building opening, such as a door or a window. Airtightness testing is usually thought of in residential settings. It is becoming more common in commercial settings. The General Services Administration (GSA) requires testing of new US federal government buildings. A variety of blower door air tightness metrics can be produced using the combination of building-to-outside pressure and fan airflow measurements. These metrics differ in their measurement methods, calculation and uses. Blower door tests are used by building researchers, weatherization crews, home performance contractors, home energy auditors, and others in efforts to assess the construction quality of the building envelope, locate air leakage pathways, assess how much ventilation is supplied by the air leakage, assess the energy losses resulting from that air leakage, determine if the building is too tight or too loose, determine if the building needs mechanical ventilation and to assess compliance with building performance standards. History. In Sweden blower door technology was first used to measure building air tightness in 1977. The earliest implementation in Sweden used a fan mounted in a window, rather than a door. By 1979, similar window-mounted measurement techniques were being pursued in Texas, and door-mounted test fans were being developed by a team at Princeton University to help them find and fix air leaks in homes in a Twin Rivers, New Jersey housing development. In Canada the initial concept similarly involved the use of a "blower window", which was first utilized by G. T. Tamura in Ottawa, Canada, as part of a Division of Building Research study to test houses in Ottawa in 1967–1968 and was published in 1975. In Canada, a team at the National Research Council of Canada's Division of Building Research (NRC/DBR) in Saskatchewan, advanced the published work of Tamura in 1975 and went from a blower window to a blower door concept used in the construction of the Saskatchewan Conservation House in 1977. The Saskatchewan group was actively involved in the development of the blower door in 1977-78 and published their findings in 1980. They made available their flow nozzle to interested companies including one from Minneapolis. Harold Orr, who had been in Ottawa in 1967 when Tamura was conducting his work, continued to work on blower door technology after Tamura published his paper. Tamura's blower window concept from 1967, preceded the Swedish work in 1977 by a decade. The first blower door was further used to test the airtightness of the Saskatchewan Conservation House built in 1977, which was tested at 0.5 ach at 50 Pa. These early research efforts demonstrated the potential power of blower door testing in revealing otherwise unaccounted for energy losses in homes. 
Previously, air leakage around doors, windows and electrical outlets was considered to be the primary leakage pathway in homes, but Harrje, Dutt and Beya used blower doors to identify thermal bypasses. These bypasses were air leakage sites, such as attic utility chases, that accounted for the largest percentage of air leakage energy loss in most homes. Use of blower doors in home energy retrofitting and weatherization efforts became known as "house doctoring" by researchers on the east and west United States coasts. The blower door first became commercially available in the United States in 1980 under the name Gadsco. Harmax started to sell units in 1981, followed closely by The Energy Conservatory in 1982. While these blower door-testing efforts were useful in identifying leakage pathways and in accounting for otherwise inexplicable energy losses, the results could not be used to determine real-time air exchange in buildings under natural conditions, or even to determine average annual air exchange levels. Sherman attributes the first attempt at doing this to Persily and Kronvall, who estimated annual average air exchange by: formula_0 formula_1 = Natural Air Changes per Hour [1/h] formula_2 = Air Changes per Hour at 50 pascal [1/h] Further physical modeling efforts allowed for the development and validation of an infiltration model by researchers at Lawrence Berkeley National Laboratory (LBNL). This model combined data derived from blower door tests with annual weather data to generate time-resolved ventilation rates for a given home in a specific location. This model has been incorporated into the ASHRAE Handbook of Fundamentals (1989), and it has been used in the development of ASHRAE Standards 119 and 136. Other infiltration models have been developed elsewhere, including one by Deru and Burns at the National Renewable Energy Laboratory (NREL), for use in whole-building performance simulation. How blower door tests work. A basic blower door system includes three components: a calibrated fan, a door panel system, and a pressure measurement device (manometer). Test setup. The blower door fan is temporarily sealed into an exterior doorway using the door panel system. All interior doors are opened, and all exterior doors and windows are closed. HVAC balancing dampers and registers are not to be adjusted, and fireplaces and other operable dampers should be closed. All mechanical exhaust devices in the home, such as bathroom exhaust, kitchen range hood or dryer, should be turned off. Pressure tubing is used to measure the fan pressure, and it is also run to the exterior of the building, so that the indoor/outdoor pressure differential can be measured. The exterior pressure sensor should be shielded from wind and direct sunlight. The test begins by sealing the face of the fan and measuring the baseline indoor/outdoor pressure differential. The average value is to be subtracted from all indoor/outdoor pressure differential measurements during the test. Test procedure. The blower door fan is used to blow air into or out of the building, creating either a positive or negative pressure differential between inside and outside. This pressure difference forces air through all holes and penetrations in the building enclosure. The tighter the building (e.g. fewer holes), the less air is needed from the blower door fan to create a change in building pressure. Typically, only depressurization testing is performed, but both depressurization and pressurization are preferable. 
Different values for blower door metrics are to be expected for pressurizing and depressurizing, due to the building envelope's response to directional airflow. The smallest fan ring that allows the fan to reach the maximum target indoor/outdoor pressure differential should be used. A multi-point test can be performed either manually or using data acquisition and fan control software products. The manual test consists of adjusting the fan to maintain a series of indoor/outdoor pressure differentials and recording the resulting average fan and indoor/outdoor pressures. Alternatively, a single-point test can be performed, where the blower door fan is ramped up to a reference indoor/outdoor pressure differential and the fan pressure is recorded. Often the blower door hardware converts fan pressure measurements directly to fan airflow values. Power law model of airflow. Building leakage is described by a power law equation of flow through an orifice. The orifice flow equation is typically expressed as formula_3 formula_4 = Airflow (m3/s) formula_5 = Air leakage coefficient formula_6 = Pressure differential (Pa) formula_7 = Pressure exponent The C parameter reflects the size of the orifice, the ∆P is the pressure differential across the orifice, and the n parameter represents the characteristic shape of the orifice, with values ranging from 0.5 to 1, representing a perfect orifice and a very long, thin crack, respectively. There are two airflows to be determined in blower door testing: airflow through the fan (Q_Fan) and airflow through the building envelope (Q_Building). formula_8 formula_9 It is assumed in blower door analysis that mass is conserved, resulting in: formula_10 which in turn gives: formula_11 Fan airflow is determined using the C_Fan and n_Fan values that are provided by the blower door manufacturer, and they are used to calculate Q_Fan. The multi-point blower door test procedure results in a series of known values of Q_n,Fan and ∆P_n,Building. Typical ∆P_n,Building values are ±5, 10, 20, 30, 40 and 50 pascals. Ordinary least squares regression analysis is then used to calculate the leakage characteristics of the building envelope: C_Building and n_Building. These leakage characteristics of the building envelope can then be used to calculate how much airflow will be induced through the building envelope for a given pressure difference caused by wind, temperature difference or mechanical forces. 50 Pa can be plugged into the orifice-flow equation, along with the derived building C and n values, to calculate airflow at 50 pascals. This same method can be used to calculate airflow at a variety of pressures, for use in the creation of other blower door metrics. An alternative approach to the multi-point procedure is to only measure fan airflow and building pressure differential at a single test point, such as 50 Pa, and then use an assumed pressure exponent, n_Building, in the analysis and generation of blower door metrics. This method is preferred by some for two main reasons: (1) measuring and recording one data point is easier than recording multiple test points, and (2) the measurements are least reliable at very low building pressure differentials, due both to fan calibration and to wind effects.
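As a minimal sketch of the regression step described above (the pressure and airflow values below are invented for illustration), C_Building and n_Building can be estimated by ordinary least squares on the log-transformed power law and then used to compute the airflow at 50 Pa:

```python
import math

# Hypothetical multi-point test data: building pressure differentials (Pa)
# and the corresponding fan airflows (m^3/s). Illustrative numbers only.
delta_p = [10.0, 20.0, 30.0, 40.0, 50.0]
airflow = [0.30, 0.45, 0.57, 0.67, 0.76]

# Taking logs of Q = C * dP^n gives ln(Q) = ln(C) + n * ln(dP),
# a straight line that can be fitted by ordinary least squares.
x = [math.log(p) for p in delta_p]
y = [math.log(q) for q in airflow]
n_pts = len(x)
x_mean = sum(x) / n_pts
y_mean = sum(y) / n_pts
n_exp = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / \
        sum((xi - x_mean) ** 2 for xi in x)
c_coef = math.exp(y_mean - n_exp * x_mean)

q50 = c_coef * 50.0 ** n_exp  # airflow at 50 Pa from the fitted power law
print(f"C_Building = {c_coef:.4f}, n_Building = {n_exp:.3f}, Q50 = {q50:.3f} m^3/s")
```

In practice, the air density corrections described next would be applied to the measured airflows before performing this fit.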
Air density corrections. In order to increase the accuracy of blower door test results, air density corrections should be applied to all airflow data. This must be done prior to the derivation of building air leakage coefficients (formula_12) and pressure exponents (formula_13). The following methods are used to correct blower door data to standard conditions. For depressurization testing, the following equation should be used: formula_14 formula_15 = Airflow corrected to actual air density formula_16 = Airflow derived using formula_17 and formula_18 formula_19 = Air density inside the building, during testing formula_20 = Air density outside the building, during testing For pressurization testing, the following equation should be used: formula_21 The values formula_22 and formula_23 are referred to as air density correction factors. They are often tabulated in easy-to-use tables in product literature, where a factor can be determined from the outside and inside temperatures. If such tables are not used, the following equations are required to calculate the air densities. formula_19 can be calculated in IP units using the following equation: formula_24 formula_19 = Air density inside the building, during testing formula_25 = Elevation above sea level (ft) formula_26 = Indoor temperature (F) formula_20 can be calculated in IP units using the following equation: formula_27 formula_20 = Air density outside the building, during testing formula_25 = Elevation above sea level (ft) formula_28 = Outdoor temperature (F) In order to translate the airflow values derived using formula_17 and formula_18 from the blower door manufacturer to the actual volumetric airflow through the fan, use the following: formula_29 formula_30 = Actual volumetric airflow through the fan formula_31 = Volumetric airflow calculated using manufacturer's coefficients or software formula_32 = Reference air density (typically 1.204 for kg/m3 or 0.075 for lb/ft3) formula_33 = Actual density of air going through the fan (formula_19 for depressurization and formula_20 for pressurization) Blower door metrics. Depending on how a blower door test is performed, a wide variety of airtightness and building airflow metrics can be derived from the gathered data. Some of the most common metrics and their variations are discussed below. The examples below use the SI pressure unit pascal (Pa). Imperial measurement units are commonly water column inches (WC inch or IWC). The conversion rate is 1 WC inch = 249 Pa. The examples below use the commonly accepted pressure of 50 Pa, which is about 20% of 1 IWC. Airflow at a specified building pressure. This is the first metric that results from a blower door test: the airflow (imperial: cubic feet per minute; SI: liters per second) at a given building-to-outside pressure differential, typically 50 pascals (Q50). This standardized single-point test allows for comparison between homes measured at the same reference pressure. This is a raw number reflecting only the flow of air through the fan. Homes of different sizes and similar envelope quality will have different results in this test. Airflow per unit surface area or floor area. Often, an effort is made to control for building size and layout by normalizing the airflow at a specified building pressure to either the building's floor area or to its total surface area. These values are generated by taking the airflow rate through the fan and dividing by the area. These metrics are most often used to assess construction and building envelope quality, because they normalize the total building leakage to the total amount of area through which that leakage could occur. In other words, they indicate how much leakage occurs per unit area of wall, floor, ceiling, etc.
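Returning to the air density corrections above, the IP-unit density equations can be expressed as a small helper for the correction factors; the elevation and temperatures below are placeholder values chosen only for illustration:

```python
def air_density_ip(elevation_ft: float, temp_f: float) -> float:
    """Air density (lb/ft^3) from elevation and temperature, per the IP-unit equation above."""
    return 0.07517 * (1 - 0.0035666 * elevation_ft / 528) ** 5.2553 * (528 / (temp_f + 460))

# Placeholder test conditions (illustrative only).
elevation = 1000.0   # ft above sea level
t_inside = 70.0      # indoor temperature, F
t_outside = 30.0     # outdoor temperature, F

rho_in = air_density_ip(elevation, t_inside)
rho_out = air_density_ip(elevation, t_outside)

# Correction factors applied to measured airflow:
# depressurization uses rho_in / rho_out, pressurization uses rho_out / rho_in.
depress_factor = rho_in / rho_out
press_factor = rho_out / rho_in
print(f"rho_in = {rho_in:.4f} lb/ft^3, rho_out = {rho_out:.4f} lb/ft^3")
print(f"depressurization factor = {depress_factor:.3f}, pressurization factor = {press_factor:.3f}")
```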
Air changes per hour at a specified building pressure. Another common metric is the air changes per hour at a specified building pressure, again typically at 50 Pa (ACH50). formula_34 formula_35 = Air changes per hour at 50 pascals (1/h) formula_36 = Airflow at 50 pascals (ft3/minute or m3/minute) formula_37 = Building volume (ft3 or m3) This normalizes the airflow at a specified building pressure by the building's volume, which allows for more direct comparison of homes of different sizes and layouts. This metric indicates the rate at which the air in a building is replaced with outside air, and as a result, is an important metric in determinations of indoor air quality. Effective leakage area. In order to take values generated by fan pressurization and to use them in determining natural air exchange, the effective leakage area of a building must be calculated. Each gap and crack in the building envelope contributes a certain amount of area to the total leakage area of the building. The effective leakage area (ELA) assumes that all of the individual leakage areas in the building are combined into a single idealized orifice or hole. This value is typically described to building owners as the area of a window that is open 24 hours a day, 365 days a year, in their building. The ELA will change depending on the reference pressure used to calculate it: 4 Pa is typically used in the US, whereas a reference pressure of 10 Pa is used in Canada. It is calculated as follows: formula_38 formula_39 = Effective Leakage Area (m2 or in2) formula_12 = Building air leakage coefficient formula_40 = Air density (kg/m3 or lb/in3), typically a standard density is used formula_41 = Reference pressure (Pa or lbForce/in2), typically 4 Pa in the US and 10 Pa in Canada formula_13 = Building pressure exponent It is essential that units are carefully conserved in these calculations. C_Building and n_Building should be calculated using SI units, with ρ and ∆P_Reference in kg/m3 and pascals, respectively. Alternatively, C_Building and n_Building can be calculated using Imperial units, with ρ and ∆P_Reference being lb/ft3 and lbForce/in2, respectively. The ELA can be used, along with the specific infiltration rate(s) derived using the LBNL infiltration model, to determine the airflow rate through the building envelope throughout the year. Leakage area per unit floor or surface area. Leakage area estimates can also be normalized for the size of the enclosure being tested. For example, the LEED Green Building Rating System has set an airtightness standard for multi-family dwelling units of 1.25 square inches of leakage area per 100 square feet of enclosure area, to control tobacco smoke between units. This is equal to 0.868 cm2/m2. Normalized leakage. Normalized leakage is a measure of the tightness of a building envelope relative to the building size and number of stories. Normalized leakage is defined in ASHRAE Standard 119 as: formula_42 formula_43 = Normalized leakage formula_44 = Effective Leakage Area (m2 or in2) formula_45 = Building floor area (m2 or in2) formula_46 = Building height (m or in) formula_47 = Reference height
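A minimal sketch tying these metrics together, using a hypothetical fitted power law and invented building dimensions (SI units throughout; none of the numbers come from the text):

```python
import math

# Illustrative inputs only: a fitted power law and assumed building dimensions.
c_building = 0.0284   # air leakage coefficient, m^3/(s*Pa^n)
n_building = 0.65     # pressure exponent (dimensionless)
volume = 400.0        # building volume, m^3
floor_area = 160.0    # floor area, m^2
height = 5.0          # building height, m
rho = 1.204           # standard air density, kg/m^3

# Airflow at 50 Pa and air changes per hour at 50 Pa.
q50 = c_building * 50.0 ** n_building          # m^3/s
ach50 = q50 * 3600.0 / volume                  # 1/h

# Effective leakage area at a 4 Pa reference pressure (US convention).
dp_ref = 4.0
ela = c_building * math.sqrt(rho / 2.0) * dp_ref ** (n_building - 0.5)  # m^2

# Normalized leakage per ASHRAE Standard 119; the 2.5 m reference height used
# here is an assumption, not a value given in the text.
nl = 1000.0 * (ela / floor_area) * (height / 2.5) ** 0.3

print(f"Q50 = {q50:.3f} m^3/s, ACH50 = {ach50:.2f} 1/h")
print(f"ELA(4 Pa) = {ela * 1e4:.1f} cm^2, NL = {nl:.2f}")
```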
Applications. Blower doors can be used in a variety of types of testing. These include (but are not limited to): NFPA enclosure integrity testing. NFPA enclosure integrity testing is a specialized type of enclosure testing that typically measures the airtightness of rooms within buildings that are protected by clean agent fire suppression systems. This test is normally done during the installation and commissioning of the system and is mandatory under NFPA, ISO, EN and FIA standards, which also require that the test be repeated annually if any doubt exists about the airtightness from the previous test. These types of enclosures are typically server rooms containing large amounts of computer and electronic hardware that would be damaged by the more typical water-based sprinkler system. The word "clean" refers to the fact that after the suppression system discharges, there is nothing to be cleaned up; the agent merely disperses into the atmosphere. NFPA-2001 (2015 Edition) is used throughout North America, many Asian countries and the Middle East. A hold time analysis has been required since 1985. The ISO-14520-2015 version or EN-15004 standards are used throughout Europe, while FIA standards are used in the UK. Results from all these standards are very similar. NFPA standards for equipment calibration are about the same as they are for other types of testing, so any modern blower door equipment is sufficiently accurate to perform NFPA enclosure integrity testing. Specialized software or a tedious calculation must be used to arrive at the hold time, which is generally ten minutes. The NFPA standard requires that the blower door operator be trained, but does not specify the nature or source of this training. There is no official NFPA training available for enclosure integrity testing methodology at this time. An NFPA enclosure integrity test result is typically reported in the form of an "agent hold time", which represents the duration for which the room will retain at least 85% of the design concentration in order to suppress a fire and to ensure it does not reignite. This retention time is inversely proportional to the leakage area of the room, which is the major factor. The location of leaks, the height being protected, the presence of continual mixing and the clean agent being used also affect the hold time. The 2008 Edition of NFPA-2001 additionally required a peak pressure evaluation, because excessive pressure during discharge has damaged many enclosures and the requirement was designed to prevent that, but the industry in the USA in particular has been slow to institute this important requirement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "ACH_{natural} = {ACH_{at 50 pascal}\\over20}\\,\\!" }, { "math_id": 1, "text": "ACH_{natural}" }, { "math_id": 2, "text": "ACH_{at 50 pascal}\\,\\!" }, { "math_id": 3, "text": " Q = C{\\Delta}P^n\\,\\!" }, { "math_id": 4, "text": " Q\\,\\!" }, { "math_id": 5, "text": " C\\,\\!" }, { "math_id": 6, "text": " {\\Delta}P\\,\\!" }, { "math_id": 7, "text": " n\\,\\!" }, { "math_id": 8, "text": " Q_{Fan} = C_{Fan}{{\\Delta}P_{Fan}}^{n_{Fan}}\\,\\!" }, { "math_id": 9, "text": " Q_{Building} = C_{Building}{{\\Delta}P_{Building}}^{n_{Building}}\\,\\!" }, { "math_id": 10, "text": " Q_{Fan} = Q_{Building}\\,\\!" }, { "math_id": 11, "text": " C_{Fan}{{\\Delta}P_{Fan}}^{n_{Fan}} = C_{Building}{{\\Delta}P_{Building}}^{n_{Building}}\\,\\!" }, { "math_id": 12, "text": "C_{Building}\\,\\!" }, { "math_id": 13, "text": "n_{Building}\\,\\!" }, { "math_id": 14, "text": " Q_{Corrected} = Q_{Measured}*{\\rho_{In} \\over \\rho_{Out}}\\,\\!" }, { "math_id": 15, "text": "Q_{Corrected}\\,\\!" }, { "math_id": 16, "text": "Q_{Measured}\\,\\!" }, { "math_id": 17, "text": "C_{Fan}\\,\\!" }, { "math_id": 18, "text": "n_{Fan}\\,\\!" }, { "math_id": 19, "text": "\\rho_{In}\\,\\!" }, { "math_id": 20, "text": "\\rho_{Out}\\,\\!" }, { "math_id": 21, "text": " Q_{Corrected} = Q_{Measured}*{\\rho_{Out} \\over \\rho_{In}}\\,\\!" }, { "math_id": 22, "text": "{\\rho_{Out} \\over \\rho_{In}}\\,\\!" }, { "math_id": 23, "text": "{\\rho_{In} \\over \\rho_{Out}}\\,\\!" }, { "math_id": 24, "text": "\\rho_{In} = 0.07517*(1-{0.0035666*E \\over 528})^{5.2553}*({528 \\over T_{In}+460})\\,\\!" }, { "math_id": 25, "text": "E\\,\\!" }, { "math_id": 26, "text": "T_{In}\\,\\!" }, { "math_id": 27, "text": "\\rho_{Out} = 0.07517*(1-{0.0035666*E \\over 528})^{5.2553}*({528 \\over T_{Out}+460})\\,\\!" }, { "math_id": 28, "text": "T_{Out}\\,\\!" }, { "math_id": 29, "text": "Q_{Actual} = Q_{Fan}*\\sqrt{\\rho_{Ref} \\over \\rho_{Actual}}\\,\\!" }, { "math_id": 30, "text": "Q_{Actual}\\,\\!" }, { "math_id": 31, "text": "Q_{Fan}\\,\\!" }, { "math_id": 32, "text": "\\rho_{Ref}\\,\\!" }, { "math_id": 33, "text": "\\rho_{Actual}\\,\\!" }, { "math_id": 34, "text": " ACH_{50} = {Q_{50}*60\\over V_{Building}}\\,\\!" }, { "math_id": 35, "text": " ACH_{50}\\,\\!" }, { "math_id": 36, "text": " Q_{50}\\,\\!" }, { "math_id": 37, "text": " V_{Building}\\,\\!" }, { "math_id": 38, "text": " ELA = C_{Building}*\\sqrt{\\rho \\over 2}*{\\Delta}P_{Ref}^{n_{Building}-0.5}\\,\\!" }, { "math_id": 39, "text": " ELA\\,\\!" }, { "math_id": 40, "text": "\\rho\\,\\!" }, { "math_id": 41, "text": "{\\Delta}P_{Ref}\\,\\!" }, { "math_id": 42, "text": " NL = 1000*({ELA \\over A_{Floor}})*({H \\over H_{Ref}})^{0.3}\\,\\!" }, { "math_id": 43, "text": " NL\\,\\!" }, { "math_id": 44, "text": "ELA\\,\\!" }, { "math_id": 45, "text": "A_{Floor}\\,\\!" }, { "math_id": 46, "text": "H\\,\\!" }, { "math_id": 47, "text": "H_{Ref}\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=606803
606874
Einstein–Cartan theory
Classical theory of gravitation In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922. Overview. Einstein–Cartan theory differs from general relativity in two ways: (1) it is formulated within the framework of Riemann–Cartan geometry, which possesses a locally gauged Lorentz symmetry, while general relativity is formulated within the framework of Riemannian geometry, which does not; (2) an additional set of equations is posed that relates torsion to spin. This difference can be factored into general relativity (Einstein–Hilbert) → general relativity (Palatini) → Einstein–Cartan by first reformulating general relativity onto a Riemann–Cartan geometry, replacing the Einstein–Hilbert action over Riemannian geometry by the Palatini action over Riemann–Cartan geometry; and second, removing the zero torsion constraint from the Palatini action, which results in the additional set of equations for spin and torsion, as well as the addition of extra spin-related terms in the Einstein field equations themselves. The theory of general relativity was originally formulated in the setting of Riemannian geometry by the Einstein–Hilbert action, out of which arise the Einstein field equations. At the time of its original formulation, there was no concept of Riemann–Cartan geometry. Nor was there a sufficient awareness of the concept of gauge symmetry to understand that Riemannian geometries do not possess the requisite structure to embody a locally gauged Lorentz symmetry, such as would be required to be able to express continuity equations and conservation laws for rotational and boost symmetries, or to describe spinors in curved spacetime geometries. The result of adding this infrastructure is a Riemann–Cartan geometry. In particular, to be able to describe spinors requires the inclusion of a spin structure, which suffices to produce such a geometry. The chief difference between a Riemann–Cartan geometry and Riemannian geometry is that in the former, the affine connection is independent of the metric, while in the latter it is derived from the metric as the Levi-Civita connection, the difference between the two being referred to as the contorsion. In particular, the antisymmetric part of the connection (referred to as the torsion) is zero for Levi-Civita connections, as one of the defining conditions for such connections. Because the contorsion can be expressed linearly in terms of the torsion, it is also possible to directly translate the Einstein–Hilbert action into a Riemann–Cartan geometry, the result being the Palatini action (see also Palatini variation). It is derived by rewriting the Einstein–Hilbert action in terms of the affine connection and then separately posing a constraint that forces both the torsion and contorsion to be zero, which thus forces the affine connection to be equal to the Levi-Civita connection. Because it is a direct translation of the action and field equations of general relativity, expressed in terms of the Levi-Civita connection, this may be regarded as the theory of general relativity, itself, transposed into the framework of Riemann–Cartan geometry. Einstein–Cartan theory relaxes this condition and, correspondingly, relaxes general relativity's assumption that the affine connection have a vanishing antisymmetric part (torsion tensor).
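For concreteness, the relationship sketched above can be written, in one common index convention (sign and ordering conventions vary between references), as T^{\lambda}{}_{\mu\nu} = \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu} and \Gamma^{\lambda}{}_{\mu\nu} = \tilde{\Gamma}^{\lambda}{}_{\mu\nu} + K^{\lambda}{}_{\mu\nu}, where \tilde{\Gamma} is the Levi-Civita connection determined by the metric, K is the contorsion and T is the torsion. Since the contorsion is built linearly from the torsion, imposing T = 0 forces K = 0 and recovers the Levi-Civita connection.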
The action used is the same as the Palatini action, except that the constraint on the torsion is removed. This results in two differences from general relativity: (1) the field equations are now expressed in terms of affine connection, rather than the Levi-Civita connection, and so have additional terms in Einstein's field equations involving the contorsion that are not present in the field equations derived from the Palatini formulation; (2) an additional set of equations are now present which couple the torsion to the intrinsic angular momentum (spin) of matter, much in the same way in which the affine connection is coupled to the energy and momentum of matter. In Einstein–Cartan theory, the torsion is now a variable in the principle of stationary action that is coupled to a curved spacetime formulation of spin (the spin tensor). These extra equations express the torsion linearly in terms of the spin tensor associated with the matter source, which entails that the torsion generally be non-zero inside matter. A consequence of the linearity is that outside of matter there is zero torsion, so that the exterior geometry remains the same as what would be described in general relativity. The differences between Einstein–Cartan theory and general relativity (formulated either in terms of the Einstein–Hilbert action on Riemannian geometry or the Palatini action on Riemann–Cartan geometry) rest solely on what happens to the geometry inside matter sources. That is: "torsion does not propagate". Generalizations of the Einstein–Cartan action have been considered which allow for propagating torsion. Because Riemann–Cartan geometries have Lorentz symmetry as a local gauge symmetry, it is possible to formulate the associated conservation laws. In particular, regarding the metric and torsion tensors as independent variables gives the correct generalization of the conservation law for the total (orbital plus intrinsic) angular momentum to the presence of the gravitational field. History. The theory was first proposed by Élie Cartan in 1922 and expounded in the following few years. Albert Einstein became affiliated with the theory in 1928 during his unsuccessful attempt to match torsion to the electromagnetic field tensor as part of a unified field theory. This line of thought led him to the related but different theory of teleparallelism. Dennis Sciama and Tom Kibble independently revisited the theory in the 1960s, and an important review was published in 1976. Einstein–Cartan theory has been historically overshadowed by its torsion-free counterpart and other alternatives like Brans–Dicke theory because torsion seemed to add little predictive benefit at the expense of the tractability of its equations. Since the Einstein–Cartan theory is purely classical, it also does not fully address the issue of quantum gravity. In the Einstein–Cartan theory, the Dirac equation becomes nonlinear. Even though renowned physicists such as Steven Weinberg "never understood what is so important physically about the possibility of torsion in differential geometry", other physicists claim that theories with torsion are valuable. The theory has indirectly influenced loop quantum gravity (and seems also to have influenced twistor theory). Field equations. The Einstein field equations of general relativity can be derived by postulating the Einstein–Hilbert action to be the true action of spacetime and then varying that action with respect to the metric tensor. 
The field equations of Einstein–Cartan theory come from exactly the same approach, except that a general asymmetric affine connection is assumed rather than the symmetric Levi-Civita connection (i.e., spacetime is assumed to have torsion in addition to curvature), and then the metric and torsion are varied independently. Let formula_0 represent the Lagrangian density of matter and formula_1 represent the Lagrangian density of the gravitational field. The Lagrangian density for the gravitational field in the Einstein–Cartan theory is proportional to the Ricci scalar: formula_2 formula_3 where formula_4 is the determinant of the metric tensor, and formula_5 is a physical constant formula_6 involving the gravitational constant and the speed of light. By Hamilton's principle, the variation of the total action formula_7 for the gravitational field and matter vanishes: formula_8 The variation with respect to the metric tensor formula_9 yields the Einstein equations: formula_10 where formula_11 is the Ricci tensor and formula_12 is the "canonical" stress–energy–momentum tensor. The Ricci tensor is no longer symmetric because the connection contains a nonzero torsion tensor; therefore, the right-hand side of the equation cannot be symmetric either, implying that formula_12 must include an asymmetric contribution that can be shown to be related to the spin tensor. This canonical energy–momentum tensor is related to the more familiar "symmetric" energy–momentum tensor by the Belinfante–Rosenfeld procedure. The variation with respect to the torsion tensor formula_13 yields the Cartan spin connection equations formula_14 where formula_15 is the spin tensor. Because the torsion equation is an algebraic constraint rather than a partial differential equation, the torsion field does not propagate as a wave, and vanishes outside of matter. Therefore, in principle the torsion can be algebraically eliminated from the theory in favor of the spin tensor, which generates an effective "spin–spin" nonlinear self-interaction inside matter. Torsion is equal to its source term and can be replaced by a boundary or a topological structure with a throat such as a "wormhole". Avoidance of singularities. Recently, interest in Einstein–Cartan theory has been driven toward cosmological implications, most importantly, the avoidance of a gravitational singularity at the beginning of the universe, such as in the black hole cosmology, static universe, or cyclic model. Singularity theorems which are premised on and formulated within the setting of Riemannian geometry (e.g. Penrose–Hawking singularity theorems) need not hold in Riemann–Cartan geometry. Consequently, Einstein–Cartan theory is able to avoid the general-relativistic problem of the singularity at the Big Bang. The minimal coupling between torsion and Dirac spinors generates an effective nonlinear spin–spin self-interaction, which becomes significant inside fermionic matter at extremely high densities. Such an interaction is conjectured to replace the singular Big Bang with a cusp-like Big Bounce at a minimum but finite scale factor, before which the observable universe was contracting. This scenario also explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic, providing a physical alternative to cosmic inflation. Torsion allows fermions to be spatially extended instead of "pointlike", which helps to avoid the formation of singularities such as black holes and removes the ultraviolet divergence in quantum field theory. 
According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, instead, the collapse reaches a bounce and forms a regular Einstein–Rosen bridge (wormhole) to a new, growing universe on the other side of the event horizon; pair production by the gravitational field after the bounce, when torsion is still strong, generates a finite period of inflation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{L}_\\mathrm{M}" }, { "math_id": 1, "text": "\\mathcal{L}_\\mathrm{G}" }, { "math_id": 2, "text": "\\mathcal{L}_\\mathrm{G}=\\frac{1}{2\\kappa}R \\sqrt{|g|} " }, { "math_id": 3, "text": "S=\\int \\left( \\mathcal{L}_\\mathrm{G} + \\mathcal{L}_\\mathrm{M} \\right) \\, d^4x ," }, { "math_id": 4, "text": "g" }, { "math_id": 5, "text": "\\kappa" }, { "math_id": 6, "text": "8\\pi G/c^4" }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": "\\delta S = 0." }, { "math_id": 9, "text": "g^{ab}" }, { "math_id": 10, "text": " \\frac{\\delta \\mathcal{L}_\\mathrm{G}}{\\delta g^{ab}} -\\frac{1}{2}P_{ab}=0" }, { "math_id": 11, "text": "R_{ab}" }, { "math_id": 12, "text": "P_{ab}" }, { "math_id": 13, "text": "{T^{ab}}_c" }, { "math_id": 14, "text": "\\frac{\\delta \\mathcal{L}_\\mathrm{G}}{\\delta {T^{ab}}_c} -\\frac{1}{2}{\\sigma_{ab}}^c =0" }, { "math_id": 15, "text": "{\\sigma_{ab}}^c" } ]
https://en.wikipedia.org/wiki?curid=606874
606880
Nonfirstorderizability
Concept in formal logic In formal logic, nonfirstorderizability is the inability of a natural-language statement to be adequately captured by a formula of first-order logic. Specifically, a statement is nonfirstorderizable if there is no formula of first-order logic which is true in a model if and only if the statement holds in that model. Nonfirstorderizable statements are sometimes presented as evidence that first-order logic is not adequate to capture the nuances of meaning in natural language. The term was coined by George Boolos in his paper "To Be is to Be a Value of a Variable (or to Be Some Values of Some Variables)". Quine argued that such sentences call for second-order symbolization, which can be interpreted as plural quantification over the same domain as first-order quantifiers use, without postulation of distinct "second-order objects" (properties, sets, etc.). Examples. Geach-Kaplan sentence. A standard example is the "Geach–Kaplan sentence": "Some critics admire only one another." If "Axy" is understood to mean ""x" admires "y"," and the universe of discourse is the set of all critics, then a reasonable translation of the sentence into second order logic is: formula_0 That this formula has no first-order equivalent can be seen by turning it into a formula in the language of arithmetic. Substitute the formula formula_1 for "Axy". The result, formula_2 states that there is a set X with these properties: X contains two numbers that differ by one; some number does not belong to X; and whenever a number belongs to X, so does every number that differs from it by one. A model of a formal theory of arithmetic, such as first-order Peano arithmetic, is called "standard" if it only contains the familiar natural numbers 0, 1, 2, ... as elements. The model is called non-standard otherwise. Therefore, the formula given above is true only in non-standard models, because, in the standard model, the set X must contain all available numbers 0, 1, 2, ... In addition, there is a set X satisfying the formula in every non-standard model. Let us assume that there is a first-order rendering of the above formula called E. If formula_3 were added to the Peano axioms, it would mean that there were no non-standard models of the augmented axioms. However, the usual argument for the existence of non-standard models would still go through, proving that there are non-standard models after all. This is a contradiction, so we can conclude that no such formula E exists in first-order logic.
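As an illustrative sketch (the "admires" relation below is a made-up example, not one discussed in the article), the second-order quantifier over X can be checked on a finite model by brute force over all subsets of the domain:

```python
from itertools import chain, combinations

# A hypothetical finite "admires" relation on critics 0..3.
critics = [0, 1, 2, 3]
admires = {(0, 1), (1, 0), (2, 3)}  # pairs (x, y) meaning "x admires y"

def subsets(domain):
    """All subsets of the domain, as frozensets."""
    return (frozenset(c) for c in chain.from_iterable(
        combinations(domain, r) for r in range(len(domain) + 1)))

def geach_kaplan_holds(domain, rel):
    """Exists X such that: some admiration happens inside X, X is not the whole
    domain, and anyone admired by a member of X is also in X."""
    for x_set in subsets(domain):
        admiration_inside = any(a in x_set and b in x_set for (a, b) in rel)
        proper = any(c not in x_set for c in domain)
        closed = all(b in x_set for (a, b) in rel if a in x_set)
        if admiration_inside and proper and closed:
            return True
    return False

print(geach_kaplan_holds(critics, admires))  # True: X = {0, 1} is a witness here
```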
Finiteness of the domain. There is no formula A in first-order logic with equality which is true of all and only models with finite domains. In other words, there is no first-order formula which can express "there is only a finite number of things". This is implied by the compactness theorem as follows. Suppose there is a formula A which is true in all and only models with finite domains. We can express, for any positive integer n, the sentence "there are at least n elements in the domain". For a given n, call the formula expressing that there are at least n elements Bn. For example, the formula B3 is: formula_4 which expresses that there are at least three distinct elements in the domain. Consider the infinite set of formulae formula_5 Every finite subset of these formulae has a model: given a subset, find the greatest n for which the formula Bn is in the subset. Then a model with a domain containing n elements will satisfy A (because the domain is finite) and all the B formulae in the subset. Applying the compactness theorem, the entire infinite set must also have a model. Because of what we assumed about A, the model must be finite. However, this model cannot be finite, because if the model has only m elements, it does not satisfy the formula B(m+1). This contradiction shows that there can be no formula A with the property we assumed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\exists X ( \\exists x,y (Xx \\land Xy \\land Axy) \\land \\exists x \\neg Xx \\land \\forall x\\, \\forall y (Xx \\land Axy \\rightarrow Xy))" }, { "math_id": 1, "text": " ( y = x + 1 \\lor x = y + 1 ) " }, { "math_id": 2, "text": "\\exists X ( \\exists x,y (Xx \\land Xy \\land (y = x + 1 \\lor x = y + 1)) \\land \\exists x \\neg Xx \\land \\forall x\\, \\forall y (Xx \\land (y = x + 1 \\lor x = y + 1) \\rightarrow Xy))" }, { "math_id": 3, "text": "\\neg E" }, { "math_id": 4, "text": "\\exists x \\exists y \\exists z (x \\neq y \\wedge x \\neq z \\wedge y \\neq z)" }, { "math_id": 5, "text": "A, B_2, B_3, B_4, \\ldots" } ]
https://en.wikipedia.org/wiki?curid=606880
6069126
Time perception
Perception of events' position in time In psychology and neuroscience, time perception or chronoception is the subjective experience, or sense, of time, which is measured by someone's own perception of the duration of the indefinite and unfolding of events. The perceived time interval between two successive events is referred to as perceived duration. Though directly experiencing or understanding another person's perception of time is not possible, perception can be objectively studied and inferred through a number of scientific experiments. Some temporal illusions help to expose the underlying neural mechanisms of time perception. The ancient Greeks recognized the difference between chronological time (chronos) and subjective time (kairos). Pioneering work on time perception, emphasizing species-specific differences, was conducted by Karl Ernst von Baer. Theories. Time perception is typically categorized in three distinct ranges, because different ranges of duration are processed in different areas of the brain: There are many theories and computational models for time perception mechanisms in the brain. William J. Friedman (1993) contrasted two theories of the sense of time: Another hypothesis involves the brain's subconscious tallying of "pulses" during a specific interval, forming a biological stopwatch. This theory proposes that the brain can run multiple biological stopwatches independently depending on the type of tasks being tracked. The source and nature of the pulses is unclear. They are as yet a metaphor whose correspondence to brain anatomy or physiology is unknown. Philosophical perspectives. The "specious present" is the time duration wherein a state of consciousness is experienced as being in the present. The term was first introduced by the philosopher E. R. Clay in 1882 (E. Robert Kelly), and was further developed by William James. James defined the specious present to be "the prototype of all conceived times... the short duration of which we are immediately and incessantly sensible". In "Scientific Thought" (1930), C. D. Broad further elaborated on the concept of the specious present and considered that the specious present may be considered as the temporal equivalent of a sensory datum. A version of the concept was used by Edmund Husserl in his works and discussed further by Francisco Varela based on the writings of Husserl, Heidegger, and Merleau-Ponty. Although the perception of time is not associated with a specific sensory system, psychologists and neuroscientists suggest that humans do have a system, or several complementary systems, governing the perception of time. Time perception is handled by a highly distributed system involving the cerebral cortex, cerebellum and basal ganglia. One particular component, the suprachiasmatic nucleus, is responsible for the circadian (or daily) rhythm, while other cell clusters appear to be capable of shorter (ultradian) timekeeping. There is some evidence that very short (millisecond) durations are processed by dedicated neurons in early sensory parts of the brain. Warren Meck devised a physiological model for measuring the passage of time. He found the representation of time to be generated by the oscillatory activity of cells in the upper cortex. The frequency of these cells' activity is detected by cells in the dorsal striatum at the base of the forebrain. His model separated explicit timing and implicit timing. Explicit timing is used in estimating the duration of a stimulus. 
Implicit timing is used to gauge the amount of time separating one from an impending event that is expected to occur in the near future. These two estimations of time do not involve the same neuroanatomical areas. For example, implicit timing often occurs to achieve a motor task, involving the cerebellum, left parietal cortex, and left premotor cortex. Explicit timing often involves the supplementary motor area and the right prefrontal cortex. Two visual stimuli, inside someone's field of view, can be successfully regarded as simultaneous up to five milliseconds. In the popular essay "Brain Time", David Eagleman explains that different types of sensory information (auditory, tactile, visual, etc.) are processed at different speeds by different neural architectures. The brain must learn how to overcome these speed disparities if it is to create a temporally unified representation of the external world: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;if the visual brain wants to get events correct timewise, it may have only one choice: wait for the slowest information to arrive. To accomplish this, it must wait about a tenth of a second. In the early days of television broadcasting, engineers worried about the problem of keeping audio and video signals synchronized. Then they accidentally discovered that they had around a hundred milliseconds of slop: As long as the signals arrived within this window, viewers' brains would automatically resynchronize the signals. He goes on to say, "This brief waiting period allows the visual system to discount the various delays imposed by the early stages; however, it has the disadvantage of pushing perception into the past. There is a distinct survival advantage to operating as close to the present as possible; an animal does not want to live too far in the past. Therefore, the tenth-of-a-second window may be the smallest delay that allows higher areas of the brain to account for the delays created in the first stages of the system while still operating near the border of the present. This window of delay means that awareness is retroactive, incorporating data from a window of time after an event and delivering a delayed interpretation of what happened." Experiments have shown that rats can successfully estimate a time interval of approximately 40 seconds, despite having their cortex entirely removed. This suggests that time estimation may be a low-level process. Ecological perspectives. In recent history, ecologists and psychologists have been interested in whether and how time is perceived by non-human animals, as well as which functional purposes are served by the ability to perceive time. Studies have demonstrated that many species of animals, including both vertebrates and invertebrates, have cognitive abilities that allow them to estimate and compare time intervals and durations in a similar way to humans. There is empirical evidence that metabolic rate has an impact on animals' ability to perceive time. In general, it is true within and across taxa that animals of smaller size (such as flies), which have a fast metabolic rate, experience time more slowly than animals of larger size, which have a slow metabolic rate. Researchers suppose that this could be the reason why small-bodied animals are generally better at perceiving time on a small scale, and why they are more agile than larger animals. Time perception in vertebrates. Examples in fish. 
In a lab experiment, goldfish were conditioned to receive a light stimulus followed shortly by an aversive electric shock, with a constant time interval between the two stimuli. Test subjects showed an increase in general activity around the time of the electric shock. This response persisted in further trials in which the light stimulus was kept but the electric shock was removed. This suggests that goldfish are able to perceive time intervals and to initiate an avoidance response at the time when they expect the distressing stimulus to happen. In two separate studies, golden shiners and dwarf inangas demonstrated the ability to associate the availability of food sources to specific locations and times of day, called time-place learning. In contrast, when tested for time-place learning based on predation risk, inangas were unable to associate spatiotemporal patterns to the presence or absence of predators. In June 2022, researchers reported in "Physical Review Letters" that salamanders were demonstrating counter-intuitive responses to the arrow of time in how their eyes perceived different stimuli. Examples in birds. When presented with the choice between obtaining food at regular intervals (with a fixed delay between feedings) or at stochastic intervals (with a variable delay between feedings), starlings can discriminate between the two types of intervals and consistently prefer getting food at variable intervals. This is true whether the total amount of food is the same for both options or if the total amount of food is unpredictable in the variable option. This suggests that starlings have an inclination for risk-prone behavior. Pigeons are able to discriminate between different times of day and show time-place learning. After training, lab subjects were successfully able to peck specific keys at different times of day (morning or afternoon) in exchange for food, even after their sleep/wake cycle was artificially shifted. This suggests that to discriminate between different times of day, pigeons can use an internal timer (or circadian timer) that is independent of external cues. However, a more recent study on time-place learning in pigeons suggests that for a similar task, test subjects will switch to a non-circadian timing mechanism when possible to save energy resources. Experimental tests revealed that pigeons are also able to discriminate between cues of various durations (on the order of seconds), but that they are less accurate when timing auditory cues than when timing visual cues. Examples in mammals. A study on privately owned dogs revealed that dogs are able to perceive durations ranging from minutes to several hours differently. Dogs reacted with increasing intensity to the return of their owners when they were left alone for longer durations, regardless of the owners' behavior. After being trained with food reinforcement, female wild boars are able to correctly estimate time intervals of days by asking for food at the end of each interval, but they are unable to accurately estimate time intervals of minutes with the same training method. When trained with positive reinforcement, rats can learn to respond to a signal of a certain duration, but not to signals of shorter or longer durations, which demonstrates that they can discriminate between different durations. Rats have demonstrated time-place learning, and can also learn to infer correct timing for a specific task by following an order of events, suggesting that they might be able to use an ordinal timing mechanism. 
Like pigeons, rats are thought to have the ability to use a circadian timing mechanism for discriminating time of day. Time perception in invertebrates. When returning to the hive with nectar, forager honey bees need to know the current ratio of nectar-collecting to nectar-processing rates in the colony. To do so, they estimate the time it takes them to find a food-storer bee, which will unload the forage and store it. The longer it takes them to find one, the busier the food-storer bees are, and therefore the higher the nectar-collecting rate of the colony. Forager bees also assess the quality of nectar by comparing the length of time it takes to unload the forage: a longer unloading time indicates higher quality nectar. They compare their own unloading time to the unloading time of other foragers present in the hive, and adjust their recruiting behavior accordingly. For instance, honey bees reduce the duration of their waggle dance if they judge their own yield to be inferior. Scientists have demonstrated that anesthesia disrupts the circadian clock and impairs the time perception of honey bees, as observed in humans. Experiments revealed that a six-hour-long general anesthesia significantly delayed the start of the foraging behaviour of honeybees if induced during daytime, but not if induced during nighttime. Bumble bees can be successfully trained to respond to a stimulus after a certain time interval has elapsed (usually several seconds after the start signal). Studies have shown that they can also learn to simultaneously time multiple interval durations. In a single study, colonies from three species of ants from the genus "Myrmica" were trained to associate feeding sessions with different times. The trainings lasted several days, where each day the feeding time was delayed by 20 minutes compared to the previous day. In all three species, at the end of the training, most individuals were present at the feeding spot at the correct expected times, suggesting that ants are able to estimate the time running, keep in memory the expected feeding time and to act anticipatively. Types of temporal illusions. A temporal illusion is a distortion in the perception of time. For example: Kappa effect. The "Kappa effect" or "perceptual time dilation" is a form of temporal illusion verifiable by experiment. The temporal duration between a sequence of consecutive stimuli is thought to be relatively longer or shorter than its actual elapsed time, due to the spatial/auditory/tactile separation between each consecutive stimuli. The kappa effect can be displayed when considering a journey made in two parts that each take an equal amount of time. When mentally comparing these two sub-journeys, the part that covers more "distance" may appear to take longer than the part covering less distance, even though they take an equal amount of time. Eye movements and chronostasis. The perception of space and time undergoes distortions during rapid saccadic eye movements. "Chronostasis" is a type of temporal illusion in which the first impression following the introduction of a new event or task demand to the brain appears to be extended in time. For example, chronostasis temporarily occurs when fixating on a target stimulus, immediately following a saccade (e.g., quick eye movement). This elicits an overestimation in the temporal duration for which that target stimulus (i.e., postsaccadic stimulus) was perceived. 
This effect can extend apparent durations by up to 500 ms and is consistent with the idea that the visual system models events prior to perception. The most well-known version of this illusion is known as the stopped-clock illusion, wherein a subject's first impression of the second-hand movement of an analog clock, subsequent to one's directed attention (i.e., saccade) to the clock, is the perception of a slower-than-normal second-hand movement rate (the second-hand of the clock may seemingly temporarily freeze in place after initially looking at it). The occurrence of chronostasis extends beyond the visual domain into the auditory and tactile domains. In the auditory domain, chronostasis and duration overestimation occur when observing auditory stimuli. One common example is a frequent occurrence when making telephone calls. If, while listening to the phone's dial tone, research subjects move the phone from one ear to the other, the length of time between rings appears longer. In the tactile domain, chronostasis has persisted in research subjects as they reach for and grasp objects. After grasping a new object, subjects overestimate the time in which their hand has been in contact with this object. Flash-lag effect. In an experiment, participants were told to stare at an "x" symbol on a computer screen whereby a moving blue doughnut-like ring repeatedly circled the fixed "x" point. Occasionally, the ring would display a white flash for a split second that physically overlapped the ring's interior. However, when asked what was perceived, participants responded that they saw the white flash lagging behind the center of the moving ring. In other words, despite the reality that the two retinal images were actually spatially aligned, the flashed object was usually observed to trail a continuously moving object in space — a phenomenon referred to as the "flash-lag effect". The first proposed explanation, called the "motion extrapolation" hypothesis, is that the visual system extrapolates the position of moving objects but not flashing objects when accounting for neural delays (i.e., the lag time between the retinal image and the observer's perception of the flashing object). The second proposed explanation by David Eagleman and Sejnowski, called the "latency difference" hypothesis, is that the visual system processes moving objects at a faster rate than flashed objects. In the attempt to disprove the first hypothesis, David Eagleman conducted an experiment in which the moving ring suddenly reverses direction to spin in the other way as the flashed object briefly appears. If the first hypothesis were correct, we would expect that, immediately following reversal, the moving object would be observed as lagging behind the flashed object. However, the experiment revealed the opposite — immediately following reversal, the flashed object was observed as lagging behind the moving object. This experimental result supports the "latency difference" hypothesis. A recent study tries to reconcile these different approaches by treating perception as an inference mechanism aiming at describing what is happening at the present time. Oddball effect. Humans typically overestimate the perceived duration of the initial and final event in a stream of identical events. This "oddball effect" may serve an evolutionarily adapted "alerting" function and is consistent with reports of time slowing down in threatening situations. 
The effect seems to be strongest for images that are expanding in size on the retina, i.e., that are "looming" or approaching the viewer, and the effect can be eradicated for oddballs that are contracting or perceived to be receding from the viewer. The effect is also reduced or reversed with a static oddball presented among a stream of expanding stimuli. Initial studies suggested that this oddball-induced "subjective time dilation" expanded the perceived duration of oddball stimuli by 30–50% but subsequent research has reported more modest expansion of around 10% or less. The direction of the effect, whether the viewer perceives an increase or a decrease in duration, also seems to be dependent upon the stimulus used. Reversal of temporal order judgment. Numerous experimental findings suggest that temporal order judgments of actions preceding effects can be reversed under special circumstances. Experiments have shown that sensory simultaneity judgments can be manipulated by repeated exposure to non-simultaneous stimuli. In an experiment conducted by David Eagleman, a temporal order judgment reversal was induced in subjects by exposing them to delayed motor consequences. In the experiment, subjects played various forms of video games. Unknown to the subjects, the experimenters introduced a fixed delay between the mouse movements and the subsequent sensory feedback. For example, a subject may not see a movement register on the screen until 150 milliseconds after they had moved the mouse. Participants playing the game quickly adapted to the delay and felt as though there was less delay between their mouse movement and the sensory feedback. Shortly after the experimenters removed the delay, the subjects commonly felt as though the effect on the screen happened just before they commanded it. This work addresses how the perceived timing of effects is modulated by expectations, and the extent to which such predictions are quickly modifiable. In an experiment conducted by Haggard and colleagues in 2002, participants pressed a button that triggered a flash of light at a distance, after a slight delay of 100 milliseconds. By repeatedly engaging in this act, participants had adapted to the delay (i.e., they experienced a gradual shortening in the perceived time interval between pressing the button and seeing the flash of light). The experimenters then showed the flash of light instantly after the button was pressed. In response, subjects often thought that the flash (the effect) had occurred before the button was pressed (the cause). Additionally, when the experimenters slightly reduced the delay, and shortened the spatial distance between the button and the flash of light, participants had often claimed again to have experienced the effect before the cause. Several experiments also suggest that temporal order judgment of a pair of tactile stimuli delivered in rapid succession, one to each hand, is noticeably impaired (i.e., misreported) by crossing the hands over the midline. However, congenitally blind subjects showed no trace of temporal order judgment reversal after crossing the arms. These results suggest that tactile signals taken in by the congenitally blind are ordered in time without being referred to a visuospatial representation. Unlike the congenitally blind subjects, the temporal order judgments of the late-onset blind subjects were impaired when crossing the arms to a similar extent as non-blind subjects. 
These results suggest that the associations between tactile signals and visuospatial representation is maintained once it is accomplished during infancy. Some research studies have also found that the subjects showed reduced deficit in tactile temporal order judgments when the arms were crossed behind their back than when they were crossed in front. Physiological associations. Tachypsychia. Tachypsychia is a neurological condition that alters the perception of time, usually induced by physical exertion, drug use, or a traumatic event. For someone affected by tachypsychia, time perceived by the individual either lengthens, making events appear to slow down, or contracts, with objects appearing as moving in a speeding blur. Effects of emotional states. Awe. Research has suggested the feeling of awe has the ability to expand one's perceptions of time availability. Awe can be characterized as an experience of immense perceptual vastness that coincides with an increase in focus. Consequently, it is conceivable that one's temporal perception would slow down when experiencing awe. The perception of time can differ as people choose between savoring moments and deferring gratification. Fear. Possibly related to the oddball effect, research suggests that time seems to slow down for a person during dangerous events (such as a car accident, a robbery, or when a person perceives a potential predator or mate), or when a person skydives or bungee jumps, where they are capable of complex thoughts in what would normally be the blink of an eye (See Fight-or-flight response). This reported slowing in temporal perception may have been evolutionarily advantageous because it may have enhanced one's ability to intelligibly make quick decisions in moments that were of critical importance to our survival. However, even though observers commonly report that time seems to have moved in slow motion during these events, it is unclear whether this is a function of increased time resolution during the event, or instead an illusion created by the remembering of an emotionally salient event. A strong time dilation effect has been reported for perception of objects that were looming, but not of those retreating, from the viewer, suggesting that the expanding discs — which mimic an approaching object — elicit self-referential processes which act to signal the presence of a possible danger. Anxious people, or those in great fear, experience greater "time dilation" in response to the same threat stimuli due to higher levels of epinephrine, which increases brain activity (an adrenaline rush). In such circumstances, an illusion of time dilation could assist an effective escape. When exposed to a threat, three-year-old children were observed to exhibit a similar tendency to overestimate elapsed time. Research suggests that the effect appears only at the point of retrospective assessment, rather than occurring simultaneously with events as they happened. Perceptual abilities were tested during a frightening experience — a free fall — by measuring people's sensitivity to flickering stimuli. The results showed that the subjects' temporal resolution was not improved as the frightening event was occurring. Events appear to have taken longer only in retrospect, possibly because memories were being more densely packed during the frightening situation. Other researchers suggest that additional variables could lead to a different state of consciousness in which altered time perception does occur during an event. 
Research does demonstrate that visual sensory processing increases in scenarios involving action preparation. Participants demonstrated a higher detection rate of rapidly presented symbols when preparing to move, as compared to a control without movement. People shown extracts from films known to induce fear often overestimated the elapsed time of a subsequently presented visual stimulus, whereas people shown emotionally neutral clips (weather forecasts and stock market updates) or those known to evoke feelings of sadness showed no difference. It is argued that fear prompts a state of arousal in the amygdala, which increases the rate of a hypothesized "internal clock". This could be the result of an evolved defensive mechanism triggered by a threatening situation. Individuals experiencing sudden or surprising events, real or imagined (e.g., witnessing a crime, or believing one is seeing a ghost), may overestimate the duration of the event. Changes with age. Psychologists have found that the subjective perception of the passing of time tends to speed up with increasing age in humans. This often causes people to increasingly underestimate a given interval of time as they age. This fact can likely be attributed to a variety of age-related changes in the aging brain, such as the lowering in dopaminergic levels with older age; however, the details are still being debated. Very young children will first experience the passing of time when they can subjectively perceive and reflect on the unfolding of a collection of events. A child's awareness of time develops during childhood, when the child's attention and short-term memory capacities form — this developmental process is thought to be dependent on the slow maturation of the prefrontal cortex and hippocampus. The common explanation is that most external and internal experiences are new for young children but repetitive for adults. Children have to be extremely engaged (i.e. dedicate many neural resources or significant brain power) in the present moment because they must constantly reconfigure their mental models of the world to assimilate it and manage behaviour properly. Adults, however, may rarely need to step outside mental habits and external routines. When an adult frequently experiences the same stimuli, such stimuli may seem "invisible" as a result of having already been sufficiently mapped by the brain. This phenomenon is known as neural adaptation. Consequently, the subjective perception is often that time passes by at a faster rate with age. Proportional to the real time. Let S be subjective time, R be real time, and define both to be zero at birth. One model proposes that the passage of subjective time relative to actual time is inversely proportional to real time: formula_0 When solved, formula_1. One day would be approximately 1/4,000 of the life of an 11-year-old, but approximately 1/20,000 of the life of a 55-year-old. This helps to explain why a random, ordinary day may therefore appear longer for a young child than an adult. So a year would be experienced by a 55-year-old as passing approximately five times more quickly than a year experienced by an 11-year-old. If long-term time perception is based solely on the proportionality of a person's age, then the following four periods in life would appear to be quantitatively equal: ages 5–10 (1x), ages 10–20 (2x), ages 20–40 (4x), age 40–80 (8x), as the end age is twice the start age. However, this does not work for ages 0–10, which corresponds to ages 10–∞. 
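A small numerical sketch in Python (values chosen to match the ages used above; this is an illustration, not part of the original text) makes the logarithmic model concrete: a fixed real interval shrinks as a fraction of life already lived, and any age span with the same end-to-start ratio has the same subjective length.

```python
import math

def subjective_span(r1, r2, k=1.0):
    """Subjective duration of the real-age interval [r1, r2] under
    dS/dR = K/R, i.e. S2 - S1 = K * ln(r2 / r1)."""
    return k * math.log(r2 / r1)

# One day as a fraction of life lived so far (the 1/4,000 vs 1/20,000 figures).
for age_years in (11, 55):
    days_lived = age_years * 365.25
    print(f"age {age_years}: one day is about 1/{days_lived:,.0f} of life so far")

# Rate of subjective time: dS/dR is proportional to 1/R, so the apparent
# speed-up of a year at 55 relative to 11 is simply 55/11 = 5.
print("apparent speed-up, age 55 vs age 11:", 55 / 11)

# The spans 5-10, 10-20, 20-40 and 40-80 all have end/start ratio 2,
# so the model assigns them equal subjective length.
for r1, r2 in [(5, 10), (10, 20), (20, 40), (40, 80)]:
    print(f"subjective length of ages {r1}-{r2}: {subjective_span(r1, r2):.4f}")
```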
Proportional to the subjective time. Lemlich posits that the passage of subjective time relative to actual time is inversely proportional to total subjective time, rather than the total real time: formula_2 When mathematically solved, formula_3 It avoids the issue of infinite subjective time passing from real age 0 to 1 year, as the asymptote can be integrated in an improper integral. Using the initial conditions S = 0 when R = 0 and K > 0, formula_4 formula_5 This means that time appears to pass in proportion to the square root of the perceiver's real age, rather than directly proportional. Under this model, a 55-year-old would subjectively experience time passing √5 ≈ 2.2 times more quickly than an 11-year-old, rather than five times under the previous model. This means the following periods in life would appear to be quantitatively equal: ages 0–1, 1–4, 4–9, 9–16, 16–25, 25–36, 36–49, 49–64, 64–81, 81–100, 100–121. In a study, participants consistently provided answers that fit this model when asked about time perception at 1/4 of their age, but were less consistent for 1/2 of their age. Their answers suggest that this model is more accurate than the previous one. A consequence of this model is that the fraction of subjective life remaining is always less than the fraction of real life remaining, but it is always more than one half of real life remaining. This can be seen for formula_6 and formula_7: formula_8 Effects of drugs on time perception. Stimulants such as thyroxine, caffeine, and amphetamines lead to overestimation of time intervals by both humans and rats, while depressants and anesthetics such as barbiturates and nitrous oxide can have the opposite effect and lead to underestimation of time intervals. The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this. Research on stimulant-dependent individuals (SDI) has shown several abnormal time-processing characteristics, including the need for larger time differences for effective duration discrimination and overestimation of the duration of a relatively long time interval. Altered time processing and perception in SDI could explain the difficulty SDI have with delaying gratification. Another study examined dose-dependent effects on time perception in methamphetamine-dependent individuals during short-term abstinence. Results show that motor timing, but not perceptual timing, was altered in these individuals, an alteration that persisted for at least three months of abstinence. Dose-dependent effects on time perception were only observed when short-term abstinent meth abusers processed long time intervals. The study concluded that time perception alteration in meth dependents is task-specific and dose-dependent. The effect of cannabis on time perception has been studied with inconclusive results, mainly due to methodological variations and the paucity of research. Even though 70% of time estimation studies report over-estimation, the findings of time production and time reproduction studies remain inconclusive. Studies show consistently throughout the literature that most cannabis users self-report the experience of a slowed perception of time. In the laboratory, researchers have confirmed the effect of cannabis on the perception of time in both humans and animals. Using PET scans, it was observed that participants who showed a decrease in cerebellar blood flow (CBF) also had a significant alteration in time sense.
The relationship between decreased CBF and impaired time sense is of interest, as the cerebellum is linked to an internal timing system. Effects of body temperature. The chemical clock hypothesis implies a causal link between body temperature and the perception of time. Past work shows that increasing body temperature tends to make individuals experience a dilated perception of time, in which they perceive durations as shorter than they actually were, ultimately leading them to underestimate time durations. Decreasing body temperature has the opposite effect – causing participants to experience a condensed perception of time, leading them to over-estimate time duration – although observations of the latter type were rare. Research establishes a parametric effect of body temperature on time perception, with higher temperatures generally producing faster subjective time and vice versa. This is especially seen to be true under changes in arousal levels and stressful events. Applications. Since subjective time is measurable, through information such as heartbeats or actions taken within a time period, there are analytical applications for time perception. Social networks. Time perception can be used as a tool in social networks to define the subjective experiences of each node within a system. This method can be used to study characters' psychology in dramas, both film and literature, analyzed by social networks. Each character's subjective time may be calculated, with methods as simple as word counting, and compared to the real time of the story to shed light on their internal states. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\frac{dS}{dR} = \\frac{K}R" }, { "math_id": 1, "text": "S_2 - S_1 = K(\\log{R_2} - \\log{R_1}) = K \\log{\\left({R_2}/{R_1}\\right)}" }, { "math_id": 2, "text": "\\frac{dS}{dR} = \\frac{K}S" }, { "math_id": 3, "text": "S^2 = 2KR + C" }, { "math_id": 4, "text": "S = \\sqrt{2KR}" }, { "math_id": 5, "text": "\\frac{dS}{dR} = \\sqrt{\\frac{K}{2R}}" }, { "math_id": 6, "text": "0 < S < S_f" }, { "math_id": 7, "text": "0 < R < R_f" }, { "math_id": 8, "text": "\\frac12\\left(1 - \\frac{R}{R_f}\\right) < 1 - \\frac{S}{S_f} < 1 - \\frac{R}{R_f}" } ]
https://en.wikipedia.org/wiki?curid=6069126
60692
Prime element
Analogue of a prime number in a commutative ring In mathematics, specifically in abstract algebra, a prime element of a commutative ring is an object satisfying certain properties similar to the prime numbers in the integers and to irreducible polynomials. Care should be taken to distinguish prime elements from irreducible elements, a concept that is the same in UFDs but not the same in general. Definition. An element p of a commutative ring R is said to be prime if it is not the zero element or a unit and whenever p divides ab for some a and b in R, then p divides a or p divides b. With this definition, Euclid's lemma is the assertion that prime numbers are prime elements in the ring of integers. Equivalently, an element p is prime if, and only if, the principal ideal ("p") generated by p is a nonzero prime ideal. (Note that in an integral domain, the ideal (0) is a prime ideal, but 0 is an exception in the definition of 'prime element'.) Interest in prime elements comes from the fundamental theorem of arithmetic, which asserts that each nonzero integer can be written in essentially only one way as 1 or −1 multiplied by a product of positive prime numbers. This led to the study of unique factorization domains, which generalize what was just illustrated in the integers. Being prime is relative to which ring an element is considered to be in; for example, 2 is a prime element in Z but it is not in Z["i"], the ring of Gaussian integers, since 2 = (1 + "i")(1 − "i") and 2 does not divide any factor on the right. Connection with prime ideals. An ideal "I" in the ring "R" (with unity) is prime if the factor ring "R"/"I" is an integral domain. In an integral domain, a nonzero principal ideal is prime if and only if it is generated by a prime element. Irreducible elements. Prime elements should not be confused with irreducible elements. In an integral domain, every prime is irreducible but the converse is not true in general. However, in unique factorization domains, or more generally in GCD domains, primes and irreducibles are the same. Examples. The following are examples of prime elements in rings: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
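The Gaussian-integer example above can be checked mechanically. The following Python sketch (standard library only, written for this illustration) tests divisibility in Z["i"] exactly: x divides y precisely when y·conj(x)/N(x) has integer real and imaginary parts. It confirms that 2 = (1 + "i")(1 − "i") while 2 divides neither factor, so 2 is prime in Z but not a prime element of Z["i"].

```python
def divides_gaussian(x, y):
    """True if the Gaussian integer x = (a, b) ~ a + b*i divides y = (c, d).
    Uses y / x = y * conj(x) / N(x), which must have integer components."""
    a, b = x
    c, d = y
    n = a * a + b * b                      # norm N(x)
    re = c * a + d * b                     # real part of y * conj(x)
    im = d * a - c * b                     # imaginary part of y * conj(x)
    return n != 0 and re % n == 0 and im % n == 0

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

one_plus_i, one_minus_i, two = (1, 1), (1, -1), (2, 0)

print(mul(one_plus_i, one_minus_i))        # (2, 0): 2 factors as (1+i)(1-i)
print(divides_gaussian(two, one_plus_i))   # False: 2 does not divide 1+i
print(divides_gaussian(two, one_minus_i))  # False: 2 does not divide 1-i
```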
[ { "math_id": 0, "text": "\\mathbf Z[\\sqrt{-5}]," }, { "math_id": 1, "text": "9=(2+\\sqrt{-5})(2-\\sqrt{-5})" } ]
https://en.wikipedia.org/wiki?curid=60692
6069215
Reversible-jump Markov chain Monte Carlo
Simulation method in statistics In computational statistics, reversible-jump Markov chain Monte Carlo is an extension to standard Markov chain Monte Carlo (MCMC) methodology, introduced by Peter Green, which allows simulation (the creation of samples) of the posterior distribution on spaces of varying dimensions. Thus, the simulation is possible even if the number of parameters in the model is not known. The "jump" refers to the switching from one parameter space to another during the running of the chain. RJMCMC is useful for comparing models of different dimension to see which one fits the data best. It is also useful for predictions of new data points: because we do not need to choose and fix a model, RJMCMC can directly predict the new values for all the models at the same time. Models that suit the data best will be chosen more frequently than the poorer ones. Details on the RJMCMC process. Let formula_0 be a model indicator and formula_1 the parameter space whose number of dimensions formula_2 depends on the model formula_3. The set of candidate models need not be finite. The stationary distribution is the joint posterior distribution of formula_4 that takes the values formula_5. The proposal formula_6 can be constructed with a mapping formula_7 of formula_8 and formula_9, where formula_9 is drawn from a random component formula_10 with density formula_11 on formula_12. The move to state formula_13 can thus be formulated as formula_14 The function formula_15 must be "one to one" and differentiable, and have a non-zero support: formula_16 so that there exists an inverse function formula_17 that is differentiable. Therefore, formula_18 and formula_19 must be of equal dimension, which is the case if the dimension criterion formula_20 is met, where formula_21 is the dimension of formula_9. This is known as "dimension matching". If formula_22 then the dimensional matching condition can be reduced to formula_23 with formula_24 The acceptance probability will be given by formula_25 where formula_26 denotes the absolute value and formula_27 is the joint posterior probability formula_28 where formula_29 is the normalising constant. Software packages. There is an experimental RJ-MCMC tool available for the open-source BUGS package. The Gen probabilistic programming system automates the acceptance probability computation for user-defined reversible jump MCMC kernels as part of its Involution MCMC feature.
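The acceptance probability above simplifies greatly in small cases. Below is a minimal, self-contained Python sketch (all model choices and numbers are illustrative, not from this article) of a reversible jump between a zero-parameter model, y ~ N(0, 1), and a one-parameter model, y ~ N(θ, 1) with prior θ ~ N(0, τ²), for a single observation. Because the dimension-matching variable u is drawn from the parameter's prior and mapped by the identity (so the Jacobian is 1), the jump acceptance probability collapses to a likelihood ratio, and the chain's visit frequency to the larger model can be compared with the exactly computable posterior model probability.

```python
import math
import random

random.seed(0)

# Toy reversible-jump setup: one observation y, two competing models.
#   Model 1 (0 parameters): y ~ N(0, 1)
#   Model 2 (1 parameter ): y ~ N(theta, 1), prior theta ~ N(0, tau^2)
# Both models have prior probability 1/2.  All values are illustrative.
y, tau = 2.0, 1.0

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def run(n_iter=200_000):
    k, theta = 1, None          # start in model 1
    visits2 = 0
    for _ in range(n_iter):
        if random.random() < 0.5:                      # propose a dimension jump
            if k == 1:
                # Birth: draw u from the prior and set theta = u (identity map,
                # |Jacobian| = 1).  With q equal to the prior and equal model
                # priors, the acceptance ratio reduces to the likelihood ratio.
                u = random.gauss(0.0, tau)
                if random.random() < min(1.0, norm_pdf(y, u, 1.0) / norm_pdf(y, 0.0, 1.0)):
                    k, theta = 2, u
            else:
                # Death: the reverse move, with the reciprocal acceptance ratio.
                if random.random() < min(1.0, norm_pdf(y, 0.0, 1.0) / norm_pdf(y, theta, 1.0)):
                    k, theta = 1, None
        elif k == 2:                                   # ordinary within-model update
            prop = theta + random.gauss(0.0, 0.5)
            ratio = (norm_pdf(y, prop, 1.0) * norm_pdf(prop, 0.0, tau)) / (
                     norm_pdf(y, theta, 1.0) * norm_pdf(theta, 0.0, tau))
            if random.random() < min(1.0, ratio):
                theta = prop
        visits2 += (k == 2)
    return visits2 / n_iter

# Exact answer: the marginal likelihood of model 2 is N(y; 0, sqrt(1 + tau^2)).
exact = norm_pdf(y, 0.0, math.sqrt(1 + tau**2)) / (
        norm_pdf(y, 0.0, math.sqrt(1 + tau**2)) + norm_pdf(y, 0.0, 1.0))
print("RJMCMC estimate of P(model 2 | y):", round(run(), 3))
print("exact value:                      ", round(exact, 3))
```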
[ { "math_id": 0, "text": "n_m\\in N_m=\\{1,2,\\ldots,I\\} \\, " }, { "math_id": 1, "text": "M=\\bigcup_{n_m=1}^I \\R^{d_m}" }, { "math_id": 2, "text": "d_m" }, { "math_id": 3, "text": "n_m" }, { "math_id": 4, "text": "(M,N_m)" }, { "math_id": 5, "text": "(m,n_m)" }, { "math_id": 6, "text": "m'" }, { "math_id": 7, "text": "g_{1mm'}" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "u" }, { "math_id": 10, "text": "U" }, { "math_id": 11, "text": "q" }, { "math_id": 12, "text": "\\R^{d_{mm'}}" }, { "math_id": 13, "text": "(m',n_m')" }, { "math_id": 14, "text": "\n (m',n_m')=(g_{1mm'}(m,u),n_m') \\, \n" }, { "math_id": 15, "text": "\n g_{mm'}:=\\Bigg((m,u)\\mapsto \\bigg((m',u')=\\big(g_{1mm'}(m,u),g_{2mm'}(m,u)\\big)\\bigg)\\Bigg) \\, \n" }, { "math_id": 16, "text": " \\mathrm{supp}(g_{mm'})\\ne \\varnothing \\, " }, { "math_id": 17, "text": "g^{-1}_{mm'}=g_{m'm} \\, " }, { "math_id": 18, "text": "(m,u)" }, { "math_id": 19, "text": "(m',u')" }, { "math_id": 20, "text": "d_m+d_{mm'}=d_{m'}+d_{m'm} \\, " }, { "math_id": 21, "text": "d_{mm'}" }, { "math_id": 22, "text": "\\R^{d_m}\\subset \\R^{d_{m'}}" }, { "math_id": 23, "text": "d_m+d_{mm'}=d_{m'} \\, " }, { "math_id": 24, "text": "(m,u)=g_{m'm}(m). \\, " }, { "math_id": 25, "text": "\n a(m,m')=\\min\\left(1,\n \\frac{p_{m'm}p_{m'}f_{m'}(m')}{p_{mm'}q_{mm'}(m,u)p_{m}f_m(m)}\\left|\\det\\left(\\frac{\\partial g_{mm'}(m,u)}{\\partial (m,u)}\\right)\\right|\\right),\n" }, { "math_id": 26, "text": "|\\cdot |" }, { "math_id": 27, "text": "p_mf_m" }, { "math_id": 28, "text": "\n p_mf_m=c^{-1}p(y|m,n_m)p(m|n_m)p(n_m), \\, \n" }, { "math_id": 29, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=6069215
60693
Irreducible element
In algebra, element without non-trivial factors In algebra, an irreducible element of an integral domain is a non-zero element that is not invertible (that is, is not a unit), and is not the product of two non-invertible elements. The irreducible elements are the terminal elements of a factorization process; that is, they are the factors that cannot be further factorized. The irreducible factors of an element are uniquely defined, up to the multiplication by a unit, if the integral domain is a unique factorization domain. It was discovered in the 19th century that the rings of integers of some number fields are not unique factorization domains, and, therefore, that some irreducible elements can appear in some factorization of an element and not in other factorizations of the same element. The ignorance of this fact is the main error in many of the wrong proofs of Fermat's Last Theorem that were given during the three centuries between Fermat's statement and Wiles's proof of Fermat's Last Theorem. If formula_0 is an integral domain, then formula_1 is an irreducible element of formula_0 if and only if, for all formula_2, the equation formula_3 implies that the ideal generated by formula_1 is equal to the ideal generated by formula_4 or equal to the ideal generated by formula_5. This equivalence does not hold for general commutative rings, which is why the assumption of the ring having no nonzero zero divisors is commonly made in the definition of irreducible elements. It results also that there are several ways to extend the definition of an irreducible element to an arbitrary commutative ring. Relationship with prime elements. Irreducible elements should not be confused with prime elements. (A non-zero non-unit element formula_1 in a commutative ring formula_0 is called prime if, whenever formula_6 for some formula_4 and formula_5 in formula_7 then formula_8 or formula_9) In an integral domain, every prime element is irreducible, but the converse is not true in general. The converse is true for unique factorization domains (or, more generally, GCD domains). Moreover, while an ideal generated by a prime element is a prime ideal, it is not true in general that an ideal generated by an irreducible element is an irreducible ideal. However, if formula_10 is a GCD domain and formula_11 is an irreducible element of formula_10, then as noted above formula_11 is prime, and so the ideal generated by formula_11 is a prime (hence irreducible) ideal of formula_10. Example. In the quadratic integer ring formula_12 it can be shown using norm arguments that the number 3 is irreducible. However, it is not a prime element in this ring since, for example, formula_13 but 3 does not divide either of the two factors. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
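A brute-force Python check of the norm argument mentioned above (illustrative only): with N(a + b√−5) = a² + 5b², no element of the ring has norm 3, so any factorization of 3 forces one factor to be a unit, which makes 3 irreducible; at the same time 3 divides (2 + √−5)(2 − √−5) = 9 without dividing either factor, so it is not prime.

```python
def norm(a, b):
    """Norm of a + b*sqrt(-5) in Z[sqrt(-5)]."""
    return a * a + 5 * b * b

def divides(x, y):
    """Does x divide y in Z[sqrt(-5)]?  Check that y * conj(x) / N(x)
    has integer coordinates."""
    a, b = x
    c, d = y
    n = norm(a, b)
    re = c * a + 5 * d * b        # rational coordinate of y * conj(x)
    im = d * a - c * b            # sqrt(-5) coordinate of y * conj(x)
    return n != 0 and re % n == 0 and im % n == 0

# No element of norm 3 exists (search a generous box), so 3 = x*y forces
# one of N(x), N(y) to equal 1, i.e. that factor is a unit: 3 is irreducible.
print(any(norm(a, b) == 3 for a in range(-10, 11) for b in range(-10, 11)))  # False

three = (3, 0)
print(divides(three, (9, 0)))     # True:  3 | 9 = (2+sqrt(-5))(2-sqrt(-5))
print(divides(three, (2, 1)))     # False: 3 does not divide 2+sqrt(-5)
print(divides(three, (2, -1)))    # False: 3 does not divide 2-sqrt(-5)
```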
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b,c\\in R" }, { "math_id": 3, "text": "a=bc" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "c" }, { "math_id": 6, "text": "a \\mid bc" }, { "math_id": 7, "text": "R," }, { "math_id": 8, "text": "a \\mid b" }, { "math_id": 9, "text": "a \\mid c." }, { "math_id": 10, "text": "D" }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": "\\mathbf{Z}[\\sqrt{-5}]," }, { "math_id": 13, "text": "3 \\mid \\left(2 + \\sqrt{-5}\\right)\\left(2 - \\sqrt{-5}\\right)=9," } ]
https://en.wikipedia.org/wiki?curid=60693
606970
Minimal Supersymmetric Standard Model
Simplest supersymmetric extension to the Standard Model The Minimal Supersymmetric Standard Model (MSSM) is an extension to the Standard Model that realizes supersymmetry. MSSM is the minimal supersymmetrical model as it considers only "the [minimum] number of new particle states and new interactions consistent with "Reality". Supersymmetry pairs bosons with fermions, so every Standard Model particle has a (yet undiscovered) superpartner. If discovered, such superparticles could be candidates for dark matter, and could provide evidence for grand unification or the viability of string theory. The failure to find evidence for MSSM using the Large Hadron Collider has strengthened an inclination to abandon it. Background. The MSSM was originally proposed in 1981 to stabilize the weak scale, solving the hierarchy problem. The Higgs boson mass of the Standard Model is unstable to quantum corrections and the theory predicts that weak scale should be much weaker than what is observed to be. In the MSSM, the Higgs boson has a fermionic superpartner, the Higgsino, that has the same mass as it would if supersymmetry were an exact symmetry. Because fermion masses are radiatively stable, the Higgs mass inherits this stability. However, in MSSM there is a need for more than one Higgs field, as described below. The only unambiguous way to claim discovery of supersymmetry is to produce superparticles in the laboratory. Because superparticles are expected to be 100 to 1000 times heavier than the proton, it requires a huge amount of energy to make these particles that can only be achieved at particle accelerators. The Tevatron was actively looking for evidence of the production of supersymmetric particles before it was shut down on 30 September 2011. Most physicists believe that supersymmetry must be discovered at the LHC if it is responsible for stabilizing the weak scale. There are five classes of particle that superpartners of the Standard Model fall into: squarks, gluinos, charginos, neutralinos, and sleptons. These superparticles have their interactions and subsequent decays described by the MSSM and each has characteristic signatures. The MSSM imposes R-parity to explain the stability of the proton. It adds supersymmetry breaking by introducing explicit soft supersymmetry breaking operators into the Lagrangian that is communicated to it by some unknown (and unspecified) dynamics. This means that there are 120 new parameters in the MSSM. Most of these parameters lead to unacceptable phenomenology such as large flavor changing neutral currents or large electric dipole moments for the neutron and electron. To avoid these problems, the MSSM takes all of the soft supersymmetry breaking to be diagonal in flavor space and for all of the new CP violating phases to vanish. Theoretical motivations. There are three principal motivations for the MSSM over other theoretical extensions of the Standard Model, namely: These motivations come out without much effort and they are the primary reasons why the MSSM is the leading candidate for a new theory to be discovered at collider experiments such as the Tevatron or the LHC. Naturalness. The original motivation for proposing the MSSM was to stabilize the Higgs mass to radiative corrections that are quadratically divergent in the Standard Model (the hierarchy problem). In supersymmetric models, scalars are related to fermions and have the same mass. Since fermion masses are logarithmically divergent, scalar masses inherit the same radiative stability. 
The Higgs vacuum expectation value (VEV) is related to the negative scalar mass in the Lagrangian. In order for the radiative corrections to the Higgs mass to not be dramatically larger than the actual value, the mass of the superpartners of the Standard Model should not be significantly heavier than the Higgs VEV – roughly 100 GeV. In 2012, the Higgs particle was discovered at the LHC, and its mass was found to be 125–126 GeV. Gauge-coupling unification. If the superpartners of the Standard Model are near the TeV scale, then measured gauge couplings of the three gauge groups unify at high energies. The beta-functions for the MSSM gauge couplings are given by where formula_0 is measured in SU(5) normalization—a factor of different than the Standard Model's normalization and predicted by Georgi–Glashow SU(5) . The condition for gauge coupling unification at one loop is whether the following expression is satisfied formula_1. Remarkably, this is precisely satisfied to experimental errors in the values of formula_2. There are two loop corrections and both TeV-scale and GUT-scale threshold corrections that alter this condition on gauge coupling unification, and the results of more extensive calculations reveal that gauge coupling unification occurs to an accuracy of 1%, though this is about 3 standard deviations from the theoretical expectations. This prediction is generally considered as indirect evidence for both the MSSM and SUSY GUTs. Gauge coupling unification does not necessarily imply grand unification and there exist other mechanisms to reproduce gauge coupling unification. However, if superpartners are found in the near future, the apparent success of gauge coupling unification would suggest that a supersymmetric grand unified theory is a promising candidate for high scale physics. Dark matter. If R-parity is preserved, then the lightest superparticle (LSP) of the MSSM is stable and is a Weakly interacting massive particle (WIMP) – i.e. it does not have electromagnetic or strong interactions. This makes the LSP a good dark matter candidate, and falls into the category of cold dark matter (CDM). Predictions of the MSSM regarding hadron colliders. The Tevatron and LHC have active experimental programs searching for supersymmetric particles. Since both of these machines are hadron colliders – proton antiproton for the Tevatron and proton proton for the LHC – they search best for strongly interacting particles. Therefore, most experimental signature involve production of squarks or gluinos. Since the MSSM has R-parity, the lightest supersymmetric particle is stable and after the squarks and gluinos decay each decay chain will contain one LSP that will leave the detector unseen. This leads to the generic prediction that the MSSM will produce a 'missing energy' signal from these particles leaving the detector. Neutralinos. There are four neutralinos that are fermions and are electrically neutral, the lightest of which is typically stable. They are typically labeled , , , (although sometimes formula_3 is used instead). These four states are mixtures of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical with its antiparticle. Because these particles only interact with the weak vector bosons, they are not directly produced at hadron colliders in copious numbers. 
They primarily appear as particles in cascade decays of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos. In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade decays end up decaying into this particle which leaves the detector unseen and its existence can only be inferred by looking for unbalanced momentum in a detector. The heavier neutralinos typically decay through a to a lighter neutralino or through a to chargino. Thus a typical decay is Note that the “Missing energy” byproduct represents the mass-energy of the neutralino (  ) and in the second line, the mass-energy of a neutrino-antineutrino pair (  +  ) produced with the lepton and antilepton in the final decay, all of which are undetectable in individual reactions with current technology. The mass splittings between the different neutralinos will dictate which patterns of decays are allowed. Charginos. There are two charginos that are fermions and are electrically charged. They are typically labeled and (although sometimes formula_4 and formula_5 is used instead). The heavier chargino can decay through to the lighter chargino. Both can decay through a to neutralino. Squarks. The squarks are the scalar superpartners of the quarks and there is one version for each Standard Model quark. Due to phenomenological constraints from flavor changing neutral currents, typically the lighter two generations of squarks have to be nearly the same in mass and therefore are not given distinct names. The superpartners of the top and bottom quark can be split from the lighter squarks and are called "stop" and "sbottom". In the other direction, there may be a remarkable left-right mixing of the stops formula_6 and of the sbottoms formula_7 because of the high masses of the partner quarks top and bottom: formula_8 formula_9 A similar story holds for bottom formula_7 with its own parameters formula_10 and formula_11. Squarks can be produced through strong interactions and therefore are easily produced at hadron colliders. They decay to quarks and neutralinos or charginos which further decay. In R-parity conserving scenarios, squarks are pair produced and therefore a typical signal is formula_12 2 jets + missing energy formula_13 2 jets + 2 leptons + missing energy Gluinos. Gluinos are Majorana fermionic partners of the gluon which means that they are their own antiparticles. They interact strongly and therefore can be produced significantly at the LHC. They can only decay to a quark and a squark and thus a typical gluino signal is formula_14 4 jets + Missing energy Because gluinos are Majorana, gluinos can decay to either a quark+anti-squark or an anti-quark+squark with equal probability. Therefore, pairs of gluinos can decay to formula_15 4 jets+ formula_16 + Missing energy This is a distinctive signature because it has same-sign di-leptons and has very little background in the Standard Model. Sleptons. Sleptons are the scalar partners of the leptons of the Standard Model. They are not strongly interacting and therefore are not produced very often at hadron colliders unless they are very light. Because of the high mass of the tau lepton there will be left-right mixing of the stau similar to that of stop and sbottom (see above). Sleptons will typically be found in decays of a charginos and neutralinos if they are light enough to be a decay product. formula_17 formula_18 MSSM fields. 
Fermions have bosonic superpartners (called sfermions), and bosons have fermionic superpartners (called bosinos). For most of the Standard Model particles, doubling is very straightforward. However, for the Higgs boson, it is more complicated. A single Higgsino (the fermionic superpartner of the Higgs boson) would lead to a gauge anomaly and would cause the theory to be inconsistent. However, if two Higgsinos are added, there is no gauge anomaly. The simplest theory is one with two Higgsinos and therefore two scalar Higgs doublets. Another reason for having two scalar Higgs doublets rather than one is in order to have Yukawa couplings between the Higgs and both down-type quarks and up-type quarks; these are the terms responsible for the quarks' masses. In the Standard Model the down-type quarks couple to the Higgs field (which has Y=−) and the up-type quarks to its complex conjugate (which has Y=+). However, in a supersymmetric theory this is not allowed, so two types of Higgs fields are needed. MSSM superfields. In supersymmetric theories, every field and its superpartner can be written together as a superfield. The superfield formulation of supersymmetry is very convenient to write down manifestly supersymmetric theories (i.e. one does not have to tediously check that the theory is supersymmetric term by term in the Lagrangian). The MSSM contains vector superfields associated with the Standard Model gauge groups which contain the vector bosons and associated gauginos. It also contains chiral superfields for the Standard Model fermions and Higgs bosons (and their respective superpartners). MSSM Higgs mass. The MSSM Higgs mass is a prediction of the Minimal Supersymmetric Standard Model. The mass of the lightest Higgs boson is set by the Higgs "quartic coupling". Quartic couplings are not soft supersymmetry-breaking parameters since they lead to a quadratic divergence of the Higgs mass. Furthermore, there are no supersymmetric parameters to make the Higgs mass a free parameter in the MSSM (though not in non-minimal extensions). This means that Higgs mass is a prediction of the MSSM. The LEP II and the IV experiments placed a lower limit on the Higgs mass of 114.4 GeV. This lower limit is significantly above where the MSSM would typically predict it to be but does not rule out the MSSM; the discovery of the Higgs with a mass of 125 GeV is within the maximal upper bound of approximately 130 GeV that loop corrections within the MSSM would raise the Higgs mass to. Proponents of the MSSM point out that a Higgs mass within the upper bound of the MSSM calculation of the Higgs mass is a successful prediction, albeit pointing to more fine tuning than expected. Formulas. The only susy-preserving operator that creates a quartic coupling for the Higgs in the MSSM arise for the D-terms of the SU(2) and U(1) gauge sector and the magnitude of the quartic coupling is set by the size of the gauge couplings. This leads to the prediction that the Standard Model-like Higgs mass (the scalar that couples approximately to the VEV) is limited to be less than the Z mass: formula_19 . Since supersymmetry is broken, there are radiative corrections to the quartic coupling that can increase the Higgs mass. These dominantly arise from the 'top sector': formula_20 where formula_21 is the top mass and formula_22 is the mass of the top squark. 
This result can be interpreted as the RG running of the Higgs quartic coupling from the scale of supersymmetry to the top mass—however since the top squark mass should be relatively close to the top mass, this is usually a fairly modest contribution and increases the Higgs mass to roughly the LEP II bound of 114 GeV before the top squark becomes too heavy. Finally there is a contribution from the top squark A-terms: formula_23 where formula_24 is a dimensionless number. This contributes an additional term to the Higgs mass at loop level, but is not logarithmically enhanced formula_25 by pushing formula_26 (known as 'maximal mixing') it is possible to push the Higgs mass to 125 GeV without decoupling the top squark or adding new dynamics to the MSSM. As the Higgs was found at around 125 GeV (along with no other superparticles) at the LHC, this strongly hints at new dynamics beyond the MSSM, such as the 'Next to Minimal Supersymmetric Standard Model' (NMSSM); and suggests some correlation to the little hierarchy problem. MSSM Lagrangian. The Lagrangian for the MSSM contains several pieces. formula_27 The constant term is unphysical in global supersymmetry (as opposed to supergravity). Soft SUSY breaking. The last piece of the MSSM Lagrangian is the soft supersymmetry breaking Lagrangian. The vast majority of the parameters of the MSSM are in the susy breaking Lagrangian. The soft susy breaking are divided into roughly three pieces. The reason these soft terms are not often mentioned are that they arise through local supersymmetry and not global supersymmetry, although they are required otherwise if the Goldstino were massless it would contradict observation. The Goldstino mode is eaten by the Gravitino to become massive, through a gauge shift, which also absorbs the would-be "mass" term of the Goldstino. Problems. There are several problems with the MSSM—most of them falling into understanding the parameters. Theories of supersymmetry breaking. A large amount of theoretical effort has been spent trying to understand the mechanism for soft supersymmetry breaking that produces the desired properties in the superpartner masses and interactions. The three most extensively studied mechanisms are: Gravity-mediated supersymmetry breaking. Gravity-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through gravitational interactions. It was the first method proposed to communicate supersymmetry breaking. In gravity-mediated supersymmetry-breaking models, there is a part of the theory that only interacts with the MSSM through gravitational interaction. This hidden sector of the theory breaks supersymmetry. Through the supersymmetric version of the Higgs mechanism, the gravitino, the supersymmetric version of the graviton, acquires a mass. After the gravitino has a mass, gravitational radiative corrections to soft masses are incompletely cancelled beneath the gravitino's mass. It is currently believed that it is not generic to have a sector completely decoupled from the MSSM and there should be higher dimension operators that couple different sectors together with the higher dimension operators suppressed by the Planck scale. These operators give as large of a contribution to the soft supersymmetry breaking masses as the gravitational loops; therefore, today people usually consider gravity mediation to be gravitational sized direct interactions between the hidden sector and the MSSM. mSUGRA stands for minimal supergravity. 
The construction of a realistic model of interactions within "N" = 1 supergravity framework where supersymmetry breaking is communicated through the supergravity interactions was carried out by Ali Chamseddine, Richard Arnowitt, and Pran Nath in 1982. mSUGRA is one of the most widely investigated models of particle physics due to its predictive power requiring only 4 input parameters and a sign, to determine the low energy phenomenology from the scale of Grand Unification. The most widely used set of parameters is: Gravity-Mediated Supersymmetry Breaking was assumed to be flavor universal because of the universality of gravity; however, in 1986 Hall, Kostelecky, and Raby showed that Planck-scale physics that are necessary to generate the Standard-Model Yukawa couplings spoil the universality of the supersymmetry breaking. Gauge-mediated supersymmetry breaking (GMSB). Gauge-mediated supersymmetry breaking is method of communicating supersymmetry breaking to the supersymmetric Standard Model through the Standard Model's gauge interactions. Typically a hidden sector breaks supersymmetry and communicates it to massive messenger fields that are charged under the Standard Model. These messenger fields induce a gaugino mass at one loop and then this is transmitted on to the scalar superpartners at two loops. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.5GeV. With the Higgs being discovered at 125GeV - this model requires stops above 2 TeV. Anomaly-mediated supersymmetry breaking (AMSB). Anomaly-mediated supersymmetry breaking is a special type of gravity mediated supersymmetry breaking that results in supersymmetry breaking being communicated to the supersymmetric Standard Model through the conformal anomaly. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.0GeV. With the Higgs being discovered at 125GeV - this scenario requires stops heavier than 2 TeV. Phenomenological MSSM (pMSSM). The unconstrained MSSM has more than 100 parameters in addition to the Standard Model parameters. This makes any phenomenological analysis (e.g. finding regions in parameter space consistent with observed data) impractical. Under the following three assumptions: one can reduce the number of additional parameters to the following 19 quantities of the phenomenological MSSM (pMSSM): The large parameter space of pMSSM makes searches in pMSSM extremely challenging and makes pMSSM difficult to exclude. Experimental tests. Terrestrial detectors. XENON1T (a dark matter WIMP detector - being commissioned in 2016) is expected to explore/test supersymmetry candidates such as CMSSM. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
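As a purely numerical illustration of the one-loop gauge coupling unification condition quoted earlier in this article, the Python sketch below compares the ratio built from measured inverse couplings at the Z mass with the ratio of MSSM one-loop beta coefficients, and then runs the three inverse couplings up in energy. The inputs are rounded textbook values, not numbers taken from this article, and the running is one-loop only.

```python
import math

# Approximate inverse fine-structure constants at the Z mass, with alpha_1 in
# SU(5) normalisation (alpha_1 = 5/3 * alpha_Y).  Illustrative values only.
a1_inv, a2_inv, a3_inv = 59.0, 29.6, 8.45
MZ = 91.19  # GeV

# One-loop beta coefficients (b_1, b_2, b_3) for the MSSM.
b_mssm = (33.0 / 5.0, 1.0, -3.0)

# Left- and right-hand sides of the one-loop unification condition
# (a3^-1 - a2^-1) / (a2^-1 - a1^-1) = (b3 - b2) / (b2 - b1).
lhs = (a3_inv - a2_inv) / (a2_inv - a1_inv)
rhs = (b_mssm[2] - b_mssm[1]) / (b_mssm[1] - b_mssm[0])
print(f"coupling ratio {lhs:.3f}  vs  beta-coefficient ratio {rhs:.3f}")

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ).
def run_up(mu):
    t = math.log(mu / MZ)
    return [a - b * t / (2 * math.pi) for a, b in zip((a1_inv, a2_inv, a3_inv), b_mssm)]

# Near 2e16 GeV the three inverse couplings come out nearly equal (about 24).
print("inverse couplings at 2e16 GeV:", [round(x, 1) for x in run_up(2e16)])
```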
[ { "math_id": 0, "text": "\\alpha^{-1}_{1}" }, { "math_id": 1, "text": "\\frac{\\alpha^{-1}_3 - \\alpha^{-1}_2}{\\alpha^{-1}_2-\\alpha^{-1}_1} = \\frac{b_{0\\,3} - b_{0\\,2}}{b_{0\\,2} -b_{0\\,1}}" }, { "math_id": 2, "text": "\\alpha^{-1}(M_{Z^0})" }, { "math_id": 3, "text": " \\tilde{\\chi}_1^0, \\ldots, \\tilde{\\chi}_4^0" }, { "math_id": 4, "text": "\\tilde{\\chi}_1^\\pm" }, { "math_id": 5, "text": "\\tilde{\\chi}_2^\\pm" }, { "math_id": 6, "text": "\\tilde{t}" }, { "math_id": 7, "text": "\\tilde{b}" }, { "math_id": 8, "text": "\\tilde{t}_1 = e^{+i\\phi} \\cos(\\theta) \\tilde{t_L} + \\sin(\\theta) \\tilde{t_R}" }, { "math_id": 9, "text": "\\tilde{t}_2 = e^{-i\\phi} \\cos(\\theta) \\tilde{t_R} - \\sin(\\theta) \\tilde{t_L}" }, { "math_id": 10, "text": "\\phi" }, { "math_id": 11, "text": "\\theta" }, { "math_id": 12, "text": " \\tilde{q}\\tilde{\\bar{q}} \\rightarrow q \\tilde{N}^0_1 \\bar{q} \\tilde{N}^0_1 \\rightarrow " }, { "math_id": 13, "text": " \\tilde{q}\\tilde{\\bar{q}} \\rightarrow q \\tilde{N}^0_2 \\bar{q} \\tilde{N}^0_1 \\rightarrow q \\tilde{N}^0_1 \\ell \\bar{\\ell} \\bar{q} \\tilde{N}^0_1 \\rightarrow" }, { "math_id": 14, "text": " \\tilde{g}\\tilde{g}\\rightarrow (q \\tilde{\\bar{q}}) (\\bar{q} \\tilde{q}) \\rightarrow (q \\bar{q} \\tilde{N}^0_1) (\\bar{q} q \\tilde{N}^0_1) \\rightarrow" }, { "math_id": 15, "text": " \\tilde{g}\\tilde{g}\\rightarrow (\\bar{q} \\tilde{q}) (\\bar{q} \\tilde{q}) \\rightarrow (q \\bar{q} \\tilde{C}^+_1) (q \\bar{q} \\tilde{C}^+_1) \\rightarrow (q \\bar{q} W^+) (q \\bar{q} W^+) \\rightarrow " }, { "math_id": 16, "text": " \\ell^+ \\ell^+" }, { "math_id": 17, "text": "\\tilde{C}^+\\rightarrow \\tilde{\\ell}^+ \\nu" }, { "math_id": 18, "text": " \\tilde{N}^0 \\rightarrow \\tilde{\\ell}^+ \\ell^-" }, { "math_id": 19, "text": "m_{h^0}^2 \\le m_{Z^0}^2\\cos^2 2\\beta" }, { "math_id": 20, "text": "m_{h^0}^2 \\le m_{Z^0}^2\\cos^2 2\\beta + \\frac{3}{\\pi^2} \\frac{m_t^4 \\sin^4\\beta}{v^2} \\log \\frac{m_{\\tilde{t}}}{m_t}" }, { "math_id": 21, "text": "m_t" }, { "math_id": 22, "text": "m_{\\tilde{t}}" }, { "math_id": 23, "text": " \\mathcal{L} = y_t\\, m_{\\tilde{t}}\\, a\\; h_u \\tilde{q}_3 \\tilde{u}^c_3" }, { "math_id": 24, "text": " a " }, { "math_id": 25, "text": "m_{h^0}^2 \\le m_{Z^0}^2\\cos^2 2\\beta + \\frac{3}{\\pi^2} \\frac{m_t^4 \\sin^4\\beta}{v^2} \\left(\\log \\frac{m_{\\tilde{t}}}{m_t} + a^2 ( 1 - a^2/12) \\right)" }, { "math_id": 26, "text": "a \\rightarrow \\sqrt{6}" }, { "math_id": 27, "text": "W_{}^{} = \\mu H_u H_d+ y_u H_u Q U^c+ y_d H_d Q D^c + y_l H_d L E^c" }, { "math_id": 28, "text": " \\mathcal{L} \\supset m_{\\frac{1}{2}} \\tilde{\\lambda}\\tilde{\\lambda} + \\text{h.c.}" }, { "math_id": 29, "text": "\\tilde{\\lambda}" }, { "math_id": 30, "text": "m_{\\frac{1}{2}}" }, { "math_id": 31, "text": " \\mathcal{L} \\supset m^2_0 \\phi^\\dagger \\phi" }, { "math_id": 32, "text": "m_0" }, { "math_id": 33, "text": "3\\times 3" }, { "math_id": 34, "text": "A" }, { "math_id": 35, "text": " B" }, { "math_id": 36, "text": "\\mathcal{L} \\supset B_{\\mu} h_u h_d + A h_u \\tilde{q} \\tilde{u}^c+ A h_d \\tilde{q} \\tilde{d}^c +A h_d \\tilde{l} \\tilde{e}^c + \\text{h.c.}" }, { "math_id": 37, "text": " \\mathcal{L} \\supset m_{3/2}\\Psi_{\\mu}^{\\alpha}(\\sigma^{\\mu\\nu})_{\\alpha}^{\\beta}\\Psi_{\\beta} + m_{3/2}G^{\\alpha}G_{\\alpha}+\\text{h.c.} " } ]
https://en.wikipedia.org/wiki?curid=606970
6070279
Tate cohomology group
In mathematics, Tate cohomology groups are a slightly modified form of the usual cohomology groups of a finite group that combine homology and cohomology groups into one sequence. They were introduced by John Tate (1952, p. 297), and are used in class field theory. Definition. If "G" is a finite group and "A" a "G"-module, then there is a natural map "N" from formula_0 to formula_1 taking a representative "a" to formula_2 (the sum over all "G"-conjugates of "a"). The Tate cohomology groups formula_3 are defined by formula_4 for formula_5, by formula_6 the fixed points of "G" on "A" modulo the image of the norm map, by formula_7 the elements of "A" killed by the norm map, modulo the submodule generated by the elements "ga" − "a", and by formula_8 for formula_9. Properties. If formula_10 is a short exact sequence of "G"-modules, then we get the usual long exact sequence of Tate cohomology groups: formula_11 If "A" is an induced "G"-module, then all Tate cohomology groups of "A" vanish. The zeroth Tate cohomology group of "A" is (Fixed points of "G" on "A")/(Obvious fixed points of "G" acting on "A"), where by the "obvious" fixed points we mean those of the form formula_12. In other words, the zeroth cohomology group in some sense describes the non-obvious fixed points of "G" acting on "A". The Tate cohomology groups are characterized by the three properties above. Tate's theorem. Tate's theorem gives conditions for multiplication by a cohomology class to be an isomorphism between cohomology groups. There are several slightly different versions of it; a version that is particularly convenient for class field theory is as follows: Suppose that "A" is a module over a finite group "G" and "a" is an element of formula_13, such that for every subgroup "E" of "G" the group formula_14 is trivial and the group formula_15 is generated by formula_16 and has order equal to the order of "E". Then cup product with "a" is an isomorphism: formula_17 for all "n"; in other words the graded Tate cohomology of "A" is isomorphic to the Tate cohomology with integral coefficients, with the degree shifted by 2. Tate-Farrell cohomology. F. Thomas Farrell extended Tate cohomology groups to the case of all groups "G" of finite virtual cohomological dimension. In Farrell's theory, the groups formula_3 are isomorphic to the usual cohomology groups whenever "n" is greater than the virtual cohomological dimension of the group "G". Finite groups have virtual cohomological dimension 0, and in this case Farrell's cohomology groups are the same as those of Tate.
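The descriptions of formula_6 and formula_7 can be verified directly on small modules. The Python sketch below (written for this illustration; it handles only a cyclic group of order 2 acting on Z/m) computes the orders of the zeroth and minus-first Tate groups by brute force: the fixed points modulo the image of the norm map, and the kernel of the norm map modulo the submodule generated by the elements ga − a.

```python
def tate_h0_hminus1(m, action):
    """Orders of H^0 = A^G / N(A) and H^-1 = ker N / I_G A for a cyclic group
    G = <s> of order 2 acting on A = Z/m via the additive map `action`."""
    A = range(m)
    fixed = {a for a in A if action(a) % m == a}             # A^G
    norm = lambda a: (a + action(a)) % m                     # N(a) = a + s.a
    image = {norm(a) for a in A}                             # N(A), a subgroup of A^G
    kernel = {a for a in A if norm(a) == 0}                  # elements killed by N
    # I_G A: subgroup generated by all elements s.a - a
    gens = {(action(a) - a) % m for a in A}
    aug = {0}
    while True:
        new = {(x + g) % m for x in aug for g in gens} | aug
        if new == aug:
            break
        aug = new
    return len(fixed) // len(image), len(kernel) // len(aug)

# G = C_2 acting on Z/8 by negation: both Tate groups have order 2.
print(tate_h0_hminus1(8, lambda a: -a))    # (2, 2)
# Trivial action on Z/5, where the order of G is invertible on the module:
# both Tate groups are trivial.
print(tate_h0_hminus1(5, lambda a: a))     # (1, 1)
```

For the sign action on Z/8 both quotients have order 2, while for the trivial action on Z/5 the norm map is multiplication by 2, which is invertible mod 5, so both groups collapse to the trivial group.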
[ { "math_id": 0, "text": "H_0(G,A)" }, { "math_id": 1, "text": "H^0(G,A)" }, { "math_id": 2, "text": "\\sum_{g\\in G} ga" }, { "math_id": 3, "text": "\\hat H^n(G,A)" }, { "math_id": 4, "text": "\\hat H^n(G,A) = H^n(G,A)" }, { "math_id": 5, "text": "n\\ge 1" }, { "math_id": 6, "text": "\\hat H^0(G,A)=\\operatorname {coker} N=" }, { "math_id": 7, "text": "\\hat H^{-1}(G,A)=\\ker N=" }, { "math_id": 8, "text": "\\hat H^{n}(G,A) = H_{-n - 1}(G,A)" }, { "math_id": 9, "text": "n\\le -2" }, { "math_id": 10, "text": " 0 \\longrightarrow A \\longrightarrow B \\longrightarrow C \\longrightarrow 0" }, { "math_id": 11, "text": "\\cdots \\longrightarrow\\hat H^{n}(G,A)\\longrightarrow\\hat H^{n}(G,B)\\longrightarrow\\hat H^{n}(G,C)\\longrightarrow\\hat H^{n+1}(G,A)\\longrightarrow\\hat H^{n+1}(G,B)\\cdots" }, { "math_id": 12, "text": "\\sum g a" }, { "math_id": 13, "text": "H^2(G,A)" }, { "math_id": 14, "text": "H^1(E,A)" }, { "math_id": 15, "text": "H^2(E,A)" }, { "math_id": 16, "text": "\\operatorname{Res}(a)" }, { "math_id": 17, "text": "\\hat H^n(G,\\Z)\\longrightarrow\\hat H^{n+2}(G,A)" } ]
https://en.wikipedia.org/wiki?curid=6070279
607062
Frequency compensation
In electronics engineering, frequency compensation is a technique used in amplifiers, and especially in amplifiers employing negative feedback. It usually has two primary goals: To avoid the unintentional creation of positive feedback, which will cause the amplifier to oscillate, and to control overshoot and ringing in the amplifier's step response. It is also used extensively to improve the bandwidth of single pole systems. Explanation. Most amplifiers use negative feedback to trade gain for other desirable properties, such as decreased distortion, improved noise reduction or increased invariance to variation of parameters such as temperature. Ideally, the phase characteristic of an amplifier's frequency response would be linear; however, device limitations make this goal physically unattainable. More particularly, capacitances within the amplifier's gain stages cause the output signal to lag behind the input signal by up to 90° for each pole they create. If the sum of these phase lags reaches 180°, the output signal will be the negative of the input signal. Feeding back any portion of this output signal to the inverting (negative) input when the gain of the amplifier is sufficient will cause the amplifier to oscillate. This is because the feedback signal will reinforce the input signal. That is, the feedback is then positive rather than negative. Frequency compensation is implemented to avoid this result. Another goal of frequency compensation is to control the step response of an amplifier circuit as shown in Figure 1. For example, if a step in voltage is input to a voltage amplifier, ideally a step in output voltage would occur. However, the output is not ideal because of the frequency response of the amplifier, and ringing occurs. Several figures of merit to describe the adequacy of step response are in common use. One is the rise time of the output, which ideally would be short. A second is the time for the output to lock into its final value, which again should be short. The success in reaching this lock-in at final value is described by overshoot (how far the response exceeds final value) and settling time (how long the output swings back and forth about its final value). These various measures of the step response usually conflict with one another, requiring optimization methods. Frequency compensation is implemented to optimize step response, one method being pole splitting.A Use in operational amplifiers. Because operational amplifiers are so ubiquitous and are designed to be used with feedback, the following discussion will be limited to frequency compensation of these devices. It should be expected that the outputs of even the simplest operational amplifiers will have at least two poles. A consequence of this is that at some critical frequency, the phase of the amplifier's output = −180° compared to the phase of its input signal. The amplifier will oscillate if it has a gain of one or more at this critical frequency. This is because (a) the feedback is implemented through the use of an inverting input that adds an additional −180° to the output phase making the total phase shift −360° and (b) the gain is sufficient to induce oscillation. 
A more precise statement of this is the following: An operational amplifier will oscillate at the frequency at which its open loop gain equals its closed loop gain if, at that frequency, 1. The open loop gain of the amplifier is ≥ 1, and 2. The difference between the phase of the open loop signal and the phase response of the network creating the closed loop output = −180°. Mathematically: formula_0 Practice. Frequency compensation is implemented by modifying the gain and phase characteristics of the amplifier's open loop output or of its feedback network, or both, in such a way as to avoid the conditions leading to oscillation. This is usually done by the internal or external use of resistance-capacitance networks. Dominant-pole compensation. The method most commonly used is called dominant-pole compensation, which is a form of lag compensation. It is an external compensation technique and is used for relatively low closed loop gain. A pole placed at an appropriate low frequency in the open-loop response reduces the gain of the amplifier to one (0 dB) for a frequency at or just below the location of the next highest frequency pole. The lowest frequency pole is called the dominant pole because it dominates the effect of all of the higher frequency poles. The result is that the difference between the open loop output phase and the phase response of a feedback network having no reactive elements never falls below −180° while the amplifier has a gain of one or more, ensuring stability. Dominant-pole compensation can be implemented for general purpose operational amplifiers by adding an integrating capacitance to the stage that provides the bulk of the amplifier's gain. This capacitor creates a pole that is set at a frequency low enough to reduce the gain to one (0 dB) at or just below the frequency where the pole next highest in frequency is located. The result is a phase margin of ≈ 45°, depending on the proximity of still higher poles. This margin is sufficient to prevent oscillation in the most commonly used feedback configurations. In addition, dominant-pole compensation allows control of overshoot and ringing in the amplifier step response, which can be a more demanding requirement than the simple need for stability. This compensation method is described below: Let formula_1 be the uncompensated transfer function of the op amp in open-loop configuration, which is given by: formula_2 where formula_3 is the open-loop gain of the Op-Amp and formula_4, formula_5, and formula_6 are the angular frequencies at which the gain function formula_7 begins to roll off at −20 dB/decade, −40 dB/decade, and −60 dB/decade respectively. Thus, for compensation, introduce a dominant pole by adding an RC network in series with the Op-Amp as shown in the figure. The transfer function of the compensated open loop Op-Amp circuit gains an additional pole at fd, where fd < f1 < f2 < f3. The compensation capacitance C is chosen such that fd < f1. Hence, the frequency response of a dominant pole compensated open loop Op-Amp circuit shows a uniform gain roll-off from fd that reaches 0 dB (unity gain) at f1, as shown in the graph. The advantages of dominant pole compensation are: 1. It is simple and effective. 2. Noise immunity is improved since noise frequency components outside the bandwidth are eliminated. Though simple and effective, this kind of conservative dominant pole compensation has two drawbacks: Often, the implementation of dominant-pole compensation results in the phenomenon of "Pole splitting".
This results in the lowest frequency pole of the uncompensated amplifier "moving" to an even lower frequency to become the dominant pole, and the higher-frequency pole of the uncompensated amplifier "moving" to a higher frequency. To overcome these disadvantages, pole zero compensation is used. Other methods. Some other compensation methods are: lead compensation, lead–lag compensation and feed-forward compensation. Lead compensation. Whereas dominant pole compensation places or moves poles in the open loop response, lead compensation places a zero in the open loop response to cancel one of the existing poles. Lead–lag compensation places both a zero and a pole in the open loop response, with the pole usually being at an open loop gain of less than one. Feed-forward or Miller compensation uses a capacitor to bypass a stage in the amplifier at high frequencies, thereby eliminating the pole that stage creates. The purpose of these three methods is to allow greater open loop bandwidth while still maintaining amplifier closed loop stability. They are often used to compensate high gain, wide bandwidth amplifiers. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
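As a concrete illustration of the dominant-pole method described above, the following Python sketch evaluates the open-loop gain and phase of a hypothetical three-pole amplifier before and after a dominant pole is added, and reports the unity-gain frequency and phase margin in each case. The DC gain and pole frequencies are invented for demonstration, and placing the dominant pole at f1/A0 simply follows the rule of thumb quoted above; none of these values describe a real device.
<syntaxhighlight lang="python">
# Illustrative sketch: dominant-pole compensation of a hypothetical op-amp.
# All numbers (DC gain, pole frequencies) are assumed for demonstration only.
import numpy as np

A_OL = 1e5                      # assumed open-loop DC gain (100 dB)
poles_hz = [1e5, 1e6, 1e7]      # assumed uncompensated pole frequencies f1, f2, f3

def open_loop_gain(f, poles, a0=A_OL):
    """Complex open-loop gain A(jf) = A0 / prod(1 + j f/fp)."""
    a = a0 * np.ones_like(f, dtype=complex)
    for fp in poles:
        a /= (1 + 1j * f / fp)
    return a

def unity_gain_freq_and_phase_margin(poles):
    """Frequency where |A| falls to 1 and the phase margin there (degrees)."""
    f = np.logspace(0, 9, 20001)
    a = open_loop_gain(f, poles)
    phase_deg = np.degrees(np.unwrap(np.angle(a)))   # continuous phase, ~0 at low f
    idx = np.argmin(np.abs(np.abs(a) - 1.0))         # sample closest to 0 dB
    return f[idx], 180.0 + phase_deg[idx]

# Uncompensated amplifier: the phase margin comes out negative (oscillation).
f_u, pm_u = unity_gain_freq_and_phase_margin(poles_hz)

# Dominant-pole compensation: add a low-frequency pole so the gain reaches
# 0 dB near the first original pole f1 (here fd = f1 / A0).
f_dominant = poles_hz[0] / A_OL
f_c, pm_c = unity_gain_freq_and_phase_margin([f_dominant] + poles_hz)

print(f"uncompensated: unity-gain f ~ {f_u:.3g} Hz, phase margin ~ {pm_u:.1f} deg")
print(f"compensated:   unity-gain f ~ {f_c:.3g} Hz, phase margin ~ {pm_c:.1f} deg")
</syntaxhighlight>
With these assumed values the compensated amplifier shows a phase margin of roughly 40-45 degrees at a unity-gain frequency near f1, in line with the behaviour described above, while the uncompensated amplifier's margin is negative.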
[ { "math_id": 0, "text": "\\Phi_{OL} - \\Phi_{CLnet} = -180^\\circ" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "A(s) = A_{OL}\\cdot \\frac{1}{1+\\frac{s}{\\omega_1}} \\cdot \\frac{1}{1+\\frac{s}{\\omega_2}} \\cdot \\frac{1}{1+\\frac{s}{\\omega_3}}= A_{OL}\\cdot \\frac{\\omega_1\\omega_2\\omega_3}{(s+\\omega1)(s+\\omega_2)(s+\\omega_3)}" }, { "math_id": 3, "text": "A_{OL}" }, { "math_id": 4, "text": "\\omega_1" }, { "math_id": 5, "text": "\\omega_2" }, { "math_id": 6, "text": "\\omega_3" }, { "math_id": 7, "text": "A(s)" } ]
https://en.wikipedia.org/wiki?curid=607062
60714639
Allochromatium phaeobacterium
Genus of bacteria &lt;templatestyles src="Template:Taxobox/core/styles.css" /&gt; Allochromatium phaeobacterium ("A. phaeobacterium") is a phototrophic, rod-shaped purple sulfur bacterium from the genus "Allochromatium" which has been isolated from brackish water in Bheemli, Visakhapatnam, India. "A. phaeobacterium" was first isolated from anoxic sediment in 2007 and cultivated on a modified Biebl and Pfennig medium. Description. "A. phaeobacterium" is a rod-shaped, gram-negative species of bacteria. This species appears brown in color and measures 1.0–1.5 × 2.0–4.0 formula_0 in size. "A. phaeobacterium" is motile and reproduces through binary fission. It has been observed growing photoautotrophically, photolithoautotrophically, photolithotrophically, and photoorganoheterotrophically. Absorption spectra have confirmed the photopigments bacteriochlorophyll α, rhodopinal, and other carotenoids. Genome. "A. phaeobacterium" has been partially sequenced through 16S rRNA gene sequencing, yielding a sequence of approximately 1400 base pairs. The G+C content is 59.8%. Based on phylogenetic analysis of this 16S sequence, "Allochromatium phaeobacterium" is considered morphologically and physiologically distinct from others in the "Allochromatium" genus. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mu m" } ]
https://en.wikipedia.org/wiki?curid=60714639
607285
Haplotype
Group of genes from one parent A haplotype (haploid genotype) is a group of alleles in an organism that are inherited together from a single parent. Many organisms contain genetic material (DNA) which is inherited from two parents. Normally these organisms have their DNA organized in two sets of pairwise similar chromosomes. The offspring gets one chromosome in each pair from each parent. A set of pairs of chromosomes is called diploid and a set of only one half of each pair is called haploid. The haploid genotype (haplotype) is a genotype that considers the singular chromosomes rather than the pairs of chromosomes. It can be all the chromosomes from one of the parents or a minor part of a chromosome, for example a sequence of 9000 base pairs or a small set of alleles. Specific contiguous parts of the chromosome are likely to be inherited together and not be split by chromosomal crossover, a phenomenon called genetic linkage. As a result, identifying these statistical associations and a few alleles of a specific haplotype sequence can facilitate identifying "all other such" polymorphic sites that are nearby on the chromosome (imputation). Such information is critical for investigating the genetics of common diseases; which have been investigated in humans by the International HapMap Project. Other parts of the genome are almost always haploid and do not undergo crossover: for example, human mitochondrial DNA is passed down through the maternal line and the Y chromosome is passed down the paternal line. In these cases, the entire sequence can be grouped into a simple evolutionary tree, with each branch founded by a unique-event polymorphism mutation (often, but not always, a single-nucleotide polymorphism (SNP)). Each clade under a branch, containing haplotypes with a single shared ancestor, is called a haplogroup. Haplotype resolution. An organism's genotype may not define its haplotype uniquely. For example, consider a diploid organism and two bi-allelic loci (such as SNPs) on the same chromosome. Assume the first locus has alleles "A" or "T" and the second locus "G" or "C". Both loci, then, have three possible genotypes: ("AA", "AT", and "TT") and ("GG", "GC", and "CC"), respectively. For a given individual, there are nine possible configurations (haplotypes) at these two loci (shown in the Punnett square below). For individuals who are homozygous at one or both loci, the haplotypes are unambiguous - meaning that there is not any differentiation of haplotype T1T2 vs haplotype T2T1; where T1 and T2 are labeled to show that they are the same locus, but labeled as such to show it does not matter which order you consider them in, the end result is two T loci. For individuals heterozygous at both loci, the gametic phase is ambiguous - in these cases, an observer does not know which haplotype the individual has, e.g., TA vs AT. The only unequivocal method of resolving phase ambiguity is by sequencing. However, it is possible to estimate the probability of a particular haplotype when phase is ambiguous using a sample of individuals. Given the genotypes for a number of individuals, the haplotypes can be inferred by haplotype resolution or haplotype phasing techniques. These methods work by applying the observation that certain haplotypes are common in certain genomic regions. Therefore, given a set of possible haplotype resolutions, these methods choose those that use fewer different haplotypes overall. 
The specifics of these methods vary - some are based on combinatorial approaches (e.g., parsimony), whereas others use likelihood functions based on different models and assumptions such as the Hardy–Weinberg principle, the coalescent theory model, or perfect phylogeny. The parameters in these models are then estimated using algorithms such as the expectation-maximization algorithm (EM), Markov chain Monte Carlo (MCMC), or hidden Markov models (HMM). Microfluidic whole genome haplotyping is a technique for the physical separation of individual chromosomes from a metaphase cell followed by direct resolution of the haplotype for each allele. Gametic phase. In genetics, a gametic phase represents the original allelic combinations that a diploid individual inherits from both parents. It is therefore a particular association of alleles at different loci on the same chromosome. Gametic phase is influenced by genetic linkage. Y-DNA haplotypes from genealogical DNA tests. Unlike other chromosomes, Y chromosomes generally do not come in pairs. Every human male (excepting those with XYY syndrome) has only one copy of that chromosome. This means that there is not any chance variation of which copy is inherited, and also (for most of the chromosome) not any shuffling between copies by recombination; so, unlike autosomal haplotypes, there is effectively not any randomisation of the Y-chromosome haplotype between generations. A human male should largely share the same Y chromosome as his father, give or take a few mutations; thus Y chromosomes tend to pass largely intact from father to son, with a small but accumulating number of mutations that can serve to differentiate male lineages. In particular, the Y-DNA represented as the numbered results of a Y-DNA genealogical DNA test should match, except for mutations. UEP results (SNP results). Unique-event polymorphisms (UEPs) such as SNPs represent haplogroups. STRs represent haplotypes. The results that comprise the full Y-DNA haplotype from the Y chromosome DNA test can be divided into two parts: the results for UEPs, sometimes loosely called the SNP results as most UEPs are single-nucleotide polymorphisms, and the results for microsatellite short tandem repeat sequences (Y-STRs). The UEP results represent the inheritance of events it is believed can be assumed to have happened only once in all human history. These can be used to identify the individual's Y-DNA haplogroup, his place in the "family tree" of the whole of humanity. Different Y-DNA haplogroups identify genetic populations that are often distinctly associated with particular geographic regions; their appearance in more recent populations located in different regions represents the migrations tens of thousands of years ago of the direct patrilineal ancestors of current individuals. Y-STR haplotypes. Genetic results also include the Y-STR haplotype, the set of results from the Y-STR markers tested. Unlike the UEPs, the Y-STRs mutate much more easily, which allows them to be used to distinguish recent genealogy. But it also means that, rather than the population of descendants of a genetic event all sharing the "same" result, the Y-STR haplotypes are likely to have spread apart, to form a "cluster" of more or less similar results. Typically, this cluster will have a definite most probable center, the modal haplotype (presumably similar to the haplotype of the original founding event), and also a haplotype diversity — the degree to which it has become spread out. 
The further in the past the defining event occurred, and the more that subsequent population growth occurred early, the greater the haplotype diversity will be for a particular number of descendants. However, if the haplotype diversity is smaller for a particular number of descendants, this may indicate a more recent common ancestor, or a recent population expansion. It is important to note that, unlike for UEPs, two individuals with a similar Y-STR haplotype may not necessarily share a similar ancestry. Y-STR events are not unique. Instead, the clusters of Y-STR haplotype results inherited from different events and different histories tend to overlap. In most cases, it is a long time since the haplogroups' defining events, so typically the cluster of Y-STR haplotype results associated with descendants of that event has become rather broad. These results will tend to significantly overlap the (similarly broad) clusters of Y-STR haplotypes associated with other haplogroups. This makes it impossible for researchers to predict with absolute certainty to which Y-DNA haplogroup a Y-STR haplotype would point. If the UEPs are not tested, the Y-STRs may be used only to predict probabilities for haplogroup ancestry, but not certainties. A similar scenario exists in trying to evaluate whether shared surnames indicate shared genetic ancestry. A cluster of similar Y-STR haplotypes may indicate a shared common ancestor, with an identifiable modal haplotype, but only if the cluster is sufficiently distinct from what may have happened by chance from different individuals who historically adopted the same name independently. Many names were adopted from common occupations, for instance, or were associated with habitation of particular sites. More extensive haplotype typing is needed to establish genetic genealogy. Commercial DNA-testing companies now offer their customers testing of more numerous sets of markers to improve definition of their genetic ancestry. The number of sets of markers tested has increased from 12 during the early years to 111 more recently. Establishing plausible relatedness between different surnames data-mined from a database is significantly more difficult. The researcher must establish that the "very nearest" member of the population in question, chosen purposely from the population for that reason, would be unlikely to match by accident. This is more than establishing that a "randomly selected" member of the population is unlikely to have such a close match by accident. Because of the difficulty, establishing relatedness between different surnames as in such a scenario is likely to be impossible, except in special cases where there is specific information to drastically limit the size of the population of candidates under consideration. Diversity. Haplotype diversity is a measure of the uniqueness of a particular haplotype in a given population. The haplotype diversity (H) is computed as:&lt;br&gt; formula_0&lt;br&gt; where formula_1 is the (relative) haplotype frequency of each haplotype in the sample and formula_2 is the sample size. Haplotype diversity is given for each sample. History. The term "haplotype" was first introduced by MHC biologist Ruggero Ceppellini during the Third International Histocompatibility Workshop to substitute "pheno-group". References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
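As a worked illustration of the haplotype diversity statistic defined above, the short Python sketch below evaluates H = N/(N-1) * (1 - sum of squared haplotype frequencies) for a made-up sample of haplotypes; the counts are arbitrary and serve only to exercise the formula.
<syntaxhighlight lang="python">
# Minimal sketch of the haplotype diversity formula H = N/(N-1) * (1 - sum(x_i^2)),
# where x_i are relative haplotype frequencies and N is the sample size.
# The example counts below are invented purely for illustration.
from collections import Counter

def haplotype_diversity(haplotypes):
    """Haplotype diversity of a list of observed haplotypes."""
    n = len(haplotypes)
    if n < 2:
        raise ValueError("need at least two sampled haplotypes")
    counts = Counter(haplotypes)
    sum_sq = sum((c / n) ** 2 for c in counts.values())   # sum of x_i^2
    return n / (n - 1) * (1.0 - sum_sq)

# Hypothetical sample: 10 chromosomes carrying four distinct haplotypes.
sample = ["AG"] * 4 + ["AC"] * 3 + ["TG"] * 2 + ["TC"] * 1

print(f"sample size N = {len(sample)}")
print(f"haplotype diversity H = {haplotype_diversity(sample):.3f}")
# With frequencies 0.4, 0.3, 0.2, 0.1: H = 10/9 * (1 - 0.30), about 0.778.
</syntaxhighlight>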
[ { "math_id": 0, "text": "H=\\frac{N}{N-1}(1- \\sum_{i}x_i^2)" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=607285
60730016
Table of specific heat capacities
For some substances and engineering materials, includes volumetric and molar values The table of specific heat capacities gives the volumetric heat capacity as well as the specific heat capacity of some substances and engineering materials, and (when applicable) the molar heat capacity. Generally, the most notable constant parameter is the volumetric heat capacity (at least for solids), which is around the value of 3 megajoules per cubic meter per kelvin: formula_0 Note that the especially high "molar" values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of "molecules". If specific heat is expressed per mole of "atoms" for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong–Petit limit of 25 J⋅mol−1⋅K−1 = 3 "R" per mole of atoms (see the last column of this table). For example, paraffin has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mol (which is just 1.41 "R" per mole of atoms, or less than half of most solids, in terms of heat capacity per atom). The Dulong–Petit limit also explains why dense substances, such as lead, which have very heavy atoms, rank very low in mass heat capacity. In the last column, major departures of solids at standard temperatures from the Dulong–Petit law value of 3 "R" are usually due to low atomic weight plus high bond strength (as in diamond), causing some vibration modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3 "R" per mole of atoms is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibration modes in gas molecules to be excited at room temperature, and (2) loss of potential energy degree of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids. Human body. The specific heat of the human body, calculated from the measured values of individual tissues, is 2.98 kJ · kg−1 · °C−1. This is 17% lower than the earlier, more widely used value of 3.47 kJ · kg−1 · °C−1, which was not based on direct tissue measurements. The contribution of the muscle to the specific heat of the body is approximately 47%, and the contribution of the fat and skin is approximately 24%. The specific heat of tissues ranges from ~0.7 kJ · kg−1 · °C−1 for tooth (enamel) to 4.2 kJ · kg−1 · °C−1 for eye (sclera). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
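To illustrate the per-atom comparison with the Dulong–Petit value of 3 "R" discussed above, the sketch below converts specific heats into heat capacities per mole of atoms for a few substances. The specific heats and molar masses are approximate, commonly quoted literature values and are included here only as assumed inputs, not as a substitute for the table itself.
<syntaxhighlight lang="python">
# Sketch: express specific heat as heat capacity per mole of *atoms* and
# compare with the Dulong-Petit value 3R ~ 24.9 J/(mol K).
# The numbers below are approximate literature values (assumptions).
R = 8.314  # J/(mol K)

# name: (specific heat J/(g K), molar mass g/mol, atoms per formula unit)
substances = {
    "copper":       (0.385, 63.5, 1),
    "lead":         (0.129, 207.2, 1),
    "water (25 C)": (4.18, 18.0, 3),
    "diamond":      (0.509, 12.0, 1),
}

print(f"Dulong-Petit limit: 3R = {3*R:.1f} J/(mol K) per mole of atoms\n")
for name, (c_specific, molar_mass, atoms) in substances.items():
    c_molar = c_specific * molar_mass        # J/(mol K) per mole of formula units
    c_per_atom_mol = c_molar / atoms         # J/(mol K) per mole of atoms
    print(f"{name:12s}: {c_per_atom_mol:5.1f} J/(mol K) per atom-mol "
          f"({c_per_atom_mol/(3*R):.2f} x 3R)")
</syntaxhighlight>
Copper, lead and water come out close to 3 "R" per mole of atoms, while diamond falls far below it, consistent with the bond-strength argument given above.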
[ { "math_id": 0, "text": "\\rho c_p \\simeq 3\\,\\text{MJ}/(\\text{m}^3{\\cdot}\\text{K})\\quad \\text{(solid)}" } ]
https://en.wikipedia.org/wiki?curid=60730016
60738124
Reuschle's theorem
Describes a property of the cevians of a triangle intersecting in a common point In elementary geometry, Reuschle's theorem describes a property of the cevians of a triangle intersecting in a common point and is named after the German mathematician Karl Gustav Reuschle (1812–1875). It is also known as Terquem's theorem after the French mathematician Olry Terquem (1782–1862), who published it in 1842. In a triangle formula_1 whose three cevians intersect in a common point other than the vertices formula_2, formula_3 or formula_4, let formula_5, formula_6 and formula_7 denote the intersections of the (extended) triangle sides with the cevians. The circle defined by the three points formula_5, formula_6 and formula_7 intersects the (extended) triangle sides in the (additional) points formula_8, formula_9 and formula_10. Reuschle's theorem now states that the three new cevians formula_0, formula_11 and formula_12 intersect in a common point as well.
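Because the statement is purely one of incidence, it can be checked numerically. The Python sketch below takes an arbitrary triangle and an arbitrary cevian point, constructs the points described above, and verifies that the three new cevians are concurrent to within rounding error. The particular coordinates are arbitrary test values, not part of the theorem.
<syntaxhighlight lang="python">
# Numerical check of Reuschle's (Terquem's) theorem for one arbitrary example.
# The triangle vertices and the interior point Q are arbitrary assumptions.
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([5.0, 0.0])
C = np.array([1.5, 4.0])
Q = 0.2 * A + 0.3 * B + 0.5 * C          # arbitrary interior cevian point

def cross2(u, v):
    """z-component of the 2-D cross product (0 if u and v are parallel)."""
    return u[0] * v[1] - u[1] * v[0]

def line_intersection(p1, d1, p2, d2):
    """Intersection of the lines p1 + s*d1 and p2 + t*d2."""
    s, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + s * d1

def circumcircle(p, q, r):
    """Center and radius of the circle through three points."""
    m = 2 * np.array([q - p, r - p])
    rhs = np.array([q @ q - p @ p, r @ r - p @ p])
    center = np.linalg.solve(m, rhs)
    return center, np.linalg.norm(p - center)

def second_intersection(p0, d, known, center, radius):
    """Other intersection of the line p0 + t*d with the circle (one point is known)."""
    a = d @ d
    b = 2 * d @ (p0 - center)
    c = (p0 - center) @ (p0 - center) - radius ** 2
    t1, t2 = np.roots([a, b, c])
    pts = [p0 + t1.real * d, p0 + t2.real * d]
    # keep the root that is not the already-known intersection point
    return max(pts, key=lambda x: np.linalg.norm(x - known))

# Feet of the cevians through Q on the (extended) opposite sides.
Pa = line_intersection(A, Q - A, B, C - B)   # cevian from A meets BC
Pb = line_intersection(B, Q - B, C, A - C)   # cevian from B meets CA
Pc = line_intersection(C, Q - C, A, B - A)   # cevian from C meets AB

center, radius = circumcircle(Pa, Pb, Pc)

# Second intersections of that circle with the side lines.
Pa2 = second_intersection(B, C - B, Pa, center, radius)
Pb2 = second_intersection(C, A - C, Pb, center, radius)
Pc2 = second_intersection(A, B - A, Pc, center, radius)

# The new cevians A-Pa2 and B-Pb2 meet in X; the theorem says C-Pc2 passes through X too.
X = line_intersection(A, Pa2 - A, B, Pb2 - B)
deviation = cross2(Pc2 - C, X - C)           # 0 if X lies on the line C-Pc2
print("common point of the new cevians:", X)
print("deviation of the third cevian (should be ~0):", float(deviation))
</syntaxhighlight>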
[ { "math_id": 0, "text": "AP'_a" }, { "math_id": 1, "text": "ABC" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "P_a" }, { "math_id": 6, "text": "P_b" }, { "math_id": 7, "text": "P_c" }, { "math_id": 8, "text": "P'_a" }, { "math_id": 9, "text": "P'_b" }, { "math_id": 10, "text": "P'_c" }, { "math_id": 11, "text": "BP'_b" }, { "math_id": 12, "text": "CP'_c" } ]
https://en.wikipedia.org/wiki?curid=60738124
6073963
Transcendental equation
Equation whose side(s) describe a transcendental function In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, if at least one of its sides describes a transcendental function. Examples include: formula_0 A transcendental equation need not be an equation between elementary functions, although most published examples are. In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation. Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations. In general, however, only approximate solutions can be found. Transformation into an algebraic equation. Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved. Exponential equations. If the unknown, say "x", occurs only in exponents: formula_1 transforms to formula_2, which simplifies to formula_3, which has the solutions formula_4 This will not work if addition occurs "at the base line", as in formula_5 formula_6 transforms, using "y"=2"x", to formula_7 which has the solutions formula_8, hence formula_9 is the only real solution. This will not work if squares or higher power of "x" occurs in an exponent, or if the "base constants" do not "share" a common "q". formula_10 transforms to formula_11 which has the solutions formula_12 hence formula_13, where formula_14 and formula_15 the denote the real-valued branches of the multivalued formula_16 function. Logarithmic equations. If the unknown "x" occurs only in arguments of a logarithm function: formula_17 transforms, using exponentiation to base formula_18 to formula_19 which has the solutions formula_20 If only real numbers are considered, formula_21 is not a solution, as it leads to a non-real subexpression formula_22 in the given equation. This requires the original equation to consist of integer-coefficient linear combinations of logarithms w.r.t. a unique base, and the logarithm arguments to be polynomials in "x". formula_26 transforms, using formula_27 to formula_28 which is algebraic and has the single solution formula_29. After that, applying inverse operations to the substitution equation yields formula_30 Trigonometric equations. If the unknown "x" occurs only as argument of trigonometric functions: formula_35 transforms to formula_36, and, after substitution, to formula_37 which is algebraic and can be solved. After that, applying formula_38 obtains the solutions. Hyperbolic equations. If the unknown "x" occurs only in linear expressions inside arguments of hyperbolic functions, formula_40 unfolds to formula_41 which transforms to the equation formula_42 which is algebraic and can be solved. Applying formula_43 obtains the solutions of the original equation. Approximate solutions. Approximate numerical solutions to transcendental equations can be found using numerical, analytical approximations, or graphical methods. Numerical methods for solving arbitrary equations are called root-finding algorithms. In some cases, the equation can be well approximated using Taylor series near the zero. For example, for formula_44, the solutions of formula_45 are approximately those of formula_46, namely formula_47 and formula_48. 
For a graphical solution, one method is to set each side of a single-variable transcendental equation equal to a dependent variable and plot the two graphs, using their intersecting points to find solutions (see picture). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
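As a small illustration of the approximate methods just described, the sketch below solves x = cos x by bisection and compares the positive root of sin x = kx with the Taylor-based estimate mentioned above; the value of k and the tolerance are arbitrary choices.
<syntaxhighlight lang="python">
# Sketch: numerical solution of two transcendental equations from the text.
# Bisection keeps the example dependency-free; tolerances and k are arbitrary.
import math

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection (f must change sign)."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(hi - lo) < tol:
            break
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 1) x = cos x has a single real root near 0.739.
root = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
print(f"x = cos x      -> x ~ {root:.9f}")

# 2) sin x = k x for k slightly below 1: compare the positive root with the
#    Taylor-series estimate sqrt(6) * sqrt(1 - k).
k = 0.95
true_root = bisect(lambda x: math.sin(x) - k * x, 1e-6, math.pi)
approx = math.sqrt(6.0) * math.sqrt(1.0 - k)
print(f"sin x = {k} x -> x ~ {true_root:.6f}  (Taylor estimate {approx:.6f})")
</syntaxhighlight>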
[ { "math_id": 0, "text": "\\begin{align}\n x &= e^{-x} \\\\\n x &= \\cos x \\\\\n 2^x &= x^2\n\\end{align}" }, { "math_id": 1, "text": "4^x = 3^{x^2-1} \\cdot 2^{5x}" }, { "math_id": 2, "text": "x \\ln 4 = (x^2-1) \\ln 3 + 5x \\ln 2" }, { "math_id": 3, "text": "x^2 \\ln 3 + x(5 \\ln 2 - \\ln 4) -\\ln 3 = 0" }, { "math_id": 4, "text": "x = \\frac{ -3 \\ln 2 \\pm \\sqrt{9(\\ln 2)^2 - 4 (\\ln 3)^2} }{ 2 \\ln 3 } ." }, { "math_id": 5, "text": "4^x = 3^{x^2-1} + 2^{5x} ." }, { "math_id": 6, "text": "2^{x-1} + 4^{x-2} - 8^{x-2} = 0" }, { "math_id": 7, "text": "\\frac{1}{2} y + \\frac{1}{16} y^2 - \\frac{1}{64} y^3 = 0" }, { "math_id": 8, "text": "y \\in \\{ 0, -4, 8\\}" }, { "math_id": 9, "text": "x= \\log_2 8 = 3" }, { "math_id": 10, "text": "x^2e^{2x} + 2 = 3x e^x" }, { "math_id": 11, "text": "y^2 + 2 = 3y," }, { "math_id": 12, "text": "y \\in \\{1,2\\}," }, { "math_id": 13, "text": "x \\in \\{ W_0(1), W_0(2), W_{-1}(1), W_{-1}(2) \\}" }, { "math_id": 14, "text": "W_0" }, { "math_id": 15, "text": "W_{-1}" }, { "math_id": 16, "text": "W" }, { "math_id": 17, "text": "2 \\log_5 (3x-1) - \\log_5 (12x+1) = 0" }, { "math_id": 18, "text": "5." }, { "math_id": 19, "text": "\\frac{ (3x-1)^2 }{ 12x+1 } = 1," }, { "math_id": 20, "text": "x \\in \\{ 0, 2\\} ." }, { "math_id": 21, "text": "x = 0" }, { "math_id": 22, "text": "\\log_5(-1)" }, { "math_id": 23, "text": "b" }, { "math_id": 24, "text": "f(x)," }, { "math_id": 25, "text": "y = \\log_b (f(x))" }, { "math_id": 26, "text": "5 \\ln(\\sin x^2) + 6 = 7 \\sqrt{ \\ln(\\sin x^2) + 8 }" }, { "math_id": 27, "text": "y = \\ln(\\sin x^2) ," }, { "math_id": 28, "text": "5 y + 6 = 7 \\sqrt{ y + 8 }," }, { "math_id": 29, "text": "y=\\frac{89}{25}" }, { "math_id": 30, "text": "x = \\sqrt{ \\arcsin \\exp y } = \\sqrt{ \\arcsin \\exp \\frac{89}{25} }." }, { "math_id": 31, "text": "\\sin(nx+a), \\cos(mx+b), \\tan(lx+c), ..." }, { "math_id": 32, "text": "n,m,l,..." 
}, { "math_id": 33, "text": "\\sin x" }, { "math_id": 34, "text": "y = \\sin(x)" }, { "math_id": 35, "text": "\\sin(x+a) = (\\cos^2 x) - 1" }, { "math_id": 36, "text": "(\\sin x)(\\cos a) + \\sqrt{ 1 - \\sin^2 x }(\\sin a) = 1 - (\\sin^2 x) - 1" }, { "math_id": 37, "text": "y (\\cos a) + \\sqrt{ 1 - y^2 }(\\sin a) = - y^2" }, { "math_id": 38, "text": "x = 2k\\pi + \\arcsin y" }, { "math_id": 39, "text": "y = \\exp(x)" }, { "math_id": 40, "text": "3 \\cosh x = 4 + \\sinh (2x-6)" }, { "math_id": 41, "text": "\\frac{3}{2} (e^x + \\frac{1}{e^x}) = 4 + \\frac{1}{2} \\left( \\frac{(e^x)^2}{e^6} - \\frac{e^6}{(e^x)^2} \\right) ," }, { "math_id": 42, "text": "\\frac{3}{2} (y + \\frac{1}{y}) = 4 + \\frac{1}{2} \\left( \\frac{y^2}{e^6} - \\frac{e^6}{y^2} \\right) ," }, { "math_id": 43, "text": "x = \\ln y" }, { "math_id": 44, "text": "k \\approx 1" }, { "math_id": 45, "text": "\\sin x = k x" }, { "math_id": 46, "text": "(1-k) x - x^3/6=0" }, { "math_id": 47, "text": "x=0" }, { "math_id": 48, "text": "x = \\plusmn \\sqrt{6} \\sqrt{1-k}" }, { "math_id": 49, "text": "x_0" }, { "math_id": 50, "text": "f(x)=g(x)" }, { "math_id": 51, "text": "f(x)\\leq c\\leq g(x)" }, { "math_id": 52, "text": "f(x_0)=g(x_0)=c" }, { "math_id": 53, "text": "\\log_{2}\\left(3+2x-x^{2}\\right)=\\tan^{2}\\left(\\frac{\\pi x}{4}\\right)+\\cot^{2}\\left(\\frac{\\pi x}{4}\\right)" }, { "math_id": 54, "text": "-1<x<3" }, { "math_id": 55, "text": "f(x)=\\log_{2}\\left(3+2x-x^{2}\\right)" }, { "math_id": 56, "text": "g(x)=\\tan^{2}\\left(\\frac{\\pi x}{4}\\right)+\\cot^{2}\\left(\\frac{\\pi x}{4}\\right)" }, { "math_id": 57, "text": "f(x)\\leq 2" }, { "math_id": 58, "text": "g(x)\\geq 2" }, { "math_id": 59, "text": "f(x)=g(x)=2" }, { "math_id": 60, "text": "f(x)=2" }, { "math_id": 61, "text": "x=1\\in(-1,3)" }, { "math_id": 62, "text": "f(1)=g(1)=2" }, { "math_id": 63, "text": "x=1" } ]
https://en.wikipedia.org/wiki?curid=6073963
60744
Cubic zirconia
The cubic crystalline form of zirconium dioxide Cubic zirconia (abbreviated "CZ") is the cubic crystalline form of zirconium dioxide (ZrO2). The synthesized material is hard and usually colorless, but may be made in a variety of different colors. It should not be confused with zircon, which is a zirconium silicate (ZrSiO4). It is sometimes erroneously called "cubic zirconium". Because of its low cost, durability, and close visual likeness to diamond, synthetic cubic zirconia has remained the most gemologically and economically important competitor for diamonds since commercial production began in 1976. Its main competitor as a synthetic gemstone is a more recently cultivated material, synthetic moissanite. Technical aspects. Cubic zirconia is crystallographically isometric, an important attribute of a would-be diamond simulant. During synthesis zirconium oxide naturally forms monoclinic crystals, which are stable under normal atmospheric conditions. A stabilizer is required for cubic crystals (taking on the fluorite structure) to form, and remain stable at ordinary temperatures; typically this is either yttrium or calcium oxide, the amount of stabilizer used depending on the many recipes of individual manufacturers. Therefore, the physical and optical properties of synthesized CZ vary, all values being ranges. It is a dense substance, with a density between 5.6 and 6.0 g/cm3—about 1.65 times that of diamond. Cubic zirconia is relatively hard, 8–8.5 on the Mohs scale—slightly harder than most semi-precious natural gems. Its refractive index is high at 2.15–2.18 (compared to 2.42 for diamonds) and its luster is Adamantine lustre. Its dispersion is very high at 0.058–0.066, exceeding that of diamond (0.044). Cubic zirconia has no cleavage and exhibits a conchoidal fracture. Because of its high hardness, it is generally considered brittle. Under shortwave UV cubic zirconia typically fluoresces a yellow, greenish yellow or "beige". Under longwave UV the effect is greatly diminished, with a whitish glow sometimes being seen. Colored stones may show a strong, complex rare earth absorption spectrum. History. Discovered in 1892, the yellowish monoclinic mineral baddeleyite is a natural form of zirconium oxide. The high melting point of zirconia (2750 °C or 4976 °F) hinders controlled growth of single crystals. However, stabilization of cubic zirconium oxide had been realized early on, with the synthetic product "stabilized zirconia" introduced in 1929. Although cubic, it was in the form of a polycrystalline ceramic: it was used as a refractory material, highly resistant to chemical and thermal attack (up to 2540 °C or 4604 °F). In 1937, German mineralogists M. V. Stackelberg and K. Chudoba discovered naturally occurring cubic zirconia in the form of microscopic grains included in metamict zircon. This was thought to be a byproduct of the metamictization process, but the two scientists did not think the mineral important enough to give it a formal name. The discovery was confirmed through X-ray diffraction, proving the existence of a natural counterpart to the synthetic product. As with the majority of grown diamond substitutes, the idea of producing single-crystal cubic zirconia arose in the minds of scientists seeking a new and versatile material for use in lasers and other optical applications. Its production eventually exceeded that of earlier synthetics, such as synthetic strontium titanate, synthetic rutile, YAG (yttrium aluminium garnet) and GGG (gadolinium gallium garnet). 
Some of the earliest research into controlled single-crystal growth of cubic zirconia occurred in 1960s France, much work being done by Y. Roulin and R. Collongues. This technique involved molten zirconia being contained within a thin shell of still-solid zirconia, with crystal growth from the melt. The process was named "cold crucible", an allusion to the system of water cooling used. Though promising, these attempts yielded only small crystals. Later, Soviet scientists under V. V. Osiko in the Laser Equipment Laboratory at the Lebedev Physical Institute in Moscow perfected the technique, which was then named "skull crucible" (an allusion either to the shape of the water-cooled container or to the form of crystals sometimes grown). They named the jewel "Fianit" after the institute's name FIAN (Physical Institute of the Academy of Science), but the name was not used outside of the USSR. This was known at the time as the Institute of Physics at the Russian Academy of Science. Their breakthrough was published in 1973, and commercial production began in 1976. In 1977, cubic zirconia began to be mass-produced in the jewelry marketplace by the Ceres Corporation, with crystals stabilized with 94% yttria. Other major producers as of 1993 include Taiwan Crystal Company Ltd, Swarovski and ICT inc. By 1980, annual global production had reached 60 million carats (12 tonnes) and continued to increase, with production reaching around 400 tonnes per year in 1998. Because the natural form of cubic zirconia is so rare, all cubic zirconia used in jewelry has been synthesized, one method of which was patented by Josep F. Wenckus &amp; Co. in 1997. Synthesis. The skull-melting method refined by Josep F. Wenckus and coworkers in 1997 remains the industry standard. This is largely due to the process allowing for temperatures of over 3000  °C to be achieved, lack of contact between crucible and material as well as the freedom to choose any gas atmosphere. Primary downsides to this method include the inability to predict the size of the crystals produced and it is impossible to control the crystallization process through temperature changes. The apparatus used in this process consists of a cup-shaped crucible surrounded by radio frequency-activated (RF-activated) copper coils and a water-cooling system. Zirconium dioxide thoroughly mixed with a stabilizer (normally 10% yttrium oxide) is fed into a cold crucible. Metallic chips of either zirconium or the stabilizer are introduced into the powder mix in a compact pile manner. The RF generator is switched on and the metallic chips quickly start heating up and readily oxidize into more zirconia. Consequently, the surrounding powder heats up by thermal conduction, begins melting and, in turn, becomes electroconductive, and thus it begins to heat up via the RF generator as well. This continues until the entire product is molten. Due to the cooling system surrounding the crucible, a thin shell of sintered solid material is formed. This causes the molten zirconia to remain contained within its own powder which prevents it from being contaminated from the crucible and reduces heat loss. The melt is left at high temperatures for some hours to ensure homogeneity and ensure that all impurities have evaporated. Finally, the entire crucible is slowly removed from the RF coils to reduce the heating and let it slowly cool down (from bottom to top). 
The rate at which the crucible is removed from the RF coils is chosen as a function of the stability of crystallization dictated by the phase transition diagram. This provokes the crystallization process to begin and useful crystals begin to form. Once the crucible has been completely cooled to room temperature, the resulting crystals are multiple elongated-crystalline blocks. This shape is dictated by a concept known as crystal degeneration according to Tiller. The size and diameter of the obtained crystals is a function of the cross-sectional area of the crucible, volume of the melt and composition of the melt. The diameter of the crystals is heavily influenced by the concentration of Y2O3 stabilizer. Phase relations in zirconia solids solutions. As seen on the phase diagram, the cubic phase will crystallize first as the solution is cooled down no matter the concentration of Y2O3. If the concentration of Y2O3 is not high enough the cubic structure will start to break down into the tetragonal state which will then break down into a monoclinic phase. If the concentration of Y2O3 is between 2.5-5% the resulting product will be PSZ (partially stabilized zirconia) while monophasic cubic crystals will form from around 8-40%. Below 14% at low growth rates tend to be opaque indicating partial phase separation in the solid solution (likely due to diffusion in the crystals remaining in the high temperature region for a longer time). Above this threshold crystals tend to remain clear at reasonable growth rates and maintains good annealing conditions. Doping. Because of cubic zirconia's isomorphic capacity, it can be doped with several elements to change the color of the crystal. A list of specific dopants and colors produced by their addition can be seen below. Primary growth defects. The vast majority of YCZ (yttrium bearing cubic zirconia) crystals are clear with high optical perfection and with gradients of the refractive index lower than formula_0. However some samples contain defects with the most characteristic and common ones listed below. Uses outside jewelry. Due to its optical properties yttrium cubic zirconia (YCZ) has been used for windows, lenses, prisms, filters and laser elements. Particularly in the chemical industry it is used as window material for the monitoring of corrosive liquids due to its chemical stability and mechanical toughness. YCZ has also been used as a substrate for semiconductor and superconductor films in similar industries. Mechanical properties of partially stabilized zirconia (high hardness and shock resistance, low friction coefficient, high chemical and thermal resistance as well as high wear and tear resistance) allow it to be used as a very particular building material, especially in the bio-engineering industry: It has been used to make reliable super-sharp medical scalpels for doctors that are compatible with bio-tissues and contain an edge much smoother than one made of steel. Innovations. In recent years manufacturers have sought ways of distinguishing their product by supposedly "improving" cubic zirconia. Coating finished cubic zirconia with a film of diamond-like carbon (DLC) is one such innovation, a process using chemical vapor deposition. The resulting material is purportedly harder, more lustrous and more like diamond overall. The coating is thought to quench the excess fire of cubic zirconia, while improving its refractive index, thus making it appear more like diamond. 
Additionally, because of the high percentage of diamond bonds in the amorphous diamond coating, the finished simulant will show a positive diamond signature in Raman spectra. Another technique first applied to quartz and topaz has also been adapted to cubic zirconia: An iridescent effect created by vacuum-sputtering onto finished stones an extremely thin layer of a precious metal (typically gold), or certain metal oxides, metal nitrides, or other coatings. This material is marketed as "mystic" by many dealers. Unlike diamond-like carbon and other hard synthetic ceramic coatings, the iridescent effect made with precious metal coatings is not durable, due to their extremely low hardness and poor abrasion wear properties, compared to the remarkably durable cubic zirconia substrate. Cubic zirconia vis-à-vis diamond. Key features of cubic zirconia distinguish it from diamond: Effects on the diamond market. Cubic zirconia, as a diamond simulant and jewel competitor, can potentially reduce demand for conflict diamonds, and impact the controversy surrounding the rarity and value of diamonds. Regarding value, the paradigm that diamonds are costly due to their rarity and visual beauty has been replaced by an artificial rarity attributed to price-fixing practices of De Beers Company which held a monopoly on the market from the 1870s to early 2000s. The company pleaded guilty to these charges in an Ohio court in 13 July 2004. However, while De Beers has less market power, the price of diamonds continues to increase due to the demand in emerging markets such as India and China. The emergence of artificial stones such as cubic zirconia with optic properties similar to diamonds, could be an alternative for jewelry buyers given their lower price and noncontroversial history. An issue closely related to monopoly is the emergence of conflict diamonds. The Kimberley Process (KP) was established to deter the illicit trade of diamonds that fund civil wars in Angola and Sierra Leone. However, the KP is not as effective in decreasing the number of conflict diamonds reaching the European and American markets. Its definition does not include forced labor conditions or human right violations. A 2015 study from the Enough Project, showed that groups in the Central African Republic have reaped between US$3 million and US$6 million annually from conflict diamonds. UN reports show that more than US$24 million in conflict diamonds have been smuggled since the establishment of the KP. Diamond simulants have become an alternative to boycott the funding of unethical practices. Terms such as “Eco-friendly Jewelry” define them as conflict free origin and environmentally sustainable. However, concerns from mining countries such as the Democratic Republic of Congo are that a boycott in purchases of diamonds would only worsen their economy. According to the Ministry of Mines in Congo, 10% of its population relies on the income from diamonds. Therefore, cubic zirconia are a short term alternative to reduce conflict but a long term solution would be to establish a more rigorous system of identifying the origin of these stones. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
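As a rough numerical aside to the optical comparison above, the sketch below uses the refractive indices quoted in this article to compute the critical angle for total internal reflection and the normal-incidence surface reflectance of cubic zirconia and diamond. The cubic zirconia value is taken from the middle of the quoted 2.15–2.18 range, and the link between these quantities and perceived brilliance is a simplification.
<syntaxhighlight lang="python">
# Sketch: simple optical comparison of cubic zirconia and diamond from the
# refractive indices quoted above. The CZ value is a mid-range assumption.
import math

materials = {
    "cubic zirconia": 2.17,   # mid-range of the quoted 2.15-2.18
    "diamond":        2.42,
}

for name, n in materials.items():
    critical_angle = math.degrees(math.asin(1.0 / n))   # total internal reflection limit
    normal_reflectance = ((n - 1) / (n + 1)) ** 2        # Fresnel reflectance at normal incidence
    print(f"{name:15s}: n = {n:.2f}, critical angle ~ {critical_angle:.1f} deg, "
          f"surface reflectance ~ {100 * normal_reflectance:.1f}%")
</syntaxhighlight>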
[ { "math_id": 0, "text": "5\\times 10^{-5}" }, { "math_id": 1, "text": "8\\times 10^{-4}" } ]
https://en.wikipedia.org/wiki?curid=60744
607495
Freezing-point depression
Drop in freezing temperature of a solvent due to the addition of solute Freezing-point depression is a drop in the maximum temperature at which a substance freezes, caused when a smaller amount of another, non-volatile substance is added. Examples include adding salt into water (used in ice cream makers and for de-icing roads), alcohol in water, ethylene or propylene glycol in water (used in antifreeze in cars), adding copper to molten silver (used to make solder that flows at a lower temperature than the silver pieces being joined), or the mixing of two solids such as impurities into a finely powdered drug. In all cases, the substance added/present in smaller amounts is considered the solute, while the original substance present in larger quantity is thought of as the solvent. The resulting liquid solution or solid-solid mixture has a lower freezing point than the pure solvent or solid because the chemical potential of the solvent in the mixture is lower than that of the pure solvent, the difference between the two being proportional to the natural logarithm of the mole fraction. In a similar manner, the chemical potential of the vapor above the solution is lower than that above a pure solvent, which results in boiling-point elevation. Freezing-point depression is what causes sea water (a mixture of salt and other compounds in water) to remain liquid at temperatures below 0 °C (32 °F), the freezing point of pure water. Explanation. Using vapour pressure. The freezing point is the temperature at which the liquid solvent and solid solvent are at equilibrium, so that their vapor pressures are equal. When a non-volatile solute is added to a volatile liquid solvent, the solution vapour pressure will be lower than that of the pure solvent. As a result, the solid will reach equilibrium with the solution at a lower temperature than with the pure solvent. This explanation in terms of vapor pressure is equivalent to the argument based on chemical potential, since the chemical potential of a vapor is logarithmically related to pressure. All of the colligative properties result from a lowering of the chemical potential of the solvent in the presence of a solute. This lowering is an entropy effect. The greater randomness of the solution (as compared to the pure solvent) acts in opposition to freezing, so that a lower temperature must be reached, over a broader range, before equilibrium between the liquid solution and solid solution phases is achieved. Melting point determinations are commonly exploited in organic chemistry to aid in identifying substances and to ascertain their purity. Due to concentration and entropy. In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze (a lower concentration of solvent exists in a solution versus pure solvent). Re-establishment of equilibrium is achieved at a lower temperature at which the rate of freezing becomes equal to the rate of liquefying. The solute is not occluding or preventing the solvent from solidifying; it is simply diluting it, so there is a reduced probability of the solvent making an attempt at freezing in any given moment. At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well. Uses. The phenomenon of freezing-point depression has many practical uses. The radiator fluid in an automobile is a mixture of water and ethylene glycol. 
The freezing-point depression prevents radiators from freezing in winter. Road salting takes advantage of this effect to lower the freezing point of the ice it is placed on. Lowering the freezing point allows the street ice to melt at lower temperatures, preventing the accumulation of dangerous, slippery ice. Commonly used sodium chloride can depress the freezing point of water to about −21 °C (−6 °F). If the road surface temperature is lower, NaCl becomes ineffective and other salts are used, such as calcium chloride, magnesium chloride or a mixture of many. These salts are somewhat aggressive to metals, especially iron, so in airports safer media such as sodium formate, potassium formate, sodium acetate, and potassium acetate are used instead. Freezing-point depression is used by some organisms that live in extreme cold. Such creatures have evolved means through which they can produce a high concentration of various compounds such as sorbitol and glycerol. This elevated concentration of solute decreases the freezing point of the water inside them, preventing the organism from freezing solid even as the water around them freezes, or as the air around them becomes very cold. Examples of organisms that produce antifreeze compounds include some species of arctic-living fish such as the rainbow smelt, which produces glycerol and other molecules to survive in frozen-over estuaries during the winter months. In other animals, such as the spring peeper frog ("Pseudacris crucifer"), the molality is increased temporarily as a reaction to cold temperatures. In the case of the peeper frog, freezing temperatures trigger a large-scale breakdown of glycogen in the frog's liver and subsequent release of massive amounts of glucose into the blood. With the formula below, freezing-point depression can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called cryoscopy (Greek "cryo" = cold, "scopos" = observe; "observe the cold") and relies on exact measurement of the freezing point. The degree of dissociation is measured by determining the van 't Hoff factor "i" by first determining "m"B and then comparing it to "m"solute. In this case, the molar mass of the solute must be known. The molar mass of a solute is determined by comparing "m"B with the amount of solute dissolved. In this case, "i" must be known, and the procedure is primarily useful for organic compounds using a nonpolar solvent. Cryoscopy is no longer as common a measurement method as it once was, but it was included in textbooks at the turn of the 20th century. As an example, it was still taught as a useful analytic procedure in Cohen's "Practical Organic Chemistry" of 1910, in which the molar mass of naphthalene is determined using a "Beckmann freezing apparatus". Laboratory uses. Freezing-point depression can also be used as a purity analysis tool when analyzed by differential scanning calorimetry. The results obtained are in mol%, but the method has its place where other methods of analysis fail. In the laboratory, lauric acid may be used to investigate the molar mass of an unknown substance via the freezing-point depression. The choice of lauric acid is convenient because the melting point of the pure compound is relatively high (43.8 °C). Its cryoscopic constant is 3.9 °C·kg/mol. By melting lauric acid with the unknown substance, allowing it to cool, and recording the temperature at which the mixture freezes, the molar mass of the unknown compound may be determined. 
This is also the same principle acting in the melting-point depression observed when the melting point of an impure solid mixture is measured with a melting-point apparatus since melting and freezing points both refer to the liquid-solid phase transition (albeit in different directions). In principle, the boiling-point elevation and the freezing-point depression could be used interchangeably for this purpose. However, the cryoscopic constant is larger than the ebullioscopic constant, and the freezing point is often easier to measure with precision, which means measurements using the freezing-point depression are more precise. FPD measurements are also used in the dairy industry to ensure that milk has not had extra water added. Milk with a FPD of over 0.509 °C is considered to be unadulterated. Formula. For dilute solution. If the solution is treated as an ideal solution, the extent of freezing-point depression depends only on the solute concentration that can be estimated by a simple linear relationship with the cryoscopic constant ("Blagden's Law"). formula_0 formula_1 where: Some values of the cryoscopic constant "K"f for selected solvents: For concentrated solution. The simple relation above doesn't consider the nature of the solute, so it is only effective in a diluted solution. For a more accurate calculation at a higher concentration, for ionic solutes, Ge and Wang (2010) proposed a new equation: formula_10 In the above equation, "T"F is the normal freezing point of the pure solvent (273 K for water, for example); "a"liq is the activity of the solvent in the solution (water activity for aqueous solution); Δ"H"fusTF is the enthalpy change of fusion of the pure solvent at "T"F, which is 333.6 J/g for water at 273 K; Δ"C"fusp is the difference between the heat capacities of the liquid and solid phases at "T"F, which is 2.11 J/(g·K) for water. The solvent activity can be calculated from the Pitzer model or modified TCPC model, which typically requires 3 adjustable parameters. For the TCPC model, these parameters are available for many single salts. Ethanol example. The freezing point of ethanol water mixture is shown in the following graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
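As a worked example of the dilute-solution formula above, the sketch below estimates the freezing-point depression of two aqueous solutions. The cryoscopic constant used for water and the assumption of complete dissociation (a van 't Hoff factor of 2 for sodium chloride, 1 for a non-electrolyte) are textbook idealizations rather than measured values for any specific sample.
<syntaxhighlight lang="python">
# Sketch of the dilute-solution estimate dT_f = i * K_f * b for water as solvent.
# K_f for water and the van 't Hoff factors are idealized textbook values.
K_F_WATER = 1.853          # K kg / mol, cryoscopic constant of water

def freezing_point_depression(mass_solute_g, molar_mass_g_mol,
                              mass_solvent_kg, vant_hoff_i=1):
    """Return the estimated freezing-point depression in kelvin."""
    molality = (mass_solute_g / molar_mass_g_mol) / mass_solvent_kg   # mol/kg
    return vant_hoff_i * K_F_WATER * molality

# 58.4 g NaCl (about 1 mol) in 1 kg of water, assuming full dissociation (i = 2).
dT_nacl = freezing_point_depression(58.4, 58.44, 1.0, vant_hoff_i=2)
print(f"1 mol/kg NaCl(aq):        dT_f ~ {dT_nacl:.2f} K -> freezes near {-dT_nacl:.2f} C")

# 62 g ethylene glycol (about 1 mol) in 1 kg of water (non-electrolyte, i = 1).
dT_eg = freezing_point_depression(62.07, 62.07, 1.0, vant_hoff_i=1)
print(f"1 mol/kg ethylene glycol: dT_f ~ {dT_eg:.2f} K -> freezes near {-dT_eg:.2f} C")
</syntaxhighlight>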
[ { "math_id": 0, "text": " \\Delta T_f \\propto \\frac{\\text{Moles of dissolved species}}{\\text{Mass of solvent}}" }, { "math_id": 1, "text": " \\Delta T_f = K_fbi" }, { "math_id": 2, "text": "\\Delta T_f" }, { "math_id": 3, "text": " T_f^0" }, { "math_id": 4, "text": " T_f" }, { "math_id": 5, "text": " \\Delta T_f" }, { "math_id": 6, "text": " T_f = T_f^0 - \\Delta T_f" }, { "math_id": 7, "text": "K_f" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "\n\\Delta T_\\text{F} = \\frac{\\Delta H^\\text{fus}_{T_\\text{F}} - 2RT_\\text{F} \\cdot \\ln a_\\text{liq} - \\sqrt{2\\Delta C^\\text{fus}_p T^2_\\text{F}R \\cdot \\ln a_\\text{liq} + (\\Delta H^\\text{fus}_{T_\\text{F}})^2}}{2\\left(\\frac{\\Delta H^\\text{fus}_{T_\\text{F}}}{T_\\text{F}} + \\frac{\\Delta C^\\text{fus}_p}{2} - R \\cdot \\ln a_\\text{liq}\\right)}.\n" } ]
https://en.wikipedia.org/wiki?curid=607495
6075099
Binding potential
In pharmacokinetics and receptor-ligand kinetics the binding potential (BP) is a combined measure of the density of "available" neuroreceptors and the affinity of a drug to that neuroreceptor. Description. Consider a ligand receptor binding system. Ligand with a concentration "L" associates with a receptor of concentration or availability "R" to form a ligand-receptor complex with concentration "RL". The binding potential is then the ratio ligand-receptor complex to free ligand at equilibrium and in the limit of L tending to 0, and is given symbol BP: formula_0 This quantity, originally defined by Mintun, describes the capacity of a receptor to bind ligand. It is a limit (L « Ki) of the general receptor association equation: formula_1 and is thus also equivalent to: formula_2 These equations apply equally when measuring the total receptor density or the residual receptor density available after binding to second ligand - availability. BP in Positron Emission Tomography. BP is a pivotal measure in the use of positron emission tomography (PET) to measure the density of "available" receptors, e.g. to assess the occupancy by drugs or to characterize neuropsychiatric diseases (yet, one should keep in mind that binding potential is a combined measure that depends on receptor density as well as on affinity). An overview of the related methodology is e.g. given in Laruelle et al. (2002). Estimating BP with PET usually requires that a reference tissue is available. A reference tissue has negligible receptor density and its distribution volume should be the same as the distribution volume in the target region if all receptors were blocked. Although the BP can be measured in a relatively unbiased way by measuring the whole time course of labelled ligand association and blood radioactivity, this is practically not always necessary. Two other common measures have been derived, which involve assumptions, but result in measures that should correlate with BP: formula_3 and formula_4. Definitions and Symbols. While formula_3 and formula_4 are nonambiguous symbols, BP is not. There are many publications in which BP denotes formula_4. Generally, if there were no arterial samples ("noninvasive imaging"), BP denotes formula_4. formula_12: Total density of receptors = formula_13. In PET imaging, the amount of radioligand is usually very small (L « Ki, see above), thus formula_14 formula_15 and formula_16: Transfer rate constants from the two tissue compartment model. NEW NOTATIONAL CONVENTIONS: In Innis et al., a large group of researchers who are active in this field agreed to a consensus nomenclature for these terms, with the intent of making the literature in this field more transparent to non-specialists. The convention involves use of the subscripts p for quantities referred to plasma and ND for quantities referred to the free plus nonspecifically bound concentration in brain (NonDisplaceable). Under the consensus nomenclature, the parameters referred to above as f1 and BP1 are now called fp and BPp, while f2 and BP2 are called fND and BPND. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
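To make the relations above concrete, the toy calculation below evaluates BP = R/Ki from an assumed available receptor density and affinity, and BP2 = k3/k4 from assumed compartmental rate constants; with an assumed free fraction f2 the two are consistent through BP2 = f2·BP. All numbers are invented for illustration and are not typical of any particular tracer.
<syntaxhighlight lang="python">
# Toy sketch of the binding-potential relations quoted above.
# All numbers are invented for illustration (not real tracer parameters).

def binding_potential(receptor_density_nM, Ki_nM):
    """BP = R / Ki in the tracer (L << Ki) limit."""
    return receptor_density_nM / Ki_nM

def bp2_from_rate_constants(k3_per_min, k4_per_min):
    """BP2 = k3 / k4 from the two-tissue compartment model."""
    return k3_per_min / k4_per_min

R_avail = 20.0       # assumed available receptor density, nM
Ki = 2.5             # assumed tracer affinity (inhibition constant), nM
k3, k4 = 0.12, 0.06  # assumed transfer rate constants, 1/min
f2 = 0.25            # assumed free fraction in tissue

print(f"BP  = R/Ki  = {binding_potential(R_avail, Ki):.1f} (unitless)")
print(f"BP2 = k3/k4 = {bp2_from_rate_constants(k3, k4):.1f} (unitless)")
# Since BP2 = f2 * BP, dividing by the assumed f2 recovers the same BP.
print(f"BP2 / f2    = {bp2_from_rate_constants(k3, k4) / f2:.1f}")
</syntaxhighlight>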
[ { "math_id": 0, "text": "BP=\\frac{RL}{L}\\bigg|_{L\\approx0}" }, { "math_id": 1, "text": "RL=\\frac{R*L}{L+Ki}" }, { "math_id": 2, "text": "BP=\\frac{R}{K_i}" }, { "math_id": 3, "text": "BP_1" }, { "math_id": 4, "text": "BP_2" }, { "math_id": 5, "text": "V_3''" }, { "math_id": 6, "text": "BP_2=k_3/k_4" }, { "math_id": 7, "text": "BP_2=f_2BP" }, { "math_id": 8, "text": "f_2" }, { "math_id": 9, "text": "BP'" }, { "math_id": 10, "text": "BP_1=f_1BP" }, { "math_id": 11, "text": "f_1" }, { "math_id": 12, "text": "B_{max}" }, { "math_id": 13, "text": "R+RL" }, { "math_id": 14, "text": "B_{max} \\approx R" }, { "math_id": 15, "text": "k_3" }, { "math_id": 16, "text": "k_4" } ]
https://en.wikipedia.org/wiki?curid=6075099
607530
Reaction mechanism
Any model explaining a chemical reaction In chemistry, a reaction mechanism is the step by step sequence of elementary reactions by which overall chemical reaction occurs. A chemical mechanism is a theoretical conjecture that tries to describe in detail what takes place at each stage of an overall chemical reaction. The detailed steps of a reaction are not observable in most cases. The conjectured mechanism is chosen because it is thermodynamically feasible and has experimental support in isolated intermediates (see next section) or other quantitative and qualitative characteristics of the reaction. It also describes each reactive intermediate, activated complex, and transition state, which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also explain the reason for the reactants and catalyst used, the stereochemistry observed in reactants and products, all products formed and the amount of each. The electron or arrow pushing method is often used in illustrating a reaction mechanism; for example, see the illustration of the mechanism for benzoin condensation in the following examples section. A reaction mechanism must also account for the order in which molecules react. Often what appears to be a single-step conversion is in fact a multistep reaction. Reaction intermediates. Reaction intermediates are chemical species, often unstable and short-lived (however sometimes can be isolated), which are not reactants or products of the overall chemical reaction, but are temporary products and/or reactants in the mechanism's reaction steps. Reaction intermediates are often free radicals or ions. Reaction intermediates are often confused with the transition state. The transition state is a fleeting, high-energy configuration that exists only at the peak of the energy barrier during a reaction, while a reaction intermediate is a relatively stable species that exists for a measurable time between steps in a reaction. Unlike the transition state, intermediates can sometimes be isolated or observed directly. The kinetics (relative rates of the reaction steps and the rate equation for the overall reaction) are explained in terms of the energy needed for the conversion of the reactants to the proposed transition states (molecular states that correspond to maxima on the reaction coordinates, and to saddle points on the potential energy surface for the reaction). Chemical kinetics. Information about the mechanism of a reaction is often provided by the use of chemical kinetics to determine the rate equation and the reaction order in each reactant. Consider the following reaction for example: CO + NO2 → CO2 + NO In this case, experiments have determined that this reaction takes place according to the rate law formula_0. This form suggests that the rate-determining step is a reaction between two molecules of NO2. A possible mechanism for the overall reaction that explains the rate law is: 2 NO2 → NO3 + NO (slow) NO3 + CO → NO2 + CO2 (fast) Each step is called an elementary step, and each has its own rate law and molecularity. The elementary steps should add up to the original reaction. (Meaning, if we were to cancel out all the molecules that appear on both sides of the reaction, we would be left with the original reaction.) When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. 
Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate formula_1 which obeys the rate law formula_2. Other reactions may have mechanisms of several consecutive steps. In organic chemistry, the reaction mechanism for the benzoin condensation, put forward in 1903 by A. J. Lapworth, was one of the first proposed reaction mechanisms. A chain reaction is an example of a complex mechanism, in which the propagation steps form a closed cycle. In a chain reaction, the intermediate produced in one step generates an intermediate in another step. Intermediates are called chain carriers. Sometimes the chain carriers are radicals; they can be ions as well. In nuclear fission they are neutrons. A chain reaction can involve several types of steps; even though all of these can appear in one chain reaction, the minimum necessary ones are initiation, propagation, and termination. An example of a simple chain reaction is the thermal decomposition of acetaldehyde (CH3CHO) to methane (CH4) and carbon monoxide (CO). The experimental reaction order is 3/2, which can be explained by a "Rice-Herzfeld mechanism". This reaction mechanism for acetaldehyde has 4 steps, each with its own rate equation: initiation, CH3CHO → •CH3 + •CHO (rate constant k1); propagation, •CH3 + CH3CHO → CH4 + CH3CO• (k2); propagation, CH3CO• → •CH3 + CO (k3); and termination, •CH3 + •CH3 → C2H6 (k4). For the overall reaction, the rates of change of the concentration of the intermediates •CH3 and CH3CO• are zero, according to the steady-state approximation, which is used to account for the rate laws of chain reactions. d[•CH3]/dt = k1[CH3CHO] – k2[•CH3][CH3CHO] + k3[CH3CO•] - 2k4[•CH3]2 = 0 and d[CH3CO•]/dt = k2[•CH3][CH3CHO] – k3[CH3CO•] = 0 The sum of these two equations is k1[CH3CHO] – 2 k4[•CH3]2 = 0. This may be solved to find the steady-state concentration of •CH3 radicals as [•CH3] = (k1 / 2k4)1/2 [CH3CHO]1/2. It follows that the rate of formation of CH4 is d[CH4]/dt = k2[•CH3][CH3CHO] = k2 (k1 / 2k4)1/2 [CH3CHO]3/2 Thus the mechanism explains the observed rate expression, for the principal products CH4 and CO. The exact rate law may be even more complicated; there are also minor products such as acetone (CH3COCH3) and propanal (CH3CH2CHO). Other experimental methods to determine mechanism. Many experiments that suggest the possible sequence of steps in a reaction mechanism have been designed. Theoretical modeling. A correct reaction mechanism is an important part of accurate predictive modeling. For many combustion and plasma systems, detailed mechanisms are not available or require development. Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group additivity methods must be used to obtain the required parameters. Computational chemistry methods can also be used to calculate potential energy surfaces for reactions and determine probable mechanisms. Molecularity. Molecularity in chemistry is the number of colliding molecular entities that are involved in a single reaction step. In general, reaction steps involving more than three molecular entities do not occur, because it is statistically improbable, in terms of the Maxwell distribution, to find such a transition state. References. &lt;templatestyles src="Reflist/styles.css" /&gt; L. G. Wade, Organic Chemistry, 7th ed., 2010.
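The steady-state algebra above can be checked symbolically. The following Python sketch (an illustrative check, not part of the source text; the symbol A stands for [CH3CHO]) solves the two steady-state equations for the chain-carrier concentrations and confirms that the rate of CH4 formation is proportional to [CH3CHO] raised to the power 3/2.
import sympy as sp

# Rate constants and the acetaldehyde concentration A = [CH3CHO].
k1, k2, k3, k4, A = sp.symbols('k1 k2 k3 k4 A', positive=True)
# Chain-carrier concentrations [•CH3] and [CH3CO•].
CH3, CH3CO = sp.symbols('CH3 CH3CO', positive=True)

# Steady-state conditions for the two chain carriers.
eq1 = sp.Eq(k1*A - k2*CH3*A + k3*CH3CO - 2*k4*CH3**2, 0)
eq2 = sp.Eq(k2*CH3*A - k3*CH3CO, 0)

sol = sp.solve([eq1, eq2], [CH3, CH3CO], dict=True)[0]
rate_CH4 = sp.simplify(k2 * sol[CH3] * A)

print(sol[CH3])    # equals sqrt(k1*A/(2*k4)), i.e. proportional to A**(1/2)
print(rate_CH4)    # proportional to A**(3/2), reproducing the observed 3/2 order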
[ { "math_id": 0, "text": "r = k[NO_2]^2" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "r = k[NO_{2}(t)]^2" } ]
https://en.wikipedia.org/wiki?curid=607530
607577
Activated complex
In chemistry, an activated complex represents a collection of intermediate structures in a chemical reaction when bonds are breaking and forming. The activated complex is an arrangement of atoms in an arbitrary region near the saddle point of a potential energy surface. The region represents not one defined state, but a range of unstable configurations that a collection of atoms pass through between the reactants and products of a reaction. Activated complexes have partial reactant and product character, which can significantly impact their behaviour in chemical reactions. The terms activated complex and transition state are often used interchangeably, but they represent different concepts. Transition states only represent the highest potential energy configuration of the atoms during the reaction, while activated complex refers to a range of configurations near the transition state. In a reaction coordinate, the transition state is the configuration at the maximum of the diagram while the activated complex can refer to any point near the maximum. Transition state theory (also known as activated complex theory) studies the kinetics of reactions that pass through a defined intermediate state with standard Gibbs energy of activation Δ"G"°‡. The transition state, represented by the double dagger symbol (‡), is the exact configuration of atoms that has an equal probability of forming either the reactants or the products of the given reaction. The activation energy is the minimum amount of energy required to initiate a chemical reaction and form the activated complex. The energy serves as a threshold that reactant molecules must surpass to overcome the energy barrier and transition into the activated complex. Endothermic reactions absorb energy from the surroundings, while exothermic reactions release energy. Some reactions occur spontaneously, while others necessitate an external energy input. The reaction can be visualized using a reaction coordinate diagram to show the activation energy and potential energy throughout the reaction. Activated complexes were first discussed in transition state theory (also called activated complex theory), which was first developed by Eyring, Evans, and Polanyi in 1935. Reaction Rate. Transition State Theory. Transition state theory explains reaction dynamics. The theory is based on the idea that there is an equilibrium between the activated complex and reactant molecules. The theory incorporates concepts from collision theory, which states that for a reaction to occur, reacting molecules must collide with a minimum energy and correct orientation. The reactants are first transformed into the activated complex before breaking into the products. From the properties of the activated complex and reactants, the reaction rate constant is formula_0, where K is the equilibrium constant, formula_1 is the Boltzmann constant, T is the thermodynamic temperature, and h is Planck's constant. Transition state theory is based on classical mechanics, as it assumes that as the reaction proceeds, the molecules will never return to the transition state. Symmetry. An activated complex with high symmetry can decrease the accuracy of rate expressions. Error can arise from introducing symmetry numbers into the rotational partition functions for the reactants and activated complexes. 
To reduce errors, symmetry numbers can be omitted by multiplying the rate expression by a statistical factor: formula_2, where the statistical factor formula_3 is the number of equivalent activated complexes that can be formed, and the Q are the partition functions with the symmetry numbers omitted. The activated complex is a collection of molecules that forms and then explodes along a particular internal normal coordinate. Ordinary molecules have three translational degrees of freedom, and their properties are similar to those of activated complexes. However, activated complexes have an extra degree of translation associated with their approach to the energy barrier, crossing it, and then dissociating.
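The magnitude of the frequency factor k_BT/h appearing in the rate-constant expression above is easy to evaluate. The short Python sketch below (an illustrative calculation, not from the source article) computes it at 298.15 K using the exact SI values of the Boltzmann and Planck constants.
# Boltzmann and Planck constants (exact SI values).
k_B = 1.380649e-23    # J/K
h = 6.62607015e-34    # J*s

T = 298.15  # temperature in kelvin
prefactor = k_B * T / h
print(f"k_B*T/h at {T} K: {prefactor:.3e} per second")  # roughly 6.2e12 s^-1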
[ { "math_id": 0, "text": "k = K \\frac{k_B T}{h}" }, { "math_id": 1, "text": "k_B" }, { "math_id": 2, "text": "k = l^\\ddagger \\frac{k_B T}{h} \\frac{Q_\\ddagger}{Q_A Q_B} e^\\left(-\\frac{\\epsilon}{k_B T}\\right)" }, { "math_id": 3, "text": "l^\\ddagger" } ]
https://en.wikipedia.org/wiki?curid=607577
60758
Box–Muller transform
Statistical transform The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers. The method was first mentioned explicitly by Raymond E. A. C. Paley and Norbert Wiener in their 1934 treatise on Fourier transforms in the complex domain. Given the status of these latter authors and the widespread availability and use of their treatise, it is almost certain that Box and Muller were well aware of its contents. The Box–Muller transform is commonly expressed in two forms. The basic form as given by Box and Muller takes two samples from the uniform distribution on the interval (0,1) and maps them to two standard, normally distributed samples. The polar form takes two samples from a different interval, [−1,+1], and maps them to two normally distributed samples without the use of sine or cosine functions. The Box–Muller transform was developed as a more computationally efficient alternative to the inverse transform sampling method. The ziggurat algorithm gives a more efficient method for scalar processors (e.g. old CPUs), while the Box–Muller transform is superior for processors with vector units (e.g. GPUs or modern CPUs). Basic form. Suppose "U"1 and "U"2 are independent samples chosen from the uniform distribution on the unit interval (0, 1). Let formula_0 and formula_1 Then "Z"0 and "Z"1 are independent random variables with a standard normal distribution. The derivation is based on a property of a two-dimensional Cartesian system, where X and Y coordinates are described by two independent and normally distributed random variables, the random variables for "R"2 and Θ (shown above) in the corresponding polar coordinates are also independent and can be expressed as formula_2 and formula_3 Because "R"2 is the square of the norm of the standard bivariate normal variable ("X", "Y"), it has the chi-squared distribution with two degrees of freedom. In the special case of two degrees of freedom, the chi-squared distribution coincides with the exponential distribution, and the equation for "R"2 above is a simple way of generating the required exponential variate. Polar form. The polar form was first proposed by J. Bell and then modified by R. Knop. While several different versions of the polar method have been described, the version of R. Knop will be described here because it is the most widely used, in part due to its inclusion in "Numerical Recipes". A slightly different form is described as "Algorithm P" by D. Knuth in "The Art of Computer Programming". Given u and v, independent and uniformly distributed in the closed interval [−1, +1], set "s" = "R"2 = "u"2 + "v"2. If "s" = 0 or "s" ≥ 1, discard "u" and "v", and try another pair ("u", "v"). Because u and v are uniformly distributed and because only points within the unit circle have been admitted, the values of "s" will be uniformly distributed in the open interval (0, 1), too. The latter can be seen by calculating the cumulative distribution function for "s" in the interval (0, 1). This is the area of a circle with radius formula_4, divided by formula_5. From this we find the probability density function to have the constant value 1 on the interval (0, 1). Equally so, the angle θ divided by formula_6 is uniformly distributed in the interval [0, 1) and independent of s. 
We now identify the value of "s" with that of "U"1 and formula_7 with that of "U"2 in the basic form. As shown in the figure, the values of formula_8 and formula_9 in the basic form can be replaced with the ratios formula_10 and formula_11, respectively. The advantage is that calculating the trigonometric functions directly can be avoided. This is helpful when trigonometric functions are more expensive to compute than the single division that replaces each one. Just as the basic form produces two standard normal deviates, so does this alternate calculation. formula_12 and formula_13 Contrasting the two forms. The polar method differs from the basic method in that it is a type of rejection sampling. It discards some generated random numbers, but can be faster than the basic method because it is simpler to compute (provided that the random number generator is relatively fast) and is more numerically robust. Avoiding the use of expensive trigonometric functions improves speed over the basic form. It discards 1 − π/4 ≈ 21.46% of the total input uniformly distributed random number pairs generated, i.e. discards 4/π − 1 ≈ 27.32% uniformly distributed random number pairs per Gaussian random number pair generated, requiring 4/π ≈ 1.2732 input random numbers per output random number. The basic form requires two multiplications, 1/2 logarithm, 1/2 square root, and one trigonometric function for each normal variate. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based machines, one can use the fsincos assembler instruction or the expi instruction (usually available from C as an intrinsic function), to calculate complex formula_14 and just separate the real and imaginary parts. Note: To explicitly calculate the complex-polar form use the following substitutions in the general form. Let formula_15 and formula_16. Then formula_17 The polar form requires 3/2 multiplications, 1/2 logarithm, 1/2 square root, and 1/2 division for each normal variate. The effect is to replace one multiplication and one trigonometric function with a single division and a conditional loop. Tails truncation. When a computer is used to produce a uniform random variable it will inevitably have some inaccuracies because there is a lower bound on how close numbers can be to 0. If the generator uses 32 bits per output value, the smallest non-zero number that can be generated is formula_18. When formula_19 and formula_20 are equal to this, the Box–Muller transform produces a normal random deviate equal to formula_21. This means that the algorithm will not produce random variables more than 6.660 standard deviations from the mean. This corresponds to a proportion of formula_22 lost due to the truncation, where formula_23 is the standard cumulative normal distribution. With 64 bits the limit is pushed to formula_24 standard deviations, for which formula_25. Implementation. C++. The standard Box–Muller transform generates values from the standard normal distribution ("i.e." standard normal deviates) with mean "0" and standard deviation "1". The implementation below in standard C++ generates values from any normal distribution with mean formula_26 and variance formula_27. If formula_28 is a standard normal deviate, then formula_29 will have a normal distribution with mean formula_26 and standard deviation formula_30. The random number generator has been seeded to ensure that new, pseudo-random values will be returned from sequential calls to the codice_0 function. 
//"mu" is the mean of the distribution, and "sigma" is the standard deviation. std::pair&lt;double, double&gt; generateGaussianNoise(double mu, double sigma) constexpr double two_pi = 2.0 * M_PI; //initialize the random uniform number generator (runif) in a range 0 to 1 static std::mt19937 rng(std::random_device{}()); // Standard mersenne_twister_engine seeded with rd() static std::uniform_real_distribution&lt;&gt; runif(0.0, 1.0); //create two random numbers, make sure u1 is greater than zero double u1, u2; do u1 = runif(rng); while (u1 == 0); u2 = runif(rng); //compute z0 and z1 auto mag = sigma * sqrt(-2.0 * log(u1)); auto z0 = mag * cos(two_pi * u2) + mu; auto z1 = mag * sin(two_pi * u2) + mu; return std::make_pair(z0, z1); JavaScript. function rand_normal() { /* Syntax: * [ x, y ] = rand_normal(); * x = rand_normal()[0]; * y = rand_normal()[1]; */ // Box-Muller Transform: let phi = 2 * Math.PI * Math.random(); let R = Math.sqrt( -2 * Math.log( Math.random() ) ); let x = R * Math.cos(phi); let y = R * Math.sin(phi); // Return values: return [ x, y ]; Julia. boxmullersample(N) Generate `2N` samples from the standard normal distribution using the Box-Muller method. function boxmullersample(N) z = Array{Float64}(undef,N,2); for i in axes(z,1) z[i,:] .= sincospi(2 * rand()); z[i,:] .*= sqrt(-2 * log(rand())); end vec(z) end boxmullersample(n,μ,σ) Generate `n` samples from the normal distribution with mean `μ` and standard deviation `σ` using the Box-Muller method. function boxmullersample(n,μ,σ) μ .+ σ*boxmullersample(cld(n,2))[1:n]; end
[ { "math_id": 0, "text": "Z_0 = R \\cos(\\Theta) =\\sqrt{-2 \\ln U_1} \\cos(2 \\pi U_2)\\," }, { "math_id": 1, "text": "Z_1 = R \\sin(\\Theta) = \\sqrt{-2 \\ln U_1} \\sin(2 \\pi U_2).\\," }, { "math_id": 2, "text": "R^2 = -2\\cdot\\ln U_1\\," }, { "math_id": 3, "text": "\\Theta = 2\\pi U_2. \\," }, { "math_id": 4, "text": " \\sqrt{s}" }, { "math_id": 5, "text": "\\pi" }, { "math_id": 6, "text": " 2 \\pi" }, { "math_id": 7, "text": " \\theta/(2 \\pi)" }, { "math_id": 8, "text": " \\cos \\theta = \\cos 2 \\pi U_2" }, { "math_id": 9, "text": " \\sin \\theta = \\sin 2 \\pi U_2" }, { "math_id": 10, "text": "\\cos \\theta = u/R = u/\\sqrt{s}" }, { "math_id": 11, "text": "\\sin \\theta = v/R = v/\\sqrt{s}" }, { "math_id": 12, "text": "z_0 = \\sqrt{-2 \\ln U_1} \\cos(2 \\pi U_2) = \\sqrt{-2 \\ln s} \\left( \\frac{u}{\\sqrt{s}}\\right) = u \\cdot \\sqrt{\\frac{-2 \\ln s}{s}}" }, { "math_id": 13, "text": "z_1 = \\sqrt{-2 \\ln U_1} \\sin(2 \\pi U_2) = \\sqrt{-2 \\ln s} \\left( \\frac{v}{\\sqrt{s}}\\right) = v \\cdot \\sqrt{\\frac{-2 \\ln s}{s}}." }, { "math_id": 14, "text": "\\exp(iz) = e^{i z} = \\cos z + i \\sin z, \\, " }, { "math_id": 15, "text": " r = \\sqrt{- \\ln(u_1)} " }, { "math_id": 16, "text": " z = 2 \\pi u_2. " }, { "math_id": 17, "text": " re^{i z} = \\sqrt{- \\ln(u_1)} e^{i 2 \\pi u_2} =\\sqrt{-2 \\ln(u_1)}\\left[ \\cos(2 \\pi u_2) + i \\sin(2 \\pi u_2)\\right]." }, { "math_id": 18, "text": "2^{-32}" }, { "math_id": 19, "text": "U_1" }, { "math_id": 20, "text": "U_2" }, { "math_id": 21, "text": "\\delta = \\sqrt{-2 \\ln(2^{-32})} \\cos(2 \\pi 2^{-32})\\approx 6.660" }, { "math_id": 22, "text": "2(1-\\Phi(\\delta)) \\simeq 2.738 \\times 10^{-11}" }, { "math_id": 23, "text": "\\Phi(\\delta)" }, { "math_id": 24, "text": "\\delta = 9.419" }, { "math_id": 25, "text": "2(1-\\Phi(\\delta)) < 5 \\times 10^{-21}" }, { "math_id": 26, "text": "\\mu" }, { "math_id": 27, "text": "\\sigma^2" }, { "math_id": 28, "text": "Z" }, { "math_id": 29, "text": "X = Z\\sigma + \\mu" }, { "math_id": 30, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=60758
607593
Taylor rule
Rule from monetary policy The Taylor rule is a monetary policy targeting rule. The rule was proposed in 1992 by American economist John B. Taylor for central banks to use to stabilize economic activity by appropriately setting short-term interest rates. The rule considers the federal funds rate, the price level and changes in real income. The Taylor rule computes the optimal federal funds rate based on the gap between the desired (targeted) inflation rate and the actual inflation rate; and the output gap between the actual and natural output level. According to Taylor, monetary policy is stabilizing when the nominal interest rate is higher/lower than the increase/decrease in inflation. Thus the Taylor rule prescribes a relatively high interest rate when actual inflation is higher than the inflation target. In the United States, the Federal Open Market Committee controls monetary policy. The committee attempts to achieve an average inflation rate of 2% (with an equal likelihood of higher or lower inflation). The main advantage of a general targeting rule is that a central bank gains the discretion to apply multiple means to achieve the set target. The monetary policy of the Federal Reserve changed throughout the 20th century. The period between the 1960s and the 1970s is evaluated by Taylor and others as a period of poor monetary policy; the later years typically characterized as stagflation. The inflation rate was high and increasing, while interest rates were kept low. Since the mid-1970s monetary targets have been used in many countries as a means to target inflation. However, in the 2000s the actual interest rate in advanced economies, notably in the US, was kept below the value suggested by the Taylor rule. The Taylor rule is typically contrasted with discretionary monetary policy, which relies on the personal views of the monetary policy authorities. The Taylor rule often faces criticism due to the limited number of factors it considers. Equation. According to Taylor's original version of the rule, the real policy interest rate should respond to divergences of actual inflation rates from target inflation rates and of actual Gross Domestic Product (GDP) from potential GDP: formula_0 In this equation, formula_1 is the target short-term nominal policy interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), formula_2 is the rate of inflation as measured by the GDP deflator, formula_3 is the desired rate of inflation, formula_4 is the assumed natural/equilibrium interest rate, formula_5 is the natural logarithm of actual GDP, and formula_6 is the natural logarithm of potential output, as determined by a linear trend. formula_7 is the output gap. The formula_8 approximation is used here. Because of formula_9, formula_10 In this equation, both formula_11 and formula_12 should be positive (as a rough rule of thumb, Taylor's 1993 paper proposed setting formula_13). That is, the rule produces a relatively high real interest rate (a "tight" monetary policy) when inflation is above its target or when output is above its full-employment level, in order to reduce inflationary pressure. It recommends a relatively low real interest rate ("easy" monetary policy) in the opposite situation, to stimulate output. Sometimes monetary policy goals may conflict, as in the case of stagflation, when inflation is above its target with a substantial output gap. 
In such a situation, a Taylor rule specifies the relative weights given to reducing inflation versus increasing output. Principle. By specifying formula_14, the Taylor rule says that an increase in inflation by one percentage point should prompt the central bank to raise the nominal interest rate by more than one percentage point (specifically, by formula_15, the sum of the two coefficients on formula_16 in the equation). Since the real interest rate is (approximately) the nominal interest rate minus inflation, stipulating formula_14 implies that when inflation rises, the real interest rate should be increased. The idea that the nominal interest rate should be raised "more than one-for-one" to cool the economy when inflation increases (that is increasing the real interest rate) has been called the Taylor principle. The Taylor principle presumes a unique bounded equilibrium for inflation. If the Taylor principle is violated, then the inflation path may be unstable. History. The concept of a policy rule emerged as part of the discussion on whether monetary policy should be based on intuition/discretion. The discourse began at the beginning of the 19th century. The first formal debate forum was launched in the 1920s by the US House Committee on Banking and Currency. In the hearing on the so-called Strong bill, introduced in 1923 by Representative James G. Strong of Kansas, the conflict in the views on monetary policy clearly appeared. New York Fed Governor Benjamin Strong Jr. (no relation to Representative Strong), supported by Professors John R. Commons and Irving Fisher, was concerned about the Fed's practices that attempted to ensure price stability. In his opinion, Federal Reserve policy regarding the price level could not guarantee long-term stability. After the death of Governor Strong in 1928, political debate on changing the Fed's policy was suspended. The Fed had been dominated by Strong and his New York Reserve Bank. After the Great Depression hit the country, policies came under debate. Irving Fisher opined, "this depression was almost wholly preventable and that it would have been prevented if Governor Strong had lived, who was conducting open-market operations with a view of bringing about stability". Later on, monetarists such as Milton Friedman and Anna Schwartz agreed that high inflation could be avoided if the Fed managed the quantity of money more consistently. The 1960s recession in the US was accompanied by relatively high interest rates. After the Bretton Woods agreement collapsed, policymakers focused on keeping interest rates low, which yielded the Great Inflation of 1970. Since the mid-1970s money supply targets have been used in many countries to address inflation targets. Many advanced economies, such as the US and the UK, made their policy rates broadly consistent with the Taylor rule in the period of the Great Moderation between the mid-1980s and early 2000s. That period was characterized by limited inflation/stable prices. New Zealand went first, adopting an inflation target in 1990. The Reserve Bank of New Zealand was reformed to prioritize price stability, gaining more independence at the same time. The Bank of Canada (1991) and by 1994 the banks of Sweden, Finland, Australia, Spain, Israel and Chile were given the mandate to target inflation. Since the 2000s began the actual interest rate in advanced economies, especially in the US, was below that suggested by the Taylor rule. 
The deviation can be explained by the fact that central banks were supposed to mitigate the outcomes of financial busts, while intervening only given inflation expectations. Economic shocks were accompanied by lower rates. Alternative versions. While the Taylor principle has proven influential, debate remains about what else the rule should incorporate. According to some New Keynesian macroeconomic models, insofar as the central bank keeps inflation stable, the degree of fluctuation in output will be optimized (economists Olivier Blanchard and Jordi Gali call this property the 'divine coincidence'). In this case, the central bank does not need to take fluctuations in the output gap into account when setting interest rates (that is, it may optimally set formula_17). Other economists proposed adding terms to the Taylor rule to take into account financial conditions: for example, the interest rate might be raised when stock prices, housing prices, or interest rate spreads increase. Taylor offered a modified rule in 1999 that specified formula_18. Alternative theories. The solvency rule was presented by Emiliano Brancaccio after the 2008 financial crisis. The central banker follows a rule aimed at controlling the economy's solvency. The inflation target and output gap are neglected, while the interest rate is conditional upon the solvency of workers and firms. The solvency rule was presented more as a benchmark than a mechanistic formula. The McCallum rule was offered by economist Bennett T. McCallum at the end of the 20th century. It targets nominal gross domestic product: he proposed that the Fed stabilize nominal GDP. The McCallum rule uses precise financial data. Thus, it can overcome the problem of unobservable variables. Market monetarism extended the idea of NGDP targeting to include level targeting (targeting a specific amount of growth per time period, and accelerating/decelerating growth to compensate for prior periods of weakness/strength). It also introduced the concept of targeting the forecast, such that policy is set to achieve the goal rather than merely to lean in one direction or the other. One proposed mechanism for assessing the impact of policy was to establish an NGDP futures market and use it to draw upon the insights of that market to direct policy. Empirical relevance. Although the Federal Reserve does not follow the Taylor rule, many analysts have argued that it provides a fairly accurate explanation of US monetary policy under Paul Volcker and Alan Greenspan, as well as of policy in other developed economies. This observation has been cited by Clarida, Galí, and Gertler as a reason why inflation had remained under control and the economy had been relatively stable in most developed countries from the 1980s through the 2000s. However, according to Taylor, the rule was not followed in part of the 2000s, possibly inflating the housing bubble. Some research has reported that households form expectations about the future path of interest rates, inflation, and unemployment in a way that is consistent with Taylor-type rules. Limitations. The Taylor rule is debated in the discourse of rules vs. discretion, and it is subject to several limitations. Taylor highlighted that the rule should not be followed blindly: "…There will be episodes where monetary policy will need to be adjusted to deal with special factors." Criticisms. Athanasios Orphanides (2003) claimed that the Taylor rule can mislead policy makers who face real-time data. 
He claimed that the Taylor rule matches the US funds rate less perfectly when accounting for informational limitations and that an activist policy following the Taylor rule would have resulted in inferior macroeconomic performance during the 1970s. In 2015, "Bond King" Bill Gross said the Taylor rule "must now be discarded into the trash bin of history", in light of tepid GDP growth in the years after 2009. Gross believed that low interest rates were not the cure for decreased growth, but the source of the problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
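The prescription of the original rule is simple enough to state as a few lines of code. The Python sketch below (an illustration, not an official formulation; it assumes Taylor's 1993 choices of a 2% equilibrium real rate and coefficients of 0.5 on both gaps) returns the nominal policy rate implied by a given inflation rate and output gap, both in percent.
def taylor_rule_rate(inflation, inflation_target=2.0, output_gap=0.0,
                     equilibrium_real_rate=2.0, a_pi=0.5, a_y=0.5):
    """Nominal policy rate (percent) prescribed by Taylor's original rule."""
    return (inflation + equilibrium_real_rate
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# Illustrative inputs: 3% inflation against a 2% target, output 1% above potential.
print(taylor_rule_rate(inflation=3.0, inflation_target=2.0, output_gap=1.0))  # 6.0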
[ { "math_id": 0, "text": "i_t = \\pi_t + r_t^* + a_\\pi ( \\pi_t - \\pi_t^* ) + a_y ( y_t - \\bar y_t )." }, { "math_id": 1, "text": "\\,i_t\\," }, { "math_id": 2, "text": "\\,\\pi_t\\," }, { "math_id": 3, "text": "\\pi^*_t" }, { "math_id": 4, "text": "r_t^*" }, { "math_id": 5, "text": "\\,y_t\\," }, { "math_id": 6, "text": "\\bar y_t" }, { "math_id": 7, "text": "y_t - \\bar y_t" }, { "math_id": 8, "text": "\\ln(1 + x) = x" }, { "math_id": 9, "text": "i_t - \\pi_t = \\mbox{real policy interest rate}" }, { "math_id": 10, "text": "\\begin{align}\n \\mbox{Desired real policy interest rate} &= \\mbox{equilibrium real interest rate} \\\\\n &+ a_{\\pi} \\times \\mbox{difference from the inflation target} \\\\\n &+ a_y \\times \\mbox{output gap} \\\\\n \\end{align}\n" }, { "math_id": 11, "text": "a_{\\pi}" }, { "math_id": 12, "text": "a_y" }, { "math_id": 13, "text": "a_{\\pi}=a_y=0.5" }, { "math_id": 14, "text": "a_{\\pi}>0" }, { "math_id": 15, "text": "1+a_{\\pi}" }, { "math_id": 16, "text": "\\pi_t" }, { "math_id": 17, "text": "a_y=0" }, { "math_id": 18, "text": "a_{\\pi} = 0.5, a_y \\ge 0" } ]
https://en.wikipedia.org/wiki?curid=607593
607631
Rhombic dodecahedron
Catalan solid with 12 faces In geometry, the rhombic dodecahedron is a convex polyhedron with 12 congruent rhombic faces. It has 24 edges and 14 vertices of 2 types. It is a Catalan solid, and the dual polyhedron of the cuboctahedron. Properties. The rhombic dodecahedron is a zonohedron. Its polyhedral dual is the cuboctahedron. The long face-diagonal length is exactly √2 times the short face-diagonal length; thus, the acute angles on each face measure arccos(1/3), or approximately 70.53°. Being the dual of an Archimedean polyhedron, the rhombic dodecahedron is face-transitive, meaning the symmetry group of the solid acts transitively on its set of faces. In elementary terms, this means that for any two faces A and B, there is a rotation or reflection of the solid that leaves it occupying the same region of space while moving face A to face B. The rhombic dodecahedron can be viewed as the convex hull of the union of the vertices of a cube and an octahedron where the edges intersect perpendicularly. The 6 vertices where 4 rhombi meet correspond to the vertices of the octahedron, while the 8 vertices where 3 rhombi meet correspond to the vertices of the cube. The rhombic dodecahedron is one of the nine edge-transitive convex polyhedra, the others being the five Platonic solids, the cuboctahedron, the icosidodecahedron, and the rhombic triacontahedron. The rhombic dodecahedron can be used to tessellate three-dimensional space: it can be stacked to fill a space, much like hexagons fill a plane. This polyhedron in a space-filling tessellation can be seen as the Voronoi tessellation of the face-centered cubic lattice. It is the Brillouin zone of body-centered cubic (bcc) crystals. Some minerals such as garnet form a rhombic dodecahedral crystal habit. As Johannes Kepler noted in his 1611 book on snowflakes ("Strena seu de Nive Sexangula"), honey bees use the geometry of rhombic dodecahedra to form honeycombs from a tessellation of cells each of which is a hexagonal prism capped with half a rhombic dodecahedron. The rhombic dodecahedron also appears in the unit cells of diamond and diamondoids. In these cases, four vertices (alternate threefold ones) are absent, but the chemical bonds lie on the remaining edges. The graph of the rhombic dodecahedron is nonhamiltonian. A rhombic dodecahedron can be dissected into 4 obtuse trigonal trapezohedra around its center. These rhombohedra are the cells of a trigonal trapezohedral honeycomb. Analogy: a regular hexagon can be dissected into 3 rhombi around its center. These rhombi are the tiles of a rhombille. The collections of the Louvre include a die in the shape of a rhombic dodecahedron dating from Ptolemaic Egypt. The faces are inscribed with Greek letters representing the numbers 1 through 12: Α Β Γ Δ Ε Ϛ Z Η Θ Ι ΙΑ ΙΒ. The function of the die is unknown. Dimensions. Denoting by "a" the edge length of a rhombic dodecahedron: the radius of the inscribed sphere (tangent to each face) is formula_0, the radius of the midsphere (tangent to each edge) is formula_1, the radius of the sphere passing through the six 4-fold vertices is formula_2, and the radius of the sphere passing through the eight 3-fold vertices is formula_3. Area and volume. The surface area "A" and the volume "V" of the rhombic dodecahedron with edge length "a" are: formula_4 formula_5 Orthogonal projections. The "rhombic dodecahedron" has four special orthogonal projections along its axes of symmetry, centered on a face, an edge, and the two types of vertex, threefold and fourfold. The last two correspond to the B2 and A2 Coxeter planes. Cartesian coordinates. 
For edge length √3, the eight vertices where three faces meet at their obtuse angles have Cartesian coordinates: (±1, ±1, ±1) The coordinates of the six vertices where four faces meet at their acute angles are: (±2, 0, 0), (0, ±2, 0) and (0, 0, ±2) The rhombic dodecahedron can be seen as a degenerate limiting case of a pyritohedron, with permutation of coordinates (±1, ±1, ±1) and (0, 1 + "h", 1 − "h"2) with parameter "h" = 1. These coordinates illustrate that a rhombic dodecahedron can be seen as a cube with a square pyramid attached to each face, and that the six square pyramids could fit together to a cube of the same size, i.e. the rhombic dodecahedron has twice the volume of the inscribed cube with edges equal to the short diagonals of the rhombi. Topologically equivalent forms. Parallelohedron. The "rhombic dodecahedron" is a parallelohedron, a space-filling polyhedron, dodecahedrille, being the dual to the "tetroctahedrille" or half cubic honeycomb, and described by two Coxeter diagrams. With D3d symmetry, it can be seen as an elongated trigonal trapezohedron. Dihedral rhombic dodecahedron. Other symmetry constructions of the rhombic dodecahedron are also space-filling, and as parallelotopes they are similar to variations of space-filling truncated octahedra. One example has 4 square faces, 60-degree rhombic faces, and D4h dihedral symmetry, of order 16. It can be seen as a cuboctahedron with square pyramids augmented on the top and bottom. Bilinski dodecahedron. In 1960 Stanko Bilinski discovered a second rhombic dodecahedron with 12 congruent rhombus faces, the Bilinski dodecahedron. It has the same topology but different geometry. The diagonals of the rhombic faces in this form are in the golden ratio. Deltoidal dodecahedron. Another topologically equivalent variation, sometimes called a deltoidal dodecahedron, is isohedral with tetrahedral symmetry order 24, distorting rhombic faces into kites (deltoids). It has 8 vertices adjusted in or out in alternate sets of 4, with the limiting case a tetrahedral envelope. Variations can be parametrized by ("a","b"), where "b" and "a" depend on each other such that the tetrahedron defined by the four vertices of a face has volume zero, i.e. is a planar face. (1,1) is the rhombic solution. As "a" approaches 1/2, "b" approaches infinity. It always holds that 1/"a" + 1/"b" = 2, with "a", "b" > 1/2. (±2, 0, 0), (0, ±2, 0), (0, 0, ±2) ("a", "a", "a"), (−"a", −"a", "a"), (−"a", "a", −"a"), ("a", −"a", −"a") (−"b", −"b", −"b"), (−"b", "b", "b"), ("b", −"b", "b"), ("b", "b", −"b")
Several such stellations have been described by Dorman Luke. The first stellation, often simply called the stellated rhombic dodecahedron, is well known. It can be seen as a rhombic dodecahedron with each face augmented by attaching a rhombic-based pyramid to it, with a pyramid height such that the sides lie in the face planes of the neighbouring faces. Luke describes four more stellations: the second and third stellations (expanding outwards), one formed by removing the second from the third, and another by adding the original rhombic dodecahedron back to the previous one. Related polytopes. The rhombic dodecahedron forms the hull of the vertex-first projection of a tesseract to three dimensions. There are exactly two ways of decomposing a rhombic dodecahedron into four congruent rhombohedra, giving eight possible rhombohedra as projections of the tesseract's 8 cubic cells. One set of projective vectors is: "u" = (1,1,−1,−1), "v" = (−1,1,−1,1), "w" = (1,−1,−1,1). The rhombic dodecahedron forms the maximal cross-section of a 24-cell, and also forms the hull of its vertex-first parallel projection into three dimensions. The rhombic dodecahedron can be decomposed into six congruent (but non-regular) square dipyramids meeting at a single vertex in the center; these form the images of six pairs of the 24-cell's octahedral cells. The remaining 12 octahedral cells project onto the faces of the rhombic dodecahedron. The non-regularity of these images is due to projective distortion; the facets of the 24-cell are regular octahedra in 4-space. This decomposition gives an interesting method for constructing the rhombic dodecahedron: cut a cube into six congruent square pyramids, and attach them to the faces of a second cube. The triangular faces of each pair of adjacent pyramids lie on the same plane, and so merge into rhombuses. The 24-cell may also be constructed in an analogous way using two tesseracts. Architectural meaning and cultural freight. The architectural expert James D. Wenn has identified philosophical meanings coded into buildings that are connected with meanings associated with the rhombic dodecahedron by thinkers such as Plato, and has identified several buildings as engaging with this kind of code. Practical usage. In spacecraft reaction wheel layout, a tetrahedral configuration of four wheels is commonly used. For wheels that perform equally (from a peak torque and max angular momentum standpoint) in both spin directions and across all four wheels, the maximum torque and maximum momentum envelopes for the 3-axis attitude control system (considering idealized actuators) are given by projecting the tesseract representing the limits of each wheel's torque or momentum into 3D space via the 3 × 4 matrix of wheel axes; the resulting 3D polyhedron is a rhombic dodecahedron. Such an arrangement of reaction wheels is not the only possible configuration (a simpler arrangement consists of three wheels mounted to spin about orthogonal axes), but it is advantageous in providing redundancy to mitigate the failure of one of the four wheels (with degraded overall performance available from the remaining three active wheels) and in providing a more convex envelope than a cube, which leads to less agility dependence on axis direction (from an actuator/plant standpoint). 
Spacecraft mass properties influence overall system momentum and agility, so decreased variance in envelope boundary does not necessarily lead to increased uniformity in preferred axis biases (that is, even with a perfectly distributed performance limit within the actuator subsystem, preferred rotation axes are not necessarily arbitrary at the system level). The polyhedron is also the basis for the HEALPix grid, used in cosmology for storing and manipulating maps of the cosmic microwave background, and in computer graphics for storing environment maps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
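The coordinates and the closed-form area and volume given above can be cross-checked numerically. The Python sketch below (an illustrative check using SciPy, not part of the source article) builds the convex hull of the 14 vertices listed for edge length a = √3 and compares the hull's surface area and volume with the formulas.
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

# Eight order-3 vertices (cube) and six order-4 vertices (octahedron); edge length a = sqrt(3).
cube_vertices = np.array(list(product([-1, 1], repeat=3)), dtype=float)
octa_vertices = np.array([[2, 0, 0], [-2, 0, 0], [0, 2, 0],
                          [0, -2, 0], [0, 0, 2], [0, 0, -2]], dtype=float)
points = np.vstack([cube_vertices, octa_vertices])

hull = ConvexHull(points)
a = np.sqrt(3.0)

print(hull.volume, 16 * np.sqrt(3) / 9 * a**3)  # both 16.0
print(hull.area, 8 * np.sqrt(2) * a**2)         # both about 33.94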
[ { "math_id": 0, "text": "r_\\mathrm{i} = \\frac{\\sqrt{6}}{3}~a \\approx 0.816\\,496\\,5809~a\\quad" }, { "math_id": 1, "text": "r_\\mathrm{m} = \\frac{2\\sqrt{2}}{3}~a \\approx 0.942\\,809\\,041\\,58~a\\quad" }, { "math_id": 2, "text": "r_\\mathrm{o} = \\frac{2\\sqrt{3}}{3}~a \\approx 1.154\\,700\\,538~a\\quad" }, { "math_id": 3, "text": "r_\\mathrm{t} = a" }, { "math_id": 4, "text": "A = 8\\sqrt{2}~a^2 \\approx 11.313\\,7085~a^2" }, { "math_id": 5, "text": "V = \\frac{16\\sqrt{3}}{9}~a^3 \\approx 3.079\\,201\\,44~a^3" } ]
https://en.wikipedia.org/wiki?curid=607631
60763706
Wine/water paradox
Probability theory paradox The wine/water paradox is an apparent paradox in probability theory. It is stated by Michael Deakin as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A mixture is known to contain a mix of wine and water in proportions such that the amount of wine divided by the amount of water is a ratio formula_0 lying in the interval formula_1 (i.e. 25-75% wine). We seek the probability, formula_2 say, that formula_3. (i.e. less than or equal to 66%.) The core of the paradox is in finding consistent and justifiable simultaneous prior distributions for formula_0 and formula_4. Calculation. This calculation is the demonstration of the paradoxical conclusion when making use of the principle of indifference. To recapitulate, we do not know formula_0, the wine to water ratio. When considering the numbers above, it is only known that it lies in an interval from the minimum of one quarter wine over three quarters water on one end (i.e. 25% wine) to the maximum of three quarters wine over one quarter water on the other (i.e. 75% wine). In terms of ratios, formula_5 resp. formula_6. Now, making use of the principle of indifference, we may assume that formula_0 is uniformly distributed. Then the chance of finding the ratio formula_0 "below" any given fixed threshold formula_7, with formula_8, should linearly depend on the value formula_7. So the probability value is the number formula_9 As a function of the threshold value formula_7, this is the linearly growing function that is formula_10 resp. formula_11 at the end points formula_12 resp. the larger formula_13. Consider the threshold formula_14, as in the example of the original formulation above. This is two parts wine vs. one part water, i.e. 66% wine. With this we conclude that formula_15. Now consider formula_16, the inverted ratio of water to wine, which describes the same mixture. It lies between the inverted bounds. Again using the principle of indifference, we get formula_17. This is the function which is formula_10 resp. formula_11 at the end points formula_18 resp. the smaller formula_19. Now take the corresponding threshold formula_20 (half as much water as wine). We conclude that formula_21. The second probability always exceeds the first by a factor of formula_22. For our example the number is formula_23. Paradoxical conclusion. Since formula_24, we get formula_25, a contradiction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
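The two incompatible answers can be reproduced by simulation. The Python sketch below (illustrative, not from the source) samples the wine/water ratio uniformly on [1/3, 3] and, separately, the water/wine ratio uniformly on [1/3, 3], and estimates the probability of the same event (at most two parts wine per part water) under each prior.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Prior 1: the wine/water ratio x is uniform on [1/3, 3].
x = rng.uniform(1/3, 3, n)
p1 = (x <= 2).mean()

# Prior 2: the water/wine ratio y = 1/x is uniform on [1/3, 3]; the same event is y >= 1/2.
y = rng.uniform(1/3, 3, n)
p2 = (y >= 0.5).mean()

print(p1, 5/8)    # about 0.625
print(p2, 15/16)  # about 0.9375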
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "1/3 \\le x \\le 3" }, { "math_id": 2, "text": "P^\\ast" }, { "math_id": 3, "text": "x \\le 2" }, { "math_id": 4, "text": "\\frac{1}{x}" }, { "math_id": 5, "text": "x_\\mathrm{min}=\\frac{1/4}{3/4} = \\frac{1}{3}" }, { "math_id": 6, "text": "x_\\mathrm{max}=\\frac{3/4}{1/4} = 3" }, { "math_id": 7, "text": "x_t" }, { "math_id": 8, "text": "x_\\mathrm{min}<x_t<x_\\mathrm{max}" }, { "math_id": 9, "text": "\\operatorname{Prob}\\{x \\le x_t\\} = \\frac{x_t-x_\\mathrm{min}}{x_\\mathrm{max}-x_\\mathrm{min}} = \\frac{1}{8} (3x_t - 1)." }, { "math_id": 10, "text": "0" }, { "math_id": 11, "text": "1" }, { "math_id": 12, "text": "x_\\mathrm{min}" }, { "math_id": 13, "text": "x_\\mathrm{max}" }, { "math_id": 14, "text": "x_t = 2" }, { "math_id": 15, "text": "\\operatorname{Prob}\\{x \\le 2\\} = \\frac{1}{8}(3\\cdot 2 - 1) = \\frac{5}{8}" }, { "math_id": 16, "text": "y = \\frac{1}{x}" }, { "math_id": 17, "text": "\\operatorname{Prob}\\{y \\ge y_t\\} = \\frac{x_\\mathrm{max}(1 - x_\\mathrm{min}\\,y_t)}{x_\\mathrm{max} - x_\\mathrm{min}} = \\frac{3}{8} (3 - y_t)" }, { "math_id": 18, "text": "\\tfrac{1}{x_\\mathrm{min}}" }, { "math_id": 19, "text": "\\tfrac{1}{x_\\mathrm{max}}" }, { "math_id": 20, "text": "y_t = \\frac{1}{x_t} = \\frac{1}{2}" }, { "math_id": 21, "text": "\\operatorname{Prob}\\left\\{y \\ge \\tfrac{1}{2}\\right\\} = \\frac{3}{8}\\frac{3\\cdot 2 - 1}{2} = \\frac{15}{16} = \\frac{3}{2}\\frac{5}{8}" }, { "math_id": 22, "text": "\\frac{x_\\mathrm{max}}{x_t} \\ge 1" }, { "math_id": 23, "text": "\\frac{3}{2}" }, { "math_id": 24, "text": "y = \\frac{1}{x}" }, { "math_id": 25, "text": "\\frac{5}{8} = \\operatorname{Prob}\\{x \\le 2\\} = P^* = \\operatorname{Prob}\\left\\{y \\ge \\frac{1}{2}\\right\\} = \\frac{15}{16} > \\frac{5}{8}" } ]
https://en.wikipedia.org/wiki?curid=60763706
60766
Pseudosphere
Geometric surface In geometry, a pseudosphere is a surface with constant negative Gaussian curvature. A pseudosphere of radius R is a surface in formula_0 having curvature −1/"R"2 at each point. Its name comes from the analogy with the sphere of radius R, which is a surface of curvature 1/"R"2. The term was introduced by Eugenio Beltrami in his 1868 paper on models of hyperbolic geometry. Tractroid. The same surface can also be described as the result of revolving a tractrix about its asymptote. For this reason the pseudosphere is also called a tractroid. As an example, the (half) pseudosphere (with radius 1) is the surface of revolution of the tractrix parametrized by formula_1 It is a singular space (the equator is a singularity), but away from the singularities, it has constant negative Gaussian curvature and therefore is locally isometric to a hyperbolic plane. The name "pseudosphere" comes about because it has a two-dimensional surface of constant negative Gaussian curvature, just as a sphere has a surface with constant positive Gaussian curvature. Just as the sphere has at every point a positively curved geometry of a dome, the whole pseudosphere has at every point the negatively curved geometry of a saddle. As early as 1693 Christiaan Huygens found that the volume and the surface area of the pseudosphere are finite, despite the infinite extent of the shape along the axis of rotation. For a given edge radius R, the area is 4π"R"2 just as it is for the sphere, while the volume is 2π"R"3/3 and therefore half that of a sphere of that radius. The pseudosphere is an important geometric precursor to mathematical fabric arts and pedagogy. Universal covering space. The half pseudosphere of curvature −1 is covered by the interior of a horocycle. In the Poincaré half-plane model one convenient choice is the portion of the half-plane with "y" ≥ 1. Then the covering map is periodic in the x direction of period 2π, and takes the horocycles "y" = "c" to the meridians of the pseudosphere and the vertical geodesics "x" = "c" to the tractrices that generate the pseudosphere. This mapping is a local isometry, and thus exhibits the portion "y" ≥ 1 of the upper half-plane as the universal covering space of the pseudosphere. The precise mapping is formula_2 where formula_3 is the parametrization of the tractrix above. Hyperboloid. In some sources that use the hyperboloid model of the hyperbolic plane, the hyperboloid is referred to as a pseudosphere. This usage of the word is because the hyperboloid can be thought of as a sphere of imaginary radius, embedded in a Minkowski space. Pseudospherical surfaces. A pseudospherical surface is a generalization of the pseudosphere. A surface which is piecewise smoothly immersed in formula_0 with constant negative curvature is a pseudospherical surface. The tractroid is the simplest example. Other examples include Dini's surfaces, breather surfaces, and the Kuen surface. Relation to solutions to the sine-Gordon equation. Pseudospherical surfaces can be constructed from solutions to the sine-Gordon equation. A sketch proof starts with reparametrizing the tractroid with coordinates in which the Gauss–Codazzi equations can be rewritten as the sine-Gordon equation. In particular, for the tractroid the Gauss–Codazzi equations are the sine-Gordon equation applied to the static soliton solution, so the Gauss–Codazzi equations are satisfied. 
In these coordinates the first and second fundamental forms are written in a way that makes clear the Gaussian curvature is −1 for any solution of the sine-Gordon equations. Then any solution to the sine-Gordon equation can be used to specify a first and second fundamental form which satisfy the Gauss–Codazzi equations. There is then a theorem that any such set of initial data can be used to at least locally specify an immersed surface in formula_0. A few examples of sine-Gordon solutions and their corresponding surfaces are the static one-soliton solution, which gives the tractroid; moving one-soliton solutions, which give Dini's surfaces; breather solutions, which give the breather surfaces; and two-soliton solutions, which give the Kuen surface. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
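The finiteness of the area and volume noted above can be verified directly from the tractrix parametrization. The Python sketch below (an illustrative check, not from the source) integrates the surface-of-revolution formulas for the half pseudosphere of radius 1 and doubles the results, recovering area 4π and volume 2π/3.
import numpy as np
from scipy.integrate import quad

# Tractrix: x(t) = t - tanh(t), y(t) = sech(t), revolved about the x-axis.
def y(t):
    return 1.0 / np.cosh(t)

def ds(t):
    # Arc length element sqrt(x'(t)**2 + y'(t)**2), which simplifies to tanh(t).
    xp = np.tanh(t) ** 2
    yp = -np.tanh(t) / np.cosh(t)
    return np.hypot(xp, yp)

area_half, _ = quad(lambda t: 2 * np.pi * y(t) * ds(t), 0, 50)
volume_half, _ = quad(lambda t: np.pi * y(t)**2 * np.tanh(t)**2, 0, 50)  # dx = tanh(t)**2 dt

print(2 * area_half, 4 * np.pi)        # about 12.566
print(2 * volume_half, 2 * np.pi / 3)  # about 2.094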
[ { "math_id": 0, "text": "\\mathbb{R}^3" }, { "math_id": 1, "text": "t \\mapsto \\left( t - \\tanh t, \\operatorname{sech}\\,t \\right), \\quad \\quad 0 \\le t < \\infty." }, { "math_id": 2, "text": "(x,y)\\mapsto \\big(v(\\operatorname{arcosh} y)\\cos x, v(\\operatorname{arcosh} y) \\sin x, u(\\operatorname{arcosh} y)\\big)" }, { "math_id": 3, "text": "t\\mapsto \\big(u(t) = t - \\operatorname{tanh} t,v(t) = \\operatorname{sech} t\\big)" } ]
https://en.wikipedia.org/wiki?curid=60766
60767585
Hazel Perfect
British mathematician Hazel Perfect (circa 1927 – 8 July 2015) was a British mathematician specialising in combinatorics. Contributions. Perfect was known for inventing gammoids,[AMG] for her work with Leon Mirsky on doubly stochastic matrices,[SP2] for her three books "Topics in Geometry",[TIG] "Topics in Algebra",[TIA] and "Independence Theory in Combinatorics",[ITC] and for her work as a translator (from an earlier German translation) of Pavel Alexandrov's book "An Introduction to the Theory of Groups" (Hafner, 1959).[ITG] The Perfect–Mirsky conjecture, named after Perfect and Leon Mirsky, concerns the region of the complex plane formed by the eigenvalues of doubly stochastic matrices. Perfect and Mirsky conjectured that for formula_0 matrices this region is the union of regular polygons of up to formula_1 sides, having the roots of unity of each degree up to formula_1 as vertices. Perfect and Mirsky proved their conjecture for formula_2; it was subsequently shown to be true for formula_3 and false for formula_4, but remains open for larger values of formula_1. Education and career. Perfect earned a master's degree through Westfield College (a constituent college for women in the University of London) in 1949, with a thesis on "The Reduction of Matrices to Canonical Form". In the 1950s, Perfect was a lecturer at University College of Swansea; she collaborated with Gordon Petersen, a visitor to Swansea at that time, on their translation of Alexandrov's book. She completed her Ph.D. at the University of London in 1969; her dissertation was "Studies in Transversal Theory with Particular Reference to Independence Structures and Graphs". She became a reader in mathematics at the University of Sheffield. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n\\times n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n\\le 3" }, { "math_id": 3, "text": "n=4" }, { "math_id": 4, "text": "n=5" } ]
https://en.wikipedia.org/wiki?curid=60767585
607686
Palatini variation
Concept relating to general relativity In general relativity and gravitation, the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection. In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric formula_0. In the Palatini variational method one takes as independent field variables not only the ten components formula_0 but also the forty components of the affine connection formula_1, assuming, a priori, no dependence of the formula_1 on the formula_0 and their derivatives. The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations. Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro. The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
[ { "math_id": 0, "text": "{g_{\\mu\\nu}}" }, { "math_id": 1, "text": "{\\Gamma^{\\alpha}_{\\,\\beta\\mu}}" } ]
https://en.wikipedia.org/wiki?curid=607686
607690
Affine connection
Construct allowing differentiation of tangent vector fields of manifolds In differential geometry, an affine connection is a geometric object on a smooth manifold which "connects" nearby tangent spaces, so it permits tangent vector fields to be differentiated as if they were functions on the manifold with values in a fixed vector space. Connections are among the simplest methods of defining differentiation of the sections of vector bundles. The notion of an affine connection has its roots in 19th-century geometry and tensor calculus, but was not fully developed until the early 1920s, by Élie Cartan (as part of his general theory of connections) and Hermann Weyl (who used the notion as a part of his foundations for general relativity). The terminology is due to Cartan and has its origins in the identification of tangent spaces in Euclidean space R"n" by translation: the idea is that a choice of affine connection makes a manifold look infinitesimally like Euclidean space not just smoothly, but as an affine space. On any manifold of positive dimension there are infinitely many affine connections. If the manifold is further endowed with a metric tensor then there is a natural choice of affine connection, called the Levi-Civita connection. The choice of an affine connection is equivalent to prescribing a way of differentiating vector fields which satisfies several reasonable properties (linearity and the Leibniz rule). This yields a possible definition of an affine connection as a covariant derivative or (linear) connection on the tangent bundle. A choice of affine connection is also equivalent to a notion of parallel transport, which is a method for transporting tangent vectors along curves. This also defines a parallel transport on the frame bundle. Infinitesimal parallel transport in the frame bundle yields another description of an affine connection, either as a Cartan connection for the affine group or as a principal connection on the frame bundle. The main invariants of an affine connection are its torsion and its curvature. The torsion measures how closely the Lie bracket of vector fields can be recovered from the affine connection. Affine connections may also be used to define (affine) geodesics on a manifold, generalizing the "straight lines" of Euclidean space, although the geometry of those straight lines can be very different from usual Euclidean geometry; the main differences are encapsulated in the curvature of the connection. Motivation and history. A smooth manifold is a mathematical object which looks locally like a smooth deformation of Euclidean space R"n": for example a smooth curve or surface looks locally like a smooth deformation of a line or a plane. Smooth functions and vector fields can be defined on manifolds, just as they can on Euclidean space, and scalar functions on manifolds can be differentiated in a natural way. However, differentiation of vector fields is less straightforward: this is a simple matter in Euclidean space, because the tangent space of based vectors at a point p can be identified naturally (by translation) with the tangent space at a nearby point q. On a general manifold, there is no such natural identification between nearby tangent spaces, and so tangent vectors at nearby points cannot be compared in a well-defined way. The notion of an affine connection was introduced to remedy this problem by "connecting" nearby tangent spaces. The origins of this idea can be traced back to two main sources: surface theory and tensor calculus. 
Motivation from surface theory. Consider a smooth surface S in a 3-dimensional Euclidean space. Near any point, S can be approximated by its tangent plane at that point, which is an affine subspace of Euclidean space. Differential geometers in the 19th century were interested in the notion of development in which one surface was "rolled" along another, without "slipping" or "twisting". In particular, the tangent plane to a point of S can be rolled on S: this should be easy to imagine when S is a surface like the 2-sphere, which is the smooth boundary of a convex region. As the tangent plane is rolled on S, the point of contact traces out a curve on S. Conversely, given a curve on S, the tangent plane can be rolled along that curve. This provides a way to identify the tangent planes at different points along the curve: in particular, a tangent vector in the tangent space at one point on the curve is identified with a unique tangent vector at any other point on the curve. These identifications are always given by affine transformations from one tangent plane to another. This notion of parallel transport of tangent vectors, by affine transformations, along a curve has a characteristic feature: the point of contact of the tangent plane with the surface "always moves" with the curve under parallel translation (i.e., as the tangent plane is rolled along the surface, the point of contact moves). This generic condition is characteristic of Cartan connections. In more modern approaches, the point of contact is viewed as the "origin" in the tangent plane (which is then a vector space), and the movement of the origin is corrected by a translation, so that parallel transport is linear, rather than affine. In the point of view of Cartan connections, however, the affine subspaces of Euclidean space are "model" surfaces — they are the simplest surfaces in Euclidean 3-space, and are homogeneous under the affine group of the plane — and every smooth surface has a unique model surface tangent to it at each point. These model surfaces are "Klein geometries" in the sense of Felix Klein's Erlangen programme. More generally, an n-dimensional affine space is a Klein geometry for the affine group Aff("n"), the stabilizer of a point being the general linear group GL("n"). An affine n-manifold is then a manifold which looks infinitesimally like n-dimensional affine space. Motivation from tensor calculus. The second motivation for affine connections comes from the notion of a covariant derivative of vector fields. Before the advent of coordinate-independent methods, it was necessary to work with vector fields by embedding their respective Euclidean vectors into an atlas. These components can be differentiated, but the derivatives do not transform in a manageable way under changes of coordinates. Correction terms were introduced by Elwin Bruno Christoffel (following ideas of Bernhard Riemann) in the 1870s so that the (corrected) derivative of one vector field along another transformed covariantly under coordinate transformations — these correction terms subsequently came to be known as Christoffel symbols. This idea was developed into the theory of "absolute differential calculus" (now known as tensor calculus) by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita between 1880 and the turn of the 20th century. Tensor calculus really came to life, however, with the advent of Albert Einstein's theory of general relativity in 1915. 
A few years after this, Levi-Civita formalized the unique connection associated to a Riemannian metric, now known as the Levi-Civita connection. More general affine connections were then studied around 1920, by Hermann Weyl, who developed a detailed mathematical foundation for general relativity, and Élie Cartan, who made the link with the geometrical ideas coming from surface theory. Approaches. The complex history has led to the development of widely varying approaches to and generalizations of the affine connection concept. The most popular approach is probably the definition motivated by covariant derivatives. On the one hand, the ideas of Weyl were taken up by physicists in the form of gauge theory and gauge covariant derivatives. On the other hand, the notion of covariant differentiation was abstracted by Jean-Louis Koszul, who defined (linear or Koszul) connections on vector bundles. In this language, an affine connection is simply a covariant derivative or (linear) connection on the tangent bundle. However, this approach does not explain the geometry behind affine connections nor how they acquired their name. The term really has its origins in the identification of tangent spaces in Euclidean space by translation: this property means that Euclidean n-space is an affine space. (Alternatively, Euclidean space is a principal homogeneous space or torsor under the group of translations, which is a subgroup of the affine group.) As mentioned in the introduction, there are several ways to make this precise: one uses the fact that an affine connection defines a notion of parallel transport of vector fields along a curve. This also defines a parallel transport on the frame bundle. Infinitesimal parallel transport in the frame bundle yields another description of an affine connection, either as a Cartan connection for the affine group Aff("n") or as a principal GL("n") connection on the frame bundle. Formal definition as a differential operator. Let M be a smooth manifold and let Γ(T"M") be the space of vector fields on M, that is, the space of smooth sections of the tangent bundle T"M". Then an affine connection on M is a bilinear map formula_0 such that for all f in the set of smooth functions on "M", written "C"∞("M", R), and all vector fields "X", "Y" on M: ∇"fX""Y" = "f" ∇"XY", that is, ∇ is "C"∞("M", R)-"linear" in the first variable; and ∇"X"("fY") = (∂"X" "f") "Y" + "f" ∇"XY", where ∂"X" denotes the directional derivative; that is, ∇ satisfies the "Leibniz rule" in the second variable. Elementary properties. If M is an open subset of R"n", then its tangent bundle is trivial, and there is a canonical flat connection d on M: a vector field Y may be viewed as a smooth map from M to R"n", and d"X""Y" is then the vector field corresponding to the directional derivative ∂"X""Y" from M to R"n". Any other affine connection ∇ on M may therefore be written ∇ = d + Γ, where Γ is a connection form on M. Parallel transport for affine connections. Comparison of tangent vectors at different points on a manifold is generally not a well-defined process. An affine connection provides one way to remedy this using the notion of parallel transport, and indeed this can be used to give a definition of an affine connection. Let M be a manifold with an affine connection ∇. Then a vector field X is said to be parallel if ∇"X" = 0 in the sense that for any vector field Y, ∇"Y""X" = 0. Intuitively speaking, parallel vectors have "all their derivatives equal to zero" and are therefore in some sense "constant". By evaluating a parallel vector field at two points x and y, an identification between a tangent vector at x and one at y is obtained. Such tangent vectors are said to be parallel transports of each other.
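To make the two defining properties of the covariant derivative above concrete, here is a small symbolic sketch in Python with SymPy (my own illustration, not part of the article): it writes a connection on an open subset of R2 in coordinates as (∇XY)k = Xi ∂iYk + Γkij XiYj for an arbitrarily assumed choice of Christoffel symbols, and checks the C∞-linearity and Leibniz properties on sample fields. The particular symbols, fields and function are assumptions made purely for illustration.

```python
# Sketch (illustrative assumptions): a covariant derivative on an open subset
# of R^2 written in coordinates, used to check the two defining properties.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = (x1, x2)
n = 2

# Arbitrary smooth Christoffel symbols Gamma[k][i][j] (illustrative choice).
Gamma = [[[sp.sin(x1) if (k, i, j) == (0, 0, 1) else sp.Integer(0)
           for j in range(n)] for i in range(n)] for k in range(n)]

def nabla(X, Y):
    """(nabla_X Y)^k = X^i dY^k/dx^i + Gamma^k_ij X^i Y^j, componentwise."""
    return [sum(X[i] * sp.diff(Y[k], coords[i]) for i in range(n))
            + sum(Gamma[k][i][j] * X[i] * Y[j]
                  for i in range(n) for j in range(n))
            for k in range(n)]

# Sample vector fields and a sample smooth function (illustrative choices).
X = [x2, sp.Integer(1)]
Y = [x1 * x2, sp.exp(x1)]
f = x1**2 + sp.cos(x2)

# Property 1: nabla_{fX} Y = f * nabla_X Y  (C-infinity linearity in the first slot).
lhs = nabla([f * c for c in X], Y)
rhs = [f * c for c in nabla(X, Y)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))

# Property 2: nabla_X (fY) = (d_X f) Y + f * nabla_X Y  (Leibniz rule in the second slot).
dXf = sum(X[i] * sp.diff(f, coords[i]) for i in range(n))
lhs = nabla(X, [f * c for c in Y])
rhs = [dXf * Y[k] + f * nabla(X, Y)[k] for k in range(n)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
print("both defining properties hold for these sample fields")
```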
Nonzero parallel vector fields do not, in general, exist, because the equation ∇"X" = 0 is a partial differential equation which is overdetermined: the integrability condition for this equation is the vanishing of the curvature of ∇ (see below). However, if this equation is restricted to a curve from x to y it becomes an ordinary differential equation. There is then a unique solution for any initial value of X at x. More precisely, if "γ" : "I" → "M" is a smooth curve parametrized by an interval ["a", "b"] and "ξ" ∈ T"x""M", where "x" = "γ"("a"), then a vector field X along γ (and in particular, the value of this vector field at "y" = "γ"("b")) is called the parallel transport of ξ along γ if ∇"γ"′("t")"X" = 0 for all "t" ∈ ["a", "b"], and "X""γ"("a") = "ξ". Formally, the first condition means that X is parallel with respect to the pullback connection on the pullback bundle "γ"∗T"M". However, in a local trivialization it is a first-order system of linear ordinary differential equations, which has a unique solution for any initial condition given by the second condition (for instance, by the Picard–Lindelöf theorem). Thus parallel transport provides a way of moving tangent vectors along a curve using the affine connection to keep them "pointing in the same direction" in an intuitive sense, and this provides a linear isomorphism between the tangent spaces at the two ends of the curve. The isomorphism obtained in this way will in general depend on the choice of the curve: if it does not, then parallel transport along every curve can be used to define parallel vector fields on M, which can only happen if the curvature of ∇ is zero. A linear isomorphism is determined by its action on an ordered basis or frame. Hence parallel transport can also be characterized as a way of transporting elements of the (tangent) frame bundle GL("M") along a curve. In other words, the affine connection provides a lift of any curve γ in M to a curve γ̃ in GL("M"). Formal definition on the frame bundle. An affine connection may also be defined as a principal GL("n") connection ω on the frame bundle F"M" or GL("M") of a manifold M. In more detail, ω is a smooth map from the tangent bundle T(F"M") of the frame bundle to the space of "n" × "n" matrices (which is the Lie algebra gl("n") of the Lie group GL("n") of invertible "n" × "n" matrices) satisfying two properties: ω is equivariant with respect to the action of GL("n") on T(F"M") and gl("n"); and ω(Xξ) = "ξ" for any ξ in gl("n"), where Xξ is the vector field on F"M" corresponding to ξ. Such a connection ω immediately defines a covariant derivative not only on the tangent bundle, but on vector bundles associated to any group representation of GL("n"), including bundles of tensors and tensor densities. Conversely, an affine connection on the tangent bundle determines an affine connection on the frame bundle, for instance, by requiring that ω vanishes on tangent vectors to the lifts of curves to the frame bundle defined by parallel transport. The frame bundle also comes equipped with a solder form "θ" : T(F"M") → R"n" which is horizontal in the sense that it vanishes on vertical vectors such as the point values of the vector fields Xξ: that is, θ(Xξ) = 0. Indeed θ is defined first by projecting a tangent vector (to F"M" at a frame f) to M, then by taking the components of this tangent vector on M with respect to the frame f. Note that θ is also GL("n")-equivariant (where GL("n") acts on R"n" by matrix multiplication).
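Returning to the ordinary differential equation for parallel transport described above, the following numerical sketch (an illustration of my own, not drawn from the article) integrates dXk/dt + Γkij (dγi/dt) Xj = 0 along a circle of latitude on the unit sphere, using the standard Christoffel symbols of the round metric ds2 = dθ2 + sin2θ dφ2. The colatitude θ0, the curve and the initial vector are all assumed for illustration; the point is that the transported vector generally does not return to its initial value, reflecting the dependence of the isomorphism on the chosen curve.

```python
# Sketch (illustrative assumptions): parallel transport around a latitude
# circle gamma(t) = (theta0, t), t in [0, 2*pi], on the unit sphere.
import numpy as np

theta0 = np.pi / 3   # assumed colatitude of the latitude circle

def rhs(X):
    """dX/dt from dX^k/dt = -Gamma^k_ij (dgamma^i/dt) X^j, with dtheta/dt = 0, dphi/dt = 1.

    Nonzero Christoffel symbols of ds^2 = dtheta^2 + sin(theta)^2 dphi^2:
    Gamma^theta_{phi phi} = -sin(theta)cos(theta), Gamma^phi_{theta phi} = cot(theta).
    """
    Xth, Xph = X
    dXth = np.sin(theta0) * np.cos(theta0) * Xph
    dXph = -(np.cos(theta0) / np.sin(theta0)) * Xth
    return np.array([dXth, dXph])

def transport(X0, t_total=2 * np.pi, steps=20000):
    """Classical fourth-order Runge-Kutta integration of the linear system."""
    h = t_total / steps
    X = np.array(X0, dtype=float)
    for _ in range(steps):
        k1 = rhs(X)
        k2 = rhs(X + h * k1 / 2)
        k3 = rhs(X + h * k2 / 2)
        k4 = rhs(X + h * k3)
        X = X + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return X

X_end = transport([1.0, 0.0])   # transport the vector d/dtheta once around the circle
print(X_end, "expected first component:", np.cos(2 * np.pi * np.cos(theta0)))
```

With θ0 = π/3 the components rotate through an angle 2π cos θ0 = π over one loop, so the vector comes back exactly reversed.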
The pair ("θ", "ω") defines a bundle isomorphism of T(F"M") with the trivial bundle F"M" × aff("n"), where aff("n") is the Cartesian product of R"n" and gl("n") (viewed as the Lie algebra of the affine group, which is actually a semidirect product – see below). Affine connections as Cartan connections. Affine connections can be defined within Cartan's general framework. In the modern approach, this is closely related to the definition of affine connections on the frame bundle. Indeed, in one formulation, a Cartan connection is an absolute parallelism of a principal bundle satisfying suitable properties. From this point of view the aff("n")-valued one-form ("θ", "ω") : T(F"M") → aff("n") on the frame bundle (of an affine manifold) is a Cartan connection. However, Cartan's original approach was different from this in a number of ways: Explanations and historical intuition. The points just raised are easiest to explain in reverse, starting from the motivation provided by surface theory. In this situation, although the planes being rolled over the surface are tangent planes in a naive sense, the notion of a tangent space is really an infinitesimal notion, whereas the planes, as affine subspaces of R3, are infinite in extent. However these affine planes all have a marked point, the point of contact with the surface, and they are tangent to the surface at this point. The confusion therefore arises because an affine space with a marked point can be identified with its tangent space at that point. However, the parallel transport defined by rolling does not fix this origin: it is affine rather than linear; the linear parallel transport can be recovered by applying a translation. Abstracting this idea, an affine manifold should therefore be an n-manifold M with an affine space "A""x", of dimension n, "attached" to each "x" ∈ "M" at a marked point "a""x" ∈ "A""x", together with a method for transporting elements of these affine spaces along any curve C in M. This method is required to satisfy several properties: These last two points are quite hard to make precise, so affine connections are more often defined infinitesimally. To motivate this, it suffices to consider how affine frames of reference transform infinitesimally with respect to parallel transport. (This is the origin of Cartan's method of moving frames.) An affine frame at a point consists of a list ("p", e1,… e"n"), where "p" ∈ "A""x" and the e"i" form a basis of T"p"("A""x"). The affine connection is then given symbolically by a first order differential system formula_2 defined by a collection of one-forms ("θ j", "ω "). Geometrically, an affine frame undergoes a displacement travelling along a curve γ from "γ"("t") to "γ"("t" + "δt") given (approximately, or infinitesimally) by formula_3 Furthermore, the affine spaces "A""x" are required to be tangent to M in the informal sense that the displacement of "a""x" along γ can be identified (approximately or infinitesimally) with the tangent vector "γ"′("t") to γ at "x" "γ"("t") (which is the infinitesimal displacement of x). Since formula_4 where θ is defined by "θ"("X") "θ"1("X")e1 + … + "θ""n"("X")e"n", this identification is given by θ, so the requirement is that θ should be a linear isomorphism at each point. The tangential affine space "A""x" is thus identified intuitively with an "infinitesimal affine neighborhood" of x. 
The modern point of view makes all this intuition more precise using principal bundles (the essential idea is to replace a frame or a "variable" frame by the space of all frames and functions on this space). It also draws on the inspiration of Felix Klein's Erlangen programme, in which a "geometry" is defined to be a homogeneous space. Affine space is a geometry in this sense, and is equipped with a "flat" Cartan connection. Thus a general affine manifold is viewed as "curved" deformation of the flat model geometry of affine space. Affine space as the flat model geometry. Definition of an affine space. Informally, an affine space is a vector space without a fixed choice of origin. It describes the geometry of points and free vectors in space. As a consequence of the lack of origin, points in affine space cannot be added together as this requires a choice of origin with which to form the parallelogram law for vector addition. However, a vector v may be added to a point p by placing the initial point of the vector at p and then transporting p to the terminal point. The operation thus described "p" → "p" + "v" is the translation of p along v. In technical terms, affine n-space is a set A"n" equipped with a free transitive action of the vector group R"n" on it through this operation of translation of points: A"n" is thus a principal homogeneous space for the vector group R"n". The general linear group GL("n") is the group of transformations of R"n" which preserve the "linear structure" of R"n" in the sense that "T"("av" + "bw") "aT"("v") + "bT"("w"). By analogy, the affine group Aff("n") is the group of transformations of A"n" preserving the "affine structure". Thus "φ" ∈ Aff("n") must "preserve translations" in the sense that formula_5 where T is a general linear transformation. The map sending "φ" ∈ Aff("n") to "T" ∈ GL("n") is a group homomorphism. Its kernel is the group of translations R"n". The stabilizer of any point p in A can thus be identified with GL("n") using this projection: this realises the affine group as a semidirect product of GL("n") and R"n", and affine space as the homogeneous space Aff("n")/GL("n"). Affine frames and the flat affine connection. An "affine frame" for A consists of a point "p" ∈ "A" and a basis (e1,… e"n") of the vector space T"p""A" R"n". The general linear group GL("n") acts freely on the set F"A" of all affine frames by fixing p and transforming the basis (e1,… e"n") in the usual way, and the map π sending an affine frame ("p"; e1,… e"n") to p is the quotient map. Thus F"A" is a principal GL("n")-bundle over A. The action of GL("n") extends naturally to a free transitive action of the affine group Aff("n") on F"A", so that F"A" is an Aff("n")-torsor, and the choice of a reference frame identifies F"A" → "A" with the principal bundle Aff("n") → Aff("n")/GL("n"). On F"A" there is a collection of "n" + 1 functions defined by formula_6 (as before) and formula_7 After choosing a basepoint for A, these are all functions with values in R"n", so it is possible to take their exterior derivatives to obtain differential 1-forms with values in R"n". Since the functions εi yield a basis for R"n" at each point of F"A", these 1-forms must be expressible as sums of the form formula_8 for some collection ("θ i", "ω ")1 ≤ "i", "j", "k" ≤ "n" of real-valued one-forms on Aff("n"). This system of one-forms on the principal bundle F"A" → "A" defines the affine connection on A. 
Taking the exterior derivative a second time, and using the fact that d2 = 0 as well as the linear independence of the εi, the following relations are obtained: formula_9 These are the Maurer–Cartan equations for the Lie group Aff("n") (identified with F"A" by the choice of a reference frame). Furthermore: the system "θ j" = 0 (for all j) is integrable, and its integral manifolds are the fibres of the principal bundle Aff("n") → "A"; and the system "ω ji" = 0 (for all "i", "j") is also integrable, and its integral manifolds define parallel transport in F"A". Thus the forms ("ω ji") define a flat principal connection on F"A" → "A". For a strict comparison with the motivation, one should actually define parallel transport in a principal Aff("n")-bundle over A. This can be done by pulling back F"A" by the smooth map "φ" : R"n" × "A" → "A" defined by translation. Then the composite "φ"′ ∗ F"A" → F"A" → "A" is a principal Aff("n")-bundle over A, and the forms ("θ i", "ω ji") pull back to give a flat principal Aff("n")-connection on this bundle. General affine geometries: formal definitions. An affine space, as with essentially any smooth Klein geometry, is a manifold equipped with a flat Cartan connection. More general affine manifolds or affine geometries are obtained easily by dropping the flatness condition expressed by the Maurer-Cartan equations. There are several ways to approach the definition and two will be given. Both definitions are facilitated by the realisation that 1-forms ("θ i", "ω ji") in the flat model fit together to give a 1-form with values in the Lie algebra aff("n") of the affine group Aff("n"). In these definitions, M is a smooth n-manifold and "A" = Aff("n")/GL("n") is an affine space of the same dimension. Definition via absolute parallelism. Let M be a manifold, and P a principal GL("n")-bundle over M. Then an affine connection is a 1-form η on P with values in aff("n") satisfying the following properties: (1) η is equivariant with respect to the action of GL("n") on P and aff("n"); (2) η(Xξ) = "ξ" for all ξ in the Lie algebra gl("n") of all "n" × "n" matrices, where Xξ is the fundamental vector field on P corresponding to ξ; (3) η restricts to a linear isomorphism of each tangent space of P with aff("n"). The last condition means that η is an absolute parallelism on P, i.e., it identifies the tangent bundle of P with a trivial bundle (in this case "P" × aff("n")). The pair ("P", "η") defines the structure of an affine geometry on M, making it into an affine manifold. The affine Lie algebra aff("n") splits as a semidirect product of R"n" and gl("n") and so η may be written as a pair ("θ", "ω") where θ takes values in R"n" and ω takes values in gl("n"). Conditions 1 and 2 are equivalent to ω being a principal GL("n")-connection and θ being a horizontal equivariant 1-form, which induces a bundle homomorphism from T"M" to the associated bundle "P" ×GL("n") R"n". Condition 3 is equivalent to the fact that this bundle homomorphism is an isomorphism. (However, this decomposition is a consequence of the rather special structure of the affine group.) Since P is the frame bundle of "P" ×GL("n") R"n", it follows that θ provides a bundle isomorphism between P and the frame bundle F"M" of M; this recovers the definition of an affine connection as a principal GL("n")-connection on F"M". The 1-forms arising in the flat model are just the components of θ and ω. Definition as a principal affine connection. An affine connection on M is a principal Aff("n")-bundle Q over M, together with a principal GL("n")-subbundle P of Q and a principal Aff("n")-connection α (a 1-form on Q with values in aff("n")) which satisfies the following (generic) "Cartan condition".
The R"n" component of pullback of α to P is a horizontal equivariant 1-form and so defines a bundle homomorphism from T"M" to "P" ×GL("n") R"n": this is required to be an isomorphism. Relation to the motivation. Since Aff("n") acts on A, there is, associated to the principal bundle Q, a bundle A = "Q" ×Aff("n") "A", which is a fiber bundle over M whose fiber at x in M is an affine space "A""x". A section a of A (defining a marked point "a""x" in "A""x" for each "x" ∈ "M") determines a principal GL("n")-subbundle P of Q (as the bundle of stabilizers of these marked points) and vice versa. The principal connection α defines an Ehresmann connection on this bundle, hence a notion of parallel transport. The Cartan condition ensures that the distinguished section a always moves under parallel transport. Further properties. Curvature and torsion. Curvature and torsion are the main invariants of an affine connection. As there are many equivalent ways to define the notion of an affine connection, so there are many different ways to define curvature and torsion. From the Cartan connection point of view, the curvature is the failure of the affine connection η to satisfy the Maurer–Cartan equation formula_10 where the second term on the left hand side is the wedge product using the Lie bracket in aff("n") to contract the values. By expanding η into the pair ("θ", "ω") and using the structure of the Lie algebra aff("n"), this left hand side can be expanded into the two formulae formula_11 where the wedge products are evaluated using matrix multiplication. The first expression is called the torsion of the connection, and the second is also called the curvature. These expressions are differential 2-forms on the total space of a frame bundle. However, they are horizontal and equivariant, and hence define tensorial objects. These can be defined directly from the induced covariant derivative ∇ on T"M" as follows. The torsion is given by the formula formula_12 If the torsion vanishes, the connection is said to be "torsion-free" or "symmetric". The curvature is given by the formula formula_13 Note that ["X", "Y"] is the Lie bracket of vector fields formula_14 in Einstein notation. This is independent of coordinate system choice and formula_15 the tangent vector at point p of the ith coordinate curve. The ∂"i" are a natural basis for the tangent space at point p, and the X i the corresponding coordinates for the vector field "X" = "X i" ∂"i". When both curvature and torsion vanish, the connection defines a pre-Lie algebra structure on the space of global sections of the tangent bundle. The Levi-Civita connection. If ("M", "g") is a Riemannian manifold then there is a unique affine connection ∇ on M with the following two properties: it is torsion-free, so that ∇"X""Y" − ∇"Y""X" = ["X", "Y"]; and parallel transport is an isometry, that is, the inner products between tangent vectors (defined using g) are preserved. This connection is called the Levi-Civita connection. The term "symmetric" is often used instead of torsion-free for the first property. The second condition means that the connection is a metric connection in the sense that the Riemannian metric g is parallel: ∇"g" = 0. For a torsion-free connection, the condition is equivalent to the identity "X" "g"("Y", "Z") = "g"(∇"X""Y", "Z") + "g"("Y", ∇"X" "Z"), "compatibility with the metric". In local coordinates the components of the connection form are called Christoffel symbols: because of the uniqueness of the Levi-Civita connection, there is a formula for these components in terms of the components of g. Geodesics.
Since straight lines are a concept in affine geometry, affine connections define a generalized notion of (parametrized) straight lines on any affine manifold, called affine geodesics. Abstractly, a parametric curve "γ" : "I" → "M" is a straight line if its tangent vector remains parallel and equipollent with itself when it is transported along γ. From the linear point of view, an affine connection M distinguishes the affine geodesics in the following way: a smooth curve "γ" : "I" → "M" is an affine geodesic if formula_16 is parallel transported along γ, that is formula_17 where "τ" : T"γsM" → T"γtM" is the parallel transport map defining the connection. In terms of the infinitesimal connection ∇, the derivative of this equation implies formula_18 for all "t" ∈ "I". Conversely, any solution of this differential equation yields a curve whose tangent vector is parallel transported along the curve. For every "x" ∈ "M" and every "X" ∈ T"x""M", there exists a unique affine geodesic "γ" : "I" → "M" with "γ"(0) "x" and "γ̇"(0) "X" and where I is the maximal open interval in R, containing 0, on which the geodesic is defined. This follows from the Picard–Lindelöf theorem, and allows for the definition of an exponential map associated to the affine connection. In particular, when M is a (pseudo-)Riemannian manifold and ∇ is the Levi-Civita connection, then the affine geodesics are the usual geodesics of Riemannian geometry and are the locally distance minimizing curves. The geodesics defined here are sometimes called affinely parametrized, since a given straight line in M determines a parametric curve γ through the line up to a choice of affine reparametrization "γ"("t") → "γ"("at" + "b"), where a and b are constants. The tangent vector to an affine geodesic is parallel and equipollent along itself. An unparametrized geodesic, or one which is merely parallel along itself without necessarily being equipollent, need only satisfy formula_19 for some function k defined along γ. Unparametrized geodesics are often studied from the point of view of projective connections. Development. An affine connection defines a notion of development of curves. Intuitively, development captures the notion that if xt is a curve in M, then the affine tangent space at "x"0 may be "rolled" along the curve. As it does so, the marked point of contact between the tangent space and the manifold traces out a curve Ct in this affine space: the development of xt. In formal terms, let "τ" : T"xtM" → T"x"0"M" be the linear parallel transport map associated to the affine connection. Then the development Ct is the curve in T"x"0"M" starts off at 0 and is parallel to the tangent of xt for all time t: formula_20 In particular, xt is a "geodesic" if and only if its development is an affinely parametrized straight line in T"x"0"M". Surface theory revisited. If M is a surface in R3, it is easy to see that M has a natural affine connection. From the linear connection point of view, the covariant derivative of a vector field is defined by differentiating the vector field, viewed as a map from M to R3, and then projecting the result orthogonally back onto the tangent spaces of M. It is easy to see that this affine connection is torsion-free. Furthermore, it is a metric connection with respect to the Riemannian metric on M induced by the inner product on R3, hence it is the Levi-Civita connection of this metric. Example: the unit sphere in Euclidean space. Let ⟨ , ⟩ be the usual scalar product on R3, and let S2 be the unit sphere. 
The tangent space to S2 at a point x is naturally identified with the vector subspace of R3 consisting of all vectors orthogonal to x. It follows that a vector field Y on S2 can be seen as a map "Y" : S2 → R3 which satisfies formula_21 Denote as d"Y" the differential (Jacobian matrix) of such a map. Then we have: Lemma. The formula formula_22 defines an affine connection on S2 with vanishing torsion. Proof. It is straightforward to prove that ∇ satisfies the Leibniz identity and is "C"∞(S2) linear in the first variable. So all that needs to be proved here is that the map above does indeed define a tangent vector field. That is, we need to prove that for all x in S2 formula_23 Consider the map formula_24 The map "f" is constant, hence its differential vanishes. In particular formula_25 Equation 1 above follows. Q.E.D. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Bibliography. Primary historical references. &lt;templatestyles src="Refbegin/styles.css" /&gt; Cartan's treatment of affine connections as motivated by the study of relativity theory. Includes a detailed discussion of the physics of reference frames, and how the connection reflects the physical notion of transport along a worldline. A more mathematically motivated account of affine connections. Affine connections from the point of view of Riemannian geometry. Robert Hermann's appendices discuss the motivation from surface theory, as well as the notion of affine connections in the modern sense of Koszul. He develops the basic properties of the differential operator ∇, and relates them to the classical affine connections in the sense of Cartan. Secondary references. &lt;templatestyles src="Refbegin/styles.css" /&gt; This is the main reference for the technical details of the article. Volume 1, chapter III gives a detailed account of affine connections from the perspective of principal bundles on a manifold, parallel transport, development, geodesics, and associated differential operators. Volume 1 chapter VI gives an account of affine transformations, torsion, and the general theory of affine geodesy. Volume 2 gives a number of applications of affine connections to homogeneous spaces and complex manifolds, as well as to other assorted topics. Two articles by Lumiste, giving precise conditions on parallel transport maps in order that they define affine connections. They also treat curvature, torsion, and other standard topics from a classical (non-principal bundle) perspective. This fills in some of the historical details, and provides a more reader-friendly elementary account of Cartan connections in general. Appendix A elucidates the relationship between the principal connection and absolute parallelism viewpoints. Appendix B bridges the gap between the classical "rolling" model of affine connections, and the modern one based on principal bundles and differential operators.
[ { "math_id": 0, "text": "\\begin{align}\n\\Gamma(\\mathrm{T}M)\\times \\Gamma(\\mathrm{T}M) & \\rightarrow \\Gamma(\\mathrm{T}M)\\\\\n(X,Y) & \\mapsto \\nabla_X Y\\,,\\end{align}" }, { "math_id": 1, "text": "\\Gamma_x : \\mathrm{T}_xM \\times \\mathrm{T}_xM \\to \\mathrm{T}_xM" }, { "math_id": 2, "text": "(*) \\begin{cases}\n\\mathrm{d}{p} &= \\theta^1\\mathbf{e}_1 + \\cdots + \\theta^n\\mathbf{e}_n \\\\\n\\mathrm{d}\\mathbf{e}_i &= \\omega^1_i\\mathbf{e}_1 + \\cdots + \\omega^n_i\\mathbf{e}_n\n\\end{cases} \\quad i=1,2,\\ldots,n" }, { "math_id": 3, "text": "\\begin{align}\np(\\gamma(t+\\delta t)) - p(\\gamma(t)) &= \\left(\\theta^1\\left(\\gamma'(t)\\right)\\mathbf{e}_1 + \\cdots + \\theta^n\\left(\\gamma'(t)\\right)\\mathbf{e}_n\\right)\\mathrm \\delta t \\\\\n\\mathbf{e}_i(\\gamma(t+\\delta t)) - \\mathbf{e}_i(\\gamma(t)) &= \\left(\\omega^1_i\\left(\\gamma'(t)\\right)\\mathbf{e}_1 + \\cdots + \\omega^n_i\\left(\\gamma'(t)\\right) \\mathbf{e}_n\\right)\\delta t\\,.\n\\end{align}" }, { "math_id": 4, "text": "a_x (\\gamma(t + \\delta t)) - a_x (\\gamma(t)) = \\theta\\left(\\gamma'(t)\\right) \\delta t \\,," }, { "math_id": 5, "text": "\\varphi(p+v)=\\varphi(p)+T(v)" }, { "math_id": 6, "text": "\\pi(p;\\mathbf{e}_1, \\dots ,\\mathbf{e}_n) = p" }, { "math_id": 7, "text": "\\varepsilon_i(p;\\mathbf{e}_1,\\dots , \\mathbf{e}_n) = \\mathbf{e}_i\\,." }, { "math_id": 8, "text": "\\begin{align}\n\\mathrm{d}\\pi &= \\theta^1\\varepsilon_1+\\cdots+\\theta^n\\varepsilon_n\\\\\n\\mathrm{d}\\varepsilon_i &= \\omega^1_i\\varepsilon_1+\\cdots+\\omega^n_i\\varepsilon_n\n\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\n\\mathrm{d}\\theta^j - \\sum_i\\omega^j_i\\wedge\\theta^i &=0\\\\\n\\mathrm{d}\\omega^j_i - \\sum_k \\omega^j_k\\wedge\\omega^k_i &=0\\,.\n\\end{align}" }, { "math_id": 10, "text": "\\mathrm{d}\\eta + \\tfrac12[\\eta\\wedge\\eta] = 0," }, { "math_id": 11, "text": " \\mathrm{d}\\theta + \\omega\\wedge\\theta \\quad \\text{and} \\quad \\mathrm{d}\\omega + \\omega\\wedge\\omega\\,," }, { "math_id": 12, "text": "T^\\nabla(X,Y) = \\nabla_X Y - \\nabla_Y X - [X,Y]." }, { "math_id": 13, "text": "R^\\nabla_{X,Y}Z = \\nabla_X\\nabla_Y Z - \\nabla_Y\\nabla_X Z - \\nabla_{[X,Y]}Z." }, { "math_id": 14, "text": "[X,Y]=\\left(X^j \\partial_j Y^i - Y^j \\partial_j X^i\\right)\\partial_i" }, { "math_id": 15, "text": "\\partial_i = \\left(\\frac{\\partial}{\\partial\\xi^i}\\right)_p\\,," }, { "math_id": 16, "text": "\\dot\\gamma" }, { "math_id": 17, "text": "\\tau_t^s\\dot\\gamma(s) = \\dot\\gamma(t)" }, { "math_id": 18, "text": "\\nabla_{\\dot\\gamma(t)}\\dot\\gamma(t) = 0" }, { "math_id": 19, "text": "\\nabla_{\\dot{\\gamma}}\\dot{\\gamma} = k\\dot{\\gamma}" }, { "math_id": 20, "text": "\\dot{C}_t = \\tau_t^0\\dot{x}_t\\,,\\quad C_0 = 0." }, { "math_id": 21, "text": "\\langle Y_x, x\\rangle = 0\\,, \\quad \\forall x\\in \\mathbf{S}^2." }, { "math_id": 22, "text": "(\\nabla_Z Y)_x = \\mathrm{d}Y_x(Z_x) + \\langle Z_x,Y_x\\rangle x" }, { "math_id": 23, "text": "\\bigl\\langle(\\nabla_Z Y)_x,x\\bigr\\rangle = 0\\,.\\qquad \\text{(Eq.1)}" }, { "math_id": 24, "text": "\\begin{align} f: \\mathbf{S}^2&\\to \\mathbf{R}\\\\ x &\\mapsto \\langle Y_x, x\\rangle\\,.\\end{align}" }, { "math_id": 25, "text": "\\mathrm{d}f_x(Z_x) = \\bigl\\langle (\\mathrm{d} Y)_x(Z_x),x(\\gamma'(t))\\bigr\\rangle + \\langle Y_x, Z_x\\rangle = 0\\,." } ]
https://en.wikipedia.org/wiki?curid=607690
60770
Curvature
Mathematical measure of how much a curve or surface deviates from flatness In mathematics, curvature is any of several strongly related concepts in geometry that intuitively measure the amount by which a curve deviates from being a straight line or by which a surface deviates from being a plane. If a curve or surface is contained in a larger space, curvature can be defined "extrinsically" relative to the ambient space. Curvature of Riemannian manifolds of dimension at least two can be defined "intrinsically" without reference to a larger space. For curves, the canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius. Smaller circles bend more sharply, and hence have higher curvature. The curvature "at a point" of a differentiable curve is the curvature of its osculating circle — that is, the circle that best approximates the curve near this point. The curvature of a straight line is zero. In contrast to the tangent, which is a vector quantity, the curvature at a point is typically a scalar quantity, that is, it is expressed by a single real number. For surfaces (and, more generally for higher-dimensional manifolds), that are embedded in a Euclidean space, the concept of curvature is more complex, as it depends on the choice of a direction on the surface or manifold. This leads to the concepts of "maximal curvature", "minimal curvature", and "mean curvature". History. In "Tractatus de configurationibus qualitatum et motuum," the 14th-century philosopher and mathematician Nicole Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he has the curvature as being inversely proportional to the radius; and he attempts to extend this idea to other curves as a continuously varying magnitude. The curvature of a differentiable curve was originally defined through osculating circles. In this setting, Augustin-Louis Cauchy showed that the center of curvature is the intersection point of two infinitely close normal lines to the curve. Plane curves. Intuitively, the curvature describes for any part of a curve how much the curve direction changes over a small distance travelled (e.g. angle in rad/m), so it is a measure of the instantaneous rate of change of "direction" of a point that moves on the curve: the larger the curvature, the larger this rate of change. In other words, the curvature measures how fast the unit tangent vector to the curve at point p rotates when point p moves at unit speed along the curve. In fact, it can be proved that this instantaneous rate of change is exactly the curvature. More precisely, suppose that the point is moving on the curve at a constant speed of one unit, that is, the position of the point "P"("s") is a function of the parameter s, which may be thought as the time or as the arc length from a given origin. Let T("s") be a unit tangent vector of the curve at "P"("s"), which is also the derivative of "P"("s") with respect to s. Then, the derivative of T("s") with respect to s is a vector that is normal to the curve and whose length is the curvature. To be meaningful, the definition of the curvature and its different characterizations require that the curve is continuously differentiable near P, for having a tangent that varies continuously; it requires also that the curve is twice differentiable at P, for insuring the existence of the involved limits, and of the derivative of T("s"). 
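The rate-of-rotation description just given can be checked numerically. The following sketch (my own illustration, with an assumed radius) samples a circle of radius R by arc length, forms the unit tangent T(s) by finite differences, and confirms that the norm of dT/ds is close to 1/R.

```python
# Sketch (illustrative): for a circle of radius R, the derivative of the unit
# tangent with respect to arc length has norm 1/R.
import numpy as np

R = 2.5                                        # assumed radius
s = np.linspace(0.0, 2 * np.pi * R, 20001)     # arc-length samples over one full turn
ds = s[1] - s[0]

# Arc-length parametrization of the circle: P(s) = (R cos(s/R), R sin(s/R)).
P = np.stack([R * np.cos(s / R), R * np.sin(s / R)], axis=1)

# Unit tangent T(s) = dP/ds and its derivative, by finite differences.
T = np.gradient(P, ds, axis=0)
dT = np.gradient(T, ds, axis=0)
kappa = np.linalg.norm(dT, axis=1)

# Away from the endpoints the estimated curvature should be close to 1/R.
print(kappa[100:-100].mean(), "vs 1/R =", 1.0 / R)
```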
The characterization of the curvature in terms of the derivative of the unit tangent vector is probably less intuitive than the definition in terms of the osculating circle, but formulas for computing the curvature are easier to deduce. Therefore, and also because of its use in kinematics, this characterization is often given as a definition of the curvature. Osculating circle. Historically, the curvature of a differentiable curve was defined through the osculating circle, which is the circle that best approximates the curve at a point. More precisely, given a point P on a curve, every other point Q of the curve defines a circle (or sometimes a line) passing through Q and tangent to the curve at P. The osculating circle is the limit, if it exists, of this circle when Q tends to P. Then the "center" and the "radius of curvature" of the curve at P are the center and the radius of the osculating circle. The curvature is the reciprocal of the radius of curvature. That is, the curvature is formula_0 where R is the radius of curvature (the whole circle has this curvature, it can be read as turn 2π over the length 2πR). This definition is difficult to manipulate and to express in formulas. Therefore, other equivalent definitions have been introduced. In terms of arc-length parametrization. Every differentiable curve can be parametrized with respect to arc length. In the case of a plane curve, this means the existence of a parametrization γ("s") = ("x"("s"), "y"("s")), where x and y are real-valued differentiable functions whose derivatives satisfy formula_1 This means that the tangent vector formula_2 has a length equal to one and is thus a unit tangent vector. If the curve is twice differentiable, that is, if the second derivatives of x and y exist, then the derivative of T("s") exists. This vector is normal to the curve, its length is the curvature "κ"("s"), and it is oriented toward the center of curvature. That is, formula_3 Moreover, because the radius of curvature is (assuming "𝜿"("s") ≠ 0) formula_4 and the center of curvature is on the normal to the curve, the center of curvature is the point formula_5 If N("s") is the unit normal vector obtained from T("s") by a counterclockwise rotation of π/2, then formula_6 with "k"("s") = ± "κ"("s"). The real number "k"("s") is called the oriented curvature or signed curvature. It depends on both the orientation of the plane (definition of counterclockwise), and the orientation of the curve provided by the parametrization. In fact, the change of variable "s" → –"s" provides another arc-length parametrization, and changes the sign of "k"("s"). In terms of a general parametrization. Let γ("t") = ("x"("t"), "y"("t")) be a proper parametric representation of a twice differentiable plane curve. Here "proper" means that on the domain of definition of the parametrization, the derivative is defined, differentiable and nowhere equal to the zero vector. With such a parametrization, the signed curvature is formula_7 where primes refer to derivatives with respect to t. The curvature "κ" is thus formula_8 These can be expressed in a coordinate-free way as formula_9 These formulas can be derived from the special case of arc-length parametrization in the following way. The above condition on the parametrisation implies that the arc length s is a differentiable monotonic function of the parameter t, and conversely that t is a monotonic function of s. Moreover, by changing, if needed, s to –"s", one may suppose that these functions are increasing and have a positive derivative.
Using notation of the preceding section and the chain rule, one has formula_10 and thus, by taking the norm of both sides formula_11 where the prime denotes differentiation with respect to t. The curvature is the norm of the derivative of T with respect to s. By using the above formula and the chain rule this derivative and its norm can be expressed in terms of γ′ and γ″ only, with the arc-length parameter s completely eliminated, giving the above formulas for the curvature. Graph of a function. The graph of a function "y" "f"("x"), is a special case of a parametrized curve, of the form formula_12 As the first and second derivatives of x are 1 and 0, previous formulas simplify to formula_13 for the curvature, and to formula_14 for the signed curvature. In the general case of a curve, the sign of the signed curvature is somewhat arbitrary, as it depends on the orientation of the curve. In the case of the graph of a function, there is a natural orientation by increasing values of x. This makes significant the sign of the signed curvature. The sign of the signed curvature is the same as the sign of the second derivative of f. If it is positive then the graph has an upward concavity, and, if it is negative the graph has a downward concavity. If it is zero, then one has an inflection point or an undulation point. When the slope of the graph (that is the derivative of the function) is small, the signed curvature is well approximated by the second derivative. More precisely, using big O notation, one has formula_15 It is common in physics and engineering to approximate the curvature with the second derivative, for example, in beam theory or for deriving the wave equation of a string under tension, and other applications where small slopes are involved. This often allows systems that are otherwise nonlinear to be treated approximately as linear. Polar coordinates. If a curve is defined in polar coordinates by the radius expressed as a function of the polar angle, that is r is a function of θ, then its curvature is formula_16 where the prime refers to differentiation with respect to θ. This results from the formula for general parametrizations, by considering the parametrization formula_17 Implicit curve. For a curve defined by an implicit equation "F"("x", "y") 0 with partial derivatives denoted Fx , Fy , Fxx , Fxy , Fyy , the curvature is given by formula_18 The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Note that changing F into –"F" would not change the curve defined by "F"("x", "y") 0, but it would change the sign of the numerator if the absolute value were omitted in the preceding formula. A point of the curve where "Fx" = "Fy" = 0 is a singular point, which means that the curve is not differentiable at this point, and thus that the curvature is not defined (most often, the point is either a crossing point or a cusp). The above formula for the curvature can be derived from the expression of the curvature of the graph of a function by using the implicit function theorem and the fact that, on such a curve, one has formula_19 Examples. It can be useful to verify on simple examples that the different formulas given in the preceding sections give the same result. Circle. A common parametrization of a circle of radius r is γ("t") = ("r" cos "t", "r" sin "t"). 
The formula for the curvature gives formula_20 It follows, as expected, that the radius of curvature is the radius of the circle, and that the center of curvature is the center of the circle. The circle is a rare case where the arc-length parametrization is easy to compute, as it is formula_21 It is an arc-length parametrization, since the norm of formula_22 is equal to one. This parametrization gives the same value for the curvature, as it amounts to division by "r"3 in both the numerator and the denominator in the preceding formula. The same circle can also be defined by the implicit equation "F"("x", "y") = 0 with "F"("x", "y") = "x"2 + "y"2 – "r"2. Then, the formula for the curvature in this case gives formula_23 Parabola. Consider the parabola "y" = "ax"2 + "bx" + "c". It is the graph of a function, with derivative 2"ax" + "b", and second derivative 2"a". So, the signed curvature is formula_24 It has the sign of a for all values of x. This means that, if "a" &gt; 0, the concavity is upward directed everywhere; if "a" &lt; 0, the concavity is downward directed; for "a" = 0, the curvature is zero everywhere, confirming that the parabola degenerates into a line in this case. The (unsigned) curvature is maximal for "x" = –"b"/(2"a"), that is, at the stationary point (zero derivative) of the function, which is the vertex of the parabola. Consider the parametrization γ("t") = ("t", "at"2 + "bt" + "c") = ("x", "y"). The first derivative of x is 1, and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with x replaced by t, if we use primes for derivatives with respect to the parameter t. The same parabola can also be defined by the implicit equation "F"("x", "y") = 0 with "F"("x", "y") = "ax"2 + "bx" + "c" – "y". As "Fy" = –1, and "Fyy" = "Fxy" = 0, one obtains exactly the same value for the (unsigned) curvature. However, the signed curvature is meaningless here, as –"F"("x", "y") = 0 is a valid implicit equation for the same parabola, which gives the opposite sign for the curvature. Frenet–Serret formulas for plane curves. The expression of the curvature in terms of arc-length parametrization is essentially the first Frenet–Serret formula formula_25 where the primes refer to the derivatives with respect to the arc length s, and N("s") is the normal unit vector in the direction of T′(s). As planar curves have zero torsion, the second Frenet–Serret formula provides the relation formula_26 For a general parametrization by a parameter t, one needs expressions involving derivatives with respect to t. As these are obtained by multiplying by the derivatives with respect to s, one has, for any proper parametrization formula_27 Curvature comb. A "curvature comb" can be used to represent graphically the curvature of every point on a curve. If formula_28 is a parametrised curve, its comb is defined as the parametrized curve formula_29 where formula_30 are the curvature and normal vector and formula_31 is a scaling factor (to be chosen so as to enhance the graphical representation). Space curves. As in the case of curves in two dimensions, the curvature of a regular space curve C in three dimensions (and higher) is the magnitude of the acceleration of a particle moving with unit speed along a curve.
Thus if γ("s") is the arc-length parametrization of C then the unit tangent vector T("s") is given by formula_32 and the curvature is the magnitude of the acceleration: formula_33 The direction of the acceleration is the unit normal vector N("s"), which is defined by formula_34 The plane containing the two vectors T("s") and N("s") is the osculating plane to the curve at γ("s"). The curvature has the following geometrical interpretation. There exists a circle in the osculating plane tangent to γ("s") whose Taylor series to second order at the point of contact agrees with that of γ("s"). This is the osculating circle to the curve. The radius of the circle "R"("s") is called the radius of curvature, and the curvature is the reciprocal of the radius of curvature: formula_35 The tangent, curvature, and normal vector together describe the second-order behavior of a curve near a point. In three dimensions, the third-order behavior of a curve is described by a related notion of torsion, which measures the extent to which a curve tends to move as a helical path in space. The torsion and curvature are related by the Frenet–Serret formulas (in three dimensions) and their generalization (in higher dimensions). General expressions. For a parametrically-defined space curve in three dimensions given in Cartesian coordinates by γ("t") ("x"("t"), "y"("t"), "z"("t")), the curvature is formula_36 where the prime denotes differentiation with respect to the parameter t. This can be expressed independently of the coordinate system by means of the formula formula_37 where × denotes the vector cross product. The following formula is valid for the curvature of curves in a Euclidean space of any dimension: formula_38 Curvature from arc and chord length. Given two points P and Q on C, let "s"("P","Q") be the arc length of the portion of the curve between P and Q and let "d"("P","Q") denote the length of the line segment from P to Q. The curvature of C at P is given by the limit formula_39 where the limit is taken as the point Q approaches P on C. The denominator can equally well be taken to be "d"("P","Q")3. The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of P, this definition of the curvature can sometimes accommodate a singularity at P. The formula follows by verifying it for the osculating circle. Surfaces. The curvature of curves drawn on a surface is the main tool for the defining and studying the curvature of the surface. Curves on surfaces. For a curve drawn on a surface (embedded in three-dimensional Euclidean space), several curvatures are defined, which relates the direction of curvature to the surface's unit normal vector, including the: Any non-singular curve on a smooth surface has its tangent vector T contained in the tangent plane of the surface. The normal curvature, "k"n, is the curvature of the curve projected onto the plane containing the curve's tangent T and the surface normal u; the geodesic curvature, "k"g, is the curvature of the curve projected onto the surface's tangent plane; and the geodesic torsion (or relative torsion), "τ"r, measures the rate of change of the surface normal around the curve's tangent. Let the curve be arc-length parametrized, and let t u × T so that T, t, u form an orthonormal basis, called the Darboux frame. The above quantities are related by: formula_40 Principal curvature. 
All curves on the surface with the same tangent vector at a given point will have the same normal curvature, which is the same as the curvature of the curve obtained by intersecting the surface with the plane containing T and u. Taking all possible tangent vectors, the maximum and minimum values of the normal curvature at a point are called the principal curvatures, "k"1 and "k"2, and the directions of the corresponding tangent vectors are called principal normal directions. Normal sections. Curvature can be evaluated along surface normal sections, similar to above (see for example the Earth radius of curvature). Developable surfaces. Some curved surfaces, such as those made from a smooth sheet of paper, can be flattened down into the plane without distorting their intrinsic features in any way. Such developable surfaces have zero Gaussian curvature (see below). Gaussian curvature. In contrast to curves, which do not have intrinsic curvature, but do have extrinsic curvature (they only have a curvature given an embedding), surfaces can have intrinsic curvature, independent of an embedding. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, "k"1"k"2. It has a dimension of length−2 and is positive for spheres, negative for one-sheet hyperboloids and zero for planes and cylinders. It determines whether a surface is locally convex (when it is positive) or locally saddle-shaped (when it is negative). Gaussian curvature is an "intrinsic" property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. For example, an ant living on a sphere could measure the sum of the interior angles of a triangle and determine that it was greater than 180 degrees, implying that the space it inhabited had positive curvature. On the other hand, an ant living on a cylinder would not detect any such departure from Euclidean geometry; in particular the ant could not detect that the two surfaces have different mean curvatures (see below), which is a purely extrinsic type of curvature. Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss's celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking. An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. It runs around P while the thread is completely stretched and measures the length "C"("r") of one complete trip around P. If the surface were flat, the ant would find "C"("r") 2π"r". On curved surfaces, the formula for "C"("r") will be different, and the Gaussian curvature K at the point P can be computed by the Bertrand–Diguet–Puiseux theorem as formula_41 The integral of the Gaussian curvature over the whole surface is closely related to the surface's Euler characteristic; see the Gauss–Bonnet theorem. The discrete analog of curvature, corresponding to curvature being concentrated at a point and particularly useful for polyhedra, is the (angular) defect; the analog for the Gauss–Bonnet theorem is Descartes' theorem on total angular defect. Because (Gaussian) curvature can be defined without reference to an embedding space, it is not necessary that a surface be embedded in a higher-dimensional space in order to be curved. 
Such an intrinsically curved two-dimensional surface is a simple example of a Riemannian manifold. Mean curvature. The mean curvature is an "extrinsic" measure of curvature equal to half the sum of the principal curvatures, ("k"1 + "k"2)/2. It has a dimension of length−1. Mean curvature is closely related to the first variation of surface area. In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding, for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero. Second fundamental form. The intrinsic and extrinsic curvature of a surface can be combined in the second fundamental form. This is a quadratic form in the tangent plane to the surface at a point whose value at a particular tangent vector X to the surface is the normal component of the acceleration of a curve along the surface tangent to X; that is, it is the normal curvature to a curve tangent to X (see above). Symbolically, formula_42 where N is the unit normal to the surface. For unit tangent vectors X, the second fundamental form assumes the maximum value "k"1 and minimum value "k"2, which occur in the principal directions u1 and u2, respectively. Thus, by the principal axis theorem, the second fundamental form is formula_43 Thus the second fundamental form encodes both the intrinsic and extrinsic curvatures. Shape operator. An encapsulation of surface curvature can be found in the shape operator, "S", which is a self-adjoint linear operator from the tangent plane to itself (specifically, the differential of the Gauss map). For a surface with tangent vectors X and normal N, the shape operator can be expressed compactly in index summation notation as formula_44 The Weingarten equations give the value of "S" in terms of the coefficients of the first and second fundamental forms as formula_45 The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace. Curvature of space. By extension of the former argument, a space of three or more dimensions can be intrinsically curved. The curvature is "intrinsic" in the sense that it is a property defined at every point in the space, rather than a property defined with respect to a larger space that contains it. In general, a curved space may or may not be conceived as being embedded in a higher-dimensional ambient space; if not then its curvature can only be defined intrinsically. After the discovery of the intrinsic definition of curvature, which is closely connected with non-Euclidean geometry, many mathematicians and scientists questioned whether ordinary physical space might be curved, although the success of Euclidean geometry up to that time meant that the radius of curvature must be astronomically large. In the theory of general relativity, which describes gravity and cosmology, the idea is slightly generalised to the "curvature of spacetime"; in relativity theory spacetime is a pseudo-Riemannian manifold. Once a time coordinate is defined, the three-dimensional space corresponding to a particular time is generally a curved Riemannian manifold; but since the time coordinate choice is largely arbitrary, it is the underlying spacetime curvature that is physically significant.
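The surface quantities described above — first and second fundamental forms, shape operator, Gaussian and mean curvature — can be made concrete with a short symbolic computation. The following sketch is my own illustration, not part of the article: the radius-"a" sphere, its parametrization and the restriction to colatitudes in (0, π) are assumptions, and the sign of the mean curvature depends on the chosen normal. It builds the two fundamental forms, takes the shape operator S = I−1II, and recovers the Gaussian curvature 1/"a"2 from its determinant and the mean curvature ±1/"a" from half its trace.

```python
# Sketch (illustrative assumptions): curvature of a sphere of radius a from the
# first/second fundamental forms and the shape operator S = I^{-1} II.
import sympy as sp

a, u, v = sp.symbols('a u v', positive=True)

# Assumed parametrization (u = colatitude in (0, pi), v = longitude).
r = sp.Matrix([a * sp.sin(u) * sp.cos(v),
               a * sp.sin(u) * sp.sin(v),
               a * sp.cos(u)])
r_u, r_v = r.diff(u), r.diff(v)

# On 0 < u < pi, |r_u x r_v| = a^2 sin(u), so the outward unit normal is:
n = sp.simplify(r_u.cross(r_v) / (a**2 * sp.sin(u)))

# First and second fundamental forms as 2x2 matrices.
I = sp.Matrix([[r_u.dot(r_u), r_u.dot(r_v)],
               [r_u.dot(r_v), r_v.dot(r_v)]])
II = sp.Matrix([[r.diff(u, 2).dot(n), r.diff(u, v).dot(n)],
                [r.diff(u, v).dot(n), r.diff(v, 2).dot(n)]])

# Shape operator: determinant = Gaussian curvature, half the trace = mean curvature.
S = sp.simplify(I.inv() * II)
K = sp.simplify(S.det())          # expected 1/a**2
H = sp.simplify(S.trace() / 2)    # expected -1/a with this (outward) normal
print(K, H)
```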
Although an arbitrarily curved space is very complex to describe, the curvature of a space which is locally isotropic and homogeneous is described by a single Gaussian curvature, as for a surface; mathematically these are strong conditions, but they correspond to reasonable physical assumptions (all points and all directions are indistinguishable). A positive curvature corresponds to the inverse square radius of curvature; an example is a sphere or hypersphere. An example of negatively curved space is hyperbolic geometry (see also: non-positive curvature). A space or space-time with zero curvature is called flat. For example, Euclidean space is an example of a flat space, and Minkowski space is an example of a flat spacetime. There are other examples of flat geometries in both settings, though. A torus or a cylinder can both be given flat metrics, but differ in their topology. Other topologies are also possible for curved space &lt;templatestyles src="Crossreference/styles.css" /&gt;. Generalizations. The mathematical notion of "curvature" is also defined in much more general contexts. Many of these generalizations emphasize different aspects of the curvature as it is understood in lower dimensions. One such generalization is kinematic. The curvature of a curve can naturally be considered as a kinematic quantity, representing the force felt by a certain observer moving along the curve; analogously, curvature in higher dimensions can be regarded as a kind of tidal force (this is one way of thinking of the sectional curvature). This generalization of curvature depends on how nearby test particles diverge or converge when they are allowed to move freely in the space; see Jacobi field. Another broad generalization of curvature comes from the study of parallel transport on a surface. For instance, if a vector is moved around a loop on the surface of a sphere keeping parallel throughout the motion, then the final position of the vector may not be the same as the initial position of the vector. This phenomenon is known as holonomy. Various generalizations capture in an abstract form this idea of curvature as a measure of holonomy; see curvature form. A closely related notion of curvature comes from gauge theory in physics, where the curvature represents a field and a vector potential for the field is a quantity that is in general path-dependent: it may change if an observer moves around a loop. Two more generalizations of curvature are the scalar curvature and Ricci curvature. In a curved surface such as the sphere, the area of a disc on the surface differs from the area of a disc of the same radius in flat space. This difference (in a suitable limit) is measured by the scalar curvature. The difference in area of a sector of the disc is measured by the Ricci curvature. Each of the scalar curvature and Ricci curvature are defined in analogous ways in three and higher dimensions. They are particularly important in relativity theory, where they both appear on the side of Einstein's field equations that represents the geometry of spacetime (the other side of which represents the presence of matter and energy). These generalizations of curvature underlie, for instance, the notion that curvature can be a property of a measure; see curvature of a measure. Another generalization of curvature relies on the ability to compare a curved space with another space that has "constant" curvature. Often this is done with triangles in the spaces. 
The notion of a triangle makes sense in metric spaces, and this gives rise to CAT("k") spaces. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\kappa = \\frac{1}{R}," }, { "math_id": 1, "text": "\\|\\boldsymbol{\\gamma}'\\| = \\sqrt{x'(s)^2+y'(s)^2} = 1." }, { "math_id": 2, "text": "\\mathbf T(s)=\\bigl(x'(s),y'(s)\\bigr)" }, { "math_id": 3, "text": "\\begin{align}\n\\mathbf{T}(s) &= \\boldsymbol{\\gamma}'(s), \\\\[8mu]\n\\|\\mathbf{T}(s)\\|^2 &= 1 \\ \\text{(constant)} \\implies \\mathbf{T}'(s)\\cdot \\mathbf{T}(s) = 0, \\\\[5mu]\n\\kappa(s) &= \\|\\mathbf{T}'(s)\\| = \\|\\boldsymbol{\\gamma}''(s)\\| = \\sqrt{x''(s)^2+y''(s)^2}\n\\end{align}" }, { "math_id": 4, "text": "R(s)=\\frac{1}{\\kappa(s)}," }, { "math_id": 5, "text": " \\mathbf{C}(s)= \\boldsymbol{\\gamma}(s) + \\frac 1{\\kappa(s)^2}\\mathbf{T}'(s)." }, { "math_id": 6, "text": "\\mathbf{T}'(s)=k(s)\\mathbf{N}(s)," }, { "math_id": 7, "text": "k = \\frac{x'y''-y'x''}{\\bigl({x'}^2+{y'}^2\\bigr)\\vphantom{'}^{3/2}}," }, { "math_id": 8, "text": "\\kappa = \\frac{\\left|x'y''-y'x''\\right|}{\\bigl({x'}^2+{y'}^2\\bigr)\\vphantom{'}^{3/2}}." }, { "math_id": 9, "text": "\nk = \\frac{\\det\\left(\\boldsymbol{\\gamma}',\\boldsymbol{\\gamma}''\\right)}{\\|\\boldsymbol{\\gamma}'\\|^3},\\qquad\n\\kappa = \\frac{\\left|\\det\\left(\\boldsymbol{\\gamma}',\\boldsymbol{\\gamma}''\\right)\\right|}{\\|\\boldsymbol{\\gamma}'\\|^3}.\n" }, { "math_id": 10, "text": "\\frac{d\\boldsymbol{\\gamma}}{dt}= \\frac{ds}{dt}\\mathbf T," }, { "math_id": 11, "text": " \\frac{dt}{ds}= \\frac 1{\\|\\boldsymbol{\\gamma}'\\|}," }, { "math_id": 12, "text": "\\begin{align}\nx&=t\\\\\ny&=f(t).\n\\end{align}" }, { "math_id": 13, "text": "\\kappa = \\frac{\\left|y''\\right|}{\\bigl(1+{y'}^2\\bigr)\\vphantom{'}^{3/2}}," }, { "math_id": 14, "text": "k = \\frac{y''}{\\bigl(1+{y'}^2\\bigr)\\vphantom{'}^{3/2}}," }, { "math_id": 15, "text": "k(x)=y'' \\Bigl(1 + O\\bigl({\\textstyle y'}^2\\bigr) \\Bigr)." }, { "math_id": 16, "text": "\\kappa(\\theta) = \\frac{\\left|r^2 + 2{r'}^2 - r\\, r''\\right|}{\\bigl(r^2+{r'}^2 \\bigr)\\vphantom{'}^{3/2}}" }, { "math_id": 17, "text": "\\begin{align}\nx&=r(\\theta)\\cos \\theta\\\\\ny&=r(\\theta)\\sin \\theta\n\\end{align}" }, { "math_id": 18, "text": "\\kappa = \\frac{\\left|F_y^2F_{xx}-2F_xF_yF_{xy}+F_x^2F_{yy}\\right|}{\\bigl(F_x^2+F_y^2\\bigr)\\vphantom{'}^{3/2}}." }, { "math_id": 19, "text": "\\frac {dy}{dx}=-\\frac{F_x}{F_y}." }, { "math_id": 20, "text": "k(t)= \\frac{r^2\\sin^2 t + r^2\\cos^2 t}{\\bigl(r^2\\cos^2 t+r^2\\sin^2 t\\bigr)\\vphantom{'}^{3/2}} = \\frac 1r." }, { "math_id": 21, "text": "\\boldsymbol\\gamma(s)= \\left(r\\cos \\frac sr,\\, r\\sin \\frac sr\\right)." }, { "math_id": 22, "text": "\\boldsymbol\\gamma'(s) = \\left(-\\sin \\frac sr,\\, \\cos \\frac sr\\right)" }, { "math_id": 23, "text": "\\begin{align}\n\\kappa &= \\frac{\\left|F_y^2F_{xx}-2F_xF_yF_{xy}+F_x^2F_{yy}\\right|}{\\bigl(F_x^2+F_y^2\\bigr)\\vphantom{'}^{3/2}}\\\\\n&=\\frac{8y^2 + 8x^2}{\\bigl(4x^2+4y^2\\bigr)\\vphantom{'}^{3/2}}\\\\\n&=\\frac {8r^2}{\\bigl(4r^2\\bigr)\\vphantom{'}^{3/2}} =\\frac1r.\\end{align}" }, { "math_id": 24, "text": "k(x)=\\frac{2a}{ \\bigl(1+\\left(2ax+b\\right)^2\\bigr)\\vphantom{)}^{3/2}}." 
}, { "math_id": 25, "text": "\\mathbf T'(s) = \\kappa(s) \\mathbf N(s)," }, { "math_id": 26, "text": "\\begin{align}\n\\frac {d\\mathbf{N}}{ds} &= -\\kappa\\mathbf{T},\\\\\n &= -\\kappa\\frac{d\\boldsymbol{\\gamma}}{ds}.\n\\end{align}" }, { "math_id": 27, "text": "\n\\mathbf{N}'(t) = -\\kappa(t)\\boldsymbol{\\gamma}'(t).\n" }, { "math_id": 28, "text": "t \\mapsto x(t)" }, { "math_id": 29, "text": " t \\mapsto x(t) + d\\kappa(t)n(t)" }, { "math_id": 30, "text": "\\kappa, n" }, { "math_id": 31, "text": "d" }, { "math_id": 32, "text": "\\mathbf{T}(s) = \\boldsymbol{\\gamma}'(s)" }, { "math_id": 33, "text": "\\kappa(s) = \\|\\mathbf{T}'(s)\\| = \\|\\boldsymbol{\\gamma}''(s)\\|." }, { "math_id": 34, "text": "\\mathbf{N}(s) = \\frac{\\mathbf{T}'(s)}{\\|\\mathbf{T}'(s)\\|}." }, { "math_id": 35, "text": "\\kappa(s) = \\frac{1}{R(s)}." }, { "math_id": 36, "text": "\n\\kappa=\\frac{\\sqrt{\\bigl(z''y'-y''z'\\bigr)\\vphantom{'}^2+\\bigl(x''z'-z''x'\\bigr)\\vphantom{'}^2+\\bigl(y''x'-x''y'\\bigr)\\vphantom{'}^2}}\n {\\bigl({x'}^2+{y'}^2+{z'}^2\\bigr)\\vphantom{'}^{3/2}},\n" }, { "math_id": 37, "text": "\\kappa = \\frac{\\bigl\\|\\boldsymbol{\\gamma}' \\times \\boldsymbol{\\gamma}''\\bigr\\|}{\\bigl\\|\\boldsymbol{\\gamma}'\\bigr\\|\\vphantom{'}^3}" }, { "math_id": 38, "text": "\n\\kappa = \\frac{\\sqrt{ \\bigl\\|\\boldsymbol{\\gamma}'\\bigr\\|\\vphantom{'}^2 \\bigl\\|\\boldsymbol{\\gamma}''\\bigr\\|\\vphantom{'}^2- \\bigl(\\boldsymbol{\\gamma}'\\cdot \\boldsymbol{\\gamma}''\\bigr)\\vphantom{'}^2 } }\n{\\bigl\\|\\boldsymbol{\\gamma}'\\bigr\\|\\vphantom{'}^3}.\n" }, { "math_id": 39, "text": "\\kappa(P) = \\lim_{Q\\to P}\\sqrt\\frac{24\\bigl(s(P,Q)-d(P,Q)\\bigr)}{s(P,Q)\\vphantom{Q}^3}" }, { "math_id": 40, "text": "\\begin{pmatrix} \\mathbf{T}'\\\\ \\mathbf{t}'\\\\ \\mathbf{u}' \\end{pmatrix} =\n\\begin{pmatrix}\n0&\\kappa_\\mathrm{g}&\\kappa_\\mathrm{n}\\\\\n-\\kappa_\\mathrm{g}&0&\\tau_\\mathrm{r}\\\\\n-\\kappa_\\mathrm{n}&-\\tau_\\mathrm{r}&0\n\\end{pmatrix}\n\\begin{pmatrix} \\mathbf{T}\\\\ \\mathbf{t}\\\\ \\mathbf{u} \\end{pmatrix}" }, { "math_id": 41, "text": " K = \\lim_{r\\to 0^+} 3\\left(\\frac{2\\pi r-C(r)}{\\pi r^3}\\right)." }, { "math_id": 42, "text": "\\operatorname{I\\!I}(\\mathbf{X},\\mathbf{X}) = \\mathbf{N}\\cdot (\\nabla_\\mathbf{X} \\mathbf{X})" }, { "math_id": 43, "text": "\\operatorname{I\\!I}(\\mathbf{X},\\mathbf{X}) = k_1\\left(\\mathbf{X}\\cdot \\mathbf{u}_1\\right)^2 + k_2\\left(\\mathbf{X}\\cdot \\mathbf{u}_2\\right)^2." }, { "math_id": 44, "text": "\\partial_a \\mathbf{N} = -S_{ba} \\mathbf{X}_{b} ." }, { "math_id": 45, "text": "S= \\left(EG-F^2\\right)^{-1}\\begin{pmatrix}\neG-fF& fG-gF \\\\\nfE-eF & gE- fF\\end{pmatrix}." } ]
https://en.wikipedia.org/wiki?curid=60770
60771035
3-4-3-12 tiling
In geometry of the Euclidean plane, the 3-4-3-12 tiling is one of 20 2-uniform tilings of the Euclidean plane by regular polygons, containing regular triangles, squares, and dodecagons, arranged in two vertex configurations: 3.4.3.12 and 3.12.12. The "3.12.12" vertex figure alone generates a truncated hexagonal tiling, while the "3.4.3.12" only exists in this 2-uniform tiling. There are two 3-uniform tilings that contain both of these vertex figures along with one more. It has square symmetry, p4m, [4,4], (*442). It is also called a demiregular tiling by some authors. Circle Packing. This 2-uniform tiling can be used as a circle packing. Cyan circles are in contact with 3 other circles (1 cyan, 2 pink), corresponding to the V3.122 planigon, and pink circles are in contact with 4 other circles (2 cyan, 2 pink), corresponding to the V3.4.3.12 planigon. It is homeomorphic to the ambo operation on the tiling, with the cyan and pink gap polygons corresponding to the cyan and pink circles (one dimensional duals to the respective planigons). Both images coincide. Dual tiling. The dual tiling has kite ('ties') and isosceles triangle faces, defined by face configurations: V3.4.3.12 and V3.12.12. The kites meet in sets of 4 around a center vertex, and the triangles are in pairs making planigon rhombi. Every four kites and four isosceles triangles make a square of side length formula_0. This is one of the only dual uniform tilings which uses only planigons (and semiplanigons) containing a 30° angle. Conversely, 3.4.3.12; 3.122 is one of the only uniform tilings in which every vertex is contained on a dodecagon. Related tilings. It has 2 related 3-uniform tilings that include both 3.4.3.12 and 3.12.12 vertex figures: This tiling can be seen in a series as a lattice of 4"n"-gons starting from the square tiling. For 16-gons ("n"=4), the gaps can be filled with isogonal octagons and isosceles triangles. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
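As a quick arithmetic check (added for illustration), the two vertex configurations are consistent with the requirement that the interior angles of the regular polygons meeting at each vertex sum to 360°; the short Python snippet below verifies this for 3.4.3.12 and 3.12.12.

```python
def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return 180.0 * (n - 2) / n

for config in ([3, 4, 3, 12], [3, 12, 12]):
    print(config, sum(interior_angle(n) for n in config))   # both sum to 360.0
```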
[ { "math_id": 0, "text": "2+\\sqrt{3}" } ]
https://en.wikipedia.org/wiki?curid=60771035
60777609
Liñán's flame speed
In combustion, Liñán's flame speed provides an estimate of the upper limit for edge-flame propagation velocity when the flame curvature is small. The formula is named after Amable Liñán. When the flame thickness is much smaller than the mixing-layer thickness through which the edge flame is propagating, a flame speed can be defined as the propagating speed of the flame front with respect to a region far ahead of the flame. For small flame curvatures (flame stretch), each point of the flame front propagates at a laminar planar premixed speed formula_0 that depends on a local equivalence ratio formula_1 just ahead of the flame. However, the flame front as a whole does not propagate at a speed formula_0, since the mixture ahead of the flame front undergoes thermal expansion due to heating by the flame front, which helps the flame front propagate faster with respect to the region far ahead of the flame front. Liñán estimated the edge flame speed to be: formula_2 where formula_3 and formula_4 are the densities of the fluid far upstream and far downstream of the flame front. Here formula_5 is the stoichiometric value (formula_6) of the planar speed. Due to the thermal expansion, streamlines diverge as they approach the flame and pressure builds up just ahead of the flame. The scaling law for the flame speed has been verified experimentally. In the constant-density approximation, this influence of density variations disappears and the upper limit of the edge flame speed is given by the maximum value of formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
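As a purely illustrative numerical example (the input values below are assumptions, not figures from the text), the scaling relation gives the following upper-limit estimate for a representative unburnt-to-burnt density ratio of about 7 and an assumed stoichiometric planar flame speed of 0.4 m/s.

```python
import math

def edge_flame_speed_upper_limit(S_L0, rho_u, rho_b):
    """Linan's estimated upper limit: U ~ S_L0 * sqrt(rho_u / rho_b)."""
    return S_L0 * math.sqrt(rho_u / rho_b)

# Assumed, order-of-magnitude inputs (not from the article):
S_L0 = 0.4                 # stoichiometric planar flame speed, m/s
rho_u, rho_b = 7.0, 1.0    # unburnt and burnt gas densities (ratio ~7)
print(edge_flame_speed_upper_limit(S_L0, rho_u, rho_b))   # ~1.06 m/s
```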
[ { "math_id": 0, "text": "S_L" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\frac{U}{S_L^0} \\sim \\left(\\frac{\\rho_u}{\\rho_b}\\right)^{1/2}," }, { "math_id": 3, "text": "\\rho_u" }, { "math_id": 4, "text": "\\rho_b" }, { "math_id": 5, "text": "S_L^0" }, { "math_id": 6, "text": "\\phi=1" } ]
https://en.wikipedia.org/wiki?curid=60777609
6077963
Biholomorphism
Bijective holomorphic function with a holomorphic inverse In the mathematical theory of functions of one or more complex variables, and also in complex algebraic geometry, a biholomorphism or biholomorphic function is a bijective holomorphic function whose inverse is also holomorphic. Formal definition. Formally, a "biholomorphic function" is a function formula_0 defined on an open subset "U" of the formula_1-dimensional complex space C"n" with values in C"n" which is holomorphic and one-to-one, such that its image is an open set formula_2 in C"n" and the inverse formula_3 is also holomorphic. More generally, "U" and "V" can be complex manifolds. As in the case of functions of a single complex variable, a sufficient condition for a holomorphic map to be biholomorphic onto its image is that the map is injective, in which case the inverse is also holomorphic (e.g., see Gunning 1990, Theorem I.11 or Corollary E.10 pg. 57). If there exists a biholomorphism formula_4, we say that "U" and "V" are biholomorphically equivalent or that they are biholomorphic. Riemann mapping theorem and generalizations. If formula_5 every simply connected open set other than the whole complex plane is biholomorphic to the unit disc (this is the Riemann mapping theorem). The situation is very different in higher dimensions. For example, open unit balls and open unit polydiscs are not biholomorphically equivalent for formula_6 In fact, there does not exist even a proper holomorphic function from one to the other. Alternative definitions. In the case of maps "f" : "U" → C defined on an open subset "U" of the complex plane C, some authors (e.g., Freitag 2009, Definition IV.4.1) define a conformal map to be an injective map with nonzero derivative i.e., "f"’("z")≠ 0 for every "z" in "U". According to this definition, a map "f" : "U" → C is conformal if and only if "f": "U" → "f"("U") is biholomorphic. Notice that per definition of biholomorphisms, nothing is assumed about their derivatives, so, this equivalence contains the claim that a homeomorphism that is complex differentiable must actually have nonzero derivative everywhere. Other authors (e.g., Conway 1978) define a conformal map as one with nonzero derivative, but without requiring that the map be injective. According to this weaker definition, a conformal map need not be biholomorphic, even though it is locally biholomorphic, for example, by the inverse function theorem. For example, if "f": "U" → "U" is defined by "f"("z") = "z"2 with "U" = C–{0}, then "f" is conformal on "U", since its derivative "f"’("z") = 2"z" ≠ 0, but it is not biholomorphic, since it is 2-1. References. "This article incorporates material from biholomorphically equivalent on PlanetMath, which is licensed under the ."
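A small numerical illustration (not part of the original text) of the closing example: on U = C∖{0} the squaring map has non-vanishing derivative everywhere, so it is conformal in the weaker sense, yet it is two-to-one and therefore not biholomorphic.

```python
def f(z):
    return z * z

def df(z):
    return 2 * z

# The derivative is nonzero at every point of C \ {0} ...
for z in (1 + 0j, 0.5j, -2 + 1j):
    assert df(z) != 0

# ... but the map is 2-to-1, so it is not injective and hence not a biholomorphism:
print(f(1 + 1j), f(-1 - 1j))   # both equal 2j
```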
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "\\phi^{-1}:V\\to U" }, { "math_id": 4, "text": "\\phi \\colon U \\to V" }, { "math_id": 5, "text": "n=1," }, { "math_id": 6, "text": "n>1." } ]
https://en.wikipedia.org/wiki?curid=6077963
6078504
Kharitonov's theorem
Kharitonov's theorem is a result used in control theory to assess the stability of a dynamical system when the physical parameters of the system are not known precisely. When the coefficients of the characteristic polynomial are known, the Routh–Hurwitz stability criterion can be used to check if the system is stable (i.e. if all roots have negative real parts). Kharitonov's theorem can be used in the case where the coefficients are only known to be within specified ranges. It provides a test of stability for a so-called interval polynomial, while Routh–Hurwitz is concerned with an ordinary polynomial. Definition. An interval polynomial is the family of all polynomials formula_0 where each coefficient formula_1 can take any value in the specified intervals formula_2 It is also assumed that the leading coefficient cannot be zero: formula_3. Theorem. An interval polynomial is stable (i.e. all members of the family are stable) if and only if the four so-called Kharitonov polynomials formula_4 formula_5 formula_6 formula_7 are stable. What is somewhat surprising about Kharitonov's result is that although in principle we are testing an infinite number of polynomials for stability, in fact we need to test only four. This we can do using Routh–Hurwitz or any other method. So it only takes four times more work to be informed about the stability of an interval polynomial than it takes to test one ordinary polynomial for stability. Kharitonov's theorem is useful in the field of robust control, which seeks to design systems that will work well despite uncertainties in component behavior due to measurement errors, changes in operating conditions, equipment wear and so on.
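Building the four Kharitonov polynomials from the interval bounds is mechanical, and their stability can then be checked by any standard test. The sketch below (illustrative only; the interval bounds are made-up example values, and numerically computed roots stand in for a Routh–Hurwitz table) constructs the four polynomials from lower and upper bounds listed in increasing order of degree and reports whether each is Hurwitz-stable.

```python
import numpy as np

def kharitonov_polynomials(lower, upper):
    """The four Kharitonov polynomials (coefficients a_0, a_1, ..., lowest degree first)."""
    patterns = ["lluu", "uull", "luul", "ullu"]   # bound chosen for a_0, a_1, a_2, a_3, repeating
    return [[lower[i] if pat[i % 4] == "l" else upper[i] for i in range(len(lower))]
            for pat in patterns]

def is_hurwitz(coeffs_low_first):
    """True if all roots lie in the open left half-plane (np.roots wants highest degree first)."""
    return all(r.real < 0 for r in np.roots(coeffs_low_first[::-1]))

# Hypothetical interval polynomial a0 + a1*s + a2*s^2 + a3*s^3 with
# a0 in [1, 2], a1 in [3, 4], a2 in [2, 3], a3 in [0.5, 1].
lower, upper = [1, 3, 2, 0.5], [2, 4, 3, 1]
for k in kharitonov_polynomials(lower, upper):
    print(k, is_hurwitz(k))
# All four print True here, so by the theorem the whole interval family is stable.
```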
[ { "math_id": 0, "text": "\n p(s)= a_0 + a_1 s^1 + a_2 s^2 + ... + a_n s^n\n" }, { "math_id": 1, "text": "a_i \\in R" }, { "math_id": 2, "text": "\n l_i \\le a_i \\le u_i.\n" }, { "math_id": 3, "text": "0 \\notin [l_n, u_n]" }, { "math_id": 4, "text": "k_1(s) = l_0 + l_1 s^1 + u_2 s^2 + u_3 s^3 + l_4 s^4 + l_5 s^5 + \\cdots " }, { "math_id": 5, "text": "k_2(s) = u_0 + u_1 s^1 + l_2 s^2 + l_3 s^3 + u_4 s^4 + u_5 s^5 + \\cdots " }, { "math_id": 6, "text": "k_3(s) = l_0 + u_1 s^1 + u_2 s^2 + l_3 s^3 + l_4 s^4 + u_5 s^5 + \\cdots " }, { "math_id": 7, "text": "k_4(s) = u_0 + l_1 s^1 + l_2 s^2 + u_3 s^3 + u_4 s^4 + l_5 s^5 + \\cdots " } ]
https://en.wikipedia.org/wiki?curid=6078504
60785404
Nuclear acoustic resonance
Nuclear acoustic resonance is a phenomenon closely related to nuclear magnetic resonance. It involves utilizing ultrasound and ultrasonic acoustic waves of frequencies between 1 MHz and 100 MHz to determine the acoustic radiation resulted from interactions of particles that experience nuclear spins as a result of magnetic and/or electric fields. The principles of nuclear acoustic resonance are often compared with nuclear magnetic resonance, specifically its usage in conjunction with nuclear magnetic resonance systems for spectroscopy and related imaging methodologies. Due to this, it is denoted that nuclear acoustic resonance can be used for the imaging of objects as well. However, for most cases, nuclear acoustic resonance requires the presence of nuclear magnetic resonance to induce electron spins within specimens in order for the absorption of acoustic waves to occur. Research conducted through experimental and theoretical investigations relative to the absorption of acoustic radiation of different materials, ranging from metals to subatomic particles, have deducted that nuclear acoustic resonance has its specific usages in other fields other than imaging. Experimental observations of nuclear acoustic resonance was first obtained in 1963 by Alers and Fleury in solid aluminum. History. Nuclear acoustic resonance was first discussed in 1952 when Semen Altshuler proposed that the acoustic coupling to nuclear spins should be visible. This was also proposed by Alfred Kastler around the same time. From his specialization in the field, Altshuler theorized the nuclear spin-acoustic phonon interactions which resulted with experimentation in 1955. The experiments led physicists to suggest that nuclear acoustic resonance coupling in metals could be formulated and observed, with modern physicists discussing the many properties of nuclear acoustic resonance, although it is not a widely known concept. Concepts of nuclear acoustic resonance in objects have been theorized and predicted by many physicists, but it was not until in 1963 when the first observation of the phenomenon occurred in solid aluminum along with observation of its dispersion in 1973, and subsequently, the first experimental nuclear acoustic resonance in liquid gallium in 1975. However, the aspect of acoustic spin resonance has been observed by Bolef and Menes in 1966 through samples of indium antimonide where nuclear spins were shown to absorb acoustic energy exhibited by the sample. Theory of nuclear acoustic resonance. Nuclear Spin and Acoustic Radiation. The nuclei is deduced to spin due to its different properties ranging from magnetic to electric properties of different nuclei within atoms. Commonly this spin is utilized within the field of nuclear magnetic resonance, where an external RF (or ultra-high frequency range) magnetic field is used to excite and resonate with the nuclei spin within the internal system. This in turn allows the absorption or dispersion of electromagnetic radiation to occur, and allows magnetic resonance imaging equipment to detect and produce images. However, for nuclear acoustic resonance, the energy levels that determine the orientation of the spinning while under internal or external fields are transitioned by acoustic radiation. As acoustic waves are often between frequencies of 1 MHz and 100 MHz, they are usually characterized as ultrasound or ultrasonic (sound of frequencies above the audible range of formula_0). Comparison with Nuclear Magnetic Resonance. 
Similar to nuclear magnetic resonance, both phenomena introduces and utilizes external sources such as a DC magnetic field or different frequencies, and results from both methods produce similar data sets and trends in different variables. However, there are distinct differences in the methodologies of the two concepts. Nuclear acoustic resonance involves inducing internal spin-dependent interactions while nuclear magnetic resonance denotes interactions with external magnetic fields. Due to this, nuclear acoustic resonance is not solely dependent on nuclear magnetic resonance, and can be operated independently. Such cases where nuclear acoustic resonance is a better substitute for nuclear magnetic resonance include resonance in metals where electromagnetic waves can be difficult to penetrate and resonate, such as amorphous metals and alloys, while acoustic waves can easily pass through. However, the suitability for using nuclear acoustic resonance or nuclear magnetic resonance is reliant on the material to be used in order to achieve the most efficient and evident results. Physics of Nuclear Acoustic Resonance. Nuclear acoustic resonance implements physics from both nuclear magnetic resonance and acoustics, involving the use of laws of quantum mechanics to derive theory on acoustic resonance in objects with nuclei that have a nonzero angular momentum (I), with its magnitude given by formula_1. In elements where formula_2, the characteristic of the nuclei spin also includes electric moments, also known as the electric quadrupole moment (denoted as formula_3) for the weakest electric moment. This moment (formula_4) influences the electric field gradients within the nucleus as a result of surrounding charges relative to the nucleus. In effect, the results of nuclear magnetic resonance used to induce nuclear acoustic resonance is affected. By utilizing the magnetic spin of nuclei under RF magnetic fields, and their spin-lattice relaxation properties after excitement from the external field to higher energy states, it is possible for acoustic waves to interact with nuclear spins, which often involves externally generated phonon. However, interactions of acoustic waves with nuclear spins do not guarantee the observation of acoustic resonance in objects. During the interactions, the acoustic waves experience a slight change in magnitude caused by the absorption by the object under nuclear spin, and the measurement of the change is crucial to observe and detect nuclear acoustic resonance in the object. Hence due to the difficulties analyzing nuclear acoustic resonance, it is only observed indirectly. However, as further propositions are made, ultrasonic pulse-echo techniques are introduced to detect changes in acoustic attenuation in specimens during experiments due to its capability of detecting changes in solids around 1 part in formula_5, which is capable of detecting background attenuation, although not for nuclear spin-phonon coupling, in which has attenuation coefficients from "10""-7" to "10""-8" dB/cm. Hence a combination of a continuous wave (CW) ultrasonic composite-resonator technique and nuclear magnetic resonance techniques is required to actually detect nuclear acoustic resonance. Nuclear Acoustic Resonance in Metals. Coherent or incoherent generated phonon entice the nuclear spins in nuclear acoustic resonance processes, and as a result is compared with the direct spin-lattice relaxation mechanism. 
Due to this, spins are de-excited from interactions with resonant thermal phonon at low frequencies, which is often denoted to be insignificant. This is certainly the case when compared with the indirect or Raman process where multiple phonon are involved. However, as the direct spin-lattice relaxation characterizes solids at specific temperatures due to formations of a small percentage of the lattice vibration spectrum, it is proposed that solids can be subjected to acoustic energy using ultrasound with energy ranging from 10"10" to 10"12" in terms of density greater than energy from the incoherent thermal phonon. From this theory, it is predicted that observations of nuclear spin can be achieved at high temperatures using nuclear acoustic resonance principles and techniques, unlike normal circumstances where they are only visible at low temperatures. The initial direct observation of nuclear acoustic resonance occurred in 1963 with the use of samples of aluminum under an applied magnetic field, in which created an electromagnetic field that minimally affected the properties of the sound waves being used, specifically its velocity and attenuation. The experimental analysis deduced that the effects on velocity and attenuation by the external magnetic field was proportional to its square, which allowed the acoustic attenuation coefficient to be calculated for any nuclear spin systems undergoing absorption of acoustic energy, which is characterized as formula_6, where formula_7, with formula_8 being the incident acoustic power per unit area. formula_9is determined by formula_10being the density of the metal, formula_11as the velocity of the propagated sound wave, and formula_12being the peak value of the strain. Furthermore, formula_13, the power per unit volume being absorbed by the system undergoing nuclear spin, is characterized by formula_14 where N is the count of nuclear spins per unit volume of the metal, v is the frequency, and formula_15being the magnetic dipole coupling value. However, this formula does not factor in the effect of eddy currents on the metal caused by the magnetic fields. Nevertheless, the results of the experimental observation of nuclear acoustic resonance in aluminum devised propositions of further investigations in the field such as single crystals of metals with weak quadrupole moments and nuclear spins of 1/2. Nuclear Acoustic Resonance in Liquids. Due to the different properties of liquids when compared to solids, it is typically impossible to detect nuclear acoustic resonance in liquids due to difficulties when inducing resonance in liquids. In solids, the spin transitions of nuclear acoustic resonance are induced by two different coupling mechanisms. However, objects in the liquid state are strongly affected by their thermal properties, which also influences the dynamic electric field gradient, leading to a near impossibility of inducing nuclear acoustic resonance in liquids via the coupling method. Hence in the first experimental attempt to observe nuclear acoustic resonance in a liquid sample, a metallic specimen was used as the object of interest. Further experimentation led to usage of external factors such as using piezoelectric nano-particles to detect nuclear acoustic resonance in liquids, particularly in fluids. 
In the initial successful experimental investigation on nuclear acoustic resonance in liquid, a coherent electromagnetic wave inside the metal sample was produced by sound waves generated by external dc magnetic fields surrounding the metallic object; the generated sound wave resonate with the nuclear spins of the object, allowing nuclear acoustic resonance to be theoretically observed. The theoretical predictions were confirmed when samples of liquid gallium were observed and measured. From this experimental observation, it was proposed that nuclear acoustic resonance in liquids metals requires magnetic dipole interactions due to the properties of liquids, and in which creates a dependence on the distance between particles in the liquid metal instead of the ultrasonic displacement field as seen in solids. Due to this, and the fact that the total displacement field for the generated electromagnetic field is the superposition of the displacement fields, the electromagnetic field can be modeled by a sum of the coherent and incoherent parts due to Maxwell's equations. Hence Unterhorst, Muller, and Schanz devised that nuclear acoustic resonance in liquid metals can be achieved and observed if the diffusion length during the relation time is relatively small compared to the ultrasonic wavelength of the sound wave. Imaging. By utilizing ultrasound acoustic waves via propagation onto objects such as patients, imaging is possible when resonance is achieved. This is then computed by a system of equipment that combines techniques and concepts from both ultrasound and magnetic resonance imaging to produce images for medical purposes. However, due to the specific requirements of attaining nuclear acoustic resonance and the characteristics of ultrasound and magnetic resonance imaging, while imaging via nuclear acoustic resonance is achievable, experimentally limitations exist. Typical ultrasound techniques for imaging can obtain detection of acoustic attenuation differences of approximately 1 part in 1000, in which is not within the range of the required detection capability for nuclei spin systems which has acoustic coefficients from "10""-7" to "10""-8" dB/cm. Harmonic Correlation. Although experimental nuclear acoustic resonance techniques on objects such as metals can achieve acoustic resonance, it is not a viable option for medical imaging, although it may be useful for spectroscopy in non-organic compounds. Hence the concept of harmonic correlation is introduced. This allows a new method of obtaining, amplifying, and analyzing acoustic signals. This method allows the sensitivity of the detection technique to be enhanced by implementing broadband signals into narrow-band signals for analysis. Harmonic correlation in general determines the correlation between the amplitude functions of two harmonically related narrow-band signals directed towards a patient, in which the assumption that they originate from the same source is made in order for the processing algorithm that collects that data and simulates them to boost the sensitivity of the signal detection of the analysis. Hence harmonic correlation clarifies the consequences of the absorption process of the induced nuclear spin phonon, however, such a process is very complicated and requires rigorous treatment of the data collected.
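As a rough, order-of-magnitude illustration of the expression for the incident acoustic power per unit area quoted earlier (one half times the density times the cube of the sound speed times the square of the peak strain), the snippet below evaluates it for assumed aluminium-like values; the density, sound speed and strain amplitude are placeholders chosen for illustration, not figures from the article.

```python
def incident_acoustic_power(rho, v_s, strain):
    """P0 = 0.5 * rho * v_s**3 * strain**2, acoustic power per unit area in W/m^2."""
    return 0.5 * rho * v_s ** 3 * strain ** 2

# Assumed round numbers: density ~2700 kg/m^3, longitudinal sound speed ~6400 m/s,
# peak strain amplitude 1e-6 (all illustrative).
print(incident_acoustic_power(2700.0, 6400.0, 1e-6))   # about 354 W/m^2
```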
[ { "math_id": 0, "text": "20-20,000 Hz" }, { "math_id": 1, "text": "\\sqrt{I(I+1)}" }, { "math_id": 2, "text": "I>1/2" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "I" }, { "math_id": 5, "text": "10^3" }, { "math_id": 6, "text": "\\alpha_n" }, { "math_id": 7, "text": "\\alpha = P_n / P_0" }, { "math_id": 8, "text": "P_0 = 1/2 \\rho v_s^3 \\epsilon^2 " }, { "math_id": 9, "text": "P_0" }, { "math_id": 10, "text": "\\rho" }, { "math_id": 11, "text": "v_s" }, { "math_id": 12, "text": "\\epsilon" }, { "math_id": 13, "text": "P_n" }, { "math_id": 14, "text": "P_n = N(hv)^2/(2I+1)kT \\sum_m W_{mm'}" }, { "math_id": 15, "text": "W_{mm'}" } ]
https://en.wikipedia.org/wiki?curid=60785404
60786392
Nanotechnology in warfare
Branch of nanoscience Nanotechnology in warfare is a branch of nano-science in which molecular systems are designed, produced and created to fit a nano-scale (1-100 nm). The application of such technology, specifically in the area of warfare and defence, has paved the way for future research in the context of weaponisation. Nanotechnology unites a variety of scientific fields including material science, chemistry, physics, biology and engineering. Advancements in this area have led to categorised development of such nano-weapons, with classifications ranging from small robotic machines and hyper-reactive explosives to electromagnetic super-materials. With this technological growth have emerged implications of associated risks and repercussions, as well as regulation to combat these effects. These impacts give rise to issues concerning global security, the safety of society, and the environment. Legislation may need to be constantly monitored to keep up with the dynamic growth and development of nano-science, due to the potential benefits or dangers of its use. Anticipation of such impacts through regulation would 'prevent irreversible damages' of implementing defence related nanotechnology in warfare. Origins. Historical use of nanotechnology in the area of warfare and defence has been rapid and expansive. Over the past two decades, numerous countries have funded military applications of this technology, including China, the United Kingdom, Russia, and most notably the United States. The US government has been considered a national leader of research and development in this area, though it is now rivalled by international competition as appreciation of nanotechnology's eminence increases. The growth of this field and the use of its power therefore occupy a dominant position at the front line of military interests. U.S. National Nanotechnology Initiative. In 2000, the United States government developed a National Nanotechnology Initiative to focus funding towards the development of nano-science and its technology, with a heavy focus on utilising the potential of nano-weapons. This initial US proposal has now grown to coordinate application of nanotechnology in numerous defence programs, as well as all military factions including Air Force, Army and Navy. From the financial year 2001 through to 2014, the US government contributed around $19.4 billion to nano-science, and in particular to the development and manufacturing of nano-weapons for military defence. The 21st Century Nanotechnology Research and Development Act (2003) envisions the United States continuing its leadership in the field of nanotechnology through national collaboration, productivity and competitiveness, to maintain this dominance. Developments. Successful transitions of nanotechnology into defence products: The United States government has had military purposed development of nanotechnology at the forefront of its national budget and policy throughout the Clinton and Bush administrations, with the Department of Defense planning to continue with this priority throughout the 21st century. In response to America's assertive public funding of defence purposed nanotechnology, numerous global actors have since created similar programmes. China. In the sub-category of nano-materials, China secures second place behind the United States in the amount of research publications they have released. Conjecture stands over the purpose of China's quick development to rival the U.S., with 1/5 of their government budget spent on research (US$337 million).
In 2018, Tsinghua University, Beijing, released their findings where they have enhanced carbon nanotubes to now withstand the weight of over 800 tonnes, requiring just 1formula_0of material. The scientific nanotechnology team hinted at aerospace, and armour boosting applications, showing promise for defence related nano-weapons. The Chinese Academy of Science's Vice President Chunli Bai, has stated the need to focus on closing the gap between "basic research and application," in order for China to advance its global competitiveness in nanotechnology. Between 2001 and 2004, approximately 60 countries globally implemented national nanotechnology programmes. According to R.D Shelton, an international technology assessor, research and development in this area "has now become a socio-economic target...an area of intense international collaboration and competition." As of 2017, data showed 4725 patents published in USPTO by the USA alone, maintaining their position as a leader in nanotechnology for over 20 years. Current research. Most recent research into military nanotechnological weapons includes production of defensive military apparatus, with objectives of enhancing existing designs of lightweight, flexible and durable materials. These innovative designs are equipped with features to also enhance offensive strategy through sensing devices and manipulation of electromechanical properties. Soldier battlesuit. The Institute for Soldier Nanotechnologies (ISN), deriving from a partnership between the United States Army and MIT, provided an opportunity to focus funding and research activities purely on developing armour to increase soldier survival. Each of seven teams produces innovative enhancements for different aspects of a future U.S. soldier bodysuit. These additional characteristics include energy-absorbing material protecting from blasts or ammunition shocks, engineered sensors to detect chemicals and toxins, as well as built in nano devices to identify personal medical issues such as haemorrhages and fractures. This suit would be made possible with advanced nano-materials such as carbon nanotubes woven into fibres, allowing strengthened structural capacities and flexibility, however preparation becomes an issue due to inability to use automated manufacturing. Enhanced materials. Creation of sol-gel ceramic coatings has protected metals from; wear, fractures and moisture, allowing adjustability to numerous shapes and sizes, as well as aiding "materials that cannot withstand high temperature". Current research focuses on resolving durability issues, where stress cracks between the coating and material set limitations on its use and longevity. The drive for this research is finding more efficient and cost effective uses in application of nanotechnology for Airforce and Navy military groups. Integration of fibre-reinforced nano-materials in structural features, such as missile casings, can limit overheating, increase reliability, strength and ductility of the materials used for such nanotechnology. Communication devices. Nanotechnology designed for advanced communication is expected to equip soldiers and vehicles with micro antenna rays, tags for remote identification, acoustic arrays, micro GPS receivers and wireless communication. Nanotech facilitates easier defence related communications due to lower energy consumption, light weight, efficiency of power, as well as smaller and cheaper to manufacture. 
Specific military uses of this technology include aerospace applications such as; solid oxide fuel cells to provide three times the energy, surveillance cameras on microchips, performance monitors, and cameras as light as 18g. Mini-nukes. The United States, along with countries such as Russia and Germany, are utilising the convenience of small nanotechnologies, adhering it to nuclear "mini-nuke" explosive devices. This weapon would weigh 5 lbs, with the force of 100 tonnes of TNT, giving it the possibility to annihilate and threaten humanity. The structural integrity would remain the same as nuclear bombs, however manufactured with nano-materials to allow production to a smaller scale. Engineers and scientists alike, realise some of these proposed developments may not be feasible within the next two decades as more research needs to be undertaken, improving models to be quicker and more efficient. Particularly molecular nanotechnology, requires further understanding of manipulation and reaction, in order to adapt it to a military arena. Implications. Nanotechnology and its use in warfare promises economic growth however comes with the increased threat to international security and peacekeeping. The rapid emergence of new nanotechnologies have sparked discussion surrounding the impacts such developments will have on geo-politics, ethics, and the environment. Geo-political. Difficulty in categorisation of nano-weapons, and their intended purposes (defensive or offensive) compromises the balance of stability and trust in the global environment. "A lack of transparency about an emerging technology not only negatively effects public perception but also negatively impacts the perceived balance of powers in the existing security environment." The peace and cohesion of the international structure may possibly be negatively affected with a continuing military-focused development of nanotechnology in warfare. Ambiguity and a lack of transparency in research increases difficulty of regulation in this area. Similarly, arguments put forward from a scientific standpoint, highlight the limited information known, concerning the implications of creating such powerful technology, in regards to reaction of the nano-particles themselves. "Although great scientific and technological progress has been made, many questions about the behaviour of matter at the nanoscale level remain, and considerable scientific knowledge has yet to be learned." Environmental. The introduction of nanotechnology into everyday life enables potential benefits of use, yet carries the possibility of unknown consequences for the environment and safety. Possible positive developments include creation of nano-devices to decrease remaining radio-activity in areas, as well as sensors to detect pollutants and adjust fuel-air mixtures. Associated risks may involve; military personnel inhaling nanoparticles added to fuel, possible absorption of nanoparticles from sensors into the skin, water, air or soil, dispersion of particles from blasts through the environment (via wind), alongside disposal of nano-tech batteries potentially affecting ecosystems. Applications for materials or explosive devices, allow a greater volume of nano-powders to be packed into a smaller weapon, resulting in a stronger and possibly lethal toxic effect. Social and ethical. It is unknown the full extent of consequences that may arise in social and ethical areas. 
Estimates can be made on the associated impacts as they may mirror similar progression of technological developments and affect all areas. The main ethical uncertainties entail the degree to which modern nanotechnology will threaten privacy, global equity and fairness, while giving rise to patent and property right disputes. An overarching social and humanitarian issue, branches from the creative intention of these developments. 'The power to kill or capture debate', highlights the unethical purpose and function of destruction these nanotechnological weapons supply to the user. Controversy surrounding the innovation and application of nanotechnology in warfare highlights dangers of not pre-determining risks, or accounting for possible impacts of such technology. "The threat of nuclear weapons led to the cold war. The same trend is foreseen with nanotechnology, which may lead to the so-called nanowars, a new age of destruction", stated by the U.S. Department of Defense. Similarly a report released by Oxford University, warns of the pre-eminent extinction of the human race with a 5% risk of this occurring due to development of 'molecular nanotech weapons'. Regulation. International regulation for such concerns surrounding issues of nanotechnology and its military application, are non existent. There is currently no framework to enforce or support international cooperation to limit production or monitor research and development of nanotechnology for defensive use. "Even if a transnational regulatory framework is established, it is impossible to determine if a nation is non-compliant if one is unable to determine the entire scope of research, development, or manufacturing." Producing legislation to keep-up with the rapid development of products and new materials in the scientific spheres, would pose as a hindrance to constructing working and relevant regulation. Productive regulation should assure public health and safety, account for environmental and international concerns, yet not restrict innovation of emerging ideas and applications for nanotechnology. Proposed regulation. Approaches to development of legislation, possibly include progression towards classified non-disclosive information pertaining to military use of nanotechnology. A paper written by Harvard Journal of Law and Technology, discusses laws that would revolve around specific export controls and discourage civilian or private research into nano-materials. This proposal suggests mimicking the U.S. Atomic Energy Act of 1954, restricting any distribution of information regarding the properties and features of the nanotechnology at creation. The Nanomaterial Registry. A United States National Registry for Nanotechnology has enabled a public sphere where reports are available for curated data on physico-chemical characteristics and interactions of nanomaterials. Requiring further development and more frequent voluntary additions, the register could initiate global regulation and cooperation regarding nanotechnology in warfare. The registry was developed to assist in the standardisation, formatting, and sharing of data. With more compliance and cooperation this data sharing model may "simplify the community level of effort in assessing nanomaterial data from environmental and biological interaction studies." Analysis of such a registry would be carried out with expertise by professional nano-scientists, creating a filtering mechanism for any potentially newly developed or dangerous materials. 
However, this idea of a specific nanomaterial registry is not original, as several databases have been developed previously, including caNanoLab and InterNano, which are both engaging and accessible to the public, informatively curated by experts, and detail tools of nano-manufacturing. The National Nanomaterial Registry is a more up-to-date version in which information is collated from a range of these sources and multiple additional data resources. It covers a greater range of content, including comparison tools with other materials, encouragement of standard methods, and compliance-rating features. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "cm^3" } ]
https://en.wikipedia.org/wiki?curid=60786392
607864
Sunrise problem
Problem asking the probability that the sun will rise tomorrow The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs. According to the Bayesian interpretation of probability, probability theory can be used to evaluate the plausibility of the statement, "The sun will rise tomorrow." The sunrise problem was first introduced publicly in 1763 by Richard Price in his famous coverage of Thomas Bayes' foundational work in Bayesianism. Laplace's approach. The problem was famously addressed in the 18th century by Pierre-Simon Laplace, who treated it by means of his rule of succession. Let "p" be the long-run frequency of sunrises, i.e., the sun rises on 100 × "p"% of days. "Prior" to knowing of any sunrises, one is completely ignorant of the value of "p". Laplace represented this prior ignorance by means of a uniform probability distribution on "p". For instance, the probability that "p" is between 20% and 50% is just 30%. This must not be interpreted to mean that in 30% of all cases, "p" is between 20% and 50%. Rather, it means that one's state of knowledge (or ignorance) justifies one in being 30% sure that the sun rises between 20% of the time and 50% of the time. "Given" the value of "p", and no other information relevant to the question of whether the sun will rise tomorrow, the probability that the sun will rise tomorrow is "p". But we are "not" given the value of "p". What we are given is the observed data: the sun has risen every day on record. Laplace inferred the number of days by saying that the universe was created about 6000 years ago, based on a young-earth creationist reading of the Bible. To find the conditional probability distribution of "p" given the data, one uses Bayes' theorem, which some call the "Bayes–Laplace rule". Having found the conditional probability distribution of "p" given the data, one may then calculate the conditional probability, given the data, that the sun will rise tomorrow. That conditional probability is given by the rule of succession. The plausibility that the sun will rise tomorrow increases with the number of days on which the sun has risen so far. Specifically, assuming "p" has an a-priori distribution that is uniform over the interval [0,1], and that, given the value of "p", the sun independently rises each day with probability "p", the desired conditional probability is: formula_0 By this formula, if one has observed the sun rising 10000 times previously, the probability it rises the next day is formula_1. Expressed as a percentage, this is approximately a formula_2 chance. However, Laplace recognized, immediately after deriving the result, that this was a misapplication of the rule of succession through not taking into account all the prior information available. E.T. Jaynes noted that Laplace's warning had gone unheeded by workers in the field. A reference class problem arises: the plausibility inferred will depend on whether we take the past experience of one person, of humanity, or of the earth. A consequence is that each referent would assign a different plausibility to the statement. In Bayesianism, any probability is a conditional probability given what one knows. That varies from one person to another. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
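The conditional probability above is easy to evaluate; the short sketch below (added for illustration) computes the rule-of-succession value (k + 1)/(k + 2) exactly and reproduces the figure quoted in the text.

```python
from fractions import Fraction

def rule_of_succession(k):
    """P(sun rises tomorrow | it has risen on k previous days) = (k + 1) / (k + 2),
    the ratio of the two integrals in the formula above."""
    return Fraction(k + 1, k + 2)

for k in (0, 1, 100, 10000):
    p = rule_of_succession(k)
    print(k, p, float(p))
# k = 10000 gives 10001/10002, approximately 0.99990002, as stated in the text.
```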
[ { "math_id": 0, "text": " \\Pr(\\text{Sun rises tomorrow} \\mid \\text{It has risen } k \\text{ times previously}) = \\frac{\\int_0^1 p^{k+1}\\,dp}{\\int_0^1 p^k \\,dp}= \\frac{k+1}{k+2}." }, { "math_id": 1, "text": " 10001/10002 \\approx 0.99990002" }, { "math_id": 2, "text": " 99.990002 \\%" } ]
https://en.wikipedia.org/wiki?curid=607864
60801185
Williamson conjecture
In combinatorial mathematics, specifically in combinatorial design theory and combinatorial matrix theory the Williamson conjecture is that Williamson matrices of order formula_0 exist for all positive integers formula_0. Four symmetric and circulant matrices formula_1, formula_2, formula_3, formula_4 are known as "Williamson matrices" if their entries are formula_5 and they satisfy the relationship formula_6 where formula_7 is the identity matrix of order formula_0. John Williamson showed that if formula_1, formula_2, formula_3, formula_4 are Williamson matrices then formula_8 is an Hadamard matrix of order formula_9. It was once considered likely that Williamson matrices exist for all orders formula_0 and that the structure of Williamson matrices could provide a route to proving the Hadamard conjecture that Hadamard matrices exist for all orders formula_9. However, in 1993 the Williamson conjecture was shown to be false via an exhaustive computer search by Dragomir Ž. Ðoković, who showed that Williamson matrices do not exist in order formula_10. In 2008, the counterexamples 47, 53, and 59 were additionally discovered. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
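To make the definition concrete, the sketch below (an illustration; the particular quadruple used for n = 3 is a simple known example, and the code verifies the defining identities rather than assuming them) builds symmetric circulant ±1 matrices A = B = C = J − 2I and D = J, checks the Williamson condition, and assembles the corresponding Hadamard matrix of order 4n = 12.

```python
import numpy as np

def circulant(first_row):
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)] for i in range(n)])

n = 3
A = B = C = circulant([-1, 1, 1])   # J - 2I: symmetric, circulant, entries +/-1
D = circulant([1, 1, 1])            # J, the all-ones matrix

# Williamson condition: A^2 + B^2 + C^2 + D^2 = 4 n I.
assert np.array_equal(A @ A + B @ B + C @ C + D @ D, 4 * n * np.eye(n, dtype=int))

# Plugging the quadruple into the 4x4 block array gives a Hadamard matrix of order 4n.
H = np.block([[ A,  B,  C,  D],
              [-B,  A, -D,  C],
              [-C,  D,  A, -B],
              [-D, -C,  B,  A]])
assert np.array_equal(H @ H.T, 4 * n * np.eye(4 * n, dtype=int))
print("Hadamard matrix of order", 4 * n, "verified")
```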
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "\\pm1" }, { "math_id": 6, "text": "A^2 + B^2 + C^2 + D^2 = 4n I" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "\\begin{bmatrix}\n A & B & C & D \\\\\n -B & A & -D & C \\\\\n -C & D & A & -B \\\\\n -D & -C & B & A\n\\end{bmatrix}" }, { "math_id": 9, "text": "4n" }, { "math_id": 10, "text": "n=35" } ]
https://en.wikipedia.org/wiki?curid=60801185
60810627
Newton–Gauss line
Line joining midpoints of a complete quadrilateral's 3 diagonals In geometry, the Newton–Gauss line (or Gauss–Newton line) is the line joining the midpoints of the three diagonals of a complete quadrilateral. The midpoints of the two diagonals of a convex quadrilateral with at most two parallel sides are distinct and thus determine a line, the Newton line. If the sides of such a quadrilateral are extended to form a complete quadrangle, the diagonals of the quadrilateral remain diagonals of the complete quadrangle and the Newton line of the quadrilateral is the Newton–Gauss line of the complete quadrangle. Complete quadrilaterals. Any four lines in general position (no two lines are parallel, and no three are concurrent) form a complete quadrilateral. This configuration consists of a total of six points, the intersection points of the four lines, with three points on each line and precisely two lines through each point. These six points can be split into pairs so that the line segments determined by any pair do not intersect any of the given four lines except at the endpoints. These three line segments are called diagonals of the complete quadrilateral. Existence of the Newton−Gauss line. It is a well-known theorem that the three midpoints of the diagonals of a complete quadrilateral are collinear. There are several proofs of the result based on areas or wedge products or, as the following proof, on Menelaus's theorem, due to Hillyer and published in 1920. Let the complete quadrilateral ABCA'B'C' be labeled as in the diagram with diagonals and their respective midpoints L, M, N. Let the midpoints of be P, Q, R respectively. Using similar triangles it is seen that QR intersects at L, RP intersects at M and PQ intersects at N. Again, similar triangles provide the following proportions, formula_0 However, the line "A’B'C " intersects the sides of triangle △"ABC", so by Menelaus's theorem the product of the terms on the right hand sides is −1. Thus, the product of the terms on the left hand sides is also −1 and again by Menelaus's theorem, the points L, M, N are collinear on the sides of triangle △"PQR". Applications to cyclic quadrilaterals. The following are some results that use the Newton–Gauss line of complete quadrilaterals that are associated with cyclic quadrilaterals, based on the work of Barbu and Patrascu. Equal angles. Given any cyclic quadrilateral ABCD, let point F be the point of intersection between the two diagonals and . Extend the diagonals and until they meet at the point of intersection, E. Let the midpoint of the segment be N, and let the midpoint of the segment be M (Figure 1). Theorem. If the midpoint of the line segment is P, the Newton–Gauss line of the complete quadrilateral ABCDEF and the line PM determine an angle ∠"PMN" equal to ∠"EFD". Proof. First show that the triangles △"NPM", △"EDF" are similar. Since "BE" ∥ "PN" and "FC" ∥ "PM", we know ∠"NPM" = ∠"EAC". Also, formula_1 In the cyclic quadrilateral ABCD, these equalities hold: formula_2 Therefore, ∠"NPM" = ∠"EDF". Let "R"1, "R"2 be the radii of the circumcircles of △"EDB", △"FCD" respectively. Apply the law of sines to the triangles, to obtain: formula_3 Since "BE" = 2 · "PN" and "FC" = 2 · "PM", this shows the equality formula_4 The similarity of triangles △"PMN", △"DFE" follows, and ∠"NMP" = ∠"EFD". Remark. If Q is the midpoint of the line segment , it follows by the same reasoning that ∠"NMQ" = ∠"EFA". Isogonal lines. Theorem. 
The line through E parallel to the Newton–Gauss line of the complete quadrilateral ABCDEF and the line EF are isogonal lines of ∠"BEC", that is, each line is a reflection of the other about the angle bisector. (Figure 2) Proof. Triangles △"EDF", △"NPM" are similar by the above argument, so ∠"DEF" = ∠"PNM". Let E' be the point of intersection of BC and the line parallel to the Newton–Gauss line NM through E. Since "PN" ∥ "BE" and "NM" ∥ "EE'," ∠"BEF" = ∠"PNF", and ∠"FNM" = ∠"E'EF". Therefore, formula_5 Two cyclic quadrilaterals sharing a Newton-Gauss line. Lemma. Let G and H be the orthogonal projections of the point F on the lines AB and CD respectively. The quadrilaterals MPGN and MQHN are cyclic quadrilaterals. Proof. ∠"EFD" = ∠"PMN", as previously shown. The points P and N are the respective circumcenters of the right triangles △"BFG", △"EFG". Thus, ∠"PGF" = ∠"PFG" and ∠"FGN" = ∠"GFN". Therefore, formula_6 Therefore, MPGN is a cyclic quadrilateral, and by the same reasoning, MQHN also lies on a circle. Theorem. Extend the lines GF, HF to intersect EC, EB at I, J respectively (Figure 4). The complete quadrilaterals EFGHIJ and ABCDEF have the same Newton–Gauss line. Proof. The two complete quadrilaterals have a shared diagonal, . N lies on the Newton–Gauss line of both quadrilaterals. N is equidistant from G and H, since it is the circumcenter of the cyclic quadrilateral EGFH. If triangles △"GMP", △"HMQ" are congruent, and it will follow that M lies on the perpendicular bisector of the line HG. Therefore, the line MN contains the midpoint of , and is the Newton–Gauss line of EFGHIJ. To show that the triangles △"GMP", △"HMQ" are congruent, first observe that PMQF is a parallelogram, since the points M, P are midpoints of respectively. Therefore, formula_7 Also note that formula_8 Hence, formula_9 Therefore, △"GMP" and △"HMQ" are congruent by SAS. Remark. Due to △"GMP", △"HMQ" being congruent triangles, their circumcircles MPGN, MQHN are also congruent. History. The Newton–Gauss line proof was developed by the two mathematicians it is named after: Sir Isaac Newton and Carl Friedrich Gauss. The initial framework for this theorem is from the work of Newton, in his previous theorem on the Newton line, in which Newton showed that the center of a conic inscribed in a quadrilateral lies on the Newton–Gauss line. The theorem of Gauss and Bodenmiller states that the three circles whose diameters are the diagonals of a complete quadrilateral are coaxal. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
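The collinearity asserted in the existence proof above can also be checked numerically. The short Python sketch below builds a complete quadrilateral from four arbitrarily chosen lines in general position (the coefficients are illustrative assumptions, not data from the article), forms the three diagonals from pairs of vertices that share no defining line, and verifies that the midpoints of those diagonals are collinear up to rounding error.

import itertools
import numpy as np

def intersect(l1, l2):
    # Intersection of two lines given as (a, b, c) with a*x + b*y = c.
    A = np.array([l1[:2], l2[:2]], dtype=float)
    rhs = np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(A, rhs)

# Four lines in general position: no two parallel, no three concurrent.
lines = [(1.0, 2.0, 4.0), (3.0, -1.0, 2.0), (1.0, 3.0, -6.0), (-2.0, 1.0, 5.0)]

# The six vertices of the complete quadrilateral, one per pair of lines.
vertex = {pair: intersect(lines[pair[0]], lines[pair[1]])
          for pair in itertools.combinations(range(4), 2)}

# A diagonal joins the two vertices that share no defining line.
diagonals = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
midpoints = [(vertex[a] + vertex[b]) / 2 for a, b in diagonals]

# The three midpoints are collinear iff this 2x2 determinant vanishes.
v1 = midpoints[1] - midpoints[0]
v2 = midpoints[2] - midpoints[0]
print("determinant:", v1[0] * v2[1] - v1[1] * v2[0])  # ~0: the Newton-Gauss line exists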
[ { "math_id": 0, "text": "\\frac{\\overline{RL}}{\\overline{LQ}} = \\frac{\\overline{BA}}{\\overline{AC}}, \\quad \\frac{\\overline{QN}}{\\overline{NP}} = \\frac{\\overline{A'C'}}{\\overline{C'B}}, \\quad \\frac{\\overline{PM}}{\\overline{MR}} = \\frac{\\overline{CB'}}{\\overline{B'A'}}.\n" }, { "math_id": 1, "text": "\\tfrac{\\overline{BE}}{\\overline{PN}} = \\tfrac{\\overline{FC}}{\\overline{PM}} = 2." }, { "math_id": 2, "text": "\\begin{align}\n\\angle EDF &= \\angle ADF + \\angle EDA, \\\\\n &= \\angle ACB + \\angle ABC, \\\\\n &= \\angle EAC.\n\\end{align}" }, { "math_id": 3, "text": "\\frac {\\overline{BE}}{\\overline{FC}}=\\frac {2R_1\\sin \\angle EDB}{2R_2\\sin \\angle FDC}=\\frac {R_1}{R_2}=\\frac {2R_1\\sin \\angle EBD}{2R_2\\sin \\angle FCD} = \\frac{\\overline{DE}}{\\overline{DF}}." }, { "math_id": 4, "text": "\\tfrac{\\overline{PN}}{\\overline{PM}} = \\tfrac{\\overline{DE}}{\\overline{DF}}." }, { "math_id": 5, "text": "\\begin{align}\n\\angle CEE' &= \\angle DEF - \\angle E'EF, \\\\\n &= \\angle PNM - \\angle FNM, \\\\\n &= \\angle PNF = \\angle BEF.\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align} \n\\angle PGN + \\angle PMN \n& = (\\angle PGF + \\angle FGN) + \\angle PMN \\\\[4pt] \n& = \\angle PFG + \\angle GFN + \\angle EFD \\\\[4pt] \n&= 180^\\circ \n\\end{align}." }, { "math_id": 7, "text": "\\begin{align}\n& \\overline{MP} = \\overline{QF} = \\overline{HQ}, \\\\\n& \\overline{GP} = \\overline{PF} = \\overline{MQ}, \\\\\n& \\angle MPF = \\angle FQM.\n\\end{align}" }, { "math_id": 8, "text": "\\angle FPG = 2 \\angle PBG = 2 \\angle DBA = 2 \\angle DCA = 2 \\angle HCF = \\angle HQF." }, { "math_id": 9, "text": "\\begin{align}\n\\angle MPG &= \\angle MPF + \\angle FPG, \\\\\n &= \\angle FQM + \\angle HQF, \\\\\n &= \\angle HQF + \\angle FQM, \\\\\n &= \\angle HQM.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=60810627
6081205
Pivotal quantity
Function of observations and unobservable parameters In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters such that the function's probability distribution does not depend on the unknown parameters (including nuisance parameters). A pivot need not be a statistic — the function and its 'value' can depend on the parameters of the model, but its 'distribution' must not. If it is a statistic, then it is known as an 'ancillary statistic'. More formally, let formula_0 be a random sample from a distribution that depends on a parameter (or vector of parameters) formula_1. Let formula_2 be a random variable whose distribution is the same for all formula_1. Then formula_3 is called a 'pivotal quantity' (or simply a 'pivot'). Pivotal quantities are commonly used for normalization to allow data from different data sets to be compared. It is relatively easy to construct pivots for location and scale parameters: for the former we form differences so that location cancels, for the latter ratios so that scale cancels. Pivotal quantities are fundamental to the construction of test statistics, as they allow the statistic to not depend on parameters – for example, Student's t-statistic is for a normal distribution with unknown variance (and mean). They also provide one method of constructing confidence intervals, and the use of pivotal quantities improves performance of the bootstrap. In the form of ancillary statistics, they can be used to construct frequentist prediction intervals (predictive confidence intervals). Examples. Normal distribution. One of the simplest pivotal quantities is the z-score. Given a normal distribution with mean formula_4 and variance formula_5, and an observation 'x', the z-score: formula_6 has distribution formula_7 – a normal distribution with mean 0 and variance 1. Similarly, since the 'n'-sample sample mean has sampling distribution formula_8, the z-score of the mean formula_9 also has distribution formula_10 Note that while these functions depend on the parameters – and thus one can only compute them if the parameters are known (they are not statistics) — the distribution is independent of the parameters. Given formula_11 independent, identically distributed (i.i.d.) observations formula_12 from the normal distribution with unknown mean formula_4 and variance formula_5, a pivotal quantity can be obtained from the function: formula_13 where formula_14 and formula_15 are unbiased estimates of formula_4 and formula_5, respectively. The function formula_16 is the Student's t-statistic for a new value formula_17, to be drawn from the same population as the already observed set of values formula_18. Using formula_19 the function formula_20 becomes a pivotal quantity, which is also distributed by the Student's t-distribution with formula_21 degrees of freedom. As required, even though formula_4 appears as an argument to the function formula_3, the distribution of formula_20 does not depend on the parameters formula_4 or formula_22 of the normal probability distribution that governs the observations formula_23. This can be used to compute a prediction interval for the next observation formula_24 see Prediction interval: Normal distribution. Bivariate normal distribution. In more complicated cases, it is impossible to construct exact pivots. However, having approximate pivots improves convergence to asymptotic normality. 
Suppose a sample of size formula_11 of vectors formula_25 is taken from a bivariate normal distribution with unknown correlation formula_26. An estimator of formula_26 is the sample (Pearson, moment) correlation formula_27 where formula_28 are sample variances of formula_18 and formula_29. The sample statistic formula_30 has an asymptotically normal distribution: formula_31. However, a variance-stabilizing transformation formula_32 known as Fisher's 'z' transformation of the correlation coefficient allows creating the distribution of formula_33 asymptotically independent of unknown parameters: formula_34 where formula_35 is the corresponding distribution parameter. For finite samples sizes formula_11, the random variable formula_33 will have distribution closer to normal than that of formula_30. An even closer approximation to the standard normal distribution is obtained by using a better approximation for the exact variance: the usual form is formula_36. Robustness. From the point of view of robust statistics, pivotal quantities are robust to changes in the parameters — indeed, independent of the parameters — but not in general robust to changes in the model, such as violations of the assumption of normality. This is fundamental to the robust critique of non-robust statistics, often derived from pivotal quantities: such statistics may be robust within the family, but are not robust outside it. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
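As a quick illustration of the defining property — that the distribution of a pivot does not depend on the unknown parameters — the Python sketch below simulates the Student's t pivot for two arbitrarily chosen parameter settings (the sample size, replicate count and parameter values are illustrative assumptions) and shows that its quantiles come out essentially identical in both cases.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000   # sample size and number of Monte Carlo replicates

def pivot_samples(mu, sigma):
    # Draw `reps` samples of size n and form g(mu, X) = (x_bar - mu) / (s / sqrt(n)).
    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)              # sample standard deviation
    return (xbar - mu) / (s / np.sqrt(n))  # depends on mu, so a pivot but not a statistic

for mu, sigma in [(0.0, 1.0), (50.0, 7.0)]:
    g = pivot_samples(mu, sigma)
    # The quantiles match Student's t with n-1 degrees of freedom for either setting.
    print(mu, sigma, np.round(np.quantile(g, [0.05, 0.5, 0.95]), 2))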
[ { "math_id": 0, "text": "X = (X_1,X_2,\\ldots,X_n) " }, { "math_id": 1, "text": " \\theta " }, { "math_id": 2, "text": " g(X,\\theta) " }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\sigma^2" }, { "math_id": 6, "text": " z = \\frac{x - \\mu}{\\sigma}," }, { "math_id": 7, "text": "N(0,1)" }, { "math_id": 8, "text": "N(\\mu,\\sigma^2/n)" }, { "math_id": 9, "text": " z = \\frac{\\overline{X} - \\mu}{\\sigma/\\sqrt{n}}" }, { "math_id": 10, "text": "N(0,1)." }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "X = (X_1, X_2, \\ldots, X_n) " }, { "math_id": 13, "text": " g(x,X) = \\frac{x - \\overline{X}}{s/\\sqrt{n}} " }, { "math_id": 14, "text": " \\overline{X} = \\frac{1}{n}\\sum_{i=1}^n{X_i} " }, { "math_id": 15, "text": " s^2 = \\frac{1}{n-1}\\sum_{i=1}^n{(X_i - \\overline{X})^2} " }, { "math_id": 16, "text": "g(x,X)" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "X" }, { "math_id": 19, "text": "x=\\mu" }, { "math_id": 20, "text": "g(\\mu,X)" }, { "math_id": 21, "text": "\\nu = n-1" }, { "math_id": 22, "text": "\\sigma" }, { "math_id": 23, "text": "X_1,\\ldots,X_n" }, { "math_id": 24, "text": "X_{n+1};" }, { "math_id": 25, "text": "(X_i,Y_i)'" }, { "math_id": 26, "text": "\\rho" }, { "math_id": 27, "text": " r = \\frac{\\frac1{n-1} \\sum_{i=1}^n (X_i - \\overline{X})(Y_i - \\overline{Y})}{s_X s_Y} " }, { "math_id": 28, "text": "s_X^2, s_Y^2" }, { "math_id": 29, "text": "Y" }, { "math_id": 30, "text": "r" }, { "math_id": 31, "text": "\\sqrt{n}\\frac{r-\\rho}{1-\\rho^2} \\Rightarrow N(0,1)" }, { "math_id": 32, "text": " z = \\rm{tanh}^{-1} r = \\frac12 \\ln \\frac{1+r}{1-r}" }, { "math_id": 33, "text": "z" }, { "math_id": 34, "text": "\\sqrt{n}(z-\\zeta) \\Rightarrow N(0,1)" }, { "math_id": 35, "text": "\\zeta = {\\rm tanh}^{-1} \\rho" }, { "math_id": 36, "text": "\\operatorname{Var}(z) \\approx \\frac1{n-3}" } ]
https://en.wikipedia.org/wiki?curid=6081205
60815736
Kink (materials science)
Kinks are deviations of a dislocation defect along its glide plane. In edge dislocations, the constant glide plane allows short regions of the dislocation to turn, converting into screw dislocations and producing kinks. Screw dislocations have rotatable glide planes, thus kinks that are generated along screw dislocations act as an anchor for the glide plane. Kinks differ from jogs in that kinks are strictly parallel to the glide plane, while jogs shift away from the glide plane. Energy. Pure-edge and screw dislocations are conceptually straight in order to minimize their length, and through it, the strain energy of the system. Low-angle mixed dislocations, on the other hand, can be thought of as primarily edge dislocations with screw kinks in a stair-case structure (or vice versa), switching between straight pure-edge and pure-screw dislocation segments. In reality, kinks are not sharp transitions. Both the total length of the dislocation and the kink angle are dependent on the free energy of the system. The primary dislocation regions lie in Peierls-Nabarro potential minima, while the kink requires additional energy in the form of an energy peak. To minimize free energy, the kink equilibrates at a certain length and angle. Large energy peaks create short but sharp kinks in order to minimize dislocation length within the high energy region, while small energy peaks create long and drawn-out kinks in order to minimize total dislocation length. Kink movement. Kinks facilitate the movement of dislocations along their glide planes under shear stress, and are directly responsible for plastic deformation of crystals. When a crystal undergoes shear force, e.g. when cut with scissors, the applied shear force causes dislocations to move through the material, displacing atoms and deforming the material. The entire dislocation does not move at once – rather, the dislocation produces a pair of kinks, which then propagate in opposite directions down the length of the dislocation, eventually shifting the entire dislocation by a Burgers vector. The velocity of dislocations through kink propagation is also clearly limited by the nucleation frequency of kinks, as a lack of kinks compromises the mechanism by which dislocations move. As shear force approaches infinity, the velocity at which dislocations migrate is limited by the physical properties of the material, maximizing at the material's sound velocity. At lower shear stresses, the velocity of dislocations ends up relating exponentially to the applied shear force: formula_0 where formula_1 is the applied shear force, and formula_2 and formula_3 are experimentally found constants. The above equation gives the upper limit on dislocation velocity. The interactions of dislocation movement with its environment, particularly with other defects such as jogs and precipitates, result in drag and slow down the dislocation: formula_4 where formula_5 is the drag parameter of the crystal. Kink movement is strongly dependent on temperature as well. Higher thermal energy assists in the generation of kinks, as well as increasing atomic vibrations and promoting dislocation motion. Kinks may also form under compressive stress due to the buckling of crystal planes into a cavity. At high compressive forces, masses of dislocations move at once. Kinks align with each other, forming walls of kinks that propagate all at once.
At sufficient forces, the tensile force produced by the dislocation core exceeds the fracture stress of the material, combining kink boundaries into sharp kinks and de-laminating the basal planes of the crystal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
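The two velocity relations quoted above combine into a one-line model of drag-limited kink motion. The Python sketch below simply evaluates it; the numerical values of the constants C, p and the drag parameter D are placeholders chosen for illustration, not measured material constants.

import math

def dislocation_velocity(tau, C=1.0e-3, p=2.0, D=5.0):
    # Upper limit set by the applied shear stress, v0 = C * tau**p, ...
    v0 = C * tau ** p
    # ... reduced by drag from jogs, precipitates and other defects.
    return v0 * math.exp(-D / tau)

for tau in (1.0, 5.0, 20.0):
    print(f"tau = {tau:5.1f}  v = {dislocation_velocity(tau):.3e}")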
[ { "math_id": 0, "text": "v_0 = C \\tau^p, \\,\\!" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "v_D = v_0 e^{-D/\\tau}, \\,\\!" }, { "math_id": 5, "text": "D" } ]
https://en.wikipedia.org/wiki?curid=60815736
60817325
Diffusion-limited escape
Escape of gases from the atmosphere Diffusion-limited escape occurs when the rate of atmospheric escape to space is limited by the upward diffusion of escaping gases through the upper atmosphere, and not by escape mechanisms at the top of the atmosphere (the exobase). The escape of any atmospheric gas can be diffusion-limited, but only diffusion-limited escape of hydrogen has been observed in our solar system, on Earth, Mars, Venus and Titan. Diffusion-limited hydrogen escape was likely important for the rise of oxygen in Earth's atmosphere (the Great Oxidation Event) and can be used to estimate the oxygen and hydrogen content of Earth's prebiotic atmosphere. Diffusion-limited escape theory was first used by Donald Hunten in 1973 to describe hydrogen escape on one of Saturn's moons, Titan. The following year, in 1974, Hunten found that the diffusion-limited escape theory agreed with observations of hydrogen escape on Earth. Diffusion-limited escape theory is now used widely to model the composition of exoplanet atmospheres and Earth's ancient atmosphere. Diffusion-Limited Escape of Hydrogen on Earth. Hydrogen escape on Earth occurs at ~500 km altitude at the exobase (the lower border of the exosphere) where gases are collisionless. Hydrogen atoms at the exobase exceeding the escape velocity escape to space without colliding into another gas particle. For a hydrogen atom to escape from the exobase, it must first travel upward through the atmosphere from the troposphere. Near ground level, hydrogen in the form of H2O, H2, and CH4 travels upward in the homosphere through turbulent mixing, which dominates up to the homopause. At about 17 km altitude, the cold tropopause (known as the "cold trap") freezes out most of the H2O vapor that travels through it, preventing the upward mixing of some hydrogen. In the upper homosphere, hydrogen bearing molecules are split by ultraviolet photons leaving only H and H2 behind. The H and H2 diffuse upward through the heterosphere to the exobase where they escape the atmosphere by Jeans thermal escape and/or a number of suprathermal mechanisms. On Earth, the rate-limiting step or "bottleneck" for hydrogen escape is diffusion through the heterosphere. Therefore, hydrogen escape on Earth is diffusion-limited. By considering one dimensional molecular diffusion of H2 through a heavier background atmosphere, you can derive a formula for the upward diffusion-limited flux of hydrogen (formula_0): formula_1 formula_2 is a constant for a particular background atmosphere and planet, and formula_3is the total hydrogen mixing ratio in all its forms above the tropopause. You can calculate formula_3by summing all hydrogen bearing species weighted by the number of hydrogen atoms each species contains: formula_4 For Earth's atmosphere, formula_5cm−2 s−1, and, the concentration of hydrogen bearing gases above the tropopause is 1.8 ppmv (parts per million by volume) CH4, 3 ppmv H2O, and 0.55 ppmv H2. Plugging these numbers into the formulas above gives a predicted diffusion-limited hydrogen escape rate of formula_6H atoms cm−2 s−1. This calculated hydrogen flux agrees with measurements of hydrogen escape. Note that hydrogen is the only gas in Earth's atmosphere that escapes at the diffusion-limit. Helium escape is not diffusion-limited and instead escapes by a suprathermal process known as the polar wind. Derivation. Transport of gas molecules in the atmosphere occurs by two mechanisms: molecular and eddy diffusion. 
Molecular diffusion is the transport of molecules from an area of higher concentration to lower concentration due to thermal motion. Eddy diffusion is the transport of molecules by the turbulent mixing of a gas. The sum of molecular and eddy diffusion fluxes give the total flux of a gas formula_7 through the atmosphere: formula_8 The vertical eddy diffusion flux is given by formula_9 formula_10 is the eddy diffusion coefficient, formula_11 is the number density of the atmosphere (molecules cm−3), and formula_12 is the volume mixing ratio of gas formula_7. The above formula for eddy diffusion is a simplification for how gases actually mix in the atmosphere. The eddy diffusion coefficient can only be empirically derived from atmospheric tracer studies. The molecular diffusion flux, on the other hand, can be derived from theory. The general formula for the diffusion of gas 1 relative to gas 2 is given by formula_13 Each variable is defined in table on right. The terms on the right hand side of the formula account for diffusion due to molecular concentration, pressure, temperature, and force gradients respectively. The expression above ultimately comes from the Boltzmann transport equation. We can simplify the above equation considerably with several assumptions. We will consider only vertical diffusion, and a neutral gas such that the accelerations are both equal to gravity (formula_16) so the last term cancels. We are left with formula_17 We are interested in the diffusion of a lighter molecule (e.g. hydrogen) through a stationary heavier background gas (air). Therefore, we can take velocity of the heavy background gas to be zero: formula_18. We can also use the chain rule and the hydrostatic equation to rewrite the derivative in the second term. formula_19 The chain rule can also be used to simplify the derivative in the third term. formula_20 Making these substitutions gives formula_21 Note, that we have also made the substitution formula_22. The flux of molecular diffusion is given by formula_23 By adding the molecular diffusion flux and the eddy diffusion flux, we get the total flux of molecule 1 through the background gas formula_24 Temperature gradients are fairly small in the heterosphere, so formula_25, which leaves us with formula_26 The maximum flux of gas 1 occurs when formula_27. Qualitatively, this is because formula_28 must decrease with altitude in order to contribute to the upward flux of gas 1. If formula_28 decreases with altitude, then formula_15 must decrease rapidly with altitude (recall that formula_29). Rapidly decreasing formula_15 would require rapidly increasing formula_14 in order to drive a constant upward flux of gas 1 (recall formula_30). Rapidly increasing formula_14 isn't physically possible. For a mathematical explanation for why formula_27, see Walker 1977, p. 160. The maximum flux of gas 1 relative to gas 2 (formula_31, which occurs when formula_27) is therefore formula_32 Since formula_33, formula_34 or formula_35 This is the diffusion-limited flux of a molecule. For any particular atmosphere, formula_2 is a constant. For hydrogen (gas 1) diffusion through air (gas 2) in the heterosphere on Earth formula_36, formula_37m s−2 ,and formula_38 K. Both H and H2 diffuse through the heterosphere, so we will use a diffusion parameter that is the weighted sum of H and H2 number densities at the tropopause. formula_39 For formula_40 molecules cm−3, formula_41 molecules cm−3, formula_42 cm−1s−1, and formula_43 cm−1s−1, the binary diffusion parameter is formula_44. 
These numbers give formula_45molecules cm−2 s−1. In more detailed calculations the constant is formula_5molecules cm−2 s−1. The above formula can be used to calculate the diffusion-limited flux of gases other than hydrogen. Diffusion-Limited Escape in the Solar System. Every rocky body in the solar system with a substantial atmosphere, including Earth, Mars, Venus, and Titan, loses hydrogen at the diffusion-limited rate. For Mars, the constant governing diffusion-limited escape of hydrogen is formula_46 molecules cm−2 s−1. Spectroscopic measurements of Mars' atmosphere suggest that formula_47. Multiplying these numbers together gives the diffusion-limited rate escape of hydrogen: formula_48 H atoms cm−2 s−1 "Mariner" 6 and 7 spacecraft indirectly observed hydrogen escape flux on Mars between formula_49and formula_50 H atoms cm−2 s−1. These observations suggest that Mars' atmosphere is losing hydrogen at roughly the diffusion limited value. Observations of hydrogen escape on Venus and Titan are also at the diffusion-limit. On Venus, hydrogen escape was measured to be about formula_51 H atoms cm−2 s−1, while the calculated diffusion limited rate is about formula_52H atoms cm−2 s−1, which are in reasonable agreement. On Titan, hydrogen escape was measured by the "Cassini" spacecraft to be formula_53 H atoms cm−2 s−1, and the calculated diffusion-limited rate is formula_54H atoms cm−2 s−1. Applications to Earth's Ancient Atmosphere. Oxygen Content of the Prebiotic Atmosphere. We can use diffusion-limited hydrogen escape to estimate the amount of O2 on the Earth's atmosphere before the rise of life (the prebiotic atmosphere). The O2 content of the prebiotic atmosphere was controlled by its sources and sinks. If the potential sinks of O2 greatly outweighed the sources, then the atmosphere would have been nearly devoid of O2. In the prebiotic atmosphere, O2 was produced by the photolysis of CO2 and H2O in the atmosphere: &lt;chem&gt;CO_2 + h\nu -&gt; CO + O&lt;/chem&gt; &lt;chem&gt;H_2O + h\nu -&gt; 1/2O_2 + 2H&lt;/chem&gt; These reactions aren't necessarily a net source of O2. If the CO and O produced from CO2 photolysis remain in the atmosphere, then they will eventually recombine to make CO2. Likewise, if the H and O2 from H2O photolysis remain in the atmosphere, then they will eventually react to form H2O. The photolysis of H2O is a net source of O2 only if the hydrogen escapes to space. If we assume that hydrogen escape occurred at the diffusion-limit in the prebiotic atmosphere, then we can estimate the amount of H2 that escaped due to water photolysis. If the prebiotic atmosphere had a modern stratospheric H2O mixing ratio of 3 ppmv which is equivalent to 6 ppmv of H after photolysis, then formula_55H atoms cm−2 s−1 Stoichiometry says that every mol of H escape produced 0.25 mol of O2 (i.e. &lt;chem&gt;2H_2O -&gt; O_2 +4H&lt;/chem&gt;), so the abiotic net production of O2 from H2O photolysis was formula_56 O2 molecules cm−2 s−1. The main sinks of O2 would have been reactions with volcanic hydrogen. The modern volcanic H flux is about formula_57H atoms cm−2 s−1. If the prebiotic atmosphere had a similar volcanic hydrogen flux, then the potential O2 sink would have been a fourth of the hydrogen volcanism, or formula_58O2 molecules cm−2 s−1. These calculated values predict that potential O2 sinks were ~50 times greater than the abiotic source. Therefore, O2 must have been nearly absent in the prebiotic atmosphere. 
Photochemical models, which do more complicated versions of the calculations above, predict prebiotic O2 mixing ratios below 10−11, which is extremely low compared to the modern O2 mixing ratio of 0.21. Hydrogen Content of the Prebiotic Atmosphere. H2 concentrations in the prebiotic atmosphere were also controlled by its sources and sinks. In the prebiotic atmosphere, the main source of H2 was volcanic outgassing, and the main sink of outgassing H2 would have been escape to space. Some outgassed H2 would have reacted with atmospheric O2 to form water, but this was very likely a negligible sink of H2 because of scarce O2 (see the previous section). This is not the case in the modern atmosphere where the main sink of volcanic H2 is its reaction with plentiful atmospheric O2 to form H2O. If we assume that the prebiotic H2 concentration was at a steady-state, then the volcanic H2 flux was approximately equal to the escape flux of H2. formula_59 Additionally, if we assume that H2 was escaping at the diffusion-limited rate as it is on the modern Earth then formula_60 If the volcanic H2 flux was the modern value of formula_61H atoms cm−2 s−1, then we can estimate the total hydrogen content of the prebiotic atmosphere. formula_62 ppmv By comparison, H2 concentration in the modern atmosphere is 0.55 ppmv, so prebiotic H2 was likely several hundred times higher than today's value. This estimate should be considered as a lower bound on the actual prebiotic H2 concentration. There are several important factors that we neglected in this calculation. The Earth likely had higher rates of hydrogen outgassing because the interior of the Earth was much warmer ~4 billion years ago. Additionally, there is geologic evidence that the mantle was more reducing in the distant past, meaning that even more reduced gases (e.g. H2) would have been outgassed by volcanos relative to oxidized volcanic gases. Other reduced volcanic gases, like CH4 and H2S should also contribute to this calculation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
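The arithmetic behind two of the estimates above is easy to reproduce. The Python sketch below plugs the quoted constant C = 2.5e13 cm-2 s-1 and the quoted mixing ratios into the diffusion-limited flux, and then repeats the prebiotic steady-state estimate of the H2 mixing ratio. Note that with these exact inputs the modern flux comes out near 3.6e8, the same order as the ~4.3e8 quoted above from the more detailed calculation.

C = 2.5e13                                  # cm^-2 s^-1, constant for Earth's atmosphere

# Modern stratospheric inventory: 1.8 ppmv CH4, 3 ppmv H2O, 0.55 ppmv H2,
# each weighted by the number of H atoms it carries: f_T(H) = 4*f_CH4 + 2*f_H2O + 2*f_H2.
f_T_H = 4 * 1.8e-6 + 2 * 3.0e-6 + 2 * 0.55e-6
phi_modern = C * f_T_H
print(f"modern diffusion-limited escape ~ {phi_modern:.1e} H atoms cm^-2 s^-1")
# ~3.6e8 with these inputs, comparable to the ~4.3e8 quoted in the text.

# Prebiotic steady state: volcanic H2 outgassing ~ diffusion-limited H2 escape,
# so f_T(H2) ~ Phi_volc / C.
phi_volc_H2 = 3.75e9                        # H2 molecules cm^-2 s^-1 (modern-like flux)
f_H2_prebiotic = phi_volc_H2 / C
print(f"prebiotic H2 ~ {f_H2_prebiotic * 1e6:.0f} ppmv")   # ~150 ppmv, as estimated above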
[ { "math_id": 0, "text": "\\Phi_{l}" }, { "math_id": 1, "text": "\\Phi_l = Cf_T(H)" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "f_T(H)" }, { "math_id": 4, "text": "f_T(H)=f_{H}+2f_{H_2}+2f_{H_2O}+4f_{CH_4}+..." }, { "math_id": 5, "text": "C=2.5\\times 10^{13}" }, { "math_id": 6, "text": "\\Phi_l = 4.3\\times 10^8" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "\\Phi_i = \\Phi_i^{mol}+\\Phi_i^{eddy}" }, { "math_id": 9, "text": "\\Phi_i^{eddy}=-Kn\\frac{df_i}{dz}" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "f_i" }, { "math_id": 13, "text": "\\vec{v}_1-\\vec{v}_2=-D_{12}\\left(\\frac{n^2}{n_1n_2}\\nabla\\left(\\frac{n_1}{n}\\right)+\\frac{m_2-m_1}{m} \\nabla(\\ln{P})+\\alpha_T \\nabla (\\ln{T})-\\frac{m_1m_2}{mkT}(\\vec{a}_1-\\vec{a}_2) \\right) " }, { "math_id": 14, "text": "w_1" }, { "math_id": 15, "text": "n_1" }, { "math_id": 16, "text": "\\vec{a}_1=\\vec{a}_2=g" }, { "math_id": 17, "text": "w_1-w_2=-D_{12}\\left(\\frac{n^2}{n_1n_2}\\frac{d}{dz}\\left(\\frac{n_1}{n}\\right)+\\frac{m_2-m_1}{m} \\frac{d}{dz}(\\ln{P})+\\alpha_T \\frac{d}{dz} (\\ln{T})\\right) " }, { "math_id": 18, "text": "w_2=0 " }, { "math_id": 19, "text": "\\frac{d}{dz}\\ln {P}=\\frac{1}{P}\\frac{dP}{dz}=\\frac{-mg}{kT} " }, { "math_id": 20, "text": "\\frac{d}{dz}\\ln{T}=\\frac{1}{T}\\frac{dT}{dz} " }, { "math_id": 21, "text": "w_1=-D_{12}\\left(\\frac{n^2}{n_1n_2}\\frac{df_1}{dz}+\\frac{(m_2-m_1)g}{kT}+\\frac{\\alpha_T}{T} \\frac{dT}{dz} \\right) " }, { "math_id": 22, "text": "n_1/n=f_1 " }, { "math_id": 23, "text": "\\Phi_1^{mol}=w_1n_1=-D_{12}n_1\\left(\\frac{n^2}{n_1n_2}\\frac{df_1}{dz}+\\frac{(m_2-m_1)g}{kT}+\\frac{\\alpha_T}{T} \\frac{dT}{dz} \\right) " }, { "math_id": 24, "text": "\\Phi_1 = \\Phi_1^{mol}+\\Phi_1^{eddy}=-Kn\\frac{df_1}{dz}-D_{12}n_1\\left(\\frac{n^2}{n_1n_2}\\frac{df_1}{dz}+\\frac{(m_2-m_1)g}{kT}+\\frac{\\alpha_T}{T} \\frac{dT}{dz} \\right) " }, { "math_id": 25, "text": "dT/dz\\approx0" }, { "math_id": 26, "text": "\\Phi_1 = -Kn\\frac{df_1}{dz}-D_{12}n_1\\left(\\frac{n^2}{n_1n_2}\\frac{df_1}{dz}+\\frac{(m_2-m_1)g}{kT}\\right) " }, { "math_id": 27, "text": "df_1/dz=0" }, { "math_id": 28, "text": "f_1" }, { "math_id": 29, "text": "f_1=n_1/n" }, { "math_id": 30, "text": "\\Phi_1=w_1n_1" }, { "math_id": 31, "text": "\\Phi_l" }, { "math_id": 32, "text": "\\Phi_l = D_{12}n_1\\left(\\frac{(m_2-m_1)g}{kT}\\right) " }, { "math_id": 33, "text": "D_{12}=b_{12}/n" }, { "math_id": 34, "text": "\\Phi_l =\\frac{b_{12}g(m_2-m_1)}{kT}\\frac{n_1}{n}=\\frac{b_{12}g(m_2-m_1)}{kT}f_1 " }, { "math_id": 35, "text": "\\Phi_l =Cf_1 " }, { "math_id": 36, "text": "m_{air}-m_{hydrogen}\\approx 4.8 \\times 10^{-26}" }, { "math_id": 37, "text": "g=9.81 " }, { "math_id": 38, "text": "T\\approx 208 " }, { "math_id": 39, "text": "b_{12}=b_{H}\\frac{n_{H}}{n_{H}+n_{H2}}+b_{H2}\\frac{n_{H2}}{n_{H}+n_{H2}}" }, { "math_id": 40, "text": "n_H\\approx 1.8 \\times 10^7" }, { "math_id": 41, "text": "n_{H2}\\approx 5.2 \\times 10^7" }, { "math_id": 42, "text": "b_H\\approx 2.73 \\times 10^{19}" }, { "math_id": 43, "text": "b_{H2}\\approx 1.46 \\times 10^{19}" }, { "math_id": 44, "text": "b_{12}=1.8 \\times 10^{19}" }, { "math_id": 45, "text": "C=2.9\\times 10^{13}" }, { "math_id": 46, "text": "C_{mars}=1.1\\times 10^{13}" }, { "math_id": 47, "text": "f_T(H)=(30\\pm 10)\\times10^{-6}" }, { "math_id": 48, "text": "\\Phi_l^{mars}=C_{mars}f_T(H)=(3.3\\pm1.1)\\times 10^8" }, { "math_id": 49, "text": "1\\times 10^8" }, { "math_id": 50, "text": "2\\times 10^8" }, { 
"math_id": 51, "text": "1.7\\times10^7" }, { "math_id": 52, "text": "3\\times10^7" }, { "math_id": 53, "text": "(2.0\\pm2.1)\\times 10^{10}" }, { "math_id": 54, "text": "3\\times 10^{10}" }, { "math_id": 55, "text": "\\Phi_l(H)=(2.5 \\times 10^{13})\\cdot(6\\times 10^{-6})=1.5\\times 10^{8}" }, { "math_id": 56, "text": "3.75\\times 10^7" }, { "math_id": 57, "text": "7.5 \\times 10^{9}" }, { "math_id": 58, "text": "1.9 \\times 10^{9}" }, { "math_id": 59, "text": "\\Phi_{volc}(H_2)\\approx \\Phi_{esc}(H_2)" }, { "math_id": 60, "text": "\\Phi_{volc}(H_2)\\approx \\Phi_{esc}(H_2)=2.5 \\times 10^{13}f_T(H_2)" }, { "math_id": 61, "text": "3.75 \\times 10^{9}" }, { "math_id": 62, "text": "f_T(H_2) \\approx \\frac{\\Phi_{volc}(H_2)}{2.5 \\times 10^{13}}=\\frac{3.75\\times 10^9}{2.5 \\times 10^{13}}=1.5\\times 10^{-4}=150 " } ]
https://en.wikipedia.org/wiki?curid=60817325
60819045
Galactic algorithm
Classification of algorithm A galactic algorithm is one with record-breaking theoretical (asymptotic) performance, but which is never used in practice. Typical reasons are that the performance gains only appear for problems that are so large they never occur, or the algorithm's complexity outweighs a relatively small gain in performance. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth. Possible use cases. Even if they are never used in practice, galactic algorithms may still contribute to computer science: Examples. Integer multiplication. An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs formula_1 bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits." Matrix multiplication. The first improvement over brute-force matrix multiplication (which needs formula_2 multiplications) was the Strassen algorithm: a recursive algorithm that needs formula_3 multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing formula_4 multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical." Communication channel capacity. Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel. It requires assigning a random code word to every possible formula_5-bit message, then decoding by finding the closest code word. If formula_5 is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any formula_5 big enough to beat existing codes is also completely impractical. These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity. Sub-graphs. The problem of deciding whether a graph formula_6 contains formula_7 as a minor is NP-complete in general, but where formula_7 is fixed, it can be solved in polynomial time. The running time for testing whether formula_7 is a minor of formula_6 in this case is formula_8, where formula_5 is the number of vertices in formula_6 and the big O notation hides a constant that depends superexponentially on formula_7. The constant is greater than formula_9 in Knuth's up-arrow notation, where formula_10 is the number of vertices in formula_7. Even the case of formula_11 cannot be reasonably computed as the constant is greater than 2 tetrated by 65536, that is, formula_12. Cryptographic breaks. In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key. For many cryptographic systems, breaks are known, but are still practically infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only formula_13 operations. 
Despite being impractical, theoretical breaks can provide insight into vulnerability patterns, and sometimes lead to discovery of exploitable breaks. Traveling salesman problem. For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm which produced a path at most 50% longer than the optimum. (Many other algorithms could "usually" do much better, but could not provably do so.) In 2020, a newer and much more complex algorithm was discovered that can beat this by formula_14 percent. Although no one will ever switch to this algorithm for its very slight worst-case improvement, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one". Hutter search. A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible proofs (by length of proof), looking for a proof of correctness for each algorithm. Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical. For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2999 other potential proofs first. Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference. All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic. Optimization. Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used. However, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems. Minimum spanning trees. The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph in formula_15, where formula_16 is the number of edges and formula_5 is the number of nodes of the graph. However, the constant factor that is hidden by the Big O notation is huge enough to make the algorithm impractical. An implementation is publicly available and given the experimentally estimated implementation constants, it would only be faster than Borůvka's algorithm for graphs in which formula_17. Hash tables. Researchers have found an algorithm that achieves the provably best-possible asymptotic performance in terms of time-space tradeoff. But it remains purely theoretical: "Despite the new hash table’s unprecedented efficiency, no one is likely to try building it anytime soon. It’s just too complicated to construct." and "in practice, constants really matter. In the real world, a factor of 10 is a game ender.” References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Theta\\bigl(n^{2^{100}}\\bigr)" }, { "math_id": 1, "text": "O(n \\log n)" }, { "math_id": 2, "text": "O(n^3)" }, { "math_id": 3, "text": "O(n^{2.807})" }, { "math_id": 4, "text": "O(n^{2.373})" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "O(n^2)" }, { "math_id": 9, "text": "2 \\uparrow \\uparrow (2 \\uparrow \\uparrow (2 \\uparrow \\uparrow (h/2) ) ) " }, { "math_id": 10, "text": "h" }, { "math_id": 11, "text": "h = 4" }, { "math_id": 12, "text": "{^{65536}2 = \\ \\atop {\\ }} {{\\underbrace{2^{2^{\\cdot^{\\cdot^{2}}}}}} \\atop 65536}" }, { "math_id": 13, "text": "2^{126}" }, { "math_id": 14, "text": "10^{-34}" }, { "math_id": 15, "text": "O(m + n)" }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": "m + n > 9 \\cdot 10^{151}" } ]
https://en.wikipedia.org/wiki?curid=60819045
608194
Apriori algorithm
Algorithm for frequent item set mining and association rule learning over transactional databases Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis. Overview. The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of a website frequentation or IP addresses). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an "itemset"). Given a threshold formula_0, the Apriori algorithm identifies the item sets which are subsets of at least formula_0 transactions in the database. Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as "candidate generation"), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length formula_1 from item sets of length formula_2. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequent formula_1-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. The pseudo code for the algorithm is given below for a transaction database formula_3, and a support threshold of formula_4. Usual set theoretic notation is employed, though note that formula_3 is a multiset. formula_5 is the candidate set for level formula_1. At each step, the algorithm is assumed to generate the candidate sets from the large item sets of the preceding level, heeding the downward closure lemma. formula_6 accesses a field of the data structure that represents candidate set formula_7, which is initially assumed to be zero. Many details are omitted below, usually the most important part of the implementation is the data structure used for storing the candidate sets, and counting their frequencies. Apriori(T, ε) k ← 2 while Lk−1 is not empty Ck ← Apriori_gen(Lk−1, k) for transactions t in T for candidates c in Dt count[c] ← count[c] + 1 k ← k + 1 return Union(Lk) Apriori_gen(L, k) result ← list() for all p ∈ L, q ∈ L where p1 = q1, p2 = q2, ..., pk-2 = qk-2 and pk-1 &lt; qk-1 if u ∈ L for all u ⊆ c where |u| = k-1 result.add(c) return result Examples. Example 1. Consider the following database, where each row is a transaction and each cell is an individual item of the transaction: The association rules that can be determined from this database are the following: we can also illustrate this through a variety of examples. Example 2. Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for each item: each item, such as "butter" or "bread", is identified by a numerical SKU. 
The supermarket has a database of transactions where each transaction is a set of SKUs that were bought together. Let the database of transactions consist of following itemsets: We will use Apriori to determine the frequent item sets of this database. To do this, we will say that an item set is frequent if it appears in at least 3 transactions of the database: the value 3 is the "support threshold". The first step of Apriori is to count up the number of occurrences, called the support, of each member item separately. By scanning the database for the first time, we obtain the following result All the itemsets of size 1 have a support of at least 3, so they are all frequent. The next step is to generate a list of all pairs of the frequent items. For example, regarding the pair {1,2}: the first table of Example 2 shows items 1 and 2 appearing together in three of the itemsets; therefore, we say item {1,2} has support of three. The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum support of 3, so they are frequent. The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can "prune" sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs: in the example, there are no frequent triplets. {2,3,4} is below the minimal threshold, and the other triplets were excluded because they were super sets of pairs that were already below the threshold. We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold. Limitations. Apriori, while historically significant, suffers from a number of inefficiencies or trade-offs, which have spawned other algorithms. Candidate generation generates large numbers of subsets (The algorithm attempts to load up the candidate set, with as many as possible subsets before each scan of the database). Bottom-up subset exploration (essentially a breadth-first traversal of the subset lattice) finds any maximal subset S only after all formula_8 of its proper subsets. The algorithm scans the database too many times, which reduces the overall performance. Due to this, the algorithm assumes that the database is permanently in the memory. Also, both the time and space complexity of this algorithm are very high: formula_9, thus exponential, where formula_10 is the horizontal width (the total number of items) present in the database. Later algorithms such as Max-Miner try to identify the maximal frequent item sets without enumerating their subsets, and perform "jumps" in the search space rather than a purely bottom-up approach. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
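The flattened pseudocode above omits a few steps of the published algorithm (construction of the initial frequent 1-itemsets L1, the per-transaction candidate subsets, and the filtering of Ck down to Lk). The Python sketch below is one way to fill those in; it is a minimal illustration rather than the reference implementation, and the transaction list it uses is an assumption chosen to be consistent with the support counts quoted in Example 2 (support threshold 3).

from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # L1: frequent 1-itemsets.
    L = {frozenset([i]) for i in items
         if sum(i in t for t in transactions) >= min_support}
    frequent = dict()
    k = 1
    while L:
        frequent[k] = L
        k += 1
        # Candidate generation: join L_{k-1} with itself, then prune any
        # candidate with an infrequent (k-1)-subset (downward closure).
        candidates = {a | b for a in L for b in L if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in L for s in combinations(c, k - 1))}
        # Count supports with one pass over the database, then keep Lk.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        L = {c for c, n in counts.items() if n >= min_support}
    return frequent

db = [{1, 2, 3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}, {2, 3}, {3, 4}, {2, 4}]
for size, sets in apriori(db, 3).items():
    print(size, sorted(sorted(s) for s in sets))
# Expected: frequent pairs {1,2}, {2,3}, {2,4}, {3,4}; no frequent triples,
# since {2,3,4} falls below the threshold, matching the walk-through above.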
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "k-1" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "C_k" }, { "math_id": 6, "text": "\\mathrm{count}[c]" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "2^{|S|}-1" }, { "math_id": 9, "text": "O\\left(2^{|D|}\\right)" }, { "math_id": 10, "text": "|D|" } ]
https://en.wikipedia.org/wiki?curid=608194
60819688
Skin temperature (atmosphere)
The skin temperature of an atmosphere is the temperature of a hypothetical thin layer high in the atmosphere that is transparent to incident solar radiation and partially absorbing of infrared radiation from the planet. It provides an approximation for the temperature of the tropopause on terrestrial planets with greenhouse gases present in their atmospheres. The skin temperature of an atmosphere should not be confused with the "surface skin temperature", which is more readily measured by satellites, and depends on the thermal emission at the surface of a planet. Background. The concept of a skin temperature builds on a radiative-transfer model of an atmosphere, in which the atmosphere of a planet is divided into an arbitrary number of layers. Each layer is transparent to the visible radiation from the Sun but acts as a blackbody in the infrared, fully absorbing and fully re-emitting infrared radiation originating from the planet's surface and from other atmospheric layers. Layers are warmer near the surface and colder at higher altitudes. If the planet's atmosphere is in radiative equilibrium, then the uppermost of these opaque layers should radiate infrared radiation upwards with a flux equal to the incident solar flux. The uppermost opaque layer (the emission level) will thus radiate as a blackbody at the planet's equilibrium temperature. The skin layer of an atmosphere references a layer far above the emission level, at a height where the atmosphere is extremely diffuse. As a result, this thin layer is transparent to solar (visible) radiation and translucent to planetary/atmospheric (infrared) radiation. In other words, the skin layer acts as a graybody, because it is not a perfect absorber/emitter of infrared radiation. Instead, most of the infrared radiation coming from below (i.e. from the emission level) will pass through the skin layer, with only a small fraction being absorbed, resulting in a cold skin layer. Derivation. Consider a thin layer of gas high in the atmosphere with some absorptivity (i.e. the fraction of incoming energy that is absorbed), ε. If the emission layer has some temperature Teq, the total flux reaching the skin layer from below is given by: formula_0 assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann law. σ is the Stefan-Boltzmann constant. As a result: formula_1 is absorbed by the skin layer, while formula_2 passes through the skin layer, radiating directly into space. Assuming the skin layer is at some temperature Ts, and using Kirchhoff's law (absorptivity = emissivity), the total radiation flux produced by the skin layer is given by: formula_3 where the factor of 2 comes from the fact that the skin layer radiates in both the upwards and downwards directions. If the skin layer remains at a constant temperature, the energy fluxes in and out of the skin layer should be equal, so that: formula_4 Therefore, by rearranging the above equation, the skin temperature can be related to the equilibrium temperature of an atmosphere by: formula_5 The skin temperature is thus independent of the absorptivity/emissivity of the skin layer. Applications. A multi-layered model of a greenhouse atmosphere will produce predicted temperatures for the atmosphere that decrease with height, asymptotically approaching the skin temperature at high altitudes. The temperature profile of the Earth's atmosphere does not follow this type of trend at all altitudes, as it exhibits two temperature inversions, i.e. 
regions where the atmosphere gets warmer with increasing altitude. These inversions take place in the stratosphere and the thermosphere, due to absorption of solar ultraviolet (UV) radiation by ozone and absorption of solar extreme ultraviolet (XUV) radiation respectively. Although the reality of Earth's atmospheric temperature profile deviates from the many-layered model due to these inversions, the model is relatively accurate within Earth's troposphere. The skin temperature is a close approximation for the temperature of the tropopause on Earth. An equilibrium temperature of 255 K on Earth yields a skin temperature of 214 K, which compares with a tropopause temperature of 209 K. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
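The end of the derivation is easy to verify numerically. The Python lines below compute the skin temperature from Earth's equilibrium temperature of 255 K and confirm that the energy balance of the skin layer holds for any absorptivity, since the factor of ε cancels.

sigma = 5.670374419e-8        # W m^-2 K^-4, Stefan-Boltzmann constant

T_eq = 255.0                  # K, Earth's equilibrium temperature
T_s = T_eq * 0.5 ** 0.25      # skin temperature, independent of emissivity
print(f"T_s = {T_s:.0f} K")   # ~214 K, compared with the ~209 K tropopause

# Energy balance of the skin layer for arbitrary absorptivity epsilon:
for eps in (0.01, 0.5):
    absorbed = eps * sigma * T_eq ** 4        # flux absorbed from below
    emitted = 2 * eps * sigma * T_s ** 4      # flux emitted up and down
    print(eps, round(absorbed, 2), round(emitted, 2))   # equal for every eps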
[ { "math_id": 0, "text": "F = \\sigma T_{eq}^4" }, { "math_id": 1, "text": "F_{in} = \\epsilon\\sigma T_{eq}^4" }, { "math_id": 2, "text": "F_{thru} = (1-\\epsilon)\\sigma T_{eq}^4" }, { "math_id": 3, "text": "F_{out,Total} = 2\\epsilon\\sigma T_{s}^4" }, { "math_id": 4, "text": "\\epsilon\\sigma T_{eq}^4 = 2\\epsilon\\sigma T_{s}^4" }, { "math_id": 5, "text": "T_{s}=T_{eq}\\left ( \\frac{1}{2} \\right )^{1/4}" } ]
https://en.wikipedia.org/wiki?curid=60819688
60826875
Digital ion trap
Scientific analytical tool The digital ion trap (DIT) is an quadrupole ion trap driven by digital signals, typically in a rectangular waveform, generated by switching rapidly between discrete DC voltage levels. The digital ion trap has been mainly developed as a mass analyzer. History. A digital ion trap (DIT) is an ion trap having a trapping waveform generated by the rapid switching between discrete high-voltage levels. The timing of the high voltage switch is controlled precisely with digital electronic circuitry. Ion motion in a quadrupole ion trap driven by a rectangular wave signal was theoretically studied in 1970s by Sheretov, E.P. and Richards, J.A. Sheretov also implemented the pulsed waveform drive for the quadrupole ion trap working in mass-selective instability mode, although no resonance excitation/ejection was used. The idea was substantially revisited by Ding L. and Kumashiro S. in 1999, where the ion stability in the rectangular wave quadrupole field was mapped in the Mathieu space "a"-"q" coordinate system, with the parameters "a" and "q" having the same definition as the Mathieu parameters normally used in dealing with sinusoidal RF driven quadrupole field. The secular frequency dependence on the "a", "q" parameters was also derived thus the foundation was laid for many modern ion trap operation modes based on the resonance excitation. Also, in 1999, Peter T.A. Reilly began trapping and subsequently ablating and mass analyzing the product ions from nanoparticles obtained from car exhaust with a primitive hybrid square wave/sine wave driven 3D ion trap. In 2001 Reilly attended the 49th American Society for Mass Spectrometry (ASMS) Conference on Mass Spectrometry and Applied Topics where he presented his nanoparticle mass analysis work and met Li Ding for the first time. Reilly suggested to Ding at that time that they should focus the DIT for analysis in the high mass range where other instruments could not compete. However, work published by Ding and Shimadzu over the years following the 2001 meeting were focused on development of square wave driven DIT's in the conventional mass range of commercial instrumentation. During this time Reilly began developing digital waveforms to increase the mass range of quadrupole-based mass spectrometers and ion traps that operate with rectangular waveforms. Over the course of eighteen years, the Reilly group contributed substantially to the development of modern digital waveform technology (DWT), its implementation and characterization, methods of waveform generation, and general theory which includes but is not limited to stability diagrams, the pseudopotential model, and more recently digital quadrupole acceptance. In parallel to Reilly's achievements but also working separately, the Ding group at the Shimadzu Research Lab continued to implement their digital drive technology for a 3D ion trap. Finally, after 18 years Shimadzu unveiled a bench top MALDI square wave driven 3D ion trap mass spectrometer that was designed to work in the higher mass range at the 2019 ASMS conference. The DIT technology has also been developed and implemented in the linear and 3D quadrupole ion traps by many other groups around the world. The Stability Under the Digital Drive. For a 3D type of quadrupole ion trap, ion motion under the influence of a digital waveform (see figure right) can be expressed in terms of the conventional trapping parameters: formula_0 and formula_1 Here, "Ω =2πf" is the angular frequency of the digital waveform. 
Similar definitions of the formula_2 for the 2D (linear) ion trap were also given in literature. There are at least two postulates about the nature of the DC component. The first, of which has been attributed to Ding, assumes for the DIT that the DC component, "U" depends on not only the mid-level of the AC voltages, V1and V2, but also the duty cycle, "d" of the waveform: formula_3 Whereas, the second but more general postulate assumes that there is no DC component unless there is an explicit DC voltage offset added to the waveforms. The latter interpretation is explained by the change to the stability diagram that results when the duty cycle moves away from "d =" 0.5. When this happens the range of stable "q" and "a" values for both quadrupole axes change. These changes cause the motion of ions to be more displaced along one axis compared to the other. This, consequently is the effect of the DC bias. It is important to accurately know the stability of ions inside the DIT. For example, different waveform duty cycles result in a different stability boundary. For the case of a square wave, where "d" = 0.5, the boundary of the first stability region crosses the formula_4 axis at approximately 0.712, which is less than 0.908, the boundary value formula_4 for a sinusoidal waveform. The stability of ion motion in a digitally driven quadrupole can be calculated from the analytical matrix solutions of Hill's equation: formula_5 formula_6 The analytical solutions apply to any periodic function so long as each period, formula_7 can be represented as a series of "n" constant potential steps formula_8. Each constant potential step is represented in dimensionless Mathieu space by the waveform potential parameter formula_9, where "q" and "a" were defined earlier by (1) and (2). The value formula_10 in (3) is the temporal width of the constant potential step. In a digital system that is operated without a physical DC offset the waveform potential reduces to the value formula_11. The sign of the parameter will depend on the sign of the constant potential at each step, and the appropriate matrix will depend on the sign of the parameter. Because a digital waveform may be approximated as existing in only high and low states (potential sign), the stability of ions, as demonstrated by Brabeck, may be determined in as few as two or three constant potential steps. In the simple but frequent case that a full cycle of a digital waveform can be represented by two constant potential steps, the matrix representing the first potential step would be multiplied onto the matrix representing the second potential step. In the general case, the final matrix of a waveform cycle defined by "n" constant potential steps is: formula_12 The matrix (4) is often referred to as the transfer matrix. It is used to evaluate whether an ion will have stable motion. If the absolute value of the trace of this matrix is less than 2 the ion is said to have stable motion. Stable motion simply means that the secular oscillation of the ion has a maximum displacement. When the absolute value of the trace is greater than 2 ion motion is not stable and the displacement of the ion increases with each secular oscillation. Ion trajectories in a linear or 3D DIT as well as in a digital mass filter, may also be calculated using a similar procedure. Unlike stability calculation it is advantageous for the purpose of resolution and accuracy to represent each period of the waveform with an adequate number of constant voltage steps. 
The trajectory for the constant potential step, formula_13"," for example, is calculated by multiplication of the appropriate matrix (3) for that step onto the trajectory vector of the step, formula_14: formula_15 A stability diagram may be generated by calculating the matrix trace for each axis over a defined range of "q" and "a" values. The stability diagram of a square wave is very similar to that of the traditional harmonic quadrupole field. Having the additional parameter "d" in the waveform, the digital ion trap can perform certain experiments which are not available in the conventional harmonic wave RF ion trap. One example is the digital asymmetric wave isolation which is the method of using "a d" value around 0.6 to narrow the mass range to isolate a precursor ion. The DIT is a versatile instrument because it is capable of operating at constant AC voltage without a DC offset for any conceivable duty cycle and frequency. The dynamic frequency does not impose a limit on the mass range. The Mathieu space stability diagram of the linear and 3D DIT change with duty cycle. When "a = 0" there will be a finite range of stable "q" values for each quadrupole axis that will depend on the duty cycle. Fig 3 (a) shows a Mathieu space stability diagram for the duty cycle "d = 0.50" of a linear DIT. The horizontal line indicates where the parameter "a = 0". The range of completely stable "q" values appear where this line passes through the green colored region; it ranges from "q =" "0" to roughly "q = 0.7125". The blue colored areas in the figure depict stability along the "x-"axis only. Yellow colored regions depict stability along the "y-"axis only. When the duty cycle is increased to "d = 0.60" the range of completely stable "q" values decreases (see Fig 3 (b)) as indicated by the reduction of green that the horizontal line intersects. In this representation the total range of stable "q" values along the "x-"axis, that is defined by the intersection of the line through the blue and green regions, is greater than the total range of stable "q" values along the "y-"axis that is defined by the intersection of the line through the yellow and green regions. In Fig 3 (b) the overall stability of the linear DIT in the "y" direction is smaller than in the "x" direction. If the frequency of the linear DIT is decreased to cause a particular ion to have a "q" value that corresponds to right hand side boundary of the completely stable green region, then it will excite and ultimately eject in the y direction. This is the fundamental mechanism that allows control over the direction of ion excitation in a linear DIT without resonant excitation. The DIT and other forms of digital mass analyzers scan ions by scanning the frequency of the drive waveform. The AC voltage is typically fixed during the scan. Digital devices use a duty cycle which allows them to operate completely independent of a DC voltage and without resonant excitation. When the DC voltage is zero the parameter "a" is also zero. Consequently, ion stability will depend on "q". With these considerations it was possible to design a new type of stability diagram that is more suitable for planning and performing experiment. In 2014 Brabeck and Reilly created a stability diagram that maps the range of stable mass-to-charge ratios, "m/z" to the corresponding range of drive frequencies based on several user inputs. For a particular duty cycle, the operator can quickly reference the range of stable masses at each frequency of a scan. 
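A minimal sketch of this frequency-to-mass mapping rearranges the definition of q to give the low-mass cut-off at each drive frequency; the trap dimensions, drive amplitude and boundary value (q ≈ 0.712 for d = 0.5) below are assumed example inputs.

```python
# Sketch: low-mass cut-off of a digital trap versus drive frequency, obtained by
# rearranging q = 4 e V / (m r0^2 Omega^2) at the stability boundary q_max
# (~0.712 for a 50% duty cycle). Device values are examples only.
import math

e = 1.602176634e-19
amu = 1.66053906660e-27

def mz_cutoff(f_hz, V_volts=500.0, r0_m=0.01, q_max=0.712):
    """m/z (in Th) at which a singly charged ion sits on the stability boundary."""
    omega = 2 * math.pi * f_hz
    m_kg = 4 * e * V_volts / (q_max * r0_m**2 * omega**2)
    return m_kg / amu

for f in (1.0e6, 5.0e5, 2.5e5, 1.0e5):   # scanning the drive frequency down
    print(f"f = {f/1e3:7.1f} kHz  ->  ions below m/z {mz_cutoff(f):8.0f} are unstable")
```

Scanning the frequency down moves the cut-off to progressively higher mass, which is the basis of the forward mass scan of the DIT.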
Fig 4 (a) and (b) show the frequency-"m/z" stability diagram for a linear DIT with a duty cycle of "d = 0.50" and "d = 0.60" respectively. Secular Frequency and Pseudopotential Well Depth. The secular frequency is the fundamental frequency component of the ion motion in a quadrupole field driven by a periodic signal, and it is usually chosen for resonance excitation of ion motion to achieve ion ejection and/or ion energy activation, which may lead to collision induced dissociation. The secular frequency is conventionally written as: formula_16 For a digital driving signal, Ding derived the expression of the secular frequency using matrix transform theory: formula_17 where formula_18 are the two diagonal elements of the transform matrix of ion motion. For a DC-free square wave ( formula_19 ) the transform matrix may be expressed using the stability parameter formula_20, thus: formula_21 Formulas (6) and (7) give a direct relation between the secular frequency and the digital drive waveform parameters (frequency and amplitude), without the iterative process that is needed for a sinusoidally driven quadrupole ion trap. Normally the depth of the 'effective potential' well, or the pseudopotential well, is used to estimate the maximum kinetic energy of ions that remain trapped. For the DIT, this was also derived using the Dehmelt approximation: formula_22 [eV] Instrumentation and Performance. Initially the digital ion trap was constructed in the form of a 3D ion trap, where the drive signal was fed to the ring electrode of the trap. Instead of scanning up the RF voltage, in the DIT the frequency of the rectangular waveform signal is scanned down during a forward mass scan. This avoided the high-voltage breakdown that set the upper limit of a mass scan. A mass range of the DIT up to 18,000 Th was demonstrated by use of an atmospheric MALDI ion source and was later expanded to cover the "m/z" of a singly charged antibody at about 900,000 Th by Koichi Tanaka et al. A MOSFET switch circuit provides the rectangular wave drive signal. The drive circuit of the DIT is much more compact than the RF generator with LC resonator circuit used for a conventional sinusoidal wave ion trap. It also provides the capability of fast start-up and fast termination of the waveform, which enables injection and ejection of ions with high efficiency. A field-adjusting electrode placed adjacent to the entrance end-cap and biased with certain DC voltages helped to achieve good mass resolution for both forward and reverse mass scans, as well as for precursor isolation. With a trapping voltage of ±1 kV, a zoom scan resolving power of 19,000 was demonstrated. Many new features for tandem mass analysis were gradually revealed using the digital ion trap. Ions can be selectively removed from the ion trap by boundary ejection simply by varying the duty cycle of the digital waveform, instead of applying the conventional "resolving DC" voltage. Since rectangular waveforms are employed in the DIT, electrons can be injected into the trap during one of the voltage levels without being accelerated by the varying electric field. This enabled electron-capture dissociation, which requires a very low energy electron beam to interact with the trapped ions, to be achieved in the digital ion trap without the assistance of a magnetic field. Other forms of the digital ion trap were also developed, including a linear ion trap constructed using printed circuit boards and a rod-structure linear ion guide/trap. 
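A short numerical sketch of the secular frequency and pseudopotential well depth expressions above (equations (6), (7) and the Dehmelt approximation) follows; the drive amplitude, frequency and q value are an illustrative operating point, not data from a specific instrument.

```python
# Sketch: secular frequency and Dehmelt pseudopotential well depth for a DC-free
# square wave, using Eqs. (6)-(7) and D_z ~ (pi^2/48) q_z V from the text.
import math

def beta_z(q_z):
    """Stability parameter beta_z for a DC-free square wave (Eq. 7)."""
    x = math.pi * math.sqrt(q_z / 2)
    return math.acos(math.cos(x) * math.cosh(x)) / math.pi

def secular_frequency_hz(q_z, f_drive_hz):
    """Secular frequency in Hz, from omega_z = (1/2) beta_z Omega (Eq. 6)."""
    return 0.5 * beta_z(q_z) * f_drive_hz

def well_depth_eV(q_z, V_volts):
    """Dehmelt-approximation pseudopotential well depth, in eV."""
    return (math.pi**2 / 48) * q_z * V_volts

q, f, V = 0.40, 5.0e5, 500.0          # example operating point
print(f"beta_z         = {beta_z(q):.3f}")
print(f"secular freq.  = {secular_frequency_hz(q, f)/1e3:.1f} kHz")
print(f"well depth D_z = {well_depth_eV(q, V):.1f} eV")
```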
Two sets of switch circuitry were normally used to generate the two phases of the rectangular pulse waveform for the two pairs of rods in the case of a linear digital ion trap. Commercialization. Hexin Instrument Co., Ltd (Guangzhou, China) commercialized a portable ion trap mass spectrometer, the DT-100, in 2017 for VOC monitoring. The mass spectrometer employs a VUV photoionization source and a digital linear ion trap as the mass analyzer, with an overall weight of 13 kg and a size of 350 x 320 x 190 mm including the rechargeable Li battery. The specification includes a mass range of 20 - 500 Th for both MS and MS2, and a mass resolution of 0.3 Th (FWHM) at 106 Th. Shimadzu Corp. released the MALDI digital ion trap mass spectrometer MALDImini-1 in 2019. Having a footprint the size of an A3 sheet of paper, the MALDI mass spectrometer covered an impressive mass range up to 70,000 Th and an MSn mass range to 5,000 Th. Tandem mass analysis up to MS3 is available, which allows researchers to carry out comprehensive structural analyses, such as direct glycopeptide analysis, post-translational modification analysis, and branched glycan structural analysis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " a_z = -\\frac {8eU} {m r_0^2 \\Omega^2} \\qquad\\qquad (1) \\!" }, { "math_id": 1, "text": " q_z = \\frac {4eV} {m r_0^2 \\Omega^2} . \\qquad\\qquad (2) \\!" }, { "math_id": 2, "text": "a , q" }, { "math_id": 3, "text": " U = dV_1 + (1-d) V_2 " }, { "math_id": 4, "text": "q_z" }, { "math_id": 5, "text": "\\mathbf V(f_n, \\tau_n)=\\begin{bmatrix} cos(\\tau_n\\sqrt{f_n)} & 1/\\sqrt{f_n} sin(\\tau_n\\sqrt{f_n}\\\\ -\\sqrt{f_n}sin(\\tau_n\\sqrt{f_n}) & cos(\\tau_n\\sqrt{f_n}) \\end{bmatrix} \\qquad f_n >0 \\qquad\\qquad\\qquad (3a)" }, { "math_id": 6, "text": "\\mathbf V(f_n, \\tau_n)=\\begin{bmatrix} cosh(\\tau_n\\sqrt{-f_n)} & 1/\\sqrt{-f_n} sinh(\\tau_n\\sqrt{-f_n}\\\\ \\sqrt{-f_n}sinh(\\tau_n\\sqrt{-f_n}) & cosh(\\tau_n\\sqrt{-f_n}) \\end{bmatrix} \\qquad f_n < 0 \\qquad\\qquad (3b)" }, { "math_id": 7, "text": " T" }, { "math_id": 8, "text": "T = \\sum^n_1 t_n" }, { "math_id": 9, "text": "f = a\\pm2q" }, { "math_id": 10, "text": "\\tau_n = t_n\\pi" }, { "math_id": 11, "text": "\\pm 2q" }, { "math_id": 12, "text": "\\mathbf M=\\mathbf V(f_1,\\tau_1)\\times \\mathbf V(f_2,\\tau_2)...\\times \\mathbf V(f_n, \\tau_n) \\qquad\\qquad\\qquad(4)" }, { "math_id": 13, "text": " k" }, { "math_id": 14, "text": " k-1" }, { "math_id": 15, "text": "\\binom {u_k}{\\dot u_k}=\\mathbf V(f_k, \\tau_k)\\binom{u_{k-1}}{\\dot u_{k-1}} \\qquad \\qquad (5)" }, { "math_id": 16, "text": " \\omega_z = \\frac {1} {2} \\beta_z\\Omega \\qquad\\qquad (6) \\!" }, { "math_id": 17, "text": " \\omega_z = \\frac {\\Omega} {2\\pi} arccos \\frac {\\phi_{11} + \\phi_{22} } {2}" }, { "math_id": 18, "text": " \\phi_{11}, \\phi_{22}" }, { "math_id": 19, "text": "a =0 " }, { "math_id": 20, "text": "q " }, { "math_id": 21, "text": " \\beta_z = \\frac {1} {\\pi} arccos[cos (\\pi{\\sqrt{q_z/2}})cosh ( \\pi {\\sqrt{q_z/2}}) \\qquad\\qquad (7) \\!" }, { "math_id": 22, "text": " D_z = \\frac {{\\pi}^2}{48} q_z V \\approx 0.206 q_z V " } ]
https://en.wikipedia.org/wiki?curid=60826875
60827351
Dynamic discrete choice
Dynamic discrete choice (DDC) models, also known as discrete choice models of dynamic programming, model an agent's choices over discrete options that have future implications. Rather than assuming observed choices are the result of static utility maximization, observed choices in DDC models are assumed to result from an agent's maximization of the present value of utility, generalizing the utility theory upon which discrete choice models are based. The goal of DDC methods is to estimate the structural parameters of the agent's decision process. Once these parameters are known, the researcher can then use the estimates to simulate how the agent would behave in a counterfactual state of the world. (For example, how a prospective college student's enrollment decision would change in response to a tuition increase.) Mathematical representation. Agent formula_0's maximization problem can be written mathematically as follows: formula_1 where Simplifying assumptions and notation. It is standard to impose the following simplifying assumptions and notation of the dynamic decision problem: The flow utility can be written as an additive sum, consisting of deterministic and stochastic elements. The deterministic component can be written as a linear function of the structural parameters. formula_13 Define by formula_14 the "ex ante" value function for individual formula_0 in period formula_9 just before formula_15 is revealed: formula_16 where the expectation operator formula_17 is over the formula_18's, and where formula_19 represents the probability distribution over formula_20 conditional on formula_21. The expectation over state transitions is accomplished by taking the integral over this probability distribution. It is possible to decompose formula_14 into deterministic and stochastic components: formula_22 where formula_23 is the value to choosing alternative formula_8 at time formula_9 and is written as formula_24 where now the expectation formula_17 is taken over the formula_25. The states formula_21 follow a Markov chain. That is, attainment of state formula_21 depends only on the state formula_26 and not formula_27 or any prior state. Conditional value functions and choice probabilities. The value function in the previous section is called the conditional value function, because it is the value function conditional on choosing alternative formula_8 in period formula_9. Writing the conditional value function in this way is useful in constructing formulas for the choice probabilities. To write down the choice probabilities, the researcher must make an assumption about the distribution of the formula_10's. As in static discrete choice models, this distribution can be assumed to be iid Type I extreme value, generalized extreme value, multinomial probit, or mixed logit. For the case where formula_10 is multinomial logit (i.e. drawn iid from the Type I extreme value distribution), the formulas for the choice probabilities would be: formula_28 Estimation. Estimation of dynamic discrete choice models is particularly challenging, due to the fact that the researcher must solve the backwards recursion problem for each guess of the structural parameters. The most common methods used to estimate the structural parameters are maximum likelihood estimation and method of simulated moments. Aside from estimation methods, there are also solution methods. Different solution methods can be employed due to complexity of the problem. These can be divided into full-solution methods and non-solution methods. 
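Before turning to specific methods, the backward recursion and logit choice probabilities defined above can be sketched on a toy model; the state space, flow utilities and transition matrices below are invented purely for illustration.

```python
# Toy backward recursion for a finite-horizon dynamic discrete choice model with
# iid Type I extreme value shocks, so that the ex ante value is a log-sum-exp of
# the conditional values and P_i(x) = exp(v_i(x)) / sum_j exp(v_j(x)).
import numpy as np
from scipy.special import logsumexp

n_states, n_choices, T, beta = 5, 2, 10, 0.95
u = np.random.default_rng(0).normal(size=(T, n_states, n_choices))   # flow utilities
# F[i] is the state-transition matrix conditional on choice i (rows sum to one).
F = np.stack([np.full((n_states, n_states), 1.0 / n_states) for _ in range(n_choices)])

EV = np.zeros(n_states)                      # continuation value after period T
choice_probs = np.empty((T, n_states, n_choices))
for t in reversed(range(T)):
    v = u[t] + beta * np.stack([F[i] @ EV for i in range(n_choices)], axis=1)
    choice_probs[t] = np.exp(v - logsumexp(v, axis=1, keepdims=True))
    EV = logsumexp(v, axis=1)                # ex ante value, up to Euler's constant

print(choice_probs[0].round(3))              # period-0 choice probabilities by state
```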
Full-solution methods. The foremost example of a full-solution method is the nested fixed point (NFXP) algorithm developed by John Rust in 1987. The NFXP algorithm is described in great detail in its documentation manual. A recent work by Che-Lin Su and Kenneth Judd in 2012 implements another approach (dismissed as intractable by Rust in 1987), which uses constrained optimization of the likelihood function, a special case of mathematical programming with equilibrium constraints (MPEC). Specifically, the likelihood function is maximized subject to the constraints imposed by the model, and expressed in terms of the additional variables that describe the model's structure. This approach requires powerful optimization software such as Artelys Knitro because of the high dimensionality of the optimization problem. Once it is solved, both the structural parameters that maximize the likelihood, and the solution of the model are found. In the later article Rust and coauthors show that the speed advantage of MPEC compared to NFXP is not significant. Yet, because the computations required by MPEC do not rely on the structure of the model, its implementation is much less labor intensive. Despite numerous contenders, the NFXP maximum likelihood estimator remains the leading estimation method for Markov decision models. Non-solution methods. An alternative to full-solution methods is non-solution methods. In this case, the researcher can estimate the structural parameters without having to fully solve the backwards recursion problem for each parameter guess. Non-solution methods are typically faster while requiring more assumptions, but the additional assumptions are in many cases realistic. The leading non-solution method is conditional choice probabilities, developed by V. Joseph Hotz and Robert A. Miller. Examples. Bus engine replacement model. The bus engine replacement model developed in the seminal paper is one of the first dynamic stochastic models of discrete choice estimated using real data, and continues to serve as classical example of the problems of this type. The model is a simple regenerative optimal stopping stochastic dynamic problem faced by the decision maker, Harold Zurcher, superintendent of maintenance at the Madison Metropolitan Bus Company in Madison, Wisconsin. For every bus in operation in each time period Harold Zurcher has to decide whether to replace the engine and bear the associated replacement cost, or to continue operating the bus at an ever raising cost of operation, which includes insurance and the cost of lost ridership in the case of a breakdown. Let formula_29 denote the odometer reading (mileage) at period formula_9, formula_30 cost of operating the bus which depends on the vector of parameters formula_31, formula_32 cost of replacing the engine, and formula_33 the discount factor. Then the per-period utility is given by formula_34 where formula_35 denotes the decision (keep or replace) and formula_36 and formula_37 represent the component of the utility observed by Harold Zurcher, but not John Rust. It is assumed that formula_36 and formula_37 are independent and identically distributed with the Type I extreme value distribution, and that formula_38 are independent of formula_39 conditional on formula_29. Then the optimal decisions satisfy the Bellman equation formula_40 where formula_41 and formula_42 are respectively transition densities for the observed and unobserved states variables. 
Time indices in the Bellman equation are dropped because the model is formulated in the infinite horizon settings, the unknown optimal policy is stationary, i.e. independent of time. Given the distributional assumption on formula_42, the probability of particular choice formula_35 is given by formula_43 where formula_44 is a unique solution to the functional equation formula_45 It can be shown that the latter functional equation defines a contraction mapping if the state space formula_29 is bounded, so there will be a unique solution formula_44 for any formula_31, and further the implicit function theorem holds, so formula_44 is also a smooth function of formula_31 for each formula_46. Estimation with nested fixed point algorithm. The contraction mapping above can be solved numerically for the fixed point formula_44 that yields choice probabilities formula_47 for any given value of formula_31. The log-likelihood function can then be formulated as formula_48 where formula_49 and formula_50 represent data on state variables (odometer readings) and decision (keep or replace) for formula_51 individual buses, each in formula_52 periods. The joint algorithm for solving the fixed point problem given a particular value of parameter formula_31 and maximizing the log-likelihood formula_53 with respect to formula_31 was named by John Rust "nested fixed point algorithm" (NFXP). Rust's implementation of the nested fixed point algorithm is highly optimized for this problem, using Newton–Kantorovich iterations to calculate formula_47 and quasi-Newton methods, such as the Berndt–Hall–Hall–Hausman algorithm, for likelihood maximization. Estimation with MPEC. In the nested fixed point algorithm, formula_47 is recalculated for each guess of the parameters "θ". The MPEC method instead solves the constrained optimization problem: formula_54 This method is faster to compute than non-optimized implementations of the nested fixed point algorithm, and takes about as long as highly optimized implementations. Estimation with non-solution methods. The conditional choice probabilities method of Hotz and Miller can be applied in this setting. Hotz, Miller, Sanders, and Smith proposed a computationally simpler version of the method, and tested it on a study of the bus engine replacement problem. The method works by estimating conditional choice probabilities using simulation, then backing out the implied differences in value functions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
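The inner fixed-point computation of the bus engine model can be sketched as follows; the mileage grid, cost function, transition probabilities and parameter values are illustrative placeholders rather than Rust's data or implementation.

```python
# Sketch of the nested fixed point's inner loop for the bus-engine model:
# successive approximations of EV on a discretized mileage grid, following the
# functional equation for EV given above. All primitives below are invented.
import numpy as np

n_x, beta, RC, theta1 = 90, 0.975, 10.0, 0.3
x_grid = np.arange(n_x)                       # discretized mileage states
cost = theta1 * x_grid / 10.0                 # c(x, theta): linear in mileage

# Monthly mileage increase of 0, 1 or 2 grid cells with fixed probabilities.
P = np.zeros((n_x, n_x))
for x in range(n_x):
    for jump, pr in zip((0, 1, 2), (0.35, 0.45, 0.20)):
        P[x, min(x + jump, n_x - 1)] += pr

u_keep = -cost                                # flow utilities (shocks excluded)
u_replace = -RC - cost[0]

EV = np.zeros(n_x)
for _ in range(2000):                         # contraction mapping iterations
    inclusive = np.log(np.exp(u_keep + beta * EV) +
                       np.exp(u_replace + beta * EV[0]))
    EV_new = P @ inclusive
    done = np.max(np.abs(EV_new - EV)) < 1e-10
    EV = EV_new
    if done:
        break

# Logit probability of keeping the engine at a few mileage states.
p_keep = 1.0 / (1.0 + np.exp((u_replace + beta * EV[0]) - (u_keep + beta * EV)))
print(p_keep[[0, 30, 60, 89]].round(3))
```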
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\nV\\left(x_{n0}\\right)=\\max_{\\left\\{d_{nt}\\right\\}_{t=1}^T} \\mathbb{E} \\left(\\sum_{t^{\\prime}=t}^T \\sum_{i=1}^J \\beta^{t'-t} \\left(d_{nt}=i\\right)U_{nit} \\left(x_{nt}, \\varepsilon_{nit}\\right)\\right),\n" }, { "math_id": 2, "text": "x_{nt}" }, { "math_id": 3, "text": "x_{n0}" }, { "math_id": 4, "text": "d_{nt}" }, { "math_id": 5, "text": "J" }, { "math_id": 6, "text": "\\beta \\in \\left(0,1\\right)" }, { "math_id": 7, "text": "U_{nit}" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "t" }, { "math_id": 10, "text": "\\varepsilon_{nit}" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\mathbb{E}\\left(\\cdot\\right)" }, { "math_id": 13, "text": "\\begin{alignat}{5}\nU_{nit}\\left(x_{nt},\\varepsilon_{nit}\\right) &&\\; = \\;&& u_{nit} &&\\; + \\;&& \\varepsilon_{nit} \\\\\n &&\\; = \\;&& X_{nt}\\alpha_{i} &&\\; + \\;&& \\varepsilon_{nit}\n\\end{alignat}" }, { "math_id": 14, "text": "V_{nt}(x_{nt})" }, { "math_id": 15, "text": "\\varepsilon_{nt}" }, { "math_id": 16, "text": "\nV_{nt}(x_{nt}) = \\mathbb{E} \\max_i \\left\\{ u_{nit}(x_{nt}) + \\varepsilon_{nit} + \\beta \\int_{x_{t+1}} V_{nt+1} (x_{nt+1}) \\, dF\\left(x_{t+1} \\mid x_t \\right) \\right\\}\n" }, { "math_id": 17, "text": "\\mathbb{E}" }, { "math_id": 18, "text": "\\varepsilon" }, { "math_id": 19, "text": "dF\\left(x_{t+1} \\mid x_t \\right)" }, { "math_id": 20, "text": "x_{t+1}" }, { "math_id": 21, "text": "x_{t}" }, { "math_id": 22, "text": "\nV_{nt}(x_{nt}) = \\mathbb{E} \\max_i \\left\\{ v_{nit}(x_{nt}) + \\varepsilon_{nit} \\right\\}\n" }, { "math_id": 23, "text": "v_{nit}" }, { "math_id": 24, "text": "\nv_{nit}(x_{nt}) = u_{nit}\\left(x_{nt}\\right) + \\beta \\int_{x_{t+1}} \\mathbb{E} \\max_{j} \\left\\{ v_{njt+1}(x_{nt+1}) + \\varepsilon_{njt+1} \\right\\} \\, dF(x_{t+1} \\mid x_t)\n" }, { "math_id": 25, "text": "\\varepsilon_{njt+1}" }, { "math_id": 26, "text": "x_{t-1}" }, { "math_id": 27, "text": "x_{t-2}" }, { "math_id": 28, "text": "P_{nit} = \\frac{\\exp(v_{nit})}{\\sum_{j=1}^J \\exp(v_{njt})}" }, { "math_id": 29, "text": "x_t" }, { "math_id": 30, "text": "c(x_t,\\theta)" }, { "math_id": 31, "text": "\\theta" }, { "math_id": 32, "text": "RC" }, { "math_id": 33, "text": "\\beta" }, { "math_id": 34, "text": "\nU(x_t,\\xi_t,d,\\theta)= \n\\begin{cases}\n-c(x_t,\\theta) + \\xi_{t,\\text{keep}}, & \\\\\n-RC-c(0,\\theta) + \\xi_{t,\\text{replace}}, &\n\\end{cases}\n=\nu(x_t,d,\\theta) +\n\\begin{cases}\n\\xi_{t,\\text{keep}}, & \\textrm{if }\\;\\; d=\\text{keep}, \\\\\n\\xi_{t,\\text{replace}}, & \\textrm{if }\\;\\; d=\\text{replace},\n\\end{cases}\n" }, { "math_id": 35, "text": "d" }, { "math_id": 36, "text": "\\xi_{t,\\text{keep}}" }, { "math_id": 37, "text": "\\xi_{t,\\text{replace}}" }, { "math_id": 38, "text": "\\xi_{t,\\bullet}" }, { "math_id": 39, "text": "\\xi_{t-1,\\bullet} " }, { "math_id": 40, "text": "\nV(x,\\xi,\\theta) = \\max_{d=\\text{keep},\\text{replace}} \\left\\{ u(x,d,\\theta)+\\xi_d + \\iint V(x',\\xi',\\theta) q(d\\xi'\\mid x',\\theta) p(dx'\\mid x,d,\\theta) \\right\\}\n" }, { "math_id": 41, "text": "p(dx'\\mid x,d,\\theta)" }, { "math_id": 42, "text": "q(d\\xi'\\mid x',\\theta)" }, { "math_id": 43, "text": "\nP(d\\mid x,\\theta) = \\frac{ \\exp\\{ u(x,d,\\theta)+\\beta EV(x,d,\\theta)\\}}{\\sum_{d' \\in D(x)} \n\\exp\\{ u(x,d',\\theta)+\\beta EV(x,d',\\theta)\\} }\n" }, { "math_id": 44, "text": "EV(x,d,\\theta)" }, { "math_id": 45, "text": "\nEV(x,d,\\theta)= \\int \\left[ \\log\\left( 
\\sum_{d=\\text{keep},\\text{replace}} \\exp\\{u(x,d',\\theta)+\\beta EV(x',d',\\theta)\\}\\right) \\right] p(x'\\mid x,d,\\theta).\n" }, { "math_id": 46, "text": "(x,d)" }, { "math_id": 47, "text": "P(d\\mid x,\\theta)" }, { "math_id": 48, "text": "\nL(\\theta) = \\sum_{i=1}^N \\sum_{t=1}^{T_i} \\log(P(d_{it}\\mid x_{it},\\theta))+\\log(p(x_{it}\\mid x_{it-1},d_{it-1},\\theta)),\n" }, { "math_id": 49, "text": "x_{i,t}" }, { "math_id": 50, "text": "d_{i,t}" }, { "math_id": 51, "text": "i=1,\\dots,N" }, { "math_id": 52, "text": "t=1,\\dots,T_i" }, { "math_id": 53, "text": "L(\\theta)" }, { "math_id": 54, "text": "\n\\begin{align}\n\\max & \\qquad L(\\theta) & \\\\\n\\text{subject to} & \\qquad EV(x,d,\\theta)= \\int \\left[ \\log\\left( \\sum_{d=\\text{keep},\\text{replace}} \\exp\\{ u(x,d',\\theta) + \\beta EV(x',d',\\theta)\\}\\right) \\right] p(x'\\mid x,d,\\theta)\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=60827351
60828
Lepton
Class of elementary particles In particle physics, a lepton is an elementary particle of half-integer spin (spin ) that does not undergo strong interactions. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons or muons), including the electron, muon, and tauon, and neutral leptons, better known as neutrinos. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron. There are six types of leptons, known as "flavours", grouped in three "generations". The first-generation leptons, also called "electronic leptons", comprise the electron () and the electron neutrino (); the second are the "muonic leptons", comprising the muon () and the muon neutrino (); and the third are the "tauonic leptons", comprising the tau () and the tau neutrino (). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons and neutrinos through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high-energy collisions (such as those involving cosmic rays and those carried out in particle accelerators). Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks, however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, the weak interaction, and to electromagnetism, of which the latter is proportional to charge, and is thus zero for the electrically neutral neutrinos. For every lepton flavor, there is a corresponding type of antiparticle, known as an antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. According to certain theories, neutrinos may be their own antiparticle. It is not currently known whether this is the case. The first charged lepton, the electron, was theorized in the mid-19th century by several scientists and was discovered in 1897 by J. J. Thomson. The next lepton to be observed was the muon, discovered by Carl D. Anderson in 1936, which was classified as a meson at the time. After investigation, it was realized that the muon did not have the expected properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particles to be proposed. The first neutrino, the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay. It was first observed in the Cowan–Reines neutrino experiment conducted by Clyde Cowan and Frederick Reines in 1956. The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz, and Jack Steinberger, and the tau discovered between 1974 and 1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory. The tau neutrino remained elusive until July 2000, when the DONUT collaboration from Fermilab announced its discovery. Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons. 
Exotic atoms with muons and taus instead of electrons can also be synthesized, as well as lepton–antilepton particles such as positronium. Etymology. The name "lepton" comes from the Greek "leptós", "fine, small, thin" (neuter nominative/accusative singular form: λεπτόν "leptón"); the earliest attested form of the word is the Mycenaean Greek , "re-po-to", written in Linear B syllabic script. "Lepton" was first used by physicist Léon Rosenfeld in 1948: Following a suggestion of Prof. C. Møller, I adopt—as a pendant to "nucleon"—the denomination "lepton" (from λεπτός, small, thin, delicate) to denote a particle of small mass. Rosenfeld chose the name as the common name for electrons and (then hypothesized) neutrinos. Additionally, the muon, initially classified as a meson, was reclassified as a lepton in the 1950s. The masses of those particles are small compared to nucleons—the mass of an electron () and the mass of a muon (with a value of ) are fractions of the mass of the "heavy" proton (), and the mass of a neutrino is nearly zero. However, the mass of the tau (discovered in the mid-1970s) () is nearly twice that of the proton and ‍ times that of the electron. History. The first lepton identified was the electron, discovered by J.J. Thomson and his team of British physicists in 1897. Then in 1930, Wolfgang Pauli postulated the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay. Pauli theorized that an undetected particle was carrying away the difference between the energy, momentum, and angular momentum of the initial and observed final particles. The electron neutrino was simply called the neutrino, as it was not yet known that neutrinos came in different flavours (or different "generations"). Nearly 40 years after the discovery of the electron, the muon was discovered by Carl D. Anderson in 1936. Due to its mass, it was initially categorized as a meson rather than a lepton. It later became clear that the muon was much more similar to the electron than to mesons, as muons do not undergo the strong interaction, and thus the muon was reclassified: electrons, muons, and the (electron) neutrino were grouped into a new group of particles—the leptons. In 1962, Leon M. Lederman, Melvin Schwartz, and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino, which earned them the 1988 Nobel Prize, although by then the different flavours of neutrino had already been theorized. The tau was first detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his colleagues at the SLAC LBL group. Like the electron and the muon, it too was expected to have an associated neutrino. The first evidence for tau neutrinos came from the observation of "missing" energy and momentum in tau decay, analogous to the "missing" energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab, making it the second-to-latest particle of the Standard Model to have been directly observed, with Higgs boson being discovered in 2012. Although all present data is consistent with three generations of leptons, some particle physicists are searching for a fourth generation. The current lower limit on the mass of such a fourth charged lepton is , while its associated neutrino would have a mass of at least . Properties. 
Spin and chirality. Leptons are spin  particles. The spin-statistics theorem thus implies that they are fermions and thus that they are subject to the Pauli exclusion principle: no two leptons of the same species can be in the same state at the same time. Furthermore, it means that a lepton can have only two possible spin states, namely up or down. A closely related property is chirality, which in turn is closely related to a more easily visualized property called helicity. The helicity of a particle is the direction of its spin relative to its momentum; particles with spin in the same direction as their momentum are called "right-handed" and they are otherwise called "left-handed". When a particle is massless, the direction of its momentum relative to its spin is the same in every reference frame, whereas for massive particles it is possible to 'overtake' the particle by choosing a faster-moving reference frame; in the faster frame, the helicity is reversed. Chirality is a technical property, defined through transformation behaviour under the Poincaré group, that does not change with reference frame. It is contrived to agree with helicity for massless particles, and is still well defined for particles with mass. In many quantum field theories, such as quantum electrodynamics and quantum chromodynamics, left- and right-handed fermions are identical. However, the Standard Model's weak interaction treats left-handed and right-handed fermions differently: only left-handed fermions (and right-handed anti-fermions) participate in the weak interaction. This is an example of parity violation explicitly written into the model. In the literature, left-handed fields are often denoted by a capital subscript (e.g. the normal electron e) and right-handed fields are denoted by a capital subscript (e.g. a positron e). Right-handed neutrinos and left-handed anti-neutrinos have no possible interaction with other particles (see "Sterile neutrino") and so are not a functional part of the Standard Model, although their exclusion is not a strict requirement; they are sometimes listed in particle tables to emphasize that they would have no active role if included in the model. Even though electrically charged right-handed particles (electron, muon, or tau) do not engage in the weak interaction specifically, they can still interact electrically, and hence still participate in the combined electroweak force, although with different strengths ("Y"W). Electromagnetic interaction. One of the most prominent properties of leptons is their electric charge, Q. The electric charge determines the strength of their electromagnetic interactions. It determines the strength of the electric field generated by the particle (see Coulomb's law) and how strongly the particle reacts to an external electric or magnetic field (see Lorentz force). Each generation contains one lepton with "Q" = −1 "e" and one lepton with zero electric charge. The lepton with electric charge is commonly simply referred to as a "charged lepton" while a neutral lepton is called a "neutrino". For example, the first generation consists of the electron with a negative electric charge and the electrically neutral electron neutrino . In the language of quantum field theory, the electromagnetic interaction of the charged leptons is expressed by the fact that the particles interact with the quantum of the electromagnetic field, the photon. The Feynman diagram of the electron–photon interaction is shown on the right. 
Because leptons possess an intrinsic rotation in the form of their spin, charged leptons generate a magnetic field. The size of their magnetic dipole moment μ is given by formula_0 where m is the mass of the lepton and g is the so-called "g factor" for the lepton. First-order quantum mechanical approximation predicts that the g factor is 2 for all leptons. However, higher-order quantum effects caused by loops in Feynman diagrams introduce corrections to this value. These corrections, referred to as the "anomalous magnetic dipole moment", are very sensitive to the details of a quantum field theory model, and thus provide the opportunity for precision tests of the Standard Model. The theoretical and measured values for the "electron" anomalous magnetic dipole moment are within agreement within eight significant figures. The results for the "muon", however, are problematic, hinting at a small, persistent discrepancy between the Standard Model and experiment. Weak interaction. In the Standard Model, the left-handed charged lepton and the left-handed neutrino are arranged in doublet that transforms in the spinor representation () of the weak isospin SU(2) gauge symmetry. This means that these particles are eigenstates of the isospin projection "T"3 with eigenvalues and respectively. In the meantime, the right-handed charged lepton transforms as a weak isospin scalar () and thus does not participate in the weak interaction, while there is no evidence that a right-handed neutrino exists at all. The Higgs mechanism recombines the gauge fields of the weak isospin SU(2) and the weak hypercharge U(1) symmetries to three massive vector bosons (, , ) mediating the weak interaction, and one massless vector boson, the photon (γ), responsible for the electromagnetic interaction. The electric charge Q can be calculated from the isospin projection T3 and weak hypercharge "Y"W through the Gell-Mann–Nishijima formula, To recover the observed electric charges for all particles, the left-handed weak isospin doublet (νeL, e) must thus have "Y"W −1, while the right-handed isospin scalar e must have . The interaction of the leptons with the massive weak interaction vector bosons is shown in the figure on the right. Mass. In the Standard Model, each lepton starts out with no intrinsic mass. The charged leptons (i.e. the electron, muon, and tau) obtain an effective mass through interaction with the Higgs field, but the neutrinos remain massless. For technical reasons, the masslessness of the neutrinos implies that there is no mixing of the different generations of charged leptons as there is for quarks. The zero mass of neutrino is in close agreement with current direct experimental observations of the mass. However, it is known from indirect experiments—most prominently from observed neutrino oscillations—that neutrinos have to have a nonzero mass, probably less than . This implies the existence of physics beyond the Standard Model. The currently most favoured extension is the so-called seesaw mechanism, which would explain both why the left-handed neutrinos are so light compared to the corresponding charged leptons, and why we have not yet seen any right-handed neutrinos. Lepton flavor quantum numbers. The members of each generation's weak isospin doublet are assigned leptonic numbers that are conserved under the Standard Model. 
Electrons and electron neutrinos have an "electronic number" of , while muons and muon neutrinos have a "muonic number" of , while tau particles and tau neutrinos have a "tauonic number" of . The antileptons have their respective generation's leptonic numbers of −1. Conservation of the leptonic numbers means that the number of leptons of the same type remains the same, when particles interact. This implies that leptons and antileptons must be created in pairs of a single generation. For example, the following processes are allowed under conservation of leptonic numbers:  →   + , but none of these:  →   + , However, neutrino oscillations are known to violate the conservation of the individual leptonic numbers. Such a violation is considered to be smoking gun evidence for physics beyond the Standard Model. A much stronger conservation law is the conservation of the total number of leptons (L with "no" subscript), conserved even in the case of neutrino oscillations, but even it is still violated by a tiny amount by the chiral anomaly. Universality. The coupling of leptons to all types of gauge boson are flavour-independent: The interaction between leptons and a gauge boson measures the same for each lepton. This property is called lepton universality and has been tested in measurements of the muon and tau lifetimes and of boson partial decay widths, particularly at the Stanford Linear Collider (SLC) and Large Electron–Positron Collider (LEP) experiments.138 The decay rate () of muons through the process → + + is approximately given by an expression of the form (see muon decay for more details) formula_1 where K2 is some constant, and GF is the Fermi coupling constant. The decay rate of tau particles through the process → + + is given by an expression of the same form formula_2 where K3 is some other constant. Muon–tauon universality implies that . On the other hand, electron–muon universality implies formula_3 The branching ratios for the electronic mode (17.82%) and muonic (17.39%) mode of tau decay are not equal due to the mass difference of the final state leptons. Universality also accounts for the ratio of muon and tau lifetimes. The lifetime formula_4 of a lepton formula_5 (with formula_5 = "μ" or "τ") is related to the decay rate by formula_6, where formula_7 denotes the branching ratios and formula_8 denotes the resonance width of the process formula_9 with x and y replaced by two different particles from "e" or "μ" or "τ". The ratio of tau and muon lifetime is thus given by formula_10 Using values from the 2008 "Review of Particle Physics" for the branching ratios of the muon and tau yields a lifetime ratio of ~ , comparable to the measured lifetime ratio of ~ . The difference is due to K2 and K3 not "actually" being constants: They depend slightly on the mass of leptons involved. Recent tests of lepton universality in meson decays, performed by the LHCb, BaBar, and Belle experiments, have shown consistent deviations from the Standard Model predictions. However the combined statistical and systematic significance is not yet high enough to claim an observation of new physics. In July 2021 results on lepton universality have been published testing W decays, previous measurements by the LEP had given a slight imbalance but the new measurement by the ATLAS collaboration have twice the precision and give a ratio of formula_11, which agrees with the standard-model prediction of unity. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
<templatestyles src="Reflist/styles.css" />
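As a worked check of the lifetime-ratio expression in the Universality section above, the following sketch plugs in rounded particle-data values; the masses, branching fraction and lifetimes are treated as approximate inputs rather than exact constants.

```python
# Rough check of  tau_tau / tau_mu = B(tau -> e nu nu) / B(mu -> e nu nu) * (m_mu / m_tau)^5
# using rounded particle-data values (approximate inputs only).
m_mu, m_tau = 105.658, 1776.86            # masses in MeV/c^2
B_tau_e, B_mu_e = 0.178, 1.0              # branching fractions to the electronic mode

predicted_ratio = (B_tau_e / B_mu_e) * (m_mu / m_tau) ** 5
measured_ratio = 2.903e-13 / 2.197e-6     # tau and muon mean lifetimes in seconds

print(f"predicted tau/mu lifetime ratio: {predicted_ratio:.3e}")
print(f"measured  tau/mu lifetime ratio: {measured_ratio:.3e}")
```

Both numbers come out near 1.3 × 10⁻⁷, illustrating the agreement described in the text.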
[ { "math_id": 0, "text": "\\mu = g\\, \\frac{\\; Q \\hbar \\;}{4 m} \\ ," }, { "math_id": 1, "text": "\\Gamma \\left ( \\mu^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\mu \\right ) \\approx K_2\\, G_\\text{F}^2\\, m_\\mu^5 ~," }, { "math_id": 2, "text": "\\Gamma \\left ( \\tau^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\tau \\right ) \\approx K_3\\, G_\\text{F}^2\\, m_\\tau^5 ~," }, { "math_id": 3, "text": "0.9726 \\times \\Gamma \\left( \\tau^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\tau \\right) = \\Gamma \\left( \\tau^- \\rarr \\mu^- + \\bar{\\nu_\\mu} +\\nu_\\tau \\right) ~." }, { "math_id": 4, "text": "\\Tau_\\ell" }, { "math_id": 5, "text": "\\ell" }, { "math_id": 6, "text": "\\Tau_\\ell = \\frac{\\; \\mathcal{B} \\left( \\ell^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\ell \\right) \\; }{ \\Gamma \\left( \\ell^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\ell \\right)}\\," }, { "math_id": 7, "text": "\\; \\mathcal{B} (x \\rarr y) \\;" }, { "math_id": 8, "text": "\\;\\Gamma(x \\rarr y) \\;" }, { "math_id": 9, "text": "\\; x \\rarr y ~," }, { "math_id": 10, "text": "\\frac{\\, \\Tau_\\tau \\,}{\\Tau_\\mu} = \\frac{\\; \\mathcal{B} \\left( \\tau^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\tau \\right) \\;}{ \\mathcal{B} \\left( \\mu^- \\rarr e^- + \\bar{\\nu_e} +\\nu_\\mu \\right) }\\, \\left(\\frac{m_\\mu}{m_\\tau}\\right)^5 ~." }, { "math_id": 11, "text": "\\mathcal{B} (W\\rarr \\tau^-+\\nu_\\tau)/\\mathcal{B}( W\\rarr \\mu^-+\\nu_\\mu)=0.992\\pm0.013" } ]
https://en.wikipedia.org/wiki?curid=60828
6083511
Completely positive map
C*-algebra mapping preserving positive elements In mathematics, a positive map is a map between C*-algebras that sends positive elements to positive elements. A completely positive map is one that satisfies a stronger, more robust condition. Definition. Let formula_0 and formula_1 be C*-algebras. A linear map formula_2 is called a positive map if formula_3 maps positive elements to positive elements: formula_4. Any linear map formula_5 induces another map formula_6 in a natural way. If formula_7 is identified with the C*-algebra formula_8 of formula_9-matrices with entries in formula_0, then formula_10 acts as formula_11 The map formula_3 is called k-positive if formula_12 is a positive map, and completely positive if formula_3 is k-positive for all k. References. <templatestyles src="Reflist/styles.css" />
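A small numerical sketch of these definitions: the transpose map on 2×2 matrices is positive, yet applying the induced map id ⊗ T block-wise to a suitable positive 4×4 matrix produces a negative eigenvalue, so the transpose is not 2-positive and hence not completely positive. The test matrix below is the standard construction built from a maximally entangled vector.

```python
# The transpose map on 2x2 matrices is positive but not 2-positive: applying
# id (x) T block-wise to the positive semidefinite matrix sum_{ij} E_ij (x) E_ij
# yields a matrix with a negative eigenvalue.
import numpy as np

def E(i, j, n=2):
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# Positive 4x4 matrix whose 2x2 block (i, j) is E_ij; this equals twice the
# projector onto a maximally entangled vector, hence it is positive semidefinite.
A = np.block([[E(0, 0), E(0, 1)],
              [E(1, 0), E(1, 1)]])

# (id (x) T) acts by transposing each 2x2 block.
blocks = [[A[2*i:2*i+2, 2*j:2*j+2].T for j in range(2)] for i in range(2)]
A_partial_T = np.block(blocks)

print("eigenvalues of A:           ", np.linalg.eigvalsh(A).round(3))
print("eigenvalues of (id x T)(A): ", np.linalg.eigvalsh(A_partial_T).round(3))
```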
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\\phi: A\\to B" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "a\\geq 0 \\implies \\phi(a)\\geq 0" }, { "math_id": 5, "text": "\\phi:A\\to B" }, { "math_id": 6, "text": "\\textrm{id} \\otimes \\phi : \\mathbb{C}^{k \\times k} \\otimes A \\to \\mathbb{C}^{k \\times k} \\otimes B" }, { "math_id": 7, "text": "\\mathbb{C}^{k\\times k}\\otimes A" }, { "math_id": 8, "text": "A^{k\\times k}" }, { "math_id": 9, "text": "k\\times k" }, { "math_id": 10, "text": "\\textrm{id}\\otimes\\phi" }, { "math_id": 11, "text": "\n\\begin{pmatrix}\na_{11} & \\cdots & a_{1k} \\\\\n\\vdots & \\ddots & \\vdots \\\\\na_{k1} & \\cdots & a_{kk}\n\\end{pmatrix} \\mapsto \\begin{pmatrix}\n\\phi(a_{11}) & \\cdots & \\phi(a_{1k}) \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\phi(a_{k1}) & \\cdots & \\phi(a_{kk})\n\\end{pmatrix}.\n" }, { "math_id": 12, "text": "\\textrm{id}_{\\mathbb{C}^{k\\times k}} \\otimes \\phi" }, { "math_id": 13, "text": "a_1\\leq a_2\\implies \\phi(a_1)\\leq\\phi(a_2)" }, { "math_id": 14, "text": "a_1,a_2\\in A_{sa}" }, { "math_id": 15, "text": "-\\|a\\|_A 1_A \\leq a \\leq \\|a\\|_A 1_A" }, { "math_id": 16, "text": "a\\in A_{sa}" }, { "math_id": 17, "text": "\\|\\phi(1_A)\\|_B" }, { "math_id": 18, "text": "\\to\\mathbb{C}" }, { "math_id": 19, "text": "V:H_1\\to H_2" }, { "math_id": 20, "text": "L(H_1)\\to L(H_2), \\ A \\mapsto V A V^\\ast" }, { "math_id": 21, "text": "\\phi:A \\to \\mathbb{C}" }, { "math_id": 22, "text": "C(X)" }, { "math_id": 23, "text": "C(Y)" }, { "math_id": 24, "text": "X, Y" }, { "math_id": 25, "text": "C(X)\\to C(Y)" }, { "math_id": 26, "text": "\\mathbb{C}^{n \\times n}" }, { "math_id": 27, "text": "\\mathbb{C}^{2\\times 2} \\otimes \\mathbb{C}^{2\\times 2}" }, { "math_id": 28, "text": "\n\\begin{bmatrix}\n\\begin{pmatrix}1&0\\\\0&0\\end{pmatrix}&\n\\begin{pmatrix}0&1\\\\0&0\\end{pmatrix}\\\\\n\\begin{pmatrix}0&0\\\\1&0\\end{pmatrix}&\n\\begin{pmatrix}0&0\\\\0&1\\end{pmatrix}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 & 0 & 0 & 1 \\\\\n0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 1 \\\\\n\\end{bmatrix}.\n" }, { "math_id": 29, "text": "I_2 \\otimes T" }, { "math_id": 30, "text": "\n\\begin{bmatrix}\n\\begin{pmatrix}1&0\\\\0&0\\end{pmatrix}^T&\n\\begin{pmatrix}0&1\\\\0&0\\end{pmatrix}^T\\\\\n\\begin{pmatrix}0&0\\\\1&0\\end{pmatrix}^T&\n\\begin{pmatrix}0&0\\\\0&1\\end{pmatrix}^T\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{bmatrix} ,\n" }, { "math_id": 31, "text": "\\circ" } ]
https://en.wikipedia.org/wiki?curid=6083511
60835248
Tsai-Hill failure criterion
The Tsai–Hill failure criterion is one of the phenomenological material failure theories, widely used for anisotropic composite materials that have different strengths in tension and compression. The Tsai–Hill criterion predicts failure when the failure index in a laminate reaches 1. Tsai–Hill failure criterion in plane stress. The Tsai–Hill criterion is based on an energy theory with interactions between stresses. Ply rupture occurs when: formula_0 where: formula_1 is the allowable strength of the ply in the longitudinal direction (0° direction), formula_2 is the allowable strength of the ply in the transversal direction (90° direction), and formula_3 is the allowable in-plane shear strength of the ply between the longitudinal and the transversal directions. formula_4 The Tsai–Hill criterion is "interactive", i.e. the stresses in different directions are not decoupled and do affect the failure simultaneously. Furthermore, it is a failure-mode-independent criterion, as it does not predict the way in which the material will fail, as opposed to mode-dependent criteria such as the Hashin criterion or the Puck failure criterion. This can be important, as some types of failure can be more critical than others. References. <templatestyles src="Reflist/styles.css" />
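A minimal sketch of evaluating the criterion for a single ply under plane stress; the stress state and strength values below are placeholders rather than material data.

```python
# Tsai-Hill failure index for a single ply under plane stress (expression above):
# failure is predicted when the index reaches 1. All values are example numbers.
def tsai_hill_index(s11, s22, t12, X11, X22, S12):
    return (s11 / X11) ** 2 - (s11 * s22) / X11 ** 2 + (s22 / X22) ** 2 + (t12 / S12) ** 2

# Example: stresses in MPa against illustrative ply strengths.
index = tsai_hill_index(s11=600.0, s22=20.0, t12=30.0,
                        X11=1500.0, X22=40.0, S12=70.0)
print(f"Tsai-Hill failure index: {index:.2f} -> {'failure' if index >= 1 else 'no failure'}")
```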
[ { "math_id": 0, "text": "\n \\begin{align}\n \\left(\\cfrac{\\sigma_{11}}{X_{11}}\\right)^2 - \\left(\\cfrac{\\sigma_{11} \\sigma_{22}}{X_{11}^2}\\right) + \\left(\\cfrac{\\sigma_{22}}{X_{22}}\\right)^2 + \\left(\\cfrac{\\tau_{12}}{S_{12}}\\right)^2 \\geq 1\n \\end{align}\n" }, { "math_id": 1, "text": "\n \\begin{align}\n X_{11} \\end{align}" }, { "math_id": 2, "text": "\n\\begin{align}\n X_{22} \\end{align}" }, { "math_id": 3, "text": "\n\\begin{align}\n S_{12} \\end{align}" }, { "math_id": 4, "text": "\n" } ]
https://en.wikipedia.org/wiki?curid=60835248
6084009
Perceptual transparency
Perceptual transparency is the phenomenon of seeing one surface behind another. In everyday life, we often view objects through transparent surfaces. Physically transparent surfaces allow the transmission of a certain amount of light rays through them. Sometimes nearly all of the rays are transmitted across the surface without significant changes of direction or chromaticity, as in the case of air; sometimes only light at a certain wavelength is transmitted, as for coloured glass. Perceptually, the problem of transparency is much more challenging: both the light rays coming from the transparent surface and those coming from the object behind it reach the same retinal location, triggering a single sensorial process. The system somehow maps this information onto a perceptual representation of two different objects. Physical transparency was shown to be neither a sufficient nor a necessary condition for perceptual transparency. Fuchs (1923) showed that when a small portion of a transparent surface is observed, neither the colour of the transparent surface nor that of the background is perceived, but only the colour resulting from their fusion. Tudor-Hart (1928) showed it is not possible to perceive transparency in a totally homogeneous field. Metzger (1975) showed that patterns of opaque paper can induce the illusion of transparency, in the absence of physical transparency. In order to distinguish perceptual from physical transparency, the former has often been addressed as transparency illusion. Paradoxically, however, two models developed within a physical context have long dominated the research in the field of perceptual transparency: the episcotister model by Metelli (1970; 1974) and the filter model by Beck et al. (1984). Metelli's episcotister model and the luminance conditions for transparency. Although he was not the first author to study the phenomenon of transparency illusion, the Gestalt psychologist Metelli was probably the one who made the major contribution to the problem. Like his predecessors, Metelli approached the problem from a phenomenological rather than a physiological point of view. In other words, he did not investigate the physiological algorithms or the brain networks underlying transparency perception, but studied and classified the conditions under which a transparency illusion is generated. In doing so, Metelli marked an approach to the problem that would be followed by many scientists after him. The model is based on the idea that the perceptual colour scission following transparency is the opposite of colour fusion in a rotating episcotister, i.e. a rotating disk that alternates open and solid sectors. Metelli referred to colour fusion in a physical situation in which an episcotister rotates in front of an opaque background of reflectance A; the episcotister has an open sector of size t (a proportion of the total disk) and a solid sector of size (1-t) having reflectance r. The reflectance of the solid sectors and that of the background are fused by rotation to produce a virtual reflectance value formula_0: formula_1 that is, the weighted sum of the background reflectance and the episcotister solid-sector reflectance. An episcotister is not a transparent object; Beck et al. (1984) therefore proposed an alternative model, based on transparent filters, which includes the effects of repeated reflections between the transparent layer and the underlying surface. 
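A tiny numerical sketch of the fusion equation above, with invented reflectance values: the virtual reflectance produced by rotation is the t-weighted mixture of the background and solid-sector reflectances, and approaches the background reflectance as the open sector grows.

```python
# Colour fusion in Metelli's episcotister model: the virtual reflectance P is the
# weighted sum of the background reflectance A and the solid-sector reflectance r,
# with weight t given by the relative size of the open sector. Values are examples.
def fused_reflectance(t, A, r):
    return t * A + (1 - t) * r

A, r = 0.80, 0.20         # light background seen through a dark episcotister
for t in (0.25, 0.50, 0.75):
    print(f"open sector t = {t:.2f} -> virtual reflectance P = {fused_reflectance(t, A, r):.2f}")
```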
Both the episcotister model and the filter model, in their original formulation, were written in terms of reflectance values. A consequence is that their validity as physical models depends on the illumination conditions. However, both models can be rewritten in terms of luminance, as shown by Gerbino et al. (1990). Although physically correct in a number of situations, the filter model never gained a significant role in the prediction of perceptual transparency. In spite of being much more complicated than the episcotister model, it does not lead to significant improvements in predictions about the occurrence of the illusion. While Metelli's episcotister model has long remained the preferred framework for the study of luminance conditions in the transparency illusion, its validity as a theory of perception has been challenged by different studies. Beck et al. (1984) showed that only constraints (i) and (ii) imposed by the episcotister model are necessary for the illusion of transparency; when constraints (iii) and (iv) are not fulfilled, the illusion can still be experienced. They also argued that the degree of perceived transparency depends on lightness more than on reflectance. Masin and Fukuda (1993) proposed, as an alternative to conditions (i) and (ii) for transparency, the ordinal condition p ∈ (a, q) [or q ∈ (p, b)], which was shown to agree better than the episcotister model with transparency judgements performed by naïve subjects in a yes–no task (Masin 1997). Metelli's equations were extended to three-dimensional colour space by D'Zmura et al. (1997). According to the model, the transparency illusion would be generated by coherent convergence and translation in colour space. However, also in colour space, evidence was found that the perceptual appearance does not reflect the physical model. For instance, D'Zmura et al. (1997) showed that equiluminant convergence and translation in colour space can elicit an impression of transparency, even though neither an episcotister nor a physical filter can generate this stimulus configuration. Chen and D'Zmura (1998) showed deviations from the predictions of the convergence model when the transparent regions have complementary hues.
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "P=t*A +(1-t)*R " } ]
https://en.wikipedia.org/wiki?curid=6084009
60842493
FourQ
In cryptography, FourQ is an elliptic curve developed by Microsoft Research. It is designed for key agreement schemes (elliptic-curve Diffie–Hellman) and digital signatures (Schnorr), and offers about 128 bits of security. It is equipped with a reference implementation made by the authors of the original paper. The open-source implementation is called "FourQlib"; it runs on Windows and Linux and is available for x86, x64, and ARM. It is licensed under the MIT License and the source code is available on GitHub. Its name is derived from the four-dimensional Gallant–Lambert–Vanstone scalar multiplication, which allows high-performance calculations. The curve is defined over a two-dimensional extension of the prime field defined by the Mersenne prime formula_0. History. The curve was published in 2015 by Craig Costello and Patrick Longa from Microsoft Research on ePrint. The paper was presented at Asiacrypt 2015 in Auckland, New Zealand, and consequently a reference implementation was published on Microsoft's website. There were some efforts to standardize usage of the curve under IETF; these efforts were withdrawn in late 2017. Mathematical properties. The curve is defined by a twisted Edwards equation formula_1 where formula_2 is a non-square in formula_3 and formula_4 is the Mersenne prime formula_5. In order to avoid small subgroup attacks, all points are verified to lie in an "N"-torsion subgroup of the elliptic curve, where "N" is specified as a 246-bit prime dividing the order of the group. The curve is equipped with two nontrivial endomorphisms: formula_6, related to the formula_4-power Frobenius map, and formula_7, a low-degree, efficiently computable endomorphism (see complex multiplication). Cryptographic properties. Security. The currently best known discrete logarithm attack is the generic Pollard's rho algorithm, requiring about formula_8 group operations on average. Therefore, it typically belongs to the 128-bit security level. In order to prevent timing attacks, all group operations are done in constant time, i.e. without disclosing information about key material. Efficiency. Most cryptographic primitives, and most notably ECDH, require fast computation of scalar multiplication, i.e. formula_9 for a point formula_10 on the curve and an integer formula_11, which is usually thought of as distributed uniformly at random over formula_12. Since we work in a prime-order cyclic subgroup, one can write scalars formula_13 such that formula_14 and formula_15 for every point formula_10 in the "N"-torsion subgroup. Hence, for a given formula_11 we may write formula_16 If we find small formula_17, we may compute formula_9 quickly by utilizing the implied equation formula_18 The Babai rounding technique is used to find small formula_17. For FourQ it turns out that one can guarantee an efficiently computable solution with formula_19. Moreover, as the characteristic of the field is a Mersenne prime, modular reductions can be carried out efficiently. Both properties (the four-dimensional decomposition and the Mersenne prime characteristic), together with the use of fast multiplication formulae (extended twisted Edwards coordinates), make FourQ the currently fastest elliptic curve for the 128-bit security level. Uses. FourQ is implemented in the cryptographic library CIRCL, published by Cloudflare. References. <templatestyles src="Reflist/styles.css" />
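The base-field arithmetic benefits from the Mersenne form of the prime, since reduction modulo 2^127 − 1 needs only shifts and additions. The snippet below is a generic illustration of this trick on Python integers with arbitrary example operands; it is not code taken from FourQlib, and the curve arithmetic over the quadratic extension field is not shown.

```python
# Reduction modulo the Mersenne prime p = 2^127 - 1 using only shifts and adds:
# since 2^127 is congruent to 1 (mod p), the high part of a product can simply
# be folded back into the low part.
P127 = (1 << 127) - 1

def reduce_mersenne(x):
    """Return x mod (2^127 - 1) without performing a division."""
    while x >> 127:
        x = (x & P127) + (x >> 127)
    return 0 if x == P127 else x

a = 0x6fcfdeb6a838429f6f41c42c30f8752e   # arbitrary 127-bit example operands
b = 0x3a5d0f0fb92857cdd40cbf31ac50e294
assert reduce_mersenne(a * b) == (a * b) % P127
print(hex(reduce_mersenne(a * b)))
```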
[ { "math_id": 0, "text": "2^{127} - 1" }, { "math_id": 1, "text": "-x^2 + y^2 = 1 + d x^2 y^2" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "\\mathbb{F}_{p^2}" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "2^{127}-1" }, { "math_id": 6, "text": "\\psi" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "2^{122.5}" }, { "math_id": 9, "text": "[k]P" }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "\\{0, \\ldots, N-1\\}" }, { "math_id": 13, "text": "\\lambda_\\psi, \\lambda_\\phi" }, { "math_id": 14, "text": "\\psi(P) = [\\lambda_\\psi]P" }, { "math_id": 15, "text": "\\phi(P) = [\\lambda_\\phi]P" }, { "math_id": 16, "text": "k = a_1 + a_2 \\lambda_\\phi + a_3 \\lambda_\\psi + a_4 \\lambda_\\phi\\lambda_\\psi \\pmod N" }, { "math_id": 17, "text": "a_i" }, { "math_id": 18, "text": "[k]P = [a_1]P + [a_2] \\phi(P) + [a_3] \\psi(P) + [a_4] \\phi(\\psi(P))" }, { "math_id": 19, "text": "a_i < 2^{64}" } ]
https://en.wikipedia.org/wiki?curid=60842493
60842845
Enumeration algorithm
In computer science, an enumeration algorithm is an algorithm that enumerates the answers to a computational problem. Formally, such an algorithm applies to problems that take an input and produce a list of solutions, similarly to function problems. For each input, the enumeration algorithm must produce the list of all solutions, without duplicates, and then halt. The performance of an enumeration algorithm is measured in terms of the time required to produce the solutions, either in terms of the total time required to produce all solutions, or in terms of the maximal delay between two consecutive solutions together with a preprocessing time, counted as the time before outputting the first solution. This complexity can be expressed in terms of the size of the input, the size of each individual output, or the total size of the set of all outputs, similarly to what is done with output-sensitive algorithms. Formal definitions. An enumeration problem formula_0 is defined as a relation formula_1 over strings of an arbitrary alphabet formula_2: formula_3 An algorithm solves formula_0 if for every input formula_4 the algorithm produces the (possibly infinite) sequence formula_5 such that formula_5 has no duplicates and formula_6 if and only if formula_7. The algorithm should halt if the sequence formula_5 is finite. Common complexity classes. Enumeration problems have been studied in the context of computational complexity theory, and several complexity classes have been introduced for such problems. A very general such class is EnumP, the class of problems for which the correctness of a possible output can be checked in polynomial time in the input and output. Formally, for such a problem, there must exist an algorithm A which takes as input the problem input "x" and the candidate output "y", and solves the decision problem of whether "y" is a correct output for the input "x", in polynomial time in "x" and "y". For instance, this class contains all problems that amount to enumerating the witnesses of a problem in the class NP. Other classes that have been defined include the following. In the case of problems that are also in EnumP, these problems are ordered from least to most specific: Connection to computability theory. The notion of enumeration algorithms is also used in the field of computability theory to define some high complexity classes such as RE, the class of all recursively enumerable problems. This is the class of sets for which there exists an enumeration algorithm that will produce all elements of the set: the algorithm may run forever if the set is infinite, but each solution must be produced by the algorithm after a finite time.
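As a toy illustration of these definitions, the following Python generator enumerates every subset of its input (playing the role of the solutions), without duplicates, and then halts. The example and its names are invented here for illustration: the total number of outputs is exponential in the input size, while the delay between two consecutive outputs (and the preprocessing before the first one) stays polynomial.

from itertools import combinations

def enumerate_subsets(items):
    # outputs each subset exactly once, then halts
    for size in range(len(items) + 1):
        for subset in combinations(items, size):
            yield subset

for solution in enumerate_subsets(["a", "b", "c"]):
    print(solution)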
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "R \\subseteq \\Sigma^* \\times \\Sigma^*" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "z\\in y" }, { "math_id": 7, "text": "(x,z)\\in R" } ]
https://en.wikipedia.org/wiki?curid=60842845
60847281
Jarman–Bell principle
Ecological concept linking an herbivore's diet and size The Jarman–Bell principle is a concept in ecology stating that the quality of the food in a herbivore's diet decreases as the size of the herbivore increases, but that the amount of such food eaten increases to counteract the low-quality foods. It operates by observing the allometric (non-linear scaling) properties of herbivores. The principle was coined by P. J. Jarman (1968) and R. H. V. Bell (1971). Large herbivores can subsist on low-quality food. Their gut is larger than that of smaller herbivores. The increased size allows for better digestive efficiency, and thus allows viable consumption of low-quality food. Small herbivores require more energy per unit of body mass compared to large herbivores. A smaller size, and thus a smaller gut and lower digestive efficiency, implies that these animals need to select high-quality food to function. Their small gut limits the amount of space for food, so they eat small quantities of a high-quality diet. Some animals practice coprophagy, where they ingest fecal matter to recycle untapped/undigested nutrients. However, the Jarman–Bell principle is not without exception. Small herbivorous members of mammals, birds and reptiles were observed to be inconsistent with the trend of small body mass being linked with high-quality food. There have also been disputes over the mechanism behind the Jarman–Bell principle, with some arguing that a larger body size does not increase digestive efficiency. The implication that larger herbivores can subsist on poorer-quality food than smaller herbivores means that the Jarman–Bell principle may contribute evidence for Cope's rule. Furthermore, the Jarman–Bell principle is also important in providing evidence for the ecological framework of "resource partitioning, competition, habitat use and species packing in environments" and has been applied in several studies. Links with allometry. Allometry refers to the non-linear scaling of one variable with respect to another. The relationship between such variables is expressed as a power law, where the exponent is a value not equal to 1 (thereby implying a non-linear relationship). Allometric relationships can be mathematically expressed as follows: formula_0 (BM = body mass) Kleiber's law. Kleiber's law describes how larger animals use less energy per unit of body mass than small animals. Max Kleiber developed a formula that estimates this phenomenon (the exact values are not always consistent). formula_1 where MR = metabolic rate (kcal/day) and W = weight/body mass (kg). Gut capacity scales linearly with body size (gut capacity = BM^1.0) but maintenance metabolism (energy required to maintain homeostasis) scales fractionally ( = BM^0.75). Both of these factors are linked through the MR/GC ratio (metabolic requirement to gut capacity). As body mass increases, this ratio decreases: large bodies display a lower MR/GC ratio relative to small bodies. That is, smaller herbivores require more metabolic energy per unit of body mass than large ones. Retention time. The allometric scaling of retention time (the time that food remains inside the digestive system) with respect to body mass is: formula_2 where Tr = retention time (hours), D = digestibility of the food, and W = weight/body mass (kg). This formula was refined from a previous iteration because the previous formula took into account the entire gut, rather than focusing on the fermentation site where cellulose (the fibrous substance) is broken down. Explanation. Food intake.
The energy gained from food depends on the rate of digestion, retention time and the digestible content of the food. In herbivores, food intake is achieved through three main steps: ingestion, digestion, and absorption. Plant-based food is hard to digest and is broken down with the help of symbiotic microbes in the gut of the herbivore. When food is passed through the digestive system (including multiple stomach chambers), it breaks down further through symbiotic microbes at fermentation site(s). There exist different types of stomach plans: In order, the stomach plans represent the general level of efficiency when digesting plant-based food; ruminants perform better than pseudoruminants and monogastrics. The development of the rumen not only allows a site for fermentation but also slows the passage of food (increasing retention time). However, a body mass ranging from 600 to 1200 kg is enough to cause sufficient digestion regardless of stomach plan. Link to the Jarman–Bell principle. The Jarman–Bell Principle implies that the food quality a herbivore consumes is inversely proportional to the size of the herbivore, but the quantity of such food is directly proportional to it. The principle relies on the allometric (non-linear) scaling of size and energy requirement. The metabolic rate per unit of body mass of large animals is low enough for them to subsist on a consistent flow of low-quality food. However, in small animals, the rate is higher and they cannot draw sufficient energy from low-quality food to live on. The length of the digestive tract scales proportionally to the size of the animal. A longer digestive tract allows for more retention time and hence increases the efficiency of digestion and absorption. Larger body mass. Poorer-quality food selects for animals to grow larger in size, and hence to develop an increased digestive efficiency compared to smaller animals. Larger animals have a larger/longer digestive tract, allowing greater quantities of low-quality food to be processed (longer retention time). Although herbivores can consume high-quality food, the relative abundance of low-quality food and other ecological factors such as resource competition and predator presence push the foraging behavior of the animal towards primarily consuming low-quality food. Other factors include the size of the mouth constraining the selective ability of foraging, and the absolute energy large animals require compared to small ones (though smaller animals require more energy per unit of body mass). Smaller body mass. Smaller animals have a limited digestive tract relative to larger animals. As such, they have a shorter retention time of food and cannot digest and absorb food to the same degree as larger animals. To counteract this disadvantage, high-quality food is selected, with quantity being limited by the animal's gut size. Another method to counteract this is to practice coprophagy, where re-ingestion of fecal matter recycles untapped/undigested nutrients. However, there are also reports of larger animals, including primates and horses (under dietary restrictions), practicing coprophagy. Through the extra flexibility of subsisting on low-quality food, the Jarman–Bell Principle suggests an evolutionary advantage of larger animals and hence provides evidence for Cope's rule. Exceptions. The Jarman–Bell Principle has some notable exceptions. Small herbivorous members of the classes Mammalia, Aves and Reptilia were observed to be inconsistent with the trend of small body mass being linked with high-quality food.
This discrepancy could be due to ecological factors which apply pressure and encourage an adaptive approach to the given environment, rather than taking on an optimal form of digestive physiology. Small rodents subjected to a low-quality diet were observed to increase food intake and increase the size of their cecum and intestine, counteracting their low-quality diet by allowing viable consumption of such food and hence contradicting the link between diet quality and body size. Refuting the mechanism of the Jarman–Bell principle. Although the pattern of low food quality and body size appears consistent across multiple species, the explanation behind the principle (bigger size allows better digestion via more retention time) has been disputed. M. Clauss et al. argue that retention time is not proportional to body mass above 500 grams. That is, smaller species (that are above 500 grams but not too large) have been observed to rival larger species in their mean retention time. Retention time being proportional to food intake was only observed in non-ruminant animals, not ruminants. Clauss et al. suggest that this is due to the diverse adaptations that support the rumen, such that the digestive efficiency of ruminants remains consistent and independent of body size and food intake. Applications and examples. In addition to providing evidence for ecological frameworks such as "resource partitioning, competition, habitat use and species packing in environment" and Cope's rule, the Jarman–Bell Principle has been applied to model primate behaviours and explain sexual segregation in ungulates. Sexual segregation in polygynous ungulates. Sexual segregation in Soay sheep ("Ovis aries") has been observed. Soay sheep are polygynous in nature; males have multiple partners (as opposed to polygynandry). Two main hypotheses have been proposed to explain the observed phenomenon. Sexual dimorphism-body size hypothesis. Male Soay sheep are morphologically larger than females. Larger overall size implies larger gut size, and hence greater digestive efficiency. As males are larger they can subsist on lower-quality food. This leads to resource partitioning between males and females and thus sexual segregation on an intraspecies level. Activity budget hypothesis. The time taken to process food depends on the food quality; poorer/high-fibre food requires more time to process and ruminate. This extra time influences behaviour and, over a group of ungulates, leads to segregation via food quality. Since males are larger and can handle low-quality food, their feeding and ruminating activity will differ from that of females. The digestive efficiency between both sexes of Soay sheep. Pérez-Barbería F. J., et al. (2008) tested the proposed hypotheses by feeding Soay sheep grass hay and observing the digestive efficiency of both sexes via their faecal output. Given that the supplied food is the same, more faecal matter implies less digestion and thus lower digestive effectiveness. Male Soay sheep produced less faecal matter than females. Although this result is consistent with the Jarman–Bell principle in that it observes the relationship between size and food quality, it does not adequately test the proposed hypotheses. For hypothesis (1), the sheep were kept in an environment where the food abundance and quality were controlled. There was no need for resources to be partitioned and segregation to occur.
For hypothesis (2), there are many external factors which may influence behavioural changes in males, enough to induce sexual segregation, that are not explored in Pérez-Barbería et al.'s experiment. In the experiment, the sheep were kept in a controlled environment with a controlled diet (monitoring for digestive efficiency only). Males consume more food than females, thereby having a greater allowance of energy to expend. Activities such as predator lookout, migration or simply standing all use energy, and since males have more energy, there could be enough leeway to induce sexual segregation. However, the cost:benefit ratio of segregating from a group remains equivocal and hard to test. Size-induced sexual segregation threshold. By observing effective food digestibility in Soay sheep, the Jarman–Bell principle seems to apply at an intraspecific level. The threshold at which this occurs was tested at 30%, but other studies (Ruckstuhl and Neuhaus 2002) have shown the threshold to be close to 20%. Modelling primate behavior. Primates are very diverse in their dietary range and in their general morphological and physiological adaptations. The Jarman–Bell principle was used to help organise these variables. It predicts a negative trend between body size and food quality. This trend is supported by observed primate adaptations and how they help them survive in their environment. It can also be used to hypothesise the general diet of newly discovered or little-researched primates by taking into account the animal's body size. For example, information about pygmy chimpanzees was scarce around the 1980s. However, they were expected to have a fruit-based diet. Steven J. C. Gaulin examined 102 primate species (from various scientific literature) for links between size and diet, and hence the Jarman–Bell principle. Omnivorous primates seemed inconsistent with the trend, likely due to the diversity of their diet. Omnivorous diets. Both of the above omnivores and the majority of primate omnivores live in open ranges, particularly ecotonal regions (where two biomes meet). In these environments, food abundance is comparatively lower than in forest biomes. The diet would shift to a mixture of low amounts of high-quality food and high amounts of low-quality food to maximise forage and energy. The universality of the Jarman–Bell principle. Deviations from the expected trend question the universality of the principle. Steven J. C. Gaulin notes that, when the principle is applied to offer any type of explanation, it is subjected to numerous other phenomena that occur at the same time. For example, the habitat range constrains the size of an organism; large primates are too heavy to live on tree tops. Or perhaps the use of adaptations or even tools was enough to allow viable consumption of food of a quality that would not otherwise be sufficient. Gigantism in dinosaurs. Extinct dinosaurs, particularly the large sauropods, can be studied primarily through two methods. Method one involves fossil records: bones and dentition. Method two involves drawing ideas from extant animals and how their body mass is linked with their diet. Comparing digestion in extant, herbivorous reptiles and mammals and relating this to sauropod gigantism. Reptiles generally have a shorter retention time than mammals. However, this loss of digestive efficiency is offset by their ability to process food into smaller particles for digestion. Smaller particles are easier to digest and ferment.
As sauropods are reptiles, it would be expected that they have a similar retention time to extant reptiles. However, the lack of particle reduction mechanisms (e.g. gastric mills, chewing teeth) challenges this expectation. Marcus Clauss et al. hypothesised that sauropods had a greatly enlarged gut capacity to account for this. Retention time is inversely proportional to intake amount. Therefore, an enlarged gut cavity allows increased intake, and thus a shorter retention time similar to that of other herbivorous reptiles. Nutrient constraints. D. M. Wilkinson and G. D. Ruxton considered the available nutrients as a driving factor for sauropod gigantism. Sauropods appeared during the Late Triassic period and became extinct at the end of the Cretaceous period. During this time period, plant matter such as conifers, ginkgos, cycads, ferns and horsetails may have been the dietary choice of sauropods. These plants have a high carbon-to-nitrogen ratio. Large amounts of this plant matter would be consumed to meet the bodily nitrogen requirement. Hence, more carbon is consumed than required. Clauss Hummel et al. (2005), cited in D. M. Wilkinson and G. D. Ruxton's paper, argue that larger size does not necessarily improve digestive efficiency. Rather, it allows nutrient prioritisation. For example, if there exists a diet with high carbon but low nitrogen content, then meeting the dietary nitrogen requirement entails consuming a large amount of carbon. Since gut volume scales linearly with body mass, larger animals have more capacity to digest food. References. <templatestyles src="Reflist/styles.css" />
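The allometric formulas quoted in the "Links with allometry" section can be made concrete with a small Python sketch. It evaluates Kleiber's law (MR = 70*W^0.75) and the retention-time relationship (Tr = 7.67*D*W^0.346) for a few body masses; the digestibility value D = 0.5 and the masses are arbitrary illustrative assumptions, not figures from the literature. The output shows the scaling behind the principle: energy requirement per kilogram falls with size while retention time rises.

def metabolic_rate_kcal_per_day(mass_kg):
    # Kleiber's law as quoted above: MR = 70 * W^0.75
    return 70 * mass_kg ** 0.75

def retention_time_hours(digestibility, mass_kg):
    # allometric retention-time formula quoted above: Tr = 7.67 * D * W^0.346
    return 7.67 * digestibility * mass_kg ** 0.346

for mass in (5, 50, 500):
    mr = metabolic_rate_kcal_per_day(mass)
    print(mass, round(mr), round(mr / mass, 1), round(retention_time_hours(0.5, mass), 1))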
[ { "math_id": 0, "text": "y = aBM^b, b \\neq 1" }, { "math_id": 1, "text": "MR = 70*W^{0.75}" }, { "math_id": 2, "text": "T_r = 7.67*D*W^{0.346}" } ]
https://en.wikipedia.org/wiki?curid=60847281
60848020
Tutte–Grothendieck invariant
In mathematics, a Tutte–Grothendieck (TG) invariant is a type of graph invariant that satisfies a generalized deletion–contraction formula. Any evaluation of the Tutte polynomial would be an example of a TG invariant. Definition. A graph function "f" is TG-invariant if: formula_0 Above "G" / "e" denotes edge contraction whereas "G" \ "e" denotes deletion. The numbers "c", "x", "y", "a", "b" are parameters. Generalization to matroids. The matroid function "f" is TG if: formula_1 It can be shown that "f" is given by: formula_2 where "E" is the edge set of "M"; "r" is the rank function; and formula_3 is the generalization of the Tutte polynomial to matroids. Grothendieck group. The invariant is named after Alexander Grothendieck because of a similar construction of the Grothendieck group used in the Riemann–Roch theorem. For more details see: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
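A direct way to see the recursion in action is to transcribe it into code. The following Python sketch evaluates the deletion–contraction recursion for "f" given above on a small multigraph, for numeric parameters c, x, y, a, b; it is an illustrative, exponential-time toy whose helper names are invented here, not an implementation from the literature. With c = a = b = 1 the recursion reduces to that of the Tutte polynomial, so the call below evaluates T(K3; 1, 1) = 3, the number of spanning trees of a triangle.

def is_bridge(edges, idx):
    # True if the non-loop edge edges[idx] disconnects its endpoints when removed
    u, v = edges[idx]
    rest = edges[:idx] + edges[idx + 1:]
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for a, b in rest:
            for x, y in ((a, b), (b, a)):
                if x == w and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return v not in seen

def tg_invariant(vertices, edges, c, x, y, a, b):
    # literal transcription of the deletion-contraction recursion for f above
    if not edges:
        return c ** len(vertices)
    u, v = edges[0]
    rest = edges[1:]
    if u == v:                                    # loop
        return y * tg_invariant(vertices, rest, c, x, y, a, b)
    merged = [(u if p == v else p, u if q == v else q) for p, q in rest]   # edges of G/e
    if is_bridge(edges, 0):                       # bridge
        return x * tg_invariant(vertices - {v}, merged, c, x, y, a, b)
    return (a * tg_invariant(vertices - {v}, merged, c, x, y, a, b)        # contraction
            + b * tg_invariant(vertices, rest, c, x, y, a, b))             # deletion

print(tg_invariant({1, 2, 3}, [(1, 2), (2, 3), (3, 1)], 1, 1, 1, 1, 1))    # 3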
[ { "math_id": 0, "text": "f(G) = \\begin{cases}\nc^{|V(G)|} & \\text{if G has no edges} \\\\\nxf(G/e) & \\text{if } e \\text{ is a bridge} \\\\\nyf(G \\backslash e) & \\text{if } e \\text{ is a loop} \\\\\naf(G/e) + bf(G \\backslash e) & \\text{else}\n\\end{cases}" }, { "math_id": 1, "text": "\\begin{align}\n&f(M_1\\oplus M_2) = f(M_1)f(M_2) \\\\\n&f(M) = af(M/e) + b f(M \\backslash e) \\ \\ \\ \\text{if } e \\text{ is not coloop or bridge}\n\\end{align}" }, { "math_id": 2, "text": "f(M) = a^{|E| - r(E)}b^{r(E)} T(M; x/a, y/b)" }, { "math_id": 3, "text": "T(M; x, y) = \\sum_{A \\subset E(M)} (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)}" } ]
https://en.wikipedia.org/wiki?curid=60848020
6085
Cauchy sequence
Sequence of points that get progressively closer to each other In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences. It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers: formula_1 the consecutive terms become arbitrarily close to each other – their differences formula_2 tend to zero as the index n grows. However, with growing values of n, the terms formula_3 become arbitrarily large. So, for any index n and distance d, there exists an index m big enough such that formula_4 As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy. The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination. Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets. In real numbers. A sequence formula_5 of real numbers is called a Cauchy sequence if for every positive real number formula_6 there is a positive integer "N" such that for all natural numbers formula_7 formula_8 where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring formula_9 to be infinitesimal for every pair of infinite "m", "n". For any real number "r", the sequence of truncated decimal expansions of "r" forms a Cauchy sequence. For example, when formula_10 this sequence is (3, 3.1, 3.14, 3.141, ...). The "m"th and "n"th terms differ by at most formula_11 when "m" &lt; "n", and as "m" grows this becomes smaller than any fixed positive number formula_12 Modulus of Cauchy convergence. If formula_13 is a sequence in the set formula_14 then a "modulus of Cauchy convergence" for the sequence is a function formula_15 from the set of natural numbers to itself, such that for all natural numbers formula_16 and natural numbers formula_17 formula_18 Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let formula_19 be the smallest possible formula_20 in the definition of Cauchy sequence, taking formula_21 to be formula_22). The existence of a modulus also follows from the principle of countable choice. "Regular Cauchy sequences" are sequences with a given modulus of Cauchy convergence (usually formula_23 or formula_24). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice. 
Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences have been used in constructive mathematics textbooks. In a metric space. Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space "X". To do so, the absolute value formula_25 is replaced by the distance formula_26 (where "d" denotes a metric) between formula_27 and formula_28 Formally, given a metric space formula_29 a sequence formula_5 is Cauchy if for every positive real number formula_30 there is a positive integer formula_20 such that for all positive integers formula_7 the distance formula_31 Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in "X". Nonetheless, such a limit does not always exist within "X": the property of a space that every Cauchy sequence converges in the space is called "completeness", and is detailed below. Completeness. A metric space ("X", "d") in which every Cauchy sequence converges to an element of "X" is called complete. Examples. The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another— is a real number. A rather different type of example is afforded by a metric space "X" which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of "X" must be constant beyond some fixed point, and converges to the eventually repeating term. Non-example: rational numbers. The rational numbers formula_32 are not complete (for the usual distance): There are sequences of rationals that converge (in formula_33) to irrational numbers; these are Cauchy sequences having no limit in formula_34 In fact, if a real number "x" is irrational, then the sequence ("x""n"), whose "n"-th term is the truncation to "n" decimal places of the decimal expansion of "x", gives a Cauchy sequence of rational numbers with irrational limit "x". Irrational numbers certainly exist in formula_35 for example: Non-example: open interval. The open interval formula_42 in the set of real numbers with an ordinary distance in formula_33 is not a complete space: there is a sequence formula_43 in it, which is Cauchy (for arbitrarily small distance bound formula_44 all terms formula_0 of formula_45 fit in the formula_46 interval), but it does not converge in formula_47 — its 'limit', the number 0, does not belong to the space formula_48 Other properties. These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom.
The alternative approach, mentioned above, of constructing the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological. One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers (or, more generally, of elements of any complete normed linear space, or Banach space). Such a series formula_53 is considered to be convergent if and only if the sequence of partial sums formula_54 is convergent, where formula_55 It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers formula_56 formula_57 If formula_58 is a uniformly continuous map between the metric spaces "M" and "N" and ("x""n") is a Cauchy sequence in "M", then formula_59 is a Cauchy sequence in "N". If formula_60 and formula_61 are two Cauchy sequences in the rational, real or complex numbers, then the sum formula_62 and the product formula_63 are also Cauchy sequences. Generalizations. In topological vector spaces. There is also a concept of Cauchy sequence for a topological vector space formula_47: Pick a local base formula_64 for formula_47 about 0; then (formula_65) is a Cauchy sequence if for each member formula_66 there is some number formula_20 such that whenever formula_67 is an element of formula_68 If the topology of formula_47 is compatible with a translation-invariant metric formula_69 the two definitions agree. In topological groups. Since the topological vector space definition of Cauchy sequence requires only that there be a continuous "subtraction" operation, it can just as well be stated in the context of a topological group: A sequence formula_70 in a topological group formula_71 is a Cauchy sequence if for every open neighbourhood formula_72 of the identity in formula_71 there exists some number formula_20 such that whenever formula_73 it follows that formula_74 As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in formula_75 As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in formula_71 that formula_70 and formula_76 are equivalent if for every open neighbourhood formula_72 of the identity in formula_71 there exists some number formula_20 such that whenever formula_73 it follows that formula_77 This relation is an equivalence relation: It is reflexive since the sequences are Cauchy sequences. It is symmetric since formula_78 which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since formula_79 where formula_80 and formula_81 are open neighbourhoods of the identity such that formula_82; such pairs exist by the continuity of the group operation. In groups. There is also a concept of Cauchy sequence in a group formula_71: Let formula_83 be a decreasing sequence of normal subgroups of formula_71 of finite index. Then a sequence formula_60 in formula_71 is said to be Cauchy (with respect to formula_84) if and only if for any formula_85 there is formula_20 such that for all formula_86 Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on formula_87 namely that for which formula_84 is a local base. 
The set formula_88 of such Cauchy sequences forms a group (for the componentwise product), and the set formula_89 of null sequences (sequences such that formula_90) is a normal subgroup of formula_91 The factor group formula_92 is called the completion of formula_71 with respect to formula_93 One can then show that this completion is isomorphic to the inverse limit of the sequence formula_94 An example of this construction familiar in number theory and algebraic geometry is the construction of the formula_95-adic completion of the integers with respect to a prime formula_96 In this case, formula_71 is the integers under addition, and formula_97 is the additive subgroup consisting of integer multiples of formula_98 If formula_84 is a cofinal sequence (that is, any normal subgroup of finite index contains some formula_97), then this completion is canonical in the sense that it is isomorphic to the inverse limit of formula_99 where formula_84 varies over all normal subgroups of finite index. For further details, see Ch. I.10 in Lang's "Algebra". In a hyperreal continuum. A real sequence formula_100 has a natural hyperreal extension, defined for hypernatural values "H" of the index "n" in addition to the usual natural "n". The sequence is Cauchy if and only if for every infinite "H" and "K", the values formula_101 and formula_102 are infinitely close, or adequal, that is, formula_103 where "st" is the standard part function. Cauchy completion of categories. A notion of Cauchy completion of a category has also been introduced. Applied to formula_32 (the category whose objects are rational numbers, and there is a morphism from "x" to "y" if and only if formula_104), this Cauchy completion yields formula_105 (again interpreted as a category using its natural ordering). References. <templatestyles src="Reflist/styles.css" />
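One standard example of a Cauchy sequence of rational numbers with an irrational limit is the Babylonian/Newton iteration for the square root of 2: x_0 = 1, x_{n+1} = (x_n + 2/x_n)/2. The following Python sketch (an illustrative toy using exact rational arithmetic) shows the Cauchy behaviour directly: every term is rational while the gaps between consecutive terms shrink rapidly towards 0, and the limit sqrt(2) lies outside the rationals.

from fractions import Fraction

x = Fraction(1)                       # x_0 = 1
terms = [x]
for _ in range(6):
    x = (x + 2 / x) / 2               # x_{n+1} = (x_n + 2/x_n) / 2
    terms.append(x)

for n in range(len(terms) - 1):
    gap = abs(terms[n + 1] - terms[n])
    print(n, float(terms[n]), float(gap))   # gaps shrink towards 0; the limit is sqrt(2)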
[ { "math_id": 0, "text": "x_n" }, { "math_id": 1, "text": "a_n=\\sqrt n," }, { "math_id": 2, "text": "a_{n+1}-a_n = \\sqrt{n+1}-\\sqrt{n} = \\frac{1}{\\sqrt{n+1}+\\sqrt{n}} < \\frac{1}{2\\sqrt n}" }, { "math_id": 3, "text": "a_n" }, { "math_id": 4, "text": "a_m - a_n > d." }, { "math_id": 5, "text": "x_1, x_2, x_3, \\ldots" }, { "math_id": 6, "text": "\\varepsilon," }, { "math_id": 7, "text": "m, n > N," }, { "math_id": 8, "text": "|x_m - x_n| < \\varepsilon," }, { "math_id": 9, "text": "x_m - x_n" }, { "math_id": 10, "text": "r = \\pi," }, { "math_id": 11, "text": "10^{1-m}" }, { "math_id": 12, "text": "\\varepsilon." }, { "math_id": 13, "text": "(x_1, x_2, x_3, ...)" }, { "math_id": 14, "text": "X," }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "k" }, { "math_id": 17, "text": "m, n > \\alpha(k)," }, { "math_id": 18, "text": "|x_m - x_n| < 1/k." }, { "math_id": 19, "text": "\\alpha(k)" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "\\varepsilon" }, { "math_id": 22, "text": "1/k" }, { "math_id": 23, "text": "\\alpha(k) = k" }, { "math_id": 24, "text": "\\alpha(k) = 2^k" }, { "math_id": 25, "text": "\\left|x_m - x_n\\right|" }, { "math_id": 26, "text": "d\\left(x_m, x_n\\right)" }, { "math_id": 27, "text": "x_m" }, { "math_id": 28, "text": "x_n." }, { "math_id": 29, "text": "(X, d)," }, { "math_id": 30, "text": "\\varepsilon > 0" }, { "math_id": 31, "text": "d\\left(x_m, x_n\\right) < \\varepsilon." }, { "math_id": 32, "text": "\\Q" }, { "math_id": 33, "text": "\\R" }, { "math_id": 34, "text": "\\Q." }, { "math_id": 35, "text": "\\R," }, { "math_id": 36, "text": "x_0=1, x_{n+1}=\\frac{x_n+2/x_n}{2}" }, { "math_id": 37, "text": "x_n = F_n / F_{n-1}" }, { "math_id": 38, "text": "\\phi" }, { "math_id": 39, "text": "\\phi^2 = \\phi+1," }, { "math_id": 40, "text": "\\varphi = (1+\\sqrt5)/2," }, { "math_id": 41, "text": "x \\neq 0," }, { "math_id": 42, "text": "X = (0, 2)" }, { "math_id": 43, "text": "x_n = 1/n" }, { "math_id": 44, "text": "d > 0" }, { "math_id": 45, "text": "n > 1/d" }, { "math_id": 46, "text": "(0, d)" }, { "math_id": 47, "text": "X" }, { "math_id": 48, "text": "X ." }, { "math_id": 49, "text": "\\varepsilon > 0," }, { "math_id": 50, "text": "\\varepsilon/2" }, { "math_id": 51, "text": "x_N" }, { "math_id": 52, "text": "M + 1" }, { "math_id": 53, "text": "\\sum_{n=1}^{\\infty} x_n" }, { "math_id": 54, "text": "(s_{m})" }, { "math_id": 55, "text": "s_m = \\sum_{n=1}^{m} x_n." }, { "math_id": 56, "text": "p > q," }, { "math_id": 57, "text": "s_p - s_q = \\sum_{n=q+1}^p x_n." }, { "math_id": 58, "text": "f : M \\to N" }, { "math_id": 59, "text": "(f(x_n))" }, { "math_id": 60, "text": "(x_n)" }, { "math_id": 61, "text": "(y_n)" }, { "math_id": 62, "text": "(x_n + y_n)" }, { "math_id": 63, "text": "(x_n y_n)" }, { "math_id": 64, "text": "B" }, { "math_id": 65, "text": "x_k" }, { "math_id": 66, "text": "V\\in B," }, { "math_id": 67, "text": "n,m > N, x_n - x_m" }, { "math_id": 68, "text": "V." }, { "math_id": 69, "text": "d," }, { "math_id": 70, "text": "(x_k)" }, { "math_id": 71, "text": "G" }, { "math_id": 72, "text": "U" }, { "math_id": 73, "text": "m,n>N" }, { "math_id": 74, "text": "x_n x_m^{-1} \\in U." }, { "math_id": 75, "text": "G." }, { "math_id": 76, "text": "(y_k)" }, { "math_id": 77, "text": "x_n y_m^{-1} \\in U." 
}, { "math_id": 78, "text": "y_n x_m^{-1} = (x_m y_n^{-1})^{-1} \\in U^{-1}" }, { "math_id": 79, "text": "x_n z_l^{-1} = x_n y_m^{-1} y_m z_l^{-1} \\in U' U''" }, { "math_id": 80, "text": "U'" }, { "math_id": 81, "text": "U''" }, { "math_id": 82, "text": "U'U'' \\subseteq U" }, { "math_id": 83, "text": "H=(H_r)" }, { "math_id": 84, "text": "H" }, { "math_id": 85, "text": "r" }, { "math_id": 86, "text": "m, n > N, x_n x_m^{-1} \\in H_r." }, { "math_id": 87, "text": "G," }, { "math_id": 88, "text": "C" }, { "math_id": 89, "text": "C_0" }, { "math_id": 90, "text": "\\forall r, \\exists N, \\forall n > N, x_n \\in H_r" }, { "math_id": 91, "text": "C." }, { "math_id": 92, "text": "C/C_0" }, { "math_id": 93, "text": "H." }, { "math_id": 94, "text": "(G/H_r)." }, { "math_id": 95, "text": "p" }, { "math_id": 96, "text": "p." }, { "math_id": 97, "text": "H_r" }, { "math_id": 98, "text": "p_r." }, { "math_id": 99, "text": "(G/H)_H," }, { "math_id": 100, "text": "\\langle u_n : n \\in \\N \\rangle" }, { "math_id": 101, "text": "u_H" }, { "math_id": 102, "text": "u_K" }, { "math_id": 103, "text": "\\mathrm{st}(u_H-u_K)= 0" }, { "math_id": 104, "text": "x \\leq y" }, { "math_id": 105, "text": "\\R\\cup\\left\\{\\infty\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=6085
60851869
Parallel algorithms for minimum spanning trees
In graph theory, a minimum spanning tree (MST) formula_0 of a graph formula_1 with formula_2 and formula_3 is a tree subgraph of formula_4 that contains all of its vertices and is of minimum weight. MSTs are useful and versatile tools utilised in a wide variety of practical and theoretical fields. For example, a company looking to supply multiple stores with a certain product from a single warehouse might use an MST originating at the warehouse to calculate the shortest paths to each company store. In this case the stores and the warehouse are represented as vertices and the road connections between them as edges. Each edge is labelled with the length of the corresponding road connection. If formula_4 is edge-unweighted every spanning tree possesses the same number of edges and thus the same weight. In the edge-weighted case, the spanning tree whose sum of edge weights is lowest among all spanning trees of formula_4 is called a minimum spanning tree (MST). It is not necessarily unique. More generally, graphs that are not necessarily connected have minimum spanning forests, which consist of a union of MSTs for each connected component. As finding MSTs is a widespread problem in graph theory, there exist many sequential algorithms for solving it. Among them are Prim's, Kruskal's and Borůvka's algorithms, each utilising different properties of MSTs. They all operate in a similar fashion - a subset of formula_5 is iteratively grown until a valid MST has been discovered. However, as practical problems are often quite large (road networks sometimes have billions of edges), performance is a key factor. One option for improving it is by parallelising known MST algorithms. Prim's algorithm. This algorithm utilises the cut property of MSTs. A simple high-level pseudocode implementation is provided below: formula_6 formula_7 where formula_8 is a random vertex in formula_9 repeat formula_10 times find lightest edge formula_11 s.t. formula_12 but formula_13 formula_14 formula_15 return T Each edge is observed exactly twice - namely when examining each of its endpoints. Each vertex is examined exactly once for a total of formula_16 operations aside from the selection of the lightest edge at each loop iteration. This selection is often performed using a priority queue (PQ). For each edge at most one decreaseKey operation (amortised in formula_17) is performed and each loop iteration performs one deleteMin operation (formula_18). Thus using Fibonacci heaps the total runtime of Prim's algorithm is asymptotically in formula_19. It is important to note that the loop is inherently sequential and cannot be properly parallelised. This is the case since the lightest edge with one endpoint in formula_20 and one in formula_21 might change with the addition of edges to formula_0. Thus no two selections of a lightest edge can be performed at the same time. However, there do exist some attempts at parallelisation. One possible idea is to use formula_22 processors to support PQ access in formula_17 on an EREW-PRAM machine, thus lowering the total runtime to formula_16. Kruskal's algorithm. Kruskal's MST algorithm utilises the cycle property of MSTs. A high-level pseudocode representation is provided below.
formula_23 forest with every vertex in its own subtree foreach formula_24 in ascending order of weight if formula_25 and formula_26 in different subtrees of formula_0 formula_15 return T The subtrees of formula_0 are stored in union-find data structures, which is why checking whether or not two vertices are in the same subtree is possible in amortised formula_27 where formula_28 is the inverse Ackermann function. Thus the total runtime of the algorithm is in formula_29. Here formula_30 denotes the single-valued inverse Ackermann function, for which any realistic input yields an integer less than five. Approach 1: Parallelising the sorting step. Similarly to Prim's algorithm there are components in Kruskal's approach that can not be parallelised in its classical variant. For example, determining whether or not two vertices are in the same subtree is difficult to parallelise, as two union operations might attempt to join the same subtrees at the same time. Really the only opportunity for parallelisation lies in the sorting step. As sorting is linear in the optimal case on formula_18 processors, the total runtime can be reduced to formula_31. Approach 2: Filter-Kruskal. Another approach would be to modify the original algorithm by growing formula_0 more aggressively. This idea was presented by Osipov et al. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to quicksort and filter out edges that connect vertices that belong to the same tree in order to reduce the cost of sorting. A high-level pseudocode representation is provided below. filterKruskal(formula_4): if formula_32 KruskalThreshold: return kruskal(formula_4) pivot = chooseRandom(formula_5) formula_33, formula_34partition(formula_5, pivot) formula_35 filterKruskal(formula_36) formula_37 filter(formula_38) formula_39 formula_40 filterKruskal(formula_38) return formula_41 partition(formula_5, pivot): formula_42 formula_43 foreach formula_24: if weight(formula_44) formula_45 pivot: formula_46 else formula_47 return (formula_36, formula_38) filter(formula_5): formula_48 foreach formula_24: if find-set(u) formula_49 find-set(v): formula_50 return formula_51 Filter-Kruskal is better suited for parallelisation, since sorting, partitioning and filtering have intuitively easy parallelisations where the edges are simply divided between the cores. Borůvka's algorithm. The main idea behind Borůvka's algorithm is edge contraction. An edge formula_52 is contracted by first removing formula_26 from the graph and then redirecting every edge formula_53 to formula_54. These new edges retain their old edge weights. If the goal is not just to determine the weight of an MST but also which edges it comprises, it must be noted between which pairs of vertices an edge was contracted. A high-level pseudocode representation is presented below. formula_6 while formula_55 formula_56 for formula_57 formula_58 formula_40 lightest formula_59 for formula_60 contract formula_52 formula_61 return T It is possible that contractions lead to multiple edges between a pair of vertices. The intuitive way of choosing the lightest of them is not possible in formula_62. However, if all contractions that share a vertex are performed in parallel this is doable. The recursion stops when there is only a single vertex remaining, which means the algorithm needs at most formula_63 iterations, leading to a total runtime in formula_64. Parallelisation. One possible parallelisation of this algorithm yields a polylogarithmic time complexity, i.e. 
formula_65 and there exists a constant formula_66 so that formula_67. Here formula_68 denotes the runtime for a graph with formula_69 edges and formula_70 vertices on a machine with formula_71 processors. The basic idea is the following: while formula_55 find lightest incident edges // formula_72 assign the corresponding subgraph to each vertex // formula_73 contract each subgraph // formula_74 The MST then consists of all the found lightest edges. This parallelisation utilises the adjacency array graph representation for formula_1. This consists of three arrays - formula_75 of length formula_76 for the vertices, formula_77 of length formula_69 for the endpoints of each of the formula_69 edges and formula_66 of length formula_69 for the edges' weights. Now for vertex formula_78 the other end of each edge incident to formula_78 can be found in the entries between formula_79 and formula_80. The weight of the formula_78-th edge in formula_75 can be found in formula_81. Then the formula_78-th edge in formula_77 is between vertices formula_25 and formula_26 if and only if formula_82 and formula_83. Finding the lightest incident edge. First the edges are distributed between each of the formula_71 processors. The formula_78-th processor receives the edges stored between formula_84 and formula_85. Furthermore, each processor needs to know to which vertex these edges belong (since formula_77 only stores one of the edge's endpoints) and stores this in the array formula_86. Obtaining this information is possible in formula_18 using formula_71 binary searches or in formula_87 using a linear search. In practice the latter approach is sometimes quicker, even though it is asymptotically worse. Now each processor determines the lightest edge incident to each of its vertices. formula_88 find(formula_89, formula_75) for formula_90 if formula_91 formula_92 if formula_93 formula_94 Here the issue arises that some vertices are handled by more than one processor. A possible solution to this is that every processor has its own formula_95 array which is later combined with those of the others using a reduction. Each processor has at most two vertices that are also handled by other processors and each reduction is in formula_96. Thus the total runtime of this step is in formula_72. Assigning subgraphs to vertices. Observe the graph that consists solely of the edges collected in the previous step. These edges are directed away from the vertex to which they are the lightest incident edge. The resulting graph decomposes into multiple weakly connected components. The goal of this step is to assign to each vertex the component of which it is a part. Note that every vertex has exactly one outgoing edge and therefore each component is a pseudotree - a tree with a single extra edge that runs in parallel to the lightest edge in the component but in the opposite direction. The following code mutates this extra edge into a loop: parallel forAll formula_57 formula_97 if formula_98 formula_99 Now every weakly connected component is a directed tree where the root has a loop. This root is chosen as the representative of each component. The following code uses doubling to assign each vertex its representative: while formula_100 forAll formula_57 formula_101 Now every subgraph is a star. With some advanced techniques this step needs formula_73 time.
formula_102 number of subgraphs formula_103 find a bijective function formula_104 star root formula_105 formula_106 Finding the bijective function is possible in formula_107 using a prefix sum. As we now have a new set of vertices and edges, the adjacency array must be rebuilt, which can be done using integer sorting on formula_108 in formula_109 time. Complexity. Each iteration now needs formula_74 time and, just like in the sequential case, there are formula_63 iterations, resulting in a total runtime of formula_110. If formula_111 the efficiency of the algorithm is in formula_112 and it is relatively efficient. If formula_113 then it is absolutely efficient. Further algorithms. There are multiple other parallel algorithms that deal with the issue of finding an MST. With a linear number of processors it is possible to achieve this in formula_18. Bader and Cong presented an MST algorithm that was five times quicker on eight cores than an optimal sequential algorithm. Another challenge is the external memory model - there is a proposed algorithm due to Dementiev et al. that is claimed to be only two to five times slower than an algorithm that only makes use of internal memory. References. <templatestyles src="Reflist/styles.css" />
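For comparison with the high-level pseudocode above, the following compact Python sketch implements a sequential version of Borůvka's algorithm with a union-find structure; the function and variable names are chosen here purely for illustration. Each round selects the lightest edge leaving every current component and contracts along those edges, which is exactly the per-iteration work whose parallelisation is described in this section.

def boruvka_mst(n, edges):
    # edges are (weight, u, v) tuples over vertices 0..n-1
    parent = list(range(n))

    def find(v):                            # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, components = [], n
    while components > 1:
        cheapest = [None] * n               # lightest edge leaving each component
        for e in edges:
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if cheapest[r] is None or w < cheapest[r][0]:
                    cheapest[r] = e
        if all(c is None for c in cheapest):
            break                           # remaining components are disconnected
        for e in sorted(set(c for c in cheapest if c is not None)):
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:                    # contract the two components
                parent[ru] = rv
                mst.append(e)
                components -= 1
    return mst

print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
# -> [(1, 0, 1), (2, 1, 2), (3, 2, 3)], an MST of total weight 6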
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "G = (V, E)" }, { "math_id": 2, "text": "|V| = n" }, { "math_id": 3, "text": "|E| = m" }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "T \\gets \\emptyset" }, { "math_id": 7, "text": "S \\gets \\{s\\}" }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "|V| - 1" }, { "math_id": 11, "text": "(u,v)" }, { "math_id": 12, "text": "u \\in S" }, { "math_id": 13, "text": "v \\in (V \\setminus S)" }, { "math_id": 14, "text": "S \\gets S \\cup \\{v\\}" }, { "math_id": 15, "text": "T \\gets T \\cup \\{(u,v)\\}" }, { "math_id": 16, "text": "O(n + m)" }, { "math_id": 17, "text": "O(1)" }, { "math_id": 18, "text": "O(\\log n)" }, { "math_id": 19, "text": "O(m + n \\log n)" }, { "math_id": 20, "text": "S" }, { "math_id": 21, "text": "V \\setminus S" }, { "math_id": 22, "text": "O(n)" }, { "math_id": 23, "text": "T \\gets" }, { "math_id": 24, "text": "(u, v) \\in E" }, { "math_id": 25, "text": "u" }, { "math_id": 26, "text": "v" }, { "math_id": 27, "text": "O(\\alpha(m, n))" }, { "math_id": 28, "text": "\\alpha(m, n)" }, { "math_id": 29, "text": "O(sort(n) + \\alpha(n))" }, { "math_id": 30, "text": "\\alpha(n)" }, { "math_id": 31, "text": "O(m \\alpha(n))" }, { "math_id": 32, "text": "m <" }, { "math_id": 33, "text": "(E_{\\leq}" }, { "math_id": 34, "text": "E_{>}) \\gets " }, { "math_id": 35, "text": "A \\gets" }, { "math_id": 36, "text": "E_{\\leq}" }, { "math_id": 37, "text": "E_{>} \\gets" }, { "math_id": 38, "text": "E_{>}" }, { "math_id": 39, "text": "A \\gets A" }, { "math_id": 40, "text": "\\cup" }, { "math_id": 41, "text": "A" }, { "math_id": 42, "text": "E_{\\leq} \\gets \\emptyset " }, { "math_id": 43, "text": "E_{>} \\gets \\emptyset" }, { "math_id": 44, "text": "u, v" }, { "math_id": 45, "text": "\\leq" }, { "math_id": 46, "text": "E_{\\leq} \\gets E_{\\leq} \\cup {(u, v)}" }, { "math_id": 47, "text": "E_{>} \\gets E_{>} \\cup {(u, v)} " }, { "math_id": 48, "text": "E_{filtered} \\gets \\emptyset " }, { "math_id": 49, "text": "\\neq" }, { "math_id": 50, "text": "E_{filtered} \\gets E_{filtered} \\cup {(u, v)}" }, { "math_id": 51, "text": "E_{filtered}" }, { "math_id": 52, "text": "\\{u, v\\}" }, { "math_id": 53, "text": "\\{w, v\\} \\in E" }, { "math_id": 54, "text": "\\{w, u\\}" }, { "math_id": 55, "text": "|V| > 1" }, { "math_id": 56, "text": "S \\gets \\emptyset" }, { "math_id": 57, "text": "v \\in V" }, { "math_id": 58, "text": "S \\gets S" }, { "math_id": 59, "text": "\\{u, v\\} \\in E" }, { "math_id": 60, "text": "\\{u, v\\} \\in S" }, { "math_id": 61, "text": "T \\gets T \\cup S" }, { "math_id": 62, "text": "O(m)" }, { "math_id": 63, "text": "\\log n" }, { "math_id": 64, "text": "O(m \\log n)" }, { "math_id": 65, "text": "T(m, n, p) \\cdot p \\in O(m \\log n)" }, { "math_id": 66, "text": "c" }, { "math_id": 67, "text": "T(m, n, p) \\in O(\\log^c m)" }, { "math_id": 68, "text": "T(m, n, p)" }, { "math_id": 69, "text": "m" }, { "math_id": 70, "text": "n" }, { "math_id": 71, "text": "p" }, { "math_id": 72, "text": "O(\\frac{m}{p} + \\log n + \\log p)" }, { "math_id": 73, "text": "O(\\frac{n}{p} + \\log n)" }, { "math_id": 74, "text": "O(\\frac{m}{p} + \\log n)" }, { "math_id": 75, "text": "\\Gamma" }, { "math_id": 76, "text": "n + 1" }, { "math_id": 77, "text": "\\gamma" }, { "math_id": 78, "text": "i" }, { "math_id": 79, "text": "\\gamma [\\Gamma [i-1]]" }, { "math_id": 80, "text": "\\gamma [\\Gamma[i]]" }, { "math_id": 81, "text": "c[i]" }, { 
"math_id": 82, "text": "\\Gamma[u] \\leq i < \\Gamma[u + 1]" }, { "math_id": 83, "text": "\\gamma[i] = v" }, { "math_id": 84, "text": "\\gamma[\\frac{i m}{p}]" }, { "math_id": 85, "text": "\\gamma[\\frac{(i + 1)m}{p} - 1]" }, { "math_id": 86, "text": "pred" }, { "math_id": 87, "text": "O(\\frac{n}{p} + p)" }, { "math_id": 88, "text": "v \\gets" }, { "math_id": 89, "text": "\\frac{i m}{p}" }, { "math_id": 90, "text": "e \\gets \\frac{i m}{p}; e < \\frac{(i+1) m}{p} - 1; e++" }, { "math_id": 91, "text": "\\Gamma[v+1] = e " }, { "math_id": 92, "text": "v++" }, { "math_id": 93, "text": "c[e] < c[pred[v]]" }, { "math_id": 94, "text": "pred[v] \\gets e" }, { "math_id": 95, "text": "prev" }, { "math_id": 96, "text": "O(\\log p)" }, { "math_id": 97, "text": "w \\gets pred[v]" }, { "math_id": 98, "text": "pred[w] = v \\land v < w" }, { "math_id": 99, "text": "pred[v] \\gets v" }, { "math_id": 100, "text": "\\exists v \\in V: pred[v] \\neq pred[pred[v]]" }, { "math_id": 101, "text": "pred[v] \\gets pred[pred[v]]" }, { "math_id": 102, "text": "k \\gets" }, { "math_id": 103, "text": "V' \\gets \\{0, \\dots , k-1\\}" }, { "math_id": 104, "text": "f:" }, { "math_id": 105, "text": "\\rightarrow \\{0, \\dots, k-1\\}" }, { "math_id": 106, "text": "E' \\gets \\{(f(pred[v]), f(pred[w]), c, e_{old}): (v, w) \\in E \\land pred[v] \\neq pred[w]\\}" }, { "math_id": 107, "text": "O(\\frac{n}{p} + \\log p)" }, { "math_id": 108, "text": "E'" }, { "math_id": 109, "text": "O(\\frac{m}{p} + \\log p)" }, { "math_id": 110, "text": "O(\\log n(\\frac{m}{p} + \\log n))" }, { "math_id": 111, "text": "m \\in \\Omega(p \\log^2 p)" }, { "math_id": 112, "text": "\\Theta(1)" }, { "math_id": 113, "text": "m \\in O(n)" } ]
https://en.wikipedia.org/wiki?curid=60851869
60854
Additive category
Preadditive category that admits all finitary products In mathematics, specifically in category theory, an additive category is a preadditive category C admitting all finitary biproducts. Definition. There are two equivalent definitions of an additive category: one as a category equipped with additional structure, and another as a category equipped with "no extra structure" but whose objects and morphisms satisfy certain equations. Via preadditive categories. A category C is preadditive if all its hom-sets are abelian groups and composition of morphisms is bilinear; in other words, C is enriched over the monoidal category of abelian groups. In a preadditive category, every finitary product (including the empty product, i.e., a final object) is necessarily a coproduct (or initial object in the case of an empty diagram), and hence a biproduct, and conversely every finitary coproduct is necessarily a product (this is a consequence of the definition, not a part of it). Thus an additive category is equivalently described as a preadditive category admitting all finitary products, or a preadditive category admitting all finitary coproducts. Via semiadditive categories. We give an alternative definition. Define a semiadditive category to be a category (note: not a preadditive category) which admits a zero object and all binary biproducts. It is then a remarkable theorem that the Hom sets naturally admit an abelian monoid structure. A proof of this fact is given below. An additive category may then be defined as a semiadditive category in which every morphism has an additive inverse. This then gives the Hom sets an abelian group structure instead of merely an abelian monoid structure. Generalization. More generally, one also considers additive R-linear categories for a commutative ring R. These are categories enriched over the monoidal category of R-modules and admitting all finitary biproducts. Examples. The original example of an additive category is the category of abelian groups Ab. The zero object is the trivial group, the addition of morphisms is given pointwise, and biproducts are given by direct sums. More generally, every module category over a ring R is additive, and so in particular, the category of vector spaces over a field K is additive. The algebra of matrices over a ring, thought of as a category as described below, is also additive. Internal characterisation of the addition law. Let C be a semiadditive category, so a category having all finitary biproducts. Then every hom-set has an addition, endowing it with the structure of an abelian monoid, and such that the composition of morphisms is bilinear. Moreover, if C is additive, then the two additions on hom-sets must agree. In particular, a semiadditive category is additive if and only if every morphism has an additive inverse. This shows that the addition law for an additive category is "internal" to that category. To define the addition law, we will use the convention that for a biproduct, "pk" will denote the projection morphisms, and "ik" will denote the injection morphisms. For each object A, we define the diagonal morphism ∆: "A" → "A" ⊕ "A" and the codiagonal morphism ∇: "A" ⊕ "A" → "A", induced by the identity morphisms on "A". Then, for "k" = 1, 2, we have "p""k" ∘ ∆ = 1"A" and ∇ ∘ "i""k" = 1"A". Next, given two morphisms α"k": "A" → "B", there exists a unique morphism α1 ⊕ α2: "A" ⊕ "A" → "B" ⊕ "B" such that "p""l" ∘ (α1 ⊕ α2) ∘ "i""k" equals α"k" if "k" = "l", and 0 otherwise. We can therefore define α1 + α2 := ∇ ∘ (α1 ⊕ α2) ∘ ∆. This addition is both commutative and associative.
The associativity can be seen by considering the composition formula_0 We have α + 0 = α, using that α ⊕ 0 = "i"1 ∘ α ∘ "p"1. It is also bilinear, using for example that ∆ ∘ β = (β ⊕ β) ∘ ∆ and that (α1 ⊕ α2) ∘ (β1 ⊕ β2) = (α1 ∘ β1) ⊕ (α2 ∘ β2). We remark that for a biproduct "A" ⊕ "B" we have "i"1 ∘ "p"1 + "i"2 ∘ "p"2 = 1. Using this, we can represent any morphism "A" ⊕ "B" → "C" ⊕ "D" as a matrix. Matrix representation of morphisms. Given objects "A"1, ..., "An" and "B"1, ..., "Bm" in an additive category, we can represent morphisms "f": "A"1 ⊕ ⋅⋅⋅ ⊕ "An" → "B"1 ⊕ ⋅⋅⋅ ⊕ "Bm" as m-by-n matrices formula_1 where formula_2 Using that ∑"k" "i""k" ∘ "p""k" = 1, it follows that addition and composition of matrices obey the usual rules for matrix addition and multiplication. Thus additive categories can be seen as the most general context in which the algebra of matrices makes sense. Recall that the morphisms from a single object A to itself form the endomorphism ring End "A". If we denote the n-fold product of A with itself by "A""n", then morphisms from "An" to "Am" are "m"-by-"n" matrices with entries from the ring End "A". Conversely, given any ring R, we can form a category Mat("R") by taking objects "An" indexed by the set of natural numbers (including 0) and letting the hom-set of morphisms from "An" to "Am" be the set of m-by-n matrices over R, and where composition is given by matrix multiplication. Then Mat("R") is an additive category, and "A""n" equals the n-fold power ("A"1)"n". This construction should be compared with the result that a ring is a preadditive category with just one object, shown here. If we interpret the object "A""n" as the left module "R""n", then this "matrix category" becomes a subcategory of the category of left modules over R. This may be confusing in the special case where m or n is zero, because we usually don't think of matrices with 0 rows or 0 columns. This concept makes sense, however: such matrices have no entries and so are completely determined by their size. While these matrices are rather degenerate, they do need to be included to get an additive category, since an additive category must have a zero object. Thinking about such matrices can be useful in one way, though: they highlight the fact that given any objects A and B in an additive category, there is exactly one morphism from A to 0 (just as there is exactly one 0-by-1 matrix with entries in End "A") and exactly one morphism from 0 to B (just as there is exactly one 1-by-0 matrix with entries in End "B") – this is just what it means to say that 0 is a zero object. Furthermore, the zero morphism from A to B is the composition of these morphisms, as can be calculated by multiplying the degenerate matrices. Additive functors. A functor "F": C → D between preadditive categories is "additive" if it is an abelian group homomorphism on each hom-set in C. If the categories are additive, then a functor is additive if and only if it preserves all biproduct diagrams. That is, if B is a biproduct of "A"1, ... , "An" in C with projection morphisms "pk" and injection morphisms "ij", then "F"("B") should be a biproduct of "F"("A"1), ... , "F"("An") in D with projection morphisms "F"("p""j") and injection morphisms "F"("ij"). Almost all functors studied between additive categories are additive. In fact, it is a theorem that all adjoint functors between additive categories must be additive functors (see here). Most of the interesting functors studied in category theory are adjoints. Generalization. 
When considering functors between R-linear additive categories, one usually restricts to R-linear functors, that is, functors giving an R-module homomorphism on each hom-set. Special cases. Many commonly studied additive categories are in fact abelian categories; for example, Ab is an abelian category. The category of free abelian groups provides an example of a category that is additive but not abelian. References. <templatestyles src="Reflist/styles.css" />
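To complement the matrix category Mat("R") described above, here is a small computational sketch (not part of the article) with R = ℤ, using NumPy arrays as matrices; the helper names are ad hoc. It exhibits the biproduct of the objects n and m as the object n + m via block injections and projections, and checks the identities "p""k" ∘ "i""k" = 1 and "i"1 ∘ "p"1 + "i"2 ∘ "p"2 = 1 used in the article.

```python
import numpy as np

# Illustrative sketch only: the matrix category Mat(R) with R = Z.
# An object is a natural number n (standing for A^n); a morphism n -> m is an m-by-n integer
# matrix; composition is matrix multiplication; the identity on n is the n-by-n identity matrix.

def identity(n):
    return np.eye(n, dtype=int)

def compose(g, f):
    """Composite of f: n -> m followed by g: m -> k, i.e. the k-by-n matrix g @ f."""
    return g @ f

def injections(n, m):
    """Injections of the biproduct n (+) m = n + m, as block matrices."""
    i1 = np.vstack([identity(n), np.zeros((m, n), dtype=int)])   # n -> n + m
    i2 = np.vstack([np.zeros((n, m), dtype=int), identity(m)])   # m -> n + m
    return i1, i2

def projections(n, m):
    """Projections of the biproduct n (+) m = n + m, as block matrices."""
    p1 = np.hstack([identity(n), np.zeros((n, m), dtype=int)])   # n + m -> n
    p2 = np.hstack([np.zeros((m, n), dtype=int), identity(m)])   # n + m -> m
    return p1, p2

n, m = 2, 3
i1, i2 = injections(n, m)
p1, p2 = projections(n, m)

# Biproduct identities: p_k . i_k = 1, p_1 . i_2 = 0, and i_1 . p_1 + i_2 . p_2 = 1.
assert np.array_equal(compose(p1, i1), identity(n))
assert np.array_equal(compose(p2, i2), identity(m))
assert np.array_equal(compose(p1, i2), np.zeros((n, m), dtype=int))
assert np.array_equal(compose(i1, p1) + compose(i2, p2), identity(n + m))
```

Since every object of Mat(ℤ) is a finite biproduct of the object 1, this is essentially the observation that block decomposition of matrices encodes the biproduct structure.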
[ { "math_id": 0, "text": "A\\ \\xrightarrow{\\quad\\Delta\\quad}\\ A \\oplus A \\oplus A\\ \\xrightarrow{\\alpha_1\\,\\oplus\\,\\alpha_2\\,\\oplus\\,\\alpha_3}\\ B \\oplus B \\oplus B\\ \\xrightarrow{\\quad\\nabla\\quad}\\ B" }, { "math_id": 1, "text": "\\begin{pmatrix} f_{11} & f_{12} & \\cdots & f_{1n} \\\\\nf_{21} & f_{22} & \\cdots & f_{2n} \\\\\n\\vdots & \\vdots & \\cdots & \\vdots \\\\\nf_{m1} & f_{m2} & \\cdots & f_{mn} \\end{pmatrix}\n" }, { "math_id": 2, "text": "f_{kl} := p_k \\circ f \\circ i_l\\colon A_l \\to B_k." } ]
https://en.wikipedia.org/wiki?curid=60854