700131
One-loop Feynman diagram
Feynman diagram with only one cycle In physics, a one-loop Feynman diagram is a connected Feynman diagram with only one cycle (unicyclic). Such a diagram can be obtained from a connected tree diagram by taking two external lines of the same type and joining them together into an edge. Diagrams with loops (in graph theory, these kinds of loops are called cycles, while the word loop is an edge connecting a vertex with itself) correspond to the quantum corrections to the classical field theory. Because one-loop diagrams contain only one cycle, they express the next-to-classical contributions, called the "semiclassical contributions". One-loop diagrams are usually computed as the integral over one independent momentum that can "run in the cycle". The Casimir effect, Hawking radiation and the Lamb shift are examples of phenomena whose existence can be inferred using one-loop Feynman diagrams, especially the well-known "triangle diagram". The evaluation of one-loop Feynman diagrams usually leads to divergent expressions, which are due either to infrared divergences or to ultraviolet divergences. Infrared divergences are usually dealt with by assigning the zero-mass particles a small mass "λ", evaluating the corresponding expression and then taking the limit formula_0. Ultraviolet divergences are dealt with by renormalization.
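As a concrete illustration (a standard textbook example added here, not one of the specific diagrams mentioned above), consider the scalar one-loop two-point ("bubble") integral for a particle of mass m and external momentum p, in which the loop momentum k is the single independent momentum that runs in the cycle:
\[
B_0(p^2, m^2) = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{\left(k^2 - m^2 + i\epsilon\right)\left((k+p)^2 - m^2 + i\epsilon\right)} .
\]
For large |k| the integrand falls off only as 1/k^4, so the integral diverges logarithmically in the ultraviolet; in dimensional regularization the divergence appears as a pole in the regulator and is removed by renormalization.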
[ { "math_id": 0, "text": "\\lambda \\to 0" } ]
https://en.wikipedia.org/wiki?curid=700131
700154
WKB approximation
Solution method for linear differential equations In mathematical physics, the WKB approximation or WKB method is a method for finding approximate solutions to linear differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wavefunction is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be changing slowly. The name is an initialism for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used letter combinations include JWKB and WKBJ, where the "J" stands for Jeffreys. Brief history. This method is named after physicists Gregor Wentzel, Hendrik Anthony Kramers, and Léon Brillouin, who all developed it in 1926. In 1923, mathematician Harold Jeffreys had developed a general method of approximating solutions to linear, second-order differential equations, a class that includes the Schrödinger equation. The Schrödinger equation itself was not developed until two years later, and Wentzel, Kramers, and Brillouin were apparently unaware of this earlier work, so Jeffreys is often not credited for it. Early texts in quantum mechanics contain any number of combinations of their initials, including WBK, BWK, WKBJ, JWKB and BWKJ. An authoritative discussion and critical survey has been given by Robert B. Dingle. Earlier appearances of essentially equivalent methods are: Francesco Carlini in 1817, Joseph Liouville in 1837, George Green in 1837, Lord Rayleigh in 1912 and Richard Gans in 1915. Liouville and Green may be said to have founded the method in 1837, and it is also commonly referred to as the Liouville–Green or LG method. The important contribution of Jeffreys, Wentzel, Kramers, and Brillouin to the method was the inclusion of the treatment of turning points, connecting the evanescent and oscillatory solutions at either side of the turning point. For example, this may occur in the Schrödinger equation, due to a potential energy hill. Formulation. Generally, WKB theory is a method for approximating the solution of a differential equation whose "highest derivative is multiplied by a small parameter" ε. The method of approximation is as follows. For a differential equation formula_0 assume a solution of the form of an asymptotic series expansion formula_1 in the limit "δ" → 0. The asymptotic scaling of δ in terms of ε will be determined by the equation – see the example below. Substituting the above ansatz into the differential equation and cancelling out the exponential terms allows one to solve for an arbitrary number of terms "S""n"("x") in the expansion. WKB theory is a special case of multiple scale analysis. An example. This example comes from the text of Carl M. Bender and Steven Orszag. Consider the second-order homogeneous linear differential equation formula_2 where formula_3. Substituting formula_4 results in the equation formula_5 To leading order in "ϵ" (assuming, for the moment, the series will be asymptotically consistent), the above can be approximated as formula_6 In the limit "δ" → 0, the dominant balance is given by formula_7 So δ is proportional to "ϵ". Setting them equal and comparing powers yields formula_8 which can be recognized as the eikonal equation, with solution formula_9 Considering first-order powers of ϵ fixes formula_10 This has the solution formula_11 where "k"1 is an arbitrary constant. 
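As a quick check of the two terms just derived (an added illustration, not part of Bender and Orszag's presentation): for the constant coefficient Q(x) = 1, the eikonal and transport equations give
\[
S_0(x) = \pm x, \qquad S_1(x) = -\tfrac{1}{4}\ln 1 + k_1 = k_1 ,
\]
so that, with \(\delta = \epsilon\),
\[
y(x) \approx \exp\!\left[\frac{S_0(x)}{\epsilon} + S_1(x)\right] = c\, e^{\pm x/\epsilon} ,
\]
which reproduces the exact solutions of \(\epsilon^2 y'' = y\); the higher-order terms \(S_n\) with \(n \ge 2\) are constants that can be absorbed into the prefactor.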
We now have a pair of approximations to the system (a pair, because "S"0 can take two signs); the first-order WKB-approximation will be a linear combination of the two: formula_12 Higher-order terms can be obtained by looking at equations for higher powers of δ. Explicitly, formula_13 for "n" ≥ 2. Precision of the asymptotic series. The asymptotic series for "y"("x") is usually a divergent series, whose general term "δ""n" "S""n"("x") starts to increase after a certain value "n" = "n"max. Therefore, the smallest error achieved by the WKB method is at best of the order of the last included term. For the equation formula_2 with "Q"("x") <0 an analytic function, the value formula_14 and the magnitude of the last term can be estimated as follows: formula_15 formula_16 where formula_17 is the point at which formula_18 needs to be evaluated and formula_19 is the (complex) turning point where formula_20, closest to formula_21. The number "n"max can be interpreted as the number of oscillations between formula_17 and the closest turning point. If formula_22 is a slowly changing function, formula_23 the number "n"max will be large, and the minimum error of the asymptotic series will be exponentially small. Application in non relativistic quantum mechanics. The above example may be applied specifically to the one-dimensional, time-independent Schrödinger equation, formula_24 which can be rewritten as formula_25 Approximation away from the turning points. The wavefunction can be rewritten as the exponential of another function S (closely related to the action), which could be complex, formula_26 so that its substitution in Schrödinger's equation gives: formula_27 Next, the semiclassical approximation is used. This means that each function is expanded as a power series in ħ. formula_28 Substituting in the equation, and only retaining terms up to first order in ℏ, we get: formula_29 which gives the following two relations: formula_30 which can be solved for 1D systems, first equation resulting in:formula_31and the second equation computed for the possible values of the above, is generally expressed as:formula_32 Thus, the resulting wavefunction in first order WKB approximation is presented as, formula_33 In the classically allowed region, namely the region where formula_34 the integrand in the exponent is imaginary and the approximate wave function is oscillatory. In the classically forbidden region formula_35, the solutions are growing or decaying. It is evident in the denominator that both of these approximate solutions become singular near the classical turning points, where "E" = "V"("x"), and cannot be valid. (The turning points are the points where the classical particle changes direction.) Hence, when formula_36, the wavefunction can be chosen to be expressed as:formula_37and for formula_35,formula_38The integration in this solution is computed between the classical turning point and the arbitrary position x'. Validity of WKB solutions. From the condition:formula_39 It follows that: formula_40 For which the following two inequalities are equivalent since the terms in either side are equivalent, as used in the WKB approximation: formula_41 The first inequality can be used to show the following: formula_42 where formula_43 is used and formula_44 is the local de Broglie wavelength of the wavefunction. The inequality implies that the variation of potential is assumed to be slowly varying. 
This condition can also be restated as the fractional change of formula_45 or that of the momentum formula_46, over the wavelength formula_47, being much smaller than formula_48. Similarly, it can be shown that formula_44 is also restricted by the underlying assumptions of the WKB approximation: formula_49 which implies that the de Broglie wavelength of the particle is slowly varying. Behavior near the turning points. We now consider the behavior of the wave function near the turning points. For this, we need a different method. Near the first turning point, "x"1, the term formula_50 can be expanded in a power series, formula_51 To first order, one finds formula_52 This differential equation is known as the Airy equation, and the solution may be written in terms of Airy functions, formula_53 Although for any fixed value of formula_54, the wave function is bounded near the turning points, the wave function will be peaked there, as can be seen in the images above. As formula_54 gets smaller, the height of the wave function at the turning points grows. It also follows from this approximation that: formula_55 Connection conditions. It now remains to construct a global (approximate) solution to the Schrödinger equation. For the wave function to be square-integrable, we must take only the exponentially decaying solution in the two classically forbidden regions. These must then "connect" properly through the turning points to the classically allowed region. For most values of "E", this matching procedure will not work: The function obtained by connecting the solution near formula_56 to the classically allowed region will not agree with the function obtained by connecting the solution near formula_57 to the classically allowed region. The requirement that the two functions agree imposes a condition on the energy "E", which will give an approximation to the exact quantum energy levels. The wavefunction's coefficients can be calculated for a simple problem shown in the figure. Let the first turning point, where the potential is decreasing over x, occur at formula_59 and the second turning point, where the potential is increasing over x, occur at formula_61. Given that we expect wavefunctions to be of the following form, we can calculate their coefficients by connecting the different regions using Airy and Bairy functions. formula_62 First classical turning point. For formula_58, i.e. the decreasing potential condition, or formula_59 in the given example shown by the figure, we require the exponential function to decay for negative values of x so that the wavefunction goes to zero. Considering Bairy functions to be the required connection formula, we get: formula_63 We cannot use the Airy function, since it gives growing exponential behaviour for negative x. When compared to the WKB solutions and matching their behaviours at formula_64, we conclude: formula_65, formula_66 and formula_67. Thus, letting some normalization constant be formula_68, the wavefunction is given for decreasing potential (with x) as: formula_69 Second classical turning point. For formula_60, i.e. the increasing potential condition, or formula_61 in the given example shown by the figure, we require the exponential function to decay for positive values of x so that the wavefunction goes to zero. Considering Airy functions to be the required connection formula, we get: formula_70 We cannot use the Bairy function, since it gives growing exponential behaviour for positive x. 
When compared to the WKB solutions and matching their behaviours at formula_64, we conclude: formula_71, formula_72 and formula_67. Thus, letting some normalization constant be formula_73, the wavefunction is given for increasing potential (with x) as: formula_74 Common oscillating wavefunction. Matching the two solutions in the region formula_75 requires the difference between the angles in these functions to be formula_76, where the formula_77 phase difference accounts for changing cosine to sine for the wavefunction and the formula_78 difference is allowed because a negation of the function can be absorbed by letting formula_79. Thus: formula_80 where "n" is a non-negative integer. This condition can also be rewritten as saying that the area enclosed by the classical energy curve is formula_81. Either way, the condition on the energy is a version of the Bohr–Sommerfeld quantization condition, with a "Maslov correction" equal to 1/2 (a worked example for the harmonic oscillator is sketched below). It is possible to show that after piecing together the approximations in the various regions, one obtains a good approximation to the actual eigenfunction. In particular, the Maslov-corrected Bohr–Sommerfeld energies are good approximations to the actual eigenvalues of the Schrödinger operator. Specifically, the error in the energies is small compared to the typical spacing of the quantum energy levels. Thus, although the "old quantum theory" of Bohr and Sommerfeld was ultimately replaced by the Schrödinger equation, some vestige of that theory remains, as an approximation to the eigenvalues of the appropriate Schrödinger operator. General connection conditions. Thus, from the two cases the connection formula is obtained at a classical turning point, formula_82: formula_83 and: formula_84 Away from the classical turning point, the WKB wavefunction is approximated by an oscillatory sine or cosine function in the classically allowed region, represented on the left, and by growing or decaying exponentials in the forbidden region, represented on the right. The direction of the implication follows from the dominance of the growing exponential over the decaying exponential. Thus, the oscillating or exponential part of the wavefunction in one region determines the form of the wavefunction in the other region of the potential, as well as at the associated turning point. Probability density. One can then compute the probability density associated with the approximate wave function. The probability that the quantum particle will be found in the classically forbidden region is small. In the classically allowed region, meanwhile, the probability the quantum particle will be found in a given interval is approximately the "fraction of time the classical particle spends in that interval" over one period of motion. Since the classical particle's velocity goes to zero at the turning points, it spends more time near the turning points than in other classically allowed regions. This observation accounts for the peak in the wave function (and its probability density) near the turning points. Applications of the WKB method to Schrödinger equations with a large variety of potentials and comparison with perturbation methods and path integrals are treated in Müller-Kirsten. Examples in quantum mechanics. Although the WKB approximation only applies to smoothly varying potentials, in the examples where rigid walls produce infinities for the potential, the WKB approximation can still be used to approximate wavefunctions in regions of smoothly varying potentials. 
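As the worked example referred to above (an added illustration for a smooth potential; the rigid-wall cases are treated next): for the harmonic oscillator \(V(x) = \tfrac{1}{2} m \omega^2 x^2\), the classical turning points are \(x = \pm a\) with \(a^2 = 2E/(m\omega^2)\), and the quantization integral evaluates to
\[
\int_{-a}^{a} \sqrt{2m\left(E - \tfrac{1}{2} m \omega^2 x^2\right)}\, dx = \frac{\pi E}{\omega} ,
\]
so the condition \(\pi E / \omega = \left(n + \tfrac{1}{2}\right)\pi\hbar\) gives \(E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega\), which happens to coincide with the exact eigenvalues.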
Since the rigid walls have a highly discontinuous potential, the connection condition cannot be used at these points, and the results obtained can also differ from those of the above treatment. Bound states for 1 rigid wall. The potential of such systems can be given in the form: formula_85 where formula_86. Finding the wavefunction in the bound region, i.e. within the classical turning points formula_87 and formula_88, and considering approximations far from formula_87 and formula_88 respectively, we have two solutions: formula_89 formula_90 Since the wavefunction must vanish near formula_87, we conclude formula_91. For the Airy functions near formula_88, we require formula_92. We require that the angles within these functions have a phase difference formula_76, where the formula_77 phase difference accounts for changing sine to cosine and formula_78 allows formula_93. formula_94 where "n" is a non-negative integer. Note that the right-hand side of this would instead be formula_95 if n were only allowed to take non-zero natural numbers. Thus we conclude that, for formula_96, formula_97 In 3 dimensions with spherical symmetry, the same condition holds with the position x replaced by the radial distance r, due to the similarity of the two problems. Bound states within 2 rigid walls. The potential of such systems can be given in the form: formula_98 where formula_86. For formula_99 between formula_87 and formula_88, which are thus the classical turning points, considering approximations far from formula_87 and formula_88 respectively, we have two solutions: formula_100 formula_101 Since the wavefunctions must vanish at formula_87 and formula_88, the phase difference here only needs to account for formula_78, which allows formula_93. Hence the condition becomes: formula_102 where formula_96 but not equal to zero, since that would make the wavefunction zero everywhere. Quantum bouncing ball. Consider the following potential a bouncing ball is subjected to: formula_103 The wavefunction solutions of the above can be obtained using the WKB method by considering only the odd-parity solutions of the alternative potential formula_104. The classical turning points are identified as formula_105 and formula_106. Thus, applying the quantization condition obtained in WKB: formula_107 Letting formula_108 where formula_96, and solving for formula_109 with the given formula_104, we get the quantum mechanical energy of a bouncing ball: formula_110 This result is also consistent with using the equation from the bound state of one rigid wall, without needing to consider the alternative potential. Quantum tunneling. The potential of such systems can be given in the form: formula_111 where formula_86. Its solution for an incident wave is given by: formula_112 where the wavefunction in the classically forbidden region is the WKB approximation with the growing exponential neglected, which is a fair assumption for wide potential barriers, through which the wavefunction is not expected to grow to large magnitudes. By the requirement of continuity of the wavefunction and its derivatives, the following relation can be shown: formula_113 where formula_114 and formula_115. Using the probability current formula_116, we express the incident, reflected, and transmitted currents as: formula_117 formula_118 formula_119 Thus, the transmission coefficient is found to be: formula_120 where formula_121, formula_114 and formula_115. The result can be stated as formula_122 where formula_123. See also. References.
[ { "math_id": 0, "text": " \\varepsilon \\frac{d^ny}{dx^n} + a(x)\\frac{d^{n-1}y}{dx^{n-1}} + \\cdots + k(x)\\frac{dy}{dx} + m(x)y= 0," }, { "math_id": 1, "text": " y(x) \\sim \\exp\\left[\\frac{1}{\\delta}\\sum_{n=0}^{\\infty} \\delta^n S_n(x)\\right]" }, { "math_id": 2, "text": " \\epsilon^2 \\frac{d^2 y}{dx^2} = Q(x) y, " }, { "math_id": 3, "text": "Q(x) \\neq 0" }, { "math_id": 4, "text": "y(x) = \\exp \\left[\\frac{1}{\\delta} \\sum_{n=0}^\\infty \\delta^n S_n(x)\\right]" }, { "math_id": 5, "text": "\\epsilon^2\\left[\\frac{1}{\\delta^2} \\left(\\sum_{n=0}^\\infty \\delta^nS_n'\\right)^2 + \\frac{1}{\\delta} \\sum_{n=0}^{\\infty}\\delta^n S_n''\\right] = Q(x)." }, { "math_id": 6, "text": "\\frac{\\epsilon^2}{\\delta^2} S_0'^2 + \\frac{2\\epsilon^2}{\\delta} S_0' S_1' + \\frac{\\epsilon^2}{\\delta} S_0'' = Q(x)." }, { "math_id": 7, "text": "\\frac{\\epsilon^2}{\\delta^2} S_0'^2 \\sim Q(x)." }, { "math_id": 8, "text": "\\epsilon^0: \\quad S_0'^2 = Q(x)," }, { "math_id": 9, "text": "S_0(x) = \\pm \\int_{x_0}^x \\sqrt{Q(x')}\\,dx'." }, { "math_id": 10, "text": "\\epsilon^1: \\quad 2 S_0' S_1' + S_0'' = 0." }, { "math_id": 11, "text": "S_1(x) = -\\frac{1}{4} \\ln Q(x) + k_1," }, { "math_id": 12, "text": "y(x) \\approx c_1 Q^{-\\frac{1}{4}}(x) \\exp\\left[\\frac{1}{\\epsilon} \\int_{x_0}^x \\sqrt{Q(t)} \\, dt\\right] + c_2 Q^{-\\frac{1}{4}}(x) \\exp\\left[-\\frac{1}{\\epsilon} \\int_{x_0}^x\\sqrt{Q(t)} \\, dt\\right]." }, { "math_id": 13, "text": " 2S_0' S_n' + S''_{n-1} + \\sum_{j=1}^{n-1}S'_j S'_{n-j} = 0" }, { "math_id": 14, "text": "n_\\max" }, { "math_id": 15, "text": "n_\\max \\approx 2\\epsilon^{-1} \\left| \\int_{x_0}^{x_{\\ast}} \\sqrt{-Q(z)}\\,dz \\right| , " }, { "math_id": 16, "text": "\\delta^{n_\\max}S_{n_\\max}(x_0) \\approx \\sqrt{\\frac{2\\pi}{n_\\max}} \\exp[-n_\\max], " }, { "math_id": 17, "text": "x_0" }, { "math_id": 18, "text": "y(x_0)" }, { "math_id": 19, "text": "x_{\\ast}" }, { "math_id": 20, "text": "Q(x_{\\ast}) = 0" }, { "math_id": 21, "text": "x = x_0" }, { "math_id": 22, "text": "\\epsilon^{-1}Q(x)" }, { "math_id": 23, "text": "\\epsilon\\left| \\frac{dQ}{dx} \\right| \\ll Q^2 , ^{\\text{[might be }Q^{3/2}\\text{?]}}" }, { "math_id": 24, "text": "-\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2} \\Psi(x) + V(x) \\Psi(x) = E \\Psi(x)," }, { "math_id": 25, "text": "\\frac{d^2}{dx^2} \\Psi(x) = \\frac{2m}{\\hbar^2} \\left( V(x) - E \\right) \\Psi(x)." 
}, { "math_id": 26, "text": "\\Psi(\\mathbf x) = e^{i S(\\mathbf{x}) \\over \\hbar}, " }, { "math_id": 27, "text": "i\\hbar \\nabla^2 S(\\mathbf x) - (\\nabla S(\\mathbf x))^2 = 2m \\left( V(\\mathbf x) - E \\right)," }, { "math_id": 28, "text": "S = S_0 + \\hbar S_1 + \\hbar^2 S_2 + \\cdots " }, { "math_id": 29, "text": "(\\nabla S_0+\\hbar \\nabla S_1)^2-i\\hbar(\\nabla^2 S_0) = 2m(E-V(\\mathbf x)) " }, { "math_id": 30, "text": "\\begin{align}\n(\\nabla S_0)^2= 2m (E-V(\\mathbf x)) = (p(\\mathbf x))^2\\\\ \n2\\nabla S_0 \\cdot \\nabla S_1 - i \\nabla^2 S_0 = 0 \n\\end{align}" }, { "math_id": 31, "text": "S_0(x) = \\pm \\int \\sqrt{ \\frac{2m}{\\hbar^2} \\left( E - V(x)\\right) } \\,dx=\\pm\\int p(x) \\,dx " }, { "math_id": 32, "text": "\\Psi(x) \\approx C_+ \\frac{ e^{+ \\frac i \\hbar \\int p(x)\\,dx} }{\\sqrt{|p(x)| }} + C_- \\frac{ e^{- \\frac i \\hbar \\int p(x)\\,dx} }{\\sqrt{|p(x)| }} " }, { "math_id": 33, "text": "\\Psi(x) \\approx \\frac{ C_{+} e^{+ \\frac{i}{\\hbar} \\int \\sqrt{2m \\left( E - V(x) \\right)}\\,dx} + C_{-} e^{- \\frac{i}{\\hbar} \\int \\sqrt{2 m \\left( E - V(x) \\right)}\\,dx} }{ \\sqrt[4]{2m \\mid E - V(x) \\mid} } " }, { "math_id": 34, "text": "V(x) < E" }, { "math_id": 35, "text": "V(x) > E" }, { "math_id": 36, "text": "E > V(x)" }, { "math_id": 37, "text": "\\Psi(x') \\approx C \\frac{\\cos{(\\frac 1 \\hbar \\int |p(x)|\\,dx} + \\alpha) }{\\sqrt{|p(x)| }} + D \\frac{ \\sin{(- \\frac 1 \\hbar \\int |p(x)|\\,dx} +\\alpha)}{\\sqrt{|p(x)| }} " }, { "math_id": 38, "text": "\\Psi(x') \\approx \\frac{ C_{+} e^{+ \\frac{i}{\\hbar} \\int |p(x)|\\,dx}}{\\sqrt{|p(x)|}} + \\frac{ C_{-} e^{- \\frac{i}{\\hbar} \\int |p(x)|\\,dx} }{ \\sqrt{|p(x)|} } . " }, { "math_id": 39, "text": "(S_0'(x))^2-(p(x))^2 + \\hbar (2 S_0'(x)S_1'(x)-iS_0''(x)) = 0 " }, { "math_id": 40, "text": "\\hbar\\mid 2 S_0'(x)S_1'(x)\\mid+\\hbar \\mid i S_0''(x)\\mid \\ll \\mid(S_0'(x))^2\\mid +\\mid (p(x))^2\\mid " }, { "math_id": 41, "text": "\\begin{align}\n\\hbar \\mid S_0''(x)\\mid \\ll \\mid(S_0'(x))^2\\mid\\\\\n2\\hbar \\mid S_0'S_1' \\mid \\ll \\mid(p'(x))^2\\mid\n\\end{align} " }, { "math_id": 42, "text": "\\begin{align}\n\\hbar \\mid S_0''(x)\\mid \\ll \\mid(p(x))\\mid^2\\\\\n\\frac{1}{2}\\frac{\\hbar}{|p(x)|}\\left|\\frac{dp^2}{dx}\\right| \\ll |p(x)|^2\\\\\n\\lambda \\left|\\frac{dV}{dx}\\right| \\ll \\frac{|p|^2}{m}\\\\\n\\end{align} " }, { "math_id": 43, "text": "|S_0'(x)|= |p(x)| " }, { "math_id": 44, "text": "\\lambda(x) " }, { "math_id": 45, "text": "E-V(x) " }, { "math_id": 46, "text": "p(x) " }, { "math_id": 47, "text": "\\lambda " }, { "math_id": 48, "text": "1 " }, { "math_id": 49, "text": "\\left|\\frac{d\\lambda}{dx}\\right| \\ll 1 " }, { "math_id": 50, "text": "\\frac{2m}{\\hbar^2}\\left(V(x)-E\\right)" }, { "math_id": 51, "text": "\\frac{2m}{\\hbar^2}\\left(V(x)-E\\right) = U_1 \\cdot (x - x_1) + U_2 \\cdot (x - x_1)^2 + \\cdots\\;." }, { "math_id": 52, "text": "\\frac{d^2}{dx^2} \\Psi(x) = U_1 \\cdot (x - x_1) \\cdot \\Psi(x)." }, { "math_id": 53, "text": "\\Psi(x) = C_A \\operatorname{Ai}\\left( \\sqrt[3]{U_1} \\cdot (x - x_1) \\right) + C_B \\operatorname{Bi}\\left( \\sqrt[3]{U_1} \\cdot (x - x_1) \\right)= C_A \\operatorname{Ai}\\left( u \\right) + C_B \\operatorname{Bi}\\left( u \\right)." 
}, { "math_id": 54, "text": "\\hbar" }, { "math_id": 55, "text": "\\frac{1}{\\hbar}\\int p(x) dx = \\sqrt{U_1} \\int \\sqrt{x-a}\\, dx = \\frac 2 3 (\\sqrt[3]{U_1} (x-a))^{\\frac 3 2} = \\frac 2 3 u^{\\frac 3 2}" }, { "math_id": 56, "text": "+\\infty" }, { "math_id": 57, "text": "-\\infty" }, { "math_id": 58, "text": "U_1 < 0" }, { "math_id": 59, "text": "x=x_1 \n" }, { "math_id": 60, "text": "U_1 > 0" }, { "math_id": 61, "text": "x=x_2\n" }, { "math_id": 62, "text": "\\begin{align} \n\\Psi_{V>E} (x) \\approx A \\frac{ e^{\\frac 2 3 u^\\frac{3}{2}}}{\\sqrt[4]{u}} + B \\frac{ e^{-\\frac 2 3 u^\\frac{3}{2}} }{\\sqrt[4]{u}} \\\\ \n\\Psi_{E>V}(x) \\approx C \\frac{\\cos{(\\frac 2 3 u^\\frac{3}{2} - \\alpha ) } }{\\sqrt[4]{u} } + D \\frac{ \\sin{(\\frac 2 3 u^\\frac{3}{2} - \\alpha)}}{\\sqrt[4]{u} }\\\\ \n\n\\end{align} " }, { "math_id": 63, "text": "\\begin{align}\n\\operatorname{Bi}(u) \\rightarrow -\\frac{1}{\\sqrt \\pi}\\frac{1}{\\sqrt[4]{u}} \\sin{\\left(\\frac 2 3 |u|^{\\frac 3 2} - \\frac \\pi 4\\right)} \\quad \\textrm{where,} \\quad u \\rightarrow -\\infty\\\\\n\\operatorname{Bi}(u) \\rightarrow \\frac{1}{\\sqrt \\pi}\\frac{1}{\\sqrt[4]{u}} e^{\\frac 2 3 u^{\\frac 3 2}} \\quad \\textrm{where,} \\quad u \\rightarrow +\\infty \\\\\n\\end{align} " }, { "math_id": 64, "text": "\\pm \\infty " }, { "math_id": 65, "text": "B=-D=N " }, { "math_id": 66, "text": "A=C=0 " }, { "math_id": 67, "text": "\\alpha = \\frac \\pi 4 " }, { "math_id": 68, "text": "N " }, { "math_id": 69, "text": "\\Psi_{\\text{WKB}}(x) = \\begin{cases}\n-\\frac{N}{\\sqrt{|p(x)|}}\\exp{(-\\frac 1 \\hbar \\int_{x}^{x_1} |p(x)| dx )} & \\text{if } x < x_1\\\\ \n \\frac{N}{\\sqrt{|p(x)|}} \\sin{(\\frac 1 \\hbar \\int_{x}^{x_1} |p(x)| dx - \\frac \\pi 4)} & \\text{if } x_2 > x > x_1 \\\\\n \\end{cases} " }, { "math_id": 70, "text": "\\begin{align}\n\\operatorname{Ai} (u)\\rightarrow \\frac{1}{2\\sqrt \\pi}\\frac{1}{\\sqrt[4]{u}} e^{-\\frac 2 3 u^{\\frac 3 2}} \\quad \\textrm{where,} \\quad u \\rightarrow + \\infty \\\\\n\\operatorname{Ai}(u) \\rightarrow \\frac{1}{\\sqrt \\pi}\\frac{1}{\\sqrt[4]{u}} \\cos{\\left(\\frac 2 3 |u|^{\\frac 3 2} - \\frac \\pi 4\\right)} \\quad \\textrm{where,} \\quad u \\rightarrow -\\infty\\\\\n\\end{align} " }, { "math_id": 71, "text": "2A=C=N' " }, { "math_id": 72, "text": "D=B=0 " }, { "math_id": 73, "text": "N' " }, { "math_id": 74, "text": "\\Psi_{\\text{WKB}}(x) = \\begin{cases}\n\n \\frac{N'}{\\sqrt{|p(x)|}} \\cos{(\\frac 1 \\hbar \\int_{x}^{x_2} |p(x)| dx - \\frac \\pi 4)} & \\text{if } x_1 < x < x_2 \\\\ \n \\frac{N'}{2\\sqrt{|p(x)|}}\\exp{(-\\frac 1 \\hbar \\int_{x_2}^{x} |p(x)| dx )} & \\text{if } x > x_2\\\\ \n \\end{cases}" }, { "math_id": 75, "text": "x_1<x<x_2 " }, { "math_id": 76, "text": "\\pi(n+1/2)" }, { "math_id": 77, "text": "\\frac \\pi 2" }, { "math_id": 78, "text": "n \\pi" }, { "math_id": 79, "text": "N= (-1)^n N' " }, { "math_id": 80, "text": "\\int_{x_1}^{x_2} \\sqrt{2m \\left( E-V(x)\\right)}\\,dx = (n+1/2)\\pi \\hbar ," }, { "math_id": 81, "text": "2\\pi\\hbar(n+1/2)" }, { "math_id": 82, "text": "x=a\n" }, { "math_id": 83, "text": " \\frac{N}{\\sqrt{|p(x)|}} \\sin{\\left(\\frac 1 \\hbar \\int_{x}^{a} |p(x)| dx - \\frac \\pi 4\\right)} \\Longrightarrow - \\frac{N}{\\sqrt{|p(x)|}}\\exp{\\left(\\frac 1 \\hbar \\int_{a}^{x} |p(x)| dx \\right)} " }, { "math_id": 84, "text": " \\frac{N'}{\\sqrt{|p(x)|}} \\cos{\\left(\\frac 1 \\hbar \\int_{x}^{a} |p(x)| dx - \\frac \\pi 4\\right)} \\Longleftarrow \\frac{N'}{2\\sqrt{|p(x)|}}\\exp{\\left(-\\frac 1 \\hbar \\int_{a}^{x} |p(x)| dx 
\\right)} " }, { "math_id": 85, "text": "V(x) = \\begin{cases}\nV(x) & \\text{if } x \\geq x_1\\\\\n \\infty & \\text{if } x < x_1 \\\\\n \\end{cases}" }, { "math_id": 86, "text": "x_1 < x_2 " }, { "math_id": 87, "text": "x_1 " }, { "math_id": 88, "text": "x_2 " }, { "math_id": 89, "text": "\\Psi_{\\text{WKB}}(x) = \n\\frac{A}{\\sqrt{|p(x)|}}\\sin{\\left(\\frac 1 \\hbar \\int_{x}^{x_1} |p(x)| dx +\\alpha \\right)} " }, { "math_id": 90, "text": "\\Psi_{\\text{WKB}}(x) = \n\\frac{B}{\\sqrt{|p(x)|}}\\cos{\\left(\\frac 1 \\hbar \\int_{x}^{x_2} |p(x)| dx +\\beta \\right)} " }, { "math_id": 91, "text": "\\alpha = 0 " }, { "math_id": 92, "text": "\\beta = - \\frac \\pi 4 " }, { "math_id": 93, "text": "B= (-1)^n A " }, { "math_id": 94, "text": "\\frac 1 \\hbar \\int_{x_1}^{x_2} |p(x)| dx = \\pi \\left(n + \\frac 3 4\\right) " }, { "math_id": 95, "text": "\\pi(n-1/4)" }, { "math_id": 96, "text": "n = 1,2,3,\\cdots " }, { "math_id": 97, "text": "\\int_{x_1}^{x_2} \\sqrt{2m \\left( E-V(x)\\right)}\\,dx = \\left(n-\\frac 1 4\\right)\\pi \\hbar " }, { "math_id": 98, "text": "V(x) = \\begin{cases}\n\\infty & \\text{if } x > x_2 \\\\\nV(x) & \\text{if } x_2 \\geq x \\geq x_1\\\\\n \\infty & \\text{if } x < x_1 \\\\\n \\end{cases} " }, { "math_id": 99, "text": "E \\geq V(x) " }, { "math_id": 100, "text": "\\Psi_{\\text{WKB}}(x) = \n\\frac{A}{\\sqrt{|p(x)|}}\\sin{\\left(\\frac 1 \\hbar \\int_{x}^{x_1} |p(x)| dx \\right)} " }, { "math_id": 101, "text": "\\Psi_{\\text{WKB}}(x) = \n\\frac{B}{\\sqrt{|p(x)|}}\\sin{\\left(\\frac 1 \\hbar \\int_{x}^{x_2} |p(x)| dx \\right)} " }, { "math_id": 102, "text": "\\int_{x_1}^{x_2} \\sqrt{2m \\left( E-V(x)\\right)}\\,dx = n\\pi \\hbar " }, { "math_id": 103, "text": "V(x) = \\begin{cases}\nmgx & \\text{if } x \\geq 0\\\\\n \\infty & \\text{if } x < 0 \\\\\n \\end{cases}" }, { "math_id": 104, "text": "V(x) = mg|x|" }, { "math_id": 105, "text": "x_1 = - {E \\over mg} " }, { "math_id": 106, "text": "x_2 = {E \\over mg} " }, { "math_id": 107, "text": "\\int_{x_1}^{x_2} \\sqrt{2m \\left( E-V(x)\\right)}\\,dx = (n_{\\text{odd}}+1/2)\\pi \\hbar" }, { "math_id": 108, "text": "n_{\\text{odd}}=2n-1 " }, { "math_id": 109, "text": "E " }, { "math_id": 110, "text": "E = {\\left(3\\left(n-\\frac 1 4\\right)\\pi\\right)^{\\frac 2 3} \\over 2}(mg^2\\hbar^2)^{\\frac 1 3}. 
" }, { "math_id": 111, "text": "V(x) = \\begin{cases}\n0 & \\text{if } x < x_1 \\\\\nV(x) & \\text{if } x_2 \\geq x \\geq x_1\\\\\n0 & \\text{if } x > x_2 \\\\ \n \\end{cases} " }, { "math_id": 112, "text": "V(x) = \\begin{cases}\nA \\exp({ i p_0 x \\over \\hbar} ) + B \\exp({- i p_0 x \\over \\hbar}) & \\text{if } x < x_1 \\\\\n \\frac{C}{\\sqrt{|p(x)|}}\\exp{(-\\frac 1 \\hbar \\int_{x_1}^{x} |p(x)| dx )} & \\text{if } x_2 \\geq x \\geq x_1\\\\ \nD \\exp({ i p_0 x \\over \\hbar} ) & \\text{if } x > x_2 \\\\\n\n \\end{cases} " }, { "math_id": 113, "text": "\\frac {|E|^2} {|A|^2} = \\frac{4}{(1+{a_1^2}/{p_0^2} )} \\frac{a_1}{a_2}\\exp\\left(-\\frac 2 \\hbar \\int_{x_1}^{x_2} |p(x')| dx'\\right) " }, { "math_id": 114, "text": "a_1 = |p(x_1)|" }, { "math_id": 115, "text": "a_2 = |p(x_2)| " }, { "math_id": 116, "text": "\\mathbf J(\\mathbf x,t) = \\frac{i\\hbar}{2m}(\\psi^* \\nabla\\psi-\\psi\\nabla\\psi^*) " }, { "math_id": 117, "text": "J_{\\text{inc.}} = \\frac{\\hbar}{2m}(\\frac{2p_0}{\\hbar}|A|^2) " }, { "math_id": 118, "text": "J_{\\text{ref.}} = \\frac{\\hbar}{2m}(\\frac{2p_0}{\\hbar}|B|^2) " }, { "math_id": 119, "text": "J_{\\text{trans.}} = \\frac{\\hbar}{2m}(\\frac{2p_0}{\\hbar}|E|^2) " }, { "math_id": 120, "text": "T = \\frac {|E|^2} {|A|^2} = \\frac{4}{(1+{a_1^2}/{p_0^2} )} \\frac{a_1}{a_2}\\exp\\left(-\\frac 2 \\hbar \\int_{x_1}^{x_2} |p(x')| dx'\\right) " }, { "math_id": 121, "text": "p(x) = \\sqrt {2m( E - V(x))} " }, { "math_id": 122, "text": "T \\sim ~ e^{-2\\gamma} " }, { "math_id": 123, "text": "\\gamma = \\int_{x_1}^{x_2} |p(x')| dx' " } ]
https://en.wikipedia.org/wiki?curid=700154
7001745
Impedance of free space
Physical constant; ratio of electric to magnetic field strength in a vacuum In electromagnetism, the impedance of free space, "Z"0, is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is, formula_0 where |"E"| is the electric field strength, and |"H"| is the magnetic field strength. Its presently accepted value is "Z"0 ≈ 376.730 Ω, where Ω is the ohm, the SI unit of electrical resistance. The impedance of free space (that is, the wave impedance of a plane wave in free space) is equal to the product of the vacuum permeability "μ"0 and the speed of light in vacuum "c"0. Before 2019, the values of both these constants were taken to be exact (they were given in the definitions of the ampere and the metre respectively), and the value of the impedance of free space was therefore likewise taken to be exact. However, with the redefinition of the SI base units that came into force on 20 May 2019, the impedance of free space is subject to experimental measurement because only the speed of light in vacuum "c"0 retains an exactly defined value. Terminology. The analogous quantity for a plane wave travelling through a dielectric medium is called the "intrinsic impedance" of the medium and designated η (eta). Hence "Z"0 is sometimes referred to as the "intrinsic impedance of free space", and given the symbol "η"0. It has numerous other synonyms, including "wave impedance of free space" and "characteristic impedance of vacuum". Relation to other constants. From the above definition, and the plane wave solution to Maxwell's equations, formula_1 where "μ"0 ≈ 4π × 10⁻⁷ H/m is the magnetic constant, also known as the permeability of free space, "ε"0 ≈ 8.854 × 10⁻¹² F/m is the electric constant, also known as the permittivity of free space, and "c" is the speed of light in free space. The reciprocal of "Z"0 is sometimes referred to as the "admittance of free space" and represented by the symbol "Y"0. Historical exact value. Between 1948 and 2019, the SI unit the ampere was defined by "choosing" the numerical value of "μ"0 to be exactly 4π × 10⁻⁷ H/m. Similarly, since 1983 the SI metre has been defined relative to the second by "choosing" the value of "c"0 to be 299 792 458 m/s. Consequently, until the 2019 redefinition, formula_2 "exactly", or formula_3 "exactly", or formula_4 This chain of dependencies changed when the ampere was redefined on 20 May 2019. Approximation as 120π ohms. It is very common in textbooks and papers written before about 1990 to substitute the approximate value 120π ohms for "Z"0. This is equivalent to taking the speed of light "c" to be precisely 3 × 10⁸ m/s in conjunction with the then-current definition of "μ"0 as 4π × 10⁻⁷ H/m. For example, Cheng 1989 states that the radiation resistance of a Hertzian dipole is formula_5 ("result in ohms; not exact"). This practice may be recognized from the resulting discrepancy in the units of the given formula. Consideration of the units, or more formally dimensional analysis, may be used to restore the formula to a more exact form, in this case to formula_6
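As a numerical illustration (an added sketch, not part of the article; the values of μ0 and ε0 below are approximate CODATA figures, since both are measured quantities after the 2019 redefinition), the equivalent expressions for "Z"0 can be checked directly:

#include <cmath>
#include <cstdio>

int main() {
    const double c    = 299792458.0;        // speed of light in vacuum, m/s (exact)
    const double mu0  = 1.25663706212e-6;   // vacuum permeability, H/m (approximate)
    const double eps0 = 8.8541878128e-12;   // vacuum permittivity, F/m (approximate)

    // Three equivalent expressions for the impedance of free space.
    std::printf("mu0 * c          = %.9f ohm\n", mu0 * c);
    std::printf("sqrt(mu0 / eps0) = %.9f ohm\n", std::sqrt(mu0 / eps0));
    std::printf("1 / (eps0 * c)   = %.9f ohm\n", 1.0 / (eps0 * c));
    // All three agree with Z0 = 376.730313... ohm to within the precision of the inputs.
    return 0;
}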
[ { "math_id": 0, "text": "Z_0 = \\frac{|\\mathbf E|}{|\\mathbf H|}," }, { "math_id": 1, "text": "Z_0 = \\frac{|\\mathbf E|}{|\\mathbf H|} = \\mu_0 c = \\sqrt{\\frac{\\mu_0}{\\varepsilon_0}} = \\frac{1}{\\varepsilon_0 c}," }, { "math_id": 2, "text": "Z_0 = \\mu_0 c = 4\\pi \\times 29.979\\,2458~\\Omega" }, { "math_id": 3, "text": "Z_0 = \\mu_0 c = \\pi \\times 119.916\\,9832~\\Omega" }, { "math_id": 4, "text": "Z_0 = 376.730\\,313\\,461\\,77\\ldots~\\Omega." }, { "math_id": 5, "text": "R_r \\approx 80 \\pi^2 \\left( \\frac{l}{\\lambda}\\right)^2" }, { "math_id": 6, "text": "R_r = \\frac{2 \\pi}{3} Z_0 \\left( \\frac{l}{\\lambda}\\right)^2." } ]
https://en.wikipedia.org/wiki?curid=7001745
70019617
PH-tree
Spatial index that partitions space based on the bit-representation of keys The PH-tree is a tree data structure used for spatial indexing of multi-dimensional data (keys) such as geographical coordinates, points, feature vectors, rectangles or bounding boxes. The PH-tree is a space-partitioning index with a structure similar to that of a quadtree or octree. However, unlike quadtrees, it uses a splitting policy, based on tries and similar to Crit bit trees, that relies on the bit-representation of the keys. The bit-based splitting policy, when combined with the use of different internal representations for nodes, provides scalability with high-dimensional data. The bit-representation splitting policy also imposes a maximum depth, thus avoiding degenerated trees and the need for rebalancing. Overview. The basic PH-tree is a spatial index that maps keys, which are d-dimensional vectors of integers, to user-defined values. The PH-tree is a multi-dimensional generalization of a Crit bit tree in the sense that a Crit bit tree is equivalent to a PH-tree with formula_0-dimensional keys. Like the Crit bit tree, and unlike most other spatial indexes, the PH-tree is a "map" rather than a "multimap". A d-dimensional PH-tree is a tree of nodes where each node partitions space by subdividing it into formula_1 "quadrants" (see below for how potentially large nodes scale with high-dimensional data). Each "quadrant" contains at most one "entry", either a key-value pair (leaf quadrant) or a key-subnode pair. For a key-subnode pair, the key represents the center of the subnode. The key is also the common prefix (bit-representation) of all keys in the subnode and its child subnodes. Each node has at least two entries, otherwise it is merged with the parent node. Other structural properties of PH-trees follow from the splitting strategy described below. Splitting strategy. Similar to most quadtrees, the PH-tree is a hierarchy of nodes where every node splits the space in all d dimensions. Thus, a node can have up to formula_1 subnodes, one for each quadrant. Quadrant numbering. The PH-tree uses the bits of the multi-dimensional keys to determine their position in the tree. All keys that have the same leading bits are stored in the same branch of the tree. For example, in a node at level L, to determine the quadrant where a key should be inserted (or removed or looked up), it looks at the L-th bit of each dimension of the key. For a 3D node with 8 quadrants (forming a cube), the L-th bit of the first dimension of the key determines whether the target quadrant is on the left or the right of the cube, the L-th bit of the second dimension determines whether it is at the front or the back, and the L-th bit of the third dimension determines bottom vs top, see picture. 1D example. Example with three 1D keys with 8-bit values: formula_4, formula_5 and formula_6. Adding formula_7 and formula_8 to an empty tree results in a single node. The two keys first differ in their 6th bit, so the node has a level formula_9 (starting with 0). The node has a 5-bit prefix representing the common 5 bits of both keys. The node has two quadrants, and each key is stored in one quadrant. Adding a third key formula_10 results in one additional node at formula_11, with one quadrant containing the original node as a subnode and the other quadrant containing the new key formula_12. 2D example. With 2D keys every node has formula_13 quadrants. The position of the quadrant where a key is stored is extracted from the respective bits of the keys, one bit from each dimension. 
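A minimal sketch of this bit extraction (an added illustration with assumed types and parameters; it mirrors the helper extract_bits_at_level used in the insert pseudocode below): the quadrant address of a key at level L is built by taking the L-th bit of each dimension, one bit per dimension.

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Build the hypercube address h of a d-dimensional key at a given level.
// Assumes unsigned w-bit key components, with level 0 addressing the most significant bit.
std::uint64_t extract_bits_at_level(const std::uint64_t* key, std::size_t d,
                                    unsigned level, unsigned w) {
    std::uint64_t h = 0;
    for (std::size_t i = 0; i < d; ++i) {
        std::uint64_t bit = (key[i] >> (w - 1 - level)) & 1u; // level-th bit of dimension i
        h = (h << 1) | bit;                                   // one bit per dimension
    }
    return h; // 0 <= h < 2^d, the quadrant position in the node's hypercube
}

int main() {
    // The 1D keys from the example above: 1 and 4 as 8-bit values; they first differ at level 5.
    std::uint64_t k0[1] = {1}, k1[1] = {4};
    std::printf("h(k0) = %llu, h(k1) = %llu at level 5\n",
                (unsigned long long) extract_bits_at_level(k0, 1, 5, 8),
                (unsigned long long) extract_bits_at_level(k1, 1, 5, 8));
    return 0; // prints h(k0) = 0, h(k1) = 1, i.e. the two quadrants of the node at level 5
}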
The four quadrants of the node form a 2D hypercube (quadrants may be empty). The bits that are extracted from the keys form the hypercube address formula_14, for formula_15 and for formula_16. formula_14 is effectively the position of the quadrant in the node's hypercube. Node structure. The ordering of the entries in a node always follows Z-ordering. Entries in a node can, for example, be stored in fixed size arrays of size formula_1. h is then effectively the array index of a quadrant. This allows lookup, insert and remove with formula_17 and there is no need to store h. Space complexity is however formula_18 per node, so it is less suitable for high dimensional data. Another solution is to store entries in a sorted collection, such as dynamic arrays and/or B-trees. This slows down lookup operations to formula_19 but reduces memory consumption to formula_20. The original implementation aimed for minimal memory consumption by switching between fixed and dynamic array representation depending on which uses less memory. Other implementations do not switch dynamically but use fixed arrays for formula_21, dynamic arrays for formula_22 and B-trees for high dimensional data. Operations. Lookup, insertion and removal operations all work very similar: find the correct node, then perform the operation on the node. Window queries and k-nearest-neighbor searches are more complex. Lookup. The "Lookup" operation determines whether a key exists in the tree. It walks down the tree and checks every node whether it contains a candidate subnode or a user value that matches the key. function lookup(key) is entry ← get_root_entry() // if the tree is not empty the root entry contains a root node while entry != NIL &amp;&amp; entry.is_subnode() do node ← entry.get_node() entry ← node.get_entry(key) repeat return entry // entry can be NIL function get_entry(key) is node ← current node entry ← node.get_entry_at(h) return entry // entry can be NIL Insert. The "Insert" operation inserts a new key-value pair into the tree unless they key already exists. The operation traverses the tree like the "Lookup" function and then inserts the key into the node. There are several cases to consider: function insert(node, key, value) level ← node.get_level() // Level is 0 for root h ← extract_bits_at_level(key, level) entry ← node.get_entry(h) if entry == NIL then // Case 1. entry_new ← create_entry(key, value) node.set_entry(h, entry_new) else if !entry.is_subnode() &amp;&amp; entry.get_key() == key then // Case 2. Collision, there is already an entry return ← failed_insertion else // Case 3. level_diff ← get_level_of_difference(key, entry.get_key()) entry_new ← create_entry(key, value) // new subnode with existing entry and new entry subnode_new ← create_node(level_diff, entry, entry_new) node.set_entry(h, subnode_new) end if return Remove. Removal works inversely to insertion, with the additional constraint that any subnode has to be removed if less than two entries remain. The remaining entry is moved to the parent node. Window queries. Window queries are queries that return all keys that lie inside a rectangular axis-aligned hyperbox. They can be defined to be two d-dimensional points formula_23 and formula_24 that represent the "lower left" and "upper right" corners of the query box. A trivial implementation traverses all entries in a node (starting with the root node) and if an entry matches it either adds it to the result list (if it is a user entry) or recursively traverses it (if it is a subnode). 
function query(node, min, max, result_list) is foreach entry ← node.get_entries() do if entry.is_subnode() then if entry.get_prefix() &gt;= min and entry.get_prefix() &lt;= max then query(entry.get_subnode(), min, max, result_list) end if else if entry.get_key() &gt;= min and entry.get_key() &lt;= max then result_list.add(entry) end if end if repeat return In order to accurately estimate query time complexity the analysis needs to include the dimensionality formula_3. Traversing and comparing all formula_25 entries in a node has a time complexity of formula_26 because each comparison of formula_3-dimensional key with formula_27 takes formula_28 time. Since nodes can have up to formula_1 entries, this does not scale well with increasing dimensionality formula_3. There are various ways how this approach can be improved by making use of the hypercube address h. Min h &amp; max h. The idea is to find minimum and maximum values for the quadrant's addresses formula_14 such that the search can avoid some quadrants that do not overlap with the query box. Let formula_29 be the center of a node (this is equal to the node's prefix) and formula_30 and formula_31 be two bit strings with formula_3 bits each. Also, let subscript formula_32 with formula_33 indicate the formula_32's bit of formula_30 and formula_31 and the formula_32'th dimension of formula_23, formula_24 and formula_29. Let formula_34 and formula_35. formula_30 then has a `formula_0` for every dimension where the "lower" half of the node and all quadrants in it does not overlap with the query box. Similarly, formula_30 has a `formula_36` for every dimension where the "upper" half does not overlap with the query box. formula_30 and formula_31 then present the lowest and highest formula_14 in a node that need to be traversed. Quadrants with formula_37 or formula_38 do not intersect with the query box. A proof is available in. With this, the above query function can be improved to: function query(node, min, max, result_list) is h_min ← calculate h_min h_max ← calculate h_max for each entry ← node.get_entries_range(h_min, h_max) do repeat return Calculating formula_30 and formula_31 is formula_39. Depending on the distribution of the occupied quadrants in a node this approach will allow avoiding anywhere from no to almost all key comparisons. This reduces the average traversal time but the resulting complexity is still formula_40. Check quadrants for overlap with query box. Between formula_30 and formula_31 there can still be quadrants that do not overlap with the query box. Idea: formula_30 and formula_31 each have one bit for every dimensions that indicates whether the query box overlaps with the lower/upper half of a node in that dimension. This can be used to quickly check whether a quadrant formula_14 overlaps with the query box without having to compare formula_3-dimensional keys: a quadrant formula_14 overlaps with the query box if for every `formula_36` bit in formula_14 there is a corresponding `formula_36` bit in formula_30 and for every `formula_0` bit in formula_14 there is a corresponding `formula_0` bit in formula_31. On a CPU with 64bit registers it is thus possible to check for overlap of up to formula_41-dimensional keys in formula_17. function is_overlap(h, h_min, h_max) is return (h | h_min) &amp; h_max == h // evaluates to 'true' if quadrant and query overlap. 
function query(node, min, max, result_list) is
    h_min ← calculate h_min
    h_max ← calculate h_max
    for each entry ← node.get_entries_range(h_min, h_max) do
        h ← entry.get_h();
        if (h | h_min) & h_max == h then
            // evaluates to 'true' if quadrant and query overlap.
        end if
    repeat
    return
The resulting time complexity is formula_42 compared to the formula_26 of the full iteration. Traverse quadrants that overlap with query box. For higher dimensions with larger nodes it is also possible to avoid iterating through all formula_14 and instead directly calculate the next higher formula_14 that overlaps with the query box. The first step puts `formula_0`-bits into a given formula_43 for all quadrants that have no overlap with the query box. The second step increments the adapted formula_14 and the added `formula_0`-bits trigger an overflow so that the non-overlapping quadrants are skipped. The last step removes all the undesirable bits used for triggering the overflow. The logic is described in detail in. The calculation works as follows:
function increment_h(h_input, h_min, h_max) is
    h_out = h_input | (~ h_max)      // pre-mask
    h_out += 1                       // increment
    h_out = (h_out & h_max) | h_min  // post-mask
    return h_out
Again, for formula_44 this can be done on most CPUs in formula_17. The resulting time complexity for traversing a node is formula_45. This works best if most of the quadrants that overlap with the query box are occupied with an entry. k-nearest neighbors. k nearest neighbor searches can be implemented using standard algorithms. Floating point keys. The PH-tree can only store integer values. Floating point values can trivially be stored as integers by casting them to an integer. However, the authors also propose an approach without loss of precision. Lossless conversion. Lossless conversion of a floating point value into an integer value (and back) without loss of precision can be achieved by simply interpreting the 32 or 64 bits of the floating point value as an integer (with 32 or 64 bits). Due to the way that IEEE 754 encodes floating point values, the resulting integer values have the same ordering as the original floating point values, at least for positive values. Ordering for negative values can be achieved by inverting the non-sign bits. Example implementations in Java:
long encode(double value) {
    long r = Double.doubleToRawLongBits(value);
    return (r >= 0) ? r : r ^ 0x7FFFFFFFFFFFFFFFL;
}
Example implementations in C++:
#include <cstdint>
#include <cstring>

std::int64_t encode(double value) {
    std::int64_t r;
    std::memcpy(&r, &value, sizeof(r));
    return r >= 0 ? r : r ^ 0x7FFFFFFFFFFFFFFFL;
}
Encoding (and the inverse decoding) is lossless for all floating point values. The ordering works well in practice, including formula_46 and formula_47. However, the integer representation also turns formula_48 into a normal comparable value (smaller than infinity), infinities are comparable to each other, and formula_49 is larger than formula_47. That means that, for example, a query range formula_50 will "not" match a value of formula_47. In order to match formula_47 the query range needs to be formula_51. Hyperbox keys. In order to store volumes (axis-aligned hyper-boxes) as keys, implementations typically use "corner representation", which converts the two formula_3-dimensional minimum and maximum corners of a box into a single key with formula_52 dimensions, for example by interleaving them: formula_53. This works trivially for lookup, insert and remove operations. 
Window queries need to be converted from formula_3-dimensional vectors to formula_52-dimensional vectors. For example, for a window query that matches all boxes that are completely "inside" the query box, the query keys are: formula_54 formula_55 For a window query operation that matches all boxes that "intersect" with a query box, the query keys are: formula_56 formula_57 Scalability. In high dimensions with fewer than formula_1 entries, a PH-tree may have only a single node, effectively "degenerating" into a B-Tree with a Z-order curve. The add/remove/lookup operations remain formula_58 and window queries can use the quadrant filters. However, this cannot avoid the curse of dimensionality: for high-dimensional data with formula_59 or formula_60, a PH-tree is only marginally better than a full scan. Uses. Research has reported fast add/remove/exact-match operations with large and fast-changing datasets. Window queries have been shown to work well, especially for small windows or large datasets. The PH-tree is mainly suited for in-memory use. The size of the nodes (number of entries) is fixed, while persistent storage tends to benefit from indexes with a configurable node size to align the node size with the page size on disk. This is easier with other spatial indexes, such as R-Trees. References.
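As a complement to the encode examples in the "Lossless conversion" subsection above (an added sketch, not part of the original text): because the bit transformation is its own inverse on the integer representation, the corresponding decode simply applies the same mask and reinterprets the bits.

#include <cstdint>
#include <cstring>

// Hypothetical inverse of the encode function shown above: negative encoded values
// get the same XOR mask applied again, then the bits are reinterpreted as a double.
double decode(std::int64_t r) {
    std::int64_t bits = (r >= 0) ? r : r ^ 0x7FFFFFFFFFFFFFFFL;
    double value;
    std::memcpy(&value, &bits, sizeof(value));
    return value;
}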
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "2^d" }, { "math_id": 2, "text": "2^n" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "k_0 = \\{1\\}_{base\\ 10} = \\{00000001\\}_{base\\ 2}" }, { "math_id": 5, "text": "k_1 = \\{4\\}_{10} = \\{00000100\\}_{2}" }, { "math_id": 6, "text": "k_2 = \\{35\\}_{10} = \\{00100011\\}_{2}" }, { "math_id": 7, "text": "k_0" }, { "math_id": 8, "text": "k_1" }, { "math_id": 9, "text": "L=5" }, { "math_id": 10, "text": "k_3" }, { "math_id": 11, "text": "L=2" }, { "math_id": 12, "text": "k_2" }, { "math_id": 13, "text": "2^d=4" }, { "math_id": 14, "text": "h" }, { "math_id": 15, "text": "k_0 \\rarr h=\\{00\\}_2" }, { "math_id": 16, "text": "k_1 \\rarr h=\\{01\\}_2" }, { "math_id": 17, "text": "O(1)" }, { "math_id": 18, "text": "O(2^d)" }, { "math_id": 19, "text": "O(\\log{n_{node\\_entries}})" }, { "math_id": 20, "text": "O(n_{node\\_entries})" }, { "math_id": 21, "text": "d \\lesssim 4" }, { "math_id": 22, "text": "d \\lesssim 8" }, { "math_id": 23, "text": "min" }, { "math_id": 24, "text": "max" }, { "math_id": 25, "text": "n_{node\\_entries}" }, { "math_id": 26, "text": "O(d \\cdot n_{node\\_entries})" }, { "math_id": 27, "text": "min/max" }, { "math_id": 28, "text": "O(d)" }, { "math_id": 29, "text": "C" }, { "math_id": 30, "text": "h_{min}" }, { "math_id": 31, "text": "h_{max}" }, { "math_id": 32, "text": "i" }, { "math_id": 33, "text": "0 \\leq i < d" }, { "math_id": 34, "text": "h_{min,i} = (min_i \\leq C_i)" }, { "math_id": 35, "text": "h_{max,i} = (max_i \\geq C_i)" }, { "math_id": 36, "text": "0" }, { "math_id": 37, "text": "h < h_{min}" }, { "math_id": 38, "text": "h > h_{max}" }, { "math_id": 39, "text": "O(2d) = O(d)" }, { "math_id": 40, "text": "O(d + d \\cdot n_{node\\_entries})" }, { "math_id": 41, "text": "64" }, { "math_id": 42, "text": "O(d + n_{node\\_entries})" }, { "math_id": 43, "text": "h_{input}" }, { "math_id": 44, "text": "d \\leq 64" }, { "math_id": 45, "text": "O(d + n_{overlapping\\_quadrants})" }, { "math_id": 46, "text": "\\pm\\infty" }, { "math_id": 47, "text": "-0.0" }, { "math_id": 48, "text": "NaN" }, { "math_id": 49, "text": "0.0" }, { "math_id": 50, "text": "[0.0, 10.0]" }, { "math_id": 51, "text": "[-0.0, 10.0]" }, { "math_id": 52, "text": "2d" }, { "math_id": 53, "text": "k = \\{min_0, max_0, min_1, max_1, ..., min_{d-1}, max_{d-1}\\}" }, { "math_id": 54, "text": "k_{min} = \\{min_0, min_0, min_1, min_1, ..., min_{d-1}, min_{d-1}\\}" }, { "math_id": 55, "text": "k_{max} = \\{max_0, max_0, max_1, max_1, ..., max_{d-1}, max_{d-1}\\}" }, { "math_id": 56, "text": "k_{min} = \\{-\\infty, min_0, -\\infty, min_1, ..., -\\infty, min_{d-1}\\}" }, { "math_id": 57, "text": "k_{max} = \\{max_0, +\\infty, max_1, +\\infty, ..., max_{d-1}, +\\infty\\}" }, { "math_id": 58, "text": "O(\\log{n})" }, { "math_id": 59, "text": "d=50" }, { "math_id": 60, "text": "d=100" } ]
https://en.wikipedia.org/wiki?curid=70019617
70020816
Multidimensional assignment problem
Generalization of linear assignment problem from two to multiple dimensions The multidimensional assignment problem (MAP) is a fundamental combinatorial optimization problem which was introduced by William Pierskalla. This problem can be seen as a generalization of the linear assignment problem. In words, the problem can be described as follows: An instance of the problem has a number of "agents" (i.e., the "cardinality" parameter) and a number of "job characteristics" (i.e., the "dimensionality" parameter) such as task, machine, time interval, etc. For example, an agent can be assigned to perform task X, on machine Y, during time interval Z. Any agent can be assigned to perform a job with any combination of unique job characteristics at some "cost". These costs may vary based on the assignment of an agent to a combination of job characteristics (specific task, machine, time interval, etc.). The problem is to minimize the "total cost" of assigning the agents so that the assignment of agents to each job characteristic is an injective function, or one-to-one function, from agents to a given job characteristic. Alternatively, describing the problem using graph theory: the multidimensional assignment problem consists of finding, in a weighted multipartite graph, a matching of a given size, in which the sum of the weights of the edges is minimum. Formal definition. Various formulations of this problem can be found in the literature. Using cost functions, the formula_0–dimensional assignment problem (or formula_0–MAP) can be stated as follows: Given formula_0 sets, formula_1 and formula_2, of equal size, together with a cost array or multidimensional weight function formula_3 : formula_4, find formula_5 permutations formula_6 : "A" → formula_7 such that the total cost function: formula_8 is minimized. Problem parameters. The multidimensional assignment problem (MAP) has two key parameters that determine "the size of a problem instance": the dimensionality parameter formula_0 and the cardinality parameter formula_9, where formula_10 is the number of agents. Size of cost array. Any problem instance of the MAP with parameters formula_11 has its specific cost array formula_3, which consists of formula_12 instance-specific cost/weight parameters formula_13. formula_12 is the "size" of the cost array. Number of feasible solutions. The feasible region or solution space of the MAP is very large. The number formula_14 of feasible solutions (the size of the MAP instance) depends on the MAP parameters formula_11. Specifically, formula_15. Computational complexity. The problem is generally NP-hard. In other words, there is no known algorithm for solving this problem in polynomial time, and so a long computational time may be needed for solving problem instances of even moderate size (based on the dimensionality and cardinality parameters). Applications. The problem has found application in many domains. References.
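To make the definition concrete, here is a brute-force sketch for the case formula_0 = 3 (an added illustration with assumed container types; it enumerates all formula_15 feasible solutions, so it is only usable for very small instances and serves only to spell out the objective being minimized).

#include <algorithm>
#include <cstddef>
#include <limits>
#include <numeric>
#include <vector>

// Brute-force 3-dimensional assignment: agents A = {0..N-1}, job sets J1 and J2,
// with cost array C[a][j1][j2]. Returns the minimum total cost over all pairs of
// permutations (pi1, pi2) mapping agents to J1 and J2.
double solve_3map(const std::vector<std::vector<std::vector<double>>>& C) {
    const std::size_t N = C.size();
    std::vector<std::size_t> p1(N), p2(N);
    std::iota(p1.begin(), p1.end(), 0);
    double best = std::numeric_limits<double>::infinity();
    do {
        std::iota(p2.begin(), p2.end(), 0);
        do {
            double total = 0.0;
            for (std::size_t a = 0; a < N; ++a)
                total += C[a][p1[a]][p2[a]];   // cost of assigning agent a to (p1(a), p2(a))
            best = std::min(best, total);
        } while (std::next_permutation(p2.begin(), p2.end()));
    } while (std::next_permutation(p1.begin(), p1.end()));
    return best;
}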
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "J_1, \\ldots J_{D-1}" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "A \\times J_1 \\times \\ldots \\times J_{D-1} \\rightarrow \\mathbb{R}_+" }, { "math_id": 5, "text": "D-1" }, { "math_id": 6, "text": "\\pi_{d}" }, { "math_id": 7, "text": "J_d" }, { "math_id": 8, "text": "\\sum_{a\\in A}C(a,\\pi_{1}(a),\\ldots,\\pi_{D-1}(a))" }, { "math_id": 9, "text": "N = |A|" }, { "math_id": 10, "text": "|A|" }, { "math_id": 11, "text": "D, N" }, { "math_id": 12, "text": "N^{D}" }, { "math_id": 13, "text": "C(a,a_1,\\ldots,a_{D-1})" }, { "math_id": 14, "text": "K" }, { "math_id": 15, "text": "K = (N!)^{D-1}" } ]
https://en.wikipedia.org/wiki?curid=70020816
70021988
History index model
Model in functional data analysis In statistical analysis, the standard framework of varying coefficient models (also known as concurrent regression models), where the current value of a response process is modeled as depending on the current value of a predictor process, is disadvantageous when it is assumed that past and present values of the predictor process influence the current response. In contrast to these approaches, the history index model includes the effect of recent past values of the predictor through the history index function. Specifically, the influence of past predictor values is modeled by a smooth history index function, while the effects on the response are described by smooth varying coefficient functions. Definition. In functional data analysis, functional data are considered as realizations of a stochastic process formula_0 that is an formula_1 process on a bounded and closed interval formula_2. Let the current functional response process formula_3 at time formula_4 depend on the recent history of the predictor process formula_5 in a sliding window of length formula_6. Then the history index model is defined as formula_7 (1) for formula_8 with a suitable formula_9. Here, formula_10 is the "history index function", defining the history index factor at formula_11 by quantifying the influence of the recent history of the predictor values on the response. In most cases, formula_10 is assumed to be smooth. For identifiability, formula_10 is normalized by requiring that formula_12 and that formula_13, which is no real restriction as formula_14. Estimation of the history index model. Estimation of the history index function. At each fixed time point formula_15, the model in (1) reduces to a functional linear model between the scalar response formula_3 and the functional predictor formula_16 Here, formula_17 is a centered functional covariate and formula_18 is a centered response process. Writing the model as formula_19 (2) with regression parameter functions formula_20 the functions formula_21 contain the factor formula_22 for each formula_15. To satisfy the constraint formula_23 and stabilize the resulting estimators, over an equidistant grid of time points formula_24 in formula_25 we can define formula_26. (3) Once the history index function is recovered, model (1) reduces to a varying coefficient model. Estimation of the varying coefficient function. Once the estimate of formula_22 has been obtained, the remaining unknown component in model (2) is the varying coefficient function formula_27. Define formula_28 From (2), formula_29, formula_30 and therefore formula_31 Application of the history index model. Applications of the varying coefficient model, which considers both past and present information at the same time, have received increasing attention in recent years. For example, Sentürk et al. propose a time-varying lagged regression model to assess the association of predictors, such as cognitive and functional impairment scores, with the frequency of clinic visits of older adults. Also, Zemplenyi et al. suggest a function-on-function regression model that leverages data from nearby DNA methylation probes to identify epigenetic regions that exhibit windows of susceptibility to ambient particulate matter smaller than 2.5 microns (PM2.5). In this vein, the history index model has also been used in various situations. Delay differential equation. The modeling of time dynamical systems is of interest in multiple scientific fields.
A delay differential equation (DDE) is a natural extension of a variety of differential equations, such as ordinary, random and stochastic differential equations, when the observed processes have an aftereffect. For dynamic learning of random differential equations with a delay (RDED), Dubey et al. utilize functional linear regression with history index to learn the distributed delay, where the regression parameter function then corresponds to a history index function for the process of interest. Let formula_32 denote a multivariate stochastic process where formula_33 is a continuously differentiable process of interest, formula_34 is a vector function of additional covariates, and formula_35 is a time window of interest. The model is defined as formula_36 formula_37 where formula_38 is an initial condition process, formula_39, formula_40 are delays, formula_41 is a smooth function, formula_42 are history index functions, and formula_43 is a random drift process that is independent of formula_32. For the purpose of illustration and technical derivations, we assume that formula_44 is a univariate process; the corresponding multivariate generalization is straightforward. The RDED described above has been used to predict the growth rate of COVID-19 cases in the United States. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
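To make model (1) above concrete, here is a rough numerical sketch in Python that evaluates the conditional mean by a Riemann sum over the sliding window. The window length, the history index function gamma, the coefficients beta0 and beta1, and the predictor trajectory X are all hypothetical choices for illustration, not part of the original formulation.

    import numpy as np

    Delta, T, dt = 1.0, 5.0, 0.01
    u = np.arange(0.0, Delta, dt)                    # lag grid covering [0, Delta)
    gamma = np.exp(-u)                               # unnormalized history index (toy choice)
    gamma /= np.sqrt(np.sum(gamma**2) * dt)          # enforce the constraint int gamma^2 = 1
    beta0 = 0.5                                      # toy intercept
    beta1 = lambda t: 1.0 + 0.3 * np.sin(t)          # toy varying coefficient function
    X = lambda s: np.sin(2.0 * s)                    # toy predictor trajectory

    def conditional_mean(t):
        history = X(t - u)                           # recent history X(t - u), u in [0, Delta)
        return beta0 + beta1(t) * np.sum(gamma * history) * dt   # Riemann sum of the integral

    t_grid = np.arange(Delta, T, dt)
    y_hat = np.array([conditional_mean(t) for t in t_grid])
    print(y_hat[:5])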
[ { "math_id": 0, "text": "X(t), t\\in \\mathcal{I}" }, { "math_id": 1, "text": "L^{2}" }, { "math_id": 2, "text": "\\mathcal{I}" }, { "math_id": 3, "text": "Y(t)" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\Delta" }, { "math_id": 7, "text": "\n\\mathrm{E}\\{Y(t)|X(t)\\}=\\beta_{0}+\\beta_{1}(t)\\int_{0}^{\\Delta}\\gamma(u)X(t-u)du,\n" }, { "math_id": 8, "text": "t \\in [\\Delta,T]" }, { "math_id": 9, "text": "T>0" }, { "math_id": 10, "text": "\\gamma(\\cdot)" }, { "math_id": 11, "text": "\\beta_{1}(\\cdot)" }, { "math_id": 12, "text": "\\int_{0}^{\\Delta} \\gamma^{2}(u) du = 1" }, { "math_id": 13, "text": "\\gamma(0)>0" }, { "math_id": 14, "text": "\\{-\\beta_{1}(t)\\}\\{-\\gamma(u)\\}=\\beta_{1}(t)\\gamma(u)" }, { "math_id": 15, "text": "\nt\n" }, { "math_id": 16, "text": "X(t), t-\\Delta \\leq s \\leq t." }, { "math_id": 17, "text": "X^{C}(s)=X(s)-\\mathrm{E}\\{X(s)\\}" }, { "math_id": 18, "text": "Y^{C}(s)=Y(s)-\\mathrm{E}\\{Y(s)\\}" }, { "math_id": 19, "text": "\n\\mathrm{E}\\{Y^{C}(t)|X^{C}(t)\\}=\\beta_{1}(t)\\int_{0}^{\\Delta}\\gamma(s)X^{C}(t-s)ds=\\int_{0}^{\\Delta}\\alpha_{t}(s)X^{C}(t-s)ds,\n" }, { "math_id": 20, "text": "\n\\alpha_{t}(s)=\\beta_{1}(t)\\gamma(s),\n" }, { "math_id": 21, "text": "\n\\alpha_{t}(s)\n" }, { "math_id": 22, "text": "\n\\gamma(s)\n" }, { "math_id": 23, "text": "\n\\int_{0}^{\\Delta}\\gamma^{2}(u)du=1\n" }, { "math_id": 24, "text": "\n(t_{1},\\ldots,t_{R})\n" }, { "math_id": 25, "text": "\n[\\Delta,T],\n" }, { "math_id": 26, "text": "\n\\gamma(s)=\\frac{\\Sigma_{r=1}^{R}\\alpha_{t_{r}}(s)}{[\\int_{0}^{\\Delta}\\{\\Sigma_{r=1}^{R}\\alpha_{t_{r}}(s)\\}^{2}ds]^{1/2}}\n" }, { "math_id": 27, "text": "\n\\beta_{1}\n" }, { "math_id": 28, "text": "\n\\tilde{X}(t)=\\int_{0}^{\\Delta}\\gamma(s)X^{C}(t-s)ds.\n" }, { "math_id": 29, "text": "\n\\mathrm{cov}\\{X(t),Y(t)\\}=\\mathrm{cov}[\\mathrm{E}\\{X^{C}(t)|X\\},\\mathrm{E}\\{Y^{C}(t)|X\\}]+\\mathrm{E}[\\mathrm{cov}(X^{C}(t),Y^{C}(t)|X)]=\\beta_{1}(t)\\int_{0}^{\\Delta}\\gamma(s)\\mathrm{cov}\\{X(t-s),X(t)\\}ds\n" }, { "math_id": 30, "text": "\n\\mathrm{cov}\\{X(t),\\tilde{X}(t)\\}=\\int_{0}^{\\Delta}\\gamma(s)\\mathrm{cov}\\{X(t-s),X(t)\\}ds,\n" }, { "math_id": 31, "text": "\\beta_{1}(t)=\\mathrm{cov}\\{X(t),Y(t)\\}/\\int_{0}^{\\Delta}\\gamma(s)\\mathrm{cov}\\{X(t-s),X(t)\\}ds." }, { "math_id": 32, "text": "\n(X(\\cdot),\\mathbf {U}(\\cdot))\n" }, { "math_id": 33, "text": "\nX(\\cdot)\n" }, { "math_id": 34, "text": "\n\\mathbf {U}(\\cdot)=(U_{1}(\\cdot),\\ldots,U_{J}(\\cdot))^{T}\n" }, { "math_id": 35, "text": "\n[t_{0},T]\n" }, { "math_id": 36, "text": "\n\\frac{dX(t)}{dt}=\\alpha(t)+\\int_{0}^{\\tau_{0}}\\gamma(s,t)X(t-s)ds + \\int_{0}^{\\tau_{1}} \\gamma_{1}(s,t)U(t-s)ds+Z(t), t\\in [t_{0},T],\n" }, { "math_id": 37, "text": " X(t)=g(t), t\\in[t_{0}-\\tau_{0},t_{0}]," }, { "math_id": 38, "text": "g" }, { "math_id": 39, "text": "\\tau_{0}" }, { "math_id": 40, "text": "\\tau_{1}" }, { "math_id": 41, "text": "\\alpha(t)" }, { "math_id": 42, "text": "\\gamma(s,t),\\gamma_{1}(s,t)" }, { "math_id": 43, "text": "Z(\\cdot)" }, { "math_id": 44, "text": "\nU(\\cdot)\n" } ]
https://en.wikipedia.org/wiki?curid=70021988
70022997
B-Y
Color difference signal formula_0 indicates a color difference signal between Blue (B) and a Luminance component, as part of a Luminance (Y) and Chrominance (C) color model. It has different meanings depending on the exact model used: See also. R-Y &lt;templatestyles src="Dmbox/styles.css" /&gt; Index of articles associated with the same name This includes a list of related items that share the same name (or similar names). If an internal link incorrectly led you here, you may wish to change the link to point directly to the intended article.
[ { "math_id": 0, "text": "B-Y" } ]
https://en.wikipedia.org/wiki?curid=70022997
7003
Cauchy distribution
Probability distribution The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution formula_1 is the distribution of the x-intercept of a ray issuing from formula_2 with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero. The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. It is one of the few stable distributions with a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution. History. A function with the form of the density function of the Cauchy distribution was studied geometrically by Fermat in 1659, and later was known as the witch of Agnesi, after Agnesi included it as an example in her 1748 calculus textbook. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter. Constructions. Here are the most important constructions. Rotational symmetry. If one stands in front of a line and kicks a ball with a direction (more precisely, an angle) uniformly at random towards the line, then the distribution of the point where the ball hits the line is a Cauchy distribution. More formally, consider a point at formula_3 in the x-y plane, and select a line passing the point, with its direction (angle with the formula_4-axis) chosen uniformly (between -90° and +90°) at random. The intersection of the line with the x-axis is the Cauchy distribution with location formula_5 and scale formula_0. This definition gives a simple way to sample from the standard Cauchy distribution. Let formula_6 be a sample from a uniform distribution from formula_7, then we can generate a sample, formula_4 from the standard Cauchy distribution using formula_8 When formula_9 and formula_10 are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio formula_11 has the standard Cauchy distribution. More generally, if formula_12 is a rotationally symmetric distribution on the plane, then the ratio formula_11 has the standard Cauchy distribution. Probability density function (PDF). 
The Cauchy distribution is the probability distribution with the following probability density function (PDF) formula_13 where formula_5 is the location parameter, specifying the location of the peak of the distribution, and formula_0 is the scale parameter which specifies the half-width at half-maximum (HWHM), alternatively formula_14 is full width at half maximum (FWHM). formula_0 is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining what would now be called a Dirac delta function. Properties of PDF. The maximum value or amplitude of the Cauchy PDF is formula_15, located at formula_16. It is sometimes convenient to express the PDF in terms of the complex parameter formula_17 formula_18 The special case when formula_19 and formula_20 is called the standard Cauchy distribution with the probability density function formula_21 In physics, a three-parameter Lorentzian function is often used: formula_22 where formula_23 is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where formula_24 Cumulative distribution function (CDF). The Cauchy distribution is the probability distribution with the following cumulative distribution function (CDF): formula_25 and the quantile function (inverse cdf) of the Cauchy distribution is formula_26 It follows that the first and third quartiles are formula_27, and hence the interquartile range is formula_14. For the standard distribution, the cumulative distribution function simplifies to arctangent function formula_28: formula_29 Other constructions. The standard Cauchy distribution is the Student's "t"-distribution with one degree of freedom, and so it may be constructed by any method that constructs the Student's t-distribution. If formula_30 is a formula_31 positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed formula_32 and any random formula_33-vector formula_34 independent of formula_35 and formula_36 such that formula_37 and formula_38 (defining a categorical distribution) it holds that formula_39 Properties. The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to formula_5. The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution. Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the family of Cauchy-distributed random variables is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions. Sum of Cauchy-distributed random variables. If formula_40 are an IID sample from the standard Cauchy distribution, then their sample mean formula_41 is also standard Cauchy distributed. In particular, the average does not converge to the mean, and so the standard Cauchy distribution does not follow the law of large numbers. 
This can be proved by repeated integration with the PDF, or more conveniently, by using the characteristic function of the standard Cauchy distribution (see below): formula_42 With this, we have formula_43, and so formula_44 has a standard Cauchy distribution. More generally, if formula_40 are independent and Cauchy distributed with location parameters formula_45 and scales formula_46, and formula_47 are real numbers, then formula_48 is Cauchy distributed with location formula_49 and scale formula_50. We see that there is no law of large numbers for any weighted sum of independent Cauchy distributions. This shows that the condition of finite variance in the central limit theorem cannot be dropped. It is also an example of a more general version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case. Central limit theorem. If formula_51 are an IID sample with PDF formula_52 such that formula_53 is finite, but nonzero, then formula_54 converges in distribution to a Cauchy distribution with scale formula_0. Characteristic function. Let formula_35 denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by formula_55 which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: formula_56 The "n"th moment of a distribution is the "n"th derivative of the characteristic function evaluated at formula_57. Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment. Kullback–Leibler divergence. The Kullback–Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula: formula_58 Any f-divergence between two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence. Closed-form expressions for the total variation, Jensen–Shannon divergence, Hellinger distance, etc. are available. Entropy. The entropy of the Cauchy distribution is given by: formula_59 The derivative of the quantile function, the quantile density function, for the Cauchy distribution is: formula_60 The differential entropy of a distribution can be defined in terms of its quantile density, specifically: formula_61 The Cauchy distribution is the maximum entropy probability distribution for a random variate formula_35 for which formula_62 Moments. The Cauchy distribution is usually used as an illustrative counterexample in elementary probability courses, as a distribution with no well-defined (or "indefinite") moments. Sample moments. If we take an IID sample formula_51 from the standard Cauchy distribution, then the sequence of their sample means is formula_63, which also has the standard Cauchy distribution. Consequently, no matter how many terms we take, the sample average does not converge. Similarly, the sample variance formula_64 also does not converge. A typical trajectory of formula_65 looks like long periods of slow convergence to zero, punctuated by large jumps away from zero, but never getting too far away. A typical trajectory of formula_66 looks similar, but the jumps accumulate faster than the decay, diverging to infinity. These two kinds of trajectories are plotted in the figure.
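The non-convergence of the running sample mean just described is easy to reproduce in a short simulation. The sketch below (sample size and seed chosen arbitrarily for illustration) draws standard Cauchy variates via the inverse-CDF construction x = tan(pi*(u - 1/2)) mentioned earlier and tracks the running mean.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    u = rng.uniform(0.0, 1.0, size=n)
    x = np.tan(np.pi * (u - 0.5))                    # standard Cauchy sample

    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    # The running mean keeps being thrown off by occasional huge observations;
    # by the stability property it is itself standard Cauchy for every n.
    print(running_mean[[99, 9_999, 99_999]])         # snapshots at n = 100, 10^4, 10^5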
Sample moments of order lower than 1 would converge to zero. Sample moments of order higher than 2 would diverge to infinity even faster than the sample variance. Mean. If a probability distribution has a density function formula_67, then the mean, if it exists, is given by the two-sided improper integral of the density weighted by its argument over the whole real line (1). We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals; that is, it can be split at an arbitrary real number formula_68 into the integral from minus infinity up to formula_68 plus the integral from formula_68 to infinity (2). For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in this sum (2) are infinite and have opposite sign. Hence (1) is undefined, and thus so is the mean. When the mean of a probability distribution is undefined, no one can compute a reliable average over the experimental data points, regardless of the sample's size. Note that the Cauchy principal value of the mean of the Cauchy distribution is formula_69 which is zero. On the other hand, the related integral formula_70 is "not" zero, as can be seen by computing the integral. This again shows that the mean (1) cannot exist. Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution. Smaller moments. The absolute moments for formula_71 are defined. For formula_72 we have formula_73 Higher moments. The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example, the raw second moment: formula_74 By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to formula_75 since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments are undefined since they are all based on the mean. The variance, which is the second central moment, is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. Moments of truncated distributions. Consider the truncated distribution defined by restricting the standard Cauchy distribution to the interval [−10^100, 10^100]. Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution. Estimation of parameters. Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if an i.i.d.
sample of size "n" is taken from a Cauchy distribution, one may calculate the sample mean as: formula_76 Although the sample values formula_77 will be concentrated about the central value formula_5, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of formula_5 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central value formula_5 and the scaling parameter formula_0 are needed. One simple method is to take the median value of the sample as an estimator of formula_5 and half the sample interquartile range as an estimator of formula_0. Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for formula_5 that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used. Maximum likelihood can also be used to estimate the parameters formula_5 and formula_0. However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size formula_78 is: formula_79 Maximizing the log-likelihood function with respect to formula_5 and formula_0 by taking the first derivative produces the following system of equations: formula_80 formula_81 Note that formula_82 is a monotone function in formula_0 and that the solution formula_0 must satisfy formula_83 Solving just for formula_5 requires solving a polynomial of degree formula_84, and solving just for formula_85 requires solving a polynomial of degree formula_86. Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating formula_5 using the sample median is only about 81% as asymptotically efficient as estimating formula_5 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of formula_5 as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for formula_5. The shape can be estimated using the median of absolute values, since for location-0 Cauchy variables formula_72 we have formula_87, with formula_0 the shape parameter. Multivariate Cauchy distribution. A random vector formula_88 is said to have the multivariate Cauchy distribution if every linear combination of its components formula_89 has a Cauchy distribution. That is, for any constant vector formula_90, the random variable formula_91 should have a univariate Cauchy distribution.
The characteristic function of a multivariate Cauchy distribution is given by: formula_92 where formula_93 and formula_94 are real functions with formula_93 a homogeneous function of degree one and formula_94 a positive homogeneous function of degree one. More formally: formula_95 formula_96 for all formula_97. An example of a bivariate Cauchy distribution can be given by: formula_98 Note that in this example, even though the covariance between formula_4 and formula_99 is 0, formula_4 and formula_99 are not statistically independent. We can also write this formula for a complex variable. Then the probability density function of the complex Cauchy distribution is: formula_100 Just as the standard Cauchy distribution is the Student's t-distribution with one degree of freedom, the multidimensional Cauchy density is the multivariate Student distribution with one degree of freedom. The density of a formula_101-dimensional Student distribution with one degree of freedom is: formula_102 The properties of the multidimensional Cauchy distribution are then special cases of those of the multivariate Student distribution. Lévy measure. The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter formula_120 is given, for formula_121 by: formula_122 where formula_123 and formula_124 can be expressed explicitly. In the case formula_125 of the Cauchy distribution, one has formula_126. This last representation is a consequence of the formula formula_127 Relativistic Breit–Wigner distribution. In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
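Connecting back to the "Estimation of parameters" section above, the following minimal sketch estimates the location by the sample median and the scale by half the sample interquartile range, two of the robust estimators mentioned there. The true parameter values, sample size and seed are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    x0_true, gamma_true, n = 2.0, 0.5, 10_000
    # Cauchy(x0, gamma) sample via the inverse-CDF construction
    sample = x0_true + gamma_true * np.tan(np.pi * (rng.uniform(size=n) - 0.5))

    x0_hat = np.median(sample)                    # robust location estimate
    q75, q25 = np.percentile(sample, [75, 25])
    gamma_hat = 0.5 * (q75 - q25)                 # half the interquartile range

    print(x0_hat, gamma_hat)                      # should land near 2.0 and 0.5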
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "f(x; x_0,\\gamma)" }, { "math_id": 2, "text": "(x_0,\\gamma)" }, { "math_id": 3, "text": "(x_0, \\gamma)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x_0" }, { "math_id": 6, "text": " u " }, { "math_id": 7, "text": "[0,1]" }, { "math_id": 8, "text": " x = \\tan\\left(\\pi(u-\\frac{1}{2})\\right) " }, { "math_id": 9, "text": "U" }, { "math_id": 10, "text": "V" }, { "math_id": 11, "text": "U/V" }, { "math_id": 12, "text": "(U, V)" }, { "math_id": 13, "text": "f(x; x_0,\\gamma) = \\frac{1}{\\pi\\gamma \\left[1 + \\left(\\frac{x - x_0}{\\gamma}\\right)^2\\right]} = { 1 \\over \\pi } \\left[ { \\gamma \\over (x - x_0)^2 + \\gamma^2 } \\right], " }, { "math_id": 14, "text": "2\\gamma" }, { "math_id": 15, "text": "\\frac{1}{\\pi \\gamma}" }, { "math_id": 16, "text": "x=x_0" }, { "math_id": 17, "text": "\\psi= x_0 + i\\gamma" }, { "math_id": 18, "text": "\nf(x;\\psi)=\\frac{1}{\\pi}\\,\\textrm{Im}\\left(\\frac{1}{x-\\psi}\\right)=\\frac{1}{\\pi}\\,\\textrm{Re}\\left(\\frac{-i}{x-\\psi}\\right)\n" }, { "math_id": 19, "text": "x_0 = 0" }, { "math_id": 20, "text": "\\gamma = 1" }, { "math_id": 21, "text": " f(x; 0,1) = \\frac{1}{\\pi (1 + x^2)}. \\!" }, { "math_id": 22, "text": "f(x; x_0,\\gamma,I) = \\frac{I}{\\left[1 + \\left(\\frac{x-x_0}{\\gamma}\\right)^2\\right]} = I \\left[ { \\gamma^2 \\over (x - x_0)^2 + \\gamma^2 } \\right], " }, { "math_id": 23, "text": "I" }, { "math_id": 24, "text": "I = \\frac{1}{\\pi\\gamma}.\\!" }, { "math_id": 25, "text": "F(x; x_0,\\gamma)=\\frac{1}{\\pi} \\arctan\\left(\\frac{x-x_0}{\\gamma}\\right)+\\frac{1}{2}" }, { "math_id": 26, "text": "Q(p; x_0,\\gamma) = x_0 + \\gamma\\,\\tan\\left[\\pi\\left(p-\\tfrac{1}{2}\\right)\\right]." }, { "math_id": 27, "text": "(x_0 - \\gamma, x_0 + \\gamma)" }, { "math_id": 28, "text": "\\arctan(x)" }, { "math_id": 29, "text": "F(x; 0,1)=\\frac{1}{\\pi} \\arctan\\left(x\\right)+\\frac{1}{2}" }, { "math_id": 30, "text": "\\Sigma" }, { "math_id": 31, "text": "p\\times p" }, { "math_id": 32, "text": "X,Y\\sim N(0,\\Sigma)" }, { "math_id": 33, "text": "p" }, { "math_id": 34, "text": "w" }, { "math_id": 35, "text": "X" }, { "math_id": 36, "text": "Y" }, { "math_id": 37, "text": "w_1+\\cdots+w_p=1" }, { "math_id": 38, "text": "w_i\\geq 0, i=1,\\ldots,p," }, { "math_id": 39, "text": "\\sum_{j=1}^p w_j\\frac{X_j}{Y_j}\\sim\\mathrm{Cauchy}(0,1)." }, { "math_id": 40, "text": "X_1, X_2, \\ldots, X_n" }, { "math_id": 41, "text": "\\bar X = \\frac 1n \\sum_i X_i" }, { "math_id": 42, "text": "\\varphi_X(t) = \\operatorname{E}\\left[e^{iXt} \\right ] = e^{-|t|}." }, { "math_id": 43, "text": "\\varphi_{\\sum_i X_i}(t) = e^{-n |t|} " }, { "math_id": 44, "text": "\\bar X" }, { "math_id": 45, "text": "x_1, \\ldots, x_n" }, { "math_id": 46, "text": "\\gamma_1, \\ldots, \\gamma_n" }, { "math_id": 47, "text": "a_1, \\ldots, a_n" }, { "math_id": 48, "text": "\\sum_i a_iX_i" }, { "math_id": 49, "text": "\\sum_i a_ix_i" }, { "math_id": 50, "text": "\\sum_i |a_i|\\gamma_i" }, { "math_id": 51, "text": "X_1, X_2, \\ldots " }, { "math_id": 52, "text": "\\rho" }, { "math_id": 53, "text": "\\lim_{c \\to \\infty}\\frac{1}{c} \\int_{-c}^c x^2\\rho(x) \\, dx = \\frac{2\\gamma}{\\pi} " }, { "math_id": 54, "text": "\\frac 1n \\sum_{i=1}^n X_i" }, { "math_id": 55, "text": "\\varphi_X(t) = \\operatorname{E}\\left[e^{iXt} \\right ] =\\int_{-\\infty}^\\infty f(x;x_0,\\gamma)e^{ixt}\\,dx = e^{ix_0t - \\gamma |t|}." 
}, { "math_id": 56, "text": "f(x; x_0,\\gamma) = \\frac{1}{2\\pi}\\int_{-\\infty}^\\infty \\varphi_X(t;x_0,\\gamma)e^{-ixt} \\, dt \\!" }, { "math_id": 57, "text": "t=0" }, { "math_id": 58, "text": "\n\\mathrm{KL}\\left(p_{x_{0,1}, \\gamma_{1}}: p_{x_{0,2}, \\gamma_{2}}\\right)=\\log \\frac{\\left(\\gamma_{1}+\\gamma_{2}\\right)^{2}+\\left(x_{0,1}-x_{0,2}\\right)^{2}}{4 \\gamma_{1} \\gamma_{2}}.\n" }, { "math_id": 59, "text": "\n\\begin{align}\nH(\\gamma) & =-\\int_{-\\infty}^\\infty f(x;x_0,\\gamma) \\log(f(x;x_0,\\gamma)) \\, dx \\\\[6pt]\n& =\\log(4\\pi\\gamma)\n\\end{align}\n" }, { "math_id": 60, "text": "Q'(p; \\gamma) = \\gamma\\,\\pi\\,{\\sec}^2\\left[\\pi\\left(p-\\tfrac 1 2 \\right)\\right].\\!" }, { "math_id": 61, "text": "H(\\gamma) = \\int_0^1 \\log\\,(Q'(p; \\gamma))\\,\\mathrm dp = \\log(4\\pi\\gamma)" }, { "math_id": 62, "text": "\\operatorname{E}[\\log(1+(X-x_0)^2/\\gamma^2)]=\\log 4" }, { "math_id": 63, "text": "S_n = \\frac 1n \\sum_{i=1}^n X_i" }, { "math_id": 64, "text": "V_n = \\frac 1n \\sum_{i=1}^n (X_i - S_n)^2" }, { "math_id": 65, "text": "S_1, S_2, ..." }, { "math_id": 66, "text": "V_1, V_2, ..." }, { "math_id": 67, "text": "f(x)" }, { "math_id": 68, "text": "a" }, { "math_id": 69, "text": "\\lim_{a\\to\\infty}\\int_{-a}^a x f(x)\\,dx " }, { "math_id": 70, "text": "\\lim_{a\\to\\infty}\\int_{-2a}^a x f(x)\\,dx " }, { "math_id": 71, "text": "p\\in(-1,1)" }, { "math_id": 72, "text": "X\\sim\\mathrm{Cauchy}(0,\\gamma)" }, { "math_id": 73, "text": "\\operatorname{E}[|X|^p] = \\gamma^p \\mathrm{sec}(\\pi p/2)." }, { "math_id": 74, "text": "\n\\begin{align}\n\\operatorname{E}[X^2] & \\propto \\int_{-\\infty}^\\infty \\frac{x^2}{1+x^2}\\,dx = \\int_{-\\infty}^\\infty 1 - \\frac{1}{1+x^2}\\,dx \\\\[8pt]\n& = \\int_{-\\infty}^\\infty dx - \\int_{-\\infty}^\\infty \\frac{1}{1+x^2}\\,dx = \\int_{-\\infty}^\\infty dx-\\pi = \\infty.\n\\end{align}\n" }, { "math_id": 75, "text": "\\infty - \\infty" }, { "math_id": 76, "text": "\\bar{x}=\\frac 1 n \\sum_{i=1}^n x_i" }, { "math_id": 77, "text": "x_i" }, { "math_id": 78, "text": "n" }, { "math_id": 79, "text": "\\hat\\ell(x_1,\\dotsc,x_n \\mid \\!x_0,\\gamma ) = - n \\log (\\gamma \\pi) - \\sum_{i=1}^n \\log \\left(1 + \\left(\\frac{x_i - x_0}{\\gamma}\\right)^2\\right)" }, { "math_id": 80, "text": " \\frac{d \\ell}{d x_{0}} = \\sum_{i=1}^n \\frac{2(x_i - x_0)}{\\gamma^2 + \\left(x_i - \\!x_0\\right)^2} =0" }, { "math_id": 81, "text": " \\frac{d \\ell}{d \\gamma} = \\sum_{i=1}^n \\frac{2\\left(x_i - x_0\\right)^2}{\\gamma (\\gamma^2 + \\left(x_i - x_0\\right)^2)} - \\frac{n}{\\gamma} = 0" }, { "math_id": 82, "text": " \\sum_{i=1}^n \\frac{\\left(x_i - x_0\\right)^2}{\\gamma^2 + \\left(x_i - x_0\\right)^2} " }, { "math_id": 83, "text": " \\min |x_i-x_0|\\le \\gamma\\le \\max |x_i-x_0|. " }, { "math_id": 84, "text": "2n-1" }, { "math_id": 85, "text": "\\,\\!\\gamma" }, { "math_id": 86, "text": "2n" }, { "math_id": 87, "text": "\\operatorname{median}(|X|) = \\gamma" }, { "math_id": 88, "text": "X=(X_1, \\ldots, X_k)^T" }, { "math_id": 89, "text": "Y=a_1X_1+ \\cdots + a_kX_k" }, { "math_id": 90, "text": "a\\in \\mathbb R^k" }, { "math_id": 91, "text": "Y=a^TX" }, { "math_id": 92, "text": "\\varphi_X(t) = e^{ix_0(t)-\\gamma(t)}, \\!" 
}, { "math_id": 93, "text": "x_0(t)" }, { "math_id": 94, "text": "\\gamma(t)" }, { "math_id": 95, "text": "x_0(at) = ax_0(t)," }, { "math_id": 96, "text": "\\gamma (at) = |a|\\gamma (t)," }, { "math_id": 97, "text": "t" }, { "math_id": 98, "text": "f(x, y; x_0,y_0,\\gamma)= { 1 \\over 2 \\pi } \\left[ { \\gamma \\over ((x - x_0)^2 + (y - y_0)^2 +\\gamma^2)^{3/2} } \\right] ." }, { "math_id": 99, "text": "y" }, { "math_id": 100, "text": "f(z; z_0,\\gamma)= { 1 \\over 2 \\pi } \\left[ { \\gamma \\over (|z-z_0|^2 +\\gamma^2)^{3/2} } \\right] ." }, { "math_id": 101, "text": "k" }, { "math_id": 102, "text": "f({\\mathbf x}; {\\mathbf\\mu},{\\mathbf\\Sigma}, k)= \\frac{\\Gamma\\left(\\frac{1+k}{2}\\right)}{\\Gamma(\\frac{1}{2})\\pi^{\\frac{k}{2}}\\left|{\\mathbf\\Sigma}\\right|^{\\frac{1}{2}}\\left[1+({\\mathbf x}-{\\mathbf\\mu})^T{\\mathbf\\Sigma}^{-1}({\\mathbf x}-{\\mathbf\\mu})\\right]^{\\frac{1+k}{2}}} ." }, { "math_id": 103, "text": "X \\sim \\operatorname{Cauchy}(x_0,\\gamma)" }, { "math_id": 104, "text": " kX + \\ell \\sim \\textrm{Cauchy}(x_0 k+\\ell, \\gamma |k|)" }, { "math_id": 105, "text": "X \\sim \\operatorname{Cauchy}(x_0, \\gamma_0)" }, { "math_id": 106, "text": "Y \\sim \\operatorname{Cauchy}(x_1,\\gamma_1)" }, { "math_id": 107, "text": " X+Y \\sim \\operatorname{Cauchy}(x_0+x_1,\\gamma_0 +\\gamma_1)" }, { "math_id": 108, "text": " X-Y \\sim \\operatorname{Cauchy}(x_0-x_1, \\gamma_0+\\gamma_1)" }, { "math_id": 109, "text": "X \\sim \\operatorname{Cauchy}(0,\\gamma)" }, { "math_id": 110, "text": " \\tfrac{1}{X} \\sim \\operatorname{Cauchy}(0, \\tfrac{1}{\\gamma})" }, { "math_id": 111, "text": "\\psi = x_0+i\\gamma" }, { "math_id": 112, "text": "X \\sim \\operatorname{Cauchy}(\\psi)" }, { "math_id": 113, "text": "X \\sim \\operatorname{Cauchy}(x_0,|\\gamma|)" }, { "math_id": 114, "text": "\\frac{aX+b}{cX+d} \\sim \\operatorname{Cauchy}\\left(\\frac{a\\psi+b}{c\\psi+d}\\right)" }, { "math_id": 115, "text": "b" }, { "math_id": 116, "text": "c" }, { "math_id": 117, "text": "d" }, { "math_id": 118, "text": "\\frac{X-i}{X+i} \\sim \\operatorname{CCauchy}\\left(\\frac{\\psi-i}{\\psi+i}\\right)" }, { "math_id": 119, "text": "\\operatorname{CCauchy}" }, { "math_id": 120, "text": " \\gamma " }, { "math_id": 121, "text": " X \\sim \\operatorname{Stable}(\\gamma, 0, 0)\\," }, { "math_id": 122, "text": "\\operatorname{E}\\left( e^{ixX} \\right) = \\exp\\left( \\int_{ \\mathbb{R} } (e^{ixy} - 1) \\Pi_\\gamma(dy) \\right)" }, { "math_id": 123, "text": "\\Pi_\\gamma(dy) = \\left( c_{1, \\gamma} \\frac{1}{y^{1 + \\gamma}} 1_{ \\left\\{y > 0\\right\\} } + c_{2,\\gamma} \\frac{1}{|y|^{1 + \\gamma}} 1_{\\left\\{ y < 0 \\right\\}} \\right) \\, dy \n" }, { "math_id": 124, "text": " c_{1, \\gamma}, c_{2, \\gamma} " }, { "math_id": 125, "text": " \\gamma = 1 " }, { "math_id": 126, "text": " c_{1, \\gamma} = c_{2, \\gamma} " }, { "math_id": 127, "text": "\\pi |x| = \\operatorname{PV }\\int_{\\mathbb{R} \\smallsetminus\\lbrace 0 \\rbrace} (1 - e^{ixy}) \\, \\frac{dy}{y^2} " }, { "math_id": 128, "text": "\\operatorname{Cauchy}(0,1) \\sim \\textrm{t}(\\mathrm{df}=1)\\," }, { "math_id": 129, "text": "\\operatorname{Cauchy}(\\mu,\\sigma) \\sim \\textrm{t}_{(\\mathrm{df}=1)}(\\mu,\\sigma)\\," }, { "math_id": 130, "text": "X, Y \\sim \\textrm{N}(0,1)\\, X, Y" }, { "math_id": 131, "text": " \\tfrac X Y\\sim \\textrm{Cauchy}(0,1)\\," }, { "math_id": 132, "text": "X \\sim \\textrm{U}(0,1)\\," }, { "math_id": 133, "text": " \\tan \\left( \\pi \\left(X-\\tfrac{1}{2}\\right) \\right) \\sim \\textrm{Cauchy}(0,1)\\," 
}, { "math_id": 134, "text": "X \\sim \\operatorname{Log-Cauchy}(0, 1)" }, { "math_id": 135, "text": "\\ln(X) \\sim \\textrm{Cauchy}(0, 1)" }, { "math_id": 136, "text": "\\tfrac1X \\sim \\operatorname{Cauchy}\\left(\\tfrac{x_0}{x_0^2+\\gamma^2},\\tfrac{\\gamma}{x_0^2+\\gamma^2}\\right)" }, { "math_id": 137, "text": "X \\sim \\textrm{Stable}(1, 0, \\gamma, \\mu)" }, { "math_id": 138, "text": "X \\sim \\operatorname{Cauchy}(\\mu, \\gamma)" }, { "math_id": 139, "text": "X \\sim \\textrm{N}(0,1)" }, { "math_id": 140, "text": "Z \\sim \\operatorname{Inverse-Gamma}(1/2, s^2/2)" }, { "math_id": 141, "text": "Y = \\mu + X \\sqrt Z \\sim \\operatorname{Cauchy}(\\mu,s)" }, { "math_id": 142, "text": "X \\sim \\textrm{N}(0,1) I\\{X\\ge0\\}" }, { "math_id": 143, "text": "\\hat{\\beta}" }, { "math_id": 144, "text": "x_{t+1}=\\beta{x}_t+\\varepsilon_{t+1},\\beta>1" } ]
https://en.wikipedia.org/wiki?curid=7003
70035399
Kleinman symmetry
Kleinman symmetry, named after American physicist D.A. Kleinman, gives a method of reducing the number of distinct coefficients in the rank-3 second order nonlinear optical susceptibility when the applied frequencies are much smaller than any resonant frequencies. Formulation. Assuming an instantaneous response, we can consider the second order polarisation to be given by formula_0, where formula_1 is the field applied to a nonlinear medium. For a lossless medium with spatial indices formula_2, we already have full permutation symmetry, where the spatial indices and frequencies are permuted simultaneously according to formula_3 In the regime where all applied frequencies satisfy formula_4 for any resonance frequency formula_5, this response must be independent of the applied frequencies, i.e. the susceptibility should be dispersionless, and so we can permute the spatial indices without also permuting the frequency arguments. This is the Kleinman symmetry condition. In second harmonic generation. Kleinman symmetry is in general too strong a condition to impose; however, it is useful in certain cases, such as second harmonic generation (SHG). Here, it is always possible to permute the last two indices, meaning it is convenient to use the contracted notation formula_6 which is a 3x6 rank-2 tensor where the index formula_7 is related to combinations of indices as shown in the figure. This notation is used in section VII of Kleinman's original work on the subject in 1962. Note that for processes other than SHG there may be a greater or lesser reduction in the number of terms required to fully describe the second order polarisation response.
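For readers without access to the figure, the index pairing behind the contracted notation is assumed here to follow the usual convention (the standard piezoelectric-style contraction, which is not spelled out in the text above): l = 1, 2, 3 for the pairs xx, yy, zz and l = 4, 5, 6 for yz/zy, xz/zx, xy/yx. A small Python sketch of building the 3x6 array from a full susceptibility tensor, using placeholder values rather than data for any real crystal:

    import numpy as np

    # (j, k) -> l, with 0-based axes x=0, y=1, z=2 and contracted index l = 1..6
    contraction = {(0, 0): 1, (1, 1): 2, (2, 2): 3,
                   (1, 2): 4, (2, 1): 4,
                   (0, 2): 5, (2, 0): 5,
                   (0, 1): 6, (1, 0): 6}

    # Placeholder chi^(2)_ijk for SHG, assumed symmetric in its last two indices
    chi2 = np.zeros((3, 3, 3))
    chi2[2, 0, 0] = chi2[2, 1, 1] = 4.0e-12          # made-up entries
    chi2[0, 0, 2] = chi2[0, 2, 0] = 4.0e-12          # made-up entries

    d = np.zeros((3, 6))          # contracted 3 x 6 array, d_il = chi_ijk / 2
    for i in range(3):
        for (j, k), l in contraction.items():
            d[i, l - 1] = 0.5 * chi2[i, j, k]

    print(d)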
[ { "math_id": 0, "text": "P(t) = \\epsilon_0 \\chi^{(2)}E^2(t)" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "i,j,k" }, { "math_id": 3, "text": "\\chi_{ijk}^{(2)}(\\omega_3;\\omega_1+\\omega_2) = \\chi_{jki}^{(2)}(\\omega_1;-\\omega_2+\\omega_3) = \\chi_{kij}^{(2)}(\\omega_2;\\omega_3-\\omega_1)\n= \\chi_{ikj}^{(2)}(\\omega_3;\\omega_2+\\omega_1) = \\chi_{kji}^{(2)}(\\omega_2;-\\omega_1+\\omega_3)\n= \\chi_{jik}^{(2)}(\\omega_1;\\omega_3-\\omega_2)" }, { "math_id": 4, "text": "\\omega_i \\ll \\omega_0" }, { "math_id": 5, "text": "\\omega_0" }, { "math_id": 6, "text": "d_{il} = \\frac{1}{2}\\chi^{(2)}_{ijk}(\\omega_3;\\omega_1,\\omega_2)" }, { "math_id": 7, "text": "l" } ]
https://en.wikipedia.org/wiki?curid=70035399
70036677
Howell normal form
In linear algebra and ring theory, the Howell normal form is a generalization of the row echelon form of a matrix over formula_0, the ring of integers modulo N. The row spans of two matrices agree if, and only if, their Howell normal forms agree. The Howell normal form generalizes the Hermite normal form, which is defined for matrices over formula_1. Definition. A matrix formula_2 over formula_0 is said to be in "row echelon form" if it has the following properties: With elementary transforms, each matrix in row echelon form can be reduced so that the following properties hold: If formula_4 adheres to both of the above properties, it is said to be in "reduced row echelon form". If formula_4 adheres to the following additional property, it is said to be in Howell normal form (formula_13 denotes the row span of formula_4): Properties. For every matrix formula_4 over formula_0, there is a unique matrix formula_20 in Howell normal form such that formula_21. The matrix formula_20 can be obtained from the matrix formula_4 via a sequence of elementary transforms. From this it follows that for two matrices formula_22 over formula_0, their row spans are equal if and only if their Howell normal forms are equal. For example, the matrices formula_23 have the same Howell normal form over formula_24: formula_25 Note that formula_4 and formula_26 are two distinct matrices in row echelon form, which would mean that their row spans differ if they were treated as matrices over a field. Moreover, they are in Hermite normal form, so their row spans would also differ if they were considered over formula_1, the ring of integers. However, formula_24 is not a field, and over general rings it is sometimes possible to nullify a row's pivot by multiplying the row by a scalar without nullifying the whole row. In this particular case, formula_27 This implies formula_28, which would not hold over any field or over the integers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
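A brute-force check of the example above can be written in a few lines of Python: it enumerates all 12^3 coefficient vectors to compute row spans over Z_12 and confirms that A, B and their Howell normal form H generate the same span. This is only a verification sketch for tiny matrices, not the Howell reduction algorithm itself.

    from itertools import product

    N = 12
    A = [(4, 1, 0), (0, 0, 5), (0, 0, 0)]
    B = [(8, 5, 5), (0, 9, 8), (0, 0, 10)]
    H = [(4, 1, 0), (0, 3, 0), (0, 0, 1)]

    def row_span(M):
        # All Z_N-linear combinations of the rows of M
        span = set()
        for coeffs in product(range(N), repeat=len(M)):
            v = tuple(sum(c * row[j] for c, row in zip(coeffs, M)) % N
                      for j in range(len(M[0])))
            span.add(v)
        return span

    SA, SB, SH = row_span(A), row_span(B), row_span(H)
    print(SA == SB == SH)   # expected: True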
[ { "math_id": 0, "text": "\\Z_N" }, { "math_id": 1, "text": "\\Z" }, { "math_id": 2, "text": "A \\in \\Z_N^{n \\times m}" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "1 \\leq i \\leq r" }, { "math_id": 6, "text": "j_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "j_1 < j_2 < \\dots < j_r" }, { "math_id": 9, "text": "A_{ij_i}" }, { "math_id": 10, "text": "N" }, { "math_id": 11, "text": "1 \\leq k < i \\leq r" }, { "math_id": 12, "text": "0 \\leq A_{kj_i} < A_{ij_i}" }, { "math_id": 13, "text": "S(A)" }, { "math_id": 14, "text": "v \\in S(A)" }, { "math_id": 15, "text": "v_k = 0" }, { "math_id": 16, "text": "1 \\leq k < j_i" }, { "math_id": 17, "text": "v \\in S(A_{i \\dots m})" }, { "math_id": 18, "text": "A_{i\\dots m}" }, { "math_id": 19, "text": "m" }, { "math_id": 20, "text": "H" }, { "math_id": 21, "text": "S(A)=S(H)" }, { "math_id": 22, "text": "A, B \\in \\Z_N^{n \\times m}" }, { "math_id": 23, "text": "A = \\begin{bmatrix}\n4 & 1 & 0 \\\\\n0 & 0 & 5 \\\\\n0 & 0 & 0\n\\end{bmatrix}, \\;\\;\\; B = \\begin{bmatrix}\n8 & 5 & 5 \\\\\n0 & 9 & 8 \\\\\n0 & 0 & 10\n\\end{bmatrix}" }, { "math_id": 24, "text": "\\Z_{12}" }, { "math_id": 25, "text": "H=\\begin{bmatrix}\n4 & 1 & 0 \\\\\n0 & 3 & 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}." }, { "math_id": 26, "text": "B" }, { "math_id": 27, "text": "3 \\cdot \\begin{bmatrix}4 & 1 & 0\\end{bmatrix} \\equiv \\begin{bmatrix}0 & 3 & 0\\end{bmatrix} \\pmod{12}." }, { "math_id": 28, "text": "\\begin{bmatrix}0 & 3 & 0\\end{bmatrix} \\in S(A)" } ]
https://en.wikipedia.org/wiki?curid=70036677
70048
Rectangle
Quadrilateral with four right angles In Euclidean plane geometry, a rectangle is a quadrilateral with four right angles. It can also be defined as: an equiangular quadrilateral, since equiangular means that all of its angles are equal (360°/4 = 90°); or a parallelogram containing a right angle. A rectangle with four sides of equal length is a "square". The term "oblong" is used to refer to a non-square rectangle. A rectangle with vertices "ABCD" would be denoted as  "ABCD". The word rectangle comes from the Latin "rectangulus", which is a combination of "rectus" (as an adjective, right, proper) and "angulus" (angle). A crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals (therefore only two sides are parallel). It is a special case of an antiparallelogram, and its angles are not right angles and not all equal, though opposite angles are equal. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles. Rectangles are involved in many tiling problems, such as tiling the plane by rectangles or tiling a rectangle by polygons. Characterizations. A convex quadrilateral is a rectangle if and only if it is any one of the following: Classification. Traditional hierarchy. A rectangle is a special case of a parallelogram in which each pair of adjacent sides is perpendicular. A parallelogram is a special case of a trapezium (known as a trapezoid in North America) in which "both" pairs of opposite sides are parallel and equal in length. A trapezium is a convex quadrilateral which has at least one pair of parallel opposite sides. A convex quadrilateral is Alternative hierarchy. De Villiers defines a rectangle more generally as any quadrilateral with axes of symmetry through each pair of opposite sides. This definition includes both right-angled rectangles and crossed rectangles. Each has an axis of symmetry parallel to and equidistant from a pair of opposite sides, and another which is the perpendicular bisector of those sides, but, in the case of the crossed rectangle, the first axis is not an axis of symmetry for either side that it bisects. Quadrilaterals with two axes of symmetry, each through a pair of opposite sides, belong to the larger class of quadrilaterals with at least one axis of symmetry through a pair of opposite sides. These quadrilaterals comprise isosceles trapezia and crossed isosceles trapezia (crossed quadrilaterals with the same vertex arrangement as isosceles trapezia). Properties. Symmetry. A rectangle is cyclic: all corners lie on a single circle. It is equiangular: all its corner angles are equal (each of 90 degrees). It is isogonal or vertex-transitive: all corners lie within the same symmetry orbit. It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°). Rectangle-rhombus duality. The dual polygon of a rectangle is a rhombus, as shown in the table below. Miscellaneous. A rectangle is a rectilinear polygon: its sides meet at right angles. A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area). Two rectangles, neither of which will fit inside the other, are said to be incomparable. Formulae. 
If a rectangle has length formula_2 and width formula_3, then it has area formula_4, perimeter formula_5, and diagonal length formula_6, and it is a square if and only if formula_7. Theorems. The isoperimetric theorem for rectangles states that among all rectangles of a given perimeter, the square has the largest area. The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle. A parallelogram with equal diagonals is a rectangle. The Japanese theorem for cyclic quadrilaterals states that the incentres of the four triangles determined by the vertices of a cyclic quadrilateral taken three at a time form a rectangle. The British flag theorem states that with vertices denoted "A", "B", "C", and "D", for any point "P" on the same plane of a rectangle: formula_8 For every convex body "C" in the plane, we can inscribe a rectangle "r" in "C" such that a homothetic copy "R" of "r" is circumscribed about "C" and the positive homothety ratio is at most 2 and formula_9. There exists a unique rectangle with sides formula_10 and formula_11, where formula_10 is less than formula_11, with two ways of being folded along a line through its center such that the area of overlap is minimized and each area yields a different shape – a triangle and a pentagon. The unique ratio of side lengths is formula_12. Crossed rectangles. A "crossed" "quadrilateral" (self-intersecting) consists of two opposite sides of a non-self-intersecting quadrilateral along with the two diagonals. Similarly, a crossed rectangle is a "crossed quadrilateral" which consists of two opposite sides of a rectangle along with the two diagonals. It has the same vertex arrangement as the rectangle. It appears as two identical triangles with a common vertex, but the geometric intersection is not considered a vertex. A "crossed quadrilateral" is sometimes likened to a bow tie or butterfly, sometimes called an "angular eight". A three-dimensional rectangular wire frame that is twisted can take the shape of a bow tie. The interior of a "crossed rectangle" can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise. A "crossed rectangle" may be considered equiangular if right and left turns are allowed. As with any "crossed quadrilateral", the sum of its interior angles is 720°, allowing for internal angles to appear on the outside and exceed 180°. A rectangle and a crossed rectangle are quadrilaterals with the following properties in common: Other rectangles. In spherical geometry, a spherical rectangle is a figure whose four edges are great circle arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. The surface of a sphere in Euclidean solid geometry is a non-Euclidean surface in the sense of elliptic geometry. Spherical geometry is the simplest form of elliptic geometry. In elliptic geometry, an elliptic rectangle is a figure in the elliptic plane whose four edges are elliptic arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. In hyperbolic geometry, a hyperbolic rectangle is a figure in the hyperbolic plane whose four edges are hyperbolic arcs which meet at equal angles less than 90°. Opposite arcs are equal in length. Tessellations. The rectangle is used in many periodic tessellation patterns, in brickwork, for example, these tilings: Squared, perfect, and other tiled rectangles. A rectangle tiled by squares, rectangles, or triangles is said to be a "squared", "rectangled", or "triangulated" (or "triangled") rectangle respectively.
The tiled rectangle is "perfect" if the tiles are similar and finite in number and no two tiles are the same size. If two such tiles are the same size, the tiling is "imperfect". In a perfect (or imperfect) triangled rectangle, the triangles must be right triangles. A database of all known perfect rectangles, perfect squares and related shapes can be found at squaring.net. The lowest number of squares needed for a perfect tiling of a rectangle is 9, and the lowest number needed for a perfect tiling of a square is 21, found in 1978 by computer search. A rectangle has commensurable sides if and only if it is tileable by a finite number of unequal squares. The same is true if the tiles are unequal isosceles right triangles. The tilings of rectangles by other tiles which have attracted the most attention are those by congruent non-rectangular polyominoes, allowing all rotations and reflections. There are also tilings by congruent polyaboloes. Unicode. The following Unicode code points depict rectangles: U+25AC ▬ BLACK RECTANGLE U+25AD ▭ WHITE RECTANGLE U+25AE ▮ BLACK VERTICAL RECTANGLE U+25AF ▯ WHITE VERTICAL RECTANGLE References. &lt;templatestyles src="Reflist/styles.css" /&gt;
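As a small numerical sanity check of the Formulae section and the British flag theorem stated above, the following Python sketch computes area, perimeter and diagonal for an axis-aligned rectangle and verifies the theorem at a randomly chosen point; the specific length, width and point are arbitrary.

    import math, random

    length, width = 3.0, 4.0
    area = length * width                       # A = l * w
    perimeter = 2 * (length + width)            # P = 2(l + w)
    diagonal = math.hypot(length, width)        # d = sqrt(l^2 + w^2)

    # British flag theorem: AP^2 + CP^2 = BP^2 + DP^2 for any point P in the plane
    A, B, C, D = (0, 0), (length, 0), (length, width), (0, width)
    P = (random.uniform(-5, 5), random.uniform(-5, 5))
    sq = lambda V: (P[0] - V[0]) ** 2 + (P[1] - V[1]) ** 2
    assert abs((sq(A) + sq(C)) - (sq(B) + sq(D))) < 1e-9

    print(area, perimeter, diagonal)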
[ { "math_id": 0, "text": "\\tfrac{1}{4}(a+c)(b+d)" }, { "math_id": 1, "text": "\\tfrac{1}{2} \\sqrt{(a^2+c^2)(b^2+d^2)}." }, { "math_id": 2, "text": "\\ell" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "A = \\ell w\\," }, { "math_id": 5, "text": "P = 2\\ell + 2w = 2(\\ell + w)\\," }, { "math_id": 6, "text": "d=\\sqrt{\\ell^2 + w^2}" }, { "math_id": 7, "text": "\\ell = w\\," }, { "math_id": 8, "text": "\\displaystyle (AP)^2 + (CP)^2 = (BP)^2 + (DP)^2." }, { "math_id": 9, "text": "0.5 \\text{ × Area}(R) \\leq \\text{Area}(C) \\leq 2 \\text{ × Area}(r)" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "b" }, { "math_id": 12, "text": "\\displaystyle \\frac {a} {b}=0.815023701..." } ]
https://en.wikipedia.org/wiki?curid=70048
7005062
Energy conversion efficiency
Ratio between the useful output and the input of a machine Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output, may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1. Overview. Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a device that performs such an energy transformation; a light bulb, for example, falls into this category. formula_0 Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal- or mission-oriented terms include effectiveness and efficacy. Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, which would result in a perpetual motion machine, which is impossible. However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. This measure is not called efficiency, but the coefficient of performance, or COP. It is a ratio of the useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5. When talking about the efficiency of heat engines and power stations, the convention should be stated, i.e., HHV (a.k.a. gross heating value) or LCV (a.k.a. net heating value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate, but both must be stated; failure to do so causes endless confusion. Related, more specific terms include Chemical conversion efficiency. The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature. A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mole (18.0154 grams) of water produced, and would require the removal of 48.701 kJ (0.01353 kWh) of heat energy per gram mole of water produced from the cell to maintain that temperature.
An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V. For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent to the enthalpy (heat) of reaction, or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum, so the energy efficiency is 0.83 compared to the ideal cell. A water electrolysis unit operating with a voltage higher than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature, and the energy efficiency would be less than 0.83. The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction. Fuel heating values and efficiency. In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous and is not condensed to liquid water, so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but it does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded. Wall-plug efficiency, luminous efficiency, and efficacy. In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the remaining percentage representing the losses. The wall-plug efficiency differs from the "luminous efficiency" in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens.
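Returning to the fuel cell and electrolysis figures above, they can be reproduced with a few lines of arithmetic. The following Python sketch assumes only the values stated in the text (Gibbs energy 237.129 kJ/mol, enthalpy 285.830 kJ/mol) plus Faraday's constant and the two electrons transferred per water molecule; it is an illustration, not part of the original article.

```python
# Check of the water-electrolysis figures quoted above.
dG = 237.129e3     # J per mol of water: theoretical minimum electrical input (ideal cell)
dH = 285.830e3     # J per mol of water: electrical input when no external heat is supplied
n, F = 2, 96485.0  # electrons transferred per water molecule, Faraday constant in C/mol

print(dH / dG)        # ~1.205 -> the "1.20 times greater than the theoretical minimum"
print(dG / dH)        # ~0.830 -> the energy efficiency of 0.83 relative to the ideal cell
print(dH / (n * F))   # ~1.48  -> the thermoneutral cell voltage in volts quoted above
```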
The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy. Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of &lt; 40%. Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. 
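The luminous-efficiency arithmetic above is simple enough to verify directly. The short Python sketch below uses only the figures quoted in the text (683 lm/W at 555 nm; 525 lm/W theoretical maximum and 200 lm/W actual efficacy for the 589 nm sodium line) and is an illustration, not part of the original article.

```python
def luminous_efficiency(efficacy_lm_per_w, max_efficacy_lm_per_w):
    # Luminous efficiency = achieved efficacy / theoretical maximum at that wavelength.
    return efficacy_lm_per_w / max_efficacy_lm_per_w

print(luminous_efficiency(683, 683))   # 1.0    -> 100% for an ideal monochromatic 555 nm source
print(luminous_efficiency(200, 525))   # ~0.381 -> the 38.1% quoted for low-pressure sodium lamps
```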
Krypton flashtubes are often chosen for pumping neodymium-doped lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than that of xenon, able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. "Luminaire efficiency" refers to the total lumen-output from the fixture per the lamp output. With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\eta = \\frac{P_\\mathrm{out}}{P_\\mathrm{in}}\n" } ]
https://en.wikipedia.org/wiki?curid=7005062
7005239
First-price sealed-bid auction
Auction where all participants concurrently submit undisclosed bids A first-price sealed-bid auction (FPSBA) is a common type of auction. It is also known as a blind auction. In this type of auction, all bidders simultaneously submit sealed bids so that no bidder knows the bid of any other participant. The highest bidder pays the price that was submitted. Strategic analysis. In an FPSBA, each bidder is characterized by their monetary valuation of the item for sale. Suppose Alice is a bidder and her valuation is formula_0. Then, if Alice is rational, she would like to bid the smallest amount that can make her win the item, as long as this amount is less than formula_0. For example, if there is another bidder Bob and he bids formula_1 and formula_2, then Alice would like to bid formula_3 (where formula_4 is the smallest amount that can be added, e.g. one cent). Unfortunately, Alice does not know what the other bidders are going to bid. Moreover, she does not even know the valuations of the other bidders. Hence, strategically, we have a Bayesian game - a game in which agents do not know the payoffs of the other agents. The interesting challenge in such a game is to find a Bayesian Nash equilibrium. However, this is not easy even when there are only two bidders. The situation is simpler when the valuations of the bidders are independent and identically distributed random variables, so that the valuations are all drawn from a known prior distribution. Example. Suppose there are two bidders, Alice and Bob, whose valuations formula_0 and formula_5 are drawn from a continuous uniform distribution over the interval [0,1]. Then, it is a Bayesian-Nash equilibrium when each bidder bids exactly half his/her value: Alice bids formula_6 and Bob bids formula_7. PROOF: The proof takes the point-of-view of Alice. We assume that she knows that Bob bids formula_8, but she does not know formula_5. We find the best response of Alice to Bob's strategy. Suppose Alice bids formula_9. There are two cases: if formula_10, then Alice wins the item and gains formula_11; this happens with probability formula_12. If formula_13, then Alice loses the item and gains nothing; this happens with probability formula_14. All in all, Alice's expected gain is: formula_15. The maximum gain is attained when formula_16. The derivative is (see Inverse functions and differentiation): formula_17 and it is zero when Alice's bid formula_9 satisfies: formula_18 Now, since we are looking for a symmetric equilibrium, we also want Alice's bid formula_9 to equal formula_19. So we have: formula_20 formula_21 formula_22 The solution of this differential equation is: formula_23. Generalization. Denote by formula_24 the valuation of player formula_25, and by formula_26 the highest valuation among the other players, that is, formula_27. Then, an FPSBA has a unique symmetric BNE in which the bid of player formula_25 is given by: formula_28 Incentive-compatible variant. The FPSBA is not incentive-compatible even in the weak sense of Bayesian-Nash-Incentive-Compatibility (BNIC), since there is no Bayesian-Nash equilibrium in which bidders report their true value. However, it is easy to create a variant of FPSBA which is BNIC, if the priors on the valuations are common knowledge. For example, for the case of Alice and Bob described above, in the BNIC variant each bidder reports a value, the bidder with the higher report wins, and the winner pays half of their own report. In effect, this variant simulates the Bayesian-Nash equilibrium strategies of the players, so in the Bayesian-Nash equilibrium, both bidders bid their true value. This example is a special case of a much more general principle: the revelation principle.
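The best-response argument in the two-bidder example above can also be checked numerically. The following Python sketch fixes an example valuation for Alice, assumes Bob bids half of a uniformly distributed value, and searches a grid of bids for the one maximizing Alice's expected gain; it is an illustration only, and the valuation 0.8 is an arbitrary assumption.

```python
# Alice's expected gain when bidding x against Bob, who bids b/2 with b ~ U[0,1]:
# she wins with probability P(b/2 < x) = min(2x, 1) and then earns a - x.
a = 0.8                                      # Alice's valuation (assumed for the example)
gain = lambda x: min(2 * x, 1.0) * (a - x)   # expected gain from bidding x

grid = [i / 10000 for i in range(10001)]     # candidate bids in [0, 1]
best = max(grid, key=gain)
print(best, a / 2)                           # both ~0.4: bidding half the value is optimal
```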
Comparison to second-price auction. A first-price sealed-bid auction can be compared to a sealed-bid second-price auction (SPSBA), in which the winner pays the second-highest bid rather than their own: the payment rules differ, and so do the equilibrium strategies of the bidders. The auctioneer's revenue is calculated in the example case, in which the valuations of the agents are drawn independently and uniformly at random from [0,1]. As an example, when there are formula_29 agents: in the first-price auction, the auctioneer's revenue is formula_30 (each agent bids half their value, and the winner pays their own bid); in the second-price auction, the revenue is formula_31 (each agent bids their true value, and the winner pays the second-highest bid). In both cases, the auctioneer's "expected" revenue is 1/3. The fact that the revenue is the same is not a coincidence - it is a special case of the revenue equivalence theorem. This holds only when the agents' valuations are statistically independent; when the valuations are dependent, we have a common value auction, and in this case, the revenue in a second-price auction is usually higher than in a first-price auction. The item for sale may not be sold if the final bid is not high enough to satisfy the seller, that is, the seller reserves the right to accept or reject the highest bid. If the seller announces to the bidders the reserve price, it is a public reserve price auction. In contrast, if the seller does not announce the reserve price before the sale but only after the sale, it is a secret reserve price auction. Comparison to other auctions. An FPSBA is distinct from the English auction in that bidders can only submit one bid each. Furthermore, as bidders cannot see the bids of other participants, they cannot adjust their own bids accordingly. The FPSBA has been argued to be strategically equivalent to the Dutch auction. What is effectively an FPSBA is commonly called tendering when used for procurement by companies and organizations, particularly for government contracts and auctions for mining leases. FPSBAs are thought to lead to low procurement costs through competition and low corruption through increased transparency, even though they may entail a higher ex-post extra cost of the completed project and extra time to complete it. A generalized first-price auction is a non-truthful auction mechanism for sponsored search (aka position auction). A generalization of both 1st-price and 2nd-price auctions is an auction in which the price is some convex combination of the 1st and 2nd price. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
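The claim that both auction formats yield an expected revenue of 1/3 with two uniform bidders is easy to confirm by simulation. A minimal Monte Carlo sketch in Python (an illustration, not part of the article):

```python
import random

N = 200_000
first_price = second_price = 0.0
for _ in range(N):
    a, b = random.random(), random.random()   # the two valuations, uniform on [0, 1]
    first_price += max(a / 2, b / 2)          # each bids half; winner pays own bid
    second_price += min(a, b)                 # truthful bids; winner pays the other's value

print(first_price / N, second_price / N)      # both close to 1/3
```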
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "y<a" }, { "math_id": 3, "text": "y+\\varepsilon" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "b" }, { "math_id": 6, "text": "a/2" }, { "math_id": 7, "text": "b/2" }, { "math_id": 8, "text": "f(b) = b/2" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "x\\geq f(b)" }, { "math_id": 11, "text": "a-x" }, { "math_id": 12, "text": "f^{-1}(x)=2x" }, { "math_id": 13, "text": "x<f(b)" }, { "math_id": 14, "text": "1-f^{-1}(x)" }, { "math_id": 15, "text": "G(x) = f^{-1}(x)\\cdot(a-x)" }, { "math_id": 16, "text": "G'(x)=0" }, { "math_id": 17, "text": "G'(x) = - f^{-1}(x) + (a-x)\\cdot {1 \\over f'(f^{-1}(x))}" }, { "math_id": 18, "text": "f^{-1}(x) = (a-x)\\cdot {1 \\over f'(f^{-1}(x))}" }, { "math_id": 19, "text": "f(a)" }, { "math_id": 20, "text": "f^{-1}(f(a)) = (a-f(a))\\cdot {1 \\over f'(f^{-1}(f(a)))}" }, { "math_id": 21, "text": "a = (a-f(a))\\cdot {1 \\over f'(a)}" }, { "math_id": 22, "text": "a f'(a) = (a-f(a))" }, { "math_id": 23, "text": "f(a) = a/2" }, { "math_id": 24, "text": "v_i" }, { "math_id": 25, "text": "i" }, { "math_id": 26, "text": "y_i" }, { "math_id": 27, "text": "y_i = \\max_{j\\neq i}{v_j}" }, { "math_id": 28, "text": "E[y_i | y_i < v_i]" }, { "math_id": 29, "text": "n=2" }, { "math_id": 30, "text": "\\max(a/2,b/2)" }, { "math_id": 31, "text": "\\min(a,b)" } ]
https://en.wikipedia.org/wiki?curid=7005239
70057643
YJK
Color space implemented by the Yamaha V9958 graphic chip YJK is a proprietary color space implemented by the Yamaha V9958 graphic chip on MSX2+ computers. It has the advantage of encoding images by implementing less resolution for color information than for brightness, taking advantage of the human visual system's lower acuity for color differences. This saves memory, transmission and computing power. YJK is composed of three components: formula_0, formula_1 and formula_2. formula_0 is similar to luminance (but computed differently), while formula_1 and formula_2 are the chrominance components (representing the red and green color differences). The formula_0 component is a 5-bit value (0 to 31), specified for each individual pixel. The formula_1 and formula_2 components are stored together in 6 bits (-32 to 31) and shared between 4 nearby pixels (4:2:0 chroma sub-sampling). This arrangement allows for the encoding of 19,268 different colors. While conceptually similar to YUV, the chroma sampling, the numerical relationship between the components, and the transformation to and from RGB are different in YJK. Formulas. The three component signals are created from an original RGB (red, green and blue) source. The weighted values of formula_3, formula_4 and formula_5 are added together to produce a single formula_0 signal, representing the overall brightness of that pixel. The formula_1 signal is then created by subtracting formula_0 from the red signal of the original RGB, and formula_2 by subtracting formula_0 from the green signal. These formulae approximate the conversion between the RGB color space and YJK: From RGB to YJK: formula_6 formula_7 formula_8 From YJK to RGB: formula_9 formula_10 formula_11 Note that the formula_0 component of YJK is not true luminance, since the green component has less weight than the blue component. Also, contrary to YUV, where chrominance is based on the red and blue color differences, in YJK it is calculated from the red and green differences. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
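The conversion formulas above translate directly into code. The sketch below is a plain transcription of the arithmetic; it deliberately ignores the V9958's quantization (5-bit formula_0, 6-bit formula_1 and formula_2 shared between four pixels) and is not an emulation of the chip.

```python
def rgb_to_yjk(r, g, b):
    y = b / 2 + r / 4 + g / 8
    return y, r - y, g - y                                # (Y, J, K)

def yjk_to_rgb(y, j, k):
    return y + j, y + k, 5 * y / 4 - j / 2 - k / 4        # (R, G, B)

# Round-trip check on an arbitrary colour:
print(yjk_to_rgb(*rgb_to_yjk(10, 20, 30)))                # ~(10.0, 20.0, 30.0)
```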
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "J" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "Y = B/2 + R/4 + G/8 " }, { "math_id": 7, "text": "J = R-Y " }, { "math_id": 8, "text": "K=G-Y " }, { "math_id": 9, "text": "R = Y + J " }, { "math_id": 10, "text": "G = Y + K " }, { "math_id": 11, "text": "B= (5/4)Y -J/2 -K/4 " } ]
https://en.wikipedia.org/wiki?curid=70057643
70059874
2 Samuel 9
Second Book of Samuel chapter 2 Samuel 9 is the ninth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 13 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 8–10. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of this chapter is as follows: A. David's intention (9:1) B. David speaks to Ziba (9:2–5) C. Mephibosheth does obeisance (9:6) D. David fulfills his covenant with Jonathan (9:7) C'. Mephibosheth does obeisance (9:8) B'. David speaks to Ziba (9:9–11) A'. David's intention is accomplished (9:12–13) This chapter is connected with events concerning the house of Saul and the death of Ishbosheth in 2 Samuel 2–4, but more strongly with the story of the Gibeonites' revenge in 2 Samuel 21:1–14, which should precede the accommodation of Mephibosheth at David's table. David inquires about the house of Saul (9:1–4). The section begins with David asking about 'showing kindness to the house of Saul for Jonathan's sake' (verse 1), which is based on his promises to Jonathan in their covenant before YHWH and his promise to Saul that he 'would not cut off his descendants'. The passage contains a flashback to a time early in David's reign (c. 999 BCE according to Steinmann), placed in this chapter in anticipation of the events in 2 Samuel 16 and 2 Samuel 19 concerning Ziba and Mephibosheth. David did not have much information about Saul's house since his escape from that house (c. 1015 BCE), whereas his last contact with Jonathan was at Horesh (1 Samuel 23:16–18; c. 1013–1012 BCE) about one year after Mephibosheth's birth. David's official knew about Saul's servant, Ziba, who had the information about Saul's descendants (verse 2). Ziba only identified Mephibosheth as the surviving member of the house of Saul, because Saul's sons from concubines and the grandsons through his daughter Merab (cf. 2 Samuel 21:8) were not considered heirs to Saul's house. "And David said, "Is there still anyone left of the house of Saul, that I may show him kindness for Jonathan's sake?"" David and Mephibosheth (9:5–13).
The presence of a Saulide in David's household emphasizes that David was dealing honorably with Jonathan's descendant, using the word 'kindness' ("khesed"), which occurs in verses 1, 3, and 7, to conform with Jonathan's appeal to 'show me the kindness ("khesed')' of the Lord' in . David granted Mephibosheth son of Jonathan special patronage (verse 7), at royal expense (v. 11), his grandfather's property restored to him (verse 7) and arrangements were made for Ziba to act as estate manager to provide for the family (verse 10). Saul's estate (verse 7) was a crown property, so it should belong to David after he became king, but at that time some may also be still the property of remaining members in Saul's house, including his children of concubines, relatives and the family of his daughters. Merab, Saul's oldest daughter, was with her husband, Adriel, in Meholah (1 Samuel 14:49; 18:17–19), whereas Michal, Saul's other daughter, resided with her husband, king David, in Jerusalem (2 Samuel 6:16–23), so some lands in other areas may have been maintained by caretakers, including Ziba. This could be how David's officials were able to trace Ziba. Now David could declare that all the estate should be given to Mephibosheth as Saul's sole legitimate heir. "Then King David sent and brought him out of the house of Machir the son of Ammiel, from Lo Debar." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70059874
70060173
Sum of four cubes problem
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Is every integer the sum of four perfect cubes? The sum of four cubes problem asks whether every integer is the sum of four cubes of integers. It is conjectured the answer is affirmative, but this conjecture has been neither proven nor disproven. Some of the cubes may be negative numbers, in contrast to Waring's problem on sums of cubes, where they are required to be positive. Partial results. The substitutions formula_0, formula_1, and formula_2 in the identity formula_3 lead to the identity formula_4 which shows that every integer multiple of 6 is the sum of four cubes. (More generally, the same proof shows that every multiple of 6 in every ring is the sum of four cubes.) Since every integer is congruent to its own cube modulo 6, it follows that every integer is the sum of "five" cubes of integers. In 1966, V. A. Demjanenko proved that any integer that is congruent neither to 4 nor to −4 modulo 9 is the sum of four cubes of integers. For this, he used the following identities: formula_5 These identities (and those derived from them by passing to opposites) immediately show that any integer which is congruent neither to 4 nor to −4 modulo 9 and is congruent neither to 2 nor to −2 modulo 18 is a sum of four cubes of integers. Using more subtle reasonings, Demjanenko proved that integers congruent to 2 or to −2 modulo 18 are also sums of four cubes of integers. The problem therefore only arises for integers congruent to 4 or to −4 modulo 9. One example is formula_6 but it is not known if every such integer can be written as a sum of four cubes. 18x±2 case. According to Henri Cohen's translation of Demjanenko's paper, these identities formula_7 together with their complementary identities leave the 108x±38 case, proving the proposition. He also proves the 108x±38 case in his paper. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
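The identities quoted above, and the explicit representation of 13, can be checked mechanically. A short Python sketch (an illustration, not part of the original article):

```python
def sum_of_cubes(*xs):
    return sum(x ** 3 for x in xs)

for t in range(-50, 51):
    assert sum_of_cubes(t + 1, -t, -t, t - 1) == 6 * t                          # multiples of 6
for x in range(-50, 51):
    assert sum_of_cubes(x, -x + 4, 2 * x - 5, -2 * x + 4) == 6 * x + 3          # 6x + 3
    assert sum_of_cubes(x + 2, 6 * x - 1, 8 * x - 2, -9 * x + 2) == 18 * x + 7  # 18x + 7

print(sum_of_cubes(10, 7, 1, -11))   # 13, the example given for the hard residue classes mod 9
```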
[ { "math_id": 0, "text": "X = T" }, { "math_id": 1, "text": "Y = T" }, { "math_id": 2, "text": "Z = - T + 1" }, { "math_id": 3, "text": " (X + Y + Z)^{3} - X^{3} - Y^{3} - Z^{3} = 3 (X + Y) (X + Z) (Y + Z)" }, { "math_id": 4, "text": "(T + 1)^{3} + (- T)^{3} + (- T)^{3} + (T - 1)^{3} = 6 T ," }, { "math_id": 5, "text": "\\begin{align}\n6x &= (x+1)^{3}+(x-1)^{3}-x^{3}-x^{3} \\\\\n6x+3 &= x^3+(-x+4)^3+(2x-5)^3+(-2x+4)^3 \\\\\n18x+1 &= (2x+14)^3+(-2x-23)^3+(-3x-26)^3+(3x+30)^3 \\\\\n18x+7 &= (x+2)^3+(6x-1)^3+(8x-2)^3+(-9x+2)^3 \\\\\n18x+8 &= (x-5)^3+(-x+14)^3+(-3x+29)^3+(3x-30)^3\\ .\n\\end{align}" }, { "math_id": 6, "text": " 13 = 10^3 + 7^3 + 1^3 + (-11)^3," }, { "math_id": 7, "text": "\\begin{align}\n54x + 2 & = (29484x^{2} + 2211x + 43)^{3} + (-29484x^{2} - 2157x - 41)^{3} + (9828x^{2} + 485x + 4)^{3} + (-9828x^{2} - 971x - 22)^{3} \\\\\n54x + 20 & = (3x - 11)^{3} + (-3x + 10)^{3} + (x + 2)^{3} + (-x + 7)^{3} \\\\\n216x - 16 & = (14742x^{2} - 2157x + 82)^{3} + (-14742x^{2} + 2211x - 86)^{3} + (4914x^{2} - 971x + 44)^{3} + (-4914x^{2} + 485x - 8)^{3} \\\\\n216x + 92 & = (3x - 164)^{3} + (-3x + 160)^{3} + (x - 35)^{3} + (-x + 71)^{3}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=70060173
7006101
Leftover hash lemma
Lemma in cryptography The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key, and know that an adversary was able to learn the values of some "t" &lt; "n" bits of that key, but you do not know which "t" bits. Can you still use your key, or do you have to throw it away and choose a new key? The leftover hash lemma tells us that we can produce a key of about "n" − "t" bits, over which the adversary has almost no knowledge. Since the adversary knows all but "n" − "t" bits, this is almost optimal. More precisely, the leftover hash lemma tells us that we can extract a length asymptotic to formula_0 (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X, will have almost no knowledge about the extracted value. That is why this is also called privacy amplification (see privacy amplification section in the article Quantum key distribution). Randomness extractors achieve the same result, but use (normally) less randomness. Let X be a random variable over formula_1 and let formula_2. Let formula_3 be a 2-universal hash function. If formula_4 then for S uniform over formula_5 and independent of X, we have: formula_6 where U is uniform over formula_7 and independent of S. formula_8 is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that formula_9 is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X. formula_10 is a statistical distance between X and Y. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
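As a back-of-the-envelope illustration of the lemma's parameters (not taken from the references), the Python sketch below computes how many almost-uniform bits can be extracted from an n-bit key of which t bits may be known to the adversary, for a chosen statistical distance ε:

```python
import math

def extractable_bits(n_bits, leaked_bits, eps):
    # The min-entropy of the key is at least n - t; the lemma then allows roughly
    # m = H_min - 2*log2(1/eps) output bits within statistical distance eps of uniform.
    min_entropy = n_bits - leaked_bits
    return max(0, math.floor(min_entropy - 2 * math.log2(1 / eps)))

print(extractable_bits(256, 64, 2 ** -40))   # 256 - 64 - 80 = 112 bits
```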
[ { "math_id": 0, "text": "H_\\infty(X)" }, { "math_id": 1, "text": "\\mathcal{X}" }, { "math_id": 2, "text": "m > 0" }, { "math_id": 3, "text": "h\\colon \\mathcal{S} \\times \\mathcal{X} \\rightarrow \\{0,\\, 1\\}^m" }, { "math_id": 4, "text": "m \\leq H_\\infty(X) - 2 \\log\\left(\\frac{1}{\\varepsilon}\\right)" }, { "math_id": 5, "text": "\\mathcal{S}" }, { "math_id": 6, "text": "\\delta\\left[(h(S, X), S), (U, S)\\right] \\leq \\varepsilon." }, { "math_id": 7, "text": "\\{0, 1\\}^m" }, { "math_id": 8, "text": "H_\\infty(X) = -\\log \\max_x \\Pr[X=x]" }, { "math_id": 9, "text": "\\max_x \\Pr[X=x]" }, { "math_id": 10, "text": "0 \\le \\delta(X, Y) = \\frac{1}{2} \\sum_v \\left| \\Pr[X=v] - \\Pr[Y=v] \\right| \\le 1" } ]
https://en.wikipedia.org/wiki?curid=7006101
70061567
Topical drug delivery
Route of drug administration Topical drug delivery (TDD) is a route of drug administration that allows the topical formulation to be delivered across the skin upon application, hence producing a localized effect to treat skin disorders like eczema. Topical drug formulations can be classified into corticosteroids, antibiotics, antiseptics, and anti-fungals. The mechanism of topical delivery includes the diffusion and metabolism of drugs in the skin. Historically, the topical route was the first route used to deliver drugs in humans, in ancient Egypt and Babylon around 3000 BCE. In these ancient cities, topical medications like ointments and potions were used on the skin. Topical drugs need to pass through multiple skin layers and undergo pharmacokinetic processes, hence factors like dermal diseases can reduce the bioavailability of topical drugs. The wide use of topical drugs has led to advancements in topical drug delivery, which enhance the delivery of topical medications to the skin using chemical and physical agents. For chemical agents, carriers like liposomes and nanotechnologies are used to enhance the absorption of topical drugs. Physical agents, like micro-needles, are another approach to enhance absorption. Besides the use of carriers, other factors such as pH, lipophilicity, and drug molecule size govern the effectiveness of a topical formulation. History. In ancient times, human skin was used as a surface for self-expression by painting cosmetic products on it. These products also served as protection for the skin from the sun and dry environments. Later, around 2000 BCE, the Chinese used topical remedies wrapped in bandages to treat skin diseases. Contact between these remedies and the skin delivered their therapeutic effect. A further development of topical drugs occurred between 130 and 200 AD, when Claudius Galenus, a Greek physician, introduced herbal medications into Western medicine and formulated them as creams. More recently, in the 1920s, observations were made when applying drugs topically to the skin, for example to determine their systemic effects. In 1938, Zondek successfully managed urogenital infections by applying the disinfectant chloroxylenol to the skin in ointment form. In the following years, observations from various experiments led to the development of skin toxicology in the mid-1970s, covering symptoms like irritation, skin inflammation, and photo-toxicity upon application of topical drugs. After the development of toxicology, a mathematical model for the skin diffusion coefficient was formulated by Michaels, suggesting how it relates to aqueous solubility and the partition coefficient in skin. Skin absorption. Skin layers. The skin is the human body's largest organ and protects against foreign particles. Human skin contains several layers, including the subcutaneous layer, the dermis, the epidermis, the stratum corneum, and the appendages. Each of these layers has an effect on the absorption of topical drugs. When a topical drug is applied to the skin, it must pass through the stratum corneum, the outermost skin layer. The stratum corneum prevents water loss from the skin and inhibits the penetration of foreign molecules into the dermal layers. Because it is made of bilayered lipids, it also hinders hydrophilic molecules from being absorbed into the skin.
With this barrier, the stratum corneum limits the permeability of topical drugs. Another part of the skin, the appendages, is known as the "shortcut" for topical drug delivery: the shortcut pathway allows drug molecules to bypass the stratum corneum barrier via the hair follicles. Diffusion. When drugs are applied topically to the skin, the drug molecules undergo passive diffusion. This process occurs down the concentration gradient as drug molecules move from one region to another. Diffusion is described by a mathematical expression. The flux (J) represents the amount of topical drug crossing the skin membrane per unit time; (A) is the area of skin membrane the drug molecules travel across; (h) is the thickness of the skin membrane, which determines the diffusion path length; (C) is the concentration difference of the diffusing substance across the skin layers; and (D) is the diffusion coefficient. The expression describes the transport of topical drug molecules across the stratum corneum membrane by diffusion. Diffusion expression: formula_0 Mechanism. Upon application of a topical drug to the skin, it diffuses into the outer layer of the skin, the stratum corneum. There are three possible routes for the drugs to cross the skin. The first route is through the appendages, the "shortcut" described above, where the drug molecules are partitioned into the sweat glands to bypass the stratum corneum barrier. If the drug molecules are not transported via this shortcut, they usually remain in the stratum corneum's bilayered lipids, from which they are transported through either the transcellular route or the paracellular route into the deeper areas of the skin such as the subcutaneous layer. In the paracellular route, the solutes are transported through the junctions between the cells: the molecules travel across the stratum corneum, a highly lipid-rich region, but between the cells. Alternatively, the topical drug molecules may travel through the transcellular route, which transports molecules through the cells themselves, into the bilayered lipid cells found in the stratum corneum. The interior of these bilayers is a water-soluble environment, and the drug molecules diffuse through the bilayered lipids into deeper areas of the skin. During transport, the drug molecules can bind to keratin, one of the skin components in the stratum corneum. Skin metabolism. Skin metabolism commonly occurs on the skin surface and in the appendages, the stratum corneum, and the viable epidermis. It comprises phase one hydrolysis, reduction, and oxidation, also known as the functionalization phase. If phase one is not sufficient to metabolize the drugs, phase two conjugation reactions occur, including glucuronidation, sulfation, and acetylation. Phase two activities are found to be lower than phase one activities in the skin. One common example is arylamine-type hair dye: after it is applied topically, it undergoes metabolism in the skin through the enzyme N-acetyltransferase, resulting in an N-acetylated metabolite. These metabolic enzymes cause the loss of topical drug activity, thus reducing bioavailability.
The metabolites may eventually form a toxic compound that reaches the systemic circulation and causes damage to the skin layers. The longer the topical drug remains in the skin, the greater the amount of it that will be metabolized by the underlying enzymes. To reduce such an effect, the topical drug needs to remain on the skin for a shorter period of time. Alternatively, a sufficient amount of topical drug can be applied to the skin to saturate the metabolic enzymes. Factors affecting topical absorption. The amount of topical drug delivered to the skin is strongly affected by the physicochemical properties of the topical drug. The first factor is the weight of the drug molecule: the smaller the drug's molecular weight or particle size, the higher its rate of diffusion and absorption into the skin. The second factor is the lipophilicity of the drug molecules, since the three pathways for absorption are quite lipophilic. The more lipophilic the drug molecules, the more easily they are absorbed compared with hydrophilic drug molecules. The third parameter is the pH level of the skin. The pH of the skin layers is basic, hence basic topical drugs will be absorbed better than acidic topical drugs. These factors are vital in determining the permeability of topical drug delivery. Skin permeability enhancers. Colloidal System. Colloidal systems are one class of techniques used for topical drug delivery into the skin and function as skin permeability enhancers. They are known as carriers and can be classified into nanoparticles, liposomes, and nanoemulgels. Liposome. Liposomes are spherical vesicles consisting of one or more phospholipid bilayers. With this structure, their function is to trap hydrophilic or lipophilic drug molecules within the spherical bilayers. Hydrophilic drug molecules associate with the hydrophilic heads since they are polar and favour water. On the other hand, lipophilic drug molecules are entrapped in the phospholipid tails of the bilayer due to their lipophilic nature. With these mechanisms, liposomes behave like carriers, carrying the lipophilic or hydrophilic drug molecules into the stratum corneum and releasing them into deeper layers of the skin by interacting with the bilayered lipids found in the stratum corneum. The use of liposomes as carriers enhances the overall permeability of topical drugs into the skin so they can reach the target site. For example, amphotericin B, a drug used to treat fungal infections, can be loaded into liposomes; the carrier enhances the penetration of amphotericin B into the skin, regardless of its molecular weight. Nanoemulgel. Nanoemulgel is another type of enhancer for the delivery of topical drugs into the skin. It is formulated by incorporating a nanoemulsion into a gel matrix. The gels are made of aqueous bases, which allow a more rapid release of drugs through dissolution. The use of nanoemulgel enhances patient compliance because the gel is less greasy than traditional creams or ointments, hence there are fewer incidents of skin irritation. Nanoemulgel increases topical drug bioavailability by incorporating the lipophilic drug molecules into its oil droplets, which travel through the skin layers. With its high dissolution rate, the nanoemulgel produces a high concentration gradient toward the skin, thus allowing for a rapid uptake of the oil droplets into the stratum corneum.
Also, the surfactant incorporated into the nanoemulgel can penetrate the lipid bilayer by disrupting the hydrogen bonds between the lipids in the skin, further enhancing permeability. In terms of treatment, nanoemulgels have been used against cancer cells and are useful in skin cancer. Also, a nanoemulgel formulation with methoxsalen is used to treat psoriasis; the carrier enhances both the penetration and the accumulation of methoxsalen in the skin layers. Physical Agents. Micro-needles. Micro-needles are physical enhancers that improve the absorption of topical drug molecules into the skin. The approach is known as 'poke and patch' because tiny needles are stuck into the skin across the stratum corneum. These tiny needles are designed not to contact the nerve endings or cutaneous blood vessels under the skin, hence they can be removed easily from the skin. There are several types of micro-needles. The first is the solid micro-needle, which is projected into the skin; once the needles are removed after insertion, the topical drugs are applied to the skin. This enhances the ability of drugs to diffuse across the viable epidermis. The second type is the dissolvable micro-needle. These needles are composed of materials that allow them to dissolve after poking into the skin, hence there is no need to remove the needles after insertion. The third type is the swellable micro-needle, which consists of hydrogel. After it is poked into the skin, skin interstitial fluid diffuses into the micro-needle, which swells and releases the drug molecules across the skin. Micro-needles have been found to be safe and effective in enhancing skin permeability.
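As a numerical illustration of the diffusion expression given in the Diffusion section above (formula_0), the Python sketch below evaluates the flux for one set of assumed parameter values; the numbers are placeholders chosen only to show the arithmetic, not measured data.

```python
def flux(D, A, C, h):
    # J = D * A * C / h: amount of drug crossing the membrane per unit time.
    return D * A * C / h

J = flux(D=1e-9,    # cm^2/s, assumed diffusion coefficient in the stratum corneum
         A=10.0,    # cm^2, assumed area of application
         C=0.05,    # mg/cm^3, assumed concentration difference across the membrane
         h=1.5e-3)  # cm, assumed membrane thickness (~15 micrometres)
print(J)            # ~3.3e-7 mg/s for these assumed values
```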
[ { "math_id": 0, "text": "J= ADC/h" } ]
https://en.wikipedia.org/wiki?curid=70061567
7006166
Green–Tao theorem
Theorem about prime numbers In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number "k", there exist arithmetic progressions of primes with "k" terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770. Statement. Let formula_0 denote the number of primes less than or equal to formula_1. If formula_2 is a subset of the prime numbers such that formula_3 then for all positive integers formula_4, the set formula_2 contains infinitely many arithmetic progressions of length formula_4. In particular, the entire set of prime numbers contains arbitrarily long arithmetic progressions. In their later work on the generalized Hardy–Littlewood conjecture, Green and Tao stated and conditionally proved the asymptotic formula formula_5 for the number of "k" tuples of primes formula_6 in arithmetic progression. Here, formula_7 is the constant formula_8 The result was made unconditional by Green–Tao and Green–Tao–Ziegler. Overview of the proof. Green and Tao's proof has three main components: Numerous simplifications to the argument in the original paper have been found. provide a modern exposition of the proof. Numerical work. The proof of the Green–Tao theorem does not show how to find the arithmetic progressions of primes; it merely proves they exist. There has been separate computational work to find large arithmetic progressions in the primes. The Green–Tao paper states 'At the time of writing the longest known arithmetic progression of primes is of length 23, and was found in 2004 by Markus Frind, Paul Underwood, and Paul Jobling: 56211383760397 + 44546738095860 · "k"; "k" = 0, 1, . . ., 22.'. On January 18, 2007, Jarosław Wróblewski found the first known case of 24 primes in arithmetic progression: 468,395,662,504,823 + 205,619 · 223,092,870 · "n", for "n" = 0 to 23. The constant 223,092,870 here is the product of the prime numbers up to 23, more compactly written 23# in primorial notation. On May 17, 2008, Wróblewski and Raanan Chermoni found the first known case of 25 primes: 6,171,054,912,832,631 + 366,384 · 23# · "n", for "n" = 0 to 24. On April 12, 2010, Benoît Perichon with software by Wróblewski and Geoff Reynolds in a distributed PrimeGrid project found the first known case of 26 primes (sequence in the OEIS): 43,142,746,595,714,191 + 23,681,770 · 23# · "n", for "n" = 0 to 25. In September 2019 Rob Gahan and PrimeGrid found the first known case of 27 primes (sequence in the OEIS): 224,584,605,939,537,911 + 81,292,139 · 23# · "n", for "n" = 0 to 26. Extensions and generalizations. Many of the extensions of Szemerédi's theorem hold for the primes as well. Independently, Tao and Ziegler and Cook, Magyar, and Titichetrakun derived a multidimensional generalization of the Green–Tao theorem. The Tao–Ziegler proof was also simplified by Fox and Zhao. In 2006, Tao and Ziegler extended the Green–Tao theorem to cover polynomial progressions. More precisely, given any integer-valued polynomials "P"1, ..., "P""k" in one unknown "m" all with constant term 0, there are infinitely many integers "x", "m" such that "x" + "P"1("m"), ..., "x" + "P""k"("m") are simultaneously prime. The special case when the polynomials are "m", 2"m", ..., "km" implies the previous result that there are length "k" arithmetic progressions of primes. 
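The explicit progressions listed under Numerical work above can be verified directly. A short sketch, assuming the SymPy library is available for primality testing (any primality test would do):

```python
from sympy import isprime

# The 23-term arithmetic progression of primes quoted from the Green–Tao paper:
start, step = 56211383760397, 44546738095860
print(all(isprime(start + step * k) for k in range(23)))   # True
```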
Tao proved an analogue of the Green–Tao theorem for the Gaussian primes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi(N)" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\limsup_{N\\rightarrow\\infty} \\frac{|A\\cap [1,N]|}{\\pi(N)}>0," }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "(\\mathfrak{S}_k + o(1))\\frac{N^2}{(\\log N)^k}" }, { "math_id": 6, "text": "p_1 < p_2 < \\dotsb < p_k \\leq N" }, { "math_id": 7, "text": "\\mathfrak{S}_k" }, { "math_id": 8, "text": "\\mathfrak{S}_k := \\frac{1}{2(k-1)}\\left(\\prod_{p \\leq k}\\frac{1}p\\left(\\frac{p}{p - 1}\\right)^{\\!k-1}\\right)\\!\\left(\\prod_{p > k}\\left(1 - \\frac{k-1}p\\right)\\!\\left(\\frac{p}{p - 1}\\right)^{\\!k-1}\\right)\\!." } ]
https://en.wikipedia.org/wiki?curid=7006166
70063034
2 Samuel 19
Second Book of Samuel chapter 2 Samuel 19 is the nineteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 43 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 6–12, 14–16, 25, 27–29, 38. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The story of Absalom's rebellion can be observed as five consecutive episodes: A. David's flight from Jerusalem (15:13–16:14) B. The victorious Absalom and his counselors (16:15–17:14) C. David reaches Mahanaim (17:15–29) B'. The rebellion is crushed and Absalom is executed (18:1–19:8abc) A'. David's reentry into Jerusalem (19:8d–20:3) God's role seems to be understated in the whole events, but is disclosed by a seemingly insignificant detail: 'the crossing of the Jordan river'. The Hebrew root word"' 'br", "to cross" (in various nominal and verbal forms) is used more than 30 times in these chapters (compared to 20 times in the rest of 2 Samuel) to report David's flight from Jerusalem, his crossing of the Jordan river, and his reentry into Jerusalem. In 2 Samuel 17:16, stating that David should cross the Jordan (17:16), the verb" 'br" is even reinforced by a 'Hebrew infinitive absolute' to mark this critical moment: "king David is about to cross out of the land of Israel." David's future was in doubt until it was stated that God had rendered foolish Ahithophel's good counsel to Absalom (2 Samuel 17:14), thus granting David's prayer (15:31), and saving David from Absalom's further actions. Once Absalom was defeated, David's crossing back over the Jordan echoes the Israelites' first crossing over the Jordan under Joshua's leadership (Joshua 1–4): Here God's role is not as explicit as during Joshua's crossing, but the signs are clear that God was with David, just as with Joshua. Joab reproved David (19:1–8). With his prolonged mourning for Absalom David placed his personal grief over his responsibility towards his troops and supporters who had helped him fighting. Joab took initiatives to rebuke David, warning about another possible rebellion (verse 7). Joab's harsh words managed to wake the king from his depression and to see him sitting on his throne watching his troops marching past. "Then the king arose and took his seat in the gate." 
"And the people were all told, "Behold, the king is sitting in the gate."" "And all the people came before the king." "Now Israel had fled every man to his own home." David restored as king (19:9–33). 'Bringing the king back' to his residence in Jerusalem was a prestigious privilege to the king's supporters. Despite some dissatisfaction of David's previous management, the people of Israel, former supporters of Absalom, were ready to transfer their allegiance again to the king, but the people of Judah, David's own tribe was not doing anything as such, perhaps because Absalom's rebellion had started in Hebron, in Judah's territory. Therefore, David sent two priests, Zadok and Abiathar (cf. 2 Samuel 15:24–29) from Jerusalem to the elders of Judah with two messages: Agreeing on the messages, the Judahites went to Gilgal to guard David's crossing of the Jordan River. During David's return journey to Jerusalem there were three meetings which correspond to those during his departure from the city (15:9–16:13). His first encounter was with Shimei, a Benjaminite from the house of Saul, who previously cursed David (2 Samuel 16:5–13), now pleaded with the king to forget his past actions, even added that he made efforts as the first of the 'house of Joseph' (referring to the 'northerner', that is, tribes of Israel outside Judah) to meet him. David, as customary on coronation day, showed magnanimity by swearing an oath not to kill Shimei, refusing the advice of the vengeful sons of Zeruiah to punish (cf. 16:9), even dismissed Abishai as an 'adversary' (Hebrew: "satan"). Despite his oath, David did not forget or forgive Shimei's insults so he commanded Solomon to deal with Shimei after David's death (1 Kings 2:8–9). The second meeting was with Ziba, who had rushed down to the Jordan at the same time as Shimei with a group of people to assist the king's household to cross. The conversation with Mephibosheth (verses 24–30) was inserted here, because of the issue related to him and Ziba; it more likely happened near Jerusalem, after David's conversation with Barzillai in Transjordan. Mephiposheth was unkempt when coming to David, intentionally to demonstrate his grief for David's departure, and pleaded innocence, claiming that he had been deceived by Ziba (cf. 16:1–4), referring David as an 'angel of God' (cf. 2 Samuel 14:17, 20) as he recounted David's previous favors to him. David replied, curtly and to the point, by dividing Saul's territories between Ziba and Mephibosheth. The third meeting was with Barzillai who had made provision for the king and his troops (2 Samuel 17:27), and now David wished to recompense by giving him a place in the court (verses 31–40). Barzillai's old age could no longer enjoy the pleasures of the Court, so he only requested his home and family grave, while handed over his servant (or 'son' according to some Septuagint manuscripts), Chimham, to accompany David. David would not forget Barzillai's kindness: he blessed Barzillai (verses 38b—39), and later commended him to Solomon (1 Kings 2:26). The conflict between north and south in verses 41–43 is a continuation of verses 8–13, where the tribes of Israel outside Judah were thinking of 'bringing the king back' before the Judahites, but then the Judahites came first to guard the king crossing the Jordan River. 
The northern tribes felt excluded, especially as the tribe of Judah claimed priority because David was their kinsman, but the northern tribes claimed to form the larger part of his kingdom ('ten shares' to two) and to be the first to mention bringing back the king. These verses, left without a resolution, prepare for the revolt of 2 Samuel 20 and the ultimate division of the kingdom in 1 Kings 12. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70063034
70063037
2 Samuel 16
Second Book of Samuel chapter 2 Samuel 16 is the sixteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 23 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–2, 6–8, 10–13, 17–18, 20–22. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The story of Absalom's rebellion can be observed as five consecutive episodes: A. David's flight from Jerusalem (15:13–16:14) B. The victorious Absalom and his counselors (16:15–17:14) C. David reaches Mahanaim (17:15–29) B'. The rebellion is crushed and Absalom is executed (18:1–19:8abc) A'. David's reentry into Jerusalem (19:8d–20:3) God's role seems to be understated in the whole events, but is disclosed by a seemingly insignificant detail: 'the crossing of the Jordan river'. The Hebrew root word"' 'br", "to cross" (in various nominal and verbal forms) is used more than 30 times in these chapters (compared to 20 times in the rest of 2 Samuel) to report David's flight from Jerusalem, his crossing of the Jordan river, and his reentry into Jerusalem. In 2 Samuel 17:16, stating that David should cross the Jordan (17:16), the verb" 'br" is even reinforced by a 'Hebrew infinitive absolute' to mark this critical moment: "king David is about to cross out of the land of Israel." David's future was in doubt until it was stated that God had rendered foolish Ahithophel's good counsel to Absalom (2 Samuel 17:14), thus granting David's prayer (15:31), and saving David from Absalom's further actions. Once Absalom was defeated, David's crossing back over the Jordan echoes the Israelites' first crossing over the Jordan under Joshua's leadership (Joshua 1–4): Here God's role is not as explicit as during Joshua's crossing, but the signs are clear that God was with David, just as with Joshua. David fled from Jerusalem (16:1–14). This section continues the last one (2 Samuel 15:13–37), where David had the first three of five meetings on his way out of Jerusalem, with two other meetings —this time with two persons connected with the house of Saul. The first meeting was with Ziba, the servant of Mephibosheth (verses 1–4), who brought provisions for David and reported that Mephibosheth had decided to stay in Jerusalem, thinking that Saul's kingdom was to be returned to him. Later, Mephibosheth's words in 19:27–29 disputed this. 
However, at this time, without a chance to investigate and against his better judgement, David accepted Ziba's report and granted him all of Saul's estates. The second meeting took place at Bahurim on the edge of the wilderness, where another Saulide called Shimei came out (verses 5–14) cursing David and calling him 'Murderer', while interpreting Absalom's take-over of the kingdom as God's revenge for 'the blood of the house of Saul' on David (verse 8). There are several possibilities as to David's alleged crime: David was unwilling to take action against Shimei, accepting the possibility that Shimei was cursing on YHWH's order (verse 10), so David resigned himself to God's will without protest (cf. 1 Samuel 26:9–11). The conversation with Abishai about killing Shimei mirrors the one about killing Saul in 1 Samuel 26 as follows: Absalom entered Jerusalem (16:15–23). Absalom entered Jerusalem as a victor and was greeted by Hushai, called "David's friend", with the standard acclamation, 'Long live the king', to declare his allegiance to the new king (verse 16). Absalom instinctively suspected Hushai's signs of disloyalty to David, but was persuaded that Hushai considered Absalom to be God's elect and king by public acclamation and promised him the same loyalty as he had shown his father (verse 18). As soon as he had established himself in Jerusalem, Absalom unwisely accepted Ahithophel's advice (which was "esteemed and regarded as divine guidance" in verse 23) to go to his father's harem (cf. 2 Samuel 12:8; 1 Kings 2:22–23), thereby publicly declaring his claim to the throne, which in fact he had already taken. Ahithophel reasoned that such an action was a decisive break of relations between son and father, which would consolidate support from the anti-Davidic camp. With the two wisest counsellors of that age (Ahithophel and Hushai), Absalom might have assured himself of success, yet he gave no thought to consulting YHWH (through the priests and the Ark of the Covenant); this would be Absalom's undoing, because Hushai would never counsel him to do wisely, whereas Ahithophel counselled him to do wickedly, basically to sin against YHWH. "And Hushai said to Absalom, "No, but whom the LORD and this people and all the men of Israel choose, his I will be, and with him I will remain."" Verse 18. Hushai's reply to Absalom's suspicion of his loyalty contains ambiguities, because, without mentioning any name, this circumlocutory description of the king of Israel would better fit David, whom the Lord had demonstrably chosen and the people of Israel had publicly anointed, rather than Absalom, who at this time had neither. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70063037
70063039
2 Samuel 10
Second Book of Samuel chapter 2 Samuel 10 is the tenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continuing into 1 Kings 1–2, which deals with the power struggles among David's sons to succeed to David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 19 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 4–7, 18–19. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The historic wars with Ammon and Aram are recorded in 2 Samuel 10–12 in connection with the David-Bathsheba affair and the succession narrative thereafter. This chapter comprises three parts: At the center of the chapter, Joab, David's commander, prayed for divine assistance: "may the Lord do what seems good to him" (verse 12) and God heard his prayer, confirming that God helps David (and his army) "wherever he went" (2 Samuel 8:6, 14). Humiliation of David's envoys by the Ammonites (10:1–5). The section begins with a Hebrew clause "wayehî ’a-ḥă-rê-ḵên", "and-happened after this" ("after this" or "and it came to pass"), indicating an indeterminate period of time since the events of the last chapter. The death of Nahash the king of the Ammonites, an ally of David, prompted David to send a mourning delegation to pay his respects and to maintain a good relationship with Hanun, Nahash's son and successor, but Hanun, who suspected David's motives, humiliated the envoys. It was not uncommon in the region that, during a transition of power, a neighboring kingdom would attack an inexperienced king, just as the Philistines tried to attack David upon his anointing in Hebron (2 Samuel 2:1), or the Moabites rebelled against Ahaziah the new king of Israel, when Ahab, his father, was dead (2 Kings 1:1; 3:5). The structure of this section is as follows: Setting (10:1) A. David sends envoys (10:2) B. Hanun hears accusations against the envoys (10:3a) C. The accusations (10:3b) B'. Hanun believes the accusations and humiliates the envoys (10:4) A'. David sends word to the envoys (10:5) The episode begins and ends in David's court, while the central event happens in Hanun's court. "Then David said, "I will show kindness to Hanun the son of Nahash, as his father showed kindness to me."" "So David sent by the hand of his servants to comfort him concerning his father. And David’s servants came into the land of the people of Ammon." Joab's victory over the Ammonites (10:6–14). 
Facing imminent retaliation from David for the humiliation of the Israelite envoys, the Ammonites asked for help from the Arameans (verse 6), which turned attention to four Aramean states: Zobah and Beth-rehob to the south, Maacah (Aram-Maacah in 1 Chronicles 19:6) north of Manasseh in Transjordan, and Tob, further south. By comparison with the narrative in 2 Samuel 8:3–5, the course of the Aramean conflict could be reconstructed as follows: Joab successfully fought the battle at Rabbah on two fronts, but was not in a position to press his advantage further, so he returned to Jerusalem (verse 14). "When the Ammonites saw the Arameans flee, they fled before his brother Abishai and went into the city. Joab withdrew from fighting the Ammonites and returned to Jerusalem." David's victory over the Arameans (10:15–19). The battle under the leadership of David himself gave a much better result: the Syrians fled before David, who killed many of them, including Shobach, Hadadezer's commander (verse 18), effectively neutralizing the power of Aram. After this defeat Hadadezer's vassals transferred their allegiance to David (verse 19). "And when all the kings who were servants of Hadadezer saw that they had been defeated by Israel, they made peace with Israel and became subject to them. So the Syrians were afraid to save the Ammonites anymore." Verse 19. There is a Hebrew wordplay in this verse: Hadadezer's servants "see" ("wayyir'u") that they are defeated, so the Syrians (Arameans) "fear" ("wayyire'u") to help the Ammonites again. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70063039
70063207
Bernoulli quadrisection problem
Splitting a triangle by perpendicular lines In triangle geometry, the Bernoulli quadrisection problem asks how to divide a given triangle into four equal-area pieces by two perpendicular lines. Its solution by Jacob Bernoulli was published in 1687. Leonhard Euler formulated a complete solution in 1779. As Euler proved, in a scalene triangle, it is possible to find a subdivision of this form so that two of the four crossings of the lines and the triangle lie on the middle edge of the triangle, cutting off a triangular area from that edge and leaving the other three areas as quadrilaterals. It is also possible for some triangles to be subdivided differently, with two crossings on the shortest of the three edges; however, it is never possible for two crossings to lie on the longest edge. Among isosceles triangles, the one whose height at its apex is 8/9 of its base length is the only one with exactly two perpendicular quadrisections. One of the two uses the symmetry axis as one of the two perpendicular lines, while the other has two lines of slope formula_0, each crossing the base and one side. This subdivision of a triangle is a special case of a theorem of Richard Courant and Herbert Robbins that any plane area can be subdivided into four equal parts by two perpendicular lines, a result that is related to the ham sandwich theorem. Although the triangle quadrisection has a solution involving the roots of low-degree polynomials, the more general quadrisection of Courant and Robbins can be significantly more difficult: for any computable number formula_1 there exist convex shapes whose boundaries can be accurately approximated to within any desired error in polynomial time, with a unique perpendicular quadrisection whose construction computes formula_1. In 2022, the first place in an Irish secondary school science competition, the Young Scientist and Technology Exhibition, went to a project by Aditya Joshi and Aditya Kumar using metaheuristic methods to find numerical solutions to the Bernoulli quadrisection problem. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
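The existence results above can also be explored numerically. The sketch below is a generic numerical approach, not the construction of Euler or the method used in the competition project mentioned above: it parametrises the two perpendicular lines by their crossing point and an angle, computes the four piece areas with the shapely geometry library, and drives them toward equality with a derivative-free optimiser from scipy. The example triangle, the starting guess and the choice of libraries are assumptions made for illustration, and the local search may need restarts for some triangles.

```python
import numpy as np
from scipy.optimize import minimize
from shapely.geometry import Polygon

TRI = Polygon([(0.0, 0.0), (1.0, 0.0), (0.3, 0.8)])  # arbitrary example triangle
TARGET = TRI.area / 4.0
BIG = 100.0  # large enough to act as a half-plane relative to the triangle

def half_plane(px, py, dx, dy):
    """Large rectangle approximating the half-plane to the left of the ray
    through (px, py) with direction (dx, dy)."""
    nx, ny = -dy, dx  # left-pointing normal
    return Polygon([
        (px - BIG * dx,            py - BIG * dy),
        (px + BIG * dx,            py + BIG * dy),
        (px + BIG * dx + BIG * nx, py + BIG * dy + BIG * ny),
        (px - BIG * dx + BIG * nx, py - BIG * dy + BIG * ny),
    ])

def piece_areas(params):
    """Areas of the four pieces cut from TRI by two perpendicular lines
    crossing at (x, y), the first making angle theta with the x-axis."""
    x, y, theta = params
    d1 = (np.cos(theta), np.sin(theta))
    d2 = (-d1[1], d1[0])  # perpendicular direction
    areas = []
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            quadrant = half_plane(x, y, s1 * d1[0], s1 * d1[1]).intersection(
                half_plane(x, y, s2 * d2[0], s2 * d2[1]))
            areas.append(TRI.intersection(quadrant).area)
    return areas

def objective(params):
    return sum((a - TARGET) ** 2 for a in piece_areas(params))

start = np.array([TRI.centroid.x, TRI.centroid.y, 0.3])
result = minimize(objective, start, method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-16, "maxiter": 5000})
print(result.x)               # crossing point and angle of one perpendicular quadrisection
print(piece_areas(result.x))  # all four areas should be close to TARGET
```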
[ { "math_id": 0, "text": "\\pm 1" }, { "math_id": 1, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=70063207
70073990
Queen's graph
Mathematical graph relating to chess In mathematics, a queen's graph is an undirected graph that represents all legal moves of the queen—a chess piece—on a chessboard. In the graph, each vertex represents a square on a chessboard, and each edge is a legal move the queen can make, that is, a horizontal, vertical or diagonal move by any number of squares. If the chessboard has dimensions formula_1, then the induced graph is called the formula_1 queen's graph. Independent sets of the graphs correspond to placements of multiple queens where no two queens are attacking each other. They are studied in the eight queens puzzle, where eight non-attacking queens are placed on a standard formula_0 chessboard. Dominating sets represent arrangements of queens where every square is attacked or occupied by a queen; five queens, but no fewer, can dominate the formula_0 chessboard. Colourings of the graphs represent ways to colour each square so that a queen cannot move between any two squares of the same colour; at least "n" colours are needed for an formula_2 chessboard, but 9 colours are needed for the formula_0 board. Properties. Every queen's graph has a Hamiltonian cycle, and the graphs are biconnected (they remain connected if any single vertex is removed). The special cases of the formula_3 and formula_4 queen's graphs are complete. Independence. An independent set of the graph corresponds to a placement of several queens on a chessboard such that no two queens are attacking each other. On an formula_2 chessboard, the largest independent set contains at most "n" vertices, as no two queens can be in the same row or column. This upper bound can be achieved for all "n" except "n"=2 and "n"=3. In the case of "n"=8, this is the traditional eight queens puzzle. Domination. A dominating set of the queen's graph corresponds to a placement of queens such that every square on the chessboard is either attacked or occupied by a queen. On an formula_0 chessboard, five queens can dominate, and this is the minimum number possible (four queens leave at least two squares unattacked). There are 4,860 such placements of five queens, including ones where the queens also control all occupied squares, i.e. they attack (or, being of the same colour, protect) each other. In this subgroup, there are also positions where the queens occupy squares on the main diagonal only (e.g. from a1 to h8), or all on a subdiagonal (e.g. from a2 to g8). Modifying the graph by replacing the non-looping rectangular formula_0 chessboard with a torus or cylinder reduces the minimum dominating set size to four. The formula_5 queen's graph is dominated by the single vertex at the centre of the board. The centre vertex of the formula_6 queen's graph is adjacent to all but 8 vertices: those vertices that are adjacent to the centre vertex of the formula_6 knight's graph. Domination numbers. Define the domination number "d"("n") of an formula_2 queen's graph to be the size of the smallest dominating set, and the diagonal domination number "dd"("n") to be the size of the smallest dominating set that is a subset of the long diagonal. Note that formula_7 for all "n". The bound is attained for formula_8, but not for formula_9. The domination number is linear in "n", with bounds given by: formula_10 Initial values of "d"("n"), for formula_11, are 1, 1, 1, 2, 3, 3, 4, 5, 5, 5, 5 (sequence in the OEIS). 
Let "Kn" be the maximum size of a subset of formula_12 such that every number has the same parity and no three numbers form an arithmetic progression (the set is "midpoint-free"). The diagonal domination number of an formula_2 queen's graph is formula_13. Define the independent domination number "ID"("n") to be the size of the smallest independent, dominant set in an formula_2 queen's graph. It is known that formula_14. Colouring. A colouring of the queen's graph is an assignment of colours to each vertex such that no two adjacent vertices are given the same colour. For instance, if a8 is coloured red then no other square on the a-file, eighth rank or long diagonal can be coloured red, as a queen can move from a8 to any of these squares. The chromatic number of the graph is the smallest number of colours that can be used to colour it. In the case of an formula_2 queen's graph, at least "n" colours are required, as each square in a rank or file needs a different colour (i.e. the rows and columns are cliques). The chromatic number is exactly "n" if formula_15 (i.e. "n" is one more or one less than a multiple of 6). The chromatic number of an formula_0 queen's graph is 9. Irredundance. A set of vertices is irredundant if removing any vertex from the set changes the neighbourhood of the set i.e. for each vertex, there is an adjacent vertex that is not adjacent to any other vertex in the set. This corresponds to a set of queens which each uniquely control at least one square. The maximum size "IR"("n") of an irredundant set on the formula_2 queen's graph is difficult to characterise; known values include formula_16 Pursuit–evasion game. Consider the pursuit–evasion game on an formula_0 queen's graph played according to the following rules: a white queen starts in one corner and a black queen in the opposite corner. Players alternate moves, which consist of moving the queen to an adjacent vertex that can be reached without passing over (horizontally, vertically or diagonally) or landing on a vertex that is adjacent to the opposite queen. This game can be won by white with a pairing strategy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
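The domination figures quoted above are small enough to verify by exhaustive search. The sketch below is an illustration rather than something taken from the cited references: it encodes the squares attacked or occupied by a queen on the standard 8×8 board as 64-bit bitboards and counts the five-queen placements whose combined coverage is the whole board. The count should reproduce the 4,860 dominating placements mentioned in the Domination section; the run takes on the order of a minute in CPython.

```python
from itertools import combinations

N = 8
FULL = (1 << (N * N)) - 1  # bitboard with every square set

def attack_mask(sq):
    """Bitboard of the squares occupied or attacked by a queen on square sq."""
    r, c = divmod(sq, N)
    mask = 0
    for dr, dc in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        rr, cc = r, c
        while 0 <= rr < N and 0 <= cc < N:
            mask |= 1 << (rr * N + cc)
            rr += dr
            cc += dc
    return mask

MASKS = [attack_mask(sq) for sq in range(N * N)]

count = 0
for placement in combinations(range(N * N), 5):
    covered = 0
    for sq in placement:
        covered |= MASKS[sq]
    if covered == FULL:
        count += 1

print(count)  # expected to match the 4,860 dominating placements of five queens
```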
[ { "math_id": 0, "text": "8\\times 8" }, { "math_id": 1, "text": "m\\times n" }, { "math_id": 2, "text": "n\\times n" }, { "math_id": 3, "text": "1\\times n" }, { "math_id": 4, "text": "2\\times 2" }, { "math_id": 5, "text": "3\\times 3" }, { "math_id": 6, "text": "5\\times 5" }, { "math_id": 7, "text": "d(n)\\le dd(n)" }, { "math_id": 8, "text": "d(8)=dd(8)=5" }, { "math_id": 9, "text": "d(11)=5,dd(11)=7" }, { "math_id": 10, "text": "\\frac{n-1}{2} \\le d(n)\\le n-\\left \\lfloor \\frac{n}{3}\\right \\rfloor." }, { "math_id": 11, "text": "n=1,2,3,\\dots" }, { "math_id": 12, "text": "\\{1,2,3,\\dots,n\\}" }, { "math_id": 13, "text": "n-K_n" }, { "math_id": 14, "text": "ID(n)<0.705n+0.895" }, { "math_id": 15, "text": "n\\equiv 1,5 \\pmod 6" }, { "math_id": 16, "text": "IR(5)=5,IR(6)=7,IR(7)=9,IR(8)=11." } ]
https://en.wikipedia.org/wiki?curid=70073990
70085
J. J. Thomson
British physicist (1856–1940) Sir Joseph John Thomson (18 December 1856 – 30 August 1940) was a British physicist and Nobel Laureate in Physics, credited with the discovery of the electron, the first subatomic particle to be found. In 1897, Thomson showed that cathode rays were composed of previously unknown negatively charged particles (now called electrons), which he calculated must have bodies much smaller than atoms and a very large charge-to-mass ratio. Thomson is also credited with finding the first evidence for isotopes of a stable (non-radioactive) element in 1913, as part of his exploration into the composition of canal rays (positive ions). His experiments to determine the nature of positively charged particles, with Francis William Aston, were the first use of mass spectrometry and led to the development of the mass spectrograph. Thomson was awarded the 1906 Nobel Prize in Physics for his work on the conduction of electricity in gases. Thomson was also a teacher, and seven of his students went on to win Nobel Prizes: Ernest Rutherford (Chemistry 1908), Lawrence Bragg (Physics 1915), Charles Barkla (Physics 1917), Francis Aston (Chemistry 1922), Charles Thomson Rees Wilson (Physics 1927), Owen Richardson (Physics 1928) and Edward Victor Appleton (Physics 1947). Only Arnold Sommerfeld's record of mentorship offers a comparable list of high-achieving students. Education and personal life. Joseph John Thomson was born on 18 December 1856 in Cheetham Hill, Manchester, Lancashire, England. His mother, Emma Swindells, came from a local textile family. His father, Joseph James Thomson, ran an antiquarian bookshop founded by Thomson's great-grandfather. He had a brother, Frederick Vernon Thomson, who was two years younger than he was. J. J. Thomson was a reserved yet devout Anglican. His early education was in small private schools where he demonstrated outstanding talent and interest in science. In 1870, he was admitted to Owens College in Manchester (now University of Manchester) at the unusually young age of 14 and came under the influence of Balfour Stewart, Professor of Physics, who initiated Thomson into physical research. Thomson began experimenting with contact electrification and soon published his first scientific paper. His parents planned to enroll him as an apprentice engineer to Sharp, Stewart &amp; Co, a locomotive manufacturer, but these plans were cut short when his father died in 1873. He moved on to Trinity College, Cambridge, in 1876. In 1880, he obtained his Bachelor of Arts degree in mathematics (Second Wrangler in the Tripos and 2nd Smith's Prize). He applied for and became a Fellow of Trinity College in 1881. He received his Master of Arts degree (with Adams Prize) in 1883. Family. In 1890, Thomson married Rose Elisabeth Paget at the church of St. Mary the Less. Rose, who was the daughter of Sir George Edward Paget, a physician and then Regius Professor of Physic at Cambridge, was interested in physics. Beginning in 1882, women could attend demonstrations and lectures at the University of Cambridge. Rose attended demonstrations and lectures, among them Thomson's, leading to their relationship. They had two children: George Paget Thomson, who was also awarded a Nobel Prize for his work on the wave properties of the electron, and Joan Paget Thomson (later Charnock), who became an author, writing children's books, non-fiction and biographies. Career and research. Overview. 
On 22 December 1884, Thomson was appointed Cavendish Professor of Physics at the University of Cambridge. The appointment caused considerable surprise, given that candidates such as Osborne Reynolds or Richard Glazebrook were older and more experienced in laboratory work. Thomson was known for his work as a mathematician, where he was recognised as an exceptional talent. He was awarded a Nobel Prize in 1906, "in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases." He was knighted in 1908 and appointed to the Order of Merit in 1912. In 1914, he gave the Romanes Lecture in Oxford on "The atomic theory". In 1918, he became Master of Trinity College, Cambridge, where he remained until his death. He died on 30 August 1940; his ashes rest in Westminster Abbey, near the graves of Sir Isaac Newton and his former student Ernest Rutherford. Rutherford succeeded him as Cavendish Professor of Physics. Six of Thomson's research assistants and junior colleagues (Charles Glover Barkla, Niels Bohr, Max Born, William Henry Bragg, Owen Willans Richardson and Charles Thomson Rees Wilson) won Nobel Prizes in physics, and two (Francis William Aston and Ernest Rutherford) won Nobel prizes in chemistry. Thomson's son (George Paget Thomson) also won the 1937 Nobel Prize in physics for proving the wave-like properties of electrons. Early work. Thomson's prize-winning master's work, "Treatise on the motion of vortex rings", shows his early interest in atomic structure. In it, Thomson mathematically described the motions of William Thomson's vortex theory of atoms. Thomson published a number of papers addressing both mathematical and experimental issues of electromagnetism. He examined the electromagnetic theory of light of James Clerk Maxwell, introduced the concept of electromagnetic mass of a charged particle, and demonstrated that a moving charged body would apparently increase in mass. Much of his work in mathematical modelling of chemical processes can be thought of as early computational chemistry. In further work, published in book form as "Applications of dynamics to physics and chemistry" (1888), Thomson addressed the transformation of energy in mathematical and theoretical terms, suggesting that all energy might be kinetic. His next book, "Notes on recent researches in electricity and magnetism" (1893), built upon Maxwell's "Treatise upon electricity and magnetism", and was sometimes referred to as "the third volume of Maxwell". In it, Thomson emphasized physical methods and experimentation and included extensive figures and diagrams of apparatus, including a number for the passage of electricity through gases. His third book, "Elements of the mathematical theory of electricity and magnetism" (1895) was a readable introduction to a wide variety of subjects, and achieved considerable popularity as a textbook. A series of four lectures, given by Thomson on a visit to Princeton University in 1896, were subsequently published as "Discharge of electricity through gases" (1897). Thomson also presented a series of six lectures at Yale University in 1904. Discovery of the electron. Several scientists, such as William Prout and Norman Lockyer, had suggested that atoms were built up from a more fundamental unit, but they envisioned this unit to be the size of the smallest atom, hydrogen. 
Thomson in 1897 was the first to suggest that one of the fundamental units of the atom was more than 1,000 times smaller than an atom, suggesting the subatomic particle now known as the electron. Thomson discovered this through his explorations on the properties of cathode rays. Thomson made his suggestion on 30 April 1897 following his discovery that cathode rays (at the time known as Lenard rays) could travel much further through air than expected for an atom-sized particle. He estimated the mass of cathode rays by measuring the heat generated when the rays hit a thermal junction and comparing this with the magnetic deflection of the rays. His experiments suggested not only that cathode rays were over 1,000 times lighter than the hydrogen atom, but also that their mass was the same in whichever type of atom they came from. He concluded that the rays were composed of very light, negatively charged particles which were a universal building block of atoms. He called the particles "corpuscles", but later scientists preferred the name electron which had been suggested by George Johnstone Stoney in 1891, prior to Thomson's actual discovery. In April 1897, Thomson had only early indications that the cathode rays could be deflected electrically (previous investigators such as Heinrich Hertz had thought they could not be). A month after Thomson's announcement of the corpuscle, he found that he could reliably deflect the rays by an electric field if he evacuated the discharge tube to a very low pressure. By comparing the deflection of a beam of cathode rays by electric and magnetic fields he obtained more robust measurements of the mass-to-charge ratio that confirmed his previous estimates. This became the classic means of measuring the charge-to-mass ratio of the electron. (The charge itself was not measured until Robert A. Millikan's oil drop experiment in 1909.) Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. In 1904, Thomson suggested a model of the atom, hypothesizing that it was a sphere of positive matter within which electrostatic forces determined the positioning of the corpuscles. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge. In this "plum pudding model", the electrons were seen as embedded in the positive charge like raisins in a plum pudding (although in Thomson's model they were not stationary, but orbiting rapidly). Thomson made the discovery around the same time that Walter Kaufmann and Emil Wiechert discovered the correct mass to charge ratio of these cathode rays (electrons). The name "electron" was adopted for these particles by the scientific community, mainly due to the advocation by G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered). For some years Thomson resisted using the word "electron" because he didn't like how some physicists talked of a "positive electron" that was supposed to be the elementary unit of positive charge just as the "negative electron" is the elementary unit of negative charge. Thomson preferred to stick with the word "corpuscle" which he strictly defined as negatively charged. 
He relented by 1914, using the word "electron" in his book "The Atomic Theory". In 1920, Rutherford and his fellows agreed to call the nucleus of the hydrogen ion "proton", establishing a distinct name for the smallest known positively-charged particle of matter (that can exist independently anyway). Isotopes and mass spectrometry. In 1912, as part of his exploration into the composition of the streams of positively charged particles then known as canal rays, Thomson and his research assistant F. W. Aston channelled a stream of neon ions through a magnetic and an electric field and measured its deflection by placing a photographic plate in its path. They observed two patches of light on the photographic plate (see image on right), which suggested two different parabolas of deflection, and concluded that neon is composed of atoms of two different atomic masses (neon-20 and neon-22), that is to say of two isotopes. This was the first evidence for isotopes of a stable element; Frederick Soddy had previously proposed the existence of isotopes to explain the decay of certain radioactive elements. Thomson's separation of neon isotopes by their mass was the first example of mass spectrometry, which was subsequently improved and developed into a general method by F. W. Aston and by A. J. Dempster. Experiments with cathode rays. Earlier, physicists debated whether cathode rays were immaterial like light ("some process in the aether") or were "in fact wholly material, and ... mark the paths of particles of matter charged with negative electricity", quoting Thomson. The aetherial hypothesis was vague, but the particle hypothesis was definite enough for Thomson to test. Magnetic deflection. Thomson first investigated the magnetic deflection of cathode rays. Cathode rays were produced in the side tube on the left of the apparatus and passed through the anode into the main bell jar, where they were deflected by a magnet. Thomson detected their path by the fluorescence on a squared screen in the jar. He found that whatever the material of the anode and the gas in the jar, the deflection of the rays was the same, suggesting that the rays were of the same form whatever their origin. Electrical charge. While supporters of the aetherial theory accepted the possibility that negatively charged particles are produced in Crookes tubes, they believed that they are a mere by-product and that the cathode rays themselves are immaterial. Thomson set out to investigate whether or not he could actually separate the charge from the rays. Thomson constructed a Crookes tube with an electrometer set to one side, out of the direct path of the cathode rays. Thomson could trace the path of the ray by observing the phosphorescent patch it created where it hit the surface of the tube. Thomson observed that the electrometer registered a charge only when he deflected the cathode ray to it with a magnet. He concluded that the negative charge and the rays were one and the same. Electrical deflection. In May–June 1897, Thomson investigated whether or not the rays could be deflected by an electric field. Previous experimenters had failed to observe this, but Thomson believed their experiments were flawed because their tubes contained too much gas. Thomson constructed a Crookes tube with a better vacuum. At the start of the tube was the cathode from which the rays projected. The rays were sharpened to a beam by two metal slits – the first of these slits doubled as the anode, the second was connected to the earth. 
The beam then passed between two parallel aluminium plates, which produced an electric field between them when they were connected to a battery. The end of the tube was a large sphere where the beam would impact on the glass, creating a glowing patch. Thomson pasted a scale to the surface of this sphere to measure the deflection of the beam. Any electron beam would collide with some residual gas atoms within the Crookes tube, thereby ionizing them and producing electrons and ions in the tube (space charge); in previous experiments this space charge electrically screened the externally applied electric field. However, in Thomson's Crookes tube the density of residual atoms was so low that the space charge from the electrons and ions was insufficient to electrically screen the externally applied electric field, which permitted Thomson to successfully observe electrical deflection. When the upper plate was connected to the negative pole of the battery and the lower plate to the positive pole, the glowing patch moved downwards, and when the polarity was reversed, the patch moved upwards. Measurement of mass-to-charge ratio. In his classic experiment, Thomson measured the mass-to-charge ratio of the cathode rays by measuring how much they were deflected by a magnetic field and comparing this with the electric deflection. He used the same apparatus as in his previous experiment, but placed the discharge tube between the poles of a large electromagnet. He found that the mass-to-charge ratio was over a thousand times "lower" than that of a hydrogen ion (H+), suggesting that the particles were very light, very highly charged, or both. Significantly, the rays from every cathode yielded the same mass-to-charge ratio. This is in contrast to anode rays (now known to arise from positive ions emitted by the anode), where the mass-to-charge ratio varies from anode to anode. Thomson himself remained critical of what his work established, in his Nobel Prize acceptance speech referring to "corpuscles" rather than "electrons". Thomson's calculations can be summarised as follows (in his original notation, using F instead of E for the electric field and H instead of B for the magnetic field): The electric deflection is given by formula_0, where Θ is the angular electric deflection, F is the applied electric intensity, e is the charge of the cathode ray particles, l is the length of the electric plates, m is the mass of the cathode ray particles and v is the velocity of the cathode ray particles. The magnetic deflection is given by formula_1, where φ is the angular magnetic deflection and H is the applied magnetic field intensity. The magnetic field was varied until the magnetic and electric deflections were the same, when formula_2. This can be simplified to give formula_3. The electric deflection was measured separately to give Θ; H, F and l were known, so m/e could be calculated. Conclusions. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;As the cathode rays carry a charge of negative electricity, are deflected by an electrostatic force as if they were negatively electrified, and are acted on by a magnetic force in just the way in which this force would act on a negatively electrified body moving along the path of these rays, I can see no escape from the conclusion that they are charges of negative electricity carried by particles of matter. As to the source of these particles, Thomson believed they emerged from the molecules of gas in the vicinity of the cathode. 
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If, in the very intense electric field in the neighbourhood of the cathode, the molecules of the gas are dissociated and are split up, not into the ordinary chemical atoms, but into these primordial atoms, which we shall for brevity call corpuscles; and if these corpuscles are charged with electricity and projected from the cathode by the electric field, they would behave exactly like the cathode rays. Thomson imagined the atom as being made up of these corpuscles orbiting in a sea of positive charge; this was his plum pudding model. This model was later proved incorrect when his student Ernest Rutherford showed that the positive charge is concentrated in the nucleus of the atom. Other work. In 1905, Thomson discovered the natural radioactivity of potassium. In 1906, Thomson demonstrated that hydrogen had only a single electron per atom. Previous theories allowed various numbers of electrons. Awards and honours. During his life. Thomson was elected a Fellow of the Royal Society (FRS) and appointed to the Cavendish Professorship of Experimental Physics at the Cavendish Laboratory, University of Cambridge in 1884. Thomson won numerous awards and honours during his career including: Thomson was elected a Fellow of the Royal Society on 12 June 1884 and served as President of the Royal Society from 1915 to 1920. Thomson was elected an International Honorary Member of the American Academy of Arts and Sciences in 1902, and International Member of the American Philosophical Society in 1903, and the United States National Academy of Sciences in 1903. In November 1927, Thomson opened the Thomson building, named in his honour, in the Leys School, Cambridge. Posthumous. In 1991, the thomson (symbol: Th) was proposed as a unit to measure mass-to-charge ratio in mass spectrometry in his honour. J J Thomson Avenue, on the University of Cambridge's West Cambridge site, is named after Thomson. The Thomson Medal Award, sponsored by the International Mass Spectrometry Foundation, is named after Thomson. The Institute of Physics Joseph Thomson Medal and Prize is named after Thomson. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
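The deflection relations summarised in the measurement section above can be evaluated directly. The short sketch below uses purely illustrative apparatus values, not Thomson's data, to show how equal electric and magnetic deflections fix the velocity v = F/H and how m/e then follows from m/e = H²l/(FΘ).

```python
# Illustrative numbers only; they are chosen to give a physically plausible result
# and are not measurements from Thomson's experiments.
F = 1.5e4      # electric field between the plates, V/m
H = 5.5e-4     # magnetic flux density, T
l = 0.05       # length of the deflecting plates, m
Theta = 0.10   # hypothetical angular electric deflection, rad

v = F / H                             # velocity when the two deflections are equal
m_over_e = H ** 2 * l / (F * Theta)   # m/e = H^2 l / (F Theta)
print(f"v = {v:.3e} m/s")
print(f"m/e = {m_over_e:.3e} kg/C, e/m = {1.0 / m_over_e:.3e} C/kg")
# e/m comes out around 1e11 C/kg, the right order of magnitude for the electron
```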
[ { "math_id": 0, "text": "\\Theta = Fel / mv^2" }, { "math_id": 1, "text": "\\phi = Hel / mv" }, { "math_id": 2, "text": "\\Theta = \\phi, Fel / mv^2 = Hel / mv" }, { "math_id": 3, "text": "m/e = H^2 l/F\\Theta" } ]
https://en.wikipedia.org/wiki?curid=70085
70096665
Membrane scaling
Formation of sparingly soluble salt deposits on reverse osmosis membranes Membrane scaling occurs when one or more sparingly soluble salts (e.g., calcium carbonate or calcium phosphate) precipitate and form a dense layer on the membrane surface in reverse osmosis (RO) applications. Figures 1 and 2 show scanning electron microscopy (SEM) images of the RO membrane surface without and with scaling, respectively. Membrane scaling, like other types of membrane fouling, increases energy costs due to higher operating pressure, and reduces permeate water production. Furthermore, scaling may damage and shorten the lifetime of membranes due to frequent membrane cleanings, and it is therefore a major operational challenge in RO applications. Membrane scaling can occur when sparingly soluble salts in the RO concentrate become supersaturated, meaning their concentrations exceed their equilibrium (solubility) levels. In RO processes, the increased concentration of sparingly soluble salts in the concentrate is primarily caused by the withdrawal of permeate water from the feedwater. The ratio of permeate water to feedwater is known as the recovery, which is directly related to membrane scaling. Recovery needs to be as high as possible in RO installations to minimize specific energy consumption. However, at high recovery rates, the concentration of sparingly soluble salts in the concentrate can increase dramatically. For example, for 80% and 90% recovery, the concentration of salts in the concentrate can reach 5 and 10 times their concentration in the feedwater, respectively. If the calcium and phosphate concentrations in the RO feedwater are 200 mg/L and 5 mg/L, respectively, the concentrations in the RO concentrate will be 2000 mg/L and 50 mg/L at 90% recovery, exceeding the calcium phosphate solubility limit and resulting in calcium phosphate scaling. It is important to note that membrane scaling is not only dependent on supersaturation but also on crystallization kinetics, i.e., nucleation and crystal growth. Scaling compounds encountered in RO. The most common salts that cause scaling in RO processes are: Scaling prediction methods. There are a number of indices available to determine the scaling tendency of sparingly soluble salts in a water solution. These indices indicate whether a given scale-forming species is undersaturated, saturated, or supersaturated. Scaling does not occur when a compound is undersaturated, while it will take place sooner or later when a compound is supersaturated. The most commonly used indices to predict scaling in RO applications are: formula_0 where IAP and Ksp are the ion activity product and the solubility product of the sparingly soluble salt, respectively. For instance, SI for calcium sulphate can be calculated as follows: formula_1 where γ is the activity coefficient, and [Ca2+] and [SO42−] are the calcium and sulphate concentrations in mol/L, respectively. formula_2 where IAP and Ksp are the ion activity product and the solubility product of the sparingly soluble salt, respectively. For instance, Sr for calcium sulphate can be calculated as follows: formula_3 where γ is the activity coefficient, and [Ca2+] and [SO42−] are the calcium and sulphate concentrations in mol/L, respectively. LSI is used only for calcium carbonate scaling. On the other hand, SI and Sr are applicable for all compounds. A positive value of SI or LSI indicates that scaling may occur in RO, whereas a negative value implies that scaling will not occur. 
Similarly, scaling may occur when Sr&gt;1, but not when Sr&lt;1. Scaling control in RO applications. There are several methods for preventing scaling in RO applications, including acidification of the RO feed, lowering the RO system recovery, and antiscalant addition. Acidification of RO feedwater was one of the first methods for tackling calcium carbonate scaling in RO processes. However, due to the risks associated with the use of acid, this method is becoming less common. Furthermore, acidification may not be effective for all types of scales; for example, it is very effective in preventing calcium carbonate scaling but not calcium sulphate scaling. Another method of preventing scaling is to operate RO at low recovery (the ratio of permeate water to feedwater). In this approach, the recovery of the RO system is reduced to bring the supersaturation level of the concentrate down to undersaturated conditions. Low recovery also reduces the adverse effect of concentration polarization because the solute concentration at the membrane surface is lower, reducing the potential for scale formation. This approach, however, is not very appealing or economical because it results in high specific energy consumption. Furthermore, the disposal of the large volume of concentrate is a problem. Antiscalant addition to the RO feed is one of the most widely applied scale-control strategies. Antiscalants can be used to increase the recovery of the RO process and primarily consist of organic compounds with sulphonate, phosphonate, or carboxylic acid functional groups. The addition of antiscalants hinders the crystallization process, i.e., the nucleation and/or growth phases of scaling compounds. Antiscalants prevent scale formation by three mechanisms, namely threshold inhibition, crystal modification and dispersion. Threshold inhibition occurs when antiscalant molecules adsorb onto crystal nuclei and halt the nucleation process, whereas crystal modification and dispersion refer to the ability of antiscalants to stop the growth and/or agglomeration of crystals and particles. For silica scale, antiscalants have the additional function of preventing the polymerisation of silica monomers, and hence the growth of silica polymers. There are various commercial antiscalants on the market from suppliers such as Kurita, Avista, and BASF. In RO applications, antiscalants are chosen based on the composition of the feedwater, and their doses are usually calculated using computer programs created by antiscalant manufacturers. For example, Avista has chemical dosing software called AdvisorCI™ that is used to compute accurate dosing of chemicals in RO systems. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
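As a worked illustration of the quantities discussed above, the sketch below combines the concentration factor implied by a given recovery (assuming complete salt rejection) with the SI and Sr indices for calcium sulphate. The feed concentrations, activity coefficients and solubility product are round illustrative numbers, not design values taken from the references.

```python
import math

def concentration_factor(recovery):
    """Concentrate-to-feed concentration ratio for an ideal (fully rejecting) membrane."""
    return 1.0 / (1.0 - recovery)

def saturation_indices(ca_mol, so4_mol, gamma_ca=0.5, gamma_so4=0.5, ksp=3.1e-5):
    """Return (SI, Sr) for calcium sulphate from molar concentrations in the concentrate."""
    iap = (gamma_ca * ca_mol) * (gamma_so4 * so4_mol)
    return math.log10(iap / ksp), math.sqrt(iap / ksp)

cf = concentration_factor(0.90)      # 10x concentration at 90 % recovery
ca_conc = 5.0e-3 * cf                # illustrative feed: 5 mmol/L calcium
so4_conc = 8.0e-3 * cf               # illustrative feed: 8 mmol/L sulphate
si, sr = saturation_indices(ca_conc, so4_conc)
print(f"CF = {cf:.0f}, SI = {si:.2f}, Sr = {sr:.2f}")  # SI > 0 and Sr > 1 indicate scaling risk
```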
[ { "math_id": 0, "text": "SI = \\log\\frac{IAP}{K_{sp}}" }, { "math_id": 1, "text": "SI = \\log\\frac{\\gamma[Ca^{2+}]\\gamma[SO_4^{2-}]}{K_{sp}}" }, { "math_id": 2, "text": "S_r = \\sqrt{\\frac{IAP}{K_{sp}}}" }, { "math_id": 3, "text": "S_r = \\sqrt{\\frac{\\gamma[Ca^{2+}]\\gamma[SO_4^{2-}]}{K_{sp}}}" } ]
https://en.wikipedia.org/wiki?curid=70096665
700981
Ribet's theorem
Result concerning properties of Galois representations associated with modular forms Ribet's theorem (earlier called the epsilon conjecture or ε-conjecture) is part of number theory. It concerns properties of Galois representations associated with modular forms. It was proposed by Jean-Pierre Serre and proven by Ken Ribet. The proof was a significant step towards the proof of Fermat's Last Theorem (FLT). As shown by Serre and Ribet, the Taniyama–Shimura conjecture (whose status was unresolved at the time) and the epsilon conjecture together imply that FLT is true. In mathematical terms, Ribet's theorem shows that if the Galois representation associated with an elliptic curve has certain properties, then that curve cannot be modular (in the sense that there cannot exist a modular form that gives rise to the same representation). Statement. Let "f" be a weight 2 newform on Γ0("qN") – i.e. of level "qN" where "q" does not divide "N" – with absolutely irreducible 2-dimensional mod "p" Galois representation "ρf,p" unramified at "q" if "q" ≠ "p" and finite flat at "q" = "p". Then there exists a weight 2 newform "g" of level "N" such that formula_0 In particular, if "E" is an elliptic curve over formula_1 with conductor "qN", then the modularity theorem guarantees that there exists a weight 2 newform "f" of level "qN" such that the 2-dimensional mod "p" Galois representation "ρf, p" of "f" is isomorphic to the 2-dimensional mod "p" Galois representation "ρE, p" of "E". To apply Ribet's Theorem to "ρ""E", "p", it suffices to check the irreducibility and ramification of "ρE, p". Using the theory of the Tate curve, one can prove that "ρE, p" is unramified at "q" ≠ "p" and finite flat at "q" = "p" if "p" divides the power to which "q" appears in the minimal discriminant Δ"E". Then Ribet's theorem implies that there exists a weight 2 newform "g" of level "N" such that "ρ""g", "p" ≈ "ρ""E", "p". Level lowering. Ribet's theorem states that beginning with an elliptic curve "E" of conductor "qN" does not guarantee the existence of an elliptic curve "E′" of level "N" such that "ρ""E, p" ≈ "ρ""E′", "p". The newform "g" of level "N" may not have rational Fourier coefficients, and hence may be associated to a higher-dimensional abelian variety, not an elliptic curve. For example, elliptic curve 4171a1 in the Cremona database given by the equation formula_2 with conductor 43 × 97 and discriminant 43^7 × 97^3 does not level-lower mod 7 to an elliptic curve of conductor 97. Rather, the mod "p" Galois representation is isomorphic to the mod "p" Galois representation of an irrational newform "g" of level 97. However, for "p" large enough compared to the level "N" of the level-lowered newform, a rational newform (e.g. an elliptic curve) must level-lower to another rational newform (e.g. elliptic curve). In particular for "p" ≫ "N"^("N"^(1+"ε")), the mod "p" Galois representation of a rational newform cannot be isomorphic to an irrational newform of level "N". Similarly, the Frey-Mazur conjecture predicts that for large enough "p" (independent of the conductor "N"), elliptic curves with isomorphic mod "p" Galois representations are in fact isogenous, and hence have the same conductor. Thus non-trivial level-lowering between rational newforms is not predicted to occur for large "p" ("p" &gt; 17). History. In his thesis, Yves Hellegouarch originated the idea of associating solutions ("a","b","c") of Fermat's equation with a different mathematical object: an elliptic curve. 
If "p" is an odd prime and "a", "b", and "c" are positive integers such that formula_3 then a corresponding Frey curve is an algebraic curve given by the equation formula_4 This is a nonsingular algebraic curve of genus one defined over formula_1, and its projective completion is an elliptic curve over formula_1. In 1982 Gerhard Frey called attention to the unusual properties of the same curve, now called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to FLT would create a curve that would not be modular. The conjecture attracted considerable interest when Frey suggested that the Taniyama–Shimura conjecture implies FLT. However, his argument was not complete. In 1985 Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof. This showed that a proof of the semistable case of the Taniyama–Shimura conjecture would imply FLT. Serre did not provide a complete proof and the missing bit became known as the epsilon conjecture or ε-conjecture. In the summer of 1986, Kenneth Alan Ribet proved the epsilon conjecture, thereby proving that the Modularity theorem implied FLT. The origin of the name is from the ε part of "Taniyama-Shimura conjecture + ε ⇒ Fermat's last theorem". Implications. Suppose that the Fermat equation with exponent "p" ≥ 5 had a solution in non-zero integers "a", "b", "c". The corresponding Frey curve "E""a"^"p","b"^"p","c"^"p" is an elliptic curve whose minimal discriminant Δ is equal to 2^(−8) ("abc")^(2"p") and whose conductor "N" is the radical of "abc", i.e. the product of all distinct primes dividing "abc". An elementary consideration of the equation "a"^"p" + "b"^"p" = "c"^"p" makes it clear that one of "a", "b", "c" is even and hence so is "N". By the Taniyama–Shimura conjecture, "E" is a modular elliptic curve. Since all odd primes dividing "a", "b", "c" in "N" appear to a "p"th power in the minimal discriminant Δ, by Ribet's theorem repetitive level descent modulo "p" strips all odd primes from the conductor. However, no newforms of level 2 remain because the genus of the modular curve "X"0(2) is zero (and newforms of level "N" are differentials on "X"0("N")). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
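The discriminant statement above can be checked with a short computer-algebra calculation. The sketch below is an illustration only, not part of the actual proof: it verifies with sympy that the cubic x(x − A)(x + B) has polynomial discriminant A²B²(A + B)², so that with A = a^p, B = b^p and a^p + b^p = c^p the curve y² = x(x − a^p)(x + b^p) has discriminant 16(abc)^(2p); the minimal discriminant 2^(−8)(abc)^(2p) quoted above differs from this by the factor 2^12 removed by a change of variables.

```python
from sympy import symbols, discriminant, factor

x, A, B = symbols('x A B')
cubic = x * (x - A) * (x + B)            # right-hand side of y^2 = x(x - A)(x + B)
disc_cubic = factor(discriminant(cubic, x))
print(disc_cubic)                        # A**2*B**2*(A + B)**2
print(factor(16 * disc_cubic))           # curve discriminant: 16*A**2*B**2*(A + B)**2
# With A = a^p, B = b^p and A + B = c^p this equals 16*(a*b*c)**(2*p).
```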
[ { "math_id": 0, "text": " \\rho_{f,p} \\simeq \\rho_{g,p}. " }, { "math_id": 1, "text": "\\mathbb{Q}" }, { "math_id": 2, "text": "E: y^2 + xy + y = x^3 - 663204x + 206441595" }, { "math_id": 3, "text": "a^p + b^p = c^p," }, { "math_id": 4, "text": "y^2 = x(x - a^p)(x + b^p)." } ]
https://en.wikipedia.org/wiki?curid=700981
70105931
Cole–Davidson equation
The Cole–Davidson equation is a model used to describe dielectric relaxation in glass-forming liquids. The equation for the complex permittivity is formula_0 where formula_1 is the permittivity at the high frequency limit, formula_2 where formula_3 is the static, low frequency permittivity, and formula_4 is the characteristic relaxation time of the medium. The exponent formula_5 determines the power-law decay of the high frequency wing of the imaginary part, formula_6. The Cole–Davidson equation is a generalization of the Debye relaxation that keeps the initial increase of the low frequency wing of the imaginary part, formula_7. Because this is also a characteristic feature of the Fourier transform of the stretched exponential function, it has been considered an approximation of the latter, although nowadays an approximation by the Havriliak-Negami function or exact numerical calculation may be preferred. Because the two slopes of the peak in formula_8 in double-logarithmic representation are different, it is considered an asymmetric generalization, in contrast to the Cole-Cole equation. The Cole–Davidson equation is the special case of the Havriliak-Negami relaxation with formula_9. The real and imaginary parts are formula_10 and formula_11 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
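A short numerical sketch of the expressions above (an illustration, not drawn from the cited literature): it evaluates the complex permittivity directly and confirms that the closed forms for the real and imaginary parts agree with it, using the convention that the complex permittivity equals ε′ − iε″ and arbitrary example parameters.

```python
import numpy as np

def cole_davidson(omega, eps_inf, delta_eps, tau, beta):
    """Complex permittivity eps_inf + delta_eps / (1 + i*omega*tau)**beta."""
    return eps_inf + delta_eps / (1.0 + 1j * omega * tau) ** beta

def cole_davidson_parts(omega, eps_inf, delta_eps, tau, beta):
    """Closed-form real and imaginary parts given above."""
    phi = np.arctan(omega * tau)
    amp = delta_eps * (1.0 + (omega * tau) ** 2) ** (-beta / 2.0)
    return eps_inf + amp * np.cos(beta * phi), amp * np.sin(beta * phi)

eps_inf, delta_eps, tau, beta = 2.0, 10.0, 1e-9, 0.6   # arbitrary example parameters
omega = np.logspace(-3, 3, 13) / tau                   # frequencies around 1/tau
eps = cole_davidson(omega, eps_inf, delta_eps, tau, beta)
re, im = cole_davidson_parts(omega, eps_inf, delta_eps, tau, beta)
# with the convention eps = eps' - i*eps'', eps'' is minus the numpy imaginary part
print(np.allclose(eps.real, re), np.allclose(-eps.imag, im))   # True True
```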
[ { "math_id": 0, "text": "\n\\hat{\\varepsilon}(\\omega) = \\varepsilon_{\\infty} + \\frac{\\Delta\\varepsilon}{(1+i\\omega\\tau)^{\\beta}},\n" }, { "math_id": 1, "text": "\\varepsilon_{\\infty}" }, { "math_id": 2, "text": "\\Delta\\varepsilon = \\varepsilon_{s}-\\varepsilon_{\\infty}" }, { "math_id": 3, "text": "\\varepsilon_{s}" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\varepsilon''(\\omega) \\sim \\omega^{-\\beta}" }, { "math_id": 7, "text": "\\varepsilon''(\\omega) \\sim \\omega" }, { "math_id": 8, "text": "\\varepsilon''(\\omega)" }, { "math_id": 9, "text": "\\alpha=1" }, { "math_id": 10, "text": "\n\\varepsilon'(\\omega) = \\varepsilon_{\\infty} + \\Delta\\varepsilon\\left( 1 + (\\omega\\tau)^{2} \\right)^{-\\beta/2} \\cos (\\beta\\arctan(\\omega\\tau))\n" }, { "math_id": 11, "text": "\n\\varepsilon''(\\omega) = \\Delta\\varepsilon\\left( 1 + (\\omega\\tau)^{2} \\right)^{-\\beta/2} \\sin (\\beta\\arctan(\\omega\\tau))\n" } ]
https://en.wikipedia.org/wiki?curid=70105931
7010617
Bayesian average
A Bayesian average is a method of estimating the mean of a population using outside information, especially a pre-existing belief, which is factored into the calculation. This is a central feature of Bayesian interpretation. This is useful when the available data set is small. Calculating the Bayesian average uses the prior mean "m" and a constant "C". "C" is chosen based on the typical data set size required for a robust estimate of the sample mean. The value is larger when the expected variation between data sets (within the larger population) is small. It is smaller when the data sets are expected to vary substantially from one another. formula_0 This is equivalent to adding "C" data points of value "m" to the data set. It is a weighted average of a prior average "m" and the sample average. When the formula_1 are binary values 0 or 1, "m" can be interpreted as the prior estimate of a binomial probability with the Bayesian average giving a posterior estimate for the observed data. In this case, "C" can be chosen based on the desired binomial proportion confidence interval for the sample value. For example, for rare outcomes when "m" is small choosing formula_2 ensures a 99% confidence interval has width about "2m". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
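A direct implementation of the formula above is a one-liner; the example numbers below (a prior mean of 3.5 and C = 20) are illustrative only and show how a small sample is pulled toward the prior.

```python
def bayesian_average(values, prior_mean, C):
    """Weighted average of the prior mean and the sample mean,
    equivalent to adding C pseudo-observations equal to prior_mean."""
    return (C * prior_mean + sum(values)) / (C + len(values))

few_ratings = [5, 5, 4]                                     # raw mean is about 4.67
print(bayesian_average(few_ratings, prior_mean=3.5, C=20))  # about 3.65, pulled toward the prior
```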
[ { "math_id": 0, "text": " \\bar{x} = {Cm + \\sum_{i=1}^n x_i \\over C + n} " }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "C \\simeq 9/m" } ]
https://en.wikipedia.org/wiki?curid=7010617
701077
Bäcklund transform
In mathematics, Bäcklund transforms or Bäcklund transformations (named after the Swedish mathematician Albert Victor Bäcklund) relate partial differential equations and their solutions. They are an important tool in soliton theory and integrable systems. A Bäcklund transform is typically a system of first order partial differential equations relating two functions, and often depending on an additional parameter. It implies that the two functions separately satisfy partial differential equations, and each of the two functions is then said to be a Bäcklund transformation of the other. A Bäcklund transform which relates solutions of the "same" equation is called an invariant Bäcklund transform or auto-Bäcklund transform. If such a transform can be found, much can be deduced about the solutions of the equation especially if the Bäcklund transform contains a parameter. However, no systematic way of finding Bäcklund transforms is known. History. Bäcklund transforms have their origins in differential geometry: the first nontrivial example is the transformation of pseudospherical surfaces introduced by L. Bianchi and A.V. Bäcklund in the 1880s. This is a geometrical construction of a new pseudospherical surface from an initial such surface using a solution of a linear differential equation. Pseudospherical surfaces can be described as solutions of the sine-Gordon equation, and hence the Bäcklund transformation of surfaces can be viewed as a transformation of solutions of the sine-Gordon equation. The Cauchy–Riemann equations. The prototypical example of a Bäcklund transform is the Cauchy–Riemann system formula_0 which relates the real and imaginary parts formula_1 and formula_2 of a holomorphic function. This first order system of partial differential equations has the following properties. Thus, in this case, a Bäcklund transformation of a harmonic function is just a conjugate harmonic function. The above properties mean, more precisely, that Laplace's equation for formula_1 and Laplace's equation for formula_2 are the integrability conditions for solving the Cauchy–Riemann equations. These are the characteristic features of a Bäcklund transform. If we have a partial differential equation in formula_1, and a Bäcklund transform from formula_1 to formula_2, we can deduce a partial differential equation satisfied by formula_2. This example is rather trivial, because all three equations (the equation for formula_1, the equation for formula_2 and the Bäcklund transform relating them) are linear. Bäcklund transforms are most interesting when just one of the three equations is linear. The sine-Gordon equation. Suppose that "u" is a solution of the sine-Gordon equation formula_7 Then the system formula_8 where "a" is an arbitrary parameter, is solvable for a function "v" which will also satisfy the sine-Gordon equation. This is an example of an auto-Bäcklund transform. By using a matrix system, it is also possible to find a linear Bäcklund transform for solutions of sine-Gordon equation. The Liouville equation. A Bäcklund transform can turn a non-linear partial differential equation into a simpler, linear, partial differential equation. For example, if "u" and "v" are related via the Bäcklund transform formula_9 where "a" is an arbitrary parameter, and if "u" is a solution of the Liouville equation formula_10 then "v" is a solution of the much simpler equation, formula_11, and vice versa. We can then solve the (non-linear) Liouville equation by working with a much simpler linear equation. References. 
&lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "u_x=v_y, \\quad u_y=-v_x,\\," }, { "math_id": 1, "text": "u" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "u_{xx} + u_{yy} = 0" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "u_{xy}=u_{yx}, \\quad v_{xy}=v_{yx}.\\," }, { "math_id": 7, "text": " u_{xy} = \\sin u.\\," }, { "math_id": 8, "text": "\\begin{align}\nv_x & = u_x + 2a \\sin \\Bigl( \\frac{v+u}{2} \\Bigr) \\\\\nv_y & = -u_y + \\frac{2}{a} \\sin \\Bigl( \\frac{v-u}{2} \\Bigr)\n\\end{align} \\,\\!" }, { "math_id": 9, "text": "\\begin{align}\nv_x & = u_x + 2a \\exp \\Bigl( \\frac{u+v}{2} \\Bigr) \\\\\nv_y & = -u_y - \\frac{1}{a} \\exp \\Bigl( \\frac{u-v}{2} \\Bigr)\n\\end{align} \\,\\!" }, { "math_id": 10, "text": "u_{xy}=\\exp u \\,\\!" }, { "math_id": 11, "text": "v_{xy}=0" } ]
https://en.wikipedia.org/wiki?curid=701077
701096
Rotational symmetry
Property of objects which appear unchanged after a partial rotation Rotational symmetry, also known as radial symmetry in geometry, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks exactly the same for each rotation. Certain geometric objects are partially symmetrical when rotated at certain angles such as squares rotated 90°; however, the only geometric objects that are fully rotationally symmetric at any angle are spheres, circles and other spheroids. Formal treatment. Formally the rotational symmetry is symmetry with respect to some or all rotations in m-dimensional Euclidean space. Rotations are direct isometries, i.e., isometries preserving orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of "E" +("m") (see Euclidean group). Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations, so space is homogeneous, and the symmetry group is the whole "E"("m"). With the modified notion of symmetry for vector fields the symmetry group can also be "E" +("m"). For symmetry with respect to rotations about a point we can take that point as origin. These rotations form the special orthogonal group SO("m"), the group of "m" × "m" orthogonal matrices with determinant 1. For "m" = 3 this is the rotation group SO(3). In another definition of the word, the rotation group "of an object" is the symmetry group within "E" +("n"), the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, the rotational symmetry of a physical system is equivalent to the angular momentum conservation law. Discrete rotational symmetry. Rotational symmetry of order n, also called n-fold rotational symmetry, or discrete rotational symmetry of the nth order, with respect to a particular point (in 2D) or axis (in 3D) means that rotation by an angle of 360°/"n" (180°, 120°, 90°, 72°, 60°, 51 3⁄7°, etc.) does not change the object. A "1-fold" symmetry is no symmetry (all objects look alike after a rotation of 360°). The notation for n-fold symmetry is Cn or simply n. The actual symmetry group is specified by the point or axis of symmetry, together with the n. For each point or axis of symmetry, the abstract group type is cyclic group of order n, Zn. Although for the latter also the notation Cn is used, the geometric and abstract Cn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different, see cyclic symmetry groups in 3D. The fundamental domain is a sector of 360°/"n". Examples without additional reflection symmetry: Cn is the rotation group of a regular n-sided polygon in 2D and of a regular n-sided pyramid in 3D. If there is e.g. rotational symmetry with respect to an angle of 100°, then also with respect to one of 20°, the greatest common divisor of 100° and 360°. A typical 3D object with rotational symmetry (possibly also with perpendicular axes) but no mirror symmetry is a propeller. Multiple symmetry axes through the same point. 
For discrete symmetry with multiple symmetry axes through the same point, there are the following possibilities: In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. The other axes are through opposite vertices and through centers of opposite faces, except in the case of the tetrahedron, where the 3-fold axes are each through one vertex and the center of one face. Rotational symmetry with respect to any angle. Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry. The fundamental domain is a half-line. In three dimensions we can distinguish cylindrical symmetry and spherical symmetry (no change when rotating about one axis, or for any rotation). That is, no dependence on the angle using cylindrical coordinates and no dependence on either angle using spherical coordinates. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric and axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry (i.e. rotational symmetry with respect to a central axis) like a doughnut (torus). An example of approximate spherical symmetry is the Earth (with respect to density and other physical and chemical properties). In 4D, continuous or discrete rotational symmetry about a plane corresponds to corresponding 2D rotational symmetry in every perpendicular plane, about the point of intersection. An object can also have rotational symmetry about two perpendicular planes, e.g. if it is the Cartesian product of two rotationally symmetric 2D figures, as in the case of e.g. the duocylinder and various regular duoprisms. Rotational symmetry with translational symmetry. 2-fold rotational symmetry together with single translational symmetry is one of the Frieze groups. A rotocenter is the fixed, or invariant, point of a rotation. There are two rotocenters per primitive cell. Together with double translational symmetry the rotation groups are the following wallpaper groups, with axes per primitive cell: Scaling of a lattice divides the number of points per unit area by the square of the scale factor. Therefore, the number of 2-, 3-, 4-, and 6-fold rotocenters per primitive cell is 4, 3, 2, and 1, respectively, again including 4-fold as a special case of 2-fold, etc. 3-fold rotational symmetry at one point and 2-fold at another one (or ditto in 3D with respect to parallel axes) implies rotation group p6, i.e. double translational symmetry and 6-fold rotational symmetry at some point (or, in 3D, parallel axis). The translation distance for the symmetry generated by one such pair of rotocenters is formula_2 times their distance. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
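To make the group-theoretic statements above concrete, the short NumPy check below (our own sketch, with n chosen arbitrarily) verifies that the planar rotation through 360°/n is an orthogonal matrix of determinant 1 and that applying it n times gives the identity, i.e. that it generates a cyclic group Cn of order n.

```python
# A rotation by 360/n degrees is a direct isometry (orthogonal, det = 1)
# and generates a cyclic group of order n: R^n = I.
import numpy as np

n = 5
theta = 2*np.pi / n
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R.T @ R, np.eye(2)))                        # True: orthogonal
print(np.isclose(np.linalg.det(R), 1.0))                      # True: determinant 1
print(np.allclose(np.linalg.matrix_power(R, n), np.eye(2)))   # True: R^n = I
```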
[ { "math_id": 0, "text": "\\tfrac{1}{3} \\sqrt {3}" }, { "math_id": 1, "text": "\\tfrac{1}{2} \\sqrt {2}" }, { "math_id": 2, "text": "2\\sqrt {3}" } ]
https://en.wikipedia.org/wiki?curid=701096
701099
Luttinger liquid
Theoretical model describing interacting fermions in a one-dimensional conductor A Luttinger liquid, or Tomonaga–Luttinger liquid, is a theoretical model describing interacting electrons (or other fermions) in a one-dimensional conductor (e.g. quantum wires such as carbon nanotubes). Such a model is necessary as the commonly used Fermi liquid model breaks down for one dimension. The Tomonaga–Luttinger liquid was first proposed by Sin-Itiro Tomonaga in 1950. The model showed that under certain constraints, second-order interactions between electrons could be modelled as bosonic interactions. In 1963, J.M. Luttinger reformulated the theory in terms of Bloch sound waves and showed that the constraints proposed by Tomonaga were unnecessary in order to treat the second-order perturbations as bosons. But his solution of the model was incorrect; the correct solution was given by Daniel C. Mattis and Elliott H. Lieb in 1965. Theory. Luttinger liquid theory describes low energy excitations in a 1D electron gas as bosons. Starting with the free electron Hamiltonian: formula_0 is separated into left and right moving electrons and undergoes linearization with the approximation formula_1 over the range formula_2: formula_3 Expressions for bosons in terms of fermions are used to represent the Hamiltonian as a product of two boson operators in a Bogoliubov transformation. The completed bosonization can then be used to predict spin-charge separation. Electron-electron interactions can be treated to calculate correlation functions. Features. Among the hallmark features of a Luttinger liquid are the following: The Luttinger model is thought to describe the universal low-frequency/long-wavelength behaviour of any one-dimensional system of interacting fermions (that has not undergone a phase transition into some other state). Physical systems. Attempts to demonstrate Luttinger-liquid-like behaviour in those systems are the subject of ongoing experimental research in condensed matter physics. Among the physical systems believed to be described by the Luttinger model are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
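The linearisation step quoted above can be made explicit with a short series expansion. The SymPy sketch below (our own, with ħ set to 1 and a free-particle band assumed) expands ε(k) = k²/2m around the two Fermi points ±k_F and recovers the ±v_F slopes with v_F = k_F/m.

```python
# Linearise the free-electron dispersion e(k) = k^2/(2m) around k = +/- k_F
# (hbar = 1): the slopes are +v_F and -v_F with v_F = k_F/m.
import sympy as sp

k, kF, m = sp.symbols('k k_F m', positive=True)
eps = k**2 / (2*m)

vF = sp.diff(eps, k).subs(k, kF)                 # Fermi velocity k_F/m
right = sp.series(eps, k,  kF, 2).removeO()      # expansion about +k_F
left  = sp.series(eps, k, -kF, 2).removeO()      # expansion about -k_F

print(vF)                                                     # k_F/m
print(sp.simplify(right - (eps.subs(k, kF) + vF*(k - kF))))   # 0
print(sp.simplify(left  - (eps.subs(k, kF) - vF*(k + kF))))   # 0
```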
[ { "math_id": 0, "text": "H = \\sum_{k} \\epsilon_k c_k^{\\dagger} c_k" }, { "math_id": 1, "text": "\\epsilon_k \\approx \\pm v_{\\rm F}(k-k_{\\rm F})" }, { "math_id": 2, "text": "\\Lambda" }, { "math_id": 3, "text": "H = \\sum_{k = k_{\\rm F} -\\Lambda}^{ k_{\\rm F} +\\Lambda} v_{\\rm F} k \\left(c_k^{\\mathrm R \\dagger } c_k^{\\mathrm R } - c_k^{\\mathrm L \\dagger }c_k^{\\mathrm L}\\right)" }, { "math_id": 4, "text": "2 k_\\text{F}" } ]
https://en.wikipedia.org/wiki?curid=701099
701100
Translational symmetry
Invariance of operations under geometric translation In physics and mathematics, continuous translational symmetry is the invariance of a system of equations under any translation (without rotation). Discrete translational symmetry is invariant under discrete translation. Analogously, an operator "A" on functions is said to be "translationally invariant" with respect to a translation operator formula_0 if the result after applying "A" doesn't change if the argument function is translated. More precisely it must hold that formula_1 Laws of physics are translationally invariant under a spatial translation if they do not distinguish different points in space. According to Noether's theorem, space translational symmetry of a physical system is equivalent to the momentum conservation law. Translational symmetry of an object means that a particular translation does not change the object. For a given object, the translations for which this applies form a group, the symmetry group of the object, or, if the object has more kinds of symmetry, a subgroup of the symmetry group. Geometry. Translational invariance implies that, at least in one direction, the object is infinite: for any given point p, the set of points with the same properties due to the translational symmetry forms the infinite discrete set {p + "n"a | "n" ∈ Z} = p + Z a. Fundamental domains are e.g. H + [0, 1] a for any hyperplane H for which a has an independent direction. This is in 1D a line segment, in 2D an infinite strip, and in 3D a slab, such that the vector starting at one side ends at the other side. Note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector. In spaces with dimension higher than 1, there may be multiple translational symmetries. For each set of "k" independent translation vectors, the symmetry group is isomorphic with Z"k". In particular, the multiplicity may be equal to the dimension. This implies that the object is infinite in all directions. In this case, the set of all translations forms a lattice. Different bases of translation vectors generate the same lattice if and only if one is transformed into the other by a matrix of integer coefficients of which the absolute value of the determinant is 1. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the "n"-dimensional parallelepiped the set subtends (also called the "covolume" of the lattice). This parallelepiped is a fundamental region of the symmetry: any pattern on or in it is possible, and this defines the whole object. See also lattice (group). E.g. in 2D, instead of a and b we can also take a and a − b, etc. In general in 2D, we can take "p"a + "q"b and "r"a + "s"b for integers "p", "q", "r", and "s" such that "ps" − "qr" is 1 or −1. This ensures that a and b themselves are integer linear combinations of the other two vectors. If not, not all translations are possible with the other pair. Each pair a, b defines a parallelogram, all with the same area, the magnitude of the cross product. One parallelogram fully defines the whole object. Without further symmetry, this parallelogram is a fundamental domain. The vectors a and b can be represented by complex numbers. For two given lattice points, equivalence of choices of a third point to generate a lattice shape is represented by the modular group, see lattice (group). Alternatively, e.g.
a rectangle may define the whole object, even if the translation vectors are not perpendicular, if it has two sides parallel to one translation vector, while the other translation vector starting at one side of the rectangle ends at the opposite side. For example, consider a tiling with equal rectangular tiles with an asymmetric pattern on them, all oriented the same, in rows, with each row shifted by the same fraction (not one half) of a tile; then we have only translational symmetry, wallpaper group "p"1 (the same applies without the shift). With rotational symmetry of order two of the pattern on the tile we have "p"2 (more symmetry of the pattern on the tile does not change that, because of the arrangement of the tiles). The rectangle is a more convenient unit to consider as fundamental domain (or set of two of them) than a parallelogram consisting of part of a tile and part of another one. In 2D there may be translational symmetry in one direction for vectors of any length. One line, not in the same direction, fully defines the whole object. Similarly, in 3D there may be translational symmetry in one or two directions for vectors of any length. One plane (cross-section) or line, respectively, fully defines the whole object.
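The statement above about bases of translation vectors can be illustrated numerically; in the sketch below (our own toy example) the two bases are related by an integer matrix of determinant 1, and each basis vector of the first is an integer combination of the second, so both generate exactly the same lattice.

```python
# Two bases of the same 2D lattice: B2 = B1 @ U with U an integer matrix
# of determinant +/- 1 (unimodular), so the change of basis is invertible
# over the integers.
import numpy as np

B1 = np.array([[1, 0],
               [0, 1]])        # columns: a, b
U  = np.array([[1, 1],
               [0, 1]])        # integer coefficients, det = 1
B2 = B1 @ U                    # columns: a, a + b

print(int(round(np.linalg.det(U))))                          # 1
print(np.linalg.solve(B2.astype(float), B1.astype(float)))   # integer entries: U^{-1}
```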
[ { "math_id": 0, "text": "T_\\delta" }, { "math_id": 1, "text": "\\forall \\delta \\ A f = A (T_\\delta f)." } ]
https://en.wikipedia.org/wiki?curid=701100
7011111
Ramanujan prime
Prime fulfilling an inequality related to the prime-counting function In mathematics, a Ramanujan prime is a prime number that satisfies a result proven by Srinivasa Ramanujan relating to the prime-counting function. Origins and definition. In 1919, Ramanujan published a new proof of Bertrand's postulate which, as he notes, was first proved by Chebyshev. At the end of the two-page published paper, Ramanujan derived a generalized result, and that is: formula_0 where formula_1 is the prime-counting function, equal to the number of primes less than or equal to "x". The converse of this result is the definition of Ramanujan primes: The "n"th Ramanujan prime is the least integer "Rn" for which formula_2 for all "x" ≥ "Rn". In other words: Ramanujan primes are the least integers "Rn" for which there are at least "n" primes between "x" and "x"/2 for all "x" ≥ "Rn". The first five Ramanujan primes are thus 2, 11, 17, 29, and 41. Note that the integer "Rn" is necessarily a prime number: formula_3 and, hence, formula_1 must increase by obtaining another prime at "x" = "Rn". Since formula_3 can increase by at most 1, formula_4 Bounds and an asymptotic formula. For all formula_5, the bounds formula_6 hold. If formula_7, then also formula_8 where "p""n" is the "n"th prime number. As "n" tends to infinity, "R""n" is asymptotic to the 2"n"th prime, i.e., "R""n" ~ "p"2"n" ("n" → ∞). All these results were proved by Sondow (2009), except for the upper bound "R""n" &lt; "p"3"n" which was conjectured by him and proved by Laishram (2010). The bound was improved by Sondow, Nicholson, and Noe (2011) to formula_9 which is the optimal form of "R""n" ≤ "c"·"p"3"n" since it is an equality for "n" = 5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
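The definition translates directly into a short computation. The sketch below (our own code; it uses the bound R_n &lt; 4n ln 4n quoted above to choose how far to search, and the fact that π(x) − π(x/2) is constant between consecutive integers) recovers the first five Ramanujan primes.

```python
# R_n is the least integer such that pi(x) - pi(x/2) >= n for all x >= R_n,
# i.e. one more than the largest integer k at which the count is still below n.
from math import log
from sympy import primepi

def ramanujan_prime(n):
    limit = int(4*n*log(4*n)) + 10    # safe search bound, since R_n < 4n*ln(4n)
    worst = max(k for k in range(1, limit + 1)
                if primepi(k) - primepi(k // 2) < n)
    return worst + 1

print([ramanujan_prime(n) for n in range(1, 6)])   # [2, 11, 17, 29, 41]
```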
[ { "math_id": 0, "text": "\\pi(x) - \\pi\\left( \\frac x 2 \\right) \\ge 1,2,3,4,5,\\ldots \\text{ for all } x \\ge 2, 11, 17, 29, 41, \\ldots \\text{ respectively}" }, { "math_id": 1, "text": "\\pi(x)" }, { "math_id": 2, "text": "\\pi(x) - \\pi(x/2) \\ge n," }, { "math_id": 3, "text": "\\pi(x) - \\pi(x/2)" }, { "math_id": 4, "text": " \\pi(R_n) - \\pi\\left( \\frac{R_n} 2 \\right) = n. " }, { "math_id": 5, "text": "n \\geq 1" }, { "math_id": 6, "text": "2n\\ln2n < R_n < 4n\\ln4n" }, { "math_id": 7, "text": "n > 1" }, { "math_id": 8, "text": "p_{2n} < R_n < p_{3n}" }, { "math_id": 9, "text": "R_n \\le \\frac{41}{47} \\ p_{3n}" } ]
https://en.wikipedia.org/wiki?curid=7011111
701127
Electroweak scale
Standard energy scale for electroweak processes of 246 GeV In particle physics, the electroweak scale, also known as the Fermi scale, is the energy scale around 246 GeV, a typical energy of processes described by the electroweak theory. The particular number 246 GeV is taken to be the vacuum expectation value formula_0 of the Higgs field (where formula_1 is the Fermi coupling constant). In some cases the term "electroweak scale" is used to refer to the temperature of electroweak symmetry breaking, 159.5±1.5 GeV. In other cases, the term is used more loosely to refer to energies in a broad range around 10²–10³ GeV. This is within reach of the Large Hadron Collider (LHC), which is designed for about 10⁴ GeV in proton–proton collisions. Interactions may have been above this scale during the electroweak epoch. In the unextended Standard Model, the transition from the electroweak epoch was not a first or a second order phase transition but a continuous crossover, preventing any baryogenesis. However, many extensions to the standard model, including supersymmetry and the inert doublet model, have a first order electroweak phase transition (but still lack additional CP violation). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
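The value 246 GeV follows directly from the measured Fermi constant; a one-line check (our own, taking G_F ≈ 1.1663787×10⁻⁵ GeV⁻², the CODATA value, as an input assumption):

```python
# Electroweak scale v = (sqrt(2) * G_F)^(-1/2).
G_F = 1.1663787e-5            # Fermi coupling constant in GeV^-2 (assumed value)
v = (2**0.5 * G_F)**-0.5
print(round(v, 1), "GeV")     # ~246.2 GeV
```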
[ { "math_id": 0, "text": "v = (G_F \\sqrt{2})^{-1/2}" }, { "math_id": 1, "text": "G_F" } ]
https://en.wikipedia.org/wiki?curid=701127
70113111
Gold(I) cyanide
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Gold(I) cyanide is the inorganic compound with the chemical formula AuCN. It is the binary cyanide of gold(I). It is an odourless, tasteless yellow solid. Wet gold(I) cyanide is unstable to light and will become greenish. Gold(I) cyanide itself is only of academic interest, but its derivative dicyanoaurate is an intermediate in gold cyanidation, the extraction of gold from its ores. Preparation. Solid gold(I) cyanide precipitates upon reaction of potassium dicyanoaurate with hydrochloric acid: formula_0 It can also be produced by the reaction of gold(III) chloride and potassium cyanide. Reactions. The solid dissolves to form water-soluble adducts with a variety of ligands: cyanides, hydroxide, ammonia, thiosulfate and hydrosulfide. Like most gold compounds, it converts to metallic gold upon heating. Structure. Gold(I) cyanide is a coordination polymer consisting of linear chains of AuCN such that each Au(I) center is bonded to carbon and nitrogen. The structure is hexagonal with the lattice parameters a = 3.40 Å and c = 5.09 Å.
[ { "math_id": 0, "text": "\\mathrm{K[Au(CN)_2] + HCl \\longrightarrow AuCN + HCN + KCl}" } ]
https://en.wikipedia.org/wiki?curid=70113111
701185
Correlation function (quantum field theory)
Expectation value of time-ordered quantum operators In quantum field theory, correlation functions, often referred to as correlators or Green's functions, are vacuum expectation values of time-ordered products of field operators. They are a key object of study in quantum field theory where they can be used to calculate various observables such as S-matrix elements. They are closely related to correlation functions between random variables, although they are nonetheless different objects, being defined in Minkowski spacetime and on quantum operators. Definition. For a scalar field theory with a single field formula_0 and a vacuum state formula_1 at every event (x) in spacetime, the n-point correlation function is the vacuum expectation value of the time-ordered products of formula_2 field operators in the Heisenberg picture formula_3 Here formula_4 is the time-ordering operator, which orders the field operators so that earlier time field operators appear to the right of later time field operators. By transforming the fields and states into the interaction picture, this is rewritten as formula_5 where formula_6 is the ground state of the free theory and formula_7 is the action. Expanding formula_8 using its Taylor series, the n-point correlation function becomes a sum of interaction picture correlation functions which can be evaluated using Wick's theorem. A diagrammatic way to represent the resulting sum is via Feynman diagrams, where each term can be evaluated using the position space Feynman rules. The series of diagrams arising from formula_9 is the set of all vacuum bubble diagrams, which are diagrams with no external legs. Meanwhile, formula_10 is given by the set of all possible diagrams with exactly formula_2 external legs. Since this also includes disconnected diagrams with vacuum bubbles, the sum factorizes into (sum over all bubble diagrams) formula_11 (sum of all diagrams with no bubbles). The first term then cancels with the normalization factor in the denominator meaning that the n-point correlation function is the sum of all Feynman diagrams excluding vacuum bubbles formula_12 While not including any vacuum bubbles, the sum does include disconnected diagrams, which are diagrams where at least one external leg is not connected to all other external legs through some connected path. Excluding these disconnected diagrams instead defines connected "n"-point correlation functions formula_13 It is often preferable to work directly with these as they contain all the information that the full correlation functions contain since any disconnected diagram is merely a product of connected diagrams. By excluding other sets of diagrams one can define other correlation functions such as one-particle irreducible correlation functions. In the path integral formulation, n-point correlation functions are written as a functional average formula_14 They can be evaluated using the partition functional formula_15 which acts as a generating functional, with formula_16 being a source-term, for the correlation functions formula_17 Similarly, connected correlation functions can be generated using formula_18 as formula_20 Relation to the "S"-matrix. Scattering amplitudes can be calculated using correlation functions by relating them to the "S"-matrix through the LSZ reduction formula formula_21 Here the particles in the initial state formula_22 have a formula_19 sign in the exponential, while the particles in the final state formula_23 have a formula_24.
All terms in the Feynman diagram expansion of the correlation function will have one propagator for each external leg, that is, a propagator with one end at formula_25 and the other at some internal vertex formula_26. The significance of this formula becomes clear after the application of the Klein–Gordon operators to these external legs using formula_27 This is said to amputate the diagrams by removing the external leg propagators and putting the external states on-shell. All other off-shell contributions from the correlation function vanish. After integrating the resulting delta functions, what will remain of the LSZ reduction formula is merely a Fourier transformation operation where the integration is over the internal point positions formula_26 that the external leg propagators were attached to. In this form the reduction formula shows that the S-matrix is the Fourier transform of the amputated correlation functions with on-shell external states. It is common to directly deal with the momentum space correlation function formula_28, defined through the Fourier transformation of the correlation function formula_29 where by convention the momenta are directed inwards into the diagram. A useful quantity when calculating scattering amplitudes is the matrix element formula_30 which is defined from the S-matrix via formula_31 where formula_32 are the external momenta. From the LSZ reduction formula it then follows that the matrix element is equivalent to the amputated connected momentum space correlation function with properly orientated external momenta formula_33 For non-scalar theories the reduction formula also introduces external state terms such as polarization vectors for photons or spinor states for fermions. The requirement of using the connected correlation functions arises from the cluster decomposition because scattering processes that occur at large separations do not interfere with each other and so can be treated separately. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
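As a drastically simplified illustration of the generating-functional relations given earlier, one can work in a zero-dimensional Euclidean toy model, where the path integral collapses to an ordinary Gaussian integral (written in closed form below) and the factors of i drop out; the names and normalisation are our own. Derivatives of Z[J] reproduce the full moments, while derivatives of ln Z[J] keep only the connected parts, which vanish beyond second order for a free (Gaussian) theory.

```python
# Zero-dimensional toy "path integral": Z(J) is the closed form of
# integral dphi exp(-m^2 phi^2/2 + J phi).  Derivatives of Z at J = 0 give
# moments; derivatives of log Z give connected moments.
import sympy as sp

J = sp.symbols('J', real=True)
m = sp.symbols('m', positive=True)

Z = sp.sqrt(2*sp.pi)/m * sp.exp(J**2/(2*m**2))   # Gaussian integral, closed form
W = sp.log(Z)

two_point  = sp.simplify(sp.diff(Z, J, 2).subs(J, 0) / Z.subs(J, 0))    # 1/m**2
four_point = sp.simplify(sp.diff(Z, J, 4).subs(J, 0) / Z.subs(J, 0))    # 3/m**4
four_point_connected = sp.simplify(sp.diff(W, J, 4).subs(J, 0))         # 0

print(two_point, four_point, four_point_connected)
```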
[ { "math_id": 0, "text": "\\phi(x)" }, { "math_id": 1, "text": "|\\Omega\\rangle" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\nG_n(x_1,\\dots, x_n) = \\langle \\Omega|T\\{\\mathcal \\phi(x_1)\\dots \\mathcal \\phi(x_n)\\}|\\Omega\\rangle.\n" }, { "math_id": 4, "text": "T\\{\\cdots \\}" }, { "math_id": 5, "text": "\nG_n(x_1, \\dots, x_n) = \\frac{\\langle 0|T\\{\\phi(x_1)\\dots \\phi(x_n)e^{iS[\\phi]}\\}|0\\rangle}{\\langle 0|e^{i S[\\phi]}|0\\rangle},\n" }, { "math_id": 6, "text": "|0\\rangle" }, { "math_id": 7, "text": "S[\\phi]" }, { "math_id": 8, "text": "e^{iS[\\phi]}" }, { "math_id": 9, "text": "\\langle 0|e^{iS[\\phi]}|0\\rangle" }, { "math_id": 10, "text": "\\langle 0|\\phi(x_1)\\dots \\phi(x_n)e^{iS[\\phi]}|0\\rangle" }, { "math_id": 11, "text": "\\times" }, { "math_id": 12, "text": "\nG_n(x_1, \\dots, x_n) = \\langle 0|T\\{\\phi(x_1) \\dots \\phi(x_n)e^{iS[\\phi]}\\}|0\\rangle_{\\text{no bubbles}}.\n" }, { "math_id": 13, "text": "\nG_n^c(x_1, \\dots, x_n) = \\langle 0| T\\{\\phi(x_1)\\dots \\phi(x_n) e^{iS[\\phi]}\\}|0\\rangle_{\\text{connected, no bubbles}}\n" }, { "math_id": 14, "text": "\nG_n(x_1, \\dots, x_n) = \\frac{\\int \\mathcal D \\phi \\ \\phi(x_1) \\dots \\phi(x_n) e^{iS[\\phi]}}{\\int \\mathcal D \\phi \\ e^{iS[\\phi]}}.\n" }, { "math_id": 15, "text": "Z[J]" }, { "math_id": 16, "text": "J" }, { "math_id": 17, "text": "\nG_n(x_1, \\dots, x_n) = (-i)^n \\frac{1}{Z[J]} \\left.\\frac{\\delta^n Z[J]}{\\delta J(x_1) \\dots \\delta J(x_n)}\\right|_{J=0}.\n" }, { "math_id": 18, "text": "W[J] = -i \\ln Z[J]" }, { "math_id": 19, "text": "-i" }, { "math_id": 20, "text": "\nG_n^c(x_1, \\dots, x_n) = (-i)^{n-1} \\left.\\frac{\\delta^n W[J]}{\\delta J(x_1) \\dots \\delta J(x_n)}\\right|_{J=0}.\n" }, { "math_id": 21, "text": "\n\\langle f|S|i\\rangle = \\left[i \\int d^4 x_1 e^{-ip_1 x_1} \\left(\\partial^2_{x_1} + m^2\\right)\\right]\\cdots \\left[i \\int d^4 x_n e^{ip_n x_n} \\left(\\partial_{x_n}^2 + m^2\\right)\\right] \\langle \\Omega |T\\{\\phi(x_1)\\dots \\phi(x_n)\\}|\\Omega\\rangle.\n" }, { "math_id": 22, "text": "|i\\rangle" }, { "math_id": 23, "text": "|f\\rangle" }, { "math_id": 24, "text": "+i" }, { "math_id": 25, "text": "x_i" }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": "\n\\left(\\partial^2_{x_i} + m^2\\right)\\Delta_F(x_i,x) = -i\\delta^4(x_i-x).\n" }, { "math_id": 28, "text": "\\tilde G(q_1, \\dots, q_n)" }, { "math_id": 29, "text": "\n(2\\pi)^4 \\delta^{(4)}(q_1+\\cdots + q_n) \\tilde G_n(q_1, \\dots, q_n) = \\int d^4 x_1 \\dots d^4 x_n \\left(\\prod^n_{i=1} e^{-i q_i x_i}\\right) G_n(x_1, \\dots, x_n),\n" }, { "math_id": 30, "text": "\\mathcal M" }, { "math_id": 31, "text": "\\langle f| S - 1 |i\\rangle = i(2\\pi)^4 \\delta^4{\\bigg(\\sum_i p_i\\bigg)} \\mathcal M" }, { "math_id": 32, "text": "p_i" }, { "math_id": 33, "text": "\ni \\mathcal M = \\tilde G_n^c(p_1, \\dots, -p_n)_{\\text{amputated}}.\n" } ]
https://en.wikipedia.org/wiki?curid=701185
701188
Quantum vacuum state
Lowest-energy state of a field in quantum field theories, corresponding to no particles present In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field. According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space". According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of the quantum field. The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s, it was reformulated by Feynman, Tomonaga, and Schwinger, who jointly received the Nobel prize for this work in 1965. Today, the electromagnetic interactions and the weak interactions are unified (at very high energies only) in the theory of the electroweak interaction. The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics (or QCD) is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions. Non-zero expectation value. If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground state of a measurement problem. In this case, the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity), field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass. Energy. The vacuum state is associated with a zero-point energy, and this zero-point energy (equivalent to the lowest possible energy state) has measurable effects. It may be detected as the Casimir effect in the laboratory. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. The energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV). An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant. Symmetry. For a relativistic field theory, the vacuum is Poincaré invariant, which follows from the Wightman axioms but can also be proved directly without these axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory.
In this case, the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, standard model. Non-linear permittivity. Quantum corrections to Maxwell's equations are expected to result in a tiny nonlinear electric polarization term in the vacuum, resulting in a field-dependent electrical permittivity ε deviating from the nominal value ε0 of vacuum permittivity. These theoretical developments are described, for example, in Dittrich and Gies. The theory of quantum electrodynamics predicts that the QED vacuum should exhibit a slight nonlinearity so that in the presence of a very strong electric field, the permittivity is increased by a tiny amount with respect to ε0. Subject to ongoing experimental efforts is the possibility that a strong electric field would modify the effective permeability of free space, becoming anisotropic with a value slightly below "μ"0 in the direction of the electric field and slightly exceeding "μ"0 in the perpendicular direction. The quantum vacuum exposed to an electric field exhibits birefringence for an electromagnetic wave traveling in a direction other than the electric field. The effect is similar to the Kerr effect but without matter being present. This tiny nonlinearity can be interpreted in terms of virtual pair production. A characteristic electric field strength for which the nonlinearities become sizable is predicted to be enormous, about formula_0 V/m, known as the Schwinger limit; the equivalent Kerr constant has been estimated, being about 10²⁰ times smaller than the Kerr constant of water. Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed. Experimentally measuring such an effect is challenging, and has not yet been successful. Virtual particles. The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles". It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle: formula_1 (with Δ"E" and Δ"t" being the energy and time variations respectively; Δ"E" is the accuracy in the measurement of energy and Δ"t" is the time taken in the measurement, and "ħ" is the reduced Planck constant) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δ"t" determines a "budget" for borrowing energy Δ"E". Another issue is the meaning of "time" in this relation because energy and time (unlike position "q" and momentum "p", for example) do not satisfy a canonical commutation relation (such as ["q", "p"] = i"ħ"). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. 
The many approaches to the energy-time uncertainty principle are a long and continuing subject of study. Physical nature of the quantum vacuum. According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a "Gedankenexperiment" [thought experiment] the quantum vacuum state." According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows: It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations. Photon-photon interaction can occur only through interaction with the vacuum state of some other field, such as the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization. According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations." This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on. According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero." In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes: The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ... Milonni provides detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects." This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, "α", goes to zero." Notations. The vacuum state is written as formula_2 or formula_3. The vacuum expectation value (see also Expectation value) of any field formula_4 should be written as formula_5. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
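The size of the Schwinger limit quoted in the section on nonlinear permittivity can be recovered from the electron mass and charge alone; the sketch below (our own, using scipy's CODATA constants) evaluates E_S = m_e²c³/(eħ).

```python
# Schwinger limit: the electric field scale at which QED vacuum nonlinearities
# become large, E_S = m_e^2 c^3 / (e hbar).
from scipy.constants import m_e, c, e, hbar

E_S = m_e**2 * c**3 / (e * hbar)
print(f"{E_S:.2e} V/m")   # ~1.32e+18 V/m
```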
[ { "math_id": 0, "text": "1.32 \\times 10^{18}" }, { "math_id": 1, "text": "\\Delta E \\Delta t \\ge \\frac{\\hbar}{2} \\, , " }, { "math_id": 2, "text": "|0\\rangle" }, { "math_id": 3, "text": "|\\rangle" }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "\\langle0|\\phi|0\\rangle" } ]
https://en.wikipedia.org/wiki?curid=701188
70120049
CASS microscopy
CASS is an acronym for Collective Accumulation of Single Scattering. This technique collects the faint single-scattering signal among the intense multiple-scattering background in a biological sample, thereby enabling conventional diffraction-limited imaging of a target embedded in a turbid sample. Principle. CASS microscopy makes use of time-gated detection and spatial input-output wave correlation. A theoretical description is given below. Input-Output Relationship for a given Object Function. Let formula_0 be a planar object function that we wish to reconstruct. Then, it is related to its Fourier transform formula_1 by formula_2 where formula_3 represents a 2-dimensional wavevector. Now, let's take a look at the relation between input and output waves in reflection geometry. formula_4 where we assumed the incoming wave is a plane wave. Then, the angular spectrum of the output field with given input field is formula_5 where formula_6 has been used. Coherent Addition. Now, consider a reflection matrix in wavevector space without aberration. formula_7 where formula_8 explains the attenuation of the single-scattered wave, and formula_9 explains the attenuation of the time-gated multiple-scattered waves. With formula_10, the total summation of the output field over all possible input wavevectors becomes: formula_11 from which we observe that the single-scattered field adds up coherently with the increasing number of incoming wavevectors, whereas the multiple-scattered field adds up incoherently. Accordingly, the output intensity behaves as follows with the number of incoming wavevectors "N": formula_12 Comparison to Confocal Microscopy. CASS microscopy has a lot in common with confocal microscopy, which enables optical sectioning by eliminating scattered light from other planes by using a confocal pinhole. The main difference between these two microscopy modalities comes from whether the basis of illumination is in position space or in momentum space. So, let us try to understand the principle of confocal microscopy in terms of the momentum basis here. In confocal microscopy, the effect of the pinhole can be understood by the condition that formula_13 for all possible input wavevector formula_14's, where it is assumed that the illumination is focused at formula_15. The resulting field from confocal microscopy (CM) then becomes formula_16 where "N" refers to the number of possible input wavevector formula_14's. The formula above gives formula_17 for the case of formula_18. Application. Rat brain imaging through skull. CASS microscopy has been used to image a rat brain without removing the skull. It has been further developed such that light energy can be delivered onto the target beneath the skull by using a reflection eigenchannel, and an approximately 10-fold increase in light energy delivery has been reported. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
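The N² versus N scaling in the coherent-addition step can be illustrated with a small Monte Carlo experiment (our own sketch, not the actual reconstruction pipeline): field contributions sharing a single phase add to an intensity growing like N², while contributions with random phases, mimicking the time-gated multiple-scattering background, average to an intensity growing like N.

```python
# Coherent versus incoherent addition: |sum of N in-phase terms|^2 ~ N^2,
# while the mean of |sum of N random-phase terms|^2 ~ N.
import numpy as np

rng = np.random.default_rng(0)
trials = 2000
for N in (10, 100, 1000):
    coherent = abs(np.ones(N).sum())**2
    phases = rng.uniform(0, 2*np.pi, size=(trials, N))
    incoherent = np.mean(np.abs(np.exp(1j*phases).sum(axis=1))**2)
    print(N, coherent, round(float(incoherent), 1))   # ~N^2 versus ~N
```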
[ { "math_id": 0, "text": "O(\\mathbf{r})" }, { "math_id": 1, "text": "\\tilde{O}(\\mathbf{k}_s)" }, { "math_id": 2, "text": "O(\\mathbf{r}) = \\int \\tilde{O}(\\mathbf{k}_s) e^{i\\mathbf{k}_s\\cdot \\mathbf{r}} d\\mathbf{k}_s" }, { "math_id": 3, "text": "\\mathbf{k}_s" }, { "math_id": 4, "text": "E_o(\\mathbf{r}) = O(\\mathbf{r}) E_i(\\mathbf{r}) = O(\\mathbf{r}) e^{i \\mathbf{k}_i \\cdot \\mathbf{r}}" }, { "math_id": 5, "text": "\n\\tilde{E}_o(\\mathbf{k}_o, \\mathbf{k}_i) = \\int E_o(\\mathbf{r}_o;\\mathbf{k}_i)e^{i\\mathbf{k}_o\\cdot \\mathbf{r}_o} d\\mathbf{r}_o = \\tilde{O}(\\mathbf{k}_o-\\mathbf{k}_i)\n" }, { "math_id": 6, "text": "E_o(\\mathbf{r}_o;\\mathbf{k}_i) = O(\\mathbf{r}_o)e^{i \\mathbf{k}_i \\cdot \\mathbf{r}_o} = \\int \\tilde{O}(\\mathbf{k}_s)e^{i(\\mathbf{k}_i+\\mathbf{k}_s)\\cdot \\mathbf{r}_o} d\\mathbf{k}_s" }, { "math_id": 7, "text": "\\tilde{E}_o(\\mathbf{k}_o;\\mathbf{k}_i) = \\sqrt{\\gamma}\\tilde{O}(\\mathbf{k}_o-\\mathbf{k}_i) + \\sqrt{\\beta}\\tilde{E}_M(\\mathbf{k}_o;\\mathbf{k}_i)" }, { "math_id": 8, "text": "\\gamma(z)=\\exp{(-2z/l_s)}" }, { "math_id": 9, "text": "\\beta" }, { "math_id": 10, "text": "\\Delta\\mathbf{k} \\equiv \\mathbf{k}_o-\\mathbf{k}_i" }, { "math_id": 11, "text": "\\tilde{E}_{CASS}(\\Delta\\mathbf{k}) = \\sum_{k_i}^N \\tilde{E}(\\Delta\\mathbf{k}+\\mathbf{k}_i;\\mathbf{k}_i) = N\\sqrt{\\gamma}\\tilde{O}(\\Delta\\mathbf{k}) + \\sum_{k_i}^N \\sqrt{\\beta}\\tilde{E}(\\Delta\\mathbf{k}+\\mathbf{k}_i;\\mathbf{k}_i) " }, { "math_id": 12, "text": "I_{CASS} \\sim \\gamma N^2 |\\tilde{O}(\\Delta\\mathbf{k})|^2 + \\beta N" }, { "math_id": 13, "text": "A(\\mathbf{k}_i)e^{i\\mathbf{k}_i\\cdot\\mathbf{r}_c}=1" }, { "math_id": 14, "text": "\\mathbf{k}_i" }, { "math_id": 15, "text": "\\mathbf{r}=\\mathbf{r}_c" }, { "math_id": 16, "text": "E_{CM}(\\mathbf{r}_o) = \\sum_{\\mathbf{k}_i}^N E_o(\\mathbf{r}_o ; \\mathbf{k}_i) = \\sum_{\\mathbf{k}_i} A(\\mathbf{k}_i)e^{i\\mathbf{k}_i\\cdot\\mathbf{r}_o} O(\\mathbf{r}_o) = \\sum_{\\mathbf{k}_i} e^{i\\mathbf{k}_i\\cdot(\\mathbf{r}_o-\\mathbf{r}_c)}O(\\mathbf{r}_o)" }, { "math_id": 17, "text": "E_{CM}(\\mathbf{r}_o) = N \\cdot O(\\mathbf{r}_c)" }, { "math_id": 18, "text": "\\mathbf{r}_o=\\mathbf{r}_c" } ]
https://en.wikipedia.org/wiki?curid=70120049
701207
Radix
Number of digits of a numeral system In a positional numeral system, the radix (pl.: radices) or base is the number of unique digits, including the digit zero, used to represent numbers. For example, for the decimal system (the most common system in use today) the radix is ten, because it uses the ten digits from 0 through 9. In any standard positional numeral system, a number is conventionally written as ("x")"y" with "x" as the string of digits and "y" as its base, although for base ten the subscript is usually assumed (and omitted, together with the pair of parentheses), as it is the most common way to express value. For example, (100)10 is equivalent to 100 (the decimal system is implied in the latter) and represents the number one hundred, while (100)2 (in the binary system with base 2) represents the number four. Etymology. "Radix" is a Latin word for "root". "Root" can be considered a synonym for "base," in the arithmetical sense. In numeral systems. Generally, in a system with radix "b" ("b" &gt; 1), a string of digits "d"1 ... "dn" denotes the number "d"1"b""n"−1 + "d"2"b""n"−2 + … + "dnb"0, where 0 ≤ "di" &lt; "b". In contrast to decimal, or radix 10, which has a ones' place, tens' place, hundreds' place, and so on, radix "b" would have a ones' place, then a "b"1s' place, a "b"2s' place, etc. For example, if "b" = 12, a string of digits such as 59A (where the letter "A" represents the value of ten) would represent the value "5" × "12""2" + "9" × "12""1" + "10" × "12""0" = 838 in base 10. Commonly used numeral systems include: The octal and hexadecimal systems are often used in computing because of their ease as shorthand for binary. Every hexadecimal digit corresponds to a sequence of four binary digits, since sixteen is the fourth power of two; for example, hexadecimal 7816 is binary 011110002. Similarly, every octal digit corresponds to a unique sequence of three binary digits, since eight is the cube of two. This representation is unique. Let "b" be a positive integer greater than 1. Then every positive integer "a" can be expressed uniquely in the form formula_0 where "m" is a nonnegative integer and the "r"'s are integers such that 0 &lt; "r""m" &lt; "b" and 0 ≤ "r""i" &lt; "b" for "i" = 0, 1, ... , "m" − 1. Radices are usually natural numbers. However, other positional systems are possible, for example, golden ratio base (whose radix is a non-integer algebraic number), and negative base (whose radix is negative). A negative base allows the representation of negative numbers without the use of a minus sign. For example, let "b" = −10. Then a string of digits such as 19 denotes the (decimal) number 1 × (−10)1 + 9 × (−10)0 = −1. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
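The digit-weighting rule above is easy to check in code; the snippet below (our own) evaluates the base-12 example and the base −10 example with the same loop.

```python
# Evaluate a digit string d1...dn in base b as d1*b^(n-1) + ... + dn*b^0.
def from_radix(digits, b):
    value = 0
    for d in digits:
        value = value * b + d
    return value

print(int("59A", 12))              # 838 (built-in conversion, b = 12)
print(from_radix([5, 9, 10], 12))  # 838 by the positional formula
print(from_radix([1, 9], -10))     # -1, the negative-base example b = -10
```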
[ { "math_id": 0, "text": "a = r_m b^m + r_{m-1} b^{m-1} + \\dotsb + r_1 b + r_0," } ]
https://en.wikipedia.org/wiki?curid=701207
70123605
Normal distributions transform
The normal distributions transform (NDT) is a point cloud registration algorithm introduced by Peter Biber and Wolfgang Straßer in 2003, while working at the University of Tübingen. The algorithm registers two point clouds by first associating a piecewise normal distribution to the first point cloud, that gives the probability of sampling a point belonging to the cloud at a given spatial coordinate, and then finding a transform that maps the second point cloud to the first by maximising the likelihood of the second point cloud under that distribution as a function of the transform parameters. Originally introduced for 2D point cloud map matching in simultaneous localization and mapping (SLAM) and relative position tracking, the algorithm was extended to 3D point clouds and has wide applications in computer vision and robotics. NDT is very fast and accurate, making it suitable for application to large scale data, but it is also sensitive to initialisation, requiring a sufficiently accurate initial guess, and for this reason it is typically used in a coarse-to-fine alignment strategy. Formulation. The NDT function associated to a point cloud is constructed by partitioning the space into regular cells. For each cell, it is possible to define the mean formula_0 and covariance formula_1 of the formula_2 points of the cloud formula_3 that fall within the cell. The probability density of sampling a point at a given spatial location formula_4 within the cell is then given by the normal distribution formula_5. Two point clouds can be mapped by a Euclidean transformation formula_6 with rotation matrix formula_7 and translation vector formula_8 formula_9 that maps from the second cloud to the first, parametrised by the rotation angles and translation components. The algorithm registers the two point clouds by optimising the parameters of the transformation that maps the second cloud to the first, with respect to a loss function based on the NDT of the first point cloud, solving the following problem formula_10 where the loss function represents the negated likelihood, obtained by applying the transformation to all points in the second cloud and summing the value of the NDT at each transformed point formula_11. The loss is piecewise continuous and differentiable, and can be optimised with gradient-based methods (in the original formulation, the authors use Newton's method). In order to reduce the effect of cell discretisation, a technique consists of partitioning the space into multiple overlapping grids, shifted by half cell size along the spatial directions, and computing the likelihood at a given location as the sum of the NDTs induced by each grid.
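A minimal two-dimensional version of the procedure described above can be sketched as follows (our own code and parameter choices, not the reference implementation; a practical version would add proper covariance safeguards, the overlapping-grid trick and a Newton-type optimiser):

```python
# Minimal 2D NDT sketch: build per-cell Gaussians from the first cloud, then
# fit a rotation + translation of the second cloud by minimising the negated
# sum of (unnormalised) Gaussian scores at the transformed points.
import numpy as np
from scipy.optimize import minimize

def build_ndt(points, cell):
    cells = {}
    grid = np.floor(points / cell).astype(int)
    for key in {tuple(k) for k in grid}:
        members = points[np.all(grid == key, axis=1)]
        if len(members) >= 3:                          # need enough points per cell
            q = members.mean(axis=0)
            S = np.cov(members.T) + 1e-6*np.eye(2)     # small regularisation (our choice)
            cells[key] = (q, np.linalg.inv(S))
    return cells

def loss(params, moving, cells, cell):
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    total = 0.0
    for p in moving @ R.T + np.array([tx, ty]):        # transform f(x) = R x + t
        key = tuple(np.floor(p / cell).astype(int))
        if key in cells:
            q, S_inv = cells[key]
            d = p - q
            total += np.exp(-0.5 * d @ S_inv @ d)
    return -total                                      # negated likelihood-style score

# Toy data: the second cloud is the first one moved by a known transform.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 10, size=(500, 2))
theta0, t0 = 0.1, np.array([0.3, -0.2])
R0 = np.array([[np.cos(theta0), -np.sin(theta0)],
               [np.sin(theta0),  np.cos(theta0)]])
moving = (reference - t0) @ R0                         # so that R0 x + t0 maps it back

cell = 2.0
cells = build_ndt(reference, cell)
fit = minimize(loss, x0=[0.0, 0.0, 0.0], args=(moving, cells, cell),
               method='Nelder-Mead')
print(fit.x)                                           # close to (0.1, 0.3, -0.2)
```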
[ { "math_id": 0, "text": "\\textstyle \\mathbf{q} = \\frac{1}{n} \\sum_i \\mathbf{x_i}" }, { "math_id": 1, "text": "\\textstyle \\mathbf{S} = \\frac{1}{n} \\sum_i \\left(\\mathbf{x}_i - \\mathbf{q}\\right) \\left(\\mathbf{x}_i - \\mathbf{q}\\right)^\\top" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\mathbf{x}_1, \\dots, \\mathbf{x}_n" }, { "math_id": 4, "text": "\\mathbf{x}" }, { "math_id": 5, "text": "e^{-\\frac{1}{2} \\left(\\mathbf{x} - \\mathbf{q}\\right)^\\top \\mathbf{S}^{-1} \\left(\\mathbf{x} - \\mathbf{q}\\right)}" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\mathbf{R}" }, { "math_id": 8, "text": "\\mathbf{t}" }, { "math_id": 9, "text": "f_{\\mathbf{R}, \\mathbf{t}}(\\mathbf{x}) = \\mathbf{R} \\mathbf{x} + \\mathbf{t}" }, { "math_id": 10, "text": "\\arg\\min_{\\mathbf{R}, \\mathbf{t}} \\left\\{ -\\sum_i \\operatorname{NDT} \\left( f_{\\mathbf{R}, \\mathbf{t}} \\left( \\mathbf{x_i} \\right) \\right) \\right\\}" }, { "math_id": 11, "text": "f_{\\mathbf{R}, \\mathbf{t}}(\\mathbf{x})" } ]
https://en.wikipedia.org/wiki?curid=70123605
70124884
Bernoulli umbra
In umbral calculus, the Bernoulli umbra formula_0 is an umbra, a formal symbol, defined by the relation formula_1, where formula_2 is the index-lowering operator, also known as the evaluation operator, and formula_3 are the Bernoulli numbers, called "moments" of the umbra. A similar umbra, defined as formula_4, where formula_5, is also often used and sometimes called the Bernoulli umbra as well. They are related by the equality formula_6. Along with the Euler umbra, the Bernoulli umbra is one of the most important umbras. In the Levi-Civita field, Bernoulli umbras can be represented by elements with power series formula_7 and formula_8, with the index-lowering operator corresponding to taking the coefficient of formula_9 of the power series. The numerators of the terms are given in OEIS A118050 and the denominators are in OEIS A118051. Since the coefficients of formula_10 are non-zero, both are infinitely large numbers, formula_0 being infinitely close to (but slightly smaller than) formula_11 and formula_12 being infinitely close to (but slightly smaller than) formula_13. In Hardy fields (which are generalizations of the Levi-Civita field) the umbra formula_12 corresponds to the germ at infinity of the function formula_14 while formula_0 corresponds to the germ at infinity of formula_15, where formula_16 is the inverse digamma function. Exponentiation. Since Bernoulli polynomials are a generalization of Bernoulli numbers, exponentiation of the Bernoulli umbra can be expressed via Bernoulli polynomials: formula_17 where formula_18 is a real or complex number. This can be further generalized using the Hurwitz zeta function: formula_19 From the Riemann functional equation for the zeta function it follows that formula_20 Derivative rule. Since formula_5 and formula_21 are the only two members of the sequences formula_22 and formula_3 that differ, the following rule follows for any analytic function formula_23: formula_24 Elementary functions of Bernoulli umbra. As a general rule, the following formula holds for any analytic function formula_23: formula_25 This allows one to derive expressions for elementary functions of the Bernoulli umbra. formula_26 formula_27 formula_28 formula_29 Particularly, formula_30 formula_31 formula_32 formula_33 formula_34 Particularly, formula_35, formula_36. Relations between exponential and logarithmic functions. The Bernoulli umbra allows one to establish relations between exponential, trigonometric and hyperbolic functions on one side and logarithms, inverse trigonometric and inverse hyperbolic functions on the other side in closed form: formula_37 formula_38 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
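The evaluation rule can be checked order by order with a computer algebra system. The sketch below (our own) verifies eval cos(zB) = (z/2)cot(z/2) through order z^10 by replacing every power of the umbra B^(2k) by the Bernoulli number B_(2k) in the cosine series; only even-index Bernoulli numbers enter, so the sign convention for B_1 is irrelevant here.

```python
# Verify eval cos(z*B) = (z/2)*cot(z/2) order by order: substitute the moments
# B^(2k) -> Bernoulli number B_{2k} in the Taylor series of cos(z*B).
import sympy as sp

z = sp.symbols('z')
order = 12

umbral_side = sum((-1)**k * sp.bernoulli(2*k) * z**(2*k) / sp.factorial(2*k)
                  for k in range(order // 2))
closed_form = sp.series(z/2*sp.cot(z/2), z, 0, order).removeO()

print(sp.expand(umbral_side - closed_form))   # 0
```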
[ { "math_id": 0, "text": "B_-" }, { "math_id": 1, "text": "\\operatorname{eval}B_-^n=B^-_n" }, { "math_id": 2, "text": "\\operatorname{eval}" }, { "math_id": 3, "text": "B^-_n" }, { "math_id": 4, "text": "\\operatorname{eval}B_+^n=B^+_n" }, { "math_id": 5, "text": "B^+_1=1/2" }, { "math_id": 6, "text": "B_+=B_-+1" }, { "math_id": 7, "text": "B_-= \\varepsilon^{-1} -\\frac{1}{2}-\\frac{\\varepsilon }{24}+\\frac{3 \\varepsilon ^3}{640}-\\frac{1525 \\varepsilon ^5}{580608}+\\dotsb" }, { "math_id": 8, "text": "B_+= \\varepsilon^{-1} +\\frac{1}{2}-\\frac{\\varepsilon }{24}+\\frac{3 \\varepsilon ^3}{640}-\\frac{1525 \\varepsilon ^5}{580608}+\\dotsb" }, { "math_id": 9, "text": "1=\\varepsilon^0" }, { "math_id": 10, "text": "\\varepsilon^{-1}" }, { "math_id": 11, "text": "\\varepsilon^{-1}-1/2" }, { "math_id": 12, "text": "B_+" }, { "math_id": 13, "text": "\\varepsilon^{-1}+1/2" }, { "math_id": 14, "text": "\\psi^{-1}(\\ln x)" }, { "math_id": 15, "text": "\\psi^{-1}(\\ln x)-1" }, { "math_id": 16, "text": "\\psi^{-1}(x)" }, { "math_id": 17, "text": "\\operatorname{eval} (B_-+a)^n=B_n(a)," }, { "math_id": 18, "text": "a" }, { "math_id": 19, "text": "\\operatorname{eval} (B_-+a)^p=-p\\zeta(1-p,a)." }, { "math_id": 20, "text": "\\operatorname{eval}\\,B_+^{-p}=\\operatorname{eval}\\frac{B_+^{p+1} 2^p\\pi^{p+1}}{\\sin(\\pi p/2)\\Gamma(p)(p+1)}" }, { "math_id": 21, "text": "B^-_1=-1/2" }, { "math_id": 22, "text": "B^+_n" }, { "math_id": 23, "text": "f(x)" }, { "math_id": 24, "text": "f'(x)=\\operatorname{eval}(f(B_++x)-f(B_-+x))=\\operatorname{eval} \\Delta f(B_-+x)" }, { "math_id": 25, "text": "\\operatorname{eval}f(B_-+x)=\\frac{D}{e^D-1} f(x)." }, { "math_id": 26, "text": "\\operatorname{eval} \\cos (z B_-)=\\operatorname{eval} \\cos (z B_+)=\\frac z2 \\cot \\left(\\frac z2\\right)" }, { "math_id": 27, "text": "\\operatorname{eval} \\cosh (z B_-)=\\operatorname{eval} \\cosh (z B_+)=\\frac z2 \\coth \\left(\\frac z2\\right)" }, { "math_id": 28, "text": "\\operatorname{eval} e^{z B_-}=\\frac{z}{e^{z}-1}" }, { "math_id": 29, "text": "\\operatorname{eval}\\ln ( B_-+z)=\\psi(z)" }, { "math_id": 30, "text": "\\operatorname{eval}\\ln B_+=-\\gamma" }, { "math_id": 31, "text": "\\operatorname{eval}\\frac1{\\pi }\\ln \\left(\\frac{ B _+-\\frac{z}{\\pi }}{ B _-+\\frac{z}{\\pi }}\\right)=\\cot z" }, { "math_id": 32, "text": "\\operatorname{eval} \\frac1\\pi\\ln \\left(\\frac{B _-+1/2 +\\frac{z}{\\pi }}{B _-+1/2 -\\frac{z}{\\pi }}\\right)=\\tan z" }, { "math_id": 33, "text": "\\operatorname{eval}\\cos (a B_-+x) = \\frac{a}{2} \\csc \\left(\\frac{a}{2}\\right) \\cos \\left(\\frac{a}{2}- x\\right)" }, { "math_id": 34, "text": "\\operatorname{eval}\\sin (a B_-+x) = \\frac{a}{2} \\cot \\left(\\frac{a}{2}\\right) \\sin x -\\frac{a}{2} \\cos x " }, { "math_id": 35, "text": "\\operatorname{eval}\\sin B_-=-1/2" }, { "math_id": 36, "text": "\\operatorname{eval}\\sin B_+=1/2" }, { "math_id": 37, "text": "\\operatorname{eval}\\left(\\cosh \\left(2 x B _\\pm\\right)-1\\right)=\\operatorname{eval}\\frac{x}{\\pi} \\operatorname{artanh}\\left(\\frac{x}{\\pi B _\\pm}\\right)=\\operatorname{eval}\\frac{x}{\\pi} \\operatorname{arcoth}\\left(\\frac{\\pi B _\\pm}{x}\\right)=x \\coth (x)-1" }, { "math_id": 38, "text": "\\operatorname{eval}\\frac{z}{2\\pi }\\ln \\left(\\frac{ B _+-\\frac{z}{2\\pi }}{ B _-+\\frac{z}{2\\pi }}\\right)=\\operatorname{eval} \\cos (z B_-)=\\operatorname{eval} \\cos (z B_+)=\\frac z2 \\cot \\left(\\frac z2\\right)" } ]
https://en.wikipedia.org/wiki?curid=70124884
70127661
2 Samuel 14
Second Book of Samuel chapter 2 Samuel 14 is the fourteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 33 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–3, 14, 18–19, 33; 15 and 4Q53 (4QSamc; 100–75 BCE) with extant verses 7–33. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter contains the following structure: A. Joab's plan: he sends the wise woman to the king, putting words in her mouth (14:1–3) B. The woman manipulates king David to reconsider Absalom's exile (14:4–17) C. The king recognizes Joab's role and changes his mind on Absalom's exile (14:18–20) B'. The king executes his decision on Absalom's exile (14:21–28) "Interruption": an introduction to Absalom (14:25–27) A'. Absalom's plan: he sends Joab to the king, putting words in his mouth (14:29–33) At the opening, Joab who noticed David's softened heart toward Absalom, devised a plan to bring Absalom back to Jerusalem (A) and at the end, Absalom devised a plan to see David and was reconciled with his father (A'). The climax of these events is when king David detected Joab's plan (C). Absalom returned to Jerusalem (14:1–27). Joab read signs that David was ready for Absalom's return, so Joab used trickery to get David's permission so he could bring Absalom, a possible heir to the throne, back to the king's court. For executing his plan, Joab channeled his plea to David through the mouth of a wise woman from Tekoah who had the special gift of either a gift of speech or a gift for feigning or acting lamentation. There are possible connections between this episode and other biblical passages: The woman presented to David a dilemma: she was a widow with only two sons, that when one murdered the other, she was torn between her duty to avenge the death of one son and her duty to her husband to preserve his name by protecting the life of the remaining son (verse 7). Her community demanded a blood revenge, but her appeal for special consideration so that 'her last ember would not be quenched' touched king David's heart, so he promised a ruling (verse 8), which became a royal oath on the woman's further insistence that no one would touch her son. 
The oath placed David in jeopardy because he had condemned himself for his treatment of Absalom as the woman argued (verse 14): all would die, and Amnon's death cannot be changed by keeping Absalom in banishment. The parallel of the parable devised by Joab to be spoken by the woman to the story of Cain and Abel can be summarized below: Apparently Joab crafted the tale assuming that David had a masterful knowledge of the Torah, and that David would use it as an authoritative guide in making his legal decisions (cf. Nathan's parable; 2 Samuel 12:6), so the king would give the same verdict that the Lord issued for Cain. At this time, David realized that the woman's action was actually Joab's doing, still he acceded to the request that Absalom be allowed to return, although not be granted full privileges (verse 24). The section comprising verses 25–27 provides specific descriptions on Absalom — his beauty and in particular to the weight of his hair— as well as his children, probably intended to show the popularity of Absalom among the people of Israel. "And to Absalom there were born three sons, and one daughter whose name was Tamar; she was a woman of beautiful appearance." Absalom reconciled to David (14:28–33). After waiting for two years without any signs of progress in his relationship with his father, Absalom took one desperate action against Joab, by burning Joab's field, to get Joab's attention and compelled Joab to bring Absalom to David. Finally Absalom met David and given a kiss (verse 33) as a sign of reconciliation. "So Joab came to the king, and told him: and when he had called for Absalom, he came to the king, and bowed himself on his face to the ground before the king: and the king kissed Absalom." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70127661
7012960
Geertruida de Haas-Lorentz
Dutch physicist (1885–1973) Geertruida Luberta "Berta" de Haas-Lorentz (20 November 1885 – 1973) was a Dutch physicist and professor of the Technical University of Delft. She was the first to theoretically study thermal fluctuations in electric circuits, treating electrons as Brownian particles. Consequently she is considered one of the pioneers of electrical noise theory. She was the daughter and doctoral student of Hendrik Lorentz. She went by the name Berta, or Ber. Life. Berta Lorentz was born in Leiden, Netherlands, the eldest daughter of the physicist and 1902 Nobel Prize in Physics winner Hendrik Lorentz and Aletta Catharina Kaiser. Berta was the eldest of four children. Her siblings were Johanna Wilhelmina (born 1889), Gerrit (born 1893, died 1894), and Rudolf (born 1895). At the time of her birth, her father was Professor of Theoretical Physics at the University of Leiden. Her mother, Aletta Kaiser, took care of the children and household, did charity work, and was heavily involved with the local women's suffrage movement. On 22 December 1910, Berta Lorentz married Wander Johannes de Haas, who would become professor of experimental physics in Leiden, and they went on to have two sons and two daughters. Some of their children changed their last name to "Lorentz de Haas." She studied physics at the University of Leiden with her father as dissertation advisor and earned her doctor's degree in 1912 with a thesis entitled "On the theory of Brownian motion and related phenomena". After defending her doctoral dissertation in Leiden, de Haas-Lorentz taught physics at the Technical University of Delft and translated some of her father's works into German. She also wrote a biography of her father. Berta de Haas-Lorentz died in 1973 in Leiden. Research. De Haas-Lorentz was one of the first to apply Albert Einstein's theory of Brownian motion to other domains. During her thesis work, she was the first to carry out a theoretical analysis of the thermal fluctuation of electrons in electrical circuits, predating the experimental discovery of the Johnson–Nyquist noise. She considered that a circuit with resistance "R" and inductance "L" should store an energy "E" = "LI"2/2, where "I" is the current. If there is a fluctuating thermal current, by the equipartition theorem this energy is related to the thermal energy "kT", where "k" is the Boltzmann constant and "T" is the temperature. De Haas-Lorentz obtained formula_0, where the angle brackets denote the thermal average. She was also the first to propose that thermal fluctuations limit the detection of electromagnetic radiation. In collaboration with her husband, the de Haas couple showed that experiments carried out by James Clerk Maxwell failed to prove the hypothesis of André-Marie Ampère that magnetism in matter is caused by microscopic current loops. She also predicted the London penetration depth for superconductivity in 1925, before the development of the London equations in 1935. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
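As a rough numerical illustration of the equipartition argument above (the component values below are arbitrary and not from the article), the RMS fluctuating current follows directly from ⟨I²⟩ = kT/L:

```python
from math import sqrt

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed room temperature)
L = 1e-3             # inductance, H (arbitrary example value)

# Equipartition: (1/2) * L * <I^2> = (1/2) * k_B * T  =>  <I^2> = k_B * T / L
i_rms = sqrt(k_B * T / L)
print(f"RMS thermal current for L = {L} H at T = {T} K: {i_rms:.3e} A")
```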
[ { "math_id": 0, "text": "\\sqrt{\\langle I^2\\rangle}=\\sqrt{k T / L}" } ]
https://en.wikipedia.org/wiki?curid=7012960
7013774
Omnibus test
Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance. There can be legitimate significant effects within a model even if the omnibus test is not significant. For instance, in a model with two independent variables, if only one variable exerts a significant effect on the dependent variable and the other does not, then the omnibus test may be non-significant. This fact does not affect the conclusions that may be drawn from the one significant variable. In order to test effects within an omnibus test, researchers often use contrasts. "Omnibus test", as a general name, refers to an overall or a global test; other names include the F-test and the chi-squared test. It is a statistical test applied to an overall hypothesis in order to find general significance among parameters of the same type, such as: hypotheses regarding equality vs. inequality between k expectancies "μ"1 = "μ"2 = ⋯ = "μ""k" vs. at least one pair "μ""j" ≠ "μ""j′", where "j", "j′" = 1, ..., "k" and "j" ≠ "j′", in Analysis Of Variance (ANOVA); or regarding equality between k standard deviations "σ"1 = "σ"2 = ⋯ = "σ""k" vs. at least one pair "σj" ≠ "σj′" in testing equality of variances in ANOVA; or regarding coefficients "β"1 = "β"2 = ⋯ = "β""k" vs. at least one pair "βj" ≠ "βj′" in Multiple linear regression or in Logistic regression. Usually, it tests more than two parameters of the same type and its role is to find general significance of at least one of the parameters involved. Definitions. "Omnibus test" commonly refers to one of the statistical tests described in the sections below. These omnibus tests are usually conducted whenever one wants to test an overall hypothesis on a quadratic statistic (like a sum of squares, variance or covariance) or a rational quadratic statistic (like the overall F test in Analysis of Variance, the F test in Analysis of Covariance, the F test in Linear Regression, or the chi-square test in Logistic Regression). Although significance is established by the omnibus test, it does not specify exactly where the difference occurs; that is, it does not identify which parameter differs significantly from the others, but it does determine statistically that there is a difference, so at least two of the tested parameters are statistically different. If significance is found, none of these tests will tell specifically which mean differs from the others (in ANOVA), which coefficient differs from the others (in regression), etc. In one-way analysis of variance. The F-test in ANOVA is an example of an omnibus test, which tests the overall significance of the model. A significant F test means that, among the tested means, at least two of the means are significantly different, but this result does not specify exactly which means differ from one another. Testing for differences among the means is done with the rational quadratic F statistic (F = MSB/MSW). In order to determine which mean differs from another mean, or which contrast of means is significantly different, post hoc tests (multiple comparison tests) or planned tests should be conducted after obtaining a significant omnibus F test, possibly using the simple Bonferroni correction or another suitable correction. Another omnibus test found in ANOVA is the F test for one of the ANOVA assumptions: the equality of variances between groups.
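Both of these omnibus tests (the overall F test on the group means, formalized in the next subsection, and Levene's test for the equality of variances) can be illustrated with a short SciPy sketch; the groups below are made-up data used only to show the calls:

```python
import numpy as np
from scipy import stats

# Three arbitrary groups (e.g., waiting times measured under three conditions).
g1 = np.array([5.1, 4.9, 6.0, 5.5, 5.8])
g2 = np.array([6.8, 7.1, 6.5, 7.4, 6.9])
g3 = np.array([5.9, 6.2, 6.0, 5.7, 6.3])

# Omnibus one-way ANOVA F test: H0 is that all group means are equal.
f_stat, p_val = stats.f_oneway(g1, g2, g3)
print(f"ANOVA omnibus F = {f_stat:.3f}, p = {p_val:.4f}")

# Levene's test for the equality-of-variances assumption.
w_stat, p_lev = stats.levene(g1, g2, g3)
print(f"Levene W = {w_stat:.3f}, p = {p_lev:.4f}")
```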
In One-Way ANOVA, for example, the hypotheses tested by the omnibus F test are: H0: μ1 = μ2 = ... = μk H1: at least one pair μj ≠ μj' These hypotheses examine the fit of the most common model: yij = μj + εij, where yij is the dependent variable, μj is the expectancy of the j-th group, usually referred to as the "group expectancy" or "factor expectancy", and εij are the errors resulting from use of the model. The F statistic of the omnibus test is: formula_0 where formula_1 is the overall sample mean, formula_2 is the sample mean of group j, k is the number of groups and nj is the sample size of group j. The F statistic follows an F(k-1, n-k) distribution under the null hypothesis and the normality assumption. The F test is considered robust in some situations, even when the normality assumption is not met. Model assumptions in one-way ANOVA. If the assumption of equality of variances is not met, Tamhane's test is preferred. When this assumption is satisfied we can choose among several tests. Although the LSD (Fisher's Least Significant Difference) is a very powerful test for detecting differences between pairs of means, it is applied only when the F test is significant, and it is generally less preferable since it fails to keep the error rate low. The Bonferroni test is a good choice because of the correction it applies: if n independent tests are to be performed, then the α in each test should be set equal to α/n. Tukey's method is also preferred by many statisticians because it controls the overall error rate. On small sample sizes, when the assumption of normality is not met, a nonparametric analysis of variance can be carried out with the Kruskal-Wallis test. An alternative option is to use bootstrap methods to assess whether the group means are different. Bootstrap methods do not have any specific distributional assumptions; re-sampling is one of the simplest bootstrap methods, and the idea can be extended to the case of multiple groups in order to estimate p-values. Example. A cellular-service survey of customer waiting times was carried out on 1,963 different customers during 7 days in each of 20 consecutive weeks. Assuming that none of the customers called twice and that none of them are related to one another, one-way ANOVA was run in SPSS to test for significant differences in waiting time between the days: Dependent variable: time minutes to respond. The omnibus ANOVA F test results above indicate significant differences in waiting time between the days (P-value = 0.000 &lt; 0.05, α = 0.05). The other omnibus test performed was of the assumption of equality of variances, tested by the Levene F test: Dependent variable: time minutes to respond. The results suggest that the equality-of-variances assumption cannot be made, in which case Tamhane's test can be used for the post hoc comparisons. Considerations. A significant omnibus F test in the ANOVA procedure is a prior requirement before conducting post hoc comparisons; otherwise those comparisons are not required. If the omnibus test fails to find significant differences between all means, it means that no difference has been found between any combination of the tested means. In this way it protects the family-wise Type I error rate, which may be inflated if the omnibus test is skipped. Some debate has occurred about the efficiency of the omnibus F test in ANOVA. In a Review of Educational Research paper (66(3), 269-306), Greg Hancock reviews these problems: William B. 
Ware (1997) claims that the omnibus test significance is required depending on the Post Hoc test is conducted or planned: "... Tukey's HSD and Scheffé's procedure are one-step procedures and can be done without the omnibus F having to be significant. They are "a posteriori" tests, but in this case, "a posteriori" means "without prior knowledge", as in "without specific hypotheses." On the other hand, Fisher's Least Significant Difference test is a two-step procedure. It should not be done without the omnibus F-statistic being significant." William B. Ware (1997) argued that there are a number of problems associated with the requirement of an omnibus test rejection prior to conducting multiple comparisons. Hancock agrees with that approach and sees the omnibus requirement in ANOVA in performing planned tests an unnecessary test and potentially detrimental, hurdle unless it is related to Fisher's LSD, which is a viable option for k=3 groups. Other reason for relating to the omnibus test significance when it is concerned to protect family-wise Type I error. The publication "Review of Educational Research" discusses four problems in the omnibus F test requirement: "First", in a well planned study, the researcher's questions involve specific contrasts of group means' while the omnibus test, addresses each question only tangentially and it is rather used to facilitate control over the rate of Type I error. "Secondly", this issue of control is related to the second point: the belief that an omnibus test offers protection is not completely accurate. When the complete null hypothesis is true, weak family-wise Type I error control is facilitated by the omnibus test; but, when the complete null is false and partial nulls exist, the F-test does not maintain strong control over the family-wise error rate. A "third" point, which Games (1971) demonstrated in his study, is that the F-test may not be completely consistent with the results of a pairwise comparison approach. Consider, for example, a researcher who is instructed to conduct Tukey's test only if an alpha-level F-test rejects the complete null. It is possible for the complete null to be rejected but for the widest ranging means not to differ significantly. This is an example of what has been referred to as non-consonance/dissonance (Gabriel, 1969) or incompatibility (Lehmann, 1957). On the other hand, the complete null may be retained while the null associated with the widest ranging means would have been rejected had the decision structure allowed it to be tested. This has been referred to by Gabriel (1969) as incoherence. One wonders if, in fact, a practitioner in this situation would simply conduct the MCP contrary to the omnibus test's recommendation. The "fourth" argument against the traditional implementation of an initial omnibus F-test stems from the fact that its well-intentioned but unnecessary protection contributes to a decrease in power. The first test in a pairwise MCP, such as that of the most disparate means in Tukey's test, is a form of omnibus test all by itself, controlling the family-wise error rate at the α-level in the weak sense. Requiring a preliminary omnibus F-test amount to forcing a researcher to negotiate two hurdles to proclaim the most disparate means significantly different, a task that the range test accomplished at an acceptable α -level all by itself. 
If these two tests were perfectly redundant, the results of both would be identical to the omnibus test; probabilistically speaking, the joint probability of rejecting both would be α when the complete null hypothesis was true. However, the two tests are not completely redundant; as a result the joint probability of their rejection is less than α. The F-protection therefore imposes unnecessary conservatism (see Bernhardson, 1975, for a simulation of this conservatism). For this reason, and those listed before, we agree with Games' (1971) statement regarding the traditional implementation of a preliminary omnibus F-test: There seems to be little point in applying the overall F test prior to running c contrasts by procedures that set [the family-wise error rate] α ... If the c contrasts express the experimental interest directly, they are justified whether the overall F is significant or not and (family-wise error rate) is still controlled. In multiple regression. In multiple regression, the omnibus test is an ANOVA F test on all the coefficients, which is equivalent to the F test on the multiple correlation R squared. The omnibus F test is an overall test that examines model fit; failure to reject the null hypothesis therefore implies that the suggested linear model is not significantly suitable for the data, i.e. none of the independent variables has been found to be significant in explaining the variation of the dependent variable. These hypotheses examine the fit of the most common model: yi = β0 + β1xi1 + ... + βkxik + εi, estimated by E(yi|xi1...,xik) = β0 + β1xi1 + ... + βkxik, where E(yi|xi1...xik) is the expected value of the dependent variable for the i-th observation, xij is the j-th independent (explanatory) variable, and βj is the j-th coefficient, which indicates the influence of xij on the dependent variable y through its partial correlation with y. The F statistic of the omnibus test is: formula_3 where ȳ is the overall sample mean of yi, ŷi is the regression-estimated mean for the specific set of k independent (explanatory) variables and n is the sample size. The F statistic follows an F(k, n-k-1) distribution under the null hypothesis and the normality assumption. The omnibus F test concerns the following hypotheses on the coefficients: H0: β1 = β2 = ... = βk = 0 H1: at least one βj ≠ 0 The omnibus test examines whether there are any regression coefficients that are significantly non-zero, except for the coefficient β0. The β0 coefficient goes with the constant predictor and is usually not of interest. The null hypothesis is generally thought to be false and is easily rejected with a reasonable amount of data, but in contrast to ANOVA, it is important to do the test anyway. When the null hypothesis cannot be rejected, the data provide no evidence that the predictors are useful: the model with only the constant regression function fits as well as the regression model, which means that no further analysis need be done. In many statistical studies the omnibus test is usually significant, even though some or most of the independent variables have no significant influence on the dependent variable. So the omnibus test is useful only to indicate whether or not the model fits; it does not suggest the corrected, recommended model that should be fitted to the data. The omnibus test mostly comes out significant when at least one of the independent variables is significant. This means that any other variable may enter the model, under the assumption of non-collinearity between independent variables, while the omnibus test still shows significance; in that case the suggested model is taken to fit the data.
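The overall F statistic just described can be computed directly from an ordinary least-squares fit. The following is a small NumPy/SciPy sketch on synthetic data (the data, coefficients and seed are arbitrary choices made for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 50, 2                                    # sample size and number of predictors
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)    # true model uses only the first predictor

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
y_hat = Xd @ beta

ss_reg = np.sum((y_hat - y.mean()) ** 2)        # explained sum of squares
ss_err = np.sum((y - y_hat) ** 2)               # residual sum of squares
F = (ss_reg / k) / (ss_err / (n - k - 1))
p = stats.f.sf(F, k, n - k - 1)
print(f"Omnibus F = {F:.3f}, p = {p:.4g}")      # tests H0: beta_1 = ... = beta_k = 0
```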
Example 1 – omnibus F test in SPSS. An insurance company intends to predict "Average cost of claims" (variable name "claimamt") from three independent variables (predictors): "Number of claims" (variable name "nclaims"), "Policyholder age" (variable name "holderage") and "Vehicle age" (variable name "vehicleage"). The linear regression procedure was run on the data, as follows: The omnibus F test in the ANOVA table implies that the model involving these three predictors is suitable for predicting "Average cost of claims", since the null hypothesis is rejected (P-value = 0.000 &lt; 0.01, α = 0.01). This rejection of the omnibus test implies that "at least one" of the coefficients of the predictors in the model has been found to be non-zero. The multiple R-square reported in the Model Summary table is 0.362, which means that the three predictors can explain 36.2% of the variation in "Average cost of claims". ANOVA. a. Predictors: (Constant), nclaims Number of claims, holderage Policyholder age, vehicleage Vehicle age b. Dependent Variable: claimamt Average cost of claims Model summary. a. Predictors: (Constant), nclaims Number of claims, holderage Policyholder age, vehicleage Vehicle age However, only the predictors "Vehicle age" and "Number of claims" have a statistically significant influence on, and predictive value for, "Average cost of claims", as shown in the following "Coefficients" table, whereas "Policyholder age" is not significant as a predictor (P-value = 0.116 &gt; 0.05). This means that a model without this predictor may be suitable. Coefficients. a. Dependent Variable: claimamt Average cost of claims Example 2 – multiple linear regression omnibus F test in R. The following R output illustrates the linear regression and model fit of two predictors: x1 and x2. The last line describes the omnibus F test for model fit. The interpretation is that the null hypothesis is rejected (P = 0.02692 &lt; 0.05, α = 0.05), so either β1 or β2 appears to be non-zero (or perhaps both). Note that the conclusion from the Coefficients table is that only β1 is significant (the P-value shown in the Pr(&gt;|t|) column is 4.37e-05 « 0.001). Thus a one-step test, such as the omnibus F test for model fitting, is not sufficient to determine the model fit for these predictors. Coefficients. Residual standard error: 1.157 on 7 degrees of freedom Multiple R-Squared: 0.644, Adjusted R-squared: 0.5423 F-statistic: 6.332 on 2 and 7 DF, p-value: 0.02692 In logistic regression. In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (with a limited number of categories) or dichotomous dependent variable based on one or more predictor variables. The probabilities describing the possible outcome of a single trial are modeled, as a function of explanatory (independent) variables, using a logistic function or multinomial distribution. Logistic regression measures the relationship between a categorical or dichotomous dependent variable and usually a continuous independent variable (or several), by converting the dependent variable to probability scores. 
The probabilities can be retrieved using the logistic function or the multinomial distribution; like all probabilities, they take on values between zero and one: formula_4 So the model tested can be defined by: formula_5 where yi is the category of the dependent variable for the i-th observation, xij is the j-th independent variable (j = 1, 2, ..., k) for that observation, and βj is the j-th coefficient of xij, indicating its influence on the outcome expected from the fitted model. Note: independent variables in logistic regression can also be continuous. The omnibus test relates to the hypotheses: H0: β1 = β2 = ... = βk = 0 H1: at least one βj ≠ 0 Model fitting: maximum likelihood method. The omnibus test, like other parts of the logistic regression procedure, is a likelihood-ratio test based on the maximum likelihood method. Unlike the linear regression procedure, in which estimates of the regression coefficients can be derived from the least-squares procedure, or by minimizing the sum of squared residuals as in the maximum likelihood method, in logistic regression there is no analytical solution or set of equations from which the estimates of the regression coefficients can be derived. So logistic regression uses the maximum likelihood procedure to estimate the coefficients that maximize the likelihood of the regression coefficients given the predictors and the criterion. The maximum likelihood solution is an iterative process that begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this process until no further improvement is made, at which point the model is said to have converged. Application of the procedure is conditioned on convergence (see also the following "remarks and other considerations"). In general, regarding simple hypotheses on a parameter θ (for example H0: θ = θ0 vs. H1: θ = θ1), the likelihood ratio test statistic can be written as: formula_6, where L(yi|θ) is the likelihood function, which refers to the specific θ. The numerator corresponds to the maximum likelihood of the observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of the observed outcome, varying the parameters over the whole parameter space. The numerator of this ratio is less than the denominator; the likelihood ratio hence lies between 0 and 1. Lower values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis as compared to the alternative. Higher values of the statistic mean that the observed outcome was more than, equally, or nearly as likely to occur under the null hypothesis as compared to the alternative, and the null hypothesis cannot be rejected. The likelihood ratio test provides the following decision rule: if the statistic exceeds the critical value c, do not reject H0; if it is below c, reject H0; and if it equals c, reject H0 with probability q. The critical values c, q are usually chosen to obtain a specified significance level α, through: formula_7. Thus, the likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable. The Neyman-Pearson lemma states that this likelihood ratio test is the most powerful among all level-α tests for this problem. Test statistic and distribution: Wilks' theorem. 
First we define the test statistic as the deviate formula_8, which indicates testing the ratio: formula_9 where the saturated model is a model with a theoretically perfect fit. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit, as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, non-significant chi-square values indicate very little unexplained variance and thus good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained. Two measures of deviance D are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept and no predictors and the saturated model, while the model deviance represents the difference between a model with at least one predictor and the saturated model. In this respect, the null model provides a baseline upon which to compare predictor models. Therefore, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a chi-square distribution with degrees of freedom equal to the difference in the number of parameters estimated. If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improved the model fit. This is analogous to the F-test used in linear regression analysis to assess the significance of prediction. In most cases, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result, attributed to Samuel S. Wilks, says that as the sample size n approaches infinity, the test statistic defined above is asymptotically chi-squared distributed, with degrees of freedom equal to the difference in dimensionality of the two models, i.e. the number of β coefficients tested, as mentioned before for the omnibus test. For example, if n is large enough and the fitted model assuming the null hypothesis consists of 3 predictors while the saturated (full) model consists of 5 predictors, the Wilks statistic is approximately chi-squared distributed with 2 degrees of freedom. This means that we can retrieve the critical value C from the chi-squared distribution with 2 degrees of freedom under a specific significance level. Example 1 of logistic regression. Spector and Mazzeo examined the effect of a teaching method known as PSI on the performance of students in a course, intermediate macroeconomics. The question was whether students exposed to the method scored higher on exams in the class. They collected data from students in two classes, one in which PSI was used and another in which a traditional teaching method was employed. For each of 32 students, they gathered data on the dependent variable • GRADE — coded 1 if the final grade was an A, 0 if the final grade was a B or C. The particular interest in the research was whether PSI had a significant effect on GRADE. TUCE and GPA are included as control variables. Statistical analysis using logistic regression of GRADE on GPA, TUCE and PSI was conducted in SPSS using stepwise logistic regression. In the output, the "block" line relates to the chi-square test on the set of independent variables that are tested and included in the model fitting. The "step" line relates to the chi-square test at the step level, as variables are included in the model step by step. 
Note that in the output a step chi-square, is the same as the block chi-square since they both are testing the same hypothesis that the tested variables enter on this step are non-zero. If you were doing stepwise regression, however, the results would be different. Using forward stepwise selection, researchers divided the variables into two blocks (see METHOD on the syntax following below). LOGISTIC REGRESSION VAR=grade /METHOD=fstep psi / fstep gpa tuce /CRITERIA PIN(.50) POUT(.10) ITERATE(20) CUT(.5). The default PIN value is .05, was changed by the researchers to .5 so the insignificant TUCE would make it in. In the first block, psi alone gets entered, so the block and step Chi Test relates to the hypothesis H0: βPSI = 0. Results of the omnibus Chi-Square tests implies that PSI is significant for predicting that GRADE is more likely to be a final grade of A. Omnibus tests of model coefficients. Then, in the next block, the forward selection procedure causes GPA to get entered first, then TUCE (see METHOD command on the syntax before). Omnibus tests of model coefficients. The first step on block2 indicates that GPA is significant (P-Value=0.003&lt;0.05, α=0.05) So, looking at the final entries on step2 in block2, Tests of Individual Parameters shown on the "variables in the equation table", which Wald test (W=(b/sb)2, where b is β estimation and sb is its standard error estimation ) that is testing whether any individual parameter equals zero . You can, if you want, do an incremental LR chi-square test. That, in fact, is the best way to do it, since the Wald test referred to next is biased under certain situations. When parameters are tested separately, by controlling the other parameters, we see that the effects of GPA and PSI are statistically significant, but the effect of TUCE is not. Both have Exp(β) greater than 1, implying that the probability to get "A" grade is greater than getting other grade depends upon the teaching method PSI and a former grade average GPA. Variables in the equation. a. Variable(s) entered on step 1: PSI Example 2 of logistic regression. Research subject: "The Effects of Employment, Education, Rehabilitation and Seriousness of Offense on Re-Arrest". A social worker in a criminal justice probation agency tends to examine whether some of the factors are leading to re-arrest of those managed by the person's agency over the past five years who were convicted and then released. The data consist of 1,000 clients with the following variables: Independent variables (coded as a dummy variables). Note: Continuous independent variables were not measured on this scenario. The null hypothesis for the overall model fit: The overall model does not predict re-arrest. OR, the independent variables as a group are not related to being re-arrested. (And for the independent variables: any of the separate independent variables is not related to the likelihood of re-arrest). The alternative hypothesis for the overall model fit: The overall model predicts the likelihood of re-arrest. (The meaning respectively independent variables: having committed a felony (vs. a misdemeanor), not completing high school, not completing a rehab program, and being unemployed are related to the likelihood of being re-arrested). Logistic regression was applied to the data on SPSS, since the Dependent variable is Categorical (dichotomous) and the researcher examine the odd ratio of potentially being re-arrested vs. not expected to be re-arrested. Omnibus tests of model coefficients. 
The table shows the "Omnibus Test of Model Coefficients" based on the chi-square test, which implies that the overall model is predictive of re-arrest (focus is on row three—"Model"): chi-square (4 degrees of freedom) = 41.15, p &lt; .001, and the null can be rejected, the null being that the Model, or the group of independent variables taken together, does not predict the likelihood of being re-arrested. This result means that the model for predicting re-arrest is suitable for the data. Variables in the equation. One can also reject the null that the B coefficients for having committed a felony, completing a rehab program, and being employed are equal to zero—they are statistically significant and predictive of re-arrest. Education level, however, was not found to be predictive of re-arrest. Controlling for other variables, having committed a felony for the first offense increases the odds of being re-arrested by 33% (p = .046), compared to having committed a misdemeanor. Completing a rehab program and being employed after the first offense decreases the odds of re-arrest, each by more than 50% (p &lt; .001). The last column, Exp(B) (obtained by taking the inverse natural log of B, i.e. exponentiating B), indicates the odds ratio: the probability of an event occurring, divided by the probability of the event not occurring. An Exp(B) value over 1.0 signifies that the independent variable increases the odds of the dependent variable occurring. An Exp(B) under 1.0 signifies that the independent variable decreases the odds of the dependent variable occurring, depending on the coding of the variables described above. A negative B coefficient will result in an Exp(B) less than 1.0, and a positive B coefficient will result in an Exp(B) greater than 1.0. The statistical significance of each B is tested by the Wald Chi-Square—testing the null that the B coefficient = 0 (the alternate hypothesis is that it does not = 0). p-values lower than alpha are significant, leading to rejection of the null. Here, only the independent variables felony, rehab and employment are significant (P-value &lt; 0.05). Examining the odds ratio of being re-arrested vs. not re-arrested means examining the odds ratio for the comparison of two groups (re-arrested = 1 in the numerator, and re-arrested = 0 in the denominator) for the felony group, compared to the baseline misdemeanor group. Exp(B) = 1.327 for "felony" indicates that having committed a felony vs. a misdemeanor increases the odds of re-arrest by 33%. For "rehab", one can say that having completed rehab reduces the likelihood (or odds) of being re-arrested by almost 51%. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
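The omnibus likelihood-ratio (chi-square) test used in the logistic-regression examples above can also be reproduced outside SPSS. The following Python sketch (with made-up data; statsmodels is assumed to be available) fits an intercept-only model and a full model, and refers twice the log-likelihood difference to a chi-squared distribution with k degrees of freedom:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n, k = 200, 3
X = rng.normal(size=(n, k))
logit_p = -0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1]          # arbitrary true coefficients
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Null model (intercept only) and full model.
null_res = sm.Logit(y, np.ones((n, 1))).fit(disp=0)
full_res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Omnibus LR test: H0 is beta_1 = ... = beta_k = 0.
lr_stat = 2.0 * (full_res.llf - null_res.llf)            # = null deviance - model deviance
p_value = stats.chi2.sf(lr_stat, df=k)
print(f"Omnibus chi-square = {lr_stat:.2f} on {k} df, p = {p_value:.4g}")
```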
[ { "math_id": 0, "text": " F = \\frac{\\displaystyle \\frac{1}{k-1} \\sum_{j=1}^k n_j\\left(\\bar y_j- \\bar y\\right)^2} {\\displaystyle \\frac{1}{n - k} {\\sum_{j=1}^{k}} {\\sum_{i=1}^{n_j}} \\left(y_{ij}- \\bar y_j\\right)^2}" }, { "math_id": 1, "text": "\\bar y" }, { "math_id": 2, "text": "\\bar y_j" }, { "math_id": 3, "text": " F = \\frac{{\\displaystyle \\sum_{i=1}^n \\left(\\widehat {y_i}-\\bar {y}\\right)^2}/{k}} {{\\displaystyle {\\sum_{j=1}^{k}} {\\sum_{i=1}^{n_j}} \\left(y_{ij}-\\widehat {y_i}\\right)^2}/{(n-k-1)}}" }, { "math_id": 4, "text": " P(y_i)= \\frac{e^{\\beta_0 + \\beta_1 x_{i1} + \\dots + \\beta_k x_{ik}}}{1+e^{\\beta_0 + \\beta_1 x_{i1} + \\dots + \\beta_k x_{ik}}} =\\frac{1}{1 + e^{-(\\beta_0 + \\beta_1 x_{i1} + \\dots+ \\beta_k x_{ik})}} " }, { "math_id": 5, "text": "f(y_i) = \\ln \\frac {P(y_i)}{1-P(y_i)} = \\beta_0 + \\beta_1 x_{i1} + \\dots + \\beta_k x_{ik} ," }, { "math_id": 6, "text": "\\lambda(y_i)= \\frac {L(y_i|\\theta_0)}{L(y_i|\\theta_1)}" }, { "math_id": 7, "text": "q \\cdot P(\\lambda(y_i)=C|H_0) + P(\\lambda(y_i)<C|H_0)" }, { "math_id": 8, "text": " D=-2\\ln\\lambda(y_i)" }, { "math_id": 9, "text": " D = - 2 \\ln \\lambda(y_i) = - 2 \\ln\\frac\\text{likelihood under fitted model if null hypothesis is true}\\text{likelihood under saturated model}" } ]
https://en.wikipedia.org/wiki?curid=7013774
70147730
Structured encryption
Cryptographic primitive Structured encryption (STE) is a form of encryption that encrypts a data structure so that it can be privately queried. Structured encryption can be used as a building block to design end-to-end encrypted databases, efficient searchable symmetric encryption (SSE) and other algorithms that can be efficiently executed on encrypted data. Description. A structured encryption scheme is a symmetric-key encryption scheme that encrypts a data structure in such a way that, given the key formula_0 and a query formula_1, one can generate a query token formula_2 with which the encrypted data structure can be queried. If the STE scheme is dynamic then it also supports update operations like inserts and deletes. There are several forms of STE including response-revealing STE where the response to the query is output in plaintext and response-hiding where the response to the query is output in encrypted form. STE schemes guarantee that no information about the data or queries can be recovered from the encrypted data structure and tokens beyond a well-specified and "reasonable" leakage profile. STE schemes with a variety of leakage profiles have been designed for a wide array of abstract data types and data structures including arrays, multi-maps, dictionaries and graphs. STE is closely related to but different than searchable symmetric encryption. The purpose of SSE is to encrypt document collections in such a way that keyword search can still be executed on the encrypted documents whereas the purpose of STE is to encrypt data structures in such a way that queries can still be executed over the encrypted structure. Certain types of STE schemes like multi-map encryption schemes can be used to design sub-linear and optimal SSE schemes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
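As a rough illustration of the multi-map encryption building block mentioned above, the following Python sketch builds a toy encrypted multi-map: setup produces an encrypted dictionary, a query token is derived from the secret key, and lookups follow PRF-derived labels. The construction, names and parameters are illustrative assumptions only; it is not one of the published schemes and it is not secure.

```python
# Toy encrypted multi-map in the spirit of STE. NOT secure: XOR "encryption",
# fixed-length padding and no integrity protection are for illustration only.
import hmac, hashlib, os

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def setup(key: bytes, multimap: dict) -> dict:
    edx = {}
    for label, values in multimap.items():
        k_label = prf(key, b"token|" + label.encode())   # addresses for this label
        k_enc = prf(key, b"enc|" + label.encode())       # value-encryption key
        for i, v in enumerate(values):
            addr = prf(k_label, i.to_bytes(4, "big"))
            pad = prf(k_enc, b"pad|" + i.to_bytes(4, "big"))
            ct = bytes(a ^ b for a, b in zip(v.encode().ljust(32, b"\x00"), pad))
            edx[addr] = ct
    return edx

def token(key: bytes, label: str):
    return prf(key, b"token|" + label.encode()), prf(key, b"enc|" + label.encode())

def query(edx: dict, tk) -> list:
    k_label, k_enc = tk
    out, i = [], 0
    while (addr := prf(k_label, i.to_bytes(4, "big"))) in edx:
        pad = prf(k_enc, b"pad|" + i.to_bytes(4, "big"))
        out.append(bytes(a ^ b for a, b in zip(edx[addr], pad)).rstrip(b"\x00").decode())
        i += 1
    return out

key = os.urandom(32)
edx = setup(key, {"alice": ["doc1", "doc7"], "bob": ["doc2"]})
print(query(edx, token(key, "alice")))   # ['doc1', 'doc7']
```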
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "qtk" } ]
https://en.wikipedia.org/wiki?curid=70147730
701481
Infrared divergence
Type of diverging integral in physics In physics, an infrared divergence (also IR divergence or infrared catastrophe) is a situation in which an integral, for example a Feynman diagram, diverges because of contributions of objects with very small energy approaching zero, or equivalently, because of physical phenomena at very long distances. Overview. The infrared divergence only appears in theories with massless particles (such as photons). They represent a legitimate effect that a complete theory often implies. In fact, in the case of photons, the energy is given by formula_0, where formula_1 is the frequency associated to the particle and as it goes to zero, like in the case of soft photons, there will be an infinite number of particles in order to have a finite amount of energy. One way to deal with it is to impose an infrared cutoff and take the limit as the cutoff approaches zero and/or refine the question. Another way is to assign the massless particle a fictitious mass, and then take the limit as the fictitious mass vanishes. The divergence is usually in terms of particle number and not empirically troubling, in that all measurable quantities remain finite. (Unlike in the case of the UV catastrophe where the energies involved diverge.) Bremsstrahlung example. When an electric charge is accelerated (or decelerated) it emits Bremsstrahlung radiation. Semiclassical electromagnetic theory, or the full quantum electrodynamic analysis, shows that an infinite number of soft photons are created. But only a finite number are detectable, the remainder, due to their low energy, falling below any finite energy detection threshold, which must necessarily exist. However even though most of the photons are not detectable they can't be ignored in the theory; quantum electrodynamic calculations show that the transition amplitude between "any" states with a finite number of photons vanishes. Finite transition amplitudes are obtained only by summing over states with an infinite number of soft photons. The zero-energy photons become important in analyzing the Bremsstrahlung radiation in the coaccelerated frame in which the charge experiences a thermal bath due to the Unruh effect. In this case, the static charge will only interact with these zero-energy (Rindler) photons in a sense similar to virtual photons in the coulomb interaction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
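The cutoff procedure described above can be made concrete with a small numerical sketch (the integrand and energy scale are schematic choices, not a full QED calculation): the soft-photon integral of dω/ω between a small cutoff λ and a fixed energy E grows like ln(E/λ) and diverges as λ → 0.

```python
import numpy as np

E = 1.0  # fixed upper energy scale (arbitrary units)
for lam in [1e-1, 1e-3, 1e-6, 1e-9]:
    # Soft-photon integral \int_lam^E dw / w, evaluated by the trapezoid rule
    # on a log-spaced grid so the small-w region is well resolved.
    w = np.logspace(np.log10(lam), np.log10(E), 100_001)
    integral = np.sum((w[1:] - w[:-1]) * 0.5 * (1.0 / w[1:] + 1.0 / w[:-1]))
    print(f"lambda = {lam:.0e}:  integral = {integral:7.3f}   ln(E/lambda) = {np.log(E / lam):7.3f}")
```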
[ { "math_id": 0, "text": "E=h\\nu" }, { "math_id": 1, "text": "\\nu" } ]
https://en.wikipedia.org/wiki?curid=701481
70149850
2 Samuel 15
Second Book of Samuel chapter 2 Samuel 15 is the fifteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 37 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–7, 20–21, 23, 26–31, 37 and 4Q53 (4QSamc; 100–75 BCE) with extant verses 1–6, 8–15. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter consists of two stages of Absalom's plan to take over the throne from David: The conspiracy part consists of 3 phases: The story of Absalom's rebellion can be observed as five consecutive episodes: A. David's flight from Jerusalem (15:13–16:14) B. The victorious Absalom and his counselors (16:15–17:14) C. David reaches Mahanaim (17:15–29) B'. The rebellion is crushed and Absalom is executed (18:1–19:8abc) A'. David's reentry into Jerusalem (19:8d–20:3) God's role seems to be understated in the whole events, but is disclosed by a seemingly insignificant detail: 'the crossing of the Jordan river'. The Hebrew root word"' 'br", "to cross" (in various nominal and verbal forms) is used more than 30 times in these chapters (compared to 20 times in the rest of 2 Samuel) to report David's flight from Jerusalem, his crossing of the Jordan river, and his reentry into Jerusalem. In 2 Samuel 17:16, stating that David should cross the Jordan (17:16), the verb" 'br" is even reinforced by a 'Hebrew infinitive absolute' to mark this critical moment: "king David is about to cross out of the land of Israel." David's future was in doubt until it was stated that God had rendered foolish Ahithophel's good counsel to Absalom (2 Samuel 17:14), thus granting David's prayer (15:31), and saving David from Absalom's further actions. Once Absalom was defeated, David's crossing back over the Jordan echoes the Israelites' first crossing over the Jordan under Joshua's leadership (Joshua 1–4): Here God's role is not as explicit as during Joshua's crossing, but the signs are clear that God was with David, just as with Joshua. Absalom’s conspiracy (15:1–12). Absalom's ambition to take the throne was made known when he got for himself a royal retinue, 'chariot and horses', and a personal bodyguard, 'men to run ahead of him' (cf 1 Kings 1:5). 
In his next step he set out to win popular support among the people from all tribes of Israel who came to the 'seat of justice' ('the gate') for litigation. Absalom was capitalizing on discontent caused by the failure of David's court to act efficiently and sympathetically, and gaining popularity by making himself accessible and friendly (verse 6). For 4 years Absalom planned his revolt without arousing any suspicion. As Absalom was born in Hebron (thus a Hebronite), his request for permission to fulfil a vow in Hebron was readily granted. He chose Hebron as the seat of kingship (verse 10) to show that he was supported by the Judahites (including Amasa, David's nephew, and Ahithophel, David's counsellor and grandfather of Bathsheba), while also enjoying support from the northern tribes, therefore from Dan to Beersheba (cf. 17:11). David fled from Jerusalem (15:13–37). David's flight from Jerusalem toward the east side of the Jordan River was evidently a wise move, as he could not seek refuge in Judah or other areas west of the Jordan due to Absalom's presence in Hebron, the discontent among the Israelites and the enmity of the Philistines. On the outskirts of Jerusalem, probably on the edge of the Kidron Valley before the ascent to the Mount of Olives, David stood to watch his supporters march past him, including the Jerusalem garrison ('his servants'), loyal troops ('the people'), his personal bodyguard ('the Cherethites and Pelethites', cf. 2 Samuel 8:18) and a detachment of 600 Philistines from Gath under Ittai (verses 17–18). During David's flight from Jerusalem there were five conversations with various people (15:19–16:13), bearing some symmetrical correspondence to the three encounters with some of the same people on his homeward journey (19:16–40). In the first meeting, David tried to persuade Ittai (verses 19–23), the leader of the Gittites (people of Gath), to stay with Absalom ('the king') and avoid the uncertainty of being a foreigner and exile with David, but for Ittai David was his only king, with whom he was determined to stay. In the second conversation David gives the two priests, Abiathar and Zadok (verses 24–29), two reasons for returning to Jerusalem. David's advance up the Mount of Olives (verses 30–31), described as a pilgrimage or an act of penance, breaks the sequence of the five conversations. This was a march in sorrow and humility, containing a prayer that Ahithophel's counsel be confounded (verse 31). A third conversation soon occurred between David and Hushai of the Archite clan of Benjamin (verses 32–37), whose appearance in the place 'where God was worshipped' could be a direct reply to David's prayer, for Hushai was commissioned to be an informer and to defeat Ahithophel's counsel. Hushai, with the two priests and their sons (Ahimaaz and Jonathan), was to infiltrate Absalom's inner circle and report back to David. "And it was told David, "Ahithophel is among the conspirators with Absalom." And David said, "O LORD, please turn the counsel of Ahithophel into foolishness."" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70149850
7015763
Hardy's inequality
Hardy's inequality is an inequality in mathematics, named after G. H. Hardy. It states that if formula_0 is a sequence of non-negative real numbers, then for every real number "p" &gt; 1 one has formula_1 If the right-hand side is finite, equality holds if and only if formula_2 for all "n". An integral version of Hardy's inequality states the following: if "f" is a measurable function with non-negative values, then formula_3 If the right-hand side is finite, equality holds if and only if "f"("x") = 0 almost everywhere. Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. The original formulation was in an integral form slightly different from the above. General one-dimensional version. The general weighted one-dimensional version reads as follows: if formula_4, then formula_5 while if formula_6, then formula_7 Multidimensional versions. Multidimensional Hardy inequality around a point. In the multidimensional case, Hardy's inequality can be extended to formula_8-spaces, taking the form formula_9 where formula_10, and where the constant formula_11 is known to be sharp; by density it extends then to the Sobolev space formula_12. Similarly, if formula_13, then one has for every formula_10 formula_14 Multidimensional Hardy inequality near the boundary. If formula_15 is a nonempty convex open set, then for every formula_16, formula_17 and the constant cannot be improved. Fractional Hardy inequality. If formula_18 and formula_19, formula_20, there exists a constant formula_21 such that for every formula_22 satisfying formula_23, one has formula_24 Proof of the inequality. Integral version. A change of variables gives formula_25 which is less than or equal to formula_26 by Minkowski's integral inequality. Finally, by another change of variables, the last expression equals formula_27 Discrete version: from the continuous version. Assuming the right-hand side to be finite, we must have formula_28 as formula_29. Hence, for any positive integer "j", there are only finitely many terms bigger than formula_30. This allows us to construct a decreasing sequence formula_31 containing the same positive terms as the original sequence (but possibly no zero terms). Since formula_32 for every "n", it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining formula_33 if formula_34 and formula_35 otherwise. Indeed, one has formula_36 and, for formula_34, there holds formula_37 (the last inequality is equivalent to formula_38, which is true as the new sequence is decreasing) and thus formula_39. Discrete version: Direct proof. Let formula_40 and let formula_41 be positive real numbers. Set formula_42. First we prove the auxiliary inequality (*), which is the first bound appearing in the display formula_52 below. Let formula_43 and let formula_44 be the difference between the formula_45-th terms in the right-hand side and left-hand side of (*), that is, formula_46. We have: formula_47 or formula_48 According to Young's inequality we have: formula_49 from which it follows that: formula_50 By telescoping we have: formula_51 proving (*). Applying Hölder's inequality to the right-hand side of (*) we have: formula_52 from which we immediately obtain: formula_53 Letting formula_54 we obtain Hardy's inequality.
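A quick numerical sanity check of the discrete inequality (not a proof; the random sequence and the choice p = 2 are arbitrary) can be run with NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.random(1000)          # non-negative sequence a_1, ..., a_N
p = 2.0

# Left-hand side: sum over n of ((a_1 + ... + a_n) / n)^p.
partial_means = np.cumsum(a) / np.arange(1, a.size + 1)
lhs = np.sum(partial_means ** p)

# Right-hand side: (p/(p-1))^p * sum over n of a_n^p.
rhs = (p / (p - 1)) ** p * np.sum(a ** p)
print(lhs <= rhs, lhs, rhs)   # Hardy's inequality says lhs <= rhs
```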
[ { "math_id": 0, "text": "a_1, a_2, a_3, \\dots " }, { "math_id": 1, "text": "\\sum_{n=1}^\\infty \\left (\\frac{a_1+a_2+\\cdots +a_n}{n}\\right )^p\\leq\\left (\\frac{p}{p-1}\\right )^p\\sum_{n=1}^\\infty a_n^p." }, { "math_id": 2, "text": "a_n = 0" }, { "math_id": 3, "text": "\\int_0^\\infty \\left (\\frac{1}{x}\\int_0^x f(t)\\, dt\\right)^p\\, dx\\le\\left (\\frac{p}{p-1}\\right )^p\\int_0^\\infty f(x)^p\\, dx." }, { "math_id": 4, "text": "\\alpha + \\tfrac{1}{p} < 1" }, { "math_id": 5, "text": "\\int_0^\\infty \\biggl(y^{\\alpha - 1} \\int_0^y x^{-\\alpha} f(x)\\,dx \\biggr)^p \\,dy \\le \n\\frac{1}{\\bigl(1 - \\alpha - \\frac{1}{p}\\bigr)^p} \\int_0^\\infty f(x)^p\\, dx\n" }, { "math_id": 6, "text": "\\alpha + \\tfrac{1}{p} > 1" }, { "math_id": 7, "text": "\\int_0^\\infty \\biggl(y^{\\alpha - 1} \\int_y^\\infty x^{-\\alpha} f(x)\\,dx \\biggr)^p\\,dy \\le \n\\frac{1}{\\bigl(\\alpha + \\frac{1}{p} - 1\\bigr)^p} \\int_0^\\infty f(x)^p\\, dx.\n" }, { "math_id": 8, "text": "L^{p}" }, { "math_id": 9, "text": "\\left\\|\\frac{f}{|x|}\\right\\|_{L^{p}(\\mathbb{R}^{n})}\\le \\frac{p}{n-p}\\|\\nabla f\\|_{L^{p}(\\mathbb{R}^{n})}, 2\\le n, 1\\le p<n," }, { "math_id": 10, "text": "f\\in C_{0}^{\\infty}(\\mathbb{R}^{n})" }, { "math_id": 11, "text": "\\frac{p}{n-p}" }, { "math_id": 12, "text": "W^{1, p} (\\mathbb{R}^n)" }, { "math_id": 13, "text": "p > n \\ge 2" }, { "math_id": 14, "text": "\n \\Big(1 - \\frac{n}{p}\\Big)^p \\int_{\\mathbb{R}^n} \\frac{\\vert f(x) - f (0)\\vert^p}{|x|^p} dx \n\\le \\int_{\\mathbb{R}^n} \\vert \\nabla f\\vert^p.\n" }, { "math_id": 15, "text": "\\Omega \\subsetneq \\mathbb{R}^n" }, { "math_id": 16, "text": "f \\in W^{1, p} (\\Omega)" }, { "math_id": 17, "text": "\n \\Big(1 - \\frac{1}{p}\\Big)^p\\int_{\\Omega} \\frac{\\vert f (x)\\vert^p}{\\operatorname{dist} (x, \\partial \\Omega)^p}\\,dx\n\\le \\int_{\\Omega}\\vert \\nabla f \\vert^p,\n" }, { "math_id": 18, "text": "1 \\le p < \\infty" }, { "math_id": 19, "text": "0 < \\lambda < \\infty" }, { "math_id": 20, "text": "\\lambda \\ne 1" }, { "math_id": 21, "text": "C" }, { "math_id": 22, "text": "f : (0, \\infty) \\to \\mathbb{R}" }, { "math_id": 23, "text": "\\int_0^\\infty \\vert f (x)\\vert^p/x^{\\lambda} \\,dx < \\infty" }, { "math_id": 24, "text": "\n\\int_0^\\infty \\frac{\\vert f (x)\\vert^p}{x^{\\lambda}} \\,dx \n\\le C \\int_0^\\infty \\int_0^\\infty \\frac{\\vert f (x) - f (y)\\vert^p}{\\vert x - y\\vert^{1+\\lambda}} \\,dx \\, dy.\n" }, { "math_id": 25, "text": "\\left(\\int_0^\\infty\\left(\\frac{1}{x}\\int_0^x f(t)\\,dt\\right)^p\\ dx\\right)^{1/p}=\\left(\\int_0^\\infty\\left(\\int_0^1 f(sx)\\,ds\\right)^p\\,dx\\right)^{1/p}," }, { "math_id": 26, "text": "\\int_0^1\\left(\\int_0^\\infty f(sx)^p\\,dx\\right)^{1/p}\\,ds" }, { "math_id": 27, "text": "\\int_0^1\\left(\\int_0^\\infty f(x)^p\\,dx\\right)^{1/p}s^{-1/p}\\,ds=\\frac{p}{p-1}\\left(\\int_0^\\infty f(x)^p\\,dx\\right)^{1/p}." 
}, { "math_id": 28, "text": "a_n\\to 0" }, { "math_id": 29, "text": "n\\to\\infty" }, { "math_id": 30, "text": "2^{-j}" }, { "math_id": 31, "text": "b_1\\ge b_2\\ge\\dotsb" }, { "math_id": 32, "text": "a_1+a_2+\\dotsb +a_n\\le b_1+b_2+\\dotsb +b_n" }, { "math_id": 33, "text": "f(x)=b_n" }, { "math_id": 34, "text": "n-1<x<n" }, { "math_id": 35, "text": "f(x)=0" }, { "math_id": 36, "text": "\\int_0^\\infty f(x)^p\\,dx=\\sum_{n=1}^\\infty b_n^p" }, { "math_id": 37, "text": "\\frac{1}{x}\\int_0^x f(t)\\,dt=\\frac{b_1+\\dots+b_{n-1}+(x-n+1)b_n}{x} \\ge \\frac{b_1+\\dots+b_n}{n}" }, { "math_id": 38, "text": "(n-x)(b_1+\\dots+b_{n-1})\\ge (n-1)(n-x)b_n" }, { "math_id": 39, "text": "\\sum_{n=1}^\\infty\\left(\\frac{b_1+\\dots+b_n}{n}\\right)^p\\le\\int_0^\\infty\\left(\\frac{1}{x}\\int_0^x f(t)\\,dt\\right)^p\\,dx" }, { "math_id": 40, "text": "p > 1" }, { "math_id": 41, "text": "b_1 , \\dots , b_n" }, { "math_id": 42, "text": "S_k = \\sum_{i=1}^k b_i" }, { "math_id": 43, "text": "T_n = \\frac{S_n}{n}" }, { "math_id": 44, "text": "\\Delta_n" }, { "math_id": 45, "text": "n" }, { "math_id": 46, "text": "\\Delta_n := T_n^p - \\frac{p}{p-1} b_n T_n^{p-1}" }, { "math_id": 47, "text": "\\Delta_n = T_n^p - \\frac{p}{p-1} b_n T_n^{p-1} = T_n^p - \\frac{p}{p-1} (n T_n - (n-1) T_{n-1}) T_n^{p-1}" }, { "math_id": 48, "text": "\\Delta_n = T_n^p \\left( 1 - \\frac{np}{p-1} \\right) + \\frac{p (n-1)}{p-1} T_{n-1} T_n^p ." }, { "math_id": 49, "text": "T_{n-1} T_n^{p-1} \\leq \\frac{T_{n-1}^p}{p} + (p-1) \\frac{T_n^p}{p} ," }, { "math_id": 50, "text": "\\Delta_n \\leq \\frac{n-1}{p-1} T_{n-1}^p - \\frac{n}{p-1} T_n^p ." }, { "math_id": 51, "text": "\\begin{align}\n\\sum_{n=1}^N \\Delta_n &\\leq 0 - \\frac{1}{p-1} T_1^p + \\frac{1}{p-1} T_1^p - \\frac{2}{p-1} T_2^p + \\frac{2}{p-1} T_2^p - \\frac{3}{p-1} T_3^p + \\dotsb+ \\frac{N-1}{p-1} T_{N-1}^p - \\frac{N}{p-1} T_N^p \\\\\n&= - \\frac{N}{p-1} T_N^p < 0 ,\n\\end{align}\n" }, { "math_id": 52, "text": "\\sum_{n=1}^N \\frac{S_n^p}{n^p} \\leq \\frac{p}{p-1} \\sum_{n=1}^N \\frac{b_n S_n^{p-1}}{n^{p-1}} \\leq \\frac{p}{p-1} \\left( \\sum_{n=1}^N b_n^p \\right)^{1/p} \\left( \\sum_{n=1}^N \\frac{S_n^p}{n^p} \\right)^{(p-1)/p}" }, { "math_id": 53, "text": "\\sum_{n=1}^N \\frac{S_n^p}{n^p} \\leq \\left( \\frac{p}{p-1} \\right)^p \\sum_{n=1}^N b_n^p ." }, { "math_id": 54, "text": "N \\rightarrow \\infty" } ]
https://en.wikipedia.org/wiki?curid=7015763
70157641
Mirror descent
In mathematics, mirror descent is an iterative optimization algorithm for finding a local minimum of a differentiable function. It generalizes algorithms such as gradient descent and multiplicative weights. History. Mirror descent was originally proposed by Nemirovski and Yudin in 1983. Motivation. In gradient descent with the sequence of learning rates formula_0 applied to a differentiable function formula_1, one starts with a guess formula_2 for a local minimum of formula_3 and considers the sequence formula_4 such that formula_5 This can be reformulated by noting that formula_6 In other words, formula_7 minimizes the first-order approximation to formula_1 at formula_8 with added proximity term formula_9. This squared Euclidean distance term is a particular example of a Bregman distance. Using other Bregman distances will yield other algorithms such as Hedge which may be more suited to optimization over particular geometries. Formulation. We are given convex function formula_10 to optimize over a convex set formula_11, and given some norm formula_12 on formula_13. We are also given differentiable convex function formula_14, formula_15-strongly convex with respect to the given norm. This is called the "distance-generating function", and its gradient formula_16 is known as the "mirror map". Starting from initial formula_17, in each iteration of Mirror Descent: Map to the dual space: formula_18 Update in the dual space using a gradient step: formula_19 Map back to the primal space: formula_20 Project back onto the feasible region formula_21: formula_22, where formula_23 is the Bregman divergence. Extensions. Mirror descent in the online optimization setting is known as Online Mirror Descent (OMD). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
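As an illustrative sketch (not part of the original formulation), the iteration takes a particularly simple form on the probability simplex with the negative-entropy distance-generating function h(x) = sum_i x_i log x_i, whose mirror map is grad h(x)_i = 1 + log x_i and whose Bregman projection onto the simplex reduces to a plain normalisation. The objective, step size and iteration count below are arbitrary choices.

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, steps, eta):
    """Mirror descent on the probability simplex with the negative-entropy
    mirror map; each step reduces to an exponentiated-gradient update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        theta = 1.0 + np.log(x)        # map to the dual space via grad h
        theta -= eta * grad_f(x)       # gradient step in the dual space
        x = np.exp(theta - 1.0)        # map back with the inverse mirror map
        x /= x.sum()                   # Bregman (KL) projection onto the simplex
    return x

# Minimise the linear function f(x) = <c, x> over the simplex; the minimiser
# concentrates on the smallest coordinate of c.
c = np.array([0.7, 0.2, 0.5])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, steps=200, eta=0.5)
print(np.round(x, 4))  # essentially the vector (0, 1, 0)
```

Each step multiplies the current iterate coordinatewise by exp(-eta * grad f) and renormalises, which is exactly the multiplicative-weights/Hedge type of update mentioned above.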
[ { "math_id": 0, "text": "(\\eta_n)_{n \\geq 0}" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "\\mathbf{x}_0" }, { "math_id": 3, "text": "F," }, { "math_id": 4, "text": "\\mathbf{x}_0, \\mathbf{x}_1, \\mathbf{x}_2, \\ldots" }, { "math_id": 5, "text": "\\mathbf{x}_{n+1}=\\mathbf{x}_n-\\eta_n \\nabla F(\\mathbf{x}_n),\\ n \\ge 0." }, { "math_id": 6, "text": "\\mathbf{x}_{n+1}=\\arg \\min_{\\mathbf{x}} \\left(F(\\mathbf{x}_n) + \\nabla F(\\mathbf{x}_n)^T (\\mathbf{x} - \\mathbf{x}_n) + \\frac{1}{2 \\eta_n}\\|\\mathbf{x} - \\mathbf{x}_n\\|^2\\right)" }, { "math_id": 7, "text": "\\mathbf{x}_{n+1}" }, { "math_id": 8, "text": "\\mathbf{x}_n" }, { "math_id": 9, "text": "\\|\\mathbf{x} - \\mathbf{x}_n\\|^2" }, { "math_id": 10, "text": "f" }, { "math_id": 11, "text": "K \\subset \\mathbb{R}^n" }, { "math_id": 12, "text": "\\|\\cdot\\|" }, { "math_id": 13, "text": "\\mathbb{R}^n" }, { "math_id": 14, "text": "h \\colon \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "\\nabla h \\colon \\mathbb{R}^n \\to \\mathbb{R}^n" }, { "math_id": 17, "text": "x_0 \\in K" }, { "math_id": 18, "text": "\\theta_t \\leftarrow \\nabla h (x_t)" }, { "math_id": 19, "text": "\\theta_{t+1} \\leftarrow \\theta_t - \\eta_t \\nabla f(x_t)" }, { "math_id": 20, "text": "x'_{t+1} \\leftarrow (\\nabla h)^{-1}(\\theta_{t+1})" }, { "math_id": 21, "text": "K" }, { "math_id": 22, "text": "x_{t+1} \\leftarrow \\mathrm{arg}\\min_{x \\in K}D_h(x||x'_{t+1})" }, { "math_id": 23, "text": "D_h" } ]
https://en.wikipedia.org/wiki?curid=70157641
7016707
Hopf manifold
In complex geometry, a Hopf manifold is obtained as a quotient of the complex vector space (with zero deleted) formula_0 by a free action of the group formula_1 of integers, with the generator formula_2 of formula_3 acting by holomorphic contractions. Here, a "holomorphic contraction" is a map formula_4 such that a sufficiently big iteration formula_5 maps any given compact subset of formula_6 onto an arbitrarily small neighbourhood of 0. Two-dimensional Hopf manifolds are called Hopf surfaces. Examples. In a typical situation, formula_3 is generated by a linear contraction, usually a diagonal matrix formula_7, with formula_8 a complex number, formula_9. Such a manifold is called "a classical Hopf manifold". Properties. A Hopf manifold formula_10 is diffeomorphic to formula_11. For formula_12, it is non-Kähler. In fact, it is not even symplectic because the second cohomology group is zero. Hypercomplex structure. Even-dimensional Hopf manifolds admit a hypercomplex structure. The Hopf surface is the only compact hypercomplex manifold of quaternionic dimension 1 which is not hyperkähler.
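For the classical Hopf manifold, the diffeomorphism with formula_11 can be written down explicitly; the following is a sketch under the simplifying assumption (made here, not in the text above) that formula_8 is real with 0 < q < 1:

\[
\Phi\colon \big({\mathbb C}^n\setminus 0\big)/\langle z \mapsto q z\rangle \;\longrightarrow\; S^{2n-1}\times S^1,
\qquad
\Phi([z]) = \left(\frac{z}{|z|},\; \exp\!\Big(2\pi i \,\frac{\log |z|}{\log q}\Big)\right),
\]

which is well defined because replacing z by qz leaves z/|z| unchanged and shifts log|z|/log q by exactly 1.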
[ { "math_id": 0, "text": "({\\mathbb C}^n\\backslash 0)" }, { "math_id": 1, "text": "\\Gamma \\cong {\\mathbb Z}" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "\\Gamma" }, { "math_id": 4, "text": "\\gamma:\\; {\\mathbb C}^n \\to {\\mathbb C}^n" }, { "math_id": 5, "text": "\\;\\gamma^N" }, { "math_id": 6, "text": "{\\mathbb C}^n" }, { "math_id": 7, "text": "q\\cdot Id" }, { "math_id": 8, "text": "q\\in {\\mathbb C}" }, { "math_id": 9, "text": "0<|q|<1" }, { "math_id": 10, "text": "H:=({\\mathbb C}^n\\backslash 0)/{\\mathbb Z}" }, { "math_id": 11, "text": "S^{2n-1}\\times S^1" }, { "math_id": 12, "text": "n\\geq 2" } ]
https://en.wikipedia.org/wiki?curid=7016707
70167685
2 Samuel 17
Second Book of Samuel chapter 2 Samuel 17 is the seventeenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 29 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 2–3, 23–25, 29. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The story of Absalom's rebellion can be observed as five consecutive episodes: A. David's flight from Jerusalem (15:13–16:14) B. The victorious Absalom and his counselors (16:15–17:14) C. David reaches Mahanaim (17:15–29) B'. The rebellion is crushed and Absalom is executed (18:1–19:8abc) A'. David's reentry into Jerusalem (19:8d–20:3) God's role seems to be understated in the whole events, but is disclosed by a seemingly insignificant detail: 'the crossing of the Jordan river'. The Hebrew root word"' 'br", "to cross" (in various nominal and verbal forms) is used more than 30 times in these chapters (compared to 20 times in the rest of 2 Samuel) to report David's flight from Jerusalem, his crossing of the Jordan river, and his reentry into Jerusalem. In 2 Samuel 17:16, stating that David should cross the Jordan (17:16), the verb" 'br" is even reinforced by a 'Hebrew infinitive absolute' to mark this critical moment: "king David is about to cross out of the land of Israel." David's future was in doubt until it was stated that God had rendered foolish Ahithophel's good counsel to Absalom (2 Samuel 17:14), thus granting David's prayer (15:31), and saving David from Absalom's further actions. Once Absalom was defeated, David's crossing back over the Jordan echoes the Israelites' first crossing over the Jordan under Joshua's leadership (Joshua 1–4): Here God's role is not as explicit as during Joshua's crossing, but the signs are clear that God was with David, just as with Joshua. Hushai countered Ahitophel's advice (17:1–14). The previous section (2 Samuel 16:15–23) and this passage, comprising 2 Samuel 17:1–14, about Absalom and his two advisors (Ahitophel and Hushai) together have the following structure: A Absalom and Hushai (16:15–19) B. Absalom and Ahitophel: first counsel (16:20–22) An interruption regarding Ahitophel (16:23) B'. Absalom and Ahitophel: second counsel (17:1–4) A'. 
Absalom and Hushai (17:5–14a) Another interruption regarding Ahitophel (17:14b) This section records the contest between Hushai and Ahitophel to provide acceptable advice for Absalom, which was pivotal in the story of Absalom's rebellion. This was prepared by the task given by David to Hushai, that Hushai was to 'defeat... the counsel of Ahithophel' (15:34) and the conversations involving Hushai and the two priests, Zadok and Abiathar (15:24–29, 32–37), in contrast to the respectful introductions to Ahitophel and his counsel (15:12; 16:20–23). Ahithophel advised Absalom to take action against David quickly: a sudden night attack on David's weary companions, with swift action and minimal loss of life to kill David alone and return all other fugitives to Jerusalem, as 'a young wife returns to her husband after a brief quarrel' (reading verse 30 in the Septuagint, rather than the Masoretic Text). For an unspecified reason Absalom wished to consult Hushai, who then made full use of his persuasive powers in colorful words (verses 8–13) to counter Ahitophel's advice and buy time for David to regroup, using 3 arguments: Hushai's eloquent reasoning managed to impress Absalom and his advisers more than Ahitophel's counsel, which is emphasized in verse 14 to be YHWH's will as the decisive factor. Hushai's warning saved David (17:15–29). Hushai left Absalom's council right after giving his counsel before Absalom announced the final decision. He quickly sent a message to David to cross the Jordan immediately (verse 16) avoiding the possibility of a sudden attack as recommended by Ahithophel. Despite being spotted by Absalom's servants, the messengers, involving the sons of Abiathar (Jonathan) and Zadok (Ahimaaz) with the help of a girl informant, successfully transmitted the message to David who then safely crossed the Jordan River with his followers. Three pieces of supplemental information are included in verses 23–29:. (1) Shobi son of Nahash, the Ammonite, (2) Machir of the house of Saul, who had previously taken care of Mephibosheth, and (3) Barzillai from Gilead (cf. 2 Samuel 19:31–39). These people faithfully provide for David in his current condition. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70167685
7016769
Hopf surface
In complex geometry, a Hopf surface is a compact complex surface obtained as a quotient of the complex vector space (with zero deleted) formula_0 by a free action of a discrete group. If this group is the integers the Hopf surface is called primary, otherwise it is called secondary. (Some authors use the term "Hopf surface" to mean "primary Hopf surface".) The first example was found by Heinz Hopf (1948), with the discrete group isomorphic to the integers, with a generator acting on formula_1 by multiplication by 2; this was the first example of a compact complex surface with no Kähler metric. Higher-dimensional analogues of Hopf surfaces are called Hopf manifolds. Invariants. Hopf surfaces are surfaces of class VII and in particular all have Kodaira dimension formula_2, and all their plurigenera vanish. The geometric genus is 0. The fundamental group has a normal central infinite cyclic subgroup of finite index. The Hodge diamond is 1; 0, 1; 0, 0, 0; 1, 0; 1 (rows listed from top to bottom), that is, the only non-zero Hodge numbers are h^{0,0} = h^{0,1} = h^{2,1} = h^{2,2} = 1. In particular the first Betti number is 1 and the second Betti number is 0. Conversely Kunihiko Kodaira (1968) showed that a compact complex surface with vanishing second Betti number and whose fundamental group contains an infinite cyclic subgroup of finite index is a Hopf surface. Primary Hopf surfaces. In the course of classification of compact complex surfaces, Kodaira classified the primary Hopf surfaces. A primary Hopf surface is obtained as formula_3 where formula_4 is a group generated by a polynomial contraction formula_5. Kodaira has found a normal form for formula_5. In appropriate coordinates, formula_5 can be written as formula_6 where formula_7 are complex numbers satisfying formula_8, and either formula_9 or formula_10. These surfaces contain an elliptic curve (the image of the "x"-axis) and if formula_9 the image of the "y"-axis is a second elliptic curve. When formula_9, the Hopf surface is an elliptic fiber space over the projective line if formula_11 for some positive integers "m" and "n", with the map to the projective line given by formula_12, and otherwise the only curves are the two images of the axes. The Picard group of any primary Hopf surface is isomorphic to the non-zero complex numbers formula_13. It has been proven that a complex surface is diffeomorphic to formula_14 if and only if it is a primary Hopf surface. Secondary Hopf surfaces. Any secondary Hopf surface has a finite unramified cover that is a primary Hopf surface. Equivalently, its fundamental group has a subgroup of finite index in its center that is isomorphic to the integers. Masahido Kato (1975) classified them by finding the finite groups acting without fixed points on primary Hopf surfaces. Many examples of secondary Hopf surfaces can be constructed with underlying space a product of a spherical space form and a circle.
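A short check, added here for illustration, that the map to the projective line in the elliptic fibre space case is well defined on the quotient: under the contraction formula_6 with formula_9, one has

\[
(\alpha x)^m (\beta y)^{-n} = \alpha^m \beta^{-n}\, x^m y^{-n} = x^m y^{-n}
\qquad\text{whenever } \alpha^m = \beta^n,
\]

so formula_12 is invariant under formula_4 and descends to the Hopf surface.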
[ { "math_id": 0, "text": "\\Complex^2\\setminus \\{0\\}" }, { "math_id": 1, "text": "\\Complex^2" }, { "math_id": 2, "text": "-\\infty" }, { "math_id": 3, "text": "H=\\Big(\\Complex^2\\setminus \\{0\\}\\Big)/\\Gamma," }, { "math_id": 4, "text": "\\Gamma" }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": " (x, y) \\mapsto (\\alpha x +\\lambda y^n, \\beta y)" }, { "math_id": 7, "text": "\\alpha, \\beta\\in \\Complex" }, { "math_id": 8, "text": "0<|\\alpha|\\leq |\\beta| <1" }, { "math_id": 9, "text": "\\lambda=0" }, { "math_id": 10, "text": "\\alpha=\\beta^n" }, { "math_id": 11, "text": "\\alpha^m =\\beta^n" }, { "math_id": 12, "text": "(x, y) \\mapsto x^m y^{-n}" }, { "math_id": 13, "text": "\\Complex^*" }, { "math_id": 14, "text": "S^3\\times S^1" } ]
https://en.wikipedia.org/wiki?curid=7016769
70174520
Quaternion estimator algorithm
Algorithm to solve Wahba's problem The quaternion estimator algorithm (QUEST) is an algorithm designed to solve Wahba's problem, which consists of finding a rotation matrix between two coordinate systems from two sets of observations sampled in each system respectively. The key idea behind the algorithm is to find an expression of the loss function for Wahba's problem as a quadratic form, using the Cayley–Hamilton theorem and the Newton–Raphson method to efficiently solve the eigenvalue problem and construct a numerically stable representation of the solution. The algorithm was introduced by Malcolm D. Shuster in 1981, while he was working at Computer Sciences Corporation. While in principle less robust than other methods such as Davenport's q method or singular value decomposition, the algorithm is significantly faster and reliable in practical applications, and it is used for the attitude determination problem in fields such as robotics and avionics. Formulation of the problem. Wahba's problem consists of finding a rotation matrix formula_0 that minimises the loss function formula_1 where formula_2 are the vector observations in the reference frame, formula_3 are the vector observations in the body frame, formula_4 is a rotation matrix between the two frames, and formula_5 are a set of weights such that formula_6. It is possible to rewrite this as a maximisation problem of a gain function formula_7 formula_8 defined in such a way that the loss formula_9 attains a minimum when formula_7 is maximised. The gain formula_7 can in turn be rewritten as formula_10 where formula_11 is known as the attitude profile matrix. In order to reduce the number of variables, the problem can be reformulated by parametrising the rotation as a unit quaternion formula_12 with vector part formula_13 and scalar part formula_14, representing the rotation of angle formula_15 around an axis whose direction is described by the vector formula_16, subject to the unity constraint formula_17. It is now possible to express formula_4 in terms of the quaternion parametrisation as formula_18 where formula_19 is the skew-symmetric matrix formula_20. Substituting formula_4 with the quaternion representation and simplifying the resulting expression, the gain function can be written as a quadratic form in formula_21 formula_22 where the formula_23 matrix formula_24 is defined from the quantities formula_25 This quadratic form can be optimised under the unity constraint by adding a Lagrange multiplier formula_26, obtaining an unconstrained gain function formula_27 that attains a maximum when formula_28. This implies that the optimal rotation is parametrised by the quaternion formula_29 that is the eigenvector associated to the largest eigenvalue formula_30 of formula_31. Solution of the characteristic equation. The optimal quaternion can be determined by solving the characteristic equation of formula_31 and constructing the eigenvector for the largest eigenvalue. From the definition of formula_31, it is possible to rewrite formula_28 as a system of two equations formula_32 where formula_33 is the Rodrigues vector. Substituting formula_34 in the second equation with the first, it is possible to derive an expression of the characteristic equation formula_35. Since formula_36, it follows that formula_37 and therefore formula_38 for an optimal solution (when the loss formula_9 is small). This makes it possible to construct the optimal quaternion formula_29 by replacing formula_30 in the Rodrigues vector formula_34: formula_39.
The formula_34 vector is, however, singular for formula_40. An alternative expression of the solution that does not involve the Rodrigues vector can be constructed using the Cayley–Hamilton theorem. The characteristic equation of a formula_41 matrix formula_42 is formula_43 where formula_44 The Cayley–Hamilton theorem states that any square matrix over a commutative ring satisfies its own characteristic equation, therefore formula_45 allowing one to write formula_46 where formula_47 and for formula_48 this provides a new construction of the optimal vector formula_49 that gives the conjugate quaternion representation of the optimal rotation as formula_50 where formula_51. The value of formula_30 can be determined as a numerical solution of the characteristic equation. Replacing formula_52 inside the previously obtained characteristic equation formula_35 gives formula_53 where formula_54 whose root can be efficiently approximated with the Newton–Raphson method, taking 1 as the initial guess of the solution in order to converge to the highest eigenvalue (using the fact, shown above, that formula_38 when the quaternion is close to the optimal solution).
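The procedure above can be turned into a short numerical sketch. The following NumPy implementation was written for this article and is not part of the original presentation: the function name, the assumption that the weights sum to one and the observation vectors are unit vectors, the fixed number of Newton–Raphson iterations, and the self-test at the end are all choices made here. It follows the sign conventions used above (quaternion stored as vector part followed by scalar part), so the returned attitude matrix maps reference-frame vectors to body-frame vectors.

```python
import numpy as np

def quest(a, v, w, newton_iters=10):
    """QUEST attitude solver (sketch). a: weights summing to 1,
    v: reference-frame unit vectors (N x 3), w: body-frame unit vectors (N x 3).
    Returns the optimal quaternion (vector part, then scalar part) and the
    attitude matrix A such that w_i is approximately A v_i."""
    B = sum(ai * np.outer(wi, vi) for ai, wi, vi in zip(a, w, v))
    S = B + B.T
    z = sum(ai * np.cross(wi, vi) for ai, wi, vi in zip(a, w, v))
    sigma = np.trace(B)

    # Coefficients of the quartic characteristic equation in lambda
    kappa = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))   # tr(adj S)
    delta = np.linalg.det(S)
    a_ = sigma ** 2 - kappa
    b_ = sigma ** 2 + z @ z
    c_ = delta + z @ S @ z
    d_ = z @ S @ S @ z

    # Newton-Raphson on f(l) = l^4 - (a+b) l^2 - c l + (a b + c sigma - d),
    # starting from l = 1 (the sum of the weights), which is near the root.
    lam = 1.0
    for _ in range(newton_iters):
        f = lam**4 - (a_ + b_) * lam**2 - c_ * lam + (a_ * b_ + c_ * sigma - d_)
        fp = 4 * lam**3 - 2 * (a_ + b_) * lam - c_
        lam -= f / fp

    # Optimal quaternion via the Cayley-Hamilton construction
    alpha = lam**2 - sigma**2 + kappa
    beta = lam - sigma
    gamma = (lam + sigma) * alpha - delta
    x = (alpha * np.eye(3) + beta * S + S @ S) @ z
    q = np.append(x, gamma)
    q /= np.linalg.norm(q)

    # Attitude matrix from the quaternion, with the same convention as above
    vq, qs = q[:3], q[3]
    Vx = np.array([[0.0, vq[2], -vq[1]],
                   [-vq[2], 0.0, vq[0]],
                   [vq[1], -vq[0], 0.0]])
    A = (qs**2 - vq @ vq) * np.eye(3) + 2 * np.outer(vq, vq) + 2 * qs * Vx
    return q, A

# Quick self-test with a known rotation about the z-axis.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w = v @ R.T                      # w_i = R v_i
q, A = quest([0.5, 0.5], v, w)
print(np.allclose(A, R, atol=1e-8))   # True
```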
[ { "math_id": 0, "text": "\\mathbf{A}^*" }, { "math_id": 1, "text": " l \\left( \\mathbf{A} \\right) = \\frac{1}{2} \\sum_{i=1}^{n} a_i \\left\\| \\mathbf{w}_i - \\mathbf{A} \\mathbf{v}_i \\right\\|^2 " }, { "math_id": 2, "text": "\\mathbf{w}_i" }, { "math_id": 3, "text": "\\mathbf{v}_i" }, { "math_id": 4, "text": "\\mathbf{A}" }, { "math_id": 5, "text": "a_i" }, { "math_id": 6, "text": "\\textstyle \\sum_i a_i = 1" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "g \\left( \\mathbf{A} \\right) = 1 - l \\left( \\mathbf{A} \\right) = \\sum_i a_i \\mathbf{w}_i^\\top \\mathbf{A} \\mathbf{v}_i " }, { "math_id": 9, "text": "l" }, { "math_id": 10, "text": " g \\left( \\mathbf{A} \\right) = \\operatorname{tr} \\left( \\mathbf{A} \\mathbf{B}^\\top \\right) " }, { "math_id": 11, "text": "\\mathbf{B} = \\textstyle \\sum_i a_i \\mathbf{w}_i \\mathbf{v}_i^\\top" }, { "math_id": 12, "text": "\\mathbf{q} = \\left( v_1, v_2, v_3, q\\right)" }, { "math_id": 13, "text": "\\mathbf{v} = \\left( v_1, v_2, v_3 \\right)" }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "\\theta = 2 \\cos^{-1} q" }, { "math_id": 16, "text": "\\textstyle \\frac{1}{\\sin \\frac{\\theta}{2}} \\mathbf{v}" }, { "math_id": 17, "text": "\\mathbf{q}^\\top \\mathbf{q} = 1" }, { "math_id": 18, "text": " \\mathbf{A} = \\left( q^2 - \\mathbf{v} \\cdot \\mathbf{v} \\right) \\mathbf{I} + 2 \\mathbf{v}\\mathbf{v}^\\top + 2 q \\mathbf{V}_\\times " }, { "math_id": 19, "text": "\\mathbf{V}_\\times" }, { "math_id": 20, "text": " \\mathbf{V}_\\times =\n\\begin{pmatrix}\n 0 & v_3 & -v_2 \\\\\n-v_3 & 0 & v_1 \\\\\n v_2 & -v_1 & 0 \\\\\n\\end{pmatrix}\n" }, { "math_id": 21, "text": "\\mathbf{q}" }, { "math_id": 22, "text": " g(\\mathbf{q}) = \\mathbf{q}^\\top \\mathbf{K} \\mathbf{q} " }, { "math_id": 23, "text": "4 \\times 4" }, { "math_id": 24, "text": " \\mathbf{K} =\n\\begin{pmatrix}\n\\mathbf{S} - \\sigma \\mathbf{I} & \\mathbf{z} \\\\\n\\mathbf{z}^\\top & \\sigma\n\\end{pmatrix}\n" }, { "math_id": 25, "text": "\n\\begin{align}\n\\mathbf{S} &= \\mathbf{B} + \\mathbf{B}^\\top \\\\\n\\mathbf{z} &= \\sum_i a_i \\left( \\mathbf{w}_i \\times \\mathbf{v}_i \\right) \\\\\n\\sigma &= \\operatorname{tr} \\mathbf{B} .\n\\end{align}\n" }, { "math_id": 26, "text": "-\\lambda \\mathbf{q}^\\top \\mathbf{q}" }, { "math_id": 27, "text": " \\hat{g} \\left( \\mathbf{q} \\right) = \\mathbf{q}^\\top \\mathbf{K} \\mathbf{q} - \\lambda \\mathbf{q}^\\top \\mathbf{q} " }, { "math_id": 28, "text": "\\mathbf{K} \\mathbf{q} = \\lambda \\mathbf{q}" }, { "math_id": 29, "text": "\\mathbf{q}^*" }, { "math_id": 30, "text": "\\lambda_{\\text{max}}" }, { "math_id": 31, "text": "\\mathbf{K}" }, { "math_id": 32, "text": "\n\\begin{align}\n\\mathbf{y} &= \\left( (\\lambda + \\sigma) \\mathbf{I} - \\mathbf{S} \\right)^{-1} \\mathbf{z} \\\\\n\\lambda &= \\sigma + \\mathbf{z} \\mathbf{y}\n\\end{align}\n" }, { "math_id": 33, "text": "\\mathbf{y} = \\textstyle \\frac{1}{q} \\mathbf{v}" }, { "math_id": 34, "text": "\\mathbf{y}" }, { "math_id": 35, "text": " \\lambda = \\sigma + \\mathbf{z}^\\top \\left( (\\lambda + \\sigma) \\mathbf{I} - \\mathbf{S} \\right)^{-1} \\mathbf{z} " }, { "math_id": 36, "text": "\\lambda_{\\text{max}} = \\max g\\left(\\mathbf{A}\\right)" }, { "math_id": 37, "text": "\\lambda_{\\text{max}} = 1 - \\min l\\left(\\mathbf{A}\\right)" }, { "math_id": 38, "text": "\\lambda_{\\text{max}} \\approx 1" }, { "math_id": 39, "text": " \\mathbf{q}^* = \\frac{1}{\\sqrt{1 + \\left| \\mathbf{y}_{\\lambda_{\\text{max}}} \\right|^2}} (\\mathbf{y}, 1)^\\top " }, 
{ "math_id": 40, "text": "\\theta = \\pi" }, { "math_id": 41, "text": "3\\times3" }, { "math_id": 42, "text": "\\mathbf{S}" }, { "math_id": 43, "text": " \\det \\left[ \\mathbf{S} - \\xi \\mathbf{I} \\right] = -\\xi^3 + 2 \\sigma \\xi^2 - k \\xi + \\Delta = 0 " }, { "math_id": 44, "text": "\n\\begin{align}\n\\sigma &= \\frac{1}{2} \\operatorname{tr}{\\mathbf{S}} \\\\\nk &= \\operatorname{tr} \\left( \\operatorname{adj} \\mathbf{S} \\right) \\\\\n\\Delta &= \\det \\mathbf{S}\n\\end{align}\n" }, { "math_id": 45, "text": " -\\mathbf{S}^3 + 2 \\sigma \\mathbf{S}^2 - k \\mathbf{S} + \\Delta = 0 " }, { "math_id": 46, "text": " \\left( (\\omega + \\sigma) \\mathbf{I} - \\mathbf{S} \\right)^{-1} = \\frac{\\alpha \\mathbf{I} + \\beta \\mathbf{S} + \\mathbf{S}^2}{\\gamma} " }, { "math_id": 47, "text": "\n\\begin{align}\n\\alpha &= \\omega^2 - \\sigma^2 + k \\\\\n\\beta &= \\omega - \\sigma \\\\\n\\gamma &= (\\omega + \\sigma) \\alpha - \\Delta\n\\end{align}\n" }, { "math_id": 48, "text": "\\omega = \\lambda_{\\text{max}}" }, { "math_id": 49, "text": "\n\\begin{align}\n\\mathbf{y}^* &= \\left( (\\lambda + \\sigma) \\mathbf{I} - \\mathbf{S} \\right)^{-1} \\mathbf{z} \\\\\n &= \\frac{\\alpha \\mathbf{I} + \\beta \\mathbf{S} + \\mathbf{S}^2}{\\gamma} \\mathbf{z}\n\\end{align}\n" }, { "math_id": 50, "text": " \\mathbf{q}^* = \\frac{1}{\\sqrt{\\gamma^2 + \\left| \\mathbf{x} \\right|^2}} (\\mathbf{x}, \\gamma)^\\top " }, { "math_id": 51, "text": " \\mathbf{x} = \\left( \\alpha \\mathbf{I} + \\beta \\mathbf{S} + \\mathbf{S}^2 \\right) \\mathbf{z} " }, { "math_id": 52, "text": "\\left( (\\omega + \\sigma) \\mathbf{I} - \\mathbf{S} \\right)^{-1}" }, { "math_id": 53, "text": " \\lambda^4 - (a + b) \\lambda^2 - c \\lambda + (ab + c \\sigma - d) = 0 " }, { "math_id": 54, "text": "\n\\begin{align}\na &= \\sigma^2 - k \\\\\nb &= \\sigma^2 + \\mathbf{z}^\\top \\mathbf{z} \\\\\nc &= \\Delta + \\mathbf{z}^\\top \\mathbf{S} \\mathbf{z} \\\\\nd &= \\mathbf{z}^\\top \\mathbf{S}^2 \\mathbf{z}\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=70174520
70175623
K-stability of Fano varieties
In mathematics, and in particular algebraic geometry, K-stability is an algebro-geometric stability condition for projective algebraic varieties and complex manifolds. K-stability is of particular importance for the case of Fano varieties, where it is the correct stability condition to allow the formation of moduli spaces, and where it precisely characterises the existence of Kähler–Einstein metrics. K-stability was first defined for Fano manifolds by Gang Tian in 1997 in response to a conjecture of Shing-Tung Yau from 1993 that there should exist a stability condition which characterises the existence of a Kähler–Einstein metric on a Fano manifold. It was defined in reference to the "K-energy functional" previously introduced by Toshiki Mabuchi. Tian's definition of K-stability was reformulated by Simon Donaldson in 2001 in a purely algebro-geometric way. K-stability has become an important notion in the study and classification of Fano varieties. In 2012 Xiuxiong Chen, Donaldson, and Song Sun and independently Gang Tian proved that a smooth Fano manifold is K-polystable if and only if it admits a Kähler–Einstein metric. This was later generalised to singular K-polystable Fano varieties due to the work of Berman–Boucksom–Jonsson and others. K-stability is important in constructing moduli spaces of Fano varieties, where observations going back to the original development of geometric invariant theory show that it is necessary to restrict to a class of stable objects to form good moduli. It is now known through the work of Chenyang Xu and others that there exists a projective coarse moduli space of K-polystable Fano varieties of finite type. This work relies on Caucher Birkar's proof of boundedness of Fano varieties, for which he was awarded the 2018 Fields medal. Due to the reformulations of the K-stability condition by Fujita–Li and Odaka, the K-stability of Fano varieties may be explicitly computed in practice. Which Fano varieties are K-stable is well understood in dimension one, two, and three. Definition and characterisations. The notion of K-stability for Fano manifolds was originally specified using differential geometry by Tian, who extended the purely analytical notion of the Futaki invariant of a vector field to the case of certain normal varieties with orbifold singularities. This was later reformulated in a purely algebro-geometric form by Donaldson, but this general definition lost a direct link to the geometry of Fano varieties, instead making sense for the broader class of all projective varieties. Work of Tian shows that the Donaldson–Futaki invariant specifying the weight of the formula_0-action on the central fibre of a test configuration can be computed in terms of certain intersection numbers (corresponding to the weight of an action on the so-called CM line bundle). In the Fano case these intersection numbers, which involve the anticanonical divisor of the variety and its test configuration, can be given powerful alternative characterisations in terms of the algebraic and birational geometry of the Fano variety. Thus in the case of Fano varieties, there are many different but equivalent characterisations of K-stability, and some of these characterisations lend themselves to explicit calculation or easier proofs of results. In this section all definitions are stated in the generality of a formula_1-Fano variety, which is a Fano variety with ample formula_1-Cartier anticanonical divisor and at worst Kawamata log terminal (klt) singularities. 
The definitions of K-stability can be made for any formula_1-Gorenstein Fano variety (that is, any Fano variety where the anticanonical divisor is formula_1-Cartier), however it was proven by Odaka that every K-semistable Fano variety has at worst klt singularities, so for the purpose of studying K-stability it suffices to assume at worst klt singularities. Every definition can be extended in a straightforward way to formula_1-log Fano pairs, a pair formula_2 of a klt variety X and klt divisor such that formula_3 is ample and formula_1-Cartier. Traditional definition. The definition of K-stability for a Fano manifold, or more generally a formula_1-Fano variety, can be given in many forms. The general definition of K-stability in terms of test configurations (see K-stability for more details) can be simplified if the type of test configuration one considers can be simplified. For example, in the case of toric varieties, one may always take test configurations which are also toric, and this leads to a recharacterisation of K-stability in terms of convex functions on the moment polytope of the toric variety, as was observed by Donaldson in his first paper on K-stability. In the case of Fano manifolds, it was already implicit in the work of Tian that one may restrict to test configurations with a simplified central fibre, in that case where the central fibre is a normal variety. In this case there exists an intersection-theoretic formula for the Donaldson–Futaki invariant of a normal test configuration formula_4 for formula_5. Explicitly, one extends the test configuration formula_6 to a test configuration over the complex projective line formula_7 trivially at the point formula_8, one has a formula formula_9 With respect to this invariant, if formula_10 is a formula_1-Fano variety, we say formula_10 is According to the above definitions, there are implications Uniformly K-stable formula_19 K-stable formula_19 K-semistable The above definitions are not well-suited to the situation where the Fano variety formula_10 has automorphisms. When the space of automorphisms is positive-dimensional formula_20, it was observed by Akito Futaki that there are certain test configurations constructed out of the automorphisms of formula_10 which are "trivial" for the perspective of testing K-stability. In this case one should restrict to those test configurations which are equivariant with respect to the action of a maximal torus formula_21, and this leads to the notion of K-polystability or reduced K-stability. We say formula_10 is As for the case where the automorphism group is not positive-dimensional, we have implications Reduced uniformly K-stable formula_19 K-polystable formula_19 K-semistable The condition of "uniform" stability is "a priori" stronger than stability, because it assumes a "uniform" bound above zero for the Donaldson–Futaki invariant of the test configuration. However it turns out in the case of formula_1-Fano varieties uniform stability is actually equivalent to stability.&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (Liu–Xu–Zhuang): — (Reduced) uniform K-stability is equivalent to K-(poly)stability for formula_1-Fano varieties. Many results can be proved more easily for uniform K-stability because a uniform bound is stronger than a non-uniform bound, so often one works with this definition as opposed to the traditional K-(poly)stability. 
In the more general case of a polarised variety considered in the article on K-stability it is still an open and important problem to characterise how (reduced) uniform K-stability relates to K-(poly)stability. Special test configurations. As mentioned above, sometimes the type of test configuration to be considered can be simplified. In the case of Fano varieties, a special test configuration is a test configuration formula_4 such that we have a rational equivalence of divisors formula_26 for some formula_27, and the central fibre formula_28 is also a formula_1-Fano variety. One may prove that given any test configuration formula_4, there exists a "special" test configuration formula_29 such that formula_30 This implies that for the purposes of testing the K-stability of formula_10, it suffices restrict to just looking at the above definitions of K-stability for special test configurations. The fact that one may assume the central fibre of the test configuration is also Fano leads to strong links with birational geometry and the minimal model program, providing a number of alternative characterisations of K-stability described in the following sections. The main alternative characterisation is in terms of a different notion of Ding stability, which is a variation of the K-stability condition for the Ding invariant formula_31 where one adds on the log canonical threshold of the test configuration. The Ding invariant can only be defined in the setting of Fano varieties. Using this new invariant instead of formula_32, one can define every notion of Ding stability exactly as above, leading to Ding (semi/poly)stability and uniform versions. The Ding invariant has better formal properties with respect to algebraic geometry than the Donaldson–Futaki invariant. It is known that when a test configuration is special, the Ding invariant agrees with the Donaldson–Futaki invariant up to a constant factor, and so for Fano varieties Ding stability is equivalent to K-stability. Alpha invariant. The first known effective criteria to test for K-stability was developed by Tian. Originally Tian's work was designed to directly provide a criterion for the existence of a Kähler–Einstein metric on a Fano manifold, and by later work it is known that every Kähler–Einstein Fano manifold is K-polystable. Tian's original definition of the alpha invariant was analytical in nature, but can be used to verify the existence of a Kähler–Einstein metric in practice. The alpha invariant of Tian can be defined relative to a group of automorphisms formula_33, and the alpha invariant formula_34 corresponds to the concept of reduced K-stability or K-polystability above. Fix a formula_33-invariant Kähler metric formula_35 on a Fano manifold. Define a special class of Kähler potentials by formula_36 Then the alpha invariant is defined by formula_37 The importance of this invariant is as follows: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem: (Tian) — Let formula_10 be a smooth Fano manifold of dimension formula_38. If the alpha invariant formula_39 then formula_10 admits a formula_33-invariant Kähler–Einstein metric. It was later observed by Odaka–Sano that the alpha invariant can be given a purely algebro-geometric definition in terms of an infimum of the log canonical threshold over all formula_33-invariant linear systems contained inside formula_40. Precisely, Demailly showed formula_41 This allows purely algebro-geometric proofs of the existence of Kähler–Einstein metrics. Beta invariant. 
The beta invariant formula_42 makes close contact with birational geometry. This invariant was developed by Fujita and Li in an attempt to discover a characterisation of K-stability in terms of divisors or valuations of the Fano variety formula_10. This work was inspired by earlier ideas of Ross–Thomas which attempted to describe K-stability in terms of algebraic invariants coming out of subschemes of the variety formula_10. Whilst it is not possible to show that this "slope" K-stability is equivalent to K-stability, by passing not just to divisors inside formula_10 but divisors inside any birational model over formula_10, one obtains "enough" objects to accurately test for K-stability. In particular Fujita realised that Ross–Thomas's notion of slope K-stability was limited by only integrating up to the Seshadri constant of the subscheme, where the natural divisor on the blow-up becomes ample. By contrast the formula_43-invariant integrates up to the pseudoeffective threshold where the natural divisor has positive volume (since every ample divisor has positive volume, the pseudoeffective threshold goes beyond the Seshadri constant). This extra information gives Fujita and Li's valuative criterion enough information to fully characterise K-stability. Suppose formula_10 is a normal variety with formula_1-Cartier canonical divisor formula_44. One says formula_45 is a "divisor over formula_10" if formula_45 is a divisor contained inside some normal variety formula_46 such that there exists a proper birational morphism formula_47 (for example given by a blow up of formula_10). One defines the "log discrepancy" of a divisor formula_45 over formula_10 as formula_48 where formula_49 is the discrepancy of the divisor formula_45 in the sense of birational geometry (see canonical singularity). The discrepancy of a divisor formula_45 over formula_10 is defined as follows. Away from the exceptional locus of the birational morphism formula_47, the canonical divisors of formula_46 and formula_10 agree. Therefore, their difference is given by some sum of prime divisors formula_50 contained in the exceptional locus of formula_51. That is, formula_52 where formula_53. By definition formula_54 and formula_55 when formula_45 is not one of the prime divisors formula_50 in the exceptional locus. The log discrepancy of formula_10 measures the singularities of the Fano variety. In particular X is Kawamata log terminal if and only if formula_56 for any formula_45 over formula_10. To define the beta invariant, one also needs a volume term. For a divisor formula_45 over formula_10, define formula_57 Here the volume of a divisor measures the rate at which its space of sections grows in comparison to the expected dimension. Namely, formula_58 where formula_38. Finally, the beta invariant was defined by Fujita and Li as formula_59 Despite the complicated definition, due to the powerful tools of birational geometry, this invariant may be explicitly computed in practice for many classes of Fano varieties where the structure of divisors in their birational models is known. This can often be achieved with the use of computational algebraic geometry or by-hand calculation. The relevance of the formula_43-invariant is in the following characterisation of K-stability first observed (in one direction) by Fujita and Li independently.
&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem: (Fujita–Li, Blum–Xu) — A formula_1-Fano variety formula_10 is K-semistable if and only if formula_60 for all divisors formula_45 over formula_10. Furthermore formula_10 is K-stable if and only if formula_61 for all divisors over formula_10. Delta invariant. The delta invariant can be defined as a "multiplicative" version of the "additive" beta invariant. The "delta invariant of a divisor formula_45 over formula_10" is defined by formula_62 The delta invariant of formula_10 is then given by a uniform measurement of the delta invariants of all divisors over formula_10. formula_63 The delta invariant of a divisor is conceptually similar to the beta invariant, however it was observed by Fujita–Odaka that one can compute the delta invariant as a limit of "quantized" delta invariants formula_64 as formula_65. The quantized delta invariants can be computed in terms of "m-basis type divisors" which are given by choices of bases in the fixed finite-dimensional vector space formula_66. Thus the delta invariant is generally more computable and more theoretically powerful than its predecessors, and much progress on the explicit computations of K-stability for Fano varieties, and in the theory of moduli of Fano varieties has occurred since its introduction. Its initial importance to the theory of K-stability is captured in the following characterisation. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem: (Fujita–Odaka, Blum–Xu) — A formula_1-Fano variety formula_10 is K-semistable if and only if formula_67. Furthermore it is uniformly K-stable if and only if formula_68. The algebraic formula_69-invariant can make contact with the explicit analytical properties of Kähler–Einstein metrics. In particular, one may define the "greatest Ricci lower bound" formula_70 as the supremum of all formula_71 such that there exists a Kähler metric formula_72 such that formula_73. This is the limit of how far one can traverse the natural continuity method to solve the Kähler–Einstein equation. If the greatest Ricci lower bound takes the value formula_74 then one can complete the continuity method to derive the existence of a Kähler–Einstein metric. It turns out that precisely how far you can go along this continuity method, the greatest Ricci lower bound, is exactly given by the formula_69-invariant. That is, formula_75 In the case of toric Fano manifolds an even more geometric interpretation of the delta invariant was derived by Li. For such a toric Fano formula_76, the origin formula_77 is always contained in the interior of the moment polytope formula_78. If formula_79 denotes the barycentre of the polytope formula_78 and formula_80 denotes the point on the boundary of the polytope intersecting the ray formula_81, then Li showed that the greatest Ricci lower bound is given by the ratio formula_82. In particular the toric Fano has formula_83 if and only if its barycentre is the origin. Interpreted using the delta invariant (and indeed using earlier results), one concludes that a toric Fano manifold is K-stable if and only if the barycentre of its polytope formula_78 is the origin. Existence of Kähler–Einstein metrics. From its initial introduction, the notion of K-stability has been intimately linked to the existence of Kähler–Einstein metrics on Fano manifolds. There are now many theorems which relate certain K-stability assumptions to the existence of solutions. 
These conjectures fall broadly under the title of the Yau–Tian–Donaldson conjecture. In the case of Fano varieties this conjecture asserts:&lt;templatestyles src="Math_theorem/styles.css" /&gt; Conjecture (Yau–Tian–Donaldson) — A Fano manifold admits a Kähler–Einstein metric if and only if it is K-polystable. For Fano manifolds this conjecture was originally proposed by Yau and Tian, and a more general form was stated by Donaldson which extends beyond just the case of Fano manifolds. Nevertheless, the conjecture even in the case of Fanos has come to be known as the Yau–Tian–Donaldson conjecture. See K-stability for more discussion of the general conjecture. In the case of Fano manifolds, the YTD conjecture admits generalisations beyond the case of smooth varieties and forms of the conjecture are now known for singular Fanos and log Fanos. Smooth Fano varieties. The forward direction of the conjecture, that a Fano manifold with a Kähler–Einstein metric is K-polystable, was proven by Tian in his original paper when the Fano manifold has a discrete automorphism group, that is, formula_84. This direction was proven in full generality, removing the assumption that the automorphism group was discrete, by Berman. The reverse direction of the Yau–Tian–Donaldson conjecture was first resolved in the smooth case as stated above by Chen–Donaldson–Sun, and at the same time by Tian. Chen, Donaldson, and Sun have alleged that Tian's claim to equal priority for the proof is incorrect, and they have accused him of academic misconduct. Tian has disputed their claims. Chen, Donaldson, and Sun were recognized by the American Mathematical Society's prestigious 2019 Veblen Prize as having had resolved the conjecture. The Breakthrough Prize has recognized Donaldson with the Breakthrough Prize in Mathematics and Sun with the New Horizons Breakthrough Prize, in part based upon their work with Chen on the conjecture.&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — If a Fano manifold is K-polystable, then it admits a Kähler–Einstein metric. The proofs of Chen–Donaldson–Sun and Tian were based on a delicate study of Gromov–Hausdorff limits of Fano manifolds with Ricci curvature bounds. More recently, a proof based on the "classical" continuity method was provided by Ved Datar and Gabor Székelyhidi, followed by a proof by Chen, Sun, and Bing Wang using the Kähler–Ricci flow. Robert Berman, Sébastien Boucksom, and Mattias Jonsson also provided a proof from a new variational approach, which interprets K-stability in terms of Non-Archimedean geometry. Of particular interest is that the proof of Berman–Boucksom–Jonsson also applies to the case of a smooth log Fano pair, and does not use the notion of K-polystability but of uniform K-stability as introduced by Dervan and Boucksom–Hisamoto–Jonsson. It is now known that uniform K-stability is equivalent to K-stability and so BBJ's proof provides a new proof of the full YTD conjecture. Building on the variational techniques Berman–Boucksom–Jonsson and the so-called "quantized" delta invariants of Fujita–Odaka, Zhang produced a short quantization-based proof of the YTD conjecture for smooth Fano manifolds. Using other techniques entirely, Berman has also produced a proof of a YTD-type conjecture using a thermodynamic approach called "uniform Gibbs stability", where a Kähler–Einstein metric is constructed through a random point process. Singular Fano varieties and weak Kähler–Einstein metrics. 
The new proof of the Yau–Tian–Donaldson conjecture by Berman–Boucksom–Jonsson using variational techniques opened up the possible study of K-stability and Kähler–Einstein metrics for "singular" Fano varieties. The variational techniques used rely on "uniform" K-stability as described above. The result of Berman that a Fano manifold admitting a Kähler–Einstein metric is K-polystable was proven in the full generality of a formula_1-log Fano pair, admitting a weak Kähler–Einstein metric. A weak Kähler–Einstein metric on a formula_1-Fano variety formula_10 is a positive formula_85-current formula_86which restricts to give a smooth Kähler–Einstein metric formula_87on the smooth locus formula_88 of formula_10. By requiring a compatibility with a divisor formula_89, this definition can be extended to a weak Kähler–Einstein metric on a pair formula_2. In this generality, the reverse direction of the YTD conjecture was proven by Li–Tian–Wang in the case where the automorphism group is discrete, and in full generality by Li.&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (Li–Tian–Wang, Li) — A formula_1-log Fano pair formula_2 which is reduced uniformly K-stable admits a weak Kähler–Einstein metric. By the resolution of the finite generation conjecture by Liu–Xu–Zhuang it is known that reduced uniform K-stability is equivalent to K-polystability, so combined with Berman's result the Yau–Tian–Donaldson conjecture is true in complete generality for singular Fano varieties.&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem (Li–Tian–Wang, Li, Berman, Liu–Xu–Zhuang) — A formula_1-log Fano pair formula_2 admits a weak Kähler–Einstein metric if and only if it is K-polystable. Moduli spaces of K-stable Fano varieties. The construction of moduli spaces is a central problem in algebraic geometry. The construction of moduli of algebraic curves spurred the development of geometric invariant theory, stacks, and classification of algebraic surfaces has motivated results throughout algebraic geometry. The case of moduli spaces of canonically polarised varieties was settled using techniques arising from the minimal model program by Kollár–Shepherd-Barron leading to the so-called KSB moduli spaces of varieties of general type. A key property of varieties of general type which allow the construction of moduli is the lack of automorphisms of such varieties. This does not hold for Fano varieties, which can often have very large automorphism groups, so the minimal model program did not directly find applications to the construction of moduli of Fano varieties, and it became clear that K-stability was the correct algebro-geometric notion to allow the formation of moduli in this case. Moduli spaces of K-stable varieties are known as "K-moduli". Smooth case. In the case of smooth Fano manifolds, one may use techniques arising out of the Yau–Tian–Donaldson conjecture to construct the moduli space analytically. In particular work of Odaka and Donaldson building upon the ideas of Gromov compactness of Kähler–Einstein Fanos used in the proof of the YTD conjecture implies the existence of moduli spaces of smooth Fano Kähler–Einstein manifolds with discrete automorphism groups. These moduli spaces are Hausdorff and have at worst quotient singularities. By the YTD conjecture these are alternatively moduli spaces of smooth K-polystable Fano varieties with discrete automorphism groups. 
However, a Gromov–Hausdorff limit of smooth Fano Kähler–Einstein manifolds may lead to a singular formula_1-Fano variety, so the moduli spaces described by Odaka and Donaldson is not compact, a criterion that is often desirable in the formation of moduli spaces. One method of compactifying the moduli space of smooth K-polystable Fanos is to pass to a moduli space of "singular" K-polystable Fanos, and use algebraic geometry to prove its projectivity. The Yau–Tian–Donaldson conjecture for singular Fano varieties would give this compactification an alternative point of view as consisting of singular Fano varieties with "weak" Kähler–Einstein metrics. General case. The standard algebraic technique to construct moduli spaces utilizes geometric invariant theory. Typically to apply Mumford's geometric invariant theory to construct moduli, one must embed a family of varieties inside a fixed finite-dimensional projective space. Such a family then defines a locus of points in the corresponding Hilbert scheme of the projective space, which is a projective scheme on which the group of projective automorphisms act. GIT stability with respect to this linearisation is called "Hilbert stability". If this locus forms an open set, then GIT may be used to construct a quotient which parametrises these objects. In good circumstances this quotient may be proper and projective. It is not always possible to embed a family of varieties inside a fixed projective space and therefore describe their moduli with geometric invariant theory, and this special property is called boundedness. A fundamental property of Fano varieties is that they fail to be bounded, and thus their stability cannot be reasonably captured by any finite-dimensional geometric invariant theory. This explains why K-stability requires one to consider test configurations formula_4 for which the relatively ample line bundle formula_90 can correspond to some power formula_91 for formula_92 arbitrarily large. However, results of Caucher Birkar showed that certain families of Fano varieties with "volume bounded below" form bounded families, which suggests that it may be possible to study stability of volume-bounded families of Fano varieties to form moduli spaces. For this work Birkar was awarded the Fields Medal in 2018. It was proven by Jiang that K-semistable formula_1-Fano varieties with volume bounded below form a bounded family. Thus for a given volume formula_93 there exists a uniform integer formula_94 such that every K-semistable formula_1-Fano with anticanonical volume larger than or equal to formula_95 admits an embedding inside the fixed projective space formula_96. The openness of this locus of K-semistable Fanos was proven by Blum–Liu–Xu and Xu. This implies the existence of an Artin stack of finite type denoted formula_97 parametrising K-semistable formula_1-Fano varieties with volume bounded below by formula_95. In order to find a genuine moduli space as a projective variety or scheme, one must prove certain properties about "S"-completeness and formula_98-reductivity of K-semistable Fanos inside the stack formula_97. Using properties of K-polystability, these properties of the moduli stack are true and there exists a coarse moduli space formula_99 for the stack formula_97 which parametrises K-polystable formula_1-Fano varieties with volume bounded below by formula_95. It was proven that formula_99 is proper and that the CM-line bundle is ample, meaning the coarse moduli space is also projective. 
The existence result for K-moduli can be summarised in the following theorem.&lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — There exists a separated, proper, projective, good moduli space formula_99 parametrising formula_100-dimensional K-polystable formula_1-Fano varieties with anticanonical volume bounded below by formula_93. The construction of the moduli space of K-polystable Fanos can be generalised to the setting of log Fano varieties. The case of singular formula_1-Fano varieties which are smoothable (that is they are limits of algebraic families of smooth K-polystable Fano manifolds) was solved earlier by Li–Wang–Xu using a combination of analytic techniques, also relying on the earlier work of Odaka, Donaldson, and Codogni–Patakfalvi. There the coarse moduli space is shown to be a scheme, but in general the existence results for K-moduli only guarantee the existence of an algebraic space. Explicit K-stability of Fano varieties. The explicit study of K-stable Fano varieties precedes the algebraic notion of K-stability, and in low dimensions was of interest purely due to the study of Kähler–Einstein manifolds. For example, either by explicit construction or the use the Tian's alpha invariant, all smooth Kähler–Einstein manifolds of dimension 1 and 2 were known before the definition of K-stability was introduced. In dimension 3 and higher explicit constructions of Kähler–Einstein metrics become more difficult, but advances arising out of the algebraic study of K-stability have enabled explicit computations of K-polystable Fano threefolds and certain families of higher dimensional varieties, and subsequently the discovery of new Kähler–Einstein manifolds. Dimension 1. In dimension one there is a unique smooth Fano variety, the complex projective line formula_101. This variety is easily seen to be K-stable due to the existence of the Fubini–Study metric, which is a Kähler–Einstein metric, implying the K-polystability of formula_101. A purely algebro-geometric proof of the K-stability of smooth Riemann surfaces follows from the work of Ross–Thomas on slope K-stability, which is equivalent to K-stability in dimension one. In this case one may construct test configurations out of collections of points on the curve, and when the curve is smooth no points destabilise. Dimension 2. In dimension two the spaces which admit Kähler–Einstein metrics were classified by Tian. There are 10 deformation families of smooth Fano varieties in dimension two, the del Pezzo surfaces. Using the alpha invariant, Tian showed that a smooth Fano surface admits a Kähler–Einstein metric and is K-polystable if and only if it is not the blow up of the complex projective plane formula_102 in one or two points. Thus 8 out of these 10 classes consist of K-polystable Fano surfaces. The K-moduli of Fano surfaces were studied in explicit examples by Tian and Mabuchi–Mukai. Explicit constructions of compact moduli spaces of Kähler–Einstein Fano surfaces were achieved by Odaka–Spotti–Sun. These spaces were constructed as Gromov–Hausdorff compactifications but were identified with explicit algebraic spaces of log Fano surfaces. For example, it is proven by Odaka–Spotti–Sun that the compact moduli space of smoothable Kähler–Einstein surfaces of degree four is given by the weighted projective space formula_103 with the smooth Kähler–Einstein surfaces of degree four corresponding to the locus formula_104 where formula_105 is an ample divisor consisting of those points satisfying the equation formula_106. 
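The toric barycentre criterion recalled in the delta invariant section above can be checked directly for the toric del Pezzo surfaces appearing in this two-dimensional classification. The sketch below is illustrative only: the polytope vertex coordinates are the standard anticanonical (moment) polytopes of the projective plane and of its blow-up at one torus-fixed point, supplied here rather than taken from the text.

```python
import numpy as np

def polygon_barycentre(vertices):
    """Centroid of a polygon whose vertices are listed in boundary order."""
    V = np.asarray(vertices, dtype=float)
    x, y = V[:, 0], V[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y                    # shoelace terms
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

# Anticanonical moment polytopes of two toric del Pezzo surfaces:
# the projective plane, and its blow-up at one torus-fixed point.
P2     = [(-1, -1), (2, -1), (-1, 2)]
Bl1_P2 = [(-1, 0), (0, -1), (2, -1), (-1, 2)]

print(polygon_barycentre(P2))      # [0. 0.]            barycentre at the origin
print(polygon_barycentre(Bl1_P2))  # approx [0.083 0.083] barycentre away from the origin
```

The barycentre sits at the origin for the plane, which carries the Fubini–Study Kähler–Einstein metric, and away from the origin for the one-point blow-up, matching the classification quoted above in which blow-ups of the plane in one or two points admit no Kähler–Einstein metric.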
Dimension 3. In dimension 3 purely algebraic techniques can be used to find examples of K-stable Fano varieties which are not "a priori" known to admit Kähler–Einstein metrics. The Iskovskikh–Mori–Mukai classification of smooth Fano threefolds provides a natural way of breaking down the problem of studying K-stable Fano threefolds into its components. It is known that there are 105 deformation families of smooth Fano threefolds, and explicit computations using Fujita–Li's beta invariant and Fujita–Odaka's delta invariant can be used to determine which deformation families contain K-stable representatives. For every deformation family it is known whether the generic element of the family is K-(poly)stable. In particular it is known that 78 out of 105 families contain a K-polystable representative in their deformation class. For 71 out of 105 families, it is known for every single member of the deformation class whether or not it is K-polystable. For many of the 105 deformation families, the K-stability of representative threefolds can be interpreted in terms of a natural GIT problem which describes that family, and so explicit examples of K-moduli of Fano threefolds can also be found as GIT quotients. For some classes of Fano threefolds the classification problem remains open. For example, it is known that the Mukai–Umemura threefold formula_107 in the deformation class formula_108 admits a Kähler–Einstein metric and is therefore K-polystable by work of Donaldson, who computed Tian's alpha invariant explicitly using the criterion above. This manifold has non-discrete automorphism group formula_109, and it is not known which nearby deformations of formula_107 are also K-polystable. It is conjectured that the deformations corresponding to GIT-polystable points within the versal deformation space of formula_110 should correspond to nearby K-polystable varieties. Higher dimensions. The first and simplest example of a K-polystable Fano manifold in any dimension is complex projective space, which always admits the Fubini–Study metric, which is Kähler–Einstein in any dimension, and therefore all projective spaces are K-polystable. In general there are not many such "obvious" Kähler–Einstein metrics in higher dimensions, and one must use recent techniques of stability to find examples. For certain families of Fano varieties, K-stability can be proved in higher dimensions using either analytic techniques through the alpha invariant or purely algebro-geometric techniques with the beta or delta invariants. As an example, a Fermat hypersurface is a variety of the form formula_111 These hypersurfaces are smooth Fano manifolds with discrete automorphism group for formula_112, and it was proven by Tian using the alpha invariant that formula_113, implying that formula_114 admits a Kähler–Einstein metric for formula_115, and using more detailed arguments Tian proved the existence of a Kähler–Einstein metric when formula_116. On the other hand, using the delta invariant Zhuang gave a completely algebraic proof that formula_114 is K-stable for formula_117 and therefore admits a Kähler–Einstein metric in these cases. Using the openness results for uniform K-stability and K-semistability, one can conclude from this that the generic smooth hypersurface of degree formula_117 inside formula_118 is K-stable. In some cases it is in fact known that "all" degree formula_119 hypersurfaces are K-stable, such as all formula_120 smooth hypersurfaces in formula_118.
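Returning to the Fermat hypersurfaces above, the role of the alpha invariant can be made explicit by a short check (a sketch using only the bound formula_113 quoted above together with Tian's criterion that the alpha invariant exceed \tfrac{n}{n+1}):
\[
d = n+1:\quad \alpha_G(F_{n,d}) > \frac{2}{n+2-(n+1)} = 2 > \frac{n}{n+1},
\qquad
d = n:\quad \alpha_G(F_{n,d}) > \frac{2}{n+2-n} = 1 > \frac{n}{n+1},
\]
so the criterion is satisfied in both cases, recovering the existence of a Kähler–Einstein metric on formula_114 in the range formula_115.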
In addition to the study of particular Fano varieties, in certain settings K-moduli may be explicitly described in higher dimensions. For example, when the K-moduli admits an "obvious" GIT interpretation, the algebraic tools of beta or delta invariants can be used to verify that GIT stability is equivalent to K-stability for that particular problem. For example, Liu showed that for cubic fourfold hypersurfaces in formula_121, the GIT moduli space of (possibly singular) cubic fourfolds is isomorphic to the K-moduli space, and thus one obtains an explicit description of the K-stable, K-polystable, and K-semistable cubic fourfolds in terms of their GIT stability and singularity structure. In particular every smooth cubic fourfold is K-stable. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{C}^*" }, { "math_id": 1, "text": "\\mathbb{Q}" }, { "math_id": 2, "text": "(X,\\Delta)" }, { "math_id": 3, "text": "-(K_X + \\Delta)" }, { "math_id": 4, "text": "(\\mathcal{X},\\mathcal{L})" }, { "math_id": 5, "text": "(X,-rK_X)" }, { "math_id": 6, "text": "(\\mathcal{X},\\mathcal{L})\\to \\mathbb{C}" }, { "math_id": 7, "text": "\\mathbb{P}^1 = \\mathbb{C} \\cup \\{\\infty\\}" }, { "math_id": 8, "text": "\\infty" }, { "math_id": 9, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) = \\frac{1}{2(n+1)(-K_X)^n} \\left( n\\left(\\frac{1}{r} \\mathcal{L}\\right)^{n+1} + (n+1) K_{\\mathcal{X}/\\mathbb{P}^1} \\cdot \\left( \\frac{1}{r} \\mathcal{L}\\right)^n \\right)." }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) \\ge 0" }, { "math_id": 12, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) >0" }, { "math_id": 13, "text": "(X\\times\\mathbb{C}, L)" }, { "math_id": 14, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) \\ge \\varepsilon \\|(\\mathcal{X},\\mathcal{L})\\|_m" }, { "math_id": 15, "text": "\\|(\\mathcal{X},\\mathcal{L})\\|_m" }, { "math_id": 16, "text": "\\mathcal{X}" }, { "math_id": 17, "text": "\\varepsilon>0" }, { "math_id": 18, "text": "(X,L)" }, { "math_id": 19, "text": "\\implies" }, { "math_id": 20, "text": "\\dim \\operatorname{Aut}(X)>0" }, { "math_id": 21, "text": "T\\subset \\operatorname{Aut}(X)" }, { "math_id": 22, "text": "X\\times \\mathbb{C}" }, { "math_id": 23, "text": "\\operatorname{DF}(\\mathcal{X},\\mathcal{L}) \\ge \\varepsilon \\operatorname{J}_T^{\\operatorname{NA}}(\\mathcal{X},\\mathcal{L})" }, { "math_id": 24, "text": "\\operatorname{J}_T^{\\operatorname{NA}}(\\mathcal{X},\\mathcal{L})" }, { "math_id": 25, "text": "\\operatorname{J}" }, { "math_id": 26, "text": "\\mathcal{L} \\sim_{\\mathbb{Q}} -rK_{\\mathcal{X}}" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "\\mathcal{X}_0" }, { "math_id": 29, "text": "(\\mathcal{Y},\\mathcal{H})" }, { "math_id": 30, "text": "\\operatorname{DF}(\\mathcal{Y},\\mathcal{H}) \\le \\operatorname{DF}(\\mathcal{X},\\mathcal{L})." }, { "math_id": 31, "text": "\\operatorname{Ding}(\\mathcal{X},\\mathcal{L}) = -\\frac{(\\frac{1}{r} \\mathcal{L})^{n+1}}{(n+1)(-K_X)^n} - 1 + \\operatorname{lct}(\\mathcal{X},\\mathcal{D}_{(\\mathcal{X},\\mathcal{L})}; \\mathcal{X}_0)" }, { "math_id": 32, "text": "\\operatorname{DF}" }, { "math_id": 33, "text": "G" }, { "math_id": 34, "text": "\\alpha_G" }, { "math_id": 35, "text": "\\omega\\in c_1(X)" }, { "math_id": 36, "text": "P_G(X,\\omega) = \\{\\varphi\\in C^\\infty(X, \\mathbb{R})\\mid \\varphi \\text{ is } G \\text{ invariant}, \\sup \\varphi = 0, \\omega + i \\partial \\bar \\partial \\varphi > 0\\}." }, { "math_id": 37, "text": "\\alpha_G(X) := \\sup \\left\\{ \\alpha > 0 \\mid \\exists C(\\alpha)>0 \\text{ such that } \\int_X e^{-\\alpha \\varphi} \\omega^n < C(\\alpha) \\text{ for all } \\varphi\\in P_G(X,\\omega)\\right\\}." }, { "math_id": 38, "text": "\\dim X = n" }, { "math_id": 39, "text": "\\alpha_G(X) > \\frac{n}{n+1}" }, { "math_id": 40, "text": "\\left|-m K_X\\right|" }, { "math_id": 41, "text": "\\alpha_G(X) = \\operatorname{lct}_G(X) = \\inf_{m\\in \\mathbb{Z}_{>0}} \\inf_{D\\sim -mK_X} \\operatorname{lct}\\left(X,\\frac{1}{m}D\\right). 
" }, { "math_id": 42, "text": "\\beta_X(E)" }, { "math_id": 43, "text": "\\beta" }, { "math_id": 44, "text": "K_X" }, { "math_id": 45, "text": "E" }, { "math_id": 46, "text": "Y" }, { "math_id": 47, "text": "\\mu: Y \\to X" }, { "math_id": 48, "text": "A_X(E) := a(E,X) + 1" }, { "math_id": 49, "text": "a(E,X)" }, { "math_id": 50, "text": "E_i" }, { "math_id": 51, "text": "\\mu" }, { "math_id": 52, "text": "K_Y - \\mu^* K_X = \\sum_{i} a_i E_i" }, { "math_id": 53, "text": "a_i\\in \\mathbb{Q}" }, { "math_id": 54, "text": "a(E_i,X) = -a_i" }, { "math_id": 55, "text": "a(E,X)=0" }, { "math_id": 56, "text": "A_X(E)\\ge 0" }, { "math_id": 57, "text": "S_X(E) := \\frac{1}{(-K_X)^n} \\int_0^{\\infty} \\operatorname{Vol}(\\mu^* (-K_X) - tE) dt." }, { "math_id": 58, "text": "\\operatorname{Vol}(D) = \\limsup_{m\\to \\infty} \\frac{\\dim H^0(X, \\mathcal{O}(mD))}{m^n/n!}" }, { "math_id": 59, "text": "\\beta_X(E) := A_X(E) - S_X(E)." }, { "math_id": 60, "text": "\\beta_X(E)\\ge 0" }, { "math_id": 61, "text": "\\beta_X(E)>0" }, { "math_id": 62, "text": "\\delta(X,E) := \\frac{A_X(E)}{S_X(E)}." }, { "math_id": 63, "text": "\\delta(X) := \\inf_{E \\text{ over } X} \\delta(X,E)." }, { "math_id": 64, "text": "\\delta_m" }, { "math_id": 65, "text": "m\\to \\infty" }, { "math_id": 66, "text": "H^0(X,-mK_X)" }, { "math_id": 67, "text": "\\delta(X)\\ge 1" }, { "math_id": 68, "text": "\\delta(X)>1" }, { "math_id": 69, "text": "\\delta" }, { "math_id": 70, "text": "R(X)" }, { "math_id": 71, "text": "0\\le t \\le 1" }, { "math_id": 72, "text": "\\omega \\in c_1(X)" }, { "math_id": 73, "text": "\\operatorname{Ric}\\omega > t \\omega" }, { "math_id": 74, "text": "t=1" }, { "math_id": 75, "text": "R(X) = \\min\\{1,\\delta(X)\\}." }, { "math_id": 76, "text": "X_P" }, { "math_id": 77, "text": "O" }, { "math_id": 78, "text": "P" }, { "math_id": 79, "text": "B" }, { "math_id": 80, "text": "Q" }, { "math_id": 81, "text": "OB" }, { "math_id": 82, "text": "|BQ|/|OQ|" }, { "math_id": 83, "text": "R(X)=1" }, { "math_id": 84, "text": "\\left|\\operatorname{Aut}(X)\\right| < \\infty" }, { "math_id": 85, "text": "(1,1)" }, { "math_id": 86, "text": "\\theta_{\\operatorname{KE}}" }, { "math_id": 87, "text": "\\omega_{\\operatorname{KE}}" }, { "math_id": 88, "text": "X_{\\operatorname{reg}}" }, { "math_id": 89, "text": "\\Delta" }, { "math_id": 90, "text": "\\mathcal{L}" }, { "math_id": 91, "text": "L^k" }, { "math_id": 92, "text": "k" }, { "math_id": 93, "text": "V>0" }, { "math_id": 94, "text": "N>0" }, { "math_id": 95, "text": "V" }, { "math_id": 96, "text": "\\mathbb{CP}^N" }, { "math_id": 97, "text": "\\mathfrak{X}_{n,V}^{\\operatorname{Kss}}" }, { "math_id": 98, "text": "\\Theta" }, { "math_id": 99, "text": "X_{n,V}^{\\operatorname{Kps}}" }, { "math_id": 100, "text": "n" }, { "math_id": 101, "text": "\\mathbb{CP}^1" }, { "math_id": 102, "text": "\\mathbb{CP}^2" }, { "math_id": 103, "text": "\\mathbb{P}(1,2,3)" }, { "math_id": 104, "text": "\\mathbb{P}(1,2,3)\\backslash D" }, { "math_id": 105, "text": "D" }, { "math_id": 106, "text": "z_1^2 = 128 z_2" }, { "math_id": 107, "text": "X_{\\operatorname{MU}}" }, { "math_id": 108, "text": "V_{22}" }, { "math_id": 109, "text": "\\operatorname{SL}(2,\\mathbb{C})" }, { "math_id": 110, "text": "\\mathbb{X}_{\\operatorname{MU}}" }, { "math_id": 111, "text": "F_{n,d} = \\left\\{z\\in \\mathbb{CP}^{n+1} \\mid z_0^d + \\cdots + z_{n+1}^d = 0\\right\\} \\subset \\mathbb{CP}^{n+1}." 
}, { "math_id": 112, "text": "3 \\le d \\le n+1" }, { "math_id": 113, "text": "\\alpha_G (F_{n,d}) > \\frac{2}{n+2-d}" }, { "math_id": 114, "text": "F_{n,d}" }, { "math_id": 115, "text": "n\\le d \\le n+1" }, { "math_id": 116, "text": "d\\le n" }, { "math_id": 117, "text": "3\\le d \\le n+1" }, { "math_id": 118, "text": "\\mathbb{CP}^{n+1}" }, { "math_id": 119, "text": "d" }, { "math_id": 120, "text": "d=n,n+1" }, { "math_id": 121, "text": "\\mathbb{CP}^5" } ]
https://en.wikipedia.org/wiki?curid=70175623
70177408
Stochastic variance reduction
Family of optimization algorithms (Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical Stochastic approximation setting. Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines, as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction. Finite sum objectives. A function formula_0 is considered to have finite sum structure if it can be decomposed into a summation or average: formula_1 where the function value and derivative of each formula_2 can be queried independently. Although variance reduction methods can be applied for any positive formula_3 and any formula_2 structure, their favorable theoretical and practical properties arise when formula_3 is large compared to the condition number of each formula_2, and when the formula_2 have similar (but not necessarily identical) Lipschitz smoothness and strong convexity constants. The finite sum structure should be contrasted with the stochastic approximation setting, which deals with functions of the form formula_4, the expected value of a function depending on a random variable formula_5. Any finite sum problem can be optimized using a stochastic approximation algorithm by using formula_6. Rapid Convergence. Stochastic variance reduced methods without acceleration are able to find a minimum of formula_0 to within accuracy formula_14, i.e. formula_8, in a number of steps of the order: formula_9 The number of steps depends only logarithmically on the level of accuracy required, in contrast to the stochastic approximation framework, where the number of steps formula_10 required grows proportionally to the inverse of the required accuracy. Stochastic variance reduction methods converge almost as fast as the gradient descent method's formula_11 rate, despite using only a stochastic gradient, at a formula_12 lower cost than gradient descent. Accelerated methods in the stochastic variance reduction framework achieve even faster convergence rates, requiring only formula_13 steps to reach formula_14 accuracy, potentially formula_15 faster than non-accelerated methods. Lower complexity bounds. Lower complexity bounds for the finite sum class establish that this accelerated rate is the fastest possible for smooth strongly convex problems. Approaches. Variance reduction approaches fall into three main categories: table averaging methods, full-gradient snapshot methods and dual methods. Each category contains methods designed for dealing with convex, non-smooth, and non-convex problems, each differing in hyper-parameter settings and other algorithmic details. SAGA. In the SAGA method, the prototypical table averaging approach, a table of size formula_3 is maintained that contains the last gradient witnessed for each formula_2 term, which we denote formula_16. At each step, an index formula_17 is sampled, and a new gradient formula_18 is computed. The iterate formula_19 is updated with: formula_20 and afterwards table entry formula_17 is updated with formula_21. SAGA is among the most popular of the variance reduction methods due to its simplicity, easily adaptable theory, and excellent performance.
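To make the SAGA update above concrete, the following is a minimal NumPy sketch. It is an illustration rather than a reference implementation: the names saga and grad_i, the step size gamma, and the least-squares usage at the end are all assumptions introduced here for the example; grad_i(i, x) is taken to return the gradient of the individual term formula_2 at the point x.

import numpy as np

def saga(grad_i, x0, n, gamma, iters, rng=None):
    """Minimal SAGA sketch for f(x) = (1/n) * sum_i f_i(x)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    # Table of the last gradient seen for each term (the g_i of the text above).
    table = np.array([grad_i(i, x) for i in range(n)])
    table_avg = table.mean(axis=0)  # (1/n) * sum_i g_i
    for _ in range(iters):
        i = rng.integers(n)               # sample an index uniformly at random
        g_new = grad_i(i, x)              # fresh stochastic gradient at the current iterate
        # SAGA step: an unbiased, variance-reduced gradient estimate.
        x -= gamma * (g_new - table[i] + table_avg)
        # Update the table entry and its running average in O(d) time.
        table_avg += (g_new - table[i]) / n
        table[i] = g_new
    return x

# Hypothetical usage: least squares with f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
x_hat = saga(lambda i, x: (A[i] @ x - b[i]) * A[i],
             x0=np.zeros(5), n=50, gamma=0.05, iters=5000, rng=rng)

For linear models such as logistic regression the stored gradients are scalar multiples of the data points, so a practical implementation would keep only those scalars, reducing the table's memory footprint from O(nd) to O(n).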
SAGA is the successor of the SAG method, improving on its flexibility and performance. SVRG. The stochastic variance reduced gradient method (SVRG), the prototypical snapshot method, uses a similar update, except that instead of the average of a table it uses a full gradient that is reevaluated at a snapshot point formula_22 at regular intervals of formula_23 iterations. The update becomes: formula_24 This approach requires two stochastic gradient evaluations per step, one to compute formula_18 and one to compute formula_25, whereas table averaging approaches need only one. Despite the high computational cost, SVRG is popular as its simple convergence theory is highly adaptable to new optimization settings. It also has lower storage requirements than tabular averaging approaches, which makes it applicable in many settings where tabular methods cannot be used. SDCA. Exploiting the dual representation of the objective leads to another variance reduction approach that is particularly suited to finite sums where each term has a structure that makes computing the convex conjugate formula_26 or its proximal operator tractable. The standard SDCA method considers finite sums that have additional structure compared to the generic finite sum setting: formula_27 where each formula_2 is 1-dimensional and each formula_28 is a data point associated with formula_2. SDCA solves the dual problem: formula_29 by a stochastic coordinate ascent procedure, where at each step the objective is optimized with respect to a randomly chosen coordinate formula_30, leaving all other coordinates the same. An approximate primal solution formula_31 can be recovered from the formula_32 values: formula_33. This method obtains similar theoretical rates of convergence to other stochastic variance reduced methods, while avoiding the need to specify a step-size parameter. It is fast in practice when formula_34 is large, but significantly slower than the other approaches when formula_34 is small. Accelerated approaches. Accelerated variance reduction methods are built upon the standard methods above. The earliest approaches make use of proximal operators to accelerate convergence, either approximately or exactly. Direct acceleration approaches have also been developed. Catalyst acceleration. The catalyst framework uses any of the standard methods above as an inner optimizer to approximately solve a proximal operator: formula_35 after which it uses an extrapolation step to determine the next formula_36: formula_37 The catalyst method's flexibility and simplicity make it a popular baseline approach, although it does not achieve the optimal rate of convergence among accelerated methods: it is potentially slower by a factor logarithmic in the hyper-parameters. Point-SAGA. Proximal operations may also be applied directly to the formula_2 terms to yield an accelerated method. The Point-SAGA method replaces the gradient operations in SAGA with proximal operator evaluations, resulting in a simple, direct acceleration method: formula_38 with the table update formula_39 performed after each step. Here formula_40 is defined as the proximal operator for the formula_41th term: formula_42 Unlike other known accelerated methods, Point-SAGA requires only a single iterate sequence formula_31 to be maintained between steps, and it has the advantage of only having a single tunable parameter formula_43. It obtains the optimal accelerated rate of convergence for strongly convex finite-sum minimization without additional log factors. References.
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "f(x) = \\frac{1}{n}\\sum_{i=1}^n f_i(x)," }, { "math_id": 2, "text": "f_i" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": " f(\\theta) = \\operatorname E_{\\xi} [F(\\theta,\\xi)] " }, { "math_id": 5, "text": "\\xi " }, { "math_id": 6, "text": "F(\\cdot,\\xi)=f_\\xi" }, { "math_id": 7, "text": "\\epsilon>" }, { "math_id": 8, "text": "f(x)-f(x_*)\\leq \\epsilon" }, { "math_id": 9, "text": "\nO \\left( \\left( \\frac{L}{\\mu} + n \\right)\\log \\left( \\frac{1}{\\epsilon} \\right)\\right).\n" }, { "math_id": 10, "text": "\nO\\bigl( L/(\\mu \\epsilon)\\bigr)" }, { "math_id": 11, "text": "O\\bigl( (L/\\mu)\\log(1/\\epsilon) \\bigr)" }, { "math_id": 12, "text": "1/n" }, { "math_id": 13, "text": "\nO \\left( \\left( \\sqrt{\\frac{nL}{\\mu}} + n \\right)\\log \\left( \\frac{1}{\\epsilon} \\right)\\right)\n" }, { "math_id": 14, "text": "\\epsilon" }, { "math_id": 15, "text": "\\sqrt{n}" }, { "math_id": 16, "text": "g_i" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "\\nabla f_i(x_k)" }, { "math_id": 19, "text": "x_k" }, { "math_id": 20, "text": " x_{k+1} = x_k - \\gamma \\left[ \\nabla f_i(x_k) - g_i + \\frac{1}{n}\\sum_{i=1}^n g_i \\right], " }, { "math_id": 21, "text": "g_i=\\nabla f_i(x_k)" }, { "math_id": 22, "text": "\\tilde{x}" }, { "math_id": 23, "text": "m\\geq n" }, { "math_id": 24, "text": " x_{k+1} = x_k - \\gamma [\\nabla f_i(x_k) - \\nabla f_i(\\tilde{x}) + \\nabla f(\\tilde{x})], " }, { "math_id": 25, "text": "\\nabla f_i(\\tilde{x})]," }, { "math_id": 26, "text": "f_{i}^{*}," }, { "math_id": 27, "text": "f(x) = \\frac{1}{n}\\sum_{i=1}^n f_i(x^Tv_i) + \\frac{\\lambda}{2}\\|x\\|^2," }, { "math_id": 28, "text": "v_i" }, { "math_id": 29, "text": "\n\\max_{\\alpha \\in \\mathbb{R}^n} \n-\\frac{1}{n}\\sum_{i=1}^n f_{i}^{*}(-\\alpha_i)-\\frac{\\lambda}{2}\n\\left \\|\n\\frac{1}{\\lambda n}\\sum_{i=1}^n \\alpha_i v_i\n\\right \\|^2,\n" }, { "math_id": 30, "text": "\\alpha_i" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "\\alpha" }, { "math_id": 33, "text": "\nx = \\frac{1}{\\lambda n}\\sum_{i=1}^n \\alpha_i v_i" }, { "math_id": 34, "text": "\\lambda" }, { "math_id": 35, "text": "\nx_k \\approx \\text{argmin}_x \\left \\{ \n f(x) + \\frac{\\kappa}{2} \\| x - y_{k-1} \\|^2\n\\right \\}\n" }, { "math_id": 36, "text": " y" }, { "math_id": 37, "text": "\ny_k = x_k +\\beta_k (x_k - x_{k-1})\n" }, { "math_id": 38, "text": "\nx_{k+1} = \\text{prox}^\\gamma_j\\left(z_k \\triangleq x_k +\\gamma \\left[ g_{j} - \\frac{1}{n} \\sum_{i=1}^n g_i \\right] \\right)," }, { "math_id": 39, "text": "g_j = \\frac{1}{\\gamma}(z_k - x_{k+1})" }, { "math_id": 40, "text": "\\text{prox}^\\gamma_j" }, { "math_id": 41, "text": "j" }, { "math_id": 42, "text": "\n\\text{prox}^\\gamma_j(y) = \\text{argmin}_x \\left \\{ \n f_j(x) + \\frac{1}{2\\gamma} \\| x - y \\|^2\n\\right \\}.\n" }, { "math_id": 43, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=70177408
7018181
Anonymous function
Function definition that is not bound to an identifier In computer programming, an anonymous function (function literal, expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church in his invention of the lambda calculus, in which all functions are anonymous, in 1936, before electronic computers. In several programming languages, anonymous functions are introduced using the keyword "lambda", and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Names. The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function "f"("x") = "M" would be written (λ"x"."M"), and where M is an expression that uses x. Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, "x" ↦ "M". Compare to the JavaScript syntax of x =&gt; M. Uses. Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by name. Anonymous functions often provide a briefer notation than defining named functions. In languages that do not permit the definition of named functions in local scopes, anonymous functions may provide encapsulation via localized scope, however the code in the body of such anonymous function may not be re-usable, or amenable to separate testing. Short/simple anonymous functions used in expressions may be easier to read and understand than separately defined named functions, though without a descriptive name they may be more difficult to understand. In some programming languages, anonymous functions are commonly implemented for very specific purposes such as binding events to callbacks or instantiating the function for particular values, which may be more efficient in a Dynamic programming language, more readable, and less error-prone than calling a named function. The following examples are written in Python 3. Sorting. When attempting to sort in a non-standard way, it may be easier to contain the sorting logic as an anonymous function instead of creating a named function. Most languages provide a generic sort function that implements a sort algorithm that will sort arbitrary objects. This function usually accepts an arbitrary function that determines how to compare whether two elements are equal or if one is greater or less than the other. 
Consider this Python code sorting a list of strings by length of the string: »&gt; a = ['house', 'car', 'bike'] »&gt; a.sort(key=lambda x: len(x)) »&gt; a ['car', 'bike', 'house'] The anonymous function in this example is the lambda expression: lambda x: len(x) The anonymous function accepts one argument, codice_0, and returns the length of its argument, which is then used by the codice_1 method as the criteria for sorting. Basic syntax of a lambda function in Python is lambda arg1, arg2, arg3, ...: &lt;operation on the arguments returning a value&gt; The expression returned by the lambda function can be assigned to a variable and used in the code at multiple places. »&gt; add = lambda a: a + a »&gt; add(20) 40 Another example would be sorting items in a list by the name of their class (in Python, everything has a class): »&gt; a = [10, 'number', 11.2] »&gt; a.sort(key=lambda x: x.__class__.__name__) »&gt; a [11.2, 10, 'number'] Note that codice_2 has class name "codice_3", codice_4 has class name "codice_5", and codice_6 has class name "codice_7". The sorted order is "codice_3", "codice_5", then "codice_7". Closures. Closures are functions evaluated in an environment containing bound variables. The following example binds the variable "threshold" in an anonymous function that compares the input to the threshold. def comp(threshold): return lambda x: x &lt; threshold This can be used as a sort of generator of comparison functions: »&gt; func_a = comp(10) »&gt; func_b = comp(20) »&gt; print(func_a(5), func_a(8), func_a(13), func_a(21)) True True False False »&gt; print(func_b(5), func_b(8), func_b(13), func_b(21)) True True True False It would be impractical to create a function for every possible comparison function and may be too inconvenient to keep the threshold around for further use. Regardless of the reason why a closure is used, the anonymous function is the entity that contains the functionality that does the comparing. Currying. Currying is the process of changing a function so that rather than taking multiple inputs, it takes a single input and returns a function which accepts the second input, and so forth. In this example, a function that performs division by any integer is transformed into one that performs division by a set integer. »&gt; def divide(x, y): ... return x / y »&gt; def divisor(d): ... return lambda x: divide(x, d) »&gt; half = divisor(2) »&gt; third = divisor(3) »&gt; print(half(32), third(32)) 16.0 10.666666666666666 »&gt; print(half(40), third(40)) 20.0 13.333333333333334 While the use of anonymous functions is perhaps not common with currying, it still can be used. In the above example, the function divisor generates functions with a specified divisor. The functions half and third curry the divide function with a fixed divisor. The divisor function also forms a closure by binding the variable codice_11. Higher-order functions. A higher-order function is a function that takes a function as an argument or returns one as a result. This is commonly used to customize the behavior of a generically defined function, often a looping construct or recursion scheme. Anonymous functions are a convenient way to specify such function arguments. The following examples are in Python 3. Map. The map function performs a function call on each element of a list. The following example squares every element in an array with an anonymous function. 
»&gt; a = [1, 2, 3, 4, 5, 6] »&gt; list(map(lambda x: x*x, a)) [1, 4, 9, 16, 25, 36] The anonymous function accepts an argument and multiplies it by itself (squares it). The above form is discouraged by the creators of the language, who maintain that the form presented below has the same meaning and is more aligned with the philosophy of the language: »&gt; a = [1, 2, 3, 4, 5, 6] »&gt; [x*x for x in a] [1, 4, 9, 16, 25, 36] Filter. The filter function returns all elements from a list that evaluate True when passed to a certain function. »&gt; a = [1, 2, 3, 4, 5, 6] »&gt; list(filter(lambda x: x % 2 == 0, a)) [2, 4, 6] The anonymous function checks if the argument passed to it is even. The same as with map, the form below is considered more appropriate: »&gt; a = [1, 2, 3, 4, 5, 6] »&gt; [x for x in a if x % 2 == 0] [2, 4, 6] Fold. A fold function runs over all elements in a structure (for lists usually left-to-right, a "left fold", called codice_12 in Python), accumulating a value as it goes. This can be used to combine all elements of a structure into one value, for example: »&gt; from functools import reduce »&gt; a = [1, 2, 3, 4, 5] »&gt; reduce(lambda x,y: x*y, a) 120 This performs formula_0 The anonymous function here is the multiplication of the two arguments. The result of a fold need not be one value. Instead, both map and filter can be created using fold. In map, the value that is accumulated is a new list, containing the results of applying a function to each element of the original list. In filter, the value that is accumulated is a new list containing only those elements that match the given condition. List of languages. The following is a list of programming languages that support unnamed anonymous functions fully, or partly as some variant, or not at all. This table shows some general trends. First, the languages that do not support anonymous functions (C, Pascal, Object Pascal) are all statically typed languages. However, statically typed languages can support anonymous functions. For example, the ML languages are statically typed and fundamentally include anonymous functions, and Delphi, a dialect of Object Pascal, has been extended to support anonymous functions, as has C++ (by the C++11 standard). Second, the languages that treat functions as first-class functions (Dylan, Haskell, JavaScript, Lisp, ML, Perl, Python, Ruby, Scheme) generally have anonymous function support so that functions can be defined and passed around as easily as other data types. Examples. Numerous languages support anonymous functions, or something similar. APL. Only some dialects support anonymous functions, either as dfns, in the tacit style or a combination of both. f←{⍵×⍵} As a dfn f 1 2 3 1 4 9 g←⊢×⊢ As a tacit 3-train (fork) g 1 2 3 1 4 9 h←×⍨ As a derived tacit function h 1 2 3 1 4 9 C (non-standard extension). The anonymous function is not supported by standard C programming language, but supported by some C dialects, such as "GCC" and "Clang". GCC. The GNU Compiler Collection (GCC) supports anonymous functions, mixed by nested functions and statement expressions. It has the form: The following example works only with GCC. Because of how macros are expanded, the codice_13 cannot contain any commas outside of parentheses; GCC treats the comma as a delimiter between macro arguments. The argument codice_14 can be removed if codice_15 is available; in the example below using codice_15 on array would return codice_17, which can be dereferenced for the actual value if needed. 
//* this is the definition of the anonymous function */ l_ret_type l_anonymous_functions_name l_arguments \ l_body \ &amp;l_anonymous_functions_name; \ int i=0; \ for(;i&lt;sizeof(fe_arr)/sizeof(fe_arrType);i++) { fe_arr[i] = fe_fn_body(&amp;fe_arr[i]); } \ typedef struct int a; int b; } testtype; void printout(const testtype * array) int i; for ( i = 0; i &lt; 3; ++ i ) printf("%d %d\n", array[i].a, array[i].b); printf("\n"); int main(void) testtype array[] = { {0,1}, {2,3}, {4,5} }; printout(array); /* the anonymous function is given as function for the foreach */ forEachInArray(testtype, array, lambda (testtype, (void *item), int temp = (*( testtype *) item).a; (*( testtype *) item).a = (*( testtype *) item).b; (*( testtype *) item).b = temp; return (*( testtype *) item); printout(array); return 0; Clang (C, C++, Objective-C, Objective-C++). Clang supports anonymous functions, called blocks, which have the form: The type of the blocks above is codice_18. Using the aforementioned "blocks" extension and Grand Central Dispatch (libdispatch), the code could look simpler: int main(void) { void (^count_loop)() = ^{ for (int i = 0; i &lt; 100; i++) printf("%d\n", i); printf("ah ah ah\n"); /* Pass as a parameter to another function */ dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), count_loop); /* Invoke directly */ count_loop(); return 0; The code with blocks should be compiled with codice_19 and linked with codice_20 C++ (since C++11). C++11 supports anonymous functions (technically function objects), called "lambda expressions", which have the form: where "codice_21" is of the form "codice_22 in that order; each of these components is optional". If it is absent, the return type is deduced from codice_23 statements as if for a function with declared return type codice_24. This is an example lambda expression: C++11 also supports closures, here called captures. Captures are defined between square brackets codice_25and codice_26 in the declaration of lambda expression. The mechanism allows these variables to be captured by value or by reference. The following table demonstrates this: [] // No captures, the lambda is implicitly convertible to a function pointer. [x, &amp;y] // x is captured by value and y is captured by reference. [&amp;] // Any external variable is implicitly captured by reference if used [=] // Any external variable is implicitly captured by value if used. [&amp;, x] // x is captured by value. Other variables will be captured by reference. [=, &amp;z] // z is captured by reference. Other variables will be captured by value. Variables captured by value are constant by default. Adding codice_27 after the parameter list makes them non-constant. C++14 and newer versions support init-capture, for example: std::unique_ptr&lt;int&gt; ptr = std::make_unique&lt;int&gt;(42); [ptr]{ /* ... */ }; // copy assignment is deleted for a unique pointer [ptr = std::move(ptr)]{ /* ... */ }; // ok auto counter = [i = 0]() mutable { return i++; }; // mutable is required to modify 'i' counter(); // 0 counter(); // 1 counter(); // 2 The following two examples demonstrate use of a lambda expression: std::vector&lt;int&gt; some_list{ 1, 2, 3, 4, 5 }; int total = 0; std::for_each(begin(some_list), end(some_list), [&amp;total](int x) { total += x; }); // Note that std::accumulate would be a way better alternative here... This computes the total of all elements in the list. The variable codice_28 is stored as a part of the lambda function's closure. 
Since it is a reference to the stack variable codice_28, it can change its value. std::vector&lt;int&gt; some_list{ 1, 2, 3, 4, 5 }; int total = 0; int value = 5; std::for_each(begin(some_list), end(some_list), [&amp;total, value, this](int x) { total += x * value * this-&gt;some_func(); }); This will cause codice_28 to be stored as a reference, but codice_31 will be stored as a copy. The capture of codice_32 is special. It can only be captured by value, not by reference. However in C++17, the current object can be captured by value (denoted by codice_33), or can be captured by reference (denoted by codice_32). codice_32 can only be captured if the closest enclosing function is a non-static member function. The lambda will have the same access as the member that created it, in terms of protected/private members. If codice_32 is captured, either explicitly or implicitly, then the scope of the enclosed class members is also tested. Accessing members of codice_32 does not need explicit use of codice_38 syntax. The specific internal implementation can vary, but the expectation is that a lambda function that captures everything by reference will store the actual stack pointer of the function it is created in, rather than individual references to stack variables. However, because most lambda functions are small and local in scope, they are likely candidates for inlining, and thus need no added storage for references. If a closure object containing references to local variables is invoked after the innermost block scope of its creation, the behaviour is undefined. Lambda functions are function objects of an implementation-dependent type; this type's name is only available to the compiler. If the user wishes to take a lambda function as a parameter, the parameter type must be a template type, or they must create a codice_39 or a similar object to capture the lambda value. The use of the codice_24 keyword can help store the lambda function, auto my_lambda_func = [&amp;](int x) { /*...*/ }; auto my_onheap_lambda_func = new auto([=](int x) { /*...*/ }); Here is an example of storing anonymous functions in variables, vectors, and arrays; and passing them as named parameters: double eval(std::function&lt;double(double)&gt; f, double x = 2.0) { return f(x); int main() { std::function&lt;double(double)&gt; f0 = [](double x) { return 1; }; auto f1 = [](double x) { return x; }; decltype(f0) fa[3] = {f0, f1, [](double x) { return x * x; }}; std::vector&lt;decltype(f0)&gt; fv = {f0, f1}; fv.push_back([](double x) { return x * x; }); for (size_t i = 0; i &lt; fv.size(); i++) { std::cout « fv[i](2.0) « std::endl; for (size_t i = 0; i &lt; 3; i++) { std::cout « fa[i](2.0) « std::endl; for (auto&amp; f : fv) { std::cout « f(2.0) « std::endl; for (auto&amp; f : fa) { std::cout « f(2.0) « std::endl; std::cout « eval(f0) « std::endl; std::cout « eval(f1) « std::endl; std::cout « eval([](double x) { return x * x; }) « std::endl; A lambda expression with an empty capture specification (codice_41) can be implicitly converted into a function pointer with the same type as the lambda was declared with. So this is legal: auto a_lambda_func = [](int x) -&gt; void { /*...*/ }; void (* func_ptr)(int) = a_lambda_func; func_ptr(4); //calls the lambda. Since C++17, a lambda can be declared codice_42, and since C++20, codice_43 with the usual semantics. These specifiers go after the parameter list, like codice_27. Starting from C++23, the lambda can also be codice_45 if it has no captures. 
The codice_45 and codice_27 specifiers are not allowed to be combined. Also since C++23 a lambda expression can be recursive through explicit codice_32 as first parameter: auto fibonacci = [](this auto self, int n) { return n &lt;= 1 ? n : self(n - 1) + self(n - 2); }; fibonacci(7); // 13 In addition to that, C++23 modified the syntax so that the parentheses can be omitted in the case of a lambda that takes no arguments even if the lambda has a specifier. It also made it so that an attribute specifier sequence that appears before the parameter list, lambda specifiers, or noexcept specifier (there must be one of them) applies to the function call operator or operator template of the closure type. Otherwise, it applies to the type of the function call operator or operator template. Previously, such a sequence always applied to the type of the function call operator or operator template of the closure type making e.g the codice_49 attribute impossible to use with lambdas. The Boost library provides its own syntax for lambda functions as well, using the following syntax: for_each(a.begin(), a.end(), std::cout « _1 « ' '); Since C++14, the function parameters of a lambda can be declared with codice_24. The resulting lambda is called a "generic lambda" and is essentially an anonymous function template since the rules for type deduction of the auto parameters are the rules of template argument deduction. As of C++20, template parameters can also be declared explicitly with the following syntax: C#. In C#, support for anonymous functions has deepened through the various versions of the language compiler. The language v3.0, released in November 2007 with .NET Framework v3.5, has full support of anonymous functions. C# names them "lambda expressions", following the original version of anonymous functions, the lambda calculus. "// the first int is the x' type" "// the second int is the return type" "// &lt;see href="http://msdn.microsoft.com/en-us/library/bb549151.aspx" /&gt;" Func&lt;int,int&gt; foo = x =&gt; x * x; Console.WriteLine(foo(7)); While the function is anonymous, it cannot be assigned to an implicitly typed variable, because the lambda syntax may be used for denoting an anonymous function or an expression tree, and the choice cannot automatically be decided by the compiler. E.g., this does not work: // will NOT compile! var foo = (int x) =&gt; x * x; However, a lambda expression can take part in type inference and can be used as a method argument, e.g. to use anonymous functions with the Map capability available with codice_51 (in the codice_52 method): // Initialize the list: var values = new List&lt;int&gt;() { 7, 13, 4, 9, 3 }; // Map the anonymous function over all elements in the list, return the new list var foo = values.ConvertAll(d =&gt; d * d) ; // the result of the foo variable is of type System.Collections.Generic.List&lt;Int32&gt; Prior versions of C# had more limited support for anonymous functions. C# v1.0, introduced in February 2002 with the .NET Framework v1.0, provided partial anonymous function support through the use of delegates. C# names them "lambda expressions", following the original version of anonymous functions, the lambda calculus. This construct is somewhat similar to PHP delegates. In C# 1.0, delegates are like function pointers that refer to an explicitly named method within a class. (But unlike PHP, the name is unneeded at the time the delegate is used.) 
C# v2.0, released in November 2005 with the .NET Framework v2.0, introduced the concept of anonymous methods as a way to write unnamed inline statement blocks that can be executed in a delegate invocation. C# 3.0 continues to support these constructs, but also supports the lambda expression construct. This example will compile in C# 3.0, and exhibits the three forms: public class TestDriver delegate int SquareDelegate(int d); static int Square(int d) return d * d; static void Main(string[] args) // C# 1.0: Original delegate syntax needed // initializing with a named method. SquareDelegate A = new SquareDelegate(Square); System.Console.WriteLine(A(3)); // C# 2.0: A delegate can be initialized with // inline code, called an "anonymous method". This // method takes an int as an input parameter. SquareDelegate B = delegate(int d) { return d * d; }; System.Console.WriteLine(B(5)); // C# 3.0. A delegate can be initialized with // a lambda expression. The lambda takes an int, and returns an int. // The type of x is inferred by the compiler. SquareDelegate C = x =&gt; x * x; System.Console.WriteLine(C(7)); // C# 3.0. A delegate that accepts one input and // returns one output can also be implicitly declared with the Func&lt;&gt; type. System.Func&lt;int,int&gt; D = x =&gt; x * x; System.Console.WriteLine(D(9)); In the case of the C# 2.0 version, the C# compiler takes the code block of the anonymous function and creates a static private function. Internally, the function gets a generated name, of course; this generated name is based on the name of the method in which the Delegate is declared. But the name is not exposed to application code except by using reflection. In the case of the C# 3.0 version, the same mechanism applies. ColdFusion Markup Language (CFML). Using the &lt;samp style="padding-left:0.4em; padding-right:0.4em; color:var( --color-subtle, #666666); " &gt;function&lt;/samp&gt; keyword: fn = function(){ // statements Or using an arrow function: fn = () =&gt; { // statements fn = () =&gt; singleExpression // singleExpression is implicitly returned. There is no need for the braces or the return keyword fn = singleParam =&gt; { // if the arrow function has only one parameter, there's no need for parentheses // statements fn = (x, y) =&gt; { // if the arrow function has zero or multiple parameters, one needs to use parentheses // statements CFML supports any statements within the function's definition, not simply expressions. CFML supports recursive anonymous functions: factorial = function(n){ return n &gt; 1 ? n * factorial(n-1) : 1; CFML anonymous functions implement closure. D. D uses inline delegates to implement anonymous functions. The full syntax for an inline delegate is If unambiguous, the return type and the keyword "delegate" can be omitted. delegate (x){return x*x;} // if more verbosity is needed (int x){return x*x;} // if parameter type cannot be inferred delegate (int x){return x*x;} // ditto delegate double(int x){return x*x;} // if return type must be forced manually Since version 2.0, D allocates closures on the heap unless the compiler can prove it is unnecessary; the codice_53 keyword can be used for forcing stack allocation. Since version 2.058, it is possible to use shorthand notation: x =&gt; x*x; (int x) =&gt; x*x; (x,y) =&gt; x*y; (int x, int y) =&gt; x*y; An anonymous function can be assigned to a variable and used like this: auto sqr = (double x){return x*x;}; double y = sqr(4); Dart. Dart supports anonymous functions. 
var sqr = (x) =&gt; x * x; print(sqr(5)); or print(((x) =&gt; x * x)(5)); Delphi. Delphi introduced anonymous functions in version 2009. program demo; type TSimpleProcedure = reference to procedure; TSimpleFunction = reference to function(const x: string): Integer; var x1: TSimpleProcedure; y1: TSimpleFunction; begin x1 := procedure begin Writeln('Hello World'); end; x1; //invoke anonymous method just defined y1 := function(const x: string): Integer begin Result := Length(x); end; Writeln(y1('bar')); end. PascalABC.NET. PascalABC.NET supports anonymous functions using lambda syntax begin var n := 10000000; var pp := (1..n) .Select(x -&gt; (Random, Random)) .Where(p -&gt; Sqr(p[0]) + Sqr(p[1]) &lt; 1) .Count / n * 4; Print(pp); end. Elixir. Elixir uses the closure codice_54 for anonymous functions. sum = fn(a, b) -&gt; a + b end sum.(4, 3) square = fn(x) -&gt; x * x end Enum.map [1, 2, 3, 4], square Erlang. Erlang uses a syntax for anonymous functions similar to that of named functions. % Anonymous function bound to the Square variable Square = fun(X) -&gt; X * X end. % Named function with the same functionality square(X) -&gt; X * X. Go. Go supports anonymous functions. foo := func(x int) int { return x * x fmt.Println(foo(10)) Haskell. Haskell uses a concise syntax for anonymous functions (lambda expressions). The backslash is supposed to resemble λ. \x -&gt; x * x Lambda expressions are fully integrated with the type inference engine, and support all the syntax and features of "ordinary" functions (except for the use of multiple definitions for pattern-matching, since the argument list is only specified once). map (\x -&gt; x * x) [1..5] -- returns [1, 4, 9, 16, 25] The following are all equivalent: f x y = x + y f x = \y -&gt; x + y f = \x y -&gt; x + y Haxe. In Haxe, anonymous functions are called lambda, and use the syntax codice_55 . var f = function(x) return x*x; f(8); // 64 (function(x,y) return x+y)(5,6); // 11 Java. Java supports anonymous functions, named "Lambda Expressions", starting with JDK 8. A lambda expression consists of a comma separated list of the formal parameters enclosed in parentheses, an arrow token (codice_56), and a body. Data types of the parameters can always be omitted, as can the parentheses if there is only one parameter. The body can consist of one statement or a statement block. // with no parameter // with one parameter (this example is an identity function). a -&gt; a // with one expression (a, b) -&gt; a + b // with explicit type information (long id, String name) -&gt; "id: " + id + ", name:" + name // with a code block // with multiple statements in the lambda body. It needs a code block. // This example also includes two nested lambda expressions (the first one is also a closure). (id, defaultPrice) -&gt; { Optional&lt;Product&gt; product = productList.stream().filter(p -&gt; p.getId() == id).findFirst(); return product.map(p -&gt; p.getPrice()).orElse(defaultPrice); Lambda expressions are converted to "functional interfaces" (defined as interfaces that contain only one abstract method in addition to one or more default or static methods), as in the following example: public class Calculator { interface IntegerMath { int operation(int a, int b); default IntegerMath swap() { return (a, b) -&gt; operation(b, a); private static int apply(int a, int b, IntegerMath op) { return op.operation(a, b); public static void main(String... 
args) { IntegerMath addition = (a, b) -&gt; a + b; IntegerMath subtraction = (a, b) -&gt; a - b; System.out.println("40 + 2 = " + apply(40, 2, addition)); System.out.println("20 - 10 = " + apply(20, 10, subtraction)); System.out.println("10 - 20 = " + apply(20, 10, subtraction.swap())); In this example, a functional interface called codice_57 is declared. Lambda expressions that implement codice_57 are passed to the codice_59 method to be executed. Default methods like codice_60 define methods on functions. Java 8 introduced another mechanism named method reference (the codice_61 operator) to create a lambda on an existing method. A method reference does not indicate the number or types of arguments because those are extracted from the abstract method of the functional interface. IntBinaryOperator sum = Integer::sum; In the example above, the functional interface codice_62 declares an abstract method codice_63, so the compiler looks for a method codice_64 in the class codice_65. Differences compared to Anonymous Classes. Anonymous classes of lambda-compatible interfaces are similar, but not exactly equivalent, to lambda expressions. To illustrate, in the following example, and are both instances of that add their two parameters: IntegerMath anonymousClass = new IntegerMath() { @Override public int operation(int a, int b) { return a + b; }; IntegerMath lambdaExpression = (a, b) -&gt; a + b; The main difference here is that the lambda expression does not necessarily need to allocate a new instance for the , and can return the same instance every time this code is run. Additionally, in the OpenJDK implementation at least, lambdas are compiled to invokedynamic instructions, with the lambda body inserted as a static method into the surrounding class, rather than generating a new class file entirely. Java limitations. Java 8 lambdas have the following limitations: JavaScript. JavaScript/ECMAScript supports anonymous functions. alert((function(x){ return x * x; })(10)); ES6 supports "arrow function" syntax, where a =&gt; symbol separates the anonymous function's parameter list from the body: alert((x =&gt; x * x)(10)); This construct is often used in Bookmarklets. For example, to change the title of the current document (visible in its window's title bar) to its URL, the following bookmarklet may seem to work. document.title=location.href; However, as the assignment statement returns a value (the URL itself), many browsers actually create a new page to display this value. Instead, an anonymous function, that does not return a value, can be used: (function(){document.title=location.href;})(); The function statement in the first (outer) pair of parentheses declares an anonymous function, which is then executed when used with the last pair of parentheses. This is almost equivalent to the following, which populates the environment with codice_66 unlike an anonymous function. var f = function(){document.title=location.href;}; f(); Use void() to avoid new pages for arbitrary anonymous functions: void(function(){return document.title=location.href;}()); or just: void(document.title=location.href); JavaScript has syntactic subtleties for the semantics of defining, invoking and evaluating anonymous functions. These subliminal nuances are a direct consequence of the evaluation of parenthetical expressions. The following constructs which are called immediately-invoked function expression illustrate this: (function(){ ... 
}()) and Representing "codice_67" by codice_66, the form of the constructs are a parenthetical within a parenthetical codice_69 and a parenthetical applied to a parenthetical codice_70. Note the general syntactic ambiguity of a parenthetical expression, parenthesized arguments to a function and the parentheses around the formal parameters in a function definition. In particular, JavaScript defines a codice_71 (comma) operator in the context of a parenthetical expression. It is no mere coincidence that the syntactic forms coincide for an expression and a function's arguments (ignoring the function formal parameter syntax)! If codice_66 is not identified in the constructs above, they become codice_73 and codice_74. The first provides no syntactic hint of any resident function but the second MUST evaluate the first parenthetical as a function to be legal JavaScript. (Aside: for instance, the codice_75's could be ([],{},42,"abc",function(){}) as long as the expression evaluates to a function.) Also, a function is an Object instance (likewise objects are Function instances) and the object literal notation brackets, codice_76 for braced code, are used when defining a function this way (as opposed to using codice_77). In a very broad non-rigorous sense (especially since global bindings are compromised), an arbitrary sequence of braced JavaScript statements, codice_78, can be considered to be a fixed point of More correctly but with caveats, ( function(){stuff}() ) ~= A_Fixed_Point_of( function(){ return function(){ return ... { return function(){stuff}() } ... }() }() Note the implications of the anonymous function in the JavaScript fragments that follow: Performance metrics to analyze the space and time complexities of function calls, call stack, etc. in a JavaScript interpreter engine implement easily with these last anonymous function constructs. From the implications of the results, it is possible to deduce some of an engine's recursive versus iterative implementation details, especially tail-recursion. Julia. In Julia anonymous functions are defined using the syntax codice_84, julia&gt; f = x -&gt; x*x; f(8) 64 julia&gt; ((x,y)-&gt;x+y)(5,6) 11 Kotlin. Kotlin supports anonymous functions with the syntax codice_85, sum(5,6) // returns 11 even(4) // returns true Lisp. Lisp and Scheme support anonymous functions using the "lambda" construct, which is a reference to lambda calculus. Clojure supports anonymous functions with the "fn" special form and #() reader syntax. Common Lisp. Common Lisp has the concept of lambda expressions. A lambda expression is written as a list with the symbol "lambda" as its first element. The list then contains the argument list, documentation or declarations and a function body. Lambda expressions can be used inside lambda forms and with the special operator "function". "function" can be abbreviated as #'. Also, macro "lambda" exists, which expands into a function form: One typical use of anonymous functions in Common Lisp is to pass them to higher-order functions like "mapcar", which applies a function to each element of a list and returns a list of the results. '(1 2 3 4)) The "lambda form" in Common Lisp allows a "lambda expression" to be written in a function call: (+ (sqrt x) (sqrt y))) 10.0 12.0) Anonymous functions in Common Lisp can also later be given global names: (lambda (x) (* x x))) Scheme. 
Scheme's "named functions" is simply syntactic sugar for anonymous functions bound to names: (do-something arg)) expands (and is equivalent) to (define somename (lambda (arg) (do-something arg))) Clojure. Clojure supports anonymous functions through the "fn" special form: There is also a reader syntax to define a lambda: Like Scheme, Clojure's "named functions" are simply syntactic sugar for lambdas bound to names: expands to: Lua. In Lua (much as in Scheme) all functions are anonymous. A "named function" in Lua is simply a variable holding a reference to a function object. Thus, in Lua function foo(x) return 2*x end is just syntactical sugar for foo = function(x) return 2*x end An example of using anonymous functions for reverse-order sorting: table.sort(network, function(a,b) return a.name &gt; b.name end) Wolfram Language, Mathematica. The Wolfram Language is the programming language of Mathematica. Anonymous functions are important in programming the latter. There are several ways to create them. Below are a few anonymous functions that increment a number. The first is the most common. codice_86 refers to the first argument and codice_87 marks the end of the anonymous function. #1+1&amp; Function[x,x+1] x \[Function] x+1 So, for instance: f:= #1^2&amp;;f[8] 64 #1+#2&amp;[5,6] 11 Also, Mathematica has an added construct to make recursive anonymous functions. The symbol '#0' refers to the entire function. The following function calculates the factorial of its input: If[#1 == 1, 1, #1 * #0[#1-1]]&amp; For example, 6 factorial would be: If[#1 == 1, 1, #1 * #0[#1-1]]&amp;[6] 720 MATLAB, Octave. Anonymous functions in MATLAB or Octave are defined using the syntax codice_88. Any variables that are not found in the argument list are inherited from the enclosing scope and are captured by value. » f = @(x)x*x; f(8) ans = 64 » (@(x,y)x+y)(5,6) % Only works in Octave ans = 11 Maxima. In Maxima anonymous functions are defined using the syntax codice_89, f: lambda([x],x*x); f(8); 64 lambda([x,y],x+y)(5,6); 11 ML. The various dialects of ML support anonymous functions. OCaml. Anonymous functions in OCaml are functions without a declared name. Here is an example of an anonymous function that multiplies its input by two: fun x -&gt; x*2 In the example, fun is a keyword indicating that the function is an anonymous function. We are passing in an argument x and -&gt; to separate the argument from the body. F#. F# supports anonymous functions, as follows: (fun x -&gt; x * x) 20 // 400 Standard ML. Standard ML supports anonymous functions, as follows: Nim. Nim supports multi-line multi-expression anonymous functions. var anon = proc (var1, var2: int): int = var1 + var2 assert anon(1, 2) == 3 Multi-line example: var anon = func (x: int): bool = if x &gt; 0: result = true else: result = false assert anon(9) Anonymous functions may be passed as input parameters of other functions: var cities = @["Frankfurt", "Tokyo", "New York"] cities.sort( proc (x, y: string): int = cmp(x.len, y.len) An anonymous function is basically a function without a name. Perl. Perl 5. Perl 5 supports anonymous functions, as follows: (sub { print "I got called\n" })-&gt;(); # 1. fully anonymous, called as created my $squarer = sub { my $x = shift; $x * $x }; # 2. assigned to a variable sub curry { my ($sub, @args) = @_; return sub { $sub-&gt;(@args, @_) }; # 3. 
as a return value of another function sub sum { my $tot = 0; $tot += $_ for @_; $tot } # returns the sum of its arguments my $curried = curry \&amp;sum, 5, 7, 9; print $curried-&gt;(1,2,3), "\n"; # prints 27 ( = 5 + 7 + 9 + 1 + 2 + 3 ) Other constructs take "bare blocks" as arguments, which serve a function similar to lambda functions of one parameter, but do not have the same parameter-passing convention as functions -- @_ is not set. my @squares = map { $_ * $_ } 1..10; # map and grep don't use the 'sub' keyword my @square2 = map $_ * $_, 1..10; # braces unneeded for one expression my @bad_example = map { print for @_ } 1..10; # values not passed like normal Perl function PHP. Before 4.0.1, PHP had no anonymous function support. PHP 4.0.1 to 5.3. PHP 4.0.1 introduced the codice_90 which was the initial anonymous function support. This function call makes a new randomly named function and returns its name (as a string) $foo = create_function('$x', 'return $x*$x;'); $bar = create_function("\$x", "return \$x*\$x;"); echo $foo(10); The argument list and function body must be in single quotes, or the dollar signs must be escaped. Otherwise, PHP assumes "codice_91" means the variable codice_91 and will substitute it into the string (despite possibly not existing) instead of leaving "codice_91" in the string. For functions with quotes or functions with many variables, it can get quite tedious to ensure the intended function body is what PHP interprets. Each invocation of codice_90 makes a new function, which exists for the rest of the program, and cannot be "garbage collected", using memory in the program irreversibly. If this is used to create anonymous functions many times, e.g., in a loop, it can cause problems such as memory bloat. PHP 5.3. PHP 5.3 added a new class called codice_95 and magic method codice_96 that makes a class instance invocable. $x = 3; $func = function($z) { return $z * 2; }; echo $func($x); // prints 6 In this example, codice_97 is an instance of codice_95 and codice_99 is equivalent to codice_100. PHP 5.3 mimics anonymous functions but it does not support true anonymous functions because PHP functions are still not first-class objects. PHP 5.3 does support closures but the variables must be explicitly indicated as such: $x = 3; $func = function() use(&amp;$x) { $x *= 2; }; $func(); echo $x; // prints 6 The variable codice_91 is bound by reference so the invocation of codice_97 modifies it and the changes are visible outside of the function. PHP 7.4. Arrow functions were introduced in PHP 7.4 $x = 3; $func = fn($z) =&gt; $z * 2; echo $func($x); // prints 6 Prolog's dialects. Logtalk. Logtalk uses the following syntax for anonymous predicates (lambda expressions): A simple example with no free variables and using a list mapping predicate is: Ys = [2,4,6] yes Currying is also supported. The above example can be written as: Ys = [2,4,6] yes Visual Prolog. Anonymous functions (in general anonymous "predicates") were introduced in Visual Prolog in version 7.2. Anonymous predicates can capture values from the context. If created in an object member, it can also access the object state (by capturing codice_103). codice_104 returns an anonymous function, which has captured the argument codice_105 in the closure. The returned function is a function that adds codice_105 to its argument: clauses mkAdder(X) = { (Y) = X+Y }. Python. Python supports simple anonymous functions through the lambda form. 
The executable body of the lambda must be an expression and can't be a statement, which is a restriction that limits its utility. The value returned by the lambda is the value of the contained expression. Lambda forms can be used anywhere ordinary functions can. However these restrictions make it a very limited version of a normal function. Here is an example: »&gt; foo = lambda x: x * x »&gt; foo(10) 100 In general, the Python convention encourages the use of named functions defined in the same scope as one might typically use an anonymous function in other languages. This is acceptable as locally defined functions implement the full power of closures and are almost as efficient as the use of a lambda in Python. In this example, the built-in power function can be said to have been curried: »&gt; def make_pow(n): ... def fixed_exponent_pow(x): ... return pow(x, n) ... return fixed_exponent_pow »&gt; sqr = make_pow(2) »&gt; sqr(10) 100 »&gt; cub = make_pow(3) »&gt; cub(10) 1000 R. In R the anonymous functions are defined using the syntax codice_107 , which has shorthand since version 4.1.0 codice_108, akin to Haskell. &gt; f &lt;- function(x)x*x; f(8) [1] 64 &gt; (function(x,y)x+y)(5,6) [1] 11 &gt; # Since R 4.1.0 &gt; (\(x,y) x+y)(5, 6) [1] 11 Raku. In Raku, all blocks (even those associated with if, while, etc.) are anonymous functions. A block that is not used as an rvalue is executed immediately. my $squarer1 = -&gt; $x { $x * $x }; # 2a. pointy block my $squarer2 = { $^x * $^x }; # 2b. twigil my $squarer3 = { my $x = shift @_; $x * $x }; # 2c. Perl 5 style my $seven = add(3, 4); my $add_one = &amp;add.assuming(m =&gt; 1); my $eight = $add_one($seven); my $w = * - 1; # WhateverCode object my $b = { $_ - 1 }; # same functionality, but as Callable block Ruby. Ruby supports anonymous functions by using a syntactical structure called "block". There are two data types for blocks in Ruby. codice_109s behave similarly to closures, whereas codice_110s behave more analogous to an anonymous function. When passed to a method, a block is converted into a Proc in some circumstances. ex = [16.2, 24.1, 48.3, 32.4, 8.5] =&gt; [16.2, 24.1, 48.3, 32.4, 8.5] ex.sort_by { |x| x - x.to_i } # Sort by fractional part, ignoring integer part. =&gt; [24.1, 16.2, 48.3, 32.4, 8.5] =&gt; #&lt;Proc:0x007ff4598705a0@(irb):7&gt; ex.call Hello, world! =&gt; nil def multiple_of?(n) end =&gt; nil multiple_four = multiple_of?(4) =&gt; #&lt;Proc:0x007ff458b45f88@(irb):12 (lambda)&gt; multiple_four.call(16) =&gt; true multiple_four[15] =&gt; false Rust. In Rust, anonymous functions are called closures. They are defined using the following syntax: For example: let f = |x: i32| -&gt; i32 { x * 2 }; With type inference, however, the compiler is able to infer the type of each parameter and the return type, so the above form can be written as: let f = |x| { x * 2 }; With closures with a single expression (i.e. a body with one line) and implicit return type, the curly braces may be omitted: let f = |x| x * 2; Closures with no input parameter are written like so: let f = || println!("Hello, world!"); Closures may be passed as input parameters of functions that expect a function pointer: // A function which takes a function pointer as an argument and calls it with // the value `5`. 
fn apply(f: fn(i32) -&gt; i32) -&gt; i32 { // No semicolon, to indicate an implicit return f(5) fn main() { // Defining the closure let f = |x| x * 2; println!("{}", apply(f)); // 10 println!("{}", f(5)); // 10 However, one may need complex rules to describe how values in the body of the closure are captured. They are implemented using the codice_111, codice_112, and codice_113 traits: With these traits, the compiler will capture variables in the least restrictive manner possible. They help govern how values are moved around between scopes, which is largely important since Rust follows a lifetime construct to ensure values are "borrowed" and moved in a predictable and explicit manner. The following demonstrates how one may pass a closure as an input parameter using the codice_111 trait: // A function that takes a value of type F (which is defined as // a generic type that implements the `Fn` trait, e.g. a closure) // and calls it with the value `5`. fn apply_by_ref&lt;F&gt;(f: F) -&gt; i32 where F: Fn(i32) -&gt; i32 f(5) fn main() { let f = |x| { println!("I got the value: {}", x); x * 2 // Applies the function before printing its return value println!("5 * 2 = {}", apply_by_ref(f)); // ~~ Program output ~~ // I got the value: 5 // 5 * 2 = 10 The previous function definition can also be shortened for convenience as follows: fn apply_by_ref(f: impl Fn(i32) -&gt; i32) -&gt; i32 { f(5) Scala. In Scala, anonymous functions use the following syntax: (x: Int, y: Int) =&gt; x + y In certain contexts, like when an anonymous function is a parameter being passed to another function, the compiler can infer the types of the parameters of the anonymous function and they can be omitted in the syntax. In such contexts, it is also possible to use a shorthand for anonymous functions using the underscore character to introduce unnamed parameters. val list = List(1, 2, 3, 4) list.reduceLeft( (x, y) =&gt; x + y ) // Here, the compiler can infer that the types of x and y are both Int. // Thus, it needs no type annotations on the parameters of the anonymous function. list.reduceLeft( _ + _ ) // Each underscore stands for a new unnamed parameter in the anonymous function. // This results in an even shorter equivalent to the anonymous function above. Smalltalk. In Smalltalk anonymous functions are called blocks and they are invoked (called) by sending them a "value" message. If several arguments are to be passed, a "value:...value:" message with a corresponding number of value arguments must be used. For example, in GNU Smalltalk, st&gt; f:=[:x|x*x]. f value: 8 . 64 st&gt; [:x :y|x+y] value: 5 value: 6 . 11 Smalltalk blocks are technically closures, allowing them to outlive their defining scope and still refer to the variables declared therein. st&gt; f := [:a|[:n|a+n]] value: 100 . a BlockClosure "returns the inner block, which adds 100 (captured in "a" variable) to its argument." st&gt; f value: 1 . 101 st&gt; f value: 2 . 102 Swift. In Swift, anonymous functions are called closures. The syntax has following form: statement For example: return s1 &gt; s2 For sake of brevity and expressiveness, the parameter types and return type can be omitted if these can be inferred: Similarly, Swift also supports implicit return statements for one-statement closures: Finally, the parameter names can be omitted as well; when omitted, the parameters are referenced using shorthand argument names, consisting of the $ symbol followed by their position (e.g. $0, $1, $2, etc.): Tcl. 
In Tcl, applying the anonymous squaring function to 2 looks as follows: apply {x {expr {$x*$x}}} 2 This example involves two candidates for what it means to be a "function" in Tcl. The most generic is usually called a "command prefix", and if the variable "f" holds such a function, then the way to perform the function application "f"("x") would be where codice_123 is the expansion prefix (new in Tcl 8.5). The command prefix in the above example is apply codice_124 Command names can be bound to command prefixes by means of the codice_125 command. Command prefixes support currying. Command prefixes are very common in Tcl APIs. The other candidate for "function" in Tcl is usually called a "lambda", and appears as the codice_126 part of the above example. This is the part which caches the compiled form of the anonymous function, but it can only be invoked by being passed to the codice_127 command. Lambdas do not support currying, unless paired with an codice_127 to form a command prefix. Lambdas are rare in Tcl APIs. Vala. In Vala, anonymous functions are supported as lambda expressions. delegate int IntOp (int x, int y); void main () { IntOp foo = (x, y) =&gt; x * y; stdout.printf("%d\n", foo(10,5)); Visual Basic .NET. Visual Basic .NET 2008 introduced anonymous functions through the lambda form. Combined with implicit typing, VB provides an economical syntax for anonymous functions. As with Python, in VB.NET, anonymous functions must be defined on one line; they cannot be compound statements. Further, an anonymous function in VB.NET must truly be a VB.NET codice_129 - it must return a value. Dim foo = Function(x) x * x Console.WriteLine(foo(10)) Visual Basic.NET 2010 added support for multiline lambda expressions and anonymous functions without a return value. For example, a function for use in a Thread. Dim t As New System.Threading.Thread(Sub () For n As Integer = 0 To 10 'Count to 10 Console.WriteLine(n) 'Print each number Next End Sub t.Start() References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\left(\n \\left(\n \\left(\n 1 \\times 2\n \\right)\n \\times 3\n \\right)\n \\times 4\n\\right)\n\\times 5\n= 120.\n" } ]
https://en.wikipedia.org/wiki?curid=7018181
70182421
2 Samuel 18
Second Book of Samuel chapter 2 Samuel 18 is the eighteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued in 1 Kings 1–2, which deals with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 33 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–11, 28–29. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The story of Absalom's rebellion can be observed as five consecutive episodes: A. David's flight from Jerusalem (15:13–16:14) B. The victorious Absalom and his counselors (16:15–17:14) C. David reaches Mahanaim (17:15–29) B'. The rebellion is crushed and Absalom is executed (18:1–19:8abc) A'. David's reentry into Jerusalem (19:8d–20:3) God's role seems understated throughout these events, but it is disclosed by a seemingly insignificant detail: 'the crossing of the Jordan river'. The Hebrew root word "'br", "to cross" (in various nominal and verbal forms), is used more than 30 times in these chapters (compared to 20 times in the rest of 2 Samuel) to report David's flight from Jerusalem, his crossing of the Jordan river, and his reentry into Jerusalem. In 2 Samuel 17:16, stating that David should cross the Jordan, the verb "'br" is even reinforced by a 'Hebrew infinitive absolute' to mark this critical moment: "king David is about to cross out of the land of Israel." David's future was in doubt until it was stated that God had rendered foolish Ahithophel's good counsel to Absalom (2 Samuel 17:14), thus granting David's prayer (15:31) and saving David from Absalom's further actions. Once Absalom was defeated, David's crossing back over the Jordan echoes the Israelites' first crossing over the Jordan under Joshua's leadership (Joshua 1–4). Here God's role is not as explicit as during Joshua's crossing, but the signs are clear that God was with David, just as he was with Joshua. Death of Absalom (18:1–18). Hushai's successful counsel to Absalom gave David enough time to organize his troops. By the time of the battle David had organized his forces into three groups, a traditional division at that time (cf. Judges 7:16; 1 Samuel 11:11). David was prevented by his men from marching out with them (verse 3), so he would not be in harm's way, as would happen to Absalom later.
The narrative emphasizes that David should not be implicated in Absalom's death, as he was not with the army and he gave specific instructions to his three commanders to 'deal gently' with Absalom, instructions which were also heard by all the people. The battle is described only briefly: 'the men of Israel', supporters of Absalom, were defeated by 'the servants of David', who were better placed to take advantage of the wooded terrain, made treacherous by the large pits, called 'the forest of Ephraim' (verse 17). Absalom fell victim to the forest: his phenomenally long hair (cf. 2 Samuel 14:26; cf. Josephus, Ant. 7 paragraph 239) got caught in the branches of a tree as his mule made its way under it, and 'he was left hanging' in mid-air. A man who reported Absalom's situation was originally offered a reward by Joab to kill Absalom, but he had three good reasons to refuse. Ignoring David's command to deal gently with Absalom, Joab himself thrust three spears at once through Absalom's heart and left his ten armorbearers to beat the prince to death (verse 15). As the rebels' leader was dead, Joab suspended hostilities, since this was not a war between peoples but an action against a single individual. Absalom's dead body was thrown into a pit by the troops and they heaped stones over him; this was not a respectable burial (cf. Joshua 7:26; 8:29), but Absalom had, during his lifetime, erected a memorial for himself in the Jerusalem area (verse 18), and this monument could be the one related to the Tomb of Absalom in the Kidron Valley. "Now Absalom in his lifetime had taken and set up a pillar for himself, which is in the King’s Valley. For he said, “I have no son to keep my name in remembrance.” He called the pillar after his own name. And to this day it is called Absalom’s Monument." David mourned the death of Absalom (18:19–33). The next drama concerns the transmission of the battle outcome to David. Ahimaaz, who was unaware of Absalom's death (verses 28–29), offered to bring the message and set out, but Joab could not rely on Ahimaaz to make that report as positively as he would wish, so Joab sent another messenger, a Cushite, to speak of good news despite Absalom's death. Ahimaaz, who arrived first, could only report that 'all was well' for David's side, but was unable to answer the question about Absalom. The Cushite brought the same good news but gave David the news of Absalom's death with a positive slant (verse 32). David understood the news and began a period of mourning for Absalom (verse 33), which continues into the next chapter. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70182421
70191734
2 Samuel 20
Second Book of Samuel chapter 2 Samuel 20 is the twentieth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued in 1 Kings 1–2, which deals with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 26 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q7 (1QSam; 50 BCE) with extant verses 6–10 and 4Q51 (4QSama; 100–50 BCE) with extant verses 1–2, 4, 9–14, 19, 21–25. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. Verses 1–3 of this chapter conclude the account of Absalom's rebellion with David safely back in his residence in Jerusalem. Rebellion of Sheba (20:1–22). The discontent of the northern tribes recorded at the end of the previous chapter led to another rebellion, this time under Sheba, 'the son of Bichri, a Benjaminite', and a representative of the Saulide camp (cf. Bechorath in 1 Samuel 9:1). Although verse 2 suggests that 'all Israel' (the tribes other than Judah) left David and followed Sheba, verse 14 shows that only the Bichrites were the active rebels, but the significance of this group must not be overlooked. David perceived in verse 6 that this dissent was potentially more harmful than Absalom's rebellion, because it endangered the structure of the kingdom. Significantly, Sheba's rallying cry (verse 1) was repeated when the kingdom of Israel was actually divided after the death of Solomon (1 Kings 12:16). Once David had settled in Jerusalem and made arrangements for his ten concubines, whom he had left behind (verse 3), he turned his focus to the dissension. The newly appointed commander, Amasa (2 Samuel 19:13), was given three days to rally a force, but did not do as requested. Abishai was immediately put in charge of the army, but Joab, who still had 'men' under his command (verse 7), took the lead in pursuing Sheba. When Amasa met them at Gibeon, Joab took hold of Amasa's beard as if to greet him with a kiss, but used a short sword hidden in his girdle to kill him. Now Joab unquestionably became the leader of the army (his brother Abishai is no longer mentioned after verse 10), and the pursuit reached Abel of Beth-maacah in the north, near Dan, which Sheba had entered. During the siege a 'wise woman' spoke to Joab from the rampart, offering a plan to save Abel-beth-maachah, a city which had a reputation for wisdom (verse 18) and was considered a 'mother city' in Israel (verse 19): Sheba would be beheaded and his severed head thrown down to Joab.
With this, the rebellion ended: all the people went home to their own cities, while Joab returned to Jerusalem to report to David. There are obvious links between the appearance of the wise woman of Abel and that of the wise woman of Tekoa in 2 Samuel 14: "Then the king arose and took his seat in the gate." "And the people were all told, "Behold, the king is sitting in the gate."" "And all the people came before the king." "Now Israel had fled every man to his own home." David's court officials (20:23–26). The chapter concludes with another list of David's court officials, not exactly identical to the previous list in 2 Samuel 8:15–18. The comparison is as follows: Joab remained the established commander of the army, and Benaiah remained in charge of the Cherethites and Pelethites. Adoram (written as "Adoniram" in 1 Kings 4:6), not mentioned in the previous list, was in charge of forced labor, which was established in the latter part of David's reign. All the other names are identical to those in the previous list, except Ira, who replaces David's sons at 2 Samuel 8:18 and was called 'the Jairite', probably denoting his origin from the village of Jair (Numbers 32:41; Deuteronomy 3:14). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70191734
70192699
ScGET-seq
Single-cell sequencing technology Single-cell genome and epigenome by transposases sequencing (scGET-seq) is a DNA sequencing method for profiling open and closed chromatin. In contrast to single-cell assay for transposase-accessible chromatin with sequencing (scATAC-seq), which only targets active euchromatin, scGET-seq is also capable of probing inactive heterochromatin. This is achieved through the use of TnH, which is created by linking the chromodomain (CD) of heterochromatin protein-1-alpha (HP-1formula_0) to the Tn5 transposase. TnH is then able to target histone 3 lysine 9 trimethylation (H3K9me3), a marker for heterochromatin. Akin to RNA velocity, which uses the ratio of spliced to unspliced RNA to infer the kinetics of changes in gene expression over the course of cellular development, the ratio of TnH to Tn5 signals obtained from scGET-seq can be used to calculate chromatin velocity, which measures the dynamics of chromatin accessibility over the course of cellular developmental pathways. History. Transcriptional regulation is tightly linked to chromatin states. Chromatin that is open, or permissive to transcription, makes up only 2–3% of the genome but encompasses 94.4% of transcription factor binding sites. Conversely, more tightly packed DNA, or heterochromatin, is responsible for genome organization and stability. Chromatin density also changes over the course of cellular differentiation processes, but there is a lack of high-throughput sequencing methods for directly assaying heterochromatin. Many genome-related diseases, such as cancer, are strongly linked to changes in the epigenome. Cancers in particular are characterized by single-cell heterogeneity, which can drive metastasis and treatment resistance. The mechanisms that underlie these processes are still largely unknown, although the advent of single-cell technologies, including single-cell epigenomics, has contributed greatly to their elucidation. In 2015, ATAC-seq, which uses the Tn5 transposase to fragment and tag accessible chromatin, or euchromatin, for sequencing, became feasible at single-cell resolution. scGET-seq builds upon this technology by also providing information on heterochromatin, giving a more comprehensive look at chromatin structure and dynamics within each cell. Methods. Sample preparation. Sample preparation for scGET-seq starts with obtaining a suspension of nuclei from cells using a method appropriate for the starting material. The next step is to produce the TnH transposase. Tn5 is a transposase that cuts and ligates adapters to genomic regions unbound by nucleosomes (open chromatin). HP-1a is a member of the HP1 family and is able to recognize and specifically bind to H3K9me3. Its chromodomain uses an induced-fit mechanism for recognizing this chromatin modification. Linking the first 112 amino acids of HP-1a containing the chromodomain to Tn5 using a three poly-tyrosine-glycine-serine (TGS) linker leads to the creation of the TnH transposase, which is capable of targeting heterochromatin marked by H3K9me3. Library preparation is done using a modified protocol for single-cell ATAC-seq, where the nuclei suspension is sequentially incubated with the Tn5 transposase first, and then TnH. Data analysis. The goals of the data analysis are described in the following subsections. Analysis. Dimension reduction, visualization and clustering. Each of the matrices is filtered of shared regions and then normalized and log2 transformed. Linear dimension reduction is done using principal component analysis (PCA).
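The per-matrix preprocessing and PCA step just described can be illustrated with a short, self-contained sketch. This is an illustration only, not the published scGET-seq pipeline: the function name, the shared-region mask, and the depth/log2 normalization are simplifying assumptions made here, and the k-NN/Leiden clustering and matrix-integration steps described next are omitted.

```python
# Minimal sketch of filtering, normalization, log2 transform and PCA for one
# cell-by-region count matrix (Tn5 or TnH). Not the published scGET-seq code.
import numpy as np
from sklearn.decomposition import PCA

def preprocess_and_reduce(counts, shared_regions=None, n_components=30):
    """counts: cells x regions matrix of Tn5 or TnH insertion counts."""
    X = np.asarray(counts, dtype=float)
    if shared_regions is not None:
        # Drop regions flagged as shared between the Tn5 and TnH matrices.
        X = X[:, ~np.asarray(shared_regions, dtype=bool)]
    # Simple per-cell depth normalization, then log2 transform.
    depth = X.sum(axis=1, keepdims=True)
    depth[depth == 0] = 1.0
    X = np.log2(1.0 + 1e4 * X / depth)
    # Linear dimension reduction with PCA.
    return PCA(n_components=n_components).fit_transform(X)

# Toy example: 100 cells x 500 regions of random counts.
rng = np.random.default_rng(0)
toy = rng.poisson(1.0, size=(100, 500))
embedding = preprocess_and_reduce(toy, n_components=10)
print(embedding.shape)  # (100, 10)
```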
Groups of cells are identified using a k-NN algorithm and Leiden algorithm. Finally, the four matrices are combined using matrix factorization and UMAP reduction. Cell identification annotation. There are two approaches to cell identity annotation: Annotation based on feature annotation of ATAC peaks, and annotation based on integration with reference scRNA-seq data. Applications. Current. By using the ratio of Tn5 to TnH signals, quantitative values describing how quickly and in what direction chromatin remodelling is taking place can be calculated (chromatin velocity). By isolating regions that are most dynamic and identifying which transcription factors bind there, chromatin velocity can be used to infer the dynamic epigenetic processes happening within a given cell and the contributions of various transcription factors to those processes. Future. Chromatin remodelling precedes changes in gene expression and enhances the understanding of trajectories and mechanisms of cellular changes. Thus, platforms and tools for integration of multimodal data are areas of active research Incorporating temporal and directionality elements through integration of chromatin velocity with RNA velocity has been proposed to reveal even more information about differentiation pathways. Limitations. scGET-seq has some of the same limitations as scATAC-seq. Both processes require nuclei samples from viable cells, and high cellular viability. Low cellular viability leads to high background DNA contamination that do not accurately represent authentic biological signals. Additionally, the sparsity and noisy nature of scATAC-seq and scGET-seq data makes analysis challenging, and there is no consensus yet on how to best manage this data Another limitation is that scGET-seq still needs the validation of SNVs results by bulk genome sequencing. Even though there is a high correlation of mutations between bulk exome sequencing and scGET-seq results, scGET-seq fails to capture all exome SNVs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=70192699
701934
Fermi's interaction
Mechanism of beta decay proposed in 1933 In particle physics, Fermi's interaction (also the Fermi theory of beta decay or the Fermi four-fermion interaction) is an explanation of the beta decay, proposed by Enrico Fermi in 1933. The theory posits four fermions directly interacting with one another (at one vertex of the associated Feynman diagram). This interaction explains beta decay of a neutron by direct coupling of a neutron with an electron, a neutrino (later determined to be an antineutrino) and a proton. Fermi first introduced this coupling in his description of beta decay in 1933. The Fermi interaction was the precursor to the theory for the weak interaction where the interaction between the proton–neutron and electron–antineutrino is mediated by a virtual W− boson, of which the Fermi theory is the low-energy effective field theory. According to Eugene Wigner, who together with Jordan introduced the Jordan–Wigner transformation, Fermi's paper on beta decay was his main contribution to the history of physics. History of initial rejection and later publication. Fermi first submitted his "tentative" theory of beta decay to the prestigious science journal "Nature", which rejected it "because it contained speculations too remote from reality to be of interest to the reader." It has been argued that "Nature" later admitted the rejection to be one of the great editorial blunders in its history, but Fermi's biographer David N. Schwartz has objected that this is both unproven and unlikely. Fermi then submitted revised versions of the paper to Italian and German publications, which accepted and published them in those languages in 1933 and 1934. The paper did not appear at the time in a primary publication in English. An English translation of the seminal paper was published in the American Journal of Physics in 1968. Fermi found the initial rejection of the paper so troubling that he decided to take some time off from theoretical physics, and do only experimental physics. This would lead shortly to his famous work with activation of nuclei with slow neutrons. The "tentativo". Definitions. The theory deals with three types of particles presumed to be in direct interaction: initially a “heavy particle” in the “neutron state” (formula_0), which then transitions into its “proton state” (formula_1) with the emission of an electron and a neutrino. formula_2 Electron state. where formula_3 is the single-electron wavefunction, formula_4 are its stationary states. formula_5 is the operator which annihilates an electron in state formula_6 which acts on the Fock space as formula_7 formula_8 is the creation operator for electron state formula_9 formula_10 Neutrino state. Similarly, formula_11 where formula_12 is the single-neutrino wavefunction, and formula_13 are its stationary states. formula_14 is the operator which annihilates a neutrino in state formula_15 which acts on the Fock space as formula_16 formula_17 is the creation operator for neutrino state formula_15. Heavy particle state. formula_18 is the operator introduced by Heisenberg (later generalized into isospin) that acts on a heavy particle state, which has eigenvalue +1 when the particle is a neutron, and −1 if the particle is a proton. Therefore, heavy particle states will be represented by two-row column vectors, where formula_19 represents a neutron, and formula_20 represents a proton (in the representation where formula_18 is the usual formula_21 spin matrix). 
The operators that change a heavy particle from a proton into a neutron and vice versa are respectively represented by formula_22 and formula_23 formula_24 resp. formula_25 is an eigenfunction for a neutron resp. proton in the state formula_26. Hamiltonian. The Hamiltonian is composed of three parts: formula_27, representing the energy of the free heavy particles, formula_28, representing the energy of the free light particles, and a part giving the interaction formula_29. formula_30 where formula_31 and formula_32 are the energy operators of the neutron and proton respectively, so that if formula_33, formula_34, and if formula_1, formula_35. formula_36 where formula_37 is the energy of the electron in the formula_38 state in the nucleus's Coulomb field, and formula_39 is the number of electrons in that state; formula_40 is the number of neutrinos in the formula_41 state, and formula_42 energy of each such neutrino (assumed to be in a free, plane wave state). The interaction part must contain a term representing the transformation of a proton into a neutron along with the emission of an electron and a neutrino (now known to be an antineutrino), as well as a term for the inverse process; the Coulomb force between the electron and proton is ignored as irrelevant to the formula_43-decay process. Fermi proposes two possible values for formula_29: first, a non-relativistic version which ignores spin: formula_44 and subsequently a version assuming that the light particles are four-component Dirac spinors, but that speed of the heavy particles is small relative to formula_45 and that the interaction terms analogous to the electromagnetic vector potential can be ignored: formula_46 where formula_3 and formula_12 are now four-component Dirac spinors, formula_47 represents the Hermitian conjugate of formula_3, and formula_48 is a matrix formula_49 Matrix elements. The state of the system is taken to be given by the tuple formula_50 where formula_51 specifies whether the heavy particle is a neutron or proton, formula_26 is the quantum state of the heavy particle, formula_39 is the number of electrons in state formula_6 and formula_40 is the number of neutrinos in state formula_15. Using the relativistic version of formula_29, Fermi gives the matrix element between the state with a neutron in state formula_26 and no electrons resp. neutrinos present in state formula_6 resp. formula_52, and the state with a proton in state formula_53 and an electron and a neutrino present in states formula_6 and formula_15 as formula_54 where the integral is taken over the entire configuration space of the heavy particles (except for formula_18). The formula_55 is determined by whether the total number of light particles is odd (−) or even (+). Transition probability. To calculate the lifetime of a neutron in a state formula_26 according to the usual quantum perturbation theory, the above matrix elements must be summed over all unoccupied electron and neutrino states. This is simplified by assuming that the electron and neutrino eigenfunctions formula_4 and formula_13 are constant within the nucleus (i.e., their Compton wavelength is much larger than the size of the nucleus). This leads to formula_56 where formula_4 and formula_13 are now evaluated at the position of the nucleus. According to Fermi's golden rule, the probability of this transition is formula_57 where formula_58 is the difference in the energy of the proton and neutron states. 
Averaging over all positive-energy neutrino spin / momentum directions (where formula_59 is the density of neutrino states, eventually taken to infinity), we obtain formula_60 where formula_61 is the rest mass of the neutrino and formula_43 is the Dirac matrix. Noting that the transition probability has a sharp maximum for values of formula_62 for which formula_63, this simplifies to formula_64 where formula_62 and formula_42 is the values for which formula_63. Fermi makes three remarks about this function: formula_69 in the transition probability is normally of magnitude 1, but in special circumstances it vanishes; this leads to (approximate) selection rules for formula_43-decay. Forbidden transitions. As noted above, when the inner product formula_70 between the heavy particle states formula_24 and formula_71 vanishes, the associated transition is "forbidden" (or, rather, much less likely than in cases where it is closer to 1). If the description of the nucleus in terms of the individual quantum states of the protons and neutrons is accurate to a good approximation, formula_70 vanishes unless the neutron state formula_24 and the proton state formula_71 have the same angular momentum; otherwise, the total angular momentum of the entire nucleus before and after the decay must be used. Influence. Shortly after Fermi's paper appeared, Werner Heisenberg noted in a letter to Wolfgang Pauli that the emission and absorption of neutrinos and electrons in the nucleus should, at the second order of perturbation theory, lead to an attraction between protons and neutrons, analogously to how the emission and absorption of photons leads to the electromagnetic force. He found that the force would be of the form formula_72, but noted that contemporary experimental data led to a value that was too small by a factor of a million. The following year, Hideki Yukawa picked up on this idea, but in his theory the neutrinos and electrons were replaced by a new hypothetical particle with a rest mass approximately 200 times heavier than the electron. Later developments. Fermi's four-fermion theory describes the weak interaction remarkably well. Unfortunately, the calculated cross-section, or probability of interaction, grows as the square of the energy formula_73. Since this cross section grows without bound, the theory is not valid at energies much higher than about 100 GeV. Here "G"F is the Fermi constant, which denotes the strength of the interaction. This eventually led to the replacement of the four-fermion contact interaction by a more complete theory (UV completion)—an exchange of a W or Z boson as explained in the electroweak theory. The interaction could also explain muon decay via a coupling of a muon, electron-antineutrino, muon-neutrino and electron, with the same fundamental strength of the interaction. This hypothesis was put forward by Gershtein and Zeldovich and is known as the Vector Current Conservation hypothesis. In the original theory, Fermi assumed that the form of interaction is a contact coupling of two vector currents. Subsequently, it was pointed out by Lee and Yang that nothing prevented the appearance of an axial, parity violating current, and this was confirmed by experiments carried out by Chien-Shiung Wu. 
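As a rough numerical aside (a sketch, not part of Fermi's original treatment): the modern value of the reduced Fermi constant quoted in the "Fermi constant" section below fixes the energy scale at which the contact interaction must break down, of the same order as the roughly 100 GeV figure mentioned above and as the electroweak scale.

```python
# Back-of-the-envelope check of the scale set by the Fermi constant.
import math

G_F = 1.1663787e-5                           # reduced Fermi constant, GeV^-2
scale = 1.0 / math.sqrt(G_F)                 # natural breakdown scale, ~293 GeV
v = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)    # Higgs vacuum expectation value, ~246 GeV

print(f"G_F^(-1/2) ~ {scale:.0f} GeV")
print(f"v = (sqrt(2) G_F)^(-1/2) ~ {v:.1f} GeV")
```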
The inclusion of parity violation in Fermi's interaction was done by George Gamow and Edward Teller in the so-called Gamow–Teller transitions which described Fermi's interaction in terms of parity-violating "allowed" decays and parity-conserving "superallowed" decays in terms of anti-parallel and parallel electron and neutrino spin states respectively. Before the advent of the electroweak theory and the Standard Model, George Sudarshan and Robert Marshak, and also independently Richard Feynman and Murray Gell-Mann, were able to determine the correct tensor structure (vector minus axial vector, "V" − "A") of the four-fermion interaction. Fermi constant. The most precise experimental determination of the Fermi constant comes from measurements of the muon lifetime, which is inversely proportional to the square of "G"F (when neglecting the muon mass against the mass of the W boson). In modern terms, the "reduced Fermi constant", that is, the constant in natural units is formula_74 Here, g is the coupling constant of the weak interaction, and "M"W is the mass of the W boson, which mediates the decay in question. In the Standard Model, the Fermi constant is related to the Higgs vacuum expectation value formula_75. More directly, approximately (tree level for the standard model), formula_76 This can be further simplified in terms of the Weinberg angle using the relation between the W and Z bosons with formula_77, so that formula_78 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rho=+1" }, { "math_id": 1, "text": "\\rho = -1" }, { "math_id": 2, "text": "\\psi = \\sum_s \\psi_s a_s," }, { "math_id": 3, "text": "\\psi" }, { "math_id": 4, "text": "\\psi_s" }, { "math_id": 5, "text": "a_s" }, { "math_id": 6, "text": "s" }, { "math_id": 7, "text": "a_s \\Psi(N_1, N_2, \\ldots, N_s, \\ldots) = (-1)^{N_1 + N_2 + \\cdots + N_s - 1} (1 - N_s) \\Psi(N_1, N_2, \\ldots, 1 - N_s, \\ldots)." }, { "math_id": 8, "text": "a_s^*" }, { "math_id": 9, "text": "s:" }, { "math_id": 10, "text": "a_s^* \\Psi(N_1, N_2, \\ldots, N_s, \\ldots) = (-1)^{N_1 + N_2 + \\cdots + N_s - 1} N_s \\Psi(N_1, N_2, \\ldots, 1 - N_s, \\ldots)." }, { "math_id": 11, "text": "\\phi = \\sum_\\sigma \\phi_\\sigma b_\\sigma," }, { "math_id": 12, "text": "\\phi" }, { "math_id": 13, "text": "\\phi_\\sigma" }, { "math_id": 14, "text": "b_\\sigma" }, { "math_id": 15, "text": "\\sigma" }, { "math_id": 16, "text": "b_\\sigma \\Phi(M_1, M_2, \\ldots, M_\\sigma, \\ldots) = (-1)^{M_1 + M_2 + \\cdots + M_\\sigma - 1} (1 - M_\\sigma) \\Phi(M_1, M_2, \\ldots, 1 - M_\\sigma, \\ldots)." }, { "math_id": 17, "text": "b_\\sigma^*" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "\\begin{pmatrix}1\\\\0\\end{pmatrix}" }, { "math_id": 20, "text": "\\begin{pmatrix}0\\\\1\\end{pmatrix}" }, { "math_id": 21, "text": "\\sigma_z" }, { "math_id": 22, "text": "Q = \\sigma_x - i \\sigma_y = \\begin{pmatrix}0 & 1\\\\ 0 & 0\\end{pmatrix}" }, { "math_id": 23, "text": "Q^* = \\sigma_x + i \\sigma_y = \\begin{pmatrix}0 & 0\\\\ 1 & 0\\end{pmatrix}." }, { "math_id": 24, "text": "u_n" }, { "math_id": 25, "text": "v_n" }, { "math_id": 26, "text": "n" }, { "math_id": 27, "text": "H_\\text{h.p.}" }, { "math_id": 28, "text": "H_\\text{l.p.}" }, { "math_id": 29, "text": "H_\\text{int.}" }, { "math_id": 30, "text": "H_\\text{h.p.} = \\frac{1}{2}(1 + \\rho)N + \\frac{1}{2}(1 - \\rho)P," }, { "math_id": 31, "text": "N" }, { "math_id": 32, "text": "P" }, { "math_id": 33, "text": "\\rho = 1" }, { "math_id": 34, "text": "H_\\text{h.p.} = N" }, { "math_id": 35, "text": "H_\\text{h.p.} = P" }, { "math_id": 36, "text": "H_\\text{l.p.} = \\sum_s H_s N_s + \\sum_\\sigma K_\\sigma M_\\sigma," }, { "math_id": 37, "text": "H_s" }, { "math_id": 38, "text": "s^\\text{th}" }, { "math_id": 39, "text": "N_s" }, { "math_id": 40, "text": "M_\\sigma" }, { "math_id": 41, "text": "\\sigma^\\text{th}" }, { "math_id": 42, "text": "K_\\sigma" }, { "math_id": 43, "text": "\\beta" }, { "math_id": 44, "text": "H_\\text{int.} = g \\left[ Q \\psi(x) \\phi(x) + Q^* \\psi^*(x) \\phi^*(x) \\right]," }, { "math_id": 45, "text": "c" }, { "math_id": 46, "text": "H_\\text{int.} = g \\left[ Q \\tilde{\\psi}^* \\delta \\phi + Q^* \\tilde{\\psi} \\delta \\phi^* \\right]," }, { "math_id": 47, "text": "\\tilde{\\psi}" }, { "math_id": 48, "text": "\\delta" }, { "math_id": 49, "text": "\\begin{pmatrix}\n0 & -1 & 0 & 0\\\\\n1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1\\\\\n0 & 0 & -1 & 0\n\\end{pmatrix}." 
}, { "math_id": 50, "text": "\\rho, n, N_1, N_2, \\ldots, M_1, M_2, \\ldots," }, { "math_id": 51, "text": "\\rho = \\pm 1" }, { "math_id": 52, "text": "\\sigma " }, { "math_id": 53, "text": "m" }, { "math_id": 54, "text": "H^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1} = \\pm g \\int v_m^* u_n \\tilde{\\psi}_s \\delta \\phi^*_\\sigma d\\tau," }, { "math_id": 55, "text": "\\pm" }, { "math_id": 56, "text": "H^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1} = \\pm g \\tilde{\\psi}_s \\delta \\phi_\\sigma^* \\int v_m^* u_n d\\tau," }, { "math_id": 57, "text": "\\begin{align}\n\\left|a^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1}\\right|^2 &= \\left|H^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1} \\times \\frac{\\exp{\\frac{2\\pi i}{h} (-W + H_s + K_\\sigma) t} - 1}{-W + H_s + K_\\sigma}\\right|^2 \\\\\n&= 4 \\left|H^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1}\\right|^2 \\times \\frac{\\sin^2\\left(\\frac{\\pi t}{h}(-W + H_s + K_\\sigma)\\right)}{(-W + H_s + K_\\sigma)^2},\n\\end{align}" }, { "math_id": 58, "text": "W" }, { "math_id": 59, "text": "\\Omega^{-1}" }, { "math_id": 60, "text": " \\left\\langle \\left|H^{\\rho=1, n, N_s=0, M_\\sigma=0}_{\\rho=-1,m,N_s=1,M_\\sigma=1}\\right|^2 \\right \\rangle_\\text{avg} = \\frac{g^2}{4\\Omega} \\left|\\int v_m^* u_n d\\tau\\right|^2 \\left( \\tilde{\\psi}_s \\psi_s - \\frac{\\mu c^2}{K_\\sigma} \\tilde{\\psi}_s \\beta \\psi_s\\right)," }, { "math_id": 61, "text": "\\mu" }, { "math_id": 62, "text": "p_\\sigma" }, { "math_id": 63, "text": "-W + H_s + K_\\sigma = 0" }, { "math_id": 64, "text": " t\\frac{8\\pi^3 g^2}{h^4} \\times \\left| \\int v_m^* u_n d\\tau \\right|^2 \\frac{p_\\sigma^2}{v_\\sigma}\\left(\\tilde{\\psi}_s \\psi_s - \\frac{\\mu c^2}{K_\\sigma} \\tilde{\\psi}_s \\beta \\psi_s\\right)," }, { "math_id": 65, "text": "K_\\sigma > \\mu c^2" }, { "math_id": 66, "text": "H_s \\leq W - \\mu c^2" }, { "math_id": 67, "text": "H_s > mc^2" }, { "math_id": 68, "text": "W \\geq (m + \\mu)c^2" }, { "math_id": 69, "text": "Q_{mn}^* = \\int v_m^* u_n d\\tau" }, { "math_id": 70, "text": "Q_{mn}^*" }, { "math_id": 71, "text": "v_m" }, { "math_id": 72, "text": "\\frac{\\text{Const.}}{r^5}" }, { "math_id": 73, "text": " \\sigma \\approx G_{\\rm F}^2 E^2 " }, { "math_id": 74, "text": "G_{\\rm F}^0=\\frac{G_{\\rm F}}{(\\hbar c)^3}=\\frac{\\sqrt{2}}{8}\\frac{g^{2}}{M_{\\rm W}^{2} c^4}=1.1663787(6)\\times10^{-5} \\; \\textrm{GeV}^{-2} \\approx 4.5437957\\times10^{14} \\; \\textrm{J}^{-2}\\ ." }, { "math_id": 75, "text": "v = \\left(\\sqrt{2} \\, G_{\\rm F}^0\\right)^{-1/2} \\simeq 246.22 \\; \\textrm{GeV}" }, { "math_id": 76, "text": " \nG_{\\rm F}^0\\simeq \\frac {\\pi \\alpha}{\\sqrt{2}~ M_{\\rm W}^2 (1- M^2_{\\rm W}/M^2_{\\rm Z} )}. \n" }, { "math_id": 77, "text": "M_\\text{Z}=\\frac{M_\\text{W}}{\\cos\\theta_\\text{W}}" }, { "math_id": 78, "text": " \nG_{\\rm F}^0\\simeq \\frac {\\pi \\alpha}{\\sqrt{2}~ M_{\\rm Z}^{2}\\cos^{2}\\theta_{\\rm W}\\sin^{2}\\theta_{\\rm W}}.\n" } ]
https://en.wikipedia.org/wiki?curid=701934
70194962
Ribonucleoprotein Networks Analyzed by Mutational Profiling
Protein-RNA binding probing method Ribonucleoprotein Networks Analyzed by Mutational Profiling (RNP-MaP) is a strategy for probing RNA-protein networks and protein binding sites at a nucleotide resolution. Information about RNP assembly and function can facilitate a better understanding of biological mechanisms. RNP-MaP uses NHS-diazirine (SDA), a hetero-bifunctional crosslinker, to freeze RNA-bound proteins in place. Once the RNA-protein crosslinks are formed, MaP reverse transcription is then conducted to reversely transcribe the protein-bound RNAs as well as introduce mutations at the site of RNA-protein crosslinks. Sequencing results of the cDNAs reveal information about both protein-RNA interaction networks and protein binding sites. Strategy. Components. RNA-MaP involves three major components: Workflow. Long-wavelength UV and SDA reagents are first supplied to living cells to crosslink protein residues with RNA by forming amide bonds between amine groups of lysine (or arginine) residues and succinimidyl esters. Next, cells containing crosslinked RNPs are lysed and the RNA-bound proteins are digested into peptide adducts. MaP reverse transcription is then performed to label the protein-RNA binding sites through peptide adduct-induced mutations. Sequencing of the mutation-containing cDNA product will reveal the mutation sites (or RNP-MaP sites) and the correlations between the RNP-MaP sites are computationally determined using 3-nucleotide windows. Analysis. RNP-MaP site identification. RNP-MaP sites are defined as protein bound nucleotides. SDA and UV treated and UV only treated sample sequence reads are aligned and mutations are counted using ShapeMapper2 software. The SDA or RNP-MaP reactivity for a nucleotide is the ratio of the crosslinked (SDA and UV treated) mutational frequency to the un-crosslinked (UV only) mutation frequency. Using differential mutational signatures, RNP-MaP sites are identified based on universal normalization factors and thresholds on each RNA nucleotide (U, A, C, and G) derived from analysis of ribonucleoproteins of known structure. A nucleotide is identified as a RNP-MaP site if it passes three filters: Protein-RNA interaction network identification. Protein-RNA interactions networks are identified using RNP-MaP correlations since multiple crosslink sites can be detected for a single RNA molecule. RNP-MaP correlations provide a complementary measure of protein binding to RNA independent of RNP-MaP sites. They are identified using a G-test framework known as RingMapper. RNP-MaP correlations require a single RNA molecule to form at least two crosslinks and arise from any of three scenarios: Using RNP-MaP correlations, a network of protein-RNA interaction sites is found and can then be used for functional analysis. Related methods. Cross-linking immunoprecipitation (CLIP). CLIP analyzes protein interactions with RNA by combining UV cross-linking and immunoprecipitation. CLIP-based techniques are able to map RNA binding protein binding sites of interest on a genome-wide scale. There are many CLIP-based methods including: Mass spectrometry. Quantitative mass spectrometry (MS) (or quantitative proteomics) can be used to discover RNA-binding proteins (RBPs) bound to RNA. Labeling MS methods involve the differential use of stable isotope labels or chemical tagging of proteins in samples and controls. This is used to obtain enrichment scores and true binding partners through the ratio of labeled peptides. 
Label-free MS methods are able to identify proteins in samples and controls. In order to distinguish true binding partners from nonspecific proteins, analytical tools used alongside spectral count data from non-quantitative MS are used to score the probability of a true RBP-RNA interaction. Advantages and Limitations. Advantages. RNP-MaP can help reveal functionally important RNA-protein binding networks through binding site density and interconnectivity, independent of previous knowledge of interacting proteins. Because of the unbiased nature of the analysis, RNP-MaP is able to detect conserved RNA-protein interactions between species. RNP-MaP is also able to facilitate the characterization of functionally critical elements in large non-coding RNAs or even viral RNAs. Limitations. As a standalone technique, RNP-MaP cannot be used to determine protein-RNA binding mechanisms or protein identities. In order to do so, RNP-MaP must be used in conjunction with other techniques such as CLIP and mass spectrometry. RNP-MaP requires extremely high read depths for analysis. To identify RNP-MaP sites, 1000x sequencing coverage is required, while RNP-MaP correlation sites require 10,000x sequencing coverage. There are severe limitations on the ability to characterize RNP-MaP correlations between distant (&gt;500 nucleotides) RNP-MaP sites. This is due to limitations of MaP reverse transcription processivity (500–600 nucleotides) and sequencing instrument clustering (&lt;1,000 nucleotides). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T_x" }, { "math_id": 1, "text": "T_x = \\frac{BG_{X>10} - MED_{all X}}{SD_{all X}}" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "BG_X" }, { "math_id": 4, "text": "MED_{all X}" }, { "math_id": 5, "text": "SD_{all X}" }, { "math_id": 6, "text": "Z-factor = 1-\\frac{2.575(\\sigma_{SDA+UV}+\\sigma_{UV})}{|\\text{mutation rate}_{SDA+UV}-\\text{mutation rate}_{UV}|}" }, { "math_id": 7, "text": "\\text{mutation rate}_{SDA+UV}" }, { "math_id": 8, "text": "\\text{mutation rate}_{UV}" }, { "math_id": 9, "text": "\\sigma_{nt} = \\frac{\\sqrt{\\text{mutation rate}_{nt}}}{\\sqrt{\\text{reads}_{nt}}}" }, { "math_id": 10, "text": "nt" } ]
https://en.wikipedia.org/wiki?curid=70194962
701991
Creation and annihilation operators
Operators useful in quantum mechanics Creation operators and annihilation operators are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems. An annihilation operator (usually denoted formula_0) lowers the number of particles in a given state by one. A creation operator (usually denoted formula_1) increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization. They were introduced by Paul Dirac. Creation and annihilation operators can act on states of various types of particles. For example, in quantum chemistry and many-body theory the creation and annihilation operators often act on electron states. They can also refer specifically to the ladder operators for the quantum harmonic oscillator. In the latter case, the creation operator is interpreted as a raising operator, adding a quantum of energy to the oscillator system (similarly for the lowering operator). They can be used to represent phonons. Constructing Hamiltonians using these operators has the advantage that the theory automatically satisfies the cluster decomposition theorem. The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator. For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators. Ladder operators for the quantum harmonic oscillator. In the context of the quantum harmonic oscillator, one reinterprets the ladder operators as creation and annihilation operators, adding or subtracting fixed quanta of energy to the oscillator system. Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wavefunctions have different symmetry properties. First consider the simpler bosonic case of the photons of the quantum harmonic oscillator. Start with the Schrödinger equation for the one-dimensional time independent quantum harmonic oscillator, formula_2 Make a coordinate substitution to nondimensionalize the differential equation formula_3 The Schrödinger equation for the oscillator becomes formula_4 Note that the quantity formula_5 is the same energy as that found for light quanta and that the parenthesis in the Hamiltonian can be written as formula_6 The last two terms can be simplified by considering their effect on an arbitrary differentiable function formula_7 formula_8 which implies, formula_9 coinciding with the usual canonical commutation relation formula_10, in position space representation: formula_11. Therefore, formula_12 and the Schrödinger equation for the oscillator becomes, with substitution of the above and rearrangement of the factor of 1/2, formula_13 If one defines formula_14 as the "creation operator" or the "raising operator" and formula_15 as the "annihilation operator" or the "lowering operator", the Schrödinger equation for the oscillator reduces to formula_16 This is significantly simpler than the original form. Further simplifications of this equation enable one to derive all the properties listed above thus far. 
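Before continuing the derivation, the operator algebra just obtained can be checked numerically with truncated matrices. This is a minimal illustrative sketch; the explicit square-root matrix elements used here are the ones stated in the "Matrix representation" section below, and the deviation in the last diagonal entry of the commutator is purely a truncation artifact.

```python
# Numerical sanity check of the ladder-operator algebra on the lowest `dim`
# number states. The commutator [a, a_dag] equals the identity except in the
# last diagonal entry, which is a truncation artifact.
import numpy as np

dim = 8
n = np.arange(1, dim)
a = np.diag(np.sqrt(n), k=1)      # annihilation operator: a|n> = sqrt(n)|n-1>
a_dag = a.conj().T                # creation operator

N = a_dag @ a                     # number operator, diag(0, 1, ..., dim-1)
comm = a @ a_dag - a_dag @ a      # [a, a_dag]

print(np.allclose(np.diag(N), np.arange(dim)))        # True
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))   # True
# With hbar = omega = 1, H = N + 1/2 has the expected ladder of eigenvalues.
print(np.linalg.eigvalsh(N + 0.5 * np.eye(dim)))      # [0.5, 1.5, 2.5, ...]
```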
Letting formula_17, where formula_18 is the nondimensionalized momentum operator one has formula_19 and formula_20 Note that these imply formula_21 The operators formula_22 and formula_23 may be contrasted to normal operators, which commute with their adjoints. Using the commutation relations given above, the Hamiltonian operator can be expressed as formula_24 One may compute the commutation relations between the formula_22 and formula_23 operators and the Hamiltonian: formula_25 These relations can be used to easily find all the energy eigenstates of the quantum harmonic oscillator as follows. Assuming that formula_26 is an eigenstate of the Hamiltonian formula_27. Using these commutation relations, it follows that formula_28 This shows that formula_29 and formula_30 are also eigenstates of the Hamiltonian, with eigenvalues formula_31 and formula_32 respectively. This identifies the operators formula_33 and formula_34 as "lowering" and "raising" operators between adjacent eigenstates. The energy difference between adjacent eigenstates is formula_35. The ground state can be found by assuming that the lowering operator possesses a nontrivial kernel: formula_36 with formula_37. Applying the Hamiltonian to the ground state, formula_38 So formula_39 is an eigenfunction of the Hamiltonian. This gives the ground state energy formula_40, which allows one to identify the energy eigenvalue of any eigenstate formula_26 as formula_41 Furthermore, it turns out that the first-mentioned operator in (*), the number operator formula_42 plays the most important role in applications, while the second one, formula_43 can simply be replaced by formula_44. Consequently, formula_45 The time-evolution operator is then formula_46 Explicit eigenfunctions. The ground state formula_47 of the quantum harmonic oscillator can be found by imposing the condition that formula_48 Written out as a differential equation, the wavefunction satisfies formula_49 with the solution formula_50 The normalization constant C is found to be formula_51 from formula_52,  using the Gaussian integral. Explicit formulas for all the eigenfunctions can now be found by repeated application of formula_53 to formula_54. Matrix representation. The matrix expression of the creation and annihilation operators of the quantum harmonic oscillator with respect to the above orthonormal basis is formula_55 These can be obtained via the relationships formula_56 and formula_57. The eigenvectors formula_58 are those of the quantum harmonic oscillator, and are sometimes called the "number basis". Generalized creation and annihilation operators. Thanks to representation theory and C*-algebras the operators derived above are actually a specific instance of a more generalized notion of creation and annihilation operators in the context of CCR and CAR algebras. Mathematically and even more generally ladder operators can be understood in the context of a root system of a semisimple Lie group and the associated semisimple Lie algebra without the need of realizing the representation as operators on a functional Hilbert space. In the Hilbert space representation case the operators are constructed as follows: Let formula_59 be a one-particle Hilbert space (that is, any Hilbert space, viewed as representing the state of a single particle). 
The (bosonic) CCR algebra over formula_59 is the algebra-with-conjugation-operator (called "*") abstractly generated by elements formula_60, where formula_61ranges freely over formula_59, subject to the relations formula_62 in bra–ket notation. The map formula_63 from formula_59 to the bosonic CCR algebra is required to be complex antilinear (this adds more relations). Its adjoint is formula_64, and the map formula_65 is complex linear in H. Thus formula_59 embeds as a complex vector subspace of its own CCR algebra. In a representation of this algebra, the element formula_60 will be realized as an annihilation operator, and formula_64 as a creation operator. In general, the CCR algebra is infinite dimensional. If we take a Banach space completion, it becomes a C*-algebra. The CCR algebra over formula_59 is closely related to, but not identical to, a Weyl algebra. For fermions, the (fermionic) CAR algebra over formula_59 is constructed similarly, but using anticommutator relations instead, namely formula_66 The CAR algebra is finite dimensional only if formula_59 is finite dimensional. If we take a Banach space completion (only necessary in the infinite dimensional case), it becomes a formula_67 algebra. The CAR algebra is closely related, but not identical to, a Clifford algebra. Physically speaking, formula_60 removes (i.e. annihilates) a particle in the state formula_68 whereas formula_64 creates a particle in the state formula_68. The free field vacuum state is the state formula_69 with no particles, characterized by formula_70 If formula_68 is normalized so that formula_71, then formula_72 gives the number of particles in the state formula_68. Creation and annihilation operators for reaction-diffusion equations. The annihilation and creation operator description has also been useful to analyze classical reaction diffusion equations, such as the situation when a gas of molecules formula_73 diffuse and interact on contact, forming an inert product: formula_74. To see how this kind of reaction can be described by the annihilation and creation operator formalism, consider formula_75 particles at a site i on a one dimensional lattice. Each particle moves to the right or left with a certain probability, and each pair of particles at the same site annihilates each other with a certain other probability. The probability that one particle leaves the site during the short time period "dt" is proportional to formula_76, let us say a probability formula_77 to hop left and formula_78 to hop right. All formula_79 particles will stay put with a probability formula_80. (Since "dt" is so short, the probability that two or more will leave during "dt" is very small and will be ignored.) We can now describe the occupation of particles on the lattice as a 'ket' of the form formula_81. It represents the juxtaposition (or conjunction, or tensor product) of the number states formula_82 formula_83, formula_84 located at the individual sites of the lattice. Recall that formula_85 and formula_86 for all "n" ≥ 0, while formula_87 This definition of the operators will now be changed to accommodate the "non-quantum" nature of this problem and we shall use the following definition: formula_88 note that even though the behavior of the operators on the kets has been modified, these operators still obey the commutation relation formula_89 Now define formula_90 so that it applies formula_91 to formula_92. Correspondingly, define formula_93 as applying formula_53 to formula_92. 
Thus, for example, the net effect of formula_94 is to move a particle from the formula_95-th to the i-th site while multiplying with the appropriate factor. This allows writing the pure diffusive behavior of the particles as formula_96 The reaction term can be deduced by noting that formula_97 particles can interact in formula_98 different ways, so that the probability that a pair annihilates is formula_99, yielding a term formula_100 where number state n is replaced by number state "n" − 2 at site formula_101 at a certain rate. Thus the state evolves by formula_102 Other kinds of interactions can be included in a similar manner. This kind of notation allows the use of quantum field theoretic techniques to be used in the analysis of reaction diffusion systems. Creation and annihilation operators in quantum field theories. In quantum field theories and many-body problems one works with creation and annihilation operators of quantum states, formula_103 and formula_104. These operators change the eigenvalues of the number operator, formula_105 by one, in analogy to the harmonic oscillator. The indices (such as formula_101) represent quantum numbers that label the single-particle states of the system; hence, they are not necessarily single numbers. For example, a tuple of quantum numbers formula_106 is used to label states in the hydrogen atom. The commutation relations of creation and annihilation operators in a multiple-boson system are, formula_107 where formula_108 is the commutator and formula_109 is the Kronecker delta. For fermions, the commutator is replaced by the anticommutator formula_110, formula_111 Therefore, exchanging disjoint (i.e. formula_112) operators in a product of creation or annihilation operators will reverse the sign in fermion systems, but not in boson systems. If the states labelled by "i" are an orthonormal basis of a Hilbert space "H", then the result of this construction coincides with the CCR algebra and CAR algebra construction in the previous section but one. If they represent "eigenvectors" corresponding to the continuous spectrum of some operator, as for unbound particles in QFT, then the interpretation is more subtle. Normalization. While Zee obtains the momentum space normalization formula_113 via the symmetric convention for Fourier transforms, Tong and Peskin &amp; Schroeder use the common asymmetric convention to obtain formula_114. Each derives formula_115. Srednicki additionally merges the Lorentz-invariant measure into his asymmetric Fourier measure, formula_116, yielding formula_117. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
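As a concrete numerical illustration of the commutation and anticommutation relations quoted above, the operators can be written out as explicit matrices: a single bosonic mode in a truncated number basis, and two fermionic modes realized on two qubits with a Jordan–Wigner-style string. This is an illustrative sketch, assuming Python with NumPy; the truncation level n_max and the mode labels are arbitrary choices.

```python
import numpy as np

# Bosonic mode: ladder matrices in a truncated number basis |0>, ..., |n_max>
n_max = 6
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # a|n> = sqrt(n)|n-1>
adag = a.conj().T
comm = a @ adag - adag @ a
print(np.allclose(comm[:n_max, :n_max], np.eye(n_max)))   # identity, except at the truncation edge

# Two fermionic modes built on two qubits with a Jordan-Wigner-style string
c = np.array([[0.0, 1.0], [0.0, 0.0]])               # single-mode annihilator
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
c1 = np.kron(c, I2)
c2 = np.kron(Z, c)

def anti(A, B):
    return A @ B + B @ A

print(np.allclose(anti(c1, c1.conj().T), np.eye(4)))      # {c1, c1+} = 1
print(np.allclose(anti(c2, c2.conj().T), np.eye(4)))      # {c2, c2+} = 1
print(np.allclose(anti(c1, c2), 0))                       # {c1, c2} = 0
print(np.allclose(anti(c1, c2.conj().T), 0))              # {c1, c2+} = 0
```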
[ { "math_id": 0, "text": "\\hat{a}" }, { "math_id": 1, "text": "\\hat{a}^\\dagger" }, { "math_id": 2, "text": "\\left(-\\frac{\\hbar^2}{2m} \\frac{d^2}{d x^2} + \\frac{1}{2}m \\omega^2 x^2\\right) \\psi(x) = E \\psi(x)." }, { "math_id": 3, "text": "x \\ = \\ \\sqrt{ \\frac{\\hbar}{m \\omega}} q." }, { "math_id": 4, "text": " \\frac{\\hbar \\omega}{2} \\left(-\\frac{d^2}{d q^2} + q^2 \\right) \\psi(q) = E \\psi(q)." }, { "math_id": 5, "text": " \\hbar \\omega = h \\nu " }, { "math_id": 6, "text": " -\\frac{d^2}{dq^2} + q^2 = \\left(-\\frac{d}{dq}+q \\right) \\left(\\frac{d}{dq}+ q \\right) + \\frac {d}{dq}q - q \\frac {d}{dq} ." }, { "math_id": 7, "text": " f(q), " }, { "math_id": 8, "text": "\\left(\\frac{d}{dq} q- q \\frac{d}{dq} \\right)f(q) = \\frac{d}{dq}(q f(q)) - q \\frac{df(q)}{dq} = f(q) " }, { "math_id": 9, "text": "\\frac{d}{dq} q- q \\frac{d}{dq} = 1 ," }, { "math_id": 10, "text": " -i[q,p]=1 " }, { "math_id": 11, "text": "p:=-i\\frac{d}{dq}" }, { "math_id": 12, "text": " -\\frac{d^2}{dq^2} + q^2 = \\left(-\\frac{d}{dq}+q \\right) \\left(\\frac{d}{dq}+ q \\right) + 1 " }, { "math_id": 13, "text": " \\hbar \\omega \\left[\\frac{1}{\\sqrt{2}} \\left(-\\frac{d}{dq}+q \\right)\\frac{1}{\\sqrt{2}} \\left(\\frac{d}{dq}+ q \\right) + \\frac{1}{2} \\right] \\psi(q) = E \\psi(q)." }, { "math_id": 14, "text": "a^\\dagger \\ = \\ \\frac{1}{\\sqrt{2}} \\left(-\\frac{d}{dq} + q\\right)" }, { "math_id": 15, "text": " a \\ \\ = \\ \\frac{1}{\\sqrt{2}} \\left(\\ \\ \\ \\!\\frac{d}{dq} + q\\right)" }, { "math_id": 16, "text": " \\hbar \\omega \\left( a^\\dagger a + \\frac{1}{2} \\right) \\psi(q) = E \\psi(q)." }, { "math_id": 17, "text": "p = - i \\frac{d}{dq}" }, { "math_id": 18, "text": "p" }, { "math_id": 19, "text": " [q, p] = i \\," }, { "math_id": 20, "text": "\\begin{align}\na &= \\frac{1}{\\sqrt{2}}(q + i p) = \\frac{1}{\\sqrt{2}}\\left( q + \\frac{d}{dq}\\right) \\\\[1ex]\na^\\dagger &= \\frac{1}{\\sqrt{2}}(q - i p) = \\frac{1}{\\sqrt{2}}\\left( q - \\frac{d}{dq}\\right).\n\\end{align}" }, { "math_id": 21, "text": " [a, a^\\dagger ] = \\frac{1}{2} [ q + ip , q-i p] = \\frac{1}{2} ([q,-ip] + [ip, q]) = -\\frac{i}{2} ([q, p] + [q, p]) = 1. " }, { "math_id": 22, "text": "a\\," }, { "math_id": 23, "text": "a^\\dagger\\," }, { "math_id": 24, "text": "\\hat H = \\hbar \\omega \\left( a \\, a^\\dagger - \\frac{1}{2}\\right) = \\hbar \\omega \\left( a^\\dagger \\, a + \\frac{1}{2}\\right).\\qquad\\qquad(*)" }, { "math_id": 25, "text": "\\begin{align}\n\\left[\\hat H, a \\right] &= \\left[\\hbar \\omega \\left ( a a^\\dagger - \\tfrac{1}{2}\\right ) , a\\right] = \\hbar \\omega \\left[ a a^\\dagger, a\\right] = \\hbar \\omega \\left( a [a^\\dagger,a] + [a,a] a^\\dagger\\right) = -\\hbar \\omega a. \\\\[1ex]\n\\left[\\hat H, a^\\dagger \\right] &= \\hbar \\omega \\, a^\\dagger .\n\\end{align}" }, { "math_id": 26, "text": "\\psi_n" }, { "math_id": 27, "text": "\\hat H \\psi_n = E_n\\, \\psi_n" }, { "math_id": 28, "text": "\\begin{align}\n\\hat H\\, a\\psi_n &= (E_n - \\hbar \\omega)\\, a\\psi_n . 
\\\\[1ex]\n\\hat H\\, a^\\dagger\\psi_n &= (E_n + \\hbar \\omega)\\, a^\\dagger\\psi_n .\n\\end{align}" }, { "math_id": 29, "text": "a\\psi_n" }, { "math_id": 30, "text": "a^\\dagger\\psi_n" }, { "math_id": 31, "text": "E_n - \\hbar \\omega" }, { "math_id": 32, "text": "E_n + \\hbar \\omega" }, { "math_id": 33, "text": "a" }, { "math_id": 34, "text": "a^\\dagger" }, { "math_id": 35, "text": "\\Delta E = \\hbar \\omega" }, { "math_id": 36, "text": "a\\, \\psi_0 = 0" }, { "math_id": 37, "text": "\\psi_0\\ne0" }, { "math_id": 38, "text": "\\hat H\\psi_0 = \\hbar\\omega\\left(a^\\dagger a+\\frac{1}{2}\\right)\\psi_0 = \\hbar\\omega a^\\dagger a \\psi_0 + \\frac{\\hbar\\omega}{2}\\psi_0=0+\\frac{\\hbar\\omega}{2}\\psi_0=E_0\\psi_0." }, { "math_id": 39, "text": "\\psi_0" }, { "math_id": 40, "text": "E_0 = \\hbar \\omega /2" }, { "math_id": 41, "text": "E_n = \\left(n + \\tfrac{1}{2}\\right)\\hbar \\omega." }, { "math_id": 42, "text": "N=a^\\dagger a\\,," }, { "math_id": 43, "text": "a a^\\dagger \\," }, { "math_id": 44, "text": "N+1" }, { "math_id": 45, "text": "\\hbar\\omega \\,\\left(N+\\tfrac{1}{2}\\right)\\,\\psi (q) =E\\,\\psi (q)~." }, { "math_id": 46, "text": "\\begin{align}\nU(t)\n&= \\exp ( -it \\hat{H}/\\hbar)\n= \\exp (-it\\omega (a^\\dagger a+1/2)) ~, \\\\[1ex]\n&= e^{-it \\omega /2} ~ \\sum_{k=0}^{\\infty} {(e^{-i\\omega t}-1)^k \\over k!} a^{{\\dagger} {k}} a^k ~.\n\\end{align}" }, { "math_id": 47, "text": "\\ \\psi_0(q)" }, { "math_id": 48, "text": " a \\ \\psi_0(q) = 0." }, { "math_id": 49, "text": "q \\psi_0 + \\frac{d\\psi_0}{dq} = 0" }, { "math_id": 50, "text": "\\psi_0(q) = C \\exp\\left(-\\tfrac 1 2 q^2\\right)." }, { "math_id": 51, "text": "1/ \\sqrt[4]{\\pi}" }, { "math_id": 52, "text": "\\int_{-\\infty}^\\infty \\psi_0^* \\psi_0 \\,dq = 1" }, { "math_id": 53, "text": " a^\\dagger" }, { "math_id": 54, "text": " \\psi_0" }, { "math_id": 55, "text": " \\begin{align}\na^\\dagger &= \\begin{pmatrix}\n0 & 0 & 0 & 0 & \\dots & 0 & \\dots \\\\\n\\sqrt{1} & 0 & 0 & 0 & \\dots & 0 & \\dots \\\\\n0 & \\sqrt{2} & 0 & 0 & \\dots & 0 & \\dots \\\\\n0 & 0 & \\sqrt{3} & 0 & \\dots & 0 & \\dots \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\ddots & \\dots & \\dots \\\\\n0 & 0 & 0 & \\dots & \\sqrt{n} & 0 & \\dots & \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\ddots \\end{pmatrix}\n\\\\[1ex]\na &= \\begin{pmatrix}\n0 & \\sqrt{1} & 0 & 0 & \\dots & 0 & \\dots \\\\\n0 & 0 & \\sqrt{2} & 0 & \\dots & 0 & \\dots \\\\\n0 & 0 & 0 & \\sqrt{3} & \\dots & 0 & \\dots \\\\\n0 & 0 & 0 & 0 & \\ddots & \\vdots & \\dots \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\sqrt{n} & \\dots \\\\\n0 & 0 & 0 & 0 & \\dots & 0 & \\ddots \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots \\end{pmatrix}\n\\end{align}\n" }, { "math_id": 56, "text": "a^\\dagger_{ij} = \\left\\langle\\psi_i \\right| a^\\dagger \\left| \\psi_j\\right\\rangle" }, { "math_id": 57, "text": "a_{ij} = \\left\\langle\\psi_i \\right| a \\left| \\psi_j\\right\\rangle" }, { "math_id": 58, "text": "\\psi_i" }, { "math_id": 59, "text": "H" }, { "math_id": 60, "text": "a(f)" }, { "math_id": 61, "text": "f\\," }, { "math_id": 62, "text": "\\begin{align}\n\\left[a(f), a(g)\\right] &= \\left[a^\\dagger(f), a^\\dagger(g)\\right] = 0 \\\\[1ex]\n\\left[a(f), a^\\dagger(g)\\right] &= \\langle f\\mid g \\rangle,\n\\end{align}" }, { "math_id": 63, "text": "a: f \\to a(f)" }, { "math_id": 64, "text": "a^\\dagger(f)" }, { "math_id": 65, "text": "f\\to a^\\dagger(f)" }, { "math_id": 66, "text": 
"\\begin{align}\n\\{a(f),a(g)\\} &= \\{a^\\dagger(f),a^\\dagger(g)\\} = 0 \\\\[1ex]\n\\{a(f),a^\\dagger(g)\\} &= \\langle f\\mid g \\rangle.\n\\end{align}" }, { "math_id": 67, "text": "C^*" }, { "math_id": 68, "text": "|f\\rangle" }, { "math_id": 69, "text": "\\left\\vert0\\right\\rangle" }, { "math_id": 70, "text": "a(f) \\left| 0\\right\\rangle=0." }, { "math_id": 71, "text": "\\langle f|f\\rangle = 1" }, { "math_id": 72, "text": "N=a^\\dagger(f)a(f)" }, { "math_id": 73, "text": "A" }, { "math_id": 74, "text": "A+A\\to \\empty" }, { "math_id": 75, "text": "n_{i}" }, { "math_id": 76, "text": "n_i \\, dt" }, { "math_id": 77, "text": "\\alpha n_{i}dt" }, { "math_id": 78, "text": "\\alpha n_i \\, dt" }, { "math_id": 79, "text": "n_i" }, { "math_id": 80, "text": "1-2\\alpha n_i \\, dt" }, { "math_id": 81, "text": "|\\dots, n_{-1}, n_0, n_1, \\dots\\rangle" }, { "math_id": 82, "text": "\\dots, |n_{-1}\\rangle" }, { "math_id": 83, "text": "|n_{0}\\rangle" }, { "math_id": 84, "text": "|n_{1}\\rangle, \\dots" }, { "math_id": 85, "text": "a\\left| n \\right\\rangle = \\sqrt{n} \\left|n-1\\right\\rangle" }, { "math_id": 86, "text": "a^\\dagger \\left| n\\right\\rangle= \\sqrt{n+1}\\left| n+1\\right\\rangle," }, { "math_id": 87, "text": "[a,a^{\\dagger}] = \\mathbf 1" }, { "math_id": 88, "text": "\\begin{align}\na \\left|n\\right\\rangle &= (n) \\left|n{-}1\\right\\rangle \\\\[1ex]\na^\\dagger \\left|n\\right\\rangle &= \\left| n{+}1\\right\\rangle\n\\end{align}" }, { "math_id": 89, "text": "[a,a^{\\dagger}]=\\mathbf 1" }, { "math_id": 90, "text": " a_i" }, { "math_id": 91, "text": " a" }, { "math_id": 92, "text": " |n_i\\rangle" }, { "math_id": 93, "text": " a^\\dagger_i" }, { "math_id": 94, "text": " a_{i-1} a^\\dagger_i" }, { "math_id": 95, "text": "(i-1)" }, { "math_id": 96, "text": "\\partial_{t}\\left| \\psi\\right\\rangle\n= -\\alpha \\sum_i \\left(2a_i^\\dagger a_i-a_{i-1}^\\dagger a_i-a_{i+1}^\\dagger a_i\\right) \\left|\\psi\\right\\rangle\n= -\\alpha\\sum_i \\left(a_i^\\dagger-a_{i-1}^\\dagger\\right)(a_i-a_{i-1}) \\left|\\psi\\right\\rangle. 
" }, { "math_id": 97, "text": "n" }, { "math_id": 98, "text": "n(n-1)" }, { "math_id": 99, "text": "\\lambda n(n-1)dt" }, { "math_id": 100, "text": "\\lambda \\sum_i (a_i a_i-a_i^\\dagger a_i^\\dagger a_i a_i)" }, { "math_id": 101, "text": "i" }, { "math_id": 102, "text": "\\partial_t\\left|\\psi\\right\\rangle = -\\alpha\\sum_i \\left(a_i^\\dagger-a_{i-1}^\\dagger\\right) \\left(a_i-a_{i-1}\\right) \\left|\\psi\\right\\rangle + \\lambda\\sum_i \\left(a_i^2-a_i^{\\dagger 2}a_i^2\\right) \\left|\\psi\\right\\rangle " }, { "math_id": 103, "text": "a^\\dagger_i" }, { "math_id": 104, "text": "a^{\\,}_i" }, { "math_id": 105, "text": "N = \\sum_i n_i = \\sum_i a^\\dagger_i a^{\\,}_i," }, { "math_id": 106, "text": "(n, \\ell, m, s)" }, { "math_id": 107, "text": "\\begin{align}\n\\left[a^{\\,}_i, a^\\dagger_j\\right] &\\equiv a^{\\,}_i a^\\dagger_j - a^\\dagger_ja^{\\,}_i = \\delta_{i j}, \\\\[1ex]\n\\left[a^\\dagger_i, a^\\dagger_j\\right] &= [a^{\\,}_i, a^{\\,}_j] = 0,\n\\end{align}" }, { "math_id": 108, "text": "[\\cdot , \\cdot ]" }, { "math_id": 109, "text": "\\delta_{i j}" }, { "math_id": 110, "text": "\\{\\cdot , \\cdot \\}" }, { "math_id": 111, "text": "\\begin{align}\n\\{a^{\\,}_i, a^\\dagger_j\\} &\\equiv a^{\\,}_i a^\\dagger_j +a^\\dagger_j a^{\\,}_i = \\delta_{i j}, \\\\[1ex]\n\\{a^\\dagger_i, a^\\dagger_j\\} &= \\{a^{\\,}_i, a^{\\,}_j\\} = 0.\n\\end{align}" }, { "math_id": 112, "text": "i \\ne j" }, { "math_id": 113, "text": "[\\hat a_{\\mathbf p},\\hat a_{\\mathbf q}^\\dagger] = \\delta(\\mathbf{p} - \\mathbf{q})" }, { "math_id": 114, "text": "[\\hat a_{\\mathbf p},\\hat a_{\\mathbf q}^\\dagger] = (2\\pi)^3\\delta(\\mathbf{p} - \\mathbf{q})" }, { "math_id": 115, "text": "[\\hat \\phi(\\mathbf x), \\hat \\pi(\\mathbf x')] = i\\delta(\\mathbf x - \\mathbf x')" }, { "math_id": 116, "text": "\\tilde{dk}=\\frac{d^3k}{(2\\pi)^3 2\\omega}" }, { "math_id": 117, "text": "[\\hat a_{\\mathbf k},\\hat a_{\\mathbf k'}^\\dagger] = (2\\pi)^3 2\\omega\\,\\delta(\\mathbf{k} - \\mathbf{k}')" } ]
https://en.wikipedia.org/wiki?curid=701991
70200683
Nielsen's theorem
Nielsen's theorem is a result in quantum information, due to Michael Nielsen, concerning transformations between bipartite states. It makes use of majorization. Statement. A bipartite state formula_0 can be transformed into another state formula_1 using local operations and classical communication if and only if formula_2 is majorized by formula_3, where the formula_4 are the Schmidt coefficients of the respective states. This can be written more concisely as formula_5 iff formula_6. Proof. The proof is given in Nielsen's original paper. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
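The criterion is easy to check numerically for explicit states: the Schmidt coefficients are the squared singular values of the matrix of amplitudes, and majorization is a comparison of sorted partial sums. Below is a small illustrative sketch, assuming Python with NumPy; the two diagonal amplitude matrices are arbitrary example states with Schmidt coefficients (0.6, 0.4) and (0.9, 0.1).

```python
import numpy as np

def schmidt_coeffs(amplitude_matrix):
    """Schmidt coefficients = squared singular values of the coefficient matrix."""
    s = np.linalg.svd(amplitude_matrix, compute_uv=False)
    return np.sort(s ** 2)[::-1]

def majorizes(lam_phi, lam_psi):
    """True if lam_psi is majorized by lam_phi (partial sums of the sorted vectors)."""
    return np.all(np.cumsum(lam_psi) <= np.cumsum(lam_phi) + 1e-12)

# |psi> with Schmidt coefficients (0.6, 0.4); |phi> with (0.9, 0.1) -- illustrative choices
psi = np.diag([np.sqrt(0.6), np.sqrt(0.4)])
phi = np.diag([np.sqrt(0.9), np.sqrt(0.1)])
lam_psi, lam_phi = schmidt_coeffs(psi), schmidt_coeffs(phi)
print(majorizes(lam_phi, lam_psi))   # True: |psi> -> |phi> is achievable by LOCC
print(majorizes(lam_psi, lam_phi))   # False: the reverse transformation is not
```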
[ { "math_id": 0, "text": "|\\psi\\rangle" }, { "math_id": 1, "text": "| \\phi \\rangle" }, { "math_id": 2, "text": "\\lambda_{\\psi}" }, { "math_id": 3, "text": "\\lambda_{\\phi}" }, { "math_id": 4, "text": "\\lambda_i" }, { "math_id": 5, "text": "|\\psi \\rangle \\rightarrow |\\phi\\rangle " }, { "math_id": 6, "text": "\\lambda_{\\psi} \\prec \\lambda_{\\phi}" } ]
https://en.wikipedia.org/wiki?curid=70200683
70201053
Acín decomposition
In a 2000 paper titled "Generalized Schmidt Decomposition and Classification of Three-Quantum-Bit States", Acín et al. described a way of separating out one of the terms of a general tripartite quantum state. This can be useful in considering measures of entanglement of quantum states. General decomposition. For a general three-qubit state formula_0 there is no way of writing formula_1, but there is a general transformation to formula_2 where formula_3. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\psi\\rangle=a_{000}\\left|0_{A}\\right\\rangle\\left|0_{B}\\right\\rangle\\left|0_{C}\\right\\rangle+a_{001}\\left|0_{A}\\right\\rangle\\left|0_{B}\\right\\rangle\\left|1_{C}\\right\\rangle+a_{010}\\left|0_{A}\\right\\rangle\\left|1_{B}\\right\\rangle\\left|0_{C}\\right\\rangle+a_{011}\\left|0_{A}\\right\\rangle\\left|1_{B}\\right\\rangle\\left|1_{C}\\right\\rangle +a_{100}\\left|1_{A}\\right\\rangle\\left|0_{B}\\right\\rangle\\left|0_{C}\\right\\rangle+a_{101}\\left|1_{A}\\right\\rangle\\left|0_{B}\\right\\rangle\\left|1_{C}\\right\\rangle+a_{110}\\left|1_{A}\\right\\rangle\\left|1_{B}\\right\\rangle\\left|0_{C}\\right\\rangle+a_{111}\\left|1_{A}\\right\\rangle\\left|1_{B}\\right\\rangle\\left|1_{C}\\right\\rangle" }, { "math_id": 1, "text": "\\left|\\psi_{A, B, C}\\right\\rangle \\neq \\sqrt{\\lambda_{0}}\\left|0_{A}^{\\prime}\\right\\rangle\\left|0_{B}^{\\prime}\\right\\rangle\\left|0_{C}^{\\prime}\\right\\rangle+\\sqrt{\\lambda_{1}}\\left|1_{A}^{\\prime}\\right\\rangle\\left|1_{B}^{\\prime}\\right\\rangle\\left|1_{C}^{\\prime}\\right\\rangle" }, { "math_id": 2, "text": "|\\psi\\rangle = \\lambda_{1} |0_{A}^{}\\rangle|0_{B}^{}\\rangle|0_{C}^{}\\rangle+|1_{A}^{}\\rangle(\\lambda_{2} e^{i \\phi}|0_{B}^{}\\rangle|0_{C}^{}\\rangle+\\lambda_{3}|0_{B}^{}\\rangle|1_{C}^{}\\rangle+\\lambda_{4}|1_{B}^{}\\rangle|0_{C}^{}\\rangle+\\lambda_{5}|1_{B}^{}\\rangle|1_{C}^{}\\rangle)" }, { "math_id": 3, "text": "\\lambda_{i} \\geq 0, \\sum_{i=1}^{5} \\lambda_{i}^{2}=1" } ]
https://en.wikipedia.org/wiki?curid=70201053
7020660
Clenshaw–Curtis quadrature
Numerical integration method Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or "quadrature", that are based on an expansion of the integrand in terms of Chebyshev polynomials. Equivalently, they employ a change of variables formula_0 and use a discrete cosine transform (DCT) approximation for the cosine series. Besides having fast-converging accuracy comparable to Gaussian quadrature rules, Clenshaw–Curtis quadrature naturally leads to nested quadrature rules (where different accuracy orders share points), which is important for both adaptive quadrature and multidimensional quadrature (cubature). Briefly, the function formula_1 to be integrated is evaluated at the formula_2 extrema or roots of a Chebyshev polynomial and these values are used to construct a polynomial approximation for the function. This polynomial is then integrated exactly. In practice, the integration weights for the value of the function at each node are precomputed, and this computation can be performed in formula_3 time by means of fast Fourier transform-related algorithms for the DCT. General method. A simple way of understanding the algorithm is to realize that Clenshaw–Curtis quadrature (proposed by those authors in 1960) amounts to integrating via a change of variable "x" = cos("θ"). The algorithm is normally expressed for integration of a function "f"("x") over the interval [−1,1] (any other interval can be obtained by appropriate rescaling). For this integral, we can write: formula_4 That is, we have transformed the problem from integrating formula_1 to one of integrating formula_5. This can be performed if we know the cosine series for formula_6: formula_7 in which case the integral becomes: formula_8 Of course, in order to calculate the cosine series coefficients formula_9 one must again perform a numeric integration, so at first this may not seem to have simplified the problem. Unlike computation of arbitrary integrals, however, Fourier-series integrations for periodic functions (like formula_10, by construction), up to the Nyquist frequency formula_11, are accurately computed by the formula_12 equally spaced and equally weighted points formula_13 for formula_14 (except the endpoints are weighted by 1/2, to avoid double-counting, equivalent to the trapezoidal rule or the Euler–Maclaurin formula). That is, we approximate the cosine-series integral by the type-I discrete cosine transform (DCT): formula_15 for formula_16 and then use the formula above for the integral in terms of these formula_17. Because only formula_18 is needed, the formula simplifies further into a type-I DCT of order "N"/2, assuming "N" is an even number: formula_19 From this formula, it is clear that the Clenshaw–Curtis quadrature rule is symmetric, in that it weights "f"("x") and "f"(−"x") equally. Because of aliasing, one only computes the coefficients formula_18 up to "k" = "N"/2, since discrete sampling of the function makes the frequency of 2"k" indistinguishable from that of "N"–2"k". Equivalently, the formula_18 are the amplitudes of the unique bandlimited trigonometric interpolation polynomial passing through the "N"+1 points where "f"(cos "θ") is evaluated, and we approximate the integral by the integral of this interpolation polynomial. 
There is some subtlety in how one treats the formula_20 coefficient in the integral, however—to avoid double-counting with its alias it is included with weight 1/2 in the final approximate integral (as can also be seen by examining the interpolating polynomial): formula_21 Connection to Chebyshev polynomials. The reason that this is connected to the Chebyshev polynomials formula_22 is that, by definition, formula_23, and so the cosine series above is really an approximation of formula_1 by Chebyshev polynomials: formula_24 and thus we are "really" integrating formula_1 by integrating its approximate expansion in terms of Chebyshev polynomials. The evaluation points formula_25 correspond to the extrema of the Chebyshev polynomial formula_26. The fact that such Chebyshev approximation is just a cosine series under a change of variables is responsible for the rapid convergence of the approximation as more terms formula_27 are included. A cosine series converges very rapidly for functions that are even, periodic, and sufficiently smooth. This is true here, since formula_10 is even and periodic in formula_28 by construction, and is "k"-times differentiable everywhere if formula_1 is "k"-times differentiable on formula_29. (In contrast, directly applying a cosine-series expansion to formula_1 instead of formula_6 will usually "not" converge rapidly because the slope of the even-periodic extension would generally be discontinuous.) Fejér quadrature. Fejér proposed two quadrature rules very similar to Clenshaw–Curtis quadrature, but much earlier (in 1933). Of these two, Fejér's "second" quadrature rule is nearly identical to Clenshaw–Curtis. The only difference is that the endpoints formula_30 and formula_31 are set to zero. That is, Fejér only used the "interior" extrema of the Chebyshev polynomials, i.e. the true stationary points. Fejér's "first" quadrature rule evaluates the formula_17 by evaluating formula_10 at a different set of equally spaced points, halfway between the extrema: formula_32 for formula_33. These are the "roots" of formula_34, and are known as the Chebyshev nodes. (These equally spaced midpoints are the only other choice of quadrature points that preserve both the even symmetry of the cosine transform and the translational symmetry of the periodic Fourier series.) This leads to a formula: formula_35 which is precisely the type-II DCT. However, Fejér's first quadrature rule is not nested: the evaluation points for 2"N" do not coincide with any of the evaluation points for "N", unlike Clenshaw–Curtis quadrature or Fejér's second rule. Despite the fact that Fejér discovered these techniques before Clenshaw and Curtis, the name "Clenshaw–Curtis quadrature" has become standard. Comparison to Gaussian quadrature. The classic method of Gaussian quadrature evaluates the integrand at formula_12 points and is constructed to "exactly" integrate polynomials up to degree formula_36. In contrast, Clenshaw–Curtis quadrature, above, evaluates the integrand at formula_12 points and exactly integrates polynomials only up to degree formula_2. It may seem, therefore, that Clenshaw–Curtis is intrinsically worse than Gaussian quadrature, but in reality this does not seem to be the case. In practice, several authors have observed that Clenshaw–Curtis can have accuracy comparable to that of Gaussian quadrature for the same number of points. 
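This comparable accuracy can be observed in a small numerical experiment. The sketch below is an illustrative implementation, assuming Python with NumPy and SciPy: it samples the integrand at the Chebyshev extrema, forms the even cosine-series coefficients with a naive type-I DCT (no FFT speedup), integrates the series term by term using the formulas above, and compares the result with Gauss–Legendre quadrature using the same number of function evaluations; the test integrand is an arbitrary smooth choice.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import erf

def clenshaw_curtis(f, N):
    """Clenshaw-Curtis estimate of the integral of f over [-1, 1], N even."""
    n = np.arange(N + 1)
    y = f(np.cos(n * np.pi / N))                 # samples at the Chebyshev extrema
    a = np.zeros(N // 2 + 1)                     # a[k] approximates a_{2k}
    for k in range(N // 2 + 1):
        terms = y * np.cos(2 * k * n * np.pi / N)
        terms[0] *= 0.5                          # endpoints weighted by 1/2
        terms[-1] *= 0.5
        a[k] = (2.0 / N) * terms.sum()
    integral = a[0] + a[N // 2] / (1 - N ** 2)
    integral += sum(2 * a[k] / (1 - (2 * k) ** 2) for k in range(1, N // 2))
    return integral

f = lambda x: np.exp(-x ** 2)                    # smooth test integrand (arbitrary choice)
exact = np.sqrt(np.pi) * erf(1.0)                # exact value of its integral over [-1, 1]

for N in (4, 8, 16):
    xg, wg = leggauss(N + 1)                     # Gauss-Legendre with N+1 points
    print(N + 1, abs(clenshaw_curtis(f, N) - exact), abs(wg @ f(xg) - exact))
```

For this smooth integrand both errors fall off rapidly as points are added.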
This is possible because most numeric integrands are not polynomials (especially since polynomials can be integrated analytically), and approximation of many functions in terms of Chebyshev polynomials converges rapidly (see Chebyshev approximation). In fact, recent theoretical results argue that both Gaussian and Clenshaw–Curtis quadrature have error bounded by formula_37 for a "k"-times differentiable integrand. One often cited advantage of Clenshaw–Curtis quadrature is that the quadrature weights can be evaluated in formula_3 time by fast Fourier transform algorithms (or their analogues for the DCT), whereas most algorithms for Gaussian quadrature weights required formula_38 time to compute. However, recent algorithms have attained formula_39 complexity for Gauss–Legendre quadrature. As a practical matter, high-order numeric integration is rarely performed by simply evaluating a quadrature formula for very large formula_2. Instead, one usually employs an adaptive quadrature scheme that first evaluates the integral to low order, and then successively refines the accuracy by increasing the number of sample points, possibly only in regions where the integral is inaccurate. To evaluate the accuracy of the quadrature, one compares the answer with that of a quadrature rule of even lower order. Ideally, this lower-order quadrature rule evaluates the integrand at a "subset" of the original "N" points, to minimize the integrand evaluations. This is called a nested quadrature rule, and here Clenshaw–Curtis has the advantage that the rule for order "N" uses a subset of the points from order 2"N". In contrast, Gaussian quadrature rules are not naturally nested, and so one must employ Gauss–Kronrod quadrature formulas or similar methods. Nested rules are also important for sparse grids in multidimensional quadrature, and Clenshaw–Curtis quadrature is a popular method in this context. Integration with weight functions. More generally, one can pose the problem of integrating an arbitrary formula_1 against a fixed "weight function" formula_40 that is known ahead of time: formula_41 The most common case is formula_42, as above, but in certain applications a different weight function is desirable. The basic reason is that, since formula_40 can be taken into account "a priori", the integration error can be made to depend only on the accuracy in approximating formula_1, regardless of how badly behaved the weight function might be. Clenshaw–Curtis quadrature can be generalized to this case as follows. As before, it works by finding the cosine-series expansion of formula_6 via a DCT, and then integrating each term in the cosine series. Now, however, these integrals are of the form formula_43 For most formula_40, this integral cannot be computed analytically, unlike before. Since the same weight function is generally used for many integrands formula_1, however, one can afford to compute these formula_44 numerically to high accuracy beforehand. Moreover, since formula_40 is generally specified analytically, one can sometimes employ specialized methods to compute formula_44. For example, special methods have been developed to apply Clenshaw–Curtis quadrature to integrands of the form formula_45 with a weight function formula_40 that is highly oscillatory, e.g. a sinusoid or Bessel function (see, e.g., Evans &amp; Webster, 1999). 
This is useful for high-accuracy Fourier series and Fourier–Bessel series computation, where simple formula_42 quadrature methods are problematic because of the high accuracy required to resolve the contribution of rapid oscillations. Here, the rapid-oscillation part of the integrand is taken into account via specialized methods for formula_44, whereas the unknown function formula_1 is usually better behaved. Another case where weight functions are especially useful is if the integrand is unknown but has a known singularity of some form, e.g. a known discontinuity or integrable divergence (such as ) at some point. In this case the singularity can be pulled into the weight function formula_40 and its analytical properties can be used to compute formula_44 accurately beforehand. Note that Gaussian quadrature can also be adapted for various weight functions, but the technique is somewhat different. In Clenshaw–Curtis quadrature, the integrand is always evaluated at the same set of points regardless of formula_40, corresponding to the extrema or roots of a Chebyshev polynomial. In Gaussian quadrature, different weight functions lead to different orthogonal polynomials, and thus different roots where the integrand is evaluated. Integration on infinite and semi-infinite intervals. It is also possible to use Clenshaw–Curtis quadrature to compute integrals of the form formula_46 and formula_47, using a coordinate-remapping technique. High accuracy, even exponential convergence for smooth integrands, can be retained as long as formula_1 decays sufficiently quickly as |"x"| approaches infinity. One possibility is to use a generic coordinate transformation such as "x" = "t"/(1−"t"2) formula_48 to transform an infinite or semi-infinite interval into a finite one, as described in Numerical integration. There are also additional techniques that have been developed specifically for Clenshaw–Curtis quadrature. For example, one can use the coordinate remapping formula_49, where "L" is a user-specified constant (one could simply use "L"=1; an optimal choice of "L" can speed convergence, but is problem-dependent), to transform the semi-infinite integral into: formula_50 The factor multiplying sin("θ"), "f"(...)/(...)2, can then be expanded in a cosine series (approximately, using the discrete cosine transform) and integrated term-by-term, exactly as was done for "f"(cos "θ") above. To eliminate the singularity at "θ"=0 in this integrand, one merely requires that "f"("x") go to zero sufficiently fast as "x" approaches infinity, and in particular "f"("x") must decay at least as fast as 1/"x"3/2. For a doubly infinite interval of integration, one can use the coordinate remapping formula_51 (where "L" is a user-specified constant as above) to transform the integral into: formula_52 In this case, we have used the fact that the remapped integrand "f"("L" cot "θ")/sin2("θ") is already periodic and so can be directly integrated with high (even exponential) accuracy using the trapezoidal rule (assuming "f" is sufficiently smooth and rapidly decaying); there is no need to compute the cosine series as an intermediate step. Note that the quadrature rule does not include the endpoints, where we have assumed that the integrand goes to zero. The formula above requires that "f"("x") decay faster than 1/"x"2 as "x" goes to ±∞. (If "f" decays exactly as 1/"x"2, then the integrand goes to a finite value at the endpoints and these limits must be included as endpoint terms in the trapezoidal rule.). 
However, if "f" decays only polynomially quickly, then it may be necessary to use a further step of Clenshaw–Curtis quadrature to obtain exponential accuracy of the remapped integral instead of the trapezoidal rule, depending on more details of the limiting properties of "f": the problem is that, although "f"("L" cot"θ")/sin2("θ") is indeed periodic with period π, it is not necessarily smooth at the endpoints if all the derivatives do not vanish there [e.g. the function "f"("x") = tanh("x"3)/"x"3 decays as 1/"x"3 but has a jump discontinuity in the slope of the remapped function at θ=0 and π]. Another coordinate-remapping approach was suggested for integrals of the form formula_53, in which case one can use the transformation formula_54 to transform the integral into the form formula_55 where formula_56, at which point one can proceed identically to Clenshaw–Curtis quadrature for "f" as above. Because of the endpoint singularities in this coordinate remapping, however, one uses Fejér's first quadrature rule [which does not evaluate "f"(−1)] unless "g"(∞) is finite. Precomputing the quadrature weights. In practice, it is inconvenient to perform a DCT of the sampled function values "f"(cos θ) for each new integrand. Instead, one normally precomputes quadrature weights formula_57 (for "n" from 0 to "N"/2, assuming that "N" is even) so that formula_58 These weights formula_57 are also computed by a DCT, as is easily seen by expressing the computation in terms of matrix algebra. In particular, we computed the cosine series coefficients formula_18 via an expression of the form: formula_59 where "D" is the matrix form of the ("N"/2+1)-point type-I DCT from above, with entries (for zero-based indices): formula_60 and formula_61 is formula_62 As discussed above, because of aliasing, there is no point in computing coefficients beyond formula_63, so "D" is an formula_64 matrix. In terms of these coefficients "c", the integral is approximately: formula_65 from above, where "c" is the vector of coefficients formula_18 above and "d" is the vector of integrals for each Fourier coefficient: formula_66 (Note, however, that these weight factors are altered if one changes the DCT matrix "D" to use a different normalization convention. For example, it is common to define the type-I DCT with additional factors of 2 or √2 factors in the first and last rows or columns, which leads to corresponding alterations in the "d" entries.) The formula_67 summation can be re-arranged to: formula_68 where "w" is the vector of the desired weights formula_57 above, with:formula_69 Since the transposed matrix formula_70 is also a DCT (e.g., the transpose of a type-I DCT is a type-I DCT, possibly with a slightly different normalization depending on the conventions that are employed), the quadrature weights "w" can be precomputed in "O"("N" log "N") time for a given "N" using fast DCT algorithms. The weights formula_57 are positive and their sum is equal to one.
[ { "math_id": 0, "text": "x = \\cos \\theta" }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "O(N \\log N)" }, { "math_id": 4, "text": "\\int_{-1}^1 f(x)\\,dx = \\int_0^\\pi f(\\cos \\theta) \\sin(\\theta)\\, d\\theta . " }, { "math_id": 5, "text": "f(\\cos \\theta) \\sin \\theta" }, { "math_id": 6, "text": "f(\\cos \\theta)" }, { "math_id": 7, "text": "f(\\cos \\theta) = \\frac{a_0}{2} + \\sum_{k=1}^\\infty a_k \\cos (k\\theta)" }, { "math_id": 8, "text": "\\int_0^\\pi f(\\cos \\theta) \\sin(\\theta)\\, d\\theta = a_0 + \\sum_{k=1}^\\infty \\frac{2 a_{2k}}{1 - (2k)^2} ." }, { "math_id": 9, "text": "a_k = \\frac{2}{\\pi} \\int_0^\\pi f(\\cos \\theta) \\cos(k \\theta)\\, d\\theta,\\quad k=0,1,2,\\dots," }, { "math_id": 10, "text": "f(\\cos\\theta)" }, { "math_id": 11, "text": "k=N" }, { "math_id": 12, "text": "N+1" }, { "math_id": 13, "text": "\\theta_n = n \\pi / N" }, { "math_id": 14, "text": "n = 0,\\ldots,N" }, { "math_id": 15, "text": "a_k \\approx \\frac{2}{N} \\left[ \\frac{f(1)}{2} + \\frac{f(-1)}{2} (-1)^k + \\sum_{n=1}^{N-1} f(\\cos[n\\pi/N]) \\cos(n k \\pi/N) \\right]" }, { "math_id": 16, "text": "k = 0,\\ldots,N" }, { "math_id": 17, "text": "a_k" }, { "math_id": 18, "text": "a_{2k}" }, { "math_id": 19, "text": "a_{2k} \\approx \\frac{2}{N} \\left[ \\frac{f(1) + f(-1)}{2} + f(0) (-1)^k + \\sum_{n=1}^{N/2-1} \\left\\{ f(\\cos[n\\pi/N]) + f(-\\cos[n\\pi/N]) \\right\\} \\cos\\left(\\frac{n k \\pi}{N/2}\\right) \\right]" }, { "math_id": 20, "text": "a_{N}" }, { "math_id": 21, "text": "\\int_0^\\pi f(\\cos \\theta) \\sin(\\theta)\\, d\\theta \\approx a_0 + \\sum_{k=1}^{N/2-1} \\frac{2 a_{2k}}{1 - (2k)^2} + \\frac{a_{N}}{1 - N^2}." }, { "math_id": 22, "text": "T_k(x)" }, { "math_id": 23, "text": "T_k(\\cos\\theta) = \\cos(k\\theta)" }, { "math_id": 24, "text": "f(x) = \\frac{a_0}{2} T_0(x) + \\sum_{k=1}^\\infty a_k T_k(x)," }, { "math_id": 25, "text": "x_n = \\cos(n\\pi/N)" }, { "math_id": 26, "text": "T_N(x)" }, { "math_id": 27, "text": "T_k (x)" }, { "math_id": 28, "text": "\\theta" }, { "math_id": 29, "text": "[-1,1]" }, { "math_id": 30, "text": "f(-1)" }, { "math_id": 31, "text": "f(1)" }, { "math_id": 32, "text": "\\theta_n = (n + 0.5) \\pi / N" }, { "math_id": 33, "text": "0 \\leq n < N" }, { "math_id": 34, "text": "T_N(\\cos\\theta)" }, { "math_id": 35, "text": "a_k \\approx \\frac{2}{N} \\sum_{n=0}^{N-1} f(\\cos[(n+0.5)\\pi/N]) \\cos[(n+0.5) k \\pi/N] " }, { "math_id": 36, "text": "2N+1" }, { "math_id": 37, "text": "O([2N]^{-k}/k)" }, { "math_id": 38, "text": "O(N^2)" }, { "math_id": 39, "text": "O(N)" }, { "math_id": 40, "text": "w(x)" }, { "math_id": 41, "text": "\\int_{-1}^1 f(x) w(x)\\,dx = \\int_0^\\pi f(\\cos \\theta) w(\\cos\\theta) \\sin(\\theta)\\, d\\theta . " }, { "math_id": 42, "text": "w(x) = 1" }, { "math_id": 43, "text": "W_k = \\int_0^\\pi w(\\cos \\theta) \\cos(k \\theta) \\sin(\\theta)\\, d\\theta . " }, { "math_id": 44, "text": "W_k" }, { "math_id": 45, "text": "f(x) w(x)" }, { "math_id": 46, "text": "\\int_0^\\infty f(x)\\,dx" }, { "math_id": 47, "text": "\\int_{-\\infty}^\\infty f(x)\\,dx" }, { "math_id": 48, "text": "\n\\int_{-\\infty}^{\\infty}f(x)\\,dx = \\int_{-1}^{1} f\\left(\\frac{t}{1-t^2}\\right)\\frac{1+t^2}{(1-t^2)^2}\\,dt,\n" }, { "math_id": 49, "text": "x = L \\cot^2(\\theta/2)" }, { "math_id": 50, "text": "\\int_0^\\infty f(x)\\,dx = 2L \\int_0^\\pi \\frac{f[L \\cot^2(\\theta/2)]}{[1 - \\cos(\\theta)]^2} \\sin(\\theta)\\,d\\theta ." 
}, { "math_id": 51, "text": "x = L\\cot(\\theta)" }, { "math_id": 52, "text": "\\int_{-\\infty}^\\infty f(x)\\,dx = L \\int_0^\\pi \\frac{f[L \\cot(\\theta)]}{\\sin^2(\\theta)}\\,d\\theta\n\\approx \\frac{L\\pi}{N} \\sum_{n=1}^{N-1} \\frac{f[L \\cot(n\\pi/N)]}{\\sin^2(n\\pi/N)}." }, { "math_id": 53, "text": "\\int_0^\\infty e^{-x} g(x)\\,dx" }, { "math_id": 54, "text": "x = -\\ln[(1 + \\cos\\theta)/2]" }, { "math_id": 55, "text": "\\int_0^\\pi f(\\cos\\theta)\\sin\\theta \\,d\\theta" }, { "math_id": 56, "text": "f(u) = g(-\\ln[(1+u)/2])/2" }, { "math_id": 57, "text": "w_n" }, { "math_id": 58, "text": "\\int_{-1}^1 f(x)\\,dx \\approx \\sum_{n=0}^{N/2} w_n \\left\\{ f(\\cos[n\\pi/N]) + f(-\\cos[n\\pi/N]) \\right\\} ." }, { "math_id": 59, "text": "c = \\begin{pmatrix} a_0 \\\\ a_2 \\\\ a_4 \\\\ \\vdots \\\\ a_N \\end{pmatrix} = D \\begin{pmatrix} y_0 \\\\ y_1 \\\\ y_2 \\\\ \\vdots \\\\ y_{N/2} \\end{pmatrix} = Dy, " }, { "math_id": 60, "text": "D_{kn} = \\frac{2}{N} \\cos\\left(\\frac{nk\\pi}{N/2}\\right) \\times \\begin{cases} 1/2 & n=0,N/2 \\\\ 1 & \\mathrm{otherwise} \\end{cases}" }, { "math_id": 61, "text": "y_n" }, { "math_id": 62, "text": "y_n = f(\\cos[n\\pi/N]) + f(-\\cos[n\\pi/N]) . " }, { "math_id": 63, "text": "a_N" }, { "math_id": 64, "text": "(N/2+1)\\times(N/2+1)" }, { "math_id": 65, "text": "\\int_{-1}^1 f(x)\\,dx \\approx a_0 + \\sum_{k=1}^{N/2-1} \\frac{2 a_{2k}}{1 - (2k)^2} + \\frac{a_{N}}{1 - N^2} = d^T c," }, { "math_id": 66, "text": "d = \\begin{pmatrix} 1 \\\\ 2/(1-4) \\\\ 2/(1-16) \\\\ \\vdots \\\\ 2 / (1-[N-2]^2) \\\\ 1 / (1-N^2) \\end{pmatrix}." }, { "math_id": 67, "text": "d^T c" }, { "math_id": 68, "text": "\\int_{-1}^1 f(x)\\,dx \\approx d^T c = d^T D y = (D^T d)^T y = w^T y" }, { "math_id": 69, "text": "w = D^T d. " }, { "math_id": 70, "text": "D^T" } ]
https://en.wikipedia.org/wiki?curid=7020660
7020888
Affine action
Let formula_0 be the Weyl group of a semisimple Lie algebra formula_1 (associated to a fixed choice of Cartan subalgebra formula_2). Assume that a set of simple roots in formula_3 has been chosen. The affine action (also called the "dot action") of the Weyl group on the space formula_3 is formula_4 where formula_5 is the sum of all fundamental weights or, equivalently, half the sum of all positive roots.
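As a small concrete illustration, consider type A, where the Weyl group can be realized as permutations of coordinates. The sketch below, in plain Python, applies the dot action w·λ = w(λ + δ) − δ for sl(3); taking δ = (2, 1, 0) is an assumption that is valid up to an overall shift along (1, 1, 1), which permutations fix and which therefore drops out of the dot action, and the weight (3, 1, 0) is an arbitrary example.

```python
from itertools import permutations

def dot_action(w, lam, delta):
    """Affine ("dot") action w.lambda = w(lambda + delta) - delta,
    with w realized as a permutation of coordinates."""
    shifted = tuple(l + d for l, d in zip(lam, delta))
    permuted = tuple(shifted[i] for i in w)       # w acting by permuting coordinates
    return tuple(p - d for p, d in zip(permuted, delta))

delta = (2, 1, 0)          # valid up to a shift along (1, 1, 1), which drops out
lam = (3, 1, 0)            # an arbitrary integral weight of sl(3), for illustration
for w in permutations(range(3)):
    print(w, dot_action(w, lam, delta))
```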
[ { "math_id": 0, "text": "W" }, { "math_id": 1, "text": "\\mathfrak{g}" }, { "math_id": 2, "text": "\\mathfrak{h}" }, { "math_id": 3, "text": "\\mathfrak{h}^*" }, { "math_id": 4, "text": "w\\cdot \\lambda:=w(\\lambda+\\delta)-\\delta" }, { "math_id": 5, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=7020888
70208980
Joshua 24
Book of Joshua chapter Joshua 24 is the twenty-fourth (and the final) chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records Joshua's final address to the people of Israel, that ends with a renewal of the covenant with YHWH, and the appendices of the book, a part of a section comprising Joshua 22:1–24:33 about the Israelites preparing for life in the land of Canaan. Text. This chapter was originally written in the Hebrew language. It is divided into 33 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The narrative of Israelites preparing for life in the land comprising verses 22:1 to 24:33 of the Book of Joshua and has the following outline: A. The Jordan Altar (22:1–34) B. Joshua's Farewell (23:1–16) 1. The Setting (23:1–2a) 2. The Assurance of the Allotment (23:2b–5) 3. Encouragement to Enduring Faithfulness (23:6–13) 4. The Certain Fulfillment of God's Word (23:14–16) C. Covenant and Conclusion (24:1–33) 1. Covenant at Shechem (24:1–28) a. Summoning the Tribes (24:1) b. Review of Covenant History (24:2–13) c. Joshua's Challenge to Faithful Worship (24:14–24) i. Joshua's Opening Challenge (24:14–15) ii. The People's Response (24:16–18) iii. Dialogue on Faithful Worship (24:19–24) d. Covenant Made at Shechem (24:25–28) 2. Conclusion: Three Burials (24:29–33) a. Joshua (24:29–31) b. Joseph (24:32) c. Eleazar (24:33) The book of Joshua is concluded with two distinct ceremonies, each seeming in itself to be a finale: Covenant at Shechem (24:1–28). Joshua's final farewell address to the people of Israel in this chapter was during a ceremony in Shechem (verse 1), which has important roots in the narrative of exodus and conquest (Deuteronomy 11:29; 27; Joshua 8:30-5), and has a strong association with covenant. The importance of Shechem is supported in the Book of Judges with a reference to a temple of 'Baal-berith' (or 'El-berith'), that is, the 'lord' (or 'god') 'of the covenant' (Judges 9:4, 46). This chapter exhibits unique features: The narrative in form of a literary construction resembles the ancient treaty, with real significance, that it records the actual commitment of the people of Israel to YHWH rather than to other gods, and their acceptance of this as the basis of their lives. The historical context of the narrative draws on themes that belong to Israel's traditions: the origins of Israel's ancestors in Mesopotamia and the patriarchal line (verses 2–4, cf. Genesis 11:27–12:9), the Exodus from Egypt and the wilderness wanderings (verses 5–9), the conflicts in Transjordan and the Balaam story (verses 9–10, cf. Numbers 22–24), and the conquest of Canaan. 
Archaeology has found structures at the remains of ancient Shechem and on Mount Ebal, which could be linked to this ceremony and to the one recorded in Joshua 8:30–35. Now the Israelites are to enter into a covenant renewal (following the covenants at Mount Horeb and the plain of Moab), they are called to exclusive loyalty (verses 14–15), challenged with the possibility that they "cannot serve the LORD", on the basis that it seems evil, unjust, unreasonable, or inconvenient to do so. A strong warning is given not to think that loyalty to YHWH will be easy and to enter the covenant lightly (Deuteronomy 9:4–7). This is based on the inclination of the early generations of Israel to resort to other gods from the beginning (Exodus 32; Numbers 25), that Deuteronomy 32 portrays Israel as unfaithful. The effect here could be rhetorical as the generation of Joshua is pictured as faithful (Judges 2:7,10). "And Joshua wrote these words in the Book of the Law of God. And he took a large stone and set it up there under the terebinth that was by the sanctuary of the LORD." Three burials (24:29–33). Four short units conclude the whole book, and, in a sense, the Hexateuch (Books of Genesis–Joshua). The deaths of Joshua and Eleazar, who were co-responsible for the division of the land, are recorded as the outer framing sections of these four units, signalling the end of the era of conquest and settlement (cf. Moses' death as the end of the period of exodus; Deuteronomy 34). Joshua is finally given the title 'servant of the LORD' (like Moses), and he was buried in Timnath-serah on the land given to him as a personal inheritance (Joshua 19:49-50; cf. Judges 2:8-9). The note concerning Israel records that they were faithful during Joshua's lifetime, agreeing with Judges 2:7, bringing the completed aspiration in Joshua of 'a people dwelling peacefully and obediently in a land given in fulfilment of God's promise'. The emphasis is on 'service', or worship, of YHWH, echoing the commitment undertaken in the covenant dialogue (verses 14–22). "And the bones of Joseph, which the children of Israel brought up out of Egypt, buried they in Shechem, in a parcel of ground which Jacob bought of the sons of Hamor the father of Shechem for an hundred pieces of silver: and it became the inheritance of the children of Joseph." Verse 32. The record of Joseph's burial connects expressly with Genesis 50 and , placing the story of Joshua in a broader context that the 'ending' achieved in it relates to the promises to the patriarchs long time ago, the great theme in the Book of Genesis: Joseph's bones were finally buried in the land of Canaan, in Shechem, in the territory of Joseph's firstborn son Manasseh. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70208980
70208988
Judges 2
Book of Judges, chapter 2 Judges 2 is the second chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the military failure and apostasy of the Israelites following the introduction in the first chapter. Text. This chapter was originally written in the Hebrew language. It is divided into 23 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Angel of the Lord at Bochim (2:1–5). This brief section about a theophany serves as a connecting link between the previous and the subsequent chapters, responding to the Israelites' request for divine guidance in Judges 1:1 with a reminder about God's covenantal promise to give Israel the land (back to the era of the patriarchs) being kept faithfully as witnessed by the redemption from Egypt (Judges 2:1), but the future of the covenant was conditional to the faithfulness of Israel, as a covenant partner, to YHWH alone. The failure to drive out the enemy described in 1:28–36 were not really because of military weakness (1:19), but due to the unfaithfulness of Israel to the covenant (2:2–3). People's reaction to these harsh predictions provides the etymology for the place where the angel appeared (2:4–5). "Then the Angel of the LORD came up from Gilgal to Bochim, and said: "I led you up from Egypt and brought you to the land of which I swore to your fathers; and I said, 'I will never break My covenant with you'"" It is also possible that this “the angel of the Lord” is the same as “the captain of the Lord’s host,” who appeared to Joshua at Jericho (Joshua 5:13-15). Israel's pattern of disobedience (2:6–23). This section laid out a theologically grounded view of history throughout this book: Israel's military and political fortunes depend not on pragmatic matters such as economic strength, political unity, or military preparedness, but rather on the people's faithfulness to the covenantal relationship with God, and appears also to depend on strong leaders, such as Joshua (verse 6–7). When Joshua and the generation of the Exodus died, a new generation replaced them, but they 'did not know YHWH or the work he had done for Israel' (verse 10) and this generally signaled trouble for Israel in other biblical texts (cf. Exodus 1:8; 1 Kings 12:8). Verses 11–23 outline the pattern of Israel's history under the judges as follows: This framework is comparable to the theology and language in Deuteronomy 4:21–31; 6:10–15; 9:4–7; 12:29–32; 28:25, and unifies the Book of Judges as a whole (cf. the language and content at 3:7–10, 12, 15; 4:1; 6:1–10; 10:6–16; 13:1). When Israel 'abandons' YHWH (verses 12–13) to 'lust after' foreign gods (verse 17, especially the Canaanite Baal and his consort, Astharoth, then YHWH becomes 'angry' and 'incensed' with them (verses 12,14, 20). 
This passage ends with a twist on the topic of Israel's incomplete conquest in Canaan: God allowed enemies to remain in the land to test Israel's faithfulness. "And they buried him in the border of his inheritance in Timnathheres, in the mount of Ephraim, on the north side of the hill Gaash." Verse 9. At the end of the parallel verse Joshua 24:30, the Septuagint has additional words, which may come from a Haggadah (traditional legend), stating that the flint knives used for the mass circumcision after crossing the Jordan River (Joshua 5:2) were buried in Joshua's tomb. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70208988
70210267
Arithmetic progression topologies
In general topology and number theory, branches of mathematics, one can define various topologies on the set formula_0 of integers or the set formula_1 of positive integers by taking as a base a suitable collection of arithmetic progressions, sequences of the form formula_2 or formula_3 The open sets will then be unions of arithmetic progressions in the collection. Three examples are the Furstenberg topology on formula_0, and the Golomb topology and the Kirch topology on formula_1. Precise definitions are given below. Hillel Furstenberg introduced the first topology in order to provide a "topological" proof of the infinitude of the set of primes. The second topology was studied by Solomon Golomb and provides an example of a countably infinite Hausdorff space that is connected. The third topology, introduced by A.M. Kirch, is an example of a countably infinite Hausdorff space that is both connected and locally connected. These topologies also have interesting separation and homogeneity properties. The notion of an arithmetic progression topology can be generalized to arbitrary Dedekind domains. Construction. Two-sided arithmetic progressions in formula_0 are subsets of the form formula_4 where formula_5 and formula_6 The intersection of two such arithmetic progressions is either empty, or is another arithmetic progression of the same form: formula_7 where formula_8 is the least common multiple of formula_9 and formula_10 Similarly, one-sided arithmetic progressions in formula_11 are subsets of the form formula_12 with formula_13 and formula_14. The intersection of two such arithmetic progressions is either empty, or is another arithmetic progression of the same form: formula_15 with formula_16 equal to the smallest element in the intersection. This shows that every nonempty intersection of a finite number of arithmetic progressions is again an arithmetic progression. One can then define a topology on formula_0 or formula_1 by choosing a collection formula_17 of arithmetic progressions, declaring all elements of formula_17 to be open sets, and taking the topology generated by those. If any nonempty intersection of two elements of formula_17 is again an element of formula_17, the collection formula_17 will be a base for the topology. In general, it will be a subbase for the topology, and the set of all arithmetic progressions that are nonempty finite intersections of elements of formula_17 will be a base for the topology. Three special cases follow. The Furstenberg topology, or evenly spaced integer topology, on the set formula_0 of integers is obtained by taking as a base the collection of all formula_18 with formula_5 and formula_6 The Golomb topology, or relatively prime integer topology, on the set formula_1 of positive integers is obtained by taking as a base the collection of all formula_19 with formula_14 and formula_9 and formula_20 relatively prime. Equivalently, the subcollection of such sets with the extra condition formula_21 also forms a base for the topology. The corresponding topological space is called the Golomb space. The Kirch topology, or prime integer topology, on the set formula_1 of positive integers is obtained by taking as a "subbase" the collection of all formula_22 with formula_23 and formula_24 prime not dividing formula_25 Equivalently, one can take as a subbase the collection of all formula_22 with formula_24 prime and formula_26. 
A "base" for the topology consists of all formula_19 with relatively prime formula_14 and formula_9 squarefree (or the same with the additional condition formula_21). The corresponding topological space is called the Kirch space. The three topologies are related in the sense that every open set in the Kirch topology is open in the Golomb topology, and every open set in the Golomb topology is open in the Furstenberg topology (restricted to the subspace formula_1). On the set formula_1, the Kirch topology is coarser than the Golomb topology, which is itself coarser that the Furstenberg topology. Properties. The Golomb topology and the Kirch topology are Hausdorff, but not regular. The Furstenberg topology is Hausdorff and regular. It is metrizable, but not completely metrizable. Indeed, it is homeomorphic to the rational numbers formula_27 with the subspace topology inherited from the real line. Broughan has shown that the Furstenberg topology is closely related to the p-adic completion of the rational numbers. Regarding connectedness properties, the Furstenberg topology is totally disconnected. The Golomb topology is connected, but not locally connected. The Kirch topology is both connected and locally connected. The integers with the Furstenberg topology form a homogeneous space, because it is a topological ring — in some sense, the only topology on formula_0 for which it is a ring. By contrast, the Golomb space and the Kirch space are topologically rigid — the only self-homeomorphism is the trivial one. Relation to the infinitude of primes. Both the Furstenberg and Golomb topologies furnish a proof that there are infinitely many prime numbers. A sketch of the proof runs as follows: Generalizations. The Furstenberg topology is a special case of the profinite topology on a group. In detail, it is the topology induced by the inclusion formula_28, where formula_29 is the profinite integer ring with its profinite topology. The notion of an arithmetic progression makes sense in arbitrary formula_0-modules, but the construction of a topology on them relies on closure under intersection. Instead, the correct generalization builds a topology out of ideals of a Dedekind domain. This procedure produces a large number of countably infinite, Hausdorff, connected sets, but whether different Dedekind domains can produce homeomorphic topological spaces is a topic of current research. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "\\mathbb{Z}_{>0}" }, { "math_id": 2, "text": "\\{b,b+a,b+2a,...\\}" }, { "math_id": 3, "text": "\\{...,b-2a,b-a,b,b+a,b+2a,...\\}." }, { "math_id": 4, "text": "a\\mathbb{Z}+b := \\{an+b : n\\in\\mathbb{Z}\\}," }, { "math_id": 5, "text": "a,b\\in\\mathbb{Z}" }, { "math_id": 6, "text": "a>0." }, { "math_id": 7, "text": "(a\\mathbb{Z}+b) \\cap (c\\mathbb{Z}+b) = \\operatorname{lcm}(a,c)\\mathbb{Z}+b," }, { "math_id": 8, "text": "\\operatorname{lcm}(a,c)" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "c." }, { "math_id": 11, "text": "\\mathbb{Z}_{>0}=\\{1,2,...\\}" }, { "math_id": 12, "text": "a\\mathbb{N}+b := \\{an+b : n\\in\\mathbb{N}\\} = \\{b,a+b,2a+b,...\\}," }, { "math_id": 13, "text": "\\mathbb{N}=\\{0,1,2,...\\}" }, { "math_id": 14, "text": "a,b>0" }, { "math_id": 15, "text": "(a\\mathbb{N}+b) \\cap (c\\mathbb{N}+d) = \\operatorname{lcm}(a,c)\\mathbb{N}+q," }, { "math_id": 16, "text": "q" }, { "math_id": 17, "text": "\\mathcal{B}" }, { "math_id": 18, "text": "a\\mathbb{Z}+b" }, { "math_id": 19, "text": "a\\mathbb{N}+b" }, { "math_id": 20, "text": "b" }, { "math_id": 21, "text": "b<a" }, { "math_id": 22, "text": "p\\mathbb{N}+b" }, { "math_id": 23, "text": "b>0" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "b." }, { "math_id": 26, "text": "0<b<p" }, { "math_id": 27, "text": "\\mathbb{Q}" }, { "math_id": 28, "text": "\\Z\\subset \\hat\\Z" }, { "math_id": 29, "text": "\\hat\\Z" } ]
https://en.wikipedia.org/wiki?curid=70210267
702149
Pons asinorum
Statement that the angles opposite the equal sides of an isosceles triangle are themselves equal In geometry, the theorem that the angles opposite the equal sides of an isosceles triangle are themselves equal is known as the pons asinorum ( ), Latin for "bridge of asses", or more descriptively as the isosceles triangle theorem. The theorem appears as Proposition 5 of Book 1 in Euclid's "Elements". Its converse is also true: if two angles of a triangle are equal, then the sides opposite them are also equal. "Pons asinorum" is also used metaphorically for a problem or challenge which acts as a test of critical thinking, referring to the "asses' bridge's" ability to separate capable and incapable reasoners. Its first known usage in this context was in 1645. Etymology. There are two common explanations for the name "pons asinorum", the simplest being that the diagram used resembles a physical bridge. But the more popular explanation is that it is the first real test in the "Elements" of the intelligence of the reader and functions as a "bridge" to the harder propositions that follow. Another medieval term for the isosceles triangle theorem was Elefuga which, according to Roger Bacon, comes from Greek "elegia" "misery", and Latin "fuga" "flight", that is "flight of the wretches". Though this etymology is dubious, it is echoed in Chaucer's use of the term "flemyng of wreches" for the theorem. The name "Dulcarnon" was given to the 47th proposition of Book I of Euclid, better known as the Pythagorean theorem, after the Arabic "Dhū 'l qarnain" ذُو ٱلْقَرْنَيْن, meaning "the owner of the two horns", because diagrams of the theorem showed two smaller squares like horns at the top of the figure. That term has similarly been used as a metaphor for a dilemma. The name "pons asinorum" has itself occasionally been applied to the Pythagorean theorem. Gauss supposedly once suggested that understanding Euler's identity might play a similar role, as a benchmark indicating whether someone could become a first-class mathematician. Proofs. Euclid and Proclus. Euclid's statement of the "pons asinorum" includes a second conclusion that if the equal sides of the triangle are extended below the base, then the angles between the extensions and the base are also equal. Euclid's proof involves drawing auxiliary lines to these extensions. But, as Euclid's commentator Proclus points out, Euclid never uses the second conclusion and his proof can be simplified somewhat by drawing the auxiliary lines to the sides of the triangle instead, the rest of the proof proceeding in more or less the same way. There has been much speculation and debate as to why Euclid added the second conclusion to the theorem, given that it makes the proof more complicated. One plausible explanation, given by Proclus, is that the second conclusion can be used in possible objections to the proofs of later propositions where Euclid does not cover every case. The proof relies heavily on what is today called side-angle-side (SAS), the previous proposition in the "Elements", which says that given two triangles for which two pairs of corresponding sides and their included angles are respectively congruent, then the triangles are congruent. Proclus' variation of Euclid's proof proceeds as follows: Let &amp;NoBreak;&amp;NoBreak; be an isosceles triangle with congruent sides &amp;NoBreak;&amp;NoBreak;. 
Pick an arbitrary point &amp;NoBreak;&amp;NoBreak; along side &amp;NoBreak;&amp;NoBreak; and then construct point &amp;NoBreak;&amp;NoBreak; on &amp;NoBreak;&amp;NoBreak; to make congruent segments &amp;NoBreak;&amp;NoBreak;. Draw auxiliary line segments &amp;NoBreak;&amp;NoBreak;, &amp;NoBreak;&amp;NoBreak;, and &amp;NoBreak;&amp;NoBreak;. By side-angle-side, the triangles &amp;NoBreak;&amp;NoBreak;. Therefore &amp;NoBreak;&amp;NoBreak;, &amp;NoBreak;&amp;NoBreak;, and &amp;NoBreak;&amp;NoBreak;. By subtracting congruent line segments, &amp;NoBreak;&amp;NoBreak;. This sets up another pair of congruent triangles, &amp;NoBreak;&amp;NoBreak;, again by side-angle-side. Therefore &amp;NoBreak;&amp;NoBreak; and &amp;NoBreak;&amp;NoBreak;. By subtracting congruent angles, &amp;NoBreak;&amp;NoBreak;. Finally &amp;NoBreak;&amp;NoBreak; by a third application of side-angle-side. Therefore &amp;NoBreak;&amp;NoBreak;, which was to be proved. Pappus. Proclus gives a much shorter proof attributed to Pappus of Alexandria. This is not only simpler but it requires no additional construction at all. The method of proof is to apply side-angle-side to the triangle and its mirror image. More modern authors, in imitation of the method of proof given for the previous proposition have described this as picking up the triangle, turning it over and laying it down upon itself. This method is lampooned by Charles Dodgson in "Euclid and his Modern Rivals", calling it an "Irish bull" because it apparently requires the triangle to be in two places at once. The proof is as follows: Let "ABC" be an isosceles triangle with "AB" and "AC" being the equal sides. Consider the triangles "ABC" and "ACB", where "ACB" is considered a second triangle with vertices "A", "C" and "B" corresponding respectively to "A", "B" and "C" in the original triangle. formula_0 is equal to itself, "AB" = "AC" and "AC" = "AB", so by side-angle-side, triangles "ABC" and "ACB" are congruent. In particular, formula_1. Others. A standard textbook method is to construct the bisector of the angle at "A". This is simpler than Euclid's proof, but Euclid does not present the construction of an angle bisector until proposition 9. So the order of presentation of Euclid's propositions would have to be changed to avoid the possibility of circular reasoning. The proof proceeds as follows: As before, let the triangle be "ABC" with "AB" = "AC". Construct the angle bisector of formula_2 and extend it to meet "BC" at "X". "AB" = "AC" and "AX" is equal to itself. Furthermore, formula_3, so, applying side-angle-side, triangle "BAX" and triangle "CAX" are congruent. It follows that the angles at "B" and "C" are equal. Legendre uses a similar construction in "Éléments de géométrie", but taking "X" to be the midpoint of "BC". The proof is similar but side-side-side must be used instead of side-angle-side, and side-side-side is not given by Euclid until later in the "Elements". In 1876, while a member of the United States Congress, future President James A. Garfield developed a proof using the trapezoid, which was published in the "New England Journal of Education". Mathematics historian William Dunham wrote that Garfield's trapezoid work was "really a very clever proof." According to the "Journal", Garfield arrived at the proof "in mathematical amusements and discussions with other members of congress." In inner product spaces. The isosceles triangle theorem holds in inner product spaces over the real or complex numbers. 
In such spaces, given vectors "x", "y", and "z", the theorem says that if formula_4 and formula_5 then formula_6 Since formula_7 and formula_8 where "θ" is the angle between the two vectors, the conclusion of this inner product space form of the theorem is equivalent to the statement about equality of angles. Metaphorical usage. Uses of the "pons asinorum" as a metaphor for a test of critical thinking include: Artificial intelligence proof myth. A persistent piece of mathematical folklore claims that an artificial intelligence program discovered an original and more elegant proof of this theorem. In fact, Marvin Minsky recounts that he had rediscovered the Pappus proof (which he was not aware of) by simulating what a mechanical theorem prover might do. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
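The inner product space form of the theorem stated above is easy to check numerically. The short Python sketch below assumes the ordinary Euclidean inner product on R^3 and randomly chosen vectors; it is only an illustration of the statement, not part of any of the proofs cited in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build x and y with equal norms, then force x + y + z = 0.
x = rng.normal(size=3)
y = rng.normal(size=3)
y *= np.linalg.norm(x) / np.linalg.norm(y)    # now ||x|| == ||y||
z = -(x + y)                                  # so x + y + z = 0

# The theorem predicts ||x - z|| == ||y - z||.
print(np.linalg.norm(x - z), np.linalg.norm(y - z))
assert np.isclose(np.linalg.norm(x - z), np.linalg.norm(y - z))
```

Expanding the two squared norms with the identity quoted above shows why the printed values agree whenever the norms of x and y are equal.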
[ { "math_id": 0, "text": "\\angle A" }, { "math_id": 1, "text": "\\angle B = \\angle C" }, { "math_id": 2, "text": "\\angle BAC" }, { "math_id": 3, "text": "\\angle BAX = \\angle CAX" }, { "math_id": 4, "text": "x + y + z = 0" }, { "math_id": 5, "text": "\\|x\\| = \\|y\\|," }, { "math_id": 6, "text": "\\|x - z\\| = \\|y - z\\|." }, { "math_id": 7, "text": "\\|x - z\\|^2 = \\|x\\|^2 - 2x\\cdot z + \\|z\\|^2" }, { "math_id": 8, "text": "x \\cdot z = \\|x\\|\\|z\\|\\cos\\theta," } ]
https://en.wikipedia.org/wiki?curid=702149
70217363
Continuity in probability
In probability theory, a stochastic process is said to be continuous in probability or stochastically continuous if its distributions converge whenever the values in the index set converge. Definition. Let formula_0 be a stochastic process in formula_1. The process formula_2 is continuous in probability when formula_3 converges in probability to formula_4 whenever formula_5 converges to formula_6. Examples and Applications. Feller processes are continuous in probability at formula_7. Continuity in probability is sometimes used as one of the defining properties of a Lévy process. Any process that is continuous in probability and has independent increments has a version that is càdlàg. As a result, some authors immediately define a Lévy process as being càdlàg and having independent increments.
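As an illustration of the definition, the Python sketch below estimates the jump probability of a Poisson process over a shrinking time step; the Poisson process is a standard example of a process that is continuous in probability even though every sample path has jumps. The rate, sample size and seed are arbitrary choices made for this sketch and are not taken from the article.

```python
import numpy as np

# For a rate-lam Poisson process N, the increment N_{t+h} - N_t is Poisson(lam*h),
# so P(|N_{t+h} - N_t| >= 1) = 1 - exp(-lam*h) -> 0 as h -> 0: the process is
# continuous in probability even though its paths are step functions.
lam, n_paths = 2.0, 200_000
rng = np.random.default_rng(1)

for h in [1.0, 0.1, 0.01, 0.001]:
    increments = rng.poisson(lam * h, size=n_paths)
    estimate = np.mean(increments >= 1)       # Monte Carlo estimate of the jump probability
    exact = 1.0 - np.exp(-lam * h)            # closed form
    print(f"h={h:6}:  P(jump) ~ {estimate:.4f}  (exact {exact:.4f})")
```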
[ { "math_id": 0, "text": " X=(X_t)_{t \\in T} " }, { "math_id": 1, "text": " \\R^n " }, { "math_id": 2, "text": " X" }, { "math_id": 3, "text": " X_r " }, { "math_id": 4, "text": " X_s " }, { "math_id": 5, "text": " r " }, { "math_id": 6, "text": " s " }, { "math_id": 7, "text": " t=0 " } ]
https://en.wikipedia.org/wiki?curid=70217363
70220509
Judges 3
Book of Judges, chapter 3 Judges 3 is the third chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records the activities of the first three judges, Othniel, Ehud, and Shamgar, belonging to a section comprising Judges 3:1 to 5:31. Text. This chapter was originally written in the Hebrew language. It is divided into 31 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including XJudges (XJudg, X6; 50 BCE) with extant verses 23–24. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Nations left to test Israel (3:1–6). The introductory section of the chapter lists by name and place the Canaanite nations that the Israelites had to drive out of the land (verse 1–4) with an additional text that the nations' continued presence in the land was allowed by YHWH so the Israelites as newcomers to the land could sharpen their agonistic skills and capacity to resist idols against some idolatrous enemies (verse 4; Judges 2:22). "Now these are the nations which the LORD left, to prove Israel by them, even as many of Israel as had not known all the wars of Canaan;" Verse 1. "To prove": The verb is the same as in Judges 2:22 and Judges 3:4, but here it is used in the meaning "to train (them)," rendered by Symmachus in Greek as "askēsai". This is directed to many Israelites who 'had not known all the wars of Canaan', implying the "generation after that of Joshua", to prepare them in the struggles of the actual conquest. Othniel (3:7–11). 
The report concerning the first judge, Othniel, is related to Judges 1:11-15, but here it uses a conventionalized pattern (cf. Judges 2:11–31) with formulaic language. Othniel had the empowerment of 'the spirit of the Lord' (verse 10) to defeat the enemies of Israel and to have the land rest for forty years. "Therefore the anger of the LORD was hot against Israel, and he sold them into the hand of Chushanrishathaim king of Mesopotamia: and the children of Israel served Chushanrishathaim eight years." "And when the children of Israel cried unto the LORD, the LORD raised up a deliverer to the children of Israel, who delivered them, even Othniel the son of Kenaz, Caleb's younger brother." Ehud (3:12–30). The second judge, the trickster-hero Ehud, succeeded through deception and disguise, 'a marginal person who uses his wits to alter his status at the expense of those holding power over him'. The ruse was made possible by Ehud's left-handedness, using a Hebrew term which is literally 'bound' or 'impaired with regard to the right hand', indicating an unusual or marginal status, since the right is the preferred side in other biblical contexts (cf. Exodus 29:20, 22; Leviticus 7:32; 8:23, 25; Ecclesiastes 10:2). Judges 20 contains a note that the Benjaminites, Ehud's fellow-tribesmen, were in the tradition predisposed to left-handedness (cf. 20:16), a trait that makes them especially effective warriors, able to surprise the enemy and harder to defend against. The typical right-handed man would be expected to wear his sword on the left in order to draw with the right hand, thus Ehud could hide his weapon on the opposite side without raising suspicion. The story has word play with images of ritual sacrifice: the 'tribute' to Eglon as the king of Moab is in the term for sacrificial offering, while Eglon's name plays on the term for 'calf', so he became the 'fatted calf who will be slaughtered'. The phrase translated 'relieving himself' in NRSV literally in Hebrew reads 'pouring out' or 'covering his feet' ('the feet' is a biblical euphemism for the male member), so it could mean urinating or defecating; in any event, it indicates Eglon's vulnerability and unmanning (cf. Saul in the cave; 1 Samuel 24:1–7). Shamgar (3:31). The reference to Shamgar, the third judge and liberator of Israel, is brief, lacking the conventional frame in content and language. Similar to Samson, Shamgar was superhumanly able to conquer hundreds of the Philistines with a mere ox-goad, a sign of the agrarian roots of the Israelites at this period, a national identity that dominated the book of Judges. The name "Anath" may also refer to a place or the Canaanite goddess Anath, the patroness of warriors. "And after him was Shamgar the son of Anath, which slew of the Philistines six hundred men with an ox goad: and he also delivered Israel." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70220509
7022979
Bayesian inference in phylogeny
Statistical method for molecular phylogenetics Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li in University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics. Bayesian inference of phylogeny background and bases. Bayesian inference refers to a probabilistic method developed by Reverend Thomas Bayes based on Bayes' theorem. Published posthumously in 1763 it was the first expression of inverse probability and the basis of Bayesian inference. Independently, unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774. Bayesian inference or the inverse probability method was the standard approach in statistical thinking until the early 1900s before RA Fisher developed what's now known as the classical/frequentist/Fisherian inference. Computational difficulties and philosophical objections had prevented the widespread adoption of the Bayesian approach until the 1990s, when Markov Chain Monte Carlo (MCMC) algorithms revolutionized Bayesian computation. The Bayesian approach to phylogenetic reconstruction combines the prior probability of a tree P(A) with the likelihood of the data (B) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree will be the probability that the tree is correct, given the prior, the data, and the correctness of the likelihood model. MCMC methods can be described in three steps: first using a stochastic mechanism a new state for the Markov chain is proposed. Secondly, the probability of this new state to be correct is calculated. Thirdly, a new random variable (0,1) is proposed. If this new value is less than the acceptance probability the new state is accepted and the state of the chain is updated. This process is run thousands or millions of times. The number of times a single tree is visited during the course of the chain is an approximation of its posterior probability. Some of the most common algorithms used in MCMC methods include the Metropolis–Hastings algorithms, the Metropolis-Coupling MCMC (MC³) and the LOCAL algorithm of Larget and Simon. Metropolis–Hastings algorithm. One of the most common MCMC methods used is the Metropolis–Hastings algorithm, a modified version of the original Metropolis algorithm. It is a widely used method to sample randomly from complicated and multi-dimensional distribution probabilities. The Metropolis algorithm is described in the following steps: The algorithm keeps running until it reaches an equilibrium distribution. It also assumes that the probability of proposing a new tree Tj when we are at the old tree state Ti, is the same probability of proposing Ti when we are at Tj. When this is not the case Hastings corrections are applied. The aim of Metropolis-Hastings algorithm is to produce a collection of states with a determined distribution until the Markov process reaches a stationary distribution. The algorithm has two components: Metropolis-coupled MCMC. 
Metropolis-coupled MCMC algorithm (MC³) has been proposed to address a practical concern: when the target distribution has multiple local peaks separated by low valleys, as is known to occur in tree space, the Markov chain may have difficulty moving from one peak to another. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. This problem results in samples that do not correctly approximate the posterior density. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple (m) chains in parallel, each for n iterations and with different stationary distributions formula_0, formula_1, where the first one, formula_2, is the target density, while formula_3, formula_4 are chosen to improve mixing. For example, one can choose incremental heating of the form: formula_5 so that the first chain is the cold chain with the correct target density, while chains formula_6 are heated chains. Note that raising the density formula_7 to the power formula_8 with formula_9 has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let formula_10 be the current state in chain formula_11, formula_1. A swap between the states of chains formula_12 and formula_11 is accepted with probability: formula_13 At the end of the run, output from only the cold chain is used, while those from the hot chains are discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if formula_14 is unstable, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally. An obvious disadvantage of the algorithm is that formula_15 chains are run and only one chain is used for inference. For this reason, formula_16 is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration. LOCAL algorithm of Larget and Simon. The LOCAL algorithm offers a computational advantage over previous methods and demonstrates that a Bayesian approach to assessing uncertainty is computationally practical for larger trees. The LOCAL algorithm is an improvement of the GLOBAL algorithm presented in Mau, Newton and Larget (1999) in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches. One of each pair is chosen at random. Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally, the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random.
This would be the candidate tree. Suppose we began by selecting the internal branch with length formula_17 that separates taxa formula_18 and formula_19 from the rest. Suppose also that we have (randomly) selected branches with lengths formula_20 and formula_21 from each side, and that we oriented these branches. Let formula_22, be the current length of the clothesline. We select the new length to be formula_23, where formula_24 is a uniform random variable on formula_25. Then for the LOCAL algorithm, the acceptance probability can be computed to be: formula_26 Assessing convergence. To estimate a branch length formula_27 of a 2-taxon tree under JC, in which formula_28 sites are unvaried and formula_29 are variable, assume exponential prior distribution with rate formula_30. The density is formula_31. The probabilities of the possible site patterns are: formula_32 for unvaried sites, and formula_33 Thus the unnormalized posterior distribution is: formula_34 or, alternately, formula_35 Update branch length by choosing new value uniformly at random from a window of half-width formula_36 centered at the current value: formula_37 where formula_38is uniformly distributed between formula_39 and formula_36. The acceptance probability is: formula_40 Example: formula_41, formula_42. We will compare results for two values of formula_36, formula_43 and formula_44. In each case, we will begin with an initial length of formula_45 and update the length formula_46 times. Maximum parsimony and maximum likelihood. There are many approaches to reconstructing phylogenetic trees, each with advantages and disadvantages, and there is no straightforward answer to “what is the best method?”. Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies and both use character information directly, as Bayesian methods do. Maximum Parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa and it does not require a model of evolutionary change. MP gives the most simple explanation for a given set of data, reconstructing a phylogenetic tree that includes as few changes across the sequences as possible. The support of the tree branches is represented by bootstrap percentage. For the same reason that it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP might be statistically inconsistent, meaning that as more and more data (e.g. sequence length) is accumulated, results can converge on an incorrect tree and lead to long branch attraction, a phylogenetic phenomenon where taxa with long branches (numerous character state changes) tend to appear more closely related in the phylogeny than they really are. For morphological data, recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches, potentially due to overprecision, although this has been disputed. Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used. As in maximum parsimony, maximum likelihood will evaluate alternative trees. However it considers the probability of each tree explaining the given data based on a model of evolution. 
In this case, the tree with the highest probability of explaining the data is chosen over the other ones. In other words, it compares how different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP as the probability of nucleotide substitutions and rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration of this method is the branch length, which parsimony ignores, with changes being more likely to happen along long branches than short ones. This approach might eliminate long branch attraction and explain the greater consistency of ML over MP. Although considered by many to be the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive and it is almost impossible to explore all trees as there are too many. Bayesian inference also incorporates a model of evolution and the main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses the source of uncertainty and is able to incorporate complex models of evolution. MrBayes software. MrBayes is a free software tool that performs Bayesian inference of phylogeny. It was originally written by John P. Huelsenbeck and Frederik Ronquist in 2001. As Bayesian methods increased in popularity, MrBayes became one of the programs of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and it has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format. MrBayes uses MCMC to approximate the posterior probabilities of trees. The user can change assumptions of the substitution model, priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program includes, among several nucleotide models, the most standard model of DNA substitution, the 4x4 also called JC69, which assumes that changes across nucleotides occur with equal probability. It also implements a number of 20x20 models of amino acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitution rates across nucleotide sites. MrBayes is also able to infer ancestral states while accommodating uncertainty in the phylogenetic tree and model parameters. MrBayes 3 was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneity of data sets. This new framework allows the user to mix models and take advantage of the efficiency of Bayesian MCMC analysis when dealing with different types of data (e.g. protein, nucleotide, and morphological). It uses the Metropolis-Coupling MCMC by default. MrBayes 3.2 was released in 2012. The new version allows the users to run multiple analyses in parallel. It also provides faster likelihood calculations and allows these calculations to be delegated to graphics processing units (GPUs). Version 3.2 provides wider output options compatible with FigTree and other tree viewers. List of phylogenetics software. This table includes some of the most common phylogenetic software used for inferring phylogenies under a Bayesian framework.
Some of them do not use exclusively Bayesian methods. Applications. Bayesian inference has been used extensively by molecular phylogeneticists for a wide range of applications. Some of these include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
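To make the branch-length example described above concrete (the two-taxon JC69 case with formula_41 and formula_42, a sliding-window proposal and acceptance probability h(t*)/h(t)), here is a minimal Python sketch of the sampler. It works with the logarithm of the unnormalized posterior to avoid underflow and reflects proposals at zero, as in the update rule quoted in the text. The prior rate (1.0 here), the random seed and the burn-in length are illustrative assumptions not fixed by the article, and this is in no way the MrBayes implementation.

```python
import numpy as np

# Two-taxon JC69 branch-length example: n1 = 70 unvaried sites, n2 = 30 variable
# sites, exponential prior on the branch length t.  The prior rate below is an
# arbitrary choice made for the sketch.
n1, n2, lam = 70, 30, 1.0

def log_h(t):
    """Log of the unnormalized posterior h(t): JC69 likelihood times the prior."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)   # pattern factor for an unvaried site
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)   # pattern factor for a variable site
    # each site also carries a constant factor of 1/4, collected in the first term
    return ((n1 + n2) * np.log(0.25) + n1 * np.log(same) + n2 * np.log(diff)
            + np.log(lam) - lam * t)

def run_chain(w, t0=5.0, n_iter=2000, seed=0):
    """Sliding-window Metropolis sampler with window half-width w."""
    rng = np.random.default_rng(seed)
    t, samples = t0, []
    for _ in range(n_iter):
        t_new = abs(t + rng.uniform(-w, w))       # reflect proposals that cross zero
        if np.log(rng.uniform()) < log_h(t_new) - log_h(t):
            t = t_new                              # accept; otherwise keep the current value
        samples.append(t)
    return np.array(samples)

for w in (0.1, 0.5):
    chain = run_chain(w)
    print(f"w = {w}: posterior mean of t ~ {chain[500:].mean():.3f} (crude burn-in of 500)")
```

Comparing the two window half-widths mirrors the comparison of the two values of w in the text: the wider window mixes faster from the distant starting value, while both chains settle around the same posterior mean.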
[ { "math_id": 0, "text": "\\pi_j(.)\\ " }, { "math_id": 1, "text": "j = 1, 2, \\ldots, m\\ " }, { "math_id": 2, "text": "\\pi_1 = \\pi\\ " }, { "math_id": 3, "text": "\\pi_j\\ " }, { "math_id": 4, "text": "j = 2, 3, \\ldots, m\\ " }, { "math_id": 5, "text": " \\pi_j(\\theta) = \\pi(\\theta)^{1/[1+\\lambda(j-1)]}, \\ \\ \\lambda > 0, " }, { "math_id": 6, "text": "2, 3, \\ldots, m" }, { "math_id": 7, "text": "\\pi(.)" }, { "math_id": 8, "text": "1/T\\ " }, { "math_id": 9, "text": "T>1\\ " }, { "math_id": 10, "text": "\\theta^{(j)}\\ " }, { "math_id": 11, "text": "j\\ " }, { "math_id": 12, "text": "i\\ " }, { "math_id": 13, "text": " \\alpha = \\frac{\\pi_i(\\theta^{(j)})\\pi_j(\\theta^{(i)})}{\\pi_i(\\theta^{(i)})\\pi_j(\\theta^{(j)})}\\ " }, { "math_id": 14, "text": "\\pi_i(\\theta)/\\pi_j(\\theta)\\ " }, { "math_id": 15, "text": "m\\ " }, { "math_id": 16, "text": "\\mathrm{MC}^3\\ " }, { "math_id": 17, "text": "t_8\\ " }, { "math_id": 18, "text": "A\\ " }, { "math_id": 19, "text": "B\\ " }, { "math_id": 20, "text": "t_1\\ " }, { "math_id": 21, "text": "t_9\\ " }, { "math_id": 22, "text": "m = t_1+t_8+t_9\\ " }, { "math_id": 23, "text": "m^{\\star} = m\\exp(\\lambda(U_1-0.5))\\ " }, { "math_id": 24, "text": "U_1\\ " }, { "math_id": 25, "text": "(0,1)\\ " }, { "math_id": 26, "text": "\\frac{h(y)}{h(x)} \\times \\frac{{m^{\\star}}^3}{m^3}\\ " }, { "math_id": 27, "text": "t" }, { "math_id": 28, "text": "n_1" }, { "math_id": 29, "text": "n_2" }, { "math_id": 30, "text": "\\lambda\\ " }, { "math_id": 31, "text": "p(t) = \\lambda e^{-\\lambda t}\\ " }, { "math_id": 32, "text": "1/4\\left(1/4+3/4e^{-4/3t}\\right)\\ " }, { "math_id": 33, "text": " 1/4\\left(1/4-1/4e^{-4/3t}\\right)\\ " }, { "math_id": 34, "text": " h(t) = \\left(1/4\\right)^{n_1+n_2}\\left(1/4+3/4{e^{-4/3t}}^{n_1}\\right)\\ " }, { "math_id": 35, "text": " h(t) = \\left(1/4-1/4{e^{-4/3t}}^{n_2}\\right)(\\lambda e^{-\\lambda t})\\ " }, { "math_id": 36, "text": "w\\ " }, { "math_id": 37, "text": " t^\\star = |t+U|\\ " }, { "math_id": 38, "text": "U\\ " }, { "math_id": 39, "text": "-w\\ " }, { "math_id": 40, "text": " h(t^\\star)/h(t)\\ " }, { "math_id": 41, "text": "n_1 = 70\\ " }, { "math_id": 42, "text": "n_2 = 30\\ " }, { "math_id": 43, "text": "w = 0.1\\ " }, { "math_id": 44, "text": "w = 0.5\\ " }, { "math_id": 45, "text": "5\\ " }, { "math_id": 46, "text": "2000\\ " } ]
https://en.wikipedia.org/wiki?curid=7022979
702351
Ham sandwich theorem
Theorem that any three objects in space can be simultaneously bisected by a plane In mathematical measure theory, for every positive integer n the ham sandwich theorem states that given n measurable "objects" in n-dimensional Euclidean space, it is possible to divide each one of them in half (with respect to their measure, e.g. volume) with a single ("n" − 1)-dimensional hyperplane. This is even possible if the objects overlap. It was proposed by Hugo Steinhaus and proved by Stefan Banach (explicitly in dimension 3, without taking the trouble to state the theorem in the n-dimensional case), and also years later called the Stone–Tukey theorem after Arthur H. Stone and John Tukey. Naming. The ham sandwich theorem takes its name from the case when "n" = 3 and the three objects to be bisected are the ingredients of a ham sandwich. Sources differ on whether these three ingredients are two slices of bread and a piece of ham , bread and cheese and ham , or bread and butter and ham . In two dimensions, the theorem is known as the pancake theorem to refer to the flat nature of the two objects to be bisected by a line . History. According to , the earliest known paper about the ham sandwich theorem, specifically the "n" = 3 case of bisecting three solids with a plane, is a 1938 note in a Polish mathematics journal . Beyer and Zardecki's paper includes a translation of this note, which attributes the posing of the problem to Hugo Steinhaus, and credits Stefan Banach as the first to solve the problem, by a reduction to the Borsuk–Ulam theorem. The note poses the problem in two ways: first, formally, as "Is it always possible to bisect three solids, arbitrarily located, with the aid of an appropriate plane?" and second, informally, as "Can we place a piece of ham under a meat cutter so that meat, bone, and fat are cut in halves?" The note then offers a proof of the theorem. A more modern reference is , which is the basis of the name "Stone–Tukey theorem". This paper proves the n-dimensional version of the theorem in a more general setting involving measures. The paper attributes the "n" = 3 case to Stanislaw Ulam, based on information from a referee; but claim that this is incorrect, given the note mentioned above, although "Ulam did make a fundamental contribution in proposing" the Borsuk–Ulam theorem. Two-dimensional variant: proof using a rotating-knife. The two-dimensional variant of the theorem (also known as the pancake theorem) can be proved by an argument which appears in the fair cake-cutting literature (see e.g. Robertson–Webb rotating-knife procedure). For each angle formula_0, a straight line ("knife") of angle formula_1 can bisect pancake #1. To see this, translate [move parallelly] a straight line of angle formula_1 from formula_2 to formula_3; the fraction of pancake #1 covered by the line changes continuously from 0 to 1, so by the intermediate value theorem it must be equal to 1/2 somewhere along the way. It is possible that an entire range of translations of our line yield a fraction of 1/2; in this case, it is a canonical choice to pick the middle one of all such translations. When the knife is at angle 0, it also cuts pancake #2, but the pieces are probably unequal (if we are lucky and the pieces are equal, we are done). Define the 'positive' side of the knife as the side in which the fraction of pancake #2 is larger. We now turn the knife, and translate it as described above. 
When the angle is formula_1, define formula_4 as the fraction of pancake #2 at the positive side of the knife. Initially formula_5. The function formula_6 is continuous, since small changes in the angle lead to small changes in the position of the knife. When the knife is at angle 180, the knife is upside-down, so formula_7. By the intermediate value theorem, there must be an angle in which formula_8. Cutting at that angle bisects both pancakes simultaneously. "n"-dimensional variant: proof using the Borsuk–Ulam theorem. The ham sandwich theorem can be proved as follows using the Borsuk–Ulam theorem. This proof follows the one described by Steinhaus and others (1938), attributed there to Stefan Banach, for the "n" = 3 case. In the field of Equivariant topology, this proof would fall under the configuration-space/tests-map paradigm. Let "A"1, "A"2, ..., "A""n" denote the n objects that we wish to simultaneously bisect. Let S be the unit ("n" − 1)-sphere embedded in n-dimensional Euclidean space formula_9, centered at the origin. For each point p on the surface of the sphere S, we can define a continuum of oriented affine hyperplanes (not necessarily centred at 0) perpendicular to the (normal) vector from the origin to p, with the "positive side" of each hyperplane defined as the side pointed to by that vector (i.e. it is a choice of orientation). By the intermediate value theorem, every family of such hyperplanes contains at least one hyperplane that bisects the bounded object "A""n": at one extreme translation, no volume of "A""n" is on the positive side, and at the other extreme translation, all of "A""n"'s volume is on the positive side, so in between there must be a translation that has half of "A""n"'s volume on the positive side. If there is more than one such hyperplane in the family, we can pick one canonically by choosing the midpoint of the interval of translations for which "A""n" is bisected. Thus we obtain, for each point p on the sphere S, a hyperplane "π"("p") that is perpendicular to the vector from the origin to p and that bisects "A""n". Now we define a function f from the ("n" − 1)-sphere S to ("n" − 1)-dimensional Euclidean space formula_10 as follows: "f"("p") = (vol of "A"1 on the positive side of "π"("p"), vol of "A"2 on the positive side of "π"("p"), ..., vol of "A""n"−1 on the positive side of "π"("p")). This function f is continuous (which, in a formal proof, would need some justification). By the Borsuk–Ulam theorem, there are antipodal points p and q on the sphere S such that "f"("p") = "f"("q"). Antipodal points p and q correspond to hyperplanes "π"("p") and "π"("q") that are equal except that they have opposite positive sides. Thus, "f"("p") = "f"("q") means that the volume of "A""i" is the same on the positive and negative side of "π"("p") (or "π"("q")), for "i" = 1, 2, ..., "n"−1. Thus, "π"("p") (or "π"("q")) is the desired ham sandwich cut that simultaneously bisects the volumes of "A"1, "A"2, ..., "A""n". Measure theoretic versions. In measure theory, proved two more general forms of the ham sandwich theorem. Both versions concern the bisection of n subsets "X"1, "X"2, ..., "X""n" of a common set X, where X has a Carathéodory outer measure and each "X""i" has finite outer measure. 
Their first general formulation is as follows: for any continuous real function formula_11, there is a point p of the n-sphere "S""n" and a real number "s"0 such that the surface "f"("p","x") = "s"0 divides X into "f"("p","x") &lt; "s"0 and "f"("p","x") &gt; "s"0 of equal measure and simultaneously bisects the outer measure of "X"1, "X"2, ..., "X""n". The proof is again a reduction to the Borsuk-Ulam theorem. This theorem generalizes the standard ham sandwich theorem by letting "f"("s","x") = "s"1"x"1 + ... + "s""n""x""n". Their second formulation is as follows: for any "n" + 1 measurable functions "f"0, "f"1, ..., "f""n" over X that are linearly independent over any subset of X of positive measure, there is a linear combination "f" = "a"0"f"0 + "a"1"f"1 + ... + "a""n""f""n" such that the surface "f"("x") = 0, dividing X into "f"("x") &lt; 0 and "f"("x") &gt; 0, simultaneously bisects the outer measure of "X"1, "X"2, ..., "X""n". This theorem generalizes the standard ham sandwich theorem by letting "f"0("x") = 1 and letting "f""i"("x"), for "i" &gt; 0, be the i-th coordinate of x. Discrete and computational geometry versions. In discrete geometry and computational geometry, the ham sandwich theorem usually refers to the special case in which each of the sets being divided is a finite set of points. Here the relevant measure is the counting measure, which simply counts the number of points on either side of the hyperplane. In two dimensions, the theorem can be stated as follows: For a finite set of points in the plane, each colored "red" or "blue", there is a line that simultaneously bisects the red points and bisects the blue points, that is, the number of red points on either side of the line is equal and the number of blue points on either side of the line is equal. There is an exceptional case when points lie on the line. In this situation, we count each of these points as either being on one side, on the other, or on neither side of the line (possibly depending on the point), i.e. "bisecting" in fact means that each side contains less than half of the total number of points. This exceptional case is actually required for the theorem to hold, of course when the number of red points or the number of blue is odd, but also in specific configurations with even numbers of points, for instance when all the points lie on the same line and the two colors are separated from each other (i.e. colors don't alternate along the line). A situation where the numbers of points on each side cannot match each other is provided by adding an extra point out of the line in the previous configuration. In computational geometry, this ham sandwich theorem leads to a computational problem, the ham sandwich problem. In two dimensions, the problem is this: given a finite set of n points in the plane, each colored "red" or "blue", find a ham sandwich cut for them. First, described an algorithm for the special, separated case. Here all red points are on one side of some line and all blue points are on the other side, a situation where there is a unique ham sandwich cut, which Megiddo could find in linear time. Later, gave an algorithm for the general two-dimensional case; the running time of their algorithm is "O"("n" log "n"), where the symbol O indicates the use of Big O notation. Finally, found an optimal "O"("n")-time algorithm. This algorithm was extended to higher dimensions by where the running time is formula_12. 
Given d sets of points in general position in d-dimensional space, the algorithm computes a ("d"−1)-dimensional hyperplane that has an equal number of points of each of the sets in both of its half-spaces, i.e., a ham-sandwich cut for the given points. If d is a part of the input, then no polynomial time algorithm is expected to exist, as if the points are on a moment curve, the problem becomes equivalent to necklace splitting, which is PPA-complete. A linear-time algorithm that area-bisects two disjoint convex polygons is described by Generalizations. The original theorem works for at most n collections, where n is the number of dimensions. To bisect a larger number of collections without going to higher dimensions, one can use, instead of a hyperplane, an algebraic surface of degree k, i.e., an ("n"−1)–dimensional surface defined by a polynomial function of degree k: Given formula_13 measures in an n–dimensional space, there exists an algebraic surface of degree k which bisects them all. (). This generalization is proved by mapping the n–dimensional plane into a formula_13 dimensional plane, and then applying the original theorem. For example, for "n" = 2 and "k" = 2, the 2–dimensional plane is mapped to a 5–dimensional plane via: ("x", "y") → ("x", "y", "x"2, "y"2, "xy").
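For the discrete two-dimensional version described above, a ham sandwich cut can also be found by brute force, which makes a convenient sanity check even though it is nowhere near the linear-time algorithms mentioned in the text. The Python sketch below assumes the points are in general position and that both colour classes have odd size; under those assumptions a bisecting line through one red and one blue point must exist, so searching over all red-blue pairs succeeds. Function names and the random test data are illustrative.

```python
import itertools
import numpy as np

def side(a, b, p):
    """+1, -1 or 0 according to which side of the line through a and b the point p lies on."""
    return int(np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])))

def bisects(r, b, pts):
    """True if the line through r and b leaves at most half of pts strictly on each side."""
    signs = [side(r, b, p) for p in pts]
    return max(signs.count(+1), signs.count(-1)) <= len(pts) // 2

def ham_sandwich_cut(red, blue):
    """Brute force: try every line through one red and one blue point."""
    for r, b in itertools.product(red, blue):
        if bisects(r, b, red) and bisects(r, b, blue):
            return r, b
    return None

rng = np.random.default_rng(7)
red = rng.uniform(-1.0, 1.0, size=(7, 2))    # 7 red points (odd)
blue = rng.uniform(-1.0, 1.0, size=(9, 2))   # 9 blue points (odd)
r, b = ham_sandwich_cut(red, blue)           # exists under the stated assumptions
print("cut through red point", r, "and blue point", b)
```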
[ { "math_id": 0, "text": "\\alpha\\in[0,180^\\circ]" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "-\\infty" }, { "math_id": 3, "text": "\\infty" }, { "math_id": 4, "text": "p(\\alpha)" }, { "math_id": 5, "text": "p(0) > 1/2" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "p(180) < 1/2" }, { "math_id": 8, "text": "p(\\alpha)=1/2" }, { "math_id": 9, "text": "\\mathbb{R}^n" }, { "math_id": 10, "text": "\\mathbb{R}^{n-1}" }, { "math_id": 11, "text": "f \\colon S^n \\times X \\to \\mathbb{R}" }, { "math_id": 12, "text": "o(n^{d-1})" }, { "math_id": 13, "text": "\\binom{k+n}{n}-1" } ]
https://en.wikipedia.org/wiki?curid=702351
70238981
Quasilinearization
Quasilinearization replaces a nonlinear operator equation with a sequence of linear operator equations In mathematics, quasilinearization is a technique which replaces a nonlinear differential equation or operator equation (or system of such equations) with a sequence of linear problems, which are presumed to be easier, and whose solutions approximate the solution of the original nonlinear problem with increasing accuracy. It is a generalization of Newton's method; the word "quasilinearization" is commonly used when the differential equation is a boundary value problem. Abstract formulation. Quasilinearization replaces a given nonlinear operator "N" with a certain linear operator which, being simpler, can be used in an iterative fashion to approximately solve equations containing the original nonlinear operator. This is typically performed when trying to solve an equation such as "N(y) = 0" together with certain boundary conditions "B" for which the equation has a solution "y". This solution is sometimes called the "reference solution". For quasilinearization to work, the reference solution needs to exist uniquely (at least locally). The process starts with an initial approximation "y0" that satisfies the boundary conditions and is "sufficiently close" to the reference solution "y" in a sense to be defined more precisely later. The first step is to take the Fréchet derivative of the nonlinear operator "N" at that initial approximation, in order to find the linear operator "L(y0)" which best approximates "N(y)-N(y0)" locally. The nonlinear equation may then be approximated as "N"("y") = "N(yk)" + "L(yk)( y - yk)" + "O( y-yk )2", taking "k=0". Setting this equation to zero and imposing zero boundary conditions and ignoring higher-order terms gives the "linear" equation "L(yk)( y - yk ) = - N(yk)". The solution of this linear equation (with zero boundary conditions) might be called "yk+1". Computation of "yk" for "k"=1, 2, 3... by solving these linear equations in sequence is analogous to Newton's iteration for a single equation, and requires recomputation of the Fréchet derivative at each "yk". The process can converge quadratically to the reference solution, under the right conditions. Just as with Newton's method for nonlinear algebraic equations, however, difficulties may arise: for instance, the original nonlinear equation may have no solution, or more than one solution, or a "multiple" solution, in which cases the iteration may converge only very slowly, may not converge at all, or may converge instead to the "wrong" solution. The practical test of the meaning of the phrase "sufficiently close" earlier is precisely that the iteration converges to the correct solution. Just as in the case of Newton iteration, there are theorems stating conditions under which one can know ahead of time when the initial approximation is "sufficiently close". Contrast with discretizing first. One could instead discretize the original nonlinear operator and generate a (typically large) set of nonlinear algebraic equations for the unknowns, and then use Newton's method proper on this system of equations. Generally speaking, the convergence behavior is similar: a similarly good initial approximation will produce similarly good approximate discrete solutions. However, the quasilinearization approach (linearizing the operator equation instead of the discretized equations) seems to be simpler to think about, and has allowed such techniques as adaptive spatial meshes to be used as the iteration proceeds. Example.
As an example to illustrate the process of quasilinearization, we can approximately solve the two-point boundary value problem for the nonlinear ODE formula_2 where the boundary conditions are formula_3 and formula_4. The exact solution of the differential equation can be expressed using the Weierstrass elliptic function ℘, like so: formula_5 where the vertical bar notation means that the "invariants" are formula_6 and formula_7. Finding the values of formula_8 and formula_9 so that the boundary conditions are satisfied requires solving two simultaneous nonlinear equations for the two unknowns formula_10 and formula_11, namely formula_12 and formula_13. This can be done, in an environment where ℘ and its derivatives are available, for instance by Newton's method. Applying the technique of quasilinearization instead, one finds by taking the Fréchet derivative at an unknown approximation formula_14 that the linear operator is formula_15 If the initial approximation is formula_16 identically on the interval formula_17, then the first iteration (at least) can be solved exactly, but is already somewhat complicated. A numerical solution instead, for instance by a Chebyshev spectral method using formula_18 Chebyshev–Lobatto points formula_19 for formula_20 gives a solution with residual less than formula_21 after three iterations; that is, formula_22 is the exact solution to formula_23, where the maximum value of formula_24 is less than 1 on the interval formula_17. This approximate solution (call it formula_0) agrees with the exact solution formula_25 with formula_26 Other values of formula_8 and formula_9 give other continuous solutions to this nonlinear two-point boundary-value problem for the ODE, such as formula_27 The solution corresponding to these values plotted in the figure is called formula_28. Yet other values of the parameters can give discontinuous solutions because ℘ has a double pole at zero and so formula_29 has a double pole at formula_30. Finding other continuous solutions by quasilinearization requires different initial approximations to the ones used here. The initial approximation formula_31 approximates the exact solution formula_1 and can be used to generate a sequence of approximations converging to formula_1. Both approximations are plotted in the accompanying figure. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
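A compact way to see the iteration in action on the example above is to discretize each linearized problem and solve the resulting linear system. The Python sketch below uses a second-order finite-difference grid rather than the Chebyshev spectral discretization mentioned in the text, so its numbers are only indicative; the grid size and iteration count are illustrative choices. Each pass solves the linearized boundary value problem y_{k+1}'' - 2 y_k y_{k+1} = -y_k^2 with y(-1) = y(1) = 1, which is the quasilinearization step L(y_k)(y_{k+1} - y_k) = -N(y_k) for N(y) = y'' - y^2.

```python
import numpy as np

# Quasilinearization for the two-point BVP  y'' = y^2,  y(-1) = y(1) = 1,
# discretized with second-order finite differences (not the Chebyshev method
# used in the text, so the numbers are only illustrative).
n = 200                              # number of grid intervals
x = np.linspace(-1.0, 1.0, n + 1)
h = x[1] - x[0]

y = np.ones(n + 1)                   # initial approximation y_0(x) = 1
for k in range(6):
    yk = y.copy()
    # Assemble the linear system A @ y = rhs for the linearized equation
    # y'' - 2*y_k*y = -y_k^2 on the interior points, with boundary rows fixed.
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    rhs[0] = rhs[n] = 1.0            # boundary conditions y(-1) = y(1) = 1
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - 2.0 * yk[i]
        rhs[i] = -yk[i] ** 2
    y = np.linalg.solve(A, rhs)
    print(f"iteration {k + 1}: max update = {np.max(np.abs(y - yk)):.2e}")

print("y(0) ~", y[n // 2])           # midpoint value of the converged approximation
```

Starting from the constant initial approximation, the printed update sizes shrink roughly quadratically from one iteration to the next, which is the convergence behaviour described in the abstract formulation.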
[ { "math_id": 0, "text": "u_1" }, { "math_id": 1, "text": "u_2" }, { "math_id": 2, "text": " \\frac{d^2}{dx^2} y(x) = y^2(x), " }, { "math_id": 3, "text": "y(-1) = 1" }, { "math_id": 4, "text": "y(1)=1" }, { "math_id": 5, "text": " y(x) = 6\\wp( x-\\alpha | 0, \\beta )" }, { "math_id": 6, "text": " g_2 = 0 " }, { "math_id": 7, "text": " g_3 = \\beta " }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\beta" }, { "math_id": 10, "text": " \\alpha " }, { "math_id": 11, "text": " \\beta " }, { "math_id": 12, "text": " 6\\wp(-1-\\alpha|0,\\beta) = 1" }, { "math_id": 13, "text": " 6\\wp(1-\\alpha|0,\\beta) = 1" }, { "math_id": 14, "text": "y_k(x)" }, { "math_id": 15, "text": " L(\\varepsilon) = \\frac{d^2}{dx^2}\\varepsilon(x) - 2 y_k(x) \\varepsilon(x). " }, { "math_id": 16, "text": " y_0(x) = 1 " }, { "math_id": 17, "text": " -1 \\le x \\le 1 " }, { "math_id": 18, "text": " n=21 " }, { "math_id": 19, "text": " x_k = \\cos( \\pi (n-1-k)/(n-1) )" }, { "math_id": 20, "text": " k = 0, 1, \\cdots, n-1 " }, { "math_id": 21, "text": " 5 \\cdot 10^{-9} " }, { "math_id": 22, "text": " y_3(x) " }, { "math_id": 23, "text": " \\frac{d^2}{dx^2}y(x) - y^2(x) = 5 \\cdot 10^{-9} v(x) " }, { "math_id": 24, "text": " |v(x)|" }, { "math_id": 25, "text": "6\\cdot\\wp( x-\\alpha | 0, \\beta )" }, { "math_id": 26, "text": " \\{\\alpha \\approx 3.524459420, \\beta \\approx 0.006691372637\\}." }, { "math_id": 27, "text": "\\{\\alpha \\approx 2.55347391110, \\beta \\approx - 1.24923895273\\}. " }, { "math_id": 28, "text": " u_2" }, { "math_id": 29, "text": "y(x)" }, { "math_id": 30, "text": "x=\\alpha" }, { "math_id": 31, "text": " y_0 = 5x^2-4" } ]
https://en.wikipedia.org/wiki?curid=70238981
7023931
Plasma beta
Characteristic quantity of plasmas The beta of a plasma, symbolized by β, is the ratio of the plasma pressure ("p" = "n""k"B"T") to the magnetic pressure ("p"mag = "B"2/2"μ"0). The term is commonly used in studies of the Sun and Earth's magnetic field, and in the field of fusion power designs. In the fusion power field, plasma is often confined using strong magnets. Since the temperature of the fuel scales with pressure, reactors attempt to reach the highest pressures possible. The costs of large magnets roughly scales like "β"1/2. Therefore, beta can be thought of as a ratio of money out to money in for a reactor, and beta can be thought of (very approximately) as an economic indicator of reactor efficiency. For tokamaks, betas of larger than 0.05 or 5% are desired for economically viable electrical production. The same term is also used when discussing the interactions of the solar wind with various magnetic fields. For example, beta in the corona of the Sun is about 0.01. Background. Fusion basics. Nuclear fusion occurs when the nuclei of two atoms approach closely enough for the nuclear force to pull them together into a single larger nucleus. The strong force is opposed by the electrostatic force created by the positive charge of the nuclei's protons, pushing the nuclei apart. The amount of energy that is needed to overcome this repulsion is known as the Coulomb barrier. The amount of energy released by the fusion reaction when it occurs may be greater or less than the Coulomb barrier. Generally, lighter nuclei with a smaller number of protons and greater number of neutrons will have the greatest ratio of energy released to energy required, and the majority of fusion power research focusses on the use of deuterium and tritium, two isotopes of hydrogen. Even using these isotopes, the Coulomb barrier is large enough that the nuclei must be given great amounts of energy before they will fuse. Although there are a number of ways to do this, the simplest is to heat the gas mixture, which, according to the Maxwell–Boltzmann distribution, will result in a small number of particles with the required energy even when the gas as a whole is relatively "cool" compared to the Coulomb barrier energy. In the case of the D-T mixture, rapid fusion will occur when the gas is heated to about 100 million degrees. Confinement. This temperature is well beyond the physical limits of any material container that might contain the gases, which has led to a number of different approaches to solving this problem. The main approach relies on the nature of the fuel at high temperatures. When the fusion fuel gasses are heated to the temperatures required for rapid fusion, they will be completely ionized into a plasma, a mixture of electrons and nuclei forming a globally neutral gas. As the particles within the gas are charged, this allows them to be manipulated by electric or magnetic fields. This gives rise to the majority of controlled fusion concepts. Even if this temperature is reached, the gas will be constantly losing energy to its surroundings (cooling off). This gives rise to the concept of the "confinement time", the amount of time the plasma is maintained at the required temperature. However, the fusion reactions might deposit their energy back into the plasma, heating it back up, which is a function of the density of the plasma. These considerations are combined in the Lawson criterion, or its modern form, the fusion triple product. 
In order to be efficient, the rate of fusion energy being deposited into the reactor would ideally be greater than the rate of loss to the surroundings, a condition known as "ignition". Magnetic confinement fusion approach. In magnetic confinement fusion (MCF) reactor designs, the plasma is confined within a vacuum chamber using a series of magnetic fields. These fields are normally created using a combination of electromagnets and electrical currents running through the plasma itself. Systems using only magnets are generally built using the stellarator approach, while those using current only are the pinch machines. The most studied approach since the 1970s is the tokamak, where the fields generated by the external magnets and internal current are roughly equal in magnitude. In all of these machines, the density of the particles in the plasma is very low, often described as a "poor vacuum". This limits its approach to the triple product along the temperature and time axis. This requires magnetic fields on the order of tens of Teslas, currents in the megaampere, and confinement times on the order of tens of seconds. Generating currents of this magnitude is relatively simple, and a number of devices from large banks of capacitors to homopolar generators have been used. However, generating the required magnetic fields is another issue, generally requiring expensive superconducting magnets. For any given reactor design, the cost is generally dominated by the cost of the magnets. Beta. Given that the magnets are a dominant factor in reactor design, and that density and temperature combine to produce pressure, the ratio of the pressure of the plasma to the magnetic energy density naturally becomes a useful figure of merit when comparing MCF designs. Plainly, the higher the beta value, the more economically viable the design is and further the higher Q value the design possibly has. In effect, the ratio illustrates how effectively a design confines its plasma. This ratio, beta, is widely used in the fusion field: formula_0 formula_1 is normally measured in terms of the total magnetic field. However, in any real-world design, the strength of the field varies over the volume of the plasma, so to be specific, the average beta is sometimes referred to as the "beta toroidal". In the tokamak design the total field is a combination of the external toroidal field and the current-induced poloidal one, so the "beta poloidal" is sometimes used to compare the relative strengths of these fields. And as the external magnetic field is the driver of reactor cost, "beta external" is used to consider just this contribution. Troyon beta limit. In a tokamak, for a stable plasma, formula_1 is always much smaller than 1 (otherwise thermal pressure would cause the plasma to grow and move in the vacuum chamber until confinement is lost). Ideally, a MCF device would want to have as high beta as possible, as this would imply the minimum amount of magnetic force needed for confinement. In practice, most tokamaks operate at beta of order 0.01, or 1%. Spherical tokamaks typically operate at beta values an order of magnitude higher. The record was set by the START device at 0.4, or 40%. These low achievable betas are due to instabilities in the plasma generated through the interaction of the fields and the motion of the particles due to the induced current. As the amount of current is increased in relation to the external field, these instabilities become uncontrollable. 
In early pinch experiments the current dominated the field components and the kink and sausage instabilities were common, today collectively referred to as "low-n instabilities". As the relative strength of the external magnetic field is increased, these simple instabilities are damped out, but at a critical field other "high-n instabilities" will invariably appear, notably the ballooning mode. For any given fusion reactor design, there is a limit to the beta it can sustain. As beta is a measure of economic merit, a practical tokamak based fusion reactor must be able to sustain a beta above some critical value, which is calculated to be around 5%. Through the 1980s the understanding of the high-n instabilities grew considerably. Shafranov and Yurchenko first published on the issue in 1971 in a general discussion of tokamak design, but it was the work by Wesson and Sykes in 1983 and Francis Troyon in 1984 that developed these concepts fully. Troyon's considerations, or the "Troyon limit", closely matched the real-world performance of existing machines. It has since become so widely used that it is often known simply as "the" beta limit in tokamaks. The Troyon limit is given as: formula_2 where "I" is the plasma current, formula_3 is the external magnetic field, and a is the minor radius of the tokamak (see torus for an explanation of the directions). formula_4 was determined numerically, and is normally given as 0.028 if "I" is measured in megaamperes. However, it is also common to use 2.8 if formula_5 is expressed as a percentage. Given that the Troyon limit suggested a formula_5 around 2.5 to 4%, and a practical reactor had to have a formula_5 around 5%, the Troyon limit was a serious concern when it was introduced. However, it was found that formula_4 changed dramatically with the shape of the plasma, and non-circular systems would have much better performance. Experiments on the DIII-D machine (the second D referring to the cross-sectional shape of the plasma) demonstrated higher performance, and the spherical tokamak design outperformed the Troyon limit by about 10 times. Astrophysics. Beta is also sometimes used when discussing the interaction of plasma in space with different magnetic fields. A common example is the interaction of the solar wind with the magnetic fields of the Sun or Earth. In this case, the betas of these natural phenomena are generally much smaller than those seen in reactor designs; the Sun's corona has a beta around 1%. Active regions have much higher beta, over 1 in some cases, which makes the area unstable. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
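As a rough numerical illustration of the quantities discussed in this article, the sketch below computes a plasma beta from density, temperature and field strength, and then evaluates the Troyon scaling. The machine parameters are hypothetical, chosen only so the outputs land in the ranges quoted in the text; this is not data from any particular device.

```python
import math

K_B = 1.380649e-23           # Boltzmann constant, J/K
MU_0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A

def plasma_beta(n, T, B):
    """Ratio of thermal pressure n*k_B*T to magnetic pressure B^2/(2*mu_0)."""
    return (n * K_B * T) / (B ** 2 / (2 * MU_0))

def troyon_beta_max(I_MA, a, B0, beta_N=0.028):
    """Troyon limit beta_max = beta_N * I / (a * B0), with I in megaamperes."""
    return beta_N * I_MA / (a * B0)

# Hypothetical tokamak-like numbers, for illustration only
n = 1e20      # particle density, m^-3
T = 1.5e8     # temperature, K (roughly 13 keV)
B = 5.0       # total field, T
print(f"beta         = {plasma_beta(n, T, B):.4f}")                      # about 0.02
print(f"Troyon limit = {troyon_beta_max(I_MA=15, a=2.0, B0=5.3):.4f}")   # about 0.04
```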
[ { "math_id": 0, "text": "\\beta = \\frac{p}{p_\\text{mag}} = \\frac{n k_B T}{B^2/2\\mu_0}" }, { "math_id": 1, "text": "\\beta" }, { "math_id": 2, "text": "\\beta_\\text{max} = \\frac{\\beta_N I}{a B_0}" }, { "math_id": 3, "text": "B_0" }, { "math_id": 4, "text": "\\beta_N" }, { "math_id": 5, "text": "\\beta_\\text{max}" } ]
https://en.wikipedia.org/wiki?curid=7023931
7024176
Casson handle
In 4-dimensional topology, a branch of mathematics, a Casson handle is a 4-dimensional topological 2-handle constructed by an infinite procedure. They are named for Andrew Casson, who introduced them in about 1973. They were originally called "flexible handles" by Casson himself, and Michael Freedman (1982) introduced the name "Casson handle" by which they are known today. In that work he showed that Casson handles are topological 2-handles, and used this to classify simply connected compact topological 4-manifolds. Motivation. In the proof of the h-cobordism theorem, the following construction is used. Given a circle in the boundary of a manifold, we would often like to find a disk embedded in the manifold whose boundary is the given circle. If the manifold is simply connected then we can find a map from a disc to the manifold with boundary the given circle, and if the manifold is of dimension at least 5 then by putting this disc in "general position" it becomes an embedding. The number 5 appears for the following reason: submanifolds of dimension "m" and "n" in general position do not intersect provided the dimension of the manifold containing them has dimension greater than formula_0. In particular, a disc (of dimension 2) in general position will have no self intersections inside a manifold of dimension greater than 2+2. If the manifold is 4 dimensional, this does not work: the problem is that a disc in general position may have double points where two points of the disc have the same image. This is the main reason why the usual proof of the h-cobordism theorem only works for cobordisms whose boundary has dimension at least 5. We can try to get rid of these double points as follows. Draw a line on the disc joining two points with the same image. If the image of this line is the boundary of an embedded disc (called a Whitney disc), then it is easy to remove the double point. However this argument seems to be going round in circles: in order to eliminate a double point of the first disc, we need to construct a second embedded disc, whose construction involves exactly the same problem of eliminating double points. Casson's idea was to iterate this construction an infinite number of times, in the hope that the problems about double points will somehow disappear in the infinite limit. Construction. A Casson handle has a 2-dimensional skeleton, which can be constructed as follows. We can represent these skeletons by rooted trees such that each point is joined to only a finite number of other points: the tree has a point for each disc, and a line joining points if the corresponding discs intersect in the skeleton. A Casson handle is constructed by "thickening" the 2-dimensional construction above to give a 4-dimensional object: we replace each disc formula_1 by a copy of formula_2. Informally we can think of this as taking a small neighborhood of the skeleton (thought of as embedded in some 4-manifold). There are some minor extra subtleties in doing this: we need to keep track of some framings, and intersection points now have an orientation. Casson handles correspond to rooted trees as above, except that now each vertex has a sign attached to it to indicate the orientation of the double point. We may as well assume that the tree has no finite branches, as finite branches can be "unravelled" so make no difference. The simplest exotic Casson handle corresponds to the tree which is just a half infinite line of points (with all signs the same). 
It is diffeomorphic to formula_3 with a cone over the Whitehead continuum removed. There is a similar description of more complicated Casson handles, with the Whitehead continuum replaced by a similar but more complicated set. Structure. Freedman's main theorem about Casson handles states that they are all homeomorphic to formula_2; or in other words they are topological 2-handles. In general they are not diffeomorphic to formula_2 as follows from Donaldson's theorem, and there are an uncountable infinite number of different diffeomorphism types of Casson handles. However the interior of a Casson handle is diffeomorphic to formula_4; Casson handles differ from standard 2 handles only in the way the boundary is attached to the interior. Freedman's structure theorem can be used to prove the h-cobordism theorem for 5-dimensional topological cobordisms, which in turn implies the 4-dimensional topological Poincaré conjecture.
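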
[ { "math_id": 0, "text": "m+n" }, { "math_id": 1, "text": "D^2" }, { "math_id": 2, "text": "D^2\\times \\R^2" }, { "math_id": 3, "text": "D^2\\times D^2" }, { "math_id": 4, "text": "\\R^4" } ]
https://en.wikipedia.org/wiki?curid=7024176
70247574
Judges 4
Book of Judges, chapter 4 Judges 4 is the fourth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records the activities of judge Deborah, belonging to a section comprising to . Text. This chapter was originally written in the Hebrew language. It is divided into 24 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including XJudges (XJudg, X6; 50 BCE) with extant verses 5–8. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Deborah (4:1–16). This chapter opens with the conventional narrative pattern of the book, connecting with Ehud without reference to Shamgar (who is later mentioned in Judges 5), to introduce Deborah the prophet as the savior (verse 4), after Israel's formulaic cry to God for relief from oppression. Deborah delivered military instructions received directly from God to Barak, the apparent leader of the Israelites, to confront the army of Jabin, led by Sisera (his general), and thereby showing that YHWH is the ultimate military commander in the holy wars fought by his people. The structure of the section from verses 6–16 is as follows: A The command of Deborah and the response of Barak (4:6–9) a. Deborah commands Barak to gather an army and assures him of victory (4:6–7) b. Barak requires Deborah's presence (4:8) c. Barak wins his request but loses glory (4:9) B Barak deploys the troops (4:10) a. Barak calls ("z'q") the troops to Kedesh (4:10a1) b. Barak goes up ("ʼlh") with the troops (4:10a2–b) B' Sisera deploys the troops (4:12–13) a. 
Sisera hears that Barak has gone up ("ʼlh") (4:12) b. Sisera calls ("z'q") the troops to Wadi-Kishon (4:13) A' The command of Deborah and the response of Barak (4:14–16) a. Deborah commands Barak to go into battle and assures him of victory (4:14a) b. Barak goes down to fight (4:14b) c. Barak wins the battle but loses Sisera (4:15–16) In verses 12-16, the pattern of Israel's redemption is completed with the underdogs' victory as prophesied by the prophetess. "And Deborah, a prophetess, the wife of Lapidoth, she judged Israel at that time." Jael kills Sisera (4:17–24). The structure of this section is: Sisera came to Jael's tent (4:17) A Jael entreats Sisera to come into her tent (4:18a) B Sisera enters asking for aid (4:18b–20) C Jael kills Sisera (4:21) Barak came to Jael's tent (4:22a1) A' Jael entreats Barak to come into her tent (4:22a2) B' Barak responds by entering (4:22b1) C' Jael presents the slain Sisera to Barak (4:22b2) In this section, Sisera was looking for a place to hide from Israelite pursuers and by chance came to Jael's tent. Jael intentionally went out to meet Sisera and tricked him into thinking that she could provide service (cf. Ehud to Eglon in Judges 3). Sisera asked for water, but Jael demonstrated ancient Near Eastern hospitality by instead giving him milk ("Jael" ( "Yāʿēl") means "mountain goat" ("ibex"); perhaps she gave Sisera goat's milk) and covering him up to sleep, whereupon Jael struck him dead with a tent-peg and hammer. The action was sung with some detail and nuance in the ancient poem of Judges 5 verse 22, as the fulfilment of Deborah's prediction (4:9). The last two verses (23–24) contain a reminder that YHWH controls the battle and gives relief from Israel's oppressors. "And he said to her, "Stand at the door of the tent, and if any man comes and inquires of you, and says, 'Is there any man here?' you shall say, 'No.' "" Verse 20. The last words of Sisera to Jael (before Sisera was killed by Jael) contain an irony, with the play of the word "any man" (Hebrew "ʼiš"): the first use refers to the one coming to the tent, which was Barak, whereas the second use refers to the one in the tent, which was Sisera, and the answer should be "No", because Sisera would no longer be alive by the time Barak came. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70247574
70253847
Bekić's theorem
Theorem about fixed points of multiple variables In computability theory, Bekić's theorem or Bekić's lemma is a theorem about fixed-points which allows splitting a mutual recursion into recursions on one variable at a time. It was created by Austrian Hans Bekić (1936-1982) in 1969, and published posthumously in a book by Cliff Jones in 1984. The theorem is set up as follows. Consider two operators formula_0 and formula_1 on pointed directed-complete partial orders formula_2 and formula_3, continuous in each component. Then define the operator formula_4. This is monotone with respect to the product order (componentwise order). By the Kleene fixed-point theorem, it has a least fixed point formula_5, a pair formula_6 in formula_7 such that formula_8 and formula_9. Bekić's theorem (called the "bisection lemma" in his notes) is that the simultaneous least fixed point formula_10 can be separated into a series of least fixed points on formula_2 and formula_3, in particular: formula_11 In this presentation formula_12 is defined in terms of formula_13. It can instead be defined in a symmetric presentation: formula_14 Proof (Bekić): formula_15 since it is the fixed point. Similarly formula_16. Hence formula_17 is a fixed point of formula_18. Conversely, if there is a pre-fixed point formula_19 with formula_20, then formula_21 and formula_22; hence formula_23 and formula_17 is the minimal fixed point. Variants. In a complete lattice. A variant of the theorem strengthens the conditions on formula_2 and formula_3 to be that they are complete lattices, and finds the least fixed point using the Knaster–Tarski theorem. The requirement for continuity of formula_24 and formula_25 can then be weakened to only requiring them to be monotonic functions. Categorical formulation. Bekić's lemma has been generalized to fix-points of endofunctors of categories (initial formula_26-algebras). Given two functors formula_27 and formula_28 such that all formula_29 and formula_30 exist, the fix-point formula_31 is carried by the pair: formula_32 Usage. Bekić's theorem can be applied repeatedly to find the least fixed point of a tuple in terms of least fixed points of single variables. Although the resulting expression might become rather complex, it can be easier to reason about fixed points of single variables when designing an automated theorem prover. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
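As an illustration of the statement of the theorem, the following sketch checks the decomposition on a small finite chain, where least fixed points can be found by Kleene iteration from the bottom element. The domain and the two monotone maps are made up for the example; they are not taken from the sources cited here.

```python
def kleene_lfp(f, bottom):
    """Least fixed point of a monotone map on a finite pointed order,
    found by iterating bottom, f(bottom), f(f(bottom)), ... until stable."""
    x = bottom
    while True:
        fx = f(x)
        if fx == x:
            return x
        x = fx

# Illustrative finite chains P = Q = {0, 1, ..., 5} with bottom element 0.
def f(x, y):
    return min(y + 1, 5)

def g(x, y):
    return min(x + 1, 4)

# Simultaneous least fixed point of (f, g) on the product P x Q.
simultaneous = kleene_lfp(lambda p: (f(*p), g(*p)), (0, 0))

# Bekic's decomposition: nest the fixed points one variable at a time.
x0 = kleene_lfp(lambda x: f(x, kleene_lfp(lambda y: g(x, y), 0)), 0)
y0 = kleene_lfp(lambda y: g(x0, y), 0)

assert simultaneous == (x0, y0)   # both routes give (5, 4)
print(simultaneous)
```

Both routes, the simultaneous iteration on the product order and the nested one-variable iterations, arrive at the same pair, which is what the theorem asserts.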
[ { "math_id": 0, "text": "f : P \\times Q \\to P" }, { "math_id": 1, "text": "g : P \\times Q \\to Q" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "(f, g)(x, y) = (f (x, y), g(x, y))" }, { "math_id": 5, "text": "\\mu (x, y).(f, g)(x, y)" }, { "math_id": 6, "text": "(x_0, y_0)" }, { "math_id": 7, "text": "P \\times Q" }, { "math_id": 8, "text": "f (x_0, y_0) = x_0" }, { "math_id": 9, "text": "g(x_0, y_0) = y_0" }, { "math_id": 10, "text": "\\mu (x, y).(f, g)(x, y) = (x_0,y_0)" }, { "math_id": 11, "text": "\n\\begin{align}\nx_0 &= \\mu x . f ( x , \\mu y . g(x,y) ) \\\\\ny_0 &= \\mu y . g ( x_0 , y )\n\\end{align}\n" }, { "math_id": 12, "text": "y_0" }, { "math_id": 13, "text": "x_0" }, { "math_id": 14, "text": "\n\\begin{align}\nx_0 &= \\mu x . f ( x , \\mu y . g(x,y) ) \\\\\ny_0 &= \\mu y . g ( \\mu x . f(x,y), y )\n\\end{align}\n" }, { "math_id": 15, "text": "y_0 = g(x_0,y_0)" }, { "math_id": 16, "text": "x_0 = f ( x_0 , \\mu y . g(x_0,y) ) = f(x_0,y_0)" }, { "math_id": 17, "text": "(x_0,y_0)" }, { "math_id": 18, "text": "(f,g)" }, { "math_id": 19, "text": "(x_1,y_1)" }, { "math_id": 20, "text": "(x_1,y_1) \\geq (f(x_1,y_1),g(x_1,y_1))" }, { "math_id": 21, "text": "x_1 \\geq f ( x_1 , \\mu y . g(x_1,y)) \\geq x_0" }, { "math_id": 22, "text": "y_1 \\geq \\mu y . g ( x_1 , y ) \\geq \\mu y . g ( x_0 , y ) = y_0" }, { "math_id": 23, "text": "(x_1,y_1) \\geq (x_0,y_0)" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "g" }, { "math_id": 26, "text": "F" }, { "math_id": 27, "text": "F : \\mathbf{C} \\times \\mathbf{D} \\to \\mathbf{C}" }, { "math_id": 28, "text": "G : \\mathbf{C} \\times \\mathbf{D} \\to \\mathbf{D}" }, { "math_id": 29, "text": "\\mu X.F(X,Y)" }, { "math_id": 30, "text": "\\mu Y.G(X,Y)" }, { "math_id": 31, "text": "\\mu \\langle X,Y\\rangle .\\langle F(X,Y),G(X,Y)\\rangle " }, { "math_id": 32, "text": "\n\\begin{align}\nX_0 &= \\mu X . F ( X , \\mu Y . G(X,Y) ) \\\\\nY_0 &= \\mu Y . G ( X_0 , Y )\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=70253847
70257
Growth accounting
Growth accounting is a procedure used in economics to measure the contribution of different factors to economic growth and to indirectly compute the rate of technological progress, measured as a residual, in an economy. Growth accounting decomposes the growth rate of an economy's total output into that which is due to increases in the contributing amount of the factors used—usually the increase in the amount of capital and labor—and that which cannot be accounted for by observable changes in factor utilization. The unexplained part of growth in GDP is then taken to represent increases in productivity (getting more output with the same amounts of inputs) or a measure of broadly defined technological progress. The technique has been applied to virtually every economy in the world and a common finding is that observed levels of economic growth cannot be explained simply by changes in the stock of capital in the economy or population and labor force growth rates. Hence, technological progress plays a key role in the economic growth of nations, or the lack of it. History. This methodology was introduced by Robert Solow and Trevor Swan in 1957. Growth accounting was also proposed for management accounting in the 1980s, but it did not catch on as a management tool. The reason is clear. The production functions are understood and formulated differently in growth accounting and management accounting. In growth accounting the production function is formulated as a function OUTPUT=F(INPUT), a formulation that leads to maximizing the average productivity ratio OUTPUT/INPUT. Average productivity has never been accepted in management accounting (in business) as a performance criterion or an objective to be maximized because it would mean the end of a profitable business. Instead, the production function is formulated as a function INCOME=F(OUTPUT-INPUT), which is to be maximized. The name of the game is to maximize income, not to maximize productivity or production. Abstract example. The growth accounting model is normally expressed in the form of the exponential growth function. As an abstract example consider an economy whose total output (GDP) grows at 3% per year. Over the same period its capital stock grows at 6% per year and its labor force by 1%. The contribution of the growth rate of capital to output is equal to that growth rate weighted by the share of capital in total output, and the contribution of labor is given by the growth rate of labor weighted by labor's share in income. If capital's share in output is 1⁄3, then labor's share is 2⁄3 (assuming these are the only two factors of production). This means that the portion of growth in output which is due to changes in factors is .06×(1⁄3)+.01×(2⁄3)=.027 or 2.7%. This leaves 0.3% of the growth in output that cannot be accounted for. This remainder is the increase in the productivity of factors that happened over the period, or the measure of technological progress during this time. Specific example. Growth accounting can also be expressed in the form of the arithmetical model, which is used here because it is more descriptive and understandable. The principle of the accounting model is simple. The weighted growth rates of inputs (factors of production) are subtracted from the weighted growth rates of outputs.
Because the accounting result is obtained by subtracting it is often called a "residual". The residual is often defined as the growth rate of output not explained by the share-weighted growth rates of the inputs. We can use the real process data of the production model in order to show the logic of the growth accounting model and identify possible differences in relation to the productivity model. When the production data is the same in the model comparison the differences in the accounting results are only due to accounting models. We get the following growth accounting from the production data. The growth accounting procedure proceeds as follows. First is calculated the growth rates for the output and the inputs by dividing the Period 2 numbers with the Period 1 numbers. Then the weights of inputs are computed as input shares of the total input (Period 1). Weighted growth rates (WG) are obtained by weighting growth rates with the weights. The accounting result is obtained by subtracting the weighted growth rates of the inputs from the growth rate of the output. In this case the accounting result is 0.015 which implies a productivity growth by 1.5%. We note that the productivity model reports a 1.4% productivity growth from the same production data. The difference (1.4% versus 1.5%) is caused by the different production volume used in the models. In the productivity model the input volume is used as a production volume measure giving the growth rate 1.063. In this case productivity is defined as follows: output volume per one unit of input volume. In the growth accounting model the output volume is used as a production volume measure giving the growth rate 1.078. In this case productivity is defined as follows: input consumption per one unit of output volume. The case can be verified easily with the aid of productivity model using output as a production volume. The accounting result of the growth accounting model is expressed as an index number, in this example 1.015, which depicts the average productivity change. As demonstrated above we cannot draw correct conclusions based on average productivity numbers. This is due to the fact that productivity is accounted as an independent variable separated from the entity it belongs to, i.e. real income formation. Hence, if we compare in a practical situation two growth accounting results of the same production process we do not know which one is better in terms of production performance. We have to know separately income effects of productivity change and production volume change or their combined income effect in order to understand which one result is better and how much better. This kind of scientific mistake of wrong analysis level has been recognized and described long ago. Vygotsky cautions against the risk of separating the issue under review from the total environment, the entity of which the issue is an essential part. By studying only this isolated issue we are likely to end up with incorrect conclusions. A second practical example illustrates this warning. Let us assume we are studying the properties of water in putting out a fire. If we focus the review on small components of the whole, in this case the elements oxygen and hydrogen, we come to the conclusion that hydrogen is an explosive gas and oxygen is a catalyst in combustion. Therefore, their compound water could be explosive and unsuitable for putting out a fire. This incorrect conclusion arises from the fact that the components have been separated from the entity. 
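The arithmetic procedure described above can be sketched as follows. Since the original table of production data is not reproduced here, the figures below are hypothetical placeholders, so the printed residual will not be the 1.5% quoted in the text; only the mechanics of the calculation are illustrated.

```python
def growth_accounting_residual(output, inputs):
    """Arithmetic growth-accounting model, following the text's convention:
    growth rates are Period 2 / Period 1 ratios, input weights are input
    shares of total input in Period 1, and the weighted input growth rates
    are subtracted from the output growth rate.
    output -- (period1, period2) values of total output
    inputs -- dict name -> (period1, period2) values of each input"""
    output_growth = output[1] / output[0]
    total_input_p1 = sum(v[0] for v in inputs.values())
    residual = output_growth
    for p1, p2 in inputs.values():
        weight = p1 / total_input_p1      # input share of total input (Period 1)
        residual -= weight * (p2 / p1)    # subtract the weighted growth rate
    return residual

# Hypothetical production data: (Period 1, Period 2)
output = (1000.0, 1078.0)
inputs = {"labour": (400.0, 420.0), "materials": (350.0, 370.0), "capital": (200.0, 215.0)}
print(f"residual: {growth_accounting_residual(output, inputs):+.3f}")   # about +0.020 here
```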
Technical derivation. The total output of an economy is modeled as being produced by various factors of production, with capital and labor being the primary ones in modern economies (although land and natural resources can also be included). This is usually captured by an aggregate production function: formula_0 where Y is total output, K is the stock of capital in the economy, L is the labor force (or population) and A is a "catch all" factor for technology, role of institutions and other relevant forces which measures how productively capital and labor are used in production. Standard assumptions on the form of the function F(.) is that it is increasing in K, L, A (if you increase productivity or you increase the number of factors used you get more output) and that it is homogeneous of degree one, or in other words that there are constant returns to scale (which means that if you double both K and L you get double the output). The assumption of constant returns to scale facilitates the assumption of perfect competition which in turn implies that factors get their marginal products: formula_1 formula_2 where MPK denotes the extra units of output produced with an additional unit of capital and similarly, for MPL. Wages paid to labor are denoted by w and the rate of profit or the real interest rate is denoted by r. Note that the assumption of perfect competition enables us to take prices as given. For simplicity we assume unit price (i.e. P =1), and thus quantities also represent values in all equations. If we totally differentiate the above production function we get; formula_3 where formula_4 denotes the partial derivative with respect to factor i, or for the case of capital and labor, the marginal products. With perfect competition this equation becomes: formula_5 If we divide through by Y and convert each change into growth rates we get: formula_6 or denoting a growth rate (percentage change over time) of a factor as formula_7 we get: formula_8 Then formula_9 is the share of total income that goes to capital, which can be denoted as formula_10 and formula_11 is the share of total income that goes to labor, denoted by formula_12. This allows us to express the above equation as: formula_13 In principle the terms formula_10, formula_14, formula_15 and formula_16 are all observable and can be measured using standard national income accounting methods (with capital stock being measured using investment rates via the perpetual inventory method). The term formula_17 however is not directly observable as it captures technological growth and improvement in productivity that are unrelated to changes in use of factors. This term is usually referred to as Solow residual or Total factor productivity growth. Slightly rearranging the previous equation we can measure this as that portion of increase in total output which is not due to the (weighted) growth of factor inputs: formula_18 Another way to express the same idea is in per capita (or per worker) terms in which we subtract off the growth rate of labor force from both sides: formula_19 which states that the rate of technological growth is that part of the growth rate of per capita income which is not due to the (weighted) growth rate of capital per person. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
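A minimal sketch of the Solow-residual formula just derived, applied to the numbers of the abstract example given earlier in the article (3% output growth, 6% capital growth, 1% labour growth, capital share 1/3):

```python
def solow_residual(g_Y, g_K, g_L, alpha):
    """Share-weighted growth accounting: residual = g_Y - alpha*g_K - (1-alpha)*g_L."""
    return g_Y - alpha * g_K - (1 - alpha) * g_L

g_Y, g_K, g_L, alpha = 0.03, 0.06, 0.01, 1 / 3
res = solow_residual(g_Y, g_K, g_L, alpha)
print(f"Solow residual: {res:.4f}")        # 0.0033, i.e. roughly 0.3% per year

# Equivalent per-capita form: g_(Y/L) - alpha * g_(K/L)
per_capita = (g_Y - g_L) - alpha * (g_K - g_L)
assert abs(per_capita - res) < 1e-12
```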
[ { "math_id": 0, "text": "Y=F(A,K,L)" }, { "math_id": 1, "text": "{dY}/{dK}=MPK=r" }, { "math_id": 2, "text": "{dY}/{dL}=MPL=w" }, { "math_id": 3, "text": "dY=F_A dA+F_K dK+F_L dL" }, { "math_id": 4, "text": "F_i" }, { "math_id": 5, "text": "dY=F_A dA+MPK dK+MPL dL=F_A dA+r dK+w dL" }, { "math_id": 6, "text": "{dY}/{Y}=({F_A}A/{Y})({dA}/{A})+(r{K}/{Y})*({dK}/{K})+(w{L}/{Y})*({dL}/{L})" }, { "math_id": 7, "text": "g_i={di}/{i}" }, { "math_id": 8, "text": "g_Y=({F_A}A/{Y})*g_A+({rK}/{Y})*g_K+({wL}/{Y})*g_L" }, { "math_id": 9, "text": "{rK}/{Y}" }, { "math_id": 10, "text": "\\alpha" }, { "math_id": 11, "text": "{wL}/{Y}" }, { "math_id": 12, "text": "1-\\alpha" }, { "math_id": 13, "text": "g_Y={F_A}A/{Y}*g_A+\\alpha*g_K+(1-\\alpha)*g_L" }, { "math_id": 14, "text": "g_Y" }, { "math_id": 15, "text": "g_K" }, { "math_id": 16, "text": "g_L" }, { "math_id": 17, "text": "\\frac {F_A A} {Y}*g_A" }, { "math_id": 18, "text": "Solow Residual = g_Y-\\alpha*g_K-(1-\\alpha)*g_L" }, { "math_id": 19, "text": "Solow Residual=g_{(Y/L)}-\\alpha*g_{(K/L)}" } ]
https://en.wikipedia.org/wiki?curid=70257
70258442
Spiral hashing
Spiral hashing, also known as Spiral Storage, is an extensible hashing algorithm. As in all hashing schemes, spiral hashing stores records in a varying number of buckets, using a record key for addressing. In an expanding Linear hashing file, buckets are split in a predefined order. This results in adding a new bucket at the end of the file. While this allows gradual reorganization of the file, the expected number of records in the newly created bucket and in the bucket from which it was split falls to half the previous number. Several attempts were made to alleviate this sudden drop in space utilization. Martin's spiral storage uses a different approach. The file consists of a number of continuously numbered buckets. The lower-numbered (left) buckets have a higher expected number of records. When the file expands, the left-most bucket is replaced by two buckets on the right. Some variants of this idea exist. Spiral hashing requires a uniform hash function of the keys of the records into the unit interval formula_0. If the hash file starts at bucket formula_1, the key formula_2 is mapped into a real number formula_3. The final address is then computed as formula_4 where formula_5 is the "extension factor". When formula_1 is incremented, approximately formula_5 new buckets are created on the right. Larson conducted experiments that showed that Linear hashing still had superior performance over Spiral Hashing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
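A minimal sketch of the addressing rule described above; the hash-to-unit-interval function and the parameter values are illustrative choices, not part of any particular implementation.

```python
import hashlib
import math

def unit_hash(key: str) -> float:
    """Roughly uniform hash of a key into [0, 1)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return (h % 10**12) / 10**12

def spiral_address(key: str, S: float, d: float) -> int:
    """Spiral hashing address: floor(d**x) with x = S + h(key) in [S, S+1]."""
    x = S + unit_hash(key)
    return math.floor(d ** x)

S, d = 3.0, 2.0          # current start point and extension factor (illustrative)
for k in ["alpha", "beta", "gamma"]:
    print(k, "->", spiral_address(k, S, d))   # addresses fall in [d**S, d**(S+1))

# Expanding the file: increasing S retires the leftmost (lowest-numbered) bucket
# and creates new, higher-numbered buckets on the right.
```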
[ { "math_id": 0, "text": "[0,1]" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "x=S+h(k) \\in[S, S+1]" }, { "math_id": 4, "text": "\\lfloor d^x \\rfloor" }, { "math_id": 5, "text": "d" } ]
https://en.wikipedia.org/wiki?curid=70258442
7025898
Quantum cloning
Process of copying a quantum state with no modification of the original Quantum cloning is a process that takes an arbitrary, unknown quantum state and makes an exact copy without altering the original state in any way. Quantum cloning is forbidden by the laws of quantum mechanics as shown by the no cloning theorem, which states that there is no operation for cloning any arbitrary state formula_0 perfectly. In Dirac notation, the process of quantum cloning is described by: formula_1 where formula_2 is the actual cloning operation, formula_0 is the state to be cloned, and formula_3 is the initial state of the copy. Though perfect quantum cloning is not possible, it is possible to perform imperfect cloning, where the copies have a non-unit (i.e. non-perfect) fidelity. The possibility of approximate quantum cloning was first addressed by Buzek and Hillery, and theoretical bounds were derived on the fidelity of cloned quantum states. One of the applications of quantum cloning is to analyse the security of quantum key distribution protocols. Teleportation, nuclear magnetic resonance, quantum amplification, and superior phase conjugation are examples of some methods utilized to realize a quantum cloning machine. Ion trapping techniques have been applied to cloning quantum states of ions. Types of Quantum Cloning Machines. It may be possible to clone a quantum state to arbitrary accuracy in the presence of closed timelike curves. Universal Quantum Cloning. Universal quantum cloning (UQC) implies that the quality of the output (cloned state) is not dependent on the input, thus the process is "universal" to any input state. The output state produced is governed by the Hamiltonian of the system. One of the first cloning machines, a 1 to 2 UQC machine, was proposed in 1996 by Buzek and Hillery. As the name implies, the machine produces two identical copies of a single input qubit with a fidelity of 5/6 when comparing only one output qubit, and global fidelity of 2/3 when comparing both qubits. This idea was expanded to more general cases such as an arbitrary number of inputs and copies, as well as d-dimensional systems. Multiple experiments have been conducted to realize this type of cloning machine physically by using photon stimulated emission. The concept relies on the property of certain three-level atoms to emit photons of any polarization with equally likely probability. This symmetry ensures the universality of the machine. Phase Covariant Cloning. When input states are restricted to Bloch vectors corresponding to points on the equator of the Bloch Sphere, more information is known about them. The resulting clones are thus state-dependent, having an optimal fidelity of formula_4. Although only having a fidelity slightly greater than the UQCM (≈0.83), phase covariant cloning has the added benefit of being easily implemented through quantum logic gates consisting of the rotational operator formula_5 and the controlled-NOT (CNOT). Output states are also separable according to the Peres-Horodecki criterion. The process has been generalized to the 1 → M case and proven optimal. This has also been extended to the qutrit and qudit cases. The first experimental asymmetric quantum cloning machine was realized in 2004 using nuclear magnetic resonance. Asymmetric Quantum Cloning. The first family of asymmetric quantum cloning machines was proposed by Nicholas Cerf in 1998. A cloning operation is said to be asymmetric if its clones have different qualities and are all independent of the input state.
This is a more general case of the symmetric cloning operations discussed above which produce identical clones with the same fidelity. Take the case of a simple 1 → 2 asymmetric cloning machine. There is a natural trade-off in the cloning process in that if one clone's fidelity is fixed to a higher value, the other must decrease in quality and vice versa. The optimal trade-off is bounded by the following inequality: formula_6 where "Fd" and "Fe" are the state-independent fidelities of the two copies. This type of cloning procedure was proven mathematically to be optimal as derived from the Choi-Jamiolkowski channel state duality. However, even with this cloning machine perfect quantum cloning is proved to be unattainable. The trade-off of optimal accuracy between the resulting copies has been studied in quantum circuits, and with regards to theoretical bounds. Optimal asymmetric cloning machines are extended to formula_7 in formula_8 dimensions. Probabilistic Quantum Cloning. In 1998, Duan and Guo proposed a different approach to quantum cloning machines that relies on probability. This machine allows for the "perfect copying" of quantum states without violation of the No-Cloning and No-Broadcasting Theorems, but at the cost of not being 100% reproducible. The cloning machine is termed "probabilistic" because it performs measurements in addition to a unitary evolution. These measurements are then sorted through to obtain the perfect copies with a certain quantum efficiency (probability). As only orthogonal states can be cloned perfectly, this technique can be used to identify non-orthogonal states. The process is optimal when formula_9 where η is the probability of success for the states Ψ0 and Ψ1. The process was proven mathematically to clone two pure, non-orthogonal input states using a unitary-reduction process. One implementation of this machine was realized through the use of a "noiseless optical amplifier" with a success rate of about 5% . Applications of Approximate Quantum Cloning. Cloning in Discrete Quantum Systems. The simple basis for approximate quantum cloning exists in the first and second trivial cloning strategies. In first trivial cloning, a measurement of a qubit in a certain basis is made at random and yields two copies of the qubit. This method has a universal fidelity of 2/3. The second trivial cloning strategy, also called "trivial amplification", is a method in which an original qubit is left unaltered, and another qubit is prepared in a different orthogonal state. When measured, both qubits have the same probability, 1/2, (check) and an overall single copy fidelity of 3/4. Quantum Cloning Attacks. Quantum information is useful in the field of cryptography due to its intrinsic encrypted nature. One such mechanism is quantum key distribution. In this process, Bob receives a quantum state sent by Alice, in which some type of classical information is stored. He then performs a random measurement, and using minimal information provided by Alice, can determine whether or not his measurement was "good". This measurement is then transformed into a key in which private data can be stored and sent without fear of the information being stolen. One reason this method of cryptography is so secure is because it is impossible to eavesdrop due to the no-cloning theorem. A third party, Eve, can use incoherent attacks in an attempt to observe the information being transferred from Bob to Alice. Due to the no-cloning theorem, Eve is unable to gain any information. 
However, through quantum cloning, this is no longer entirely true. Incoherent attacks involve a third party gaining some insight into the information being transmitted between Bob and Alice. These attacks follow two guidelines: 1) third party Eve must act individually and match the states that are being observed, and 2) Eve's measurement of the traveling states occurs after the sifting phase (removing states that are in non-matched bases) but before reconciliation (putting Alice and Bob's strings back together). Due to the secure nature of quantum key distribution, Eve would be unable to decipher the secret key even with as much information as Bob and Alice. These are known as incoherent attacks because a random, repeated attack yields the highest chance of Eve finding the key. Nuclear Magnetic Resonance. While classical nuclear magnetic resonance is the phenomenon of nuclei emitting electromagnetic radiation at resonant frequencies when exposed to a strong magnetic field and is used heavily in imaging technology, quantum nuclear magnetic resonance is a type of quantum information processing (QIP). The interactions between the nuclei allow for the application of quantum logic gates, such as the CNOT. One quantum NMR experiment involved passing three qubits through a circuit, after which they are all entangled; the second and third qubits are transformed into clones of the first with a fidelity of 5/6. Another application allowed for the alteration of the signal-to-noise ratio, a process that increased the signal frequency while decreasing the noise frequency, allowing for a clearer information transfer. This is done through polarization transfer, which allows for a portion of the signal's highly polarized electric spin to be transferred to the target nuclear spin. The NMR system allows for the application of quantum algorithms such as Shor factorization and the Deutsch–Jozsa algorithm. Stimulated Emission. Stimulated emission is a type of Universal Quantum Cloning Machine that functions on a three-level system: one ground state and two degenerate states that are connected by an orthogonal electromagnetic field. The system is able to emit photons by exciting electrons between the levels. The photons are emitted in varying polarizations due to the random nature of the system, but the probability of emission type is equal for all – this is what makes this a universal cloning machine. By integrating quantum logic gates into the stimulated emission system, the system is able to produce cloned states. Telecloning. Telecloning is the combination of quantum teleportation and quantum cloning. This process uses positive operator-valued measurements, maximally entangled states, and quantum teleportation to create identical copies, locally and in a remote location. Quantum teleportation alone follows a "one-to-one" or "many-to-many" method in which either one or many states are transported from Alice to Bob in a remote location. The teleclone works by first creating local quantum clones of a state, then sending these to a remote location by quantum teleportation. The benefit of this technology is that it removes errors in transmission that usually result from quantum channel decoherence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
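A small arithmetic cross-check of the fidelities quoted in this article (simple numbers, not a simulation of any cloning machine): the symmetric universal 1 → 2 fidelity 5/6 sits exactly on the boundary of the asymmetric trade-off inequality, while the phase-covariant fidelity 1/2 + sqrt(1/8) is slightly higher.

```python
import math

F_uqcm = 5 / 6                     # single-copy fidelity of the symmetric universal 1->2 cloner
F_pc = 0.5 + math.sqrt(1 / 8)      # phase-covariant cloning fidelity
print(f"universal:       {F_uqcm:.4f}")   # 0.8333
print(f"phase-covariant: {F_pc:.4f}")     # 0.8536

def satisfies_tradeoff(F_d, F_e):
    """Asymmetric 1->2 trade-off: (1-Fd)(1-Fe) >= [1/2 - (1-Fd) - (1-Fe)]^2."""
    lhs = (1 - F_d) * (1 - F_e)
    rhs = (0.5 - (1 - F_d) - (1 - F_e)) ** 2
    return lhs >= rhs - 1e-12

assert satisfies_tradeoff(F_uqcm, F_uqcm)    # the symmetric point saturates the bound
assert not satisfies_tradeoff(0.9, F_uqcm)   # pushing one clone to 0.9 violates it...
assert satisfies_tradeoff(0.9, 0.75)         # ...unless the other drops to about 0.75
```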
[ { "math_id": 0, "text": "{\\displaystyle |\\psi \\rangle _{A}} " }, { "math_id": 1, "text": "{\\displaystyle U|\\psi \\rangle _{A}|e\\rangle _{B}=|\\psi \\rangle _{A}|\\psi \\rangle _{B}}," }, { "math_id": 2, "text": "{\\displaystyle U} " }, { "math_id": 3, "text": "{\\displaystyle |e\\rangle _{B}}" }, { "math_id": 4, "text": "1/2 + \\sqrt{1/8} \\approx 0.8536" }, { "math_id": 5, "text": "\\hat{R}(\\vartheta)" }, { "math_id": 6, "text": "(1-F_d)(1-F_e)\\geq[1/2-(1-F_d)-(1-F_e)]^2" }, { "math_id": 7, "text": "M \\rightarrow N" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "\\eta=1/(1+|\\langle\\Psi_0|\\Psi_1\\rangle)" } ]
https://en.wikipedia.org/wiki?curid=7025898
7025924
Models of DNA evolution
Mathematical models of changing DNA A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses. In particular, they are used during the calculation of likelihood of a tree (in Bayesian and maximum likelihood approaches to tree estimation) and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences. Introduction. These models are phenomenological descriptions of the evolution of DNA as a string of four discrete states. These Markov models do not explicitly depict the mechanism of mutation nor the action of natural selection. Rather they describe the relative rates of different changes. For example, mutational biases and purifying selection favoring conservative changes are probably both responsible for the relatively high rate of transitions compared to transversions in evolving sequences. However, the Kimura (K80) model described below only attempts to capture the effect of both forces in a parameter that reflects the relative rate of transitions to transversions. Evolutionary analyses of sequences are conducted on a wide variety of time scales. Thus, it is convenient to express these models in terms of the instantaneous rates of change between different states (the "Q" matrices below). If we are given a starting (ancestral) state at one position, the model's "Q" matrix and a branch length expressing the expected number of changes to have occurred since the ancestor, then we can derive the probability of the descendant sequence having each of the four states. The mathematical details of this transformation from rate-matrix to probability matrix are described in the mathematics of substitution models section of the substitution model page. By expressing models in terms of the instantaneous rates of change we can avoid estimating a large numbers of parameters for each branch on a phylogenetic tree (or each comparison if the analysis involves many pairwise sequence comparisons). The models described on this page describe the evolution of a single site within a set of sequences. They are often used for analyzing the evolution of an entire locus by making the simplifying assumption that different sites evolve independently and are identically distributed. This assumption may be justifiable if the sites can be assumed to be evolving neutrally. If the primary effect of natural selection on the evolution of the sequences is to constrain some sites, then models of among-site rate-heterogeneity can be used. This approach allows one to estimate only one matrix of relative rates of substitution, and another set of parameters describing the variance in the total rate of substitution across sites. DNA evolution as a continuous-time Markov chain. Continuous-time Markov chains. "Continuous-time" Markov chains have the usual transition matrices which are, in addition, parameterized by time, formula_0. Specifically, if formula_1 are the states, then the transition matrix formula_2 where each individual entry, formula_3 refers to the probability that state formula_4 will change to state formula_5 in time formula_0. Example: We would like to model the substitution process in DNA sequences ("i.e." Jukes–Cantor, Kimura, "etc.") in a continuous-time fashion. 
The corresponding transition matrices will look like: formula_6 where the top-left and bottom-right 2 × 2 blocks correspond to "transition probabilities" and the top-right and bottom-left 2 × 2 blocks corresponds to "transversion probabilities". Assumption: If at some time formula_7, the Markov chain is in state formula_4, then the probability that at time formula_8, it will be in state formula_9 depends only upon formula_10, formula_11 and formula_12. This then allows us to write that probability as formula_13. Theorem: Continuous-time transition matrices satisfy: formula_14 Note: There is here a possible confusion between two meanings of the word "transition". (i) In the context of "Markov chains", transition is the general term for the change between two states. (ii) In the context of "nucleotide changes in DNA sequences", transition is a specific term for the exchange between either the two purines (A ↔ G) or the two pyrimidines (C ↔ T) (for additional details, see the article about transitions in genetics). By contrast, an exchange between one purine and one pyrimidine is called a transversion. Deriving the dynamics of substitution. Consider a DNA sequence of fixed length "m" evolving in time by base replacement. Assume that the processes followed by the "m" sites are Markovian independent, identically distributed and that the process is constant over time. For a particular site, let formula_15 be the set of possible states for the site, and formula_16 their respective probabilities at time formula_17. For two distinct formula_18, let formula_19 be the transition rate from state formula_20 to state formula_21. Similarly, for any formula_20, let the total rate of change from formula_20 be formula_22 The changes in the probability distribution formula_23 for small increments of time formula_24 are given by formula_25 In other words, (in frequentist language), the frequency of formula_26's at time formula_27 is equal to the frequency at time formula_17 minus the frequency of the "lost" formula_26's plus the frequency of the "newly created" formula_26's. Similarly for the probabilities formula_28, formula_29 and formula_30. These equations can be written compactly as formula_31 where formula_32 is known as the "rate matrix". Note that, by definition, the sum of the entries in each row of formula_33 is equal to zero. It follows that formula_34 For a stationary process, where formula_33 does not depend on time "t", this differential equation can be solved. First, formula_35 where formula_36 denotes the exponential of the matrix formula_37. As a result, formula_38 Ergodicity. If the Markov chain is irreducible, "i.e." if it is always possible to go from a state formula_20 to a state formula_21 (possibly in several steps), then it is also ergodic. As a result, it has a unique "stationary distribution" formula_39, where formula_40 corresponds to the proportion of time spent in state formula_20 after the Markov chain has run for an infinite amount of time. In DNA evolution, under the assumption of a common process for each site, the stationary frequencies formula_41 correspond to equilibrium base compositions. Indeed, note that since the stationary distribution formula_42 satisfies formula_43, we see that when the current distribution formula_44 is the stationary distribution formula_42 we have formula_45 In other words, the frequencies of formula_46 do not change. Time reversibility. 
Definition: A stationary Markov process is "time reversible" if (in the steady state) the amount of change from state formula_47 to formula_48 is equal to the amount of change from formula_48 to formula_47, (although the two states may occur with different frequencies). This means that: formula_49 Not all stationary processes are reversible, however, most commonly used DNA evolution models assume time reversibility, which is considered to be a reasonable assumption. Under the time reversibility assumption, let formula_50, then it is easy to see that: formula_51 Definition The symmetric term formula_52 is called the "exchangeability" between states formula_53 and formula_48. In other words, formula_52 is the fraction of the frequency of state formula_47 that is the result of transitions from state formula_48 to state formula_47. Corollary The 12 off-diagonal entries of the rate matrix, formula_54 (note the off-diagonal entries determine the diagonal entries, since the rows of formula_54 sum to zero) can be completely determined by 9 numbers; these are: 6 exchangeability terms and 3 stationary frequencies formula_55, (since the stationary frequencies sum to 1). Scaling of branch lengths. By comparing extant sequences, one can determine the amount of sequence divergence. This raw measurement of divergence provides information about the number of changes that have occurred along the path separating the sequences. The simple count of differences (the Hamming distance) between sequences will often underestimate the number of substitution because of multiple hits (see homoplasy). Trying to estimate the exact number of changes that have occurred is difficult, and usually not necessary. Instead, branch lengths (and path lengths) in phylogenetic analyses are usually expressed in the expected number of changes per site. The path length is the product of the duration of the path in time and the mean rate of substitutions. While their product can be estimated, the rate and time are not identifiable from sequence divergence. The descriptions of rate matrices on this page accurately reflect the relative magnitude of different substitutions, but these rate matrices are not scaled such that a branch length of 1 yields one expected change. This scaling can be accomplished by multiplying every element of the matrix by the same factor, or simply by scaling the branch lengths. If we use the β to denote the scaling factor, and ν to denote the branch length measured in the expected number of substitutions per site then βν is used in the transition probability formulae below in place of μ"t". Note that ν is a parameter to be estimated from data, and is referred to as the branch length, while β is simply a number that can be calculated from the rate matrix (it is not a separate free parameter). The value of β can be found by forcing the expected rate of flux of states to 1. The diagonal entries of the rate-matrix (the "Q" matrix) represent -1 times the rate of leaving each state. For time-reversible models, we know the equilibrium state frequencies (these are simply the π"i" parameter value for state "i"). Thus we can find the expected rate of change by calculating the sum of flux out of each state weighted by the proportion of sites that are expected to be in that class. Setting β to be the reciprocal of this sum will guarantee that scaled process has an expected flux of 1: formula_56 For example, in the Jukes–Cantor, the scaling factor would be "4/(3μ)" because the rate of leaving each state is "3μ/4". 
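The scaling convention just described can be sketched numerically for the Jukes–Cantor case mentioned above; the overall rate μ and the branch length ν below are arbitrary illustrative values, and the transition probabilities are obtained with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

mu = 0.7                                # arbitrary overall rate (illustrative)
pi = np.full(4, 0.25)                   # equilibrium frequencies under JC69

# JC69 rate matrix: every off-diagonal rate mu/4, rows summing to zero.
Q = np.full((4, 4), mu / 4)
np.fill_diagonal(Q, -3 * mu / 4)

# Scaling factor beta: reciprocal of the expected flux, so that a branch
# length of 1 corresponds to one expected substitution per site.
beta = 1.0 / -(pi @ np.diag(Q))
print(beta, 4 / (3 * mu))               # identical, as stated in the text

nu = 0.3                                # branch length, expected substitutions per site
P = expm(beta * nu * Q)                 # matrix of transition probabilities

# Cross-check against the closed-form JC69 probability of observing no change.
p_same = 0.25 + 0.75 * np.exp(-4 * nu / 3)
assert np.allclose(np.diag(P), p_same)
```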
Most common models of DNA evolution. JC69 model (Jukes and Cantor 1969). JC69, the Jukes and Cantor 1969 model, is the simplest substitution model. There are several assumptions. It assumes equal base frequencies formula_57 and equal mutation rates. The only parameter of this model is therefore formula_58, the overall substitution rate. As previously mentioned, this variable becomes a constant when we normalize the mean-rate to 1. formula_59 formula_61 When branch length, formula_60, is measured in the expected number of changes per site then: formula_62 It is worth noticing that formula_63 what stands for sum of any column (or row) of matrix formula_33 multiplied by time and thus means expected number of substitutions in time formula_17 (branch duration) for each particular site (per site) when the rate of substitution equals formula_58. Given the proportion formula_64 of sites that differ between the two sequences the Jukes–Cantor estimate of the evolutionary distance (in terms of the expected number of changes) between two sequences is given by formula_65 The formula_64 in this formula is frequently referred to as the formula_64-distance. It is a sufficient statistic for calculating the Jukes–Cantor distance correction, but is not sufficient for the calculation of the evolutionary distance under the more complex models that follow (also note that formula_64 used in subsequent formulae is not identical to the "formula_64-distance"). K80 model (Kimura 1980). K80, the Kimura 1980 model, often referred to as Kimura's two parameter model (or the K2P model), distinguishes between transitions (formula_66, i.e. from purine to purine, or formula_67, i.e. from pyrimidine to pyrimidine) and transversions (from purine to pyrimidine or vice versa). In Kimura's original description of the model the α and β were used to denote the rates of these types of substitutions, but it is now more common to set the rate of transversions to 1 and use κ to denote the transition/transversion rate ratio (as is done below). The K80 model assumes that all of the bases are equally frequent (formula_68). Rate matrix formula_69 with columns corresponding to formula_70, formula_71, formula_72, and formula_73, respectively. The Kimura two-parameter distance is given by: formula_74 where "p" is the proportion of sites that show transitional differences and "q" is the proportion of sites that show transversional differences. K81 model (Kimura 1981). K81, the Kimura 1981 model, often called Kimura's three parameter model (K3P model) or the Kimura three substitution type (K3ST) model, has distinct rates for transitions and two distinct types of transversions. The two transversion types are those that conserve the weak/strong properties of the nucleotides (i.e., formula_75 and formula_76, denoted by symbol formula_77 ) and those that conserve the amino/keto properties of the nucleotides (i.e., formula_78 and formula_79, denoted by symbol formula_80 ). The K81 model assumes that all equilibrium base frequencies are equal (i.e., formula_81). Rate matrix formula_82 with columns corresponding to formula_70, formula_71, formula_72, and formula_73, respectively. The K81 model is used much less often than the K80 (K2P) model for distance estimation and it is seldom the best-fitting model in maximum likelihood phylogenetics. Despite these facts, the K81 model has continued to be studied in the context of mathematical phylogenetics. 
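The JC69 and K80 distance corrections introduced above have standard closed forms, sketched below; p is the observed proportion of differing sites (for K80, p and q are the observed proportions of transitional and transversional differences), and the example proportions are illustrative.

```python
import math

def jukes_cantor_distance(p):
    """JC69 correction: d = -(3/4) * ln(1 - 4p/3)."""
    return -0.75 * math.log(1 - 4 * p / 3)

def kimura_2p_distance(p, q):
    """K80 correction from transitional (p) and transversional (q) proportions:
    d = -(1/2) ln(1 - 2p - q) - (1/4) ln(1 - 2q)."""
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Illustrative observed proportions
print(jukes_cantor_distance(0.20))       # about 0.233 substitutions per site
print(kimura_2p_distance(0.15, 0.05))    # about 0.242 substitutions per site
```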
One important property is the ability to perform a Hadamard transform assuming the site patterns were generated on a tree with nucleotides evolving under the K81 model. When used in the context of phylogenetics, the Hadamard transform provides an elegant and fully invertible means to calculate expected site pattern frequencies given a set of branch lengths (or vice versa). Unlike many maximum likelihood calculations, the relative values for formula_83, formula_80, and formula_77 can vary across branches and the Hadamard transform can even provide evidence that the data do not fit a tree. The Hadamard transform can also be combined with a wide variety of methods to accommodate among-sites rate heterogeneity, using continuous distributions rather than the discrete approximations typically used in maximum likelihood phylogenetics (although one must sacrifice the invertibility of the Hadamard transform to use certain among-sites rate heterogeneity distributions). F81 model (Felsenstein 1981). F81, Felsenstein's 1981 model, is an extension of the JC69 model in which base frequencies are allowed to vary from 0.25 (formula_84). Rate matrix: formula_85 When branch length, ν, is measured in the expected number of changes per site then: formula_86 formula_87 HKY85 model (Hasegawa, Kishino and Yano 1985). HKY85, the Hasegawa, Kishino and Yano 1985 model, can be thought of as combining the extensions made in the Kimura80 and Felsenstein81 models. Namely, it distinguishes between the rate of transitions and transversions (using the κ parameter), and it allows unequal base frequencies (formula_84). [Felsenstein described a similar (but not equivalent) model in 1984 using a different parameterization; that latter model is referred to as the F84 model.] Rate matrix formula_88 If we express the branch length, "ν", in terms of the expected number of changes per site then: formula_89 formula_90 formula_91 formula_92 formula_93 and formulae for the other combinations of states can be obtained by substituting in the appropriate base frequencies. T92 model (Tamura 1992). T92, the Tamura 1992 model, is a mathematical method developed to estimate the number of nucleotide substitutions per site between two DNA sequences, by extending Kimura's (1980) two-parameter method to the case where a G+C content bias exists. This method will be useful when there are strong transition-transversion and G+C-content biases, as in the case of "Drosophila" mitochondrial DNA. T92 involves a single, compound base frequency parameter formula_94 (also noted formula_95): formula_96 As T92 echoes Chargaff's second parity rule — pairing nucleotides do have the same frequency on a single DNA strand, G and C on the one hand, and A and T on the other hand — it follows that the four base frequencies can be expressed as a function of formula_95: formula_97 and formula_98 Rate matrix formula_99 The evolutionary distance between two DNA sequences according to this model is given by formula_100 where formula_101 and formula_102 is the G+C content (formula_103). TN93 model (Tamura and Nei 1993). TN93, the Tamura and Nei 1993 model, distinguishes between the two different types of transition; i.e. (formula_66) is allowed to have a different rate from (formula_67). Transversions are all assumed to occur at the same rate, but that rate is allowed to be different from both of the rates for transitions. TN93 also allows unequal base frequencies (formula_84). Rate matrix formula_104 GTR model (Tavaré 1986).
GTR, the Generalised time-reversible model of Tavaré 1986, is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986. GTR parameters consist of an equilibrium base frequency vector, formula_105, giving the frequency at which each base occurs at each site, and the rate matrix formula_106 Where formula_107 are the transition rate parameters. Therefore, GTR (for four characters, as is often the case in phylogenetics) requires 6 substitution rate parameters, as well as 4 equilibrium base frequency parameters. However, this is usually eliminated down to 9 parameters plus formula_58, the overall number of substitutions per unit time. When measuring time in substitutions (formula_58=1) only 8 free parameters remain. In general, to compute the number of parameters, one must count the number of entries above the diagonal in the matrix, i.e. for n trait values per site formula_108, and then add "n" for the equilibrium base frequencies, and subtract 1 because formula_58 is fixed. One gets formula_109 For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins), one would find there are 209 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are formula_110 codons, but the rates for transitions between codons which differ by more than one base is assumed to be zero. Hence, there are formula_111 parameters. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " t " }, { "math_id": 1, "text": " E_1,E_2,E_3,E_4 " }, { "math_id": 2, "text": " P(t) = \\big(P_{ij}(t)\\big) " }, { "math_id": 3, "text": " P_{ij}(t) " }, { "math_id": 4, "text": " E_i " }, { "math_id": 5, "text": " E_j " }, { "math_id": 6, "text": "\nP(t) = \\begin{pmatrix}\n p_\\mathrm{AA}(t) & p_\\mathrm{AG}(t) & p_\\mathrm{AC}(t) & p_\\mathrm{AT}(t) \\\\\n p_\\mathrm{GA}(t) & p_\\mathrm{GG}(t) & p_\\mathrm{GC}(t) & p_\\mathrm{GT}(t) \\\\\n p_\\mathrm{CA}(t) & p_\\mathrm{CG}(t) & p_\\mathrm{CC}(t) & p_\\mathrm{CT}(t) \\\\\n p_\\mathrm{TA}(t) & p_\\mathrm{TG}(t) & p_\\mathrm{TC}(t) & p_\\mathrm{TT}(t)\n \\end{pmatrix} " }, { "math_id": 7, "text": " t_0 " }, { "math_id": 8, "text": " t_0+t " }, { "math_id": 9, "text": "E_j " }, { "math_id": 10, "text": "i " }, { "math_id": 11, "text": "j " }, { "math_id": 12, "text": "t " }, { "math_id": 13, "text": "p_{ij}(t) " }, { "math_id": 14, "text": "P(t+\\tau) = P(t)P(\\tau) " }, { "math_id": 15, "text": " \\mathcal{E} = \\{A,\\, G, \\, C, \\, T\\}" }, { "math_id": 16, "text": " \\mathbf{p}(t) = (p_A(t),\\, p_G(t),\\, p_C(t),\\, p_T(t))" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": " x, y \\in \\mathcal{E}" }, { "math_id": 19, "text": " \\mu_{xy}\\ " }, { "math_id": 20, "text": "x" }, { "math_id": 21, "text": "y" }, { "math_id": 22, "text": "\\mu_x = \\sum_{y\\neq x}\\mu_{xy}\\,." }, { "math_id": 23, "text": "p_A(t)" }, { "math_id": 24, "text": "\\Delta t" }, { "math_id": 25, "text": "p_A(t+\\Delta t) = p_A(t) - p_A(t)\\mu_A\\Delta t + \\sum_{x\\neq A}p_x(t)\\mu_{xA}\\Delta t\\,." }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": " t + \\Delta t" }, { "math_id": 28, "text": "p_G(t)" }, { "math_id": 29, "text": "p_C(t)" }, { "math_id": 30, "text": "p_T(t)" }, { "math_id": 31, "text": "\\mathbf{p}(t+\\Delta t) = \\mathbf{p}(t) + \\mathbf{p}(t)Q\\Delta t\\,," }, { "math_id": 32, "text": " Q = \\begin{pmatrix} -\\mu_A & \\mu_{AG} & \\mu_{AC} & \\mu_{AT} \\\\\n \\mu_{GA} & -\\mu_G & \\mu_{GC} & \\mu_{GT} \\\\\n \\mu_{CA} & \\mu_{CG} & -\\mu_C & \\mu_{CT} \\\\\n \\mu_{TA} & \\mu_{TG} & \\mu_{TC} & -\\mu_T \\end{pmatrix}" }, { "math_id": 33, "text": "Q" }, { "math_id": 34, "text": " \\mathbf{p}'(t) = \\mathbf{p}(t) Q\\,." }, { "math_id": 35, "text": "P(t) = \\exp(tQ)," }, { "math_id": 36, "text": "\\exp(tQ)" }, { "math_id": 37, "text": "tQ" }, { "math_id": 38, "text": "\\mathbf{p}(t) = \\mathbf{p}(0)P(t) = \\mathbf{p}(0)\\exp(tQ) \\,." }, { "math_id": 39, "text": "{\\boldsymbol\\pi} = \\{\\pi_x,\\, x \\in \\mathcal{E}\\}" }, { "math_id": 40, "text": "\\pi_x" }, { "math_id": 41, "text": "\\pi_A,\\, \\pi_G,\\, \\pi_C,\\, \\pi_T" }, { "math_id": 42, "text": "{\\boldsymbol\\pi}" }, { "math_id": 43, "text": "{\\boldsymbol\\pi}Q = 0" }, { "math_id": 44, "text": "\\mathbf{p}(t)" }, { "math_id": 45, "text": "{\\mathbf{p}'(t) = \\mathbf{p}(t)Q = \\boldsymbol\\pi}Q = 0 \\,." 
}, { "math_id": 46, "text": "p_A(t),\\, p_G(t),\\, p_C(t),\\, p_T(t)" }, { "math_id": 47, "text": " x\\ " }, { "math_id": 48, "text": " y\\ " }, { "math_id": 49, "text": " \\pi_x\\mu_{xy} = \\pi_y\\mu_{yx} \\ " }, { "math_id": 50, "text": " s_{xy} = \\mu_{xy}/\\pi_y\\ " }, { "math_id": 51, "text": " s_{xy} = s_{yx} \\ " }, { "math_id": 52, "text": " s_{xy}\\ " }, { "math_id": 53, "text": "x\\ " }, { "math_id": 54, "text": " Q\\ " }, { "math_id": 55, "text": "\\pi_x\\ " }, { "math_id": 56, "text": "\\beta = 1/\\left(-\\sum_i \\pi_i\\mu_{ii}\\right)" }, { "math_id": 57, "text": "\\left(\\pi_A = \\pi_G = \\pi_C = \\pi_T = {1\\over4}\\right)" }, { "math_id": 58, "text": "\\mu" }, { "math_id": 59, "text": "Q = \\begin{pmatrix} {*} & {\\mu\\over 4} & {\\mu\\over 4} & {\\mu\\over 4} \\\\ {\\mu\\over 4} & {*} & {\\mu\\over 4}& {\\mu\\over 4}\\\\ {\\mu\\over 4}& {\\mu\\over 4}& {*} & {\\mu\\over 4}\\\\ {\\mu\\over 4}& {\\mu\\over 4}& {\\mu\\over 4}& {*} \\end{pmatrix}" }, { "math_id": 60, "text": "\\nu" }, { "math_id": 61, "text": " P= \\begin{pmatrix} {{1\\over4} + {3\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} \\\\\\\\ {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} + {3\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} \\\\\\\\ {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} + {3\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} \\\\\\\\ {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} - {1\\over4}e^{-t\\mu}} & {{1\\over4} + {3\\over4}e^{-t\\mu}} \\end{pmatrix}" }, { "math_id": 62, "text": "P_{ij}(\\nu) = \\left\\{\n\\begin{array}{cc}\n{1\\over4} + {3\\over4}e^{-4\\nu/3} & \\mbox{ if } i = j \\\\\n{1\\over4} - {1\\over4}e^{-4\\nu/3} & \\mbox{ if } i \\neq j \n\\end{array}\n\\right." 
}, { "math_id": 63, "text": "\\nu={3\\over4}t\\mu=({\\mu\\over4}+{\\mu\\over4}+{\\mu\\over4})t" }, { "math_id": 64, "text": "p" }, { "math_id": 65, "text": "\\hat{d}=-{3\\over4} \\ln({1-{4\\over3}p})=\\hat{\\nu}" }, { "math_id": 66, "text": " A \\leftrightarrow G " }, { "math_id": 67, "text": " C \\leftrightarrow T " }, { "math_id": 68, "text": "\\pi_A = \\pi_G = \\pi_C = \\pi_T ={1\\over4}" }, { "math_id": 69, "text": "Q= \\begin{pmatrix} {*} & {\\kappa} & {1} & {1} \\\\ {\\kappa} & {*} & {1} & {1} \\\\ {1} & {1} & {*} & {\\kappa} \\\\ {1} & {1} & {\\kappa} & {*} \\end{pmatrix}" }, { "math_id": 70, "text": " A " }, { "math_id": 71, "text": " G " }, { "math_id": 72, "text": " C " }, { "math_id": 73, "text": " T " }, { "math_id": 74, "text": "K = - {1\\over2}\\ln((1-2p-q) \\sqrt{1-2q})" }, { "math_id": 75, "text": " A \\leftrightarrow T " }, { "math_id": 76, "text": " C \\leftrightarrow G " }, { "math_id": 77, "text": " \\gamma " }, { "math_id": 78, "text": " A \\leftrightarrow C " }, { "math_id": 79, "text": " G \\leftrightarrow T " }, { "math_id": 80, "text": " \\beta " }, { "math_id": 81, "text": "\\pi_A = \\pi_G = \\pi_C = \\pi_T =0.25" }, { "math_id": 82, "text": "Q= \\begin{pmatrix} {*} & {\\alpha} & {\\beta} & {\\gamma} \\\\ {\\alpha} & {*} & {\\gamma} & {\\beta} \\\\ {\\beta} & {\\gamma} & {*} & {\\alpha} \\\\ {\\gamma} & {\\beta} & {\\alpha} & {*} \\end{pmatrix}" }, { "math_id": 83, "text": " \\alpha " }, { "math_id": 84, "text": "\\pi_A \\ne \\pi_G \\ne \\pi_C \\ne \\pi_T " }, { "math_id": 85, "text": "Q= \\begin{pmatrix} {*} & {\\pi_G} & {\\pi_C} & {\\pi_T} \\\\ {\\pi_A} & {*} & {\\pi_C} & {\\pi_T} \\\\ {\\pi_A} & {\\pi_G} & {*} & {\\pi_T} \\\\ {\\pi_A} & {\\pi_G} & {\\pi_C} & {*} \\end{pmatrix}" }, { "math_id": 86, "text": "\\beta = 1/(1-\\pi_A^2-\\pi_C^2-\\pi_G^2-\\pi_T^2)" }, { "math_id": 87, "text": "P_{ij}(\\nu) = \\left\\{\n\\begin{array}{cc}\ne^{-\\beta\\nu}+\\pi_j\\left(1- e^{-\\beta\\nu}\\right) & \\mbox{ if } i = j \\\\\n\\pi_j\\left(1- e^{-\\beta\\nu}\\right) & \\mbox{ if } i \\neq j \n\\end{array}\n\\right." 
}, { "math_id": 88, "text": "Q= \\begin{pmatrix} {*} & {\\kappa\\pi_G} & {\\pi_C} & {\\pi_T} \\\\ {\\kappa\\pi_A} & {*} & {\\pi_C} & {\\pi_T} \\\\ {\\pi_A} & {\\pi_G} & {*} & {\\kappa\\pi_T} \\\\ {\\pi_A} & {\\pi_G} & {\\kappa\\pi_C} & {*} \\end{pmatrix}" }, { "math_id": 89, "text": "\\beta = \\frac{1}{2(\\pi_A + \\pi_G)(\\pi_C + \\pi_T) + 2\\kappa[(\\pi_A\\pi_G) + (\\pi_C\\pi_T)]} " }, { "math_id": 90, "text": "P_{AA}(\\nu,\\kappa,\\pi) = \\left[\\pi_A\\left(\\pi_A + \\pi_G + (\\pi_C + \\pi_T)e^{-\\beta\\nu}\\right) + \\pi_G e^{-(1 + (\\pi_A + \\pi_G)(\\kappa - 1.0))\\beta\\nu}\\right]/(\\pi_A + \\pi_G) " }, { "math_id": 91, "text": "P_{AC}(\\nu,\\kappa,\\pi) = \\pi_C\\left(1.0 - e^{-\\beta\\nu}\\right) " }, { "math_id": 92, "text": "P_{AG}(\\nu,\\kappa,\\pi) = \\left[\\pi_G\\left(\\pi_A + \\pi_G + (\\pi_C + \\pi_T)e^{-\\beta\\nu}\\right) - \\pi_Ge^{-(1 + (\\pi_A + \\pi_G)(\\kappa - 1.0))\\beta\\nu}\\right] /\\left(\\pi_A + \\pi_G\\right) " }, { "math_id": 93, "text": "P_{AT}(\\nu,\\kappa,\\pi) = \\pi_T\\left(1.0 - e^{-\\beta\\nu}\\right) " }, { "math_id": 94, "text": " \\theta \\in (0,1) " }, { "math_id": 95, "text": " \\pi_{GC} " }, { "math_id": 96, "text": " = \\pi_G + \\pi_C = 1 - ( \\pi_A + \\pi_T ) " }, { "math_id": 97, "text": " \\pi_G = \\pi_C = {\\pi_{GC}\\over 2} " }, { "math_id": 98, "text": " \\pi_A = \\pi_T = {(1-\\pi_{GC})\\over 2} " }, { "math_id": 99, "text": "Q= \\begin{pmatrix} {*} & {\\kappa\\pi_{GC}/2} & {\\pi_{GC}/2} & {(1-\\pi_{GC})/2} \\\\\n {\\kappa(1-\\pi_{GC})/2} & {*} & {\\pi_{GC}/2} & {(1-\\pi_{GC})/2} \\\\\n {(1-\\pi_{GC})/2} & {\\pi_{GC}/2} & {*} & {\\kappa(1-\\pi_{GC})/2} \\\\\n {(1-\\pi_{GC})/2} & {\\pi_{GC}/2} & {\\kappa\\pi_{GC}/2} & {*} \\end{pmatrix}" }, { "math_id": 100, "text": "d = -h \\ln(1-{p\\over h}-q)-{1\\over2}(1-h)\\ln(1-2q)" }, { "math_id": 101, "text": "h = 2\\theta(1-\\theta)" }, { "math_id": 102, "text": "\\theta " }, { "math_id": 103, "text": " \\pi_{GC} = \\pi_G + \\pi_C " }, { "math_id": 104, "text": "Q= \\begin{pmatrix} {*} & {\\kappa_1\\pi_G} & {\\pi_C} & {\\pi_T} \\\\\n {\\kappa_1\\pi_A} & {*} & {\\pi_C} & {\\pi_T} \\\\\n {\\pi_A} & {\\pi_G} & {*} & {\\kappa_2\\pi_T} \\\\\n {\\pi_A} & {\\pi_G} & {\\kappa_2\\pi_C} & {*} \\end{pmatrix}" }, { "math_id": 105, "text": "\\Pi = (\\pi_A , \\pi_G , \\pi_C , \\pi_T)" }, { "math_id": 106, "text": "Q = \\begin{pmatrix}\n{-(\\alpha\\pi_G + \\beta\\pi_C + \\gamma\\pi_T)} & {\\alpha\\pi_G} & {\\beta\\pi_C} & {\\gamma\\pi_T} \\\\ \n{\\alpha\\pi_A} & {-(\\alpha\\pi_A + \\delta\\pi_C + \\epsilon\\pi_T)} & {\\delta\\pi_C} & {\\epsilon\\pi_T} \\\\ \n{\\beta\\pi_A} & {\\delta\\pi_G} & {-(\\beta\\pi_A + \\delta\\pi_G + \\eta\\pi_T)} & {\\eta\\pi_T} \\\\ \n{\\gamma\\pi_A} & {\\epsilon\\pi_G} & {\\eta\\pi_C} & {-(\\gamma\\pi_A + \\epsilon\\pi_G + \\eta\\pi_C)} \n\\end{pmatrix} " }, { "math_id": 107, "text": " \n\\begin{align}\n\\alpha = r(A\\rightarrow G) = r(G\\rightarrow A)\\\\\n\\beta = r(A\\rightarrow C) = r(C\\rightarrow A)\\\\\n\\gamma = r(A\\rightarrow T) = r(T\\rightarrow A)\\\\\n\\delta = r(G\\rightarrow C) = r(C\\rightarrow G)\\\\\n\\epsilon = r(G\\rightarrow T) = r(T\\rightarrow G)\\\\\n\\eta = r(C\\rightarrow T) = r(T\\rightarrow C)\n\\end{align}\n" }, { "math_id": 108, "text": "{{n^2-n} \\over 2} " }, { "math_id": 109, "text": "{{n^2-n} \\over 2} + n - 1 = {1 \\over 2}n^2 + {1 \\over 2}n - 1." }, { "math_id": 110, "text": "4^3 = 64" }, { "math_id": 111, "text": "{{20 \\times 19 \\times 3} \\over 2} + 64 - 1 = 633" } ]
https://en.wikipedia.org/wiki?curid=7025924
70259277
Nonlinear tides
Nonlinear Tides Nonlinear tides are generated by hydrodynamic distortions of tides. A tidal wave is said to be nonlinear when its shape deviates from a pure sinusoidal wave. In mathematical terms, the wave owes its nonlinearity to the nonlinear advection and frictional terms in the governing equations. These become more important in shallow-water regions such as estuaries. Nonlinear tides are studied in the fields of coastal morphodynamics, coastal engineering and physical oceanography. The nonlinearity of tides has important implications for the transport of sediment. Framework. From a mathematical perspective, the nonlinearity of tides originates from the nonlinear terms present in the Navier-Stokes equations. In order to analyse tides, it is more practical to consider the depth-averaged shallow water equations:formula_0formula_1formula_2Here, formula_3 and formula_4 are the zonal (formula_5) and meridional (formula_6) flow velocity respectively, formula_7 is the gravitational acceleration, formula_8 is the density, formula_9 and formula_10 are the components of the bottom drag in the formula_5- and formula_6-direction respectively, formula_11 is the average water depth and formula_12 is the water surface elevation with respect to the mean water level. The first of the three equations is referred to as the continuity equation, while the others represent the momentum balance in the formula_5- and formula_6-direction respectively. These equations follow from the assumptions that water is incompressible, that water does not cross the bottom or surface and that pressure variations above the surface are negligible. The latter allows the pressure gradient terms in the standard Navier-Stokes equations to be replaced by gradients in formula_13. Furthermore, the Coriolis and molecular mixing terms are omitted in the equations above since they are relatively small at the temporal and spatial scale of tides in shallow waters. For didactic purposes, the remainder of this article only considers a one-dimensional flow with a propagating tidal wave in the positive formula_14-direction. This implies that formula_15 and that all quantities are homogeneous in the formula_16-direction. Therefore, all formula_17 terms equal zero and the last of the above equations becomes redundant. Nonlinear contributions. In this one-dimensional case, the nonlinear tides are induced by three nonlinear terms. That is, the divergence term formula_18, the advection term formula_19, and the frictional term formula_20. The latter is nonlinear in two ways. Firstly, because formula_21 is (nearly) quadratic in formula_22. Secondly, because of formula_13 in the denominator. The effects of the advection and divergence terms, and of the frictional term, are analysed separately. Additionally, nonlinear effects of basin topography, such as intertidal area and flow curvature, can induce specific kinds of nonlinearity. Furthermore, mean flow, e.g. by river discharge, may alter the effects of tidal deformation processes. Harmonic analysis. A tidal wave can often be described as a sum of harmonic waves. The principal tide (1st harmonic) refers to the wave which is induced by a tidal force, for example the diurnal or semi-diurnal tide. The latter is often referred to as the formula_23 tide and will be used throughout the remainder of this article as the principal tide. The higher harmonics in a tidal signal are generated by nonlinear effects. Thus, harmonic analysis is used as a tool to understand the effect of the nonlinear deformation; a simple least-squares version of such an analysis is sketched below.
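A minimal sketch of such a harmonic analysis is given below (added for illustration, not part of the original article; the record length, sampling interval and amplitudes are made up). The amplitudes of the formula_23 and formula_61 constituents are estimated from a water-level record by a least-squares fit of sine and cosine terms:

```python
import numpy as np

# Sketch (not from the source): estimate the amplitudes of the M2 tide and
# its M4 overtide from a water-level record by least squares. The record
# below is synthetic; with real data, eta would be the observed water level.
omega = 2.0 * np.pi / 44714.0            # M2 radian frequency (period ~12.42 h, in s^-1)
t = np.arange(0.0, 15 * 86400.0, 600.0)  # 15 days of levels, every 10 minutes

eta = (2.0 * np.cos(omega * t) +                  # 2 m principal tide
       0.3 * np.cos(2.0 * omega * t - 1.0) +      # 0.3 m higher harmonic
       0.05 * np.random.randn(t.size))            # noise

# Design matrix: mean level plus cosine/sine pairs for M2 and M4
A = np.column_stack([np.ones_like(t),
                     np.cos(omega * t), np.sin(omega * t),
                     np.cos(2.0 * omega * t), np.sin(2.0 * omega * t)])
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

H_M2 = np.hypot(coef[1], coef[2])   # ~2.0 m
H_M4 = np.hypot(coef[3], coef[4])   # ~0.3 m
print(H_M2, H_M4)
```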
One could say that the deformation dissipates energy from the principal tide to its higher harmonics. For the sake of consistency, higher harmonics having a frequency that is an even or odd multiple of the principal tide may be referred to as the even or odd higher harmonics respectively. Divergence and advection. In order to understand the nonlinearity induced by the divergence term, one could consider the propagation speed of a shallow water wave. Neglecting friction, the wave speed is given as: formula_24 Comparing low water (LW) to high water (HW) levels (formula_25), the trough (LW) of a shallow water wave travels slower than the crest (HW). As a result, the crest "catches up" with the trough and a tidal wave becomes asymmetric. In order to understand the nonlinearity induced by the advection term, one could consider the amplitude of the tidal current. Neglecting friction, the tidal current amplitude is given as: formula_26 When the tidal range is not small compared to the water depth, i.e. formula_27 is significant, the flow velocity formula_22 is not negligible with respect to formula_28. Thus, the wave propagation speed at the crest is formula_29 while at the trough, the wave speed is formula_30. Similar to the deformation induced by the divergence term, this results in a crest "catching up" with the trough such that the tidal wave becomes asymmetric. For both the nonlinear divergence and advection terms, the deformation is asymmetric. This implies that even higher harmonics are generated, which are asymmetric around the node of the principal tide. Mathematical analysis. The linearized shallow water equations are based on the assumption that the amplitude of the sea level variations is much smaller than the overall depth. This assumption does not necessarily hold in shallow-water regions. When neglecting the friction, the nonlinear one-dimensional shallow water equations read:formula_31formula_32Here formula_33 is the undisturbed water depth, which is assumed to be constant. These equations contain three nonlinear terms, of which two originate from the mass flux in the continuity equation (denoted with subscript formula_34), and one originates from advection incorporated in the momentum equation (denoted with subscript formula_35). To analyze this set of nonlinear partial differential equations, the governing equations can be transformed into a nondimensional form. This is done based on the assumption that formula_22 and formula_13 are described by a propagating water wave, with a water level amplitude formula_36, a radian frequency formula_37 and a wavenumber formula_38. Based on this, the following transformation principles are applied:formula_39The non-dimensional variables, denoted by the tildes, are multiplied by an appropriate length, time or velocity scale of the dimensional variable. Plugging in the non-dimensional variables, the governing equations read:formula_40formula_41The nondimensionalization shows that the nonlinear terms are very small if the average water depth is much larger than the water level variations, i.e. formula_42 is small. In the case that formula_43, a linear perturbation analysis can be used to further analyze this set of equations. This analysis assumes small perturbations around a mean state of formula_44:formula_45Here formula_46.
When inserting this linear series in the nondimensional governing equations, the zero-order terms are governed by:formula_47formula_48This is a linear wave equation with a simple solution of the form: formula_49Collecting the formula_50 terms and dividing by formula_51 yields: formula_52formula_53Three nonlinear terms remain. However, the nonlinear terms only involve terms of formula_44, for which the solutions are known. Hence these can be worked out. Subsequently, taking the formula_54-derivative of the upper and subtracting the formula_55-derivative of the lower equation yields a single wave equation: formula_56 This linear inhomogeneous partial differential equation has the following particular solution: formula_57 Returning to the dimensional solution for the sea surface elevation: formula_58 This solution is valid for a first-order perturbation. The nonlinear terms are responsible for creating a higher harmonic signal with double the frequency of the principal tide. Furthermore, the higher harmonic term scales with formula_14, formula_59 and formula_38. Hence, the shape of the wave will deviate more and more from its original shape when propagating in the formula_14-direction, for a relatively large tidal range and for shorter wavelengths. When considering a common principal formula_60 tide, the nonlinear terms in the equation lead to the generation of the formula_61 harmonic. When considering higher-order formula_51 terms, one would also find higher harmonics. Friction. The frictional term in the shallow water equations is nonlinear in both the velocity and the water depth. In order to understand the latter, one can infer from the formula_62 term that the friction is strongest for lower water levels. Therefore, the crest "catches up" with the trough because it experiences less friction to slow it down. Similar to the nonlinearity induced by the divergence and advection terms, this causes an asymmetrical tidal wave. In order to understand the nonlinear effect of the velocity, one should consider that the bottom stress is often parametrized quadratically:formula_63Here formula_64 is the drag coefficient, which is often assumed to be constant (formula_65). Twice per tidal cycle, at peak flood and peak ebb, formula_66 reaches a maximum. However, the sign of formula_67 is opposite for these two moments. Consequently, the flow is altered symmetrically around the wave node. This leads to the conclusion that this nonlinearity results in odd higher harmonics, which are symmetric around the node of the principal tide. Mathematical analysis. Nonlinearity in velocity. The parametrization of formula_68 contains the product of the velocity vector with its magnitude. At a fixed location, a principal tide is considered with a flow velocity: formula_69 Here, formula_70 is the flow velocity amplitude and formula_37 is the angular frequency. To investigate the effect of bottom friction on the velocity, the friction parameterization can be developed into a Fourier series: formula_71 This shows that formula_68 can be described as a Fourier series containing only odd multiples of the principal tide with frequency formula_72. Hence, the frictional force causes an energy dissipation of the principal tide towards higher harmonics. In the two-dimensional case, even harmonics are also possible. The above equation for formula_68 implies that the magnitude of the friction is proportional to the velocity amplitude formula_73. This means that stronger currents experience more friction and thus more tidal deformation; the leading Fourier coefficients quoted above can be checked numerically, as sketched below.
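The first two coefficients of this Fourier series, 8/(3π) and 8/(15π), can be verified with a short numerical sketch (added here for illustration, not part of the original article):

```python
import numpy as np

# Sketch (not from the source): verify the Fourier coefficients of
# cos(wt)*|cos(wt)| quoted above, i.e. 8/(3*pi) at the principal frequency
# and 8/(15*pi) at its third harmonic, with no even harmonics.
N = 1_000_000
t = 2.0 * np.pi * np.arange(N) / N        # one period, with omega = 1
f = np.cos(t) * np.abs(np.cos(t))

def cos_coefficient(n):
    # a_n = (1/pi) * integral over one period of f(t)*cos(n*t) dt,
    # approximated by a Riemann sum on the uniform grid above
    return 2.0 * np.mean(f * np.cos(n * t))

print(cos_coefficient(1), 8.0 / (3.0 * np.pi))    # ~0.8488 for both
print(cos_coefficient(3), 8.0 / (15.0 * np.pi))   # ~0.1698 for both
print(cos_coefficient(2))                          # ~0: no even harmonics in 1D
```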
In shallow waters, higher currents are required to accommodate the sea surface elevation change, causing more energy dissipation to odd higher harmonics of the principal tide. Nonlinearity in water depth. Although not very accurate, one can use a linear parameterization of the bottom stress: formula_74 Here formula_75 is a friction factor which represents the first Fourier component of the more exact quadratic parameterization. Neglecting the advection term and using the linear parameterization in the frictional term, the nondimensional governing equations read:formula_76formula_77Despite the linear parameterization of the bottom stress, the frictional term remains nonlinear. This is due to the time-dependent water depth formula_78 in its denominator. Similar to the analysis of the nonlinear advection term, a linear perturbation analysis can be used to analyse the frictional nonlinearity. The formula_44 equations are given as:formula_79formula_80Taking the formula_54-derivative of the upper equation and subtracting the formula_55-derivative of the lower equation, the formula_81 terms can be eliminated. Calling formula_82, this yields a single second-order partial differential equation in formula_83:formula_84In order to solve this, boundary conditions are required. These can be formulated as: formula_85The boundary conditions are formulated based on a pure cosine wave entering a domain with length formula_86. The boundary (formula_87) of this domain is impermeable to water. To solve the partial differential equation, a separation of variables method can be used. It is assumed that formula_88. A solution that obeys the partial differential equation and the boundary conditions reads:formula_89Here, formula_90. In a similar manner, the formula_50 equations can be determined:formula_91formula_92Here the friction term was developed into a Taylor series, resulting in two friction terms, one of which is nonlinear. The nonlinear friction term contains a product of two formula_44 terms, which show wave-like behaviour. The real parts of formula_93 and formula_94 are given as:formula_95formula_96Here formula_97 denotes the complex conjugate. Inserting these identities into the nonlinear friction term, this becomes: formula_98The above equation suggests that the particular solution of the first-order terms contains a time-independent residual flow formula_99 (quantities denoted with subscript formula_100) and a higher harmonic with double the frequency of the principal tide; e.g. if the principal tide has a formula_23 frequency, the nonlinearity in the friction will generate an formula_61 component. The residual flow component represents Stokes drift. Friction causes higher flow velocities in the high water wave than in the low water, hence making the water parcels move in the direction of the wave propagation. When higher-order terms in the perturbation analysis are considered, even higher harmonics will also be generated. Intertidal area. In a shallow estuary, nonlinear terms play an important role and might cause tidal asymmetry. This can intuitively be understood when considering that if the water depth is smaller, the friction slows down the tidal wave more. For an estuary with small intertidal area (case i), the average water depth generally increases during the rising tide. Therefore, the crest of the tidal wave experiences less friction to slow it down and it catches up with the trough. This causes tidal asymmetry with a relatively fast rising tide.
For an estuary with much intertidal area (case ii), the water depth in the main channel also increases during the rising tide. However, because of the intertidal area, the width-averaged water depth generally decreases. Therefore, the trough of the tidal wave experiences relatively little friction slowing it down and it catches up with the crest. This causes tidal asymmetry with a relatively slow rising tide. For a friction-dominated estuary, the flood phase corresponds to the rising tide and the ebb phase corresponds to the falling tide. Therefore, cases (i) and (ii) correspond to a flood- and ebb-dominated tide respectively. In order to find a mathematical expression for the type of asymmetry in an estuary, the wave speed should be considered. Following a non-linear perturbation analysis, the time-dependent wave speed for a convergent estuary is given as: formula_101 Here formula_102 is the channel depth, formula_103 is the estuary width, and the right-hand side is a decomposition of these quantities into their tidal averages (denoted by formula_104) and their deviations from them. Using a first-order Taylor expansion, this can be simplified to: formula_105 Here: formula_106 This parameter represents the tidal asymmetry. The discussed case (i), i.e. fast rising tide, corresponds to formula_107, while case (ii), i.e. slow rising tide, corresponds to formula_108. Nonlinear numerical simulations by Friedrichs and Aubrey reproduce a similar relationship for formula_109. Flow curvature. Consider a tidal flow induced by a tidal force in the x-direction. Far away from the coast, the flow will be in the x-direction only. Since at the coast the water cannot flow cross-shore, the streamlines are parallel to the coast. Therefore, the flow curves around the coast. The centripetal force to accommodate this change in the momentum budget is the pressure gradient perpendicular to a streamline. This is induced by a gradient in the sea level height. Analogous to the gravitational force that keeps planets in their orbit, the gradient in sea level height for a streamline curvature with radius formula_110 is given as: formula_111 For a convex coast, this corresponds to a decreasing water level height when approaching the coast. For a concave coast this is opposite, such that the sea level height increases when approaching the coast. This pattern is the same when the tide reverses the current. Therefore, one finds that the flow curvature lowers or raises the water level height twice per tidal cycle. Hence it adds a tidal constituent with a frequency twice that of the principal component. This higher harmonic is indicative of nonlinearity, which is also apparent from the quadratic term in the above expression. Mean flow. A mean flow, e.g. a river flow, can alter the nonlinear effects. Considering a river inflow into an estuary, the river flow will cause a decrease of the flood flow velocities, while increasing the ebb flow velocities. Since the friction scales quadratically with the flow velocities, the increase in friction is larger for the ebb flow velocities than the decrease for the flood flow velocities. Hence, a higher harmonic with double the frequency of the principal tide is created. When the mean flow is larger than the amplitude of the tidal current, this would lead to no reversal of the flow direction. Thus, the generation of the odd higher harmonics by the nonlinearity in the friction would be reduced.
Moreover, an increase in the mean flow discharge can cause an increase in the mean water depth and therefore reduce the relative importance of nonlinear deformation. Example: Severn Estuary. The Severn Estuary is relatively shallow and its tidal range is relatively large. Therefore, nonlinear tidal deformation is notable in this estuary. Using GESLA data of the water level height at the measuring station near Avonmouth, the presence of nonlinear tides can be confirmed. Using a simple harmonic fitting algorithm with a moving time window of 25 hours, the water level amplitude of different tidal constituents can be found. For 2011, this has been done for the formula_23, formula_61 and formula_112 constituents. In the figure, the water level amplitude of the formula_61 and formula_113 harmonics, formula_114 and formula_115 respectively, are plotted against the water level amplitude of the principal formula_23 tide, formula_116. It can be observed that higher harmonics, being generated by nonlinearity, are significant with respect to the principal tide. The correlation between formula_117 and formula_118 looks somewhat quadratic. This quadratic dependence could be expected from the mathematical analysis in this article. Firstly, the analysis of divergence and advection results in an expression that, for a fixed formula_119, implies: formula_120 Secondly, the analysis of the nonlinearity of the friction in the water depth yields a second higher harmonic. For the mathematical analysis, a linear parameterization of the bottom stress was assumed. However, the bottom stress actually scales nearly quadratically with the flow velocity. This is reflected in the quadratic relation between formula_117 and formula_118. In the graph, for a small tidal range, the correlation between formula_117 and formula_121 is approximately directly proportional. This relation between the principal tide and its third harmonic follows from the nonlinearity of the friction in the velocity, which is reflected in the derived expression. For larger tidal ranges, formula_121 start decreasing. This behaviour remains unresolved by the theory covered in this article. Sediment transport. The deformation of tides can be of significant importance in sediment transport. In order to analyse this, it is obvious to distinguish between the dynamics of suspended sediment and bed load sediment. Suspended sediment transport (in one dimension) can in general be quantified as: formula_122 Here formula_123 is the depth integrated sediment flux, formula_124 is the sediment concentration, formula_125 is the horizontal diffusivity coefficient and formula_126 is the reference height above the surface formula_127. The bed load transport can be estimated by the following heuristic definition: formula_128Here formula_129 is an erosion coefficient. The zonal flow velocity can be represented as a truncated Fourier series. When considering a tidal flow composed of only formula_23 and formula_61 constituents, the current at a specific location is given as:formula_130A description of the local evolution of the suspended sediment concentration is required to obtain an expression for the tidally averaged suspended sediment flux. The local change of the depth integrated suspended sediment concentration (formula_131) is governed by:formula_132 Here formula_133 is the fall velocity, formula_134 is the vertical diffusivity coefficient and formula_135 is an erosion coefficient. Advection is neglected in this model. 
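As a rough illustration of this model (a sketch added here, not part of the original article; all parameter values are hypothetical), the concentration equation can be stepped forward in time for a current composed of formula_23 and formula_61 constituents, and the product of velocity and concentration averaged over a tidal cycle indicates the direction of the net suspended transport:

```python
import numpy as np

# Sketch (not from the source): integrate dC/dt = alpha*u^2 - (Ws^2/Kv)*C
# for a tidal current made of M2 and M4 constituents, then average u*C over
# the last tidal cycle to get the sense of the net suspended transport.
# All parameter values below are hypothetical.
alpha, Ws, Kv = 1e-4, 1e-3, 1e-2          # erosion coeff., fall velocity, diffusivity
U_M2, U_M4 = 1.0, 0.2                     # current amplitudes (m/s)
phi_M2, phi_M4 = 0.0, 0.5                 # phases (rad)
omega = 2.0 * np.pi / 44714.0             # M2 radian frequency (s^-1)

dt = 10.0                                 # time step (s)
t = np.arange(0.0, 20 * 44714.0, dt)      # ~20 tidal cycles (includes spin-up)
u = U_M2 * np.cos(omega * t - phi_M2) + U_M4 * np.cos(2.0 * omega * t - phi_M4)

r = Ws**2 / Kv                            # settling rate (s^-1)
C = np.zeros_like(t)
for n in range(t.size - 1):               # simple forward-Euler time stepping
    C[n + 1] = C[n] + dt * (alpha * u[n]**2 - r * C[n])

last_cycle = t > t[-1] - 2.0 * np.pi / omega
print(np.mean(u[last_cycle] * C[last_cycle]))   # sign gives flood (+) or ebb (-) transport
```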
Considering the definition of formula_136 and formula_137, an expression for the tidally averaged bed and suspended load transport can be obtained:formula_138formula_139Here formula_140, the ratio of the settling time scale to the tidal time scale. Two important mechanisms can be identified using the definitions of formula_141 and formula_142. These two transport mechanisms are discussed briefly below. Velocity asymmetry. The velocity asymmetry mechanism relies on a difference in maximum flow velocity between peak ebb and flood. The quantification of this mechanism is encapsulated by the formula_143 term. The implications of this term are summarized in the table below: Hence, the velocity asymmetry mechanism causes a net ebb-directed transport if the relative phase difference satisfies formula_144, while it causes a net flood-directed transport if formula_145. In the latter case, peak flood flows will be larger than peak ebb flows. Hence, the sediment will be transported over a larger distance in the flood direction, making formula_146 and formula_147. The opposite applies for formula_144. Duration asymmetry. The duration asymmetry mechanism can also cause a tidally averaged suspended load transport. This mechanism only allows for a tidally averaged suspended sediment flux. The quantification of this mechanism is encapsulated by the formula_148 term, which is absent in the formula_141 equation. The implications of this term are summarized in the table below: When formula_149, the time from peak flood to peak ebb is longer than the time from peak ebb to peak flood. This means that more sediment can settle during the period from peak flood to peak ebb, hence less sediment will be suspended at peak ebb and there will be a net transport in the flood direction. A similar, but opposite explanation holds for formula_150. Bed load transport is not affected by this mechanism because the mechanism requires a settling lag of the particles, i.e. the particles must take time to settle and the concentration adapts gradually to the flow velocities. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\partial \\eta}{\\partial t}+\\frac{\\partial}{\\partial x}[(D_0 + \\eta )u]+\\frac{\\partial}{\\partial y}[(D_0 + \\eta )v]=0," }, { "math_id": 1, "text": "\\frac{\\partial u}{\\partial t}+u\\frac{\\partial u}{\\partial x}+v\\frac{\\partial u}{\\partial y}=-g\\frac{\\partial \\eta}{\\partial x}-\\frac{\\tau_{b,x}}{\\rho (D_0 + \\eta )},\n" }, { "math_id": 2, "text": "\\frac{\\partial v}{\\partial t}+u\\frac{\\partial v}{\\partial x}+v\\frac{\\partial v}{\\partial y}=-g\\frac{\\partial \\eta}{\\partial y}-\\frac{\\tau_{b,y}}{\\rho (D_0 +\\eta )}.\n" }, { "math_id": 3, "text": "u\n" }, { "math_id": 4, "text": "v\n" }, { "math_id": 5, "text": "x\n" }, { "math_id": 6, "text": "y\n" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "\\rho\n" }, { "math_id": 9, "text": "\\tau_{b,x}\n" }, { "math_id": 10, "text": "\\tau_{b,y}\n" }, { "math_id": 11, "text": "D_0 \n" }, { "math_id": 12, "text": "\\eta\n" }, { "math_id": 13, "text": "\\eta" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "v=0" }, { "math_id": 16, "text": "y" }, { "math_id": 17, "text": "\\partial / \\partial y" }, { "math_id": 18, "text": "\\partial(\\eta u)/\\partial x" }, { "math_id": 19, "text": "u\\; \\partial u/\\partial x" }, { "math_id": 20, "text": "\\tau_b/(D_0+\\eta)" }, { "math_id": 21, "text": "\\tau_b" }, { "math_id": 22, "text": "u" }, { "math_id": 23, "text": "M_2" }, { "math_id": 24, "text": "c_0 \\approx \\sqrt{g(D_0+\\eta)}" }, { "math_id": 25, "text": "\\eta_{LW}<\\eta_{HW}" }, { "math_id": 26, "text": "U_0 \\approx c_0 \\frac{\\eta}{D_0} " }, { "math_id": 27, "text": "\\eta/D_0 " }, { "math_id": 28, "text": "c_0 " }, { "math_id": 29, "text": "c_0 +u" }, { "math_id": 30, "text": "c_0 -u" }, { "math_id": 31, "text": "\\frac{\\partial \\eta}{\\partial t}+\\underbrace{u\\frac{\\partial \\eta}{\\partial x}}_i+(D_{0}+\\underbrace{\\eta)\\frac{\\partial u}{\\partial x}}_i=0," }, { "math_id": 32, "text": "\\frac{\\partial u}{\\partial t}+\\underbrace{u\\frac{\\partial u}{\\partial x}}_{ii}=-g\\frac{\\partial \\eta}{\\partial x}.\n" }, { "math_id": 33, "text": "D_{0}" }, { "math_id": 34, "text": "i\n" }, { "math_id": 35, "text": "ii\n" }, { "math_id": 36, "text": "H_0" }, { "math_id": 37, "text": "\\omega" }, { "math_id": 38, "text": "k" }, { "math_id": 39, "text": "\\left\\{\n \\begin{array}{ll}\n x=\\frac{1}{k}\\tilde{x}\\\\\n \\eta=H_{0}\\tilde{\\eta}\\\\\n t=\\frac{1}{\\omega}\\tilde{t}\\\\\n u=H_{0}\\sqrt{\\frac{g}{D_0}}\\tilde{u}\n \\end{array}\n \\right." }, { "math_id": 40, "text": "\\frac{\\partial \\tilde{\\eta}}{\\partial \\tilde{t}}+\\frac{H_{0}}{D_0}\\tilde{u}\\frac{\\partial \\tilde{\\eta}}{\\partial \\tilde{x}}+(1+\\frac{H_{0}}{D_{0}}\\tilde{\\eta})\\frac{\\partial \\tilde{u}}{\\partial \\tilde{x}}=0" }, { "math_id": 41, "text": "\\frac{\\partial \\tilde{u}}{\\partial \\tilde{t}}+\\frac{H_{0}}{D_{0}}\\tilde{u}\\frac{\\partial \\tilde{u}}{\\partial \\tilde{x}}=-\\frac{\\partial \\tilde{\\eta}}{\\partial\\tilde{ x}}\n" }, { "math_id": 42, "text": "\\frac{H_{0}}{D_{0}}\n" }, { "math_id": 43, "text": "H_0/D_0 <<1" }, { "math_id": 44, "text": "\\mathcal{O}(1)" }, { "math_id": 45, "text": "\\left\\{\n \\begin{array}{ll}\n \\tilde{\\eta}=\\tilde{\\eta}_0+\\epsilon \\tilde{\\eta}_1+\\mathcal{O}(\\epsilon^2)\\\\\n \\tilde{u}={\\tilde u}_0+\\epsilon \\tilde{u}_1+\\mathcal{O}(\\epsilon^2)\n \\end{array}\n \\right." 
}, { "math_id": 46, "text": "\\epsilon=H_0/D_0" }, { "math_id": 47, "text": "\\frac{\\partial {\\tilde{\\eta}_0}}{\\partial \\tilde{t}}+\\frac{\\partial \\tilde{u}_0}{\\partial \\tilde{x}}=0" }, { "math_id": 48, "text": "\\frac{\\partial {\\tilde u}_0}{\\partial \\tilde{t}}+\\frac{\\partial {\\tilde \\eta}_0}{\\partial \\tilde{x}}=0" }, { "math_id": 49, "text": "\\left\\{\n \\begin{array}{ll}\n {\\tilde \\eta}_{0}(\\tilde {x},\\tilde{t})=\\cos(\\tilde{x}-\\tilde{t})\\\\\n {\\tilde{u}}_0(\\tilde {x},\\tilde{t})=\\cos(\\tilde{x}-\\tilde{t})\n \\end{array}\n \\right." }, { "math_id": 50, "text": "\\mathcal{O}(\\epsilon)" }, { "math_id": 51, "text": "\\epsilon" }, { "math_id": 52, "text": "\\frac{\\partial \\tilde{\\eta}_1}{\\partial \\tilde{t}}+\\frac{\\partial \\tilde{u}_1}{\\partial \\tilde x}+\\tilde{\\eta}_0 \\frac{\\partial \\tilde{u}_0}{\\partial \\tilde x}+\\tilde{u}_0\\frac{\\partial \\tilde{\\eta}_0}{\\partial \\tilde{x}}=0" }, { "math_id": 53, "text": "\\frac{\\partial \\tilde{u}_1}{\\partial \\tilde{t}}+\\tilde{u}_0\\frac{\\partial \\tilde{u}_0}{\\partial \\tilde{x}}=-\\frac{\\partial \\tilde{\\eta}_1}{\\partial \\tilde{x}}" }, { "math_id": 54, "text": "\\tilde{t}" }, { "math_id": 55, "text": "\\tilde{x}" }, { "math_id": 56, "text": "\\frac{\\partial^2 \\tilde{\\eta}_1}{\\partial \\tilde{t}^2}-\\frac{\\partial^2 \\tilde{\\eta}_1}{\\partial \\tilde{x}^2}=-3\\cos(2(\\tilde{x}-\\tilde{t}))" }, { "math_id": 57, "text": "\\left\\{\n \\begin{array}{ll}\n \\tilde{\\eta}_{1}(\\tilde {x},\\tilde{t})=-\\frac{3}{4}\\tilde{x}\\sin(2(\\tilde{x}-\\tilde{t}))\\\\\n {\\tilde u}_1(\\tilde {x},\\tilde{t})=-\\frac{3}{4}\\tilde{x}\\sin(2(\\tilde{x}-\\tilde{t}))\n \\end{array}\n \\right." }, { "math_id": 58, "text": "\\eta=H_0\\cos(kx-\\omega t)-\\frac{3}{4}\\frac{H_0^2kx}{D_0}\\sin(2(kx-\\omega t))" }, { "math_id": 59, "text": "H_0/D_0" }, { "math_id": 60, "text": "M2" }, { "math_id": 61, "text": "M_4" }, { "math_id": 62, "text": "\\tau_b/(D_0+\\eta)\n" }, { "math_id": 63, "text": "\\tau_{b}=\\rho C_{d}u|u|\n" }, { "math_id": 64, "text": "C_{d}\n" }, { "math_id": 65, "text": "C_d=0.0025" }, { "math_id": 66, "text": "|u|" }, { "math_id": 67, "text": "u|u|\n" }, { "math_id": 68, "text": "\\tau_{b}\n" }, { "math_id": 69, "text": "u=U_0cos(\\omega t)" }, { "math_id": 70, "text": "U_0" }, { "math_id": 71, "text": "\\tau_{b}=\\rho C_{d} U_{0}^2\\left(\\frac{2}{\\pi}\\cos(\\omega t)+\\frac{2}{\\pi}\\sum_{n=1}^{a}\\frac{\\left(-1\\right)^{n}}{1-4n^{2}}(\\cos\\left(\\omega t(2n+1))+\\cos(\\omega t(2n-1))\\right)\\right)=\\rho C_{d} U_{0}^2 (\\frac{8}{3\\pi}cos(\\omega t)+\\frac{8}{15\\pi}cos(3\\omega t)+...)" }, { "math_id": 72, "text": "\\omega_2 " }, { "math_id": 73, "text": "U_{0}^2" }, { "math_id": 74, "text": "\\tau_b =\\rho \\; \\hat{r}u" }, { "math_id": 75, "text": "\\hat{r}" }, { "math_id": 76, "text": "\\frac{\\partial \\tilde{\\eta}}{\\partial \\tilde{t}}+\\frac{\\partial \\tilde{u}}{\\partial \\tilde{x}}=0" }, { "math_id": 77, "text": "\\frac{\\partial \\tilde{u}}{\\partial \\tilde{t}}=-\\frac{\\partial \\tilde{\\eta}}{\\partial\\tilde{ x}}-\\frac{\\hat{r}\\tilde{u}}{\\omega D_0(1+\\frac{H_0}{D_0}\\tilde{\\eta})}\n" }, { "math_id": 78, "text": "D_0 +\\eta" }, { "math_id": 79, "text": "\\frac{\\partial {\\eta_0}}{\\partial \\tilde{t}}+\\frac{\\partial u_0}{\\partial \\tilde{x}}=0" }, { "math_id": 80, "text": "\\frac{\\partial u_0}{\\partial \\tilde{t}}+\\frac{\\partial \\eta_0}{\\partial \\tilde{x}}=-\\frac{\\hat{r}u_{0}}{\\omega D_0}" }, { "math_id": 81, "text": "u_0 " }, { "math_id": 82, "text": 
"\\frac{\\hat{r}}{\\omega D_0}=\\lambda" }, { "math_id": 83, "text": "\\eta_0 " }, { "math_id": 84, "text": "-\\left(\\frac{\\partial^2}{\\partial \\tilde{t}^2}+\\lambda\\frac{\\partial }{\\partial \\tilde{t}}-\\frac{\\partial^2}{\\partial \\tilde{x}^2}\\right)\\eta_0 = 0" }, { "math_id": 85, "text": "\\left\\{\n \\begin{array}{ll}\n \\eta_{0}(0,\\tilde{t})=\\cos(\\tilde{t})\\\\\n \\frac{\\partial \\eta_0}{\\partial \\tilde{x}}(kL,\\tilde{t})=0\n \\end{array}\n \\right." }, { "math_id": 86, "text": "L" }, { "math_id": 87, "text": "x=L" }, { "math_id": 88, "text": "\\eta_0(\\tilde{x},\\tilde{t})=\\mathfrak{Re}(\\hat{\\eta}_0(\\tilde{x})e^{-it})" }, { "math_id": 89, "text": "\\left\\{\n \\begin{array}{ll}\n \\hat{\\eta}_{0}(\\tilde{x})=\\frac{\\cos(\\mu(\\tilde{x}-kL))}{\\cos({\\mu kL})}\\\\\n \\hat{u}_{0}(\\tilde{x})=-i\\frac{\\sin(\\mu(\\tilde{x}-kL))}{\\cos({\\mu kL})}\n \\end{array}\n \\right." }, { "math_id": 90, "text": "\\mu=\\sqrt{1+i\\lambda}" }, { "math_id": 91, "text": "\\frac{\\partial {\\eta_1}}{\\partial \\tilde{t}}+\\frac{\\partial u_1}{\\partial \\tilde{x}}=0 " }, { "math_id": 92, "text": "\\frac{\\partial u_1}{\\partial \\tilde{t}}+\\frac{\\partial \\eta_1}{\\partial \\tilde{x}}+\\frac{\\hat{r}u_1}{\\omega D_0}=\\frac{\\hat{r}u_{0}\\eta_0}{\\omega D_0}" }, { "math_id": 93, "text": "\\eta_0(\\tilde{x},\\tilde{t})" }, { "math_id": 94, "text": "u_0(\\tilde{x},\\tilde{t})" }, { "math_id": 95, "text": "\\eta_0(\\tilde{x},\\tilde{t})=\\frac{1}{2}\\hat{\\eta}_0e^{-it}+\\frac{1}{2}\\hat{\\eta}_0^*e^{it}" }, { "math_id": 96, "text": "u_0(\\tilde{x},\\tilde{t})=\\frac{1}{2}\\hat{u}_0e^{-it}+\\frac{1}{2}\\hat{u}_0^*e^{it}" }, { "math_id": 97, "text": "*" }, { "math_id": 98, "text": "\\frac{\\hat{r}u_{0}\\eta_0}{\\omega D_0}=\\frac{\\hat{r}}{4\\omega D_0}(\\hat{u}_0^*\\hat{\\eta}_0+\\hat{u}_0\\hat{\\eta}_0^*)+\\frac{\\hat{r}}{4\\omega D_0}(\\hat{u}_0\\hat{\\eta}_0e^{-2it}+\\hat{u}_0^*\\hat{\\eta}_0^*e^{2it})" }, { "math_id": 99, "text": "M_0" }, { "math_id": 100, "text": "0" }, { "math_id": 101, "text": " c(t) \\sim \\frac{h(t)}{b(t)^2} \\approx \\frac{\\langle h\\rangle[1+(\\eta/H_0)(H_0/\\langle h\\rangle)]}{\\langle b\\rangle^{1/2}[1+(\\eta/H_0)(\\Delta b/\\langle b\\rangle)]^{1/2}}" }, { "math_id": 102, "text": "h(t)" }, { "math_id": 103, "text": "b(t)" }, { "math_id": 104, "text": "\\langle \\rangle" }, { "math_id": 105, "text": " c \\sim \\frac{\\langle h\\rangle}{\\langle b\\rangle^{1/2}}[1+\\gamma(\\eta/H_{0})] " }, { "math_id": 106, "text": " \\gamma = \\frac{H_0}{\\langle h\\rangle}-\\frac{1}{2}\\frac{\\Delta b}{\\langle b\\rangle } " }, { "math_id": 107, "text": " \\gamma>0 " }, { "math_id": 108, "text": " \\gamma<0 " }, { "math_id": 109, "text": "\\gamma" }, { "math_id": 110, "text": " r " }, { "math_id": 111, "text": "\ng \\frac{\\partial \\eta}{\\partial r} = \\frac{u^2}{r}\n" }, { "math_id": 112, "text": "M_6 " }, { "math_id": 113, "text": " M_6 " }, { "math_id": 114, "text": " H_{M4} " }, { "math_id": 115, "text": " H_{M6} " }, { "math_id": 116, "text": " H_{M2} " }, { "math_id": 117, "text": "H_{M2}" }, { "math_id": 118, "text": "H_{M4}" }, { "math_id": 119, "text": " x " }, { "math_id": 120, "text": " H_{M4} \\propto H_{M2}^2 " }, { "math_id": 121, "text": "H_{M6}" }, { "math_id": 122, "text": "q_{s}=\\int_{z_b+a_r}^\\eta (uc-K_{b}\\frac{\\partial c}{\\partial z})dz" }, { "math_id": 123, "text": "q_{s}" }, { "math_id": 124, "text": "c" }, { "math_id": 125, "text": "K_b" }, { "math_id": 126, "text": "a_{r}" }, { "math_id": 127, "text": "z_{b}" }, { "math_id": 128, 
"text": "q_b=\\beta u^3" }, { "math_id": 129, "text": "\\beta" }, { "math_id": 130, "text": "u(t)=U_{M2}\\cos(\\omega_{M2}t-\\phi_{M2})+U_{M4}\\cos(\\omega_{M4}t-\\phi_{M4})." }, { "math_id": 131, "text": "C=\\int^{\\eta}_{z_{b}+a_r}c dz" }, { "math_id": 132, "text": "\\frac{\\partial C}{\\partial t}=\\alpha u^{2}-\\frac{W_s^2}{K_v}C" }, { "math_id": 133, "text": "W_s" }, { "math_id": 134, "text": "K_v" }, { "math_id": 135, "text": "\\alpha" }, { "math_id": 136, "text": "u(t)" }, { "math_id": 137, "text": "C" }, { "math_id": 138, "text": "\\langle q_s\\rangle=\\frac{K_v\\alpha}{W_s^{2}}(\\frac{1}{4}U_{M2}^2U_{M4}\\cos(2 \\phi_{M2}-\\phi_{M4})(\\frac{2}{1+a^{2}}+\\frac{1}{1+4a^2})+\n\\frac{a}{2}U_{M2}^{2}U_{M4}\\sin(2\\phi_{M2}-\\phi_{M4})(\\frac{2}{1+a^{2}}+\\frac{1}{1+4a^2}))" }, { "math_id": 139, "text": "\\langle q_b\\rangle=\\frac{3\\beta}{4}U^{2}_{M2}U_{M4}\\cos(2\\phi_{M2}-\\phi_{M4})" }, { "math_id": 140, "text": "a=\\frac{\\omega_{M2}K_v^2}{W_s}" }, { "math_id": 141, "text": "\\langle q_b\\rangle" }, { "math_id": 142, "text": "\\langle q_s\\rangle" }, { "math_id": 143, "text": "\\cos(2\\phi_{M2}-\\phi_{M4})" }, { "math_id": 144, "text": "|2\\phi_{M2}-\\phi_{M4}|>90^\\circ" }, { "math_id": 145, "text": "|2\\phi_{M2}-\\phi_{M4}|<90^\\circ" }, { "math_id": 146, "text": "\\langle q_s\\rangle >0" }, { "math_id": 147, "text": "\\langle q_b\\rangle>0" }, { "math_id": 148, "text": "\\sin(2\\phi_{M2}-\\phi_{M4})" }, { "math_id": 149, "text": "0^\\circ<2\\phi_{M2}-\\phi_{M4}<180^\\circ" }, { "math_id": 150, "text": "180^\\circ<2\\phi_{M2}-\\phi_{M4}<360^\\circ" } ]
https://en.wikipedia.org/wiki?curid=70259277
70262245
Zero stability
Zero-stability, also known as D-stability in honor of Germund Dahlquist, refers to the stability of a numerical scheme applied to the simple initial value problem formula_0. A linear multistep method is "zero-stable" if all roots of the characteristic equation that arises on applying the method to formula_1 have magnitude less than or equal to unity, and all roots with unit magnitude are simple. This is called the "root condition" and means that the parasitic solutions of the recurrence relation will not grow exponentially. Example. The following third-order method has the highest order possible for any explicit two-step method for solving formula_2: formula_3 If formula_4 identically, this gives a linear recurrence relation with characteristic equation formula_5 The roots of this equation are formula_6 and formula_7 and so the general solution to the recurrence relation is formula_8. Rounding errors in the computation of formula_9 would mean a nonzero (though small) value of formula_10 so that eventually the parasitic solution formula_11 would dominate. Therefore, this method is not zero-stable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
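The root condition can be checked numerically, as in the following sketch (added for illustration, not part of the original article):

```python
import numpy as np

# Sketch (not from the source): check the root condition for the two-step
# method above, whose characteristic polynomial is r^2 + 4r - 5.
print(np.roots([1.0, 4.0, -5.0]))               # roots 1 and -5

def satisfies_root_condition(rho, tol=1e-10):
    """rho: coefficients of the characteristic polynomial, highest degree first."""
    r = np.roots(rho)
    if np.any(np.abs(r) > 1.0 + tol):
        return False                             # a root lies outside the unit circle
    on_circle = r[np.abs(np.abs(r) - 1.0) <= tol]
    for i, ri in enumerate(on_circle):           # roots of unit magnitude must be simple
        if np.any(np.abs(np.delete(on_circle, i) - ri) <= tol):
            return False
    return True

print(satisfies_root_condition([1.0, 4.0, -5.0]))   # False: not zero-stable
print(satisfies_root_condition([1.0, -1.0]))        # True, e.g. rho(r) = r - 1 (Euler's method)
```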
[ { "math_id": 0, "text": " y'(x) = 0" }, { "math_id": 1, "text": "y'(x) = 0" }, { "math_id": 2, "text": "y'(x) = f(x)" }, { "math_id": 3, "text": "y_{n+2} + 4 y_{n+1} - 5y_n = h(4f_{n+1} + 2 f_n)." }, { "math_id": 4, "text": "f(x)=0" }, { "math_id": 5, "text": "r^2 + 4r - 5=(r-1)(r+5) = 0." }, { "math_id": 6, "text": "r=1" }, { "math_id": 7, "text": "r=-5" }, { "math_id": 8, "text": "y_n = c_1\\cdot 1^n + c_2 (-5)^n" }, { "math_id": 9, "text": "y_1" }, { "math_id": 10, "text": "c_2" }, { "math_id": 11, "text": "(-5)^n" } ]
https://en.wikipedia.org/wiki?curid=70262245
70271889
Judges 5
Book of Judges, chapter 5 Judges 5 is the fifth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy through Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of judge Deborah, belonging to a section comprising to 5:31. Text. This chapter was originally written in the Hebrew language. It is divided into 31 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" The victory song attributed to Deborah in this chapter is one of the oldest extant Israelite literary compositions dating to around the 12th century BCE, roughly contemporaneous with the period of time it depicts. Comparable to earlier works of the Canaanites discovered at Ugarit, the composition is characterized by a 'parallelistic variety of repetition whereby imagery unfolds in a beautifully layered or impressionistic style' so that 'the parallel line adds colour, nuance, or contrast to its neighbouring description'. The lines (in bicola or tricola) are generally about parallel in length. The content itself draws upon traditional Israelite media of expression, also employed by others in the biblical tradition. "Then sang Deborah and Barak the son of Abinoam on that day, saying," Song of Deborah (5:2–31). The structure of the Song of Deborah is as follows: The call to hear this song contains parallel terms and syntax with the formulaic introduction 'hear/give ear' (cf Deuteronomy 32:1; Isaiah 1:2), to state that YHWH, both the muse and victor, is the ultimate source and receiver of the song. 
Verses 24–27 present another version of the tale of Jael in wonderfully economic style, with the repetition that underscores the violent turn in the action as Jael is described as one who strikes, crushes, shatters, and pierces, as she at the same time seduces and slaughters the enemy. In contrast to Jael as a tent-dwelling woman, the mother of Sisera is a noblewoman peering from a house with lattice-work windows (cf. 2 Kings 10:30), accompanied by ladies-in-waiting, but instead of expecting the coming of Sisera with the spoils of war, it is Sisera himself who has been despoiled at the hands of a warrior woman. "In the days of Shamgar the son of Anath, in the days of Jael, the highways were unoccupied, and the travellers walked through byways." "Thus let all Your enemies perish, O LORD! But let those who love Him be like the sun When it comes out in full strength." "So the land had rest for forty years." Verse 31. The abrupt burst by which the song ends depicts the completeness of the overthrow, causing it to be long remembered as an example of Israel's triumph over God's enemies (Psalm 83:9–10; Psalm 83:12–15). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70271889
70275297
Faroe-Bank Channel overflow
Overflow current from Nordic Seas towards North Atlantic Ocean Cold and dense water from the Nordic Seas is transported southwards as Faroe-Bank Channel overflow. This water flows from the Arctic Ocean into the North Atlantic through the Faroe-Bank Channel between the Faroe Islands and Scotland. The overflow transport is estimated to account for about one-third (2.1±0.2 Sv, on average) of the total overflow over the Greenland-Scotland Ridge. The remaining two-thirds of the overflow water passes through Denmark Strait (being the strongest overflow branch with an estimated transport of 3.5 Sv), the Wyville Thomson Ridge (0.3 Sv), and the Iceland-Faroe Ridge (1.1 Sv). Faroe-Bank Channel overflow (FBCO) contributes to a large extent to the formation of North Atlantic Deep Water. Therefore, FBCO is important for water transport towards the deep parts of the North Atlantic, playing a significant role in Earth's climate system. Faroe-Bank Channel. The Faroe-Bank Channel (FBC) is a deeply eroded channel in the Greenland-Scotland Ridge (GSR). Its primary sill, located south of the Faroe Islands, has a width of about 15 km and a maximum depth of 840 m, with very steep walls at both sides of the channel. About 100 km north-west of this sill, there is a secondary sill with a maximum depth of 850 m. Faroe-Bank Channel overflow enters the FBC from the northeast, turns towards the west between the Faroe Islands and the Faroe Bank, and leaves the GSR in a southwestern direction, west-southwest of the Faroe Islands. Hydrography. The water flowing over the Greenland-Scotland Ridge through the Faroe-Bank Channel consists of a very well-mixed bottom layer, with a stratified water layer on top. The temperature of this stratified layer can reach 11 °C in the upper 100 m of the channel, with a salinity around 35.1 g/kg; between 100 and 400 m depth the temperature of the water in the stratified layer is around 8 °C, with a salinity of 35.2 g/kg. The water below 400 m, in the well-mixed layer, can be characterised as overflow water. Definition of overflow. The mixed bottom layer of the FBC is where the actual overflow takes place, being fed by inflow of cold and fresh North Atlantic Water, Modified North Atlantic Water, Norwegian Sea Deep Water and Norwegian Sea Arctic Intermediate Water. These water masses have different temperatures (between -0.5 and 7.0 °C) and salinities (between 34.7 and 35.4 g/kg). Therefore, it may be complicated to define exactly which water entering the FBC contributes to the actual overflow. Four definitions are possible: two depend on the overflow velocity, one on the overflow flux, and one on the overflow water properties. The simplest definition is in terms of velocities: water with a velocity in the northwestern direction is then termed Faroe-Bank Channel overflow. At the sill, velocities can reach up to 1.2 m/s, accelerating as the water flows down the deepening bathymetry. In this respect, high velocities are associated with strong mixing and highly turbulent flows. In the stratified layer at the top of the channel, velocities become negative (i.e., in the southeastern direction), so these waters are not part of the overflow.
Another option is to take into account the barotropic (i.e., horizontal sea-surface height gradients determine currents) and baroclinic (i.e., horizontal density gradients determine currents) pressure gradients at the overflow depth between both sides of the GSR: formula_0 formula_1 where formula_2 is the decrease in sea-surface height and formula_3 is the decrease in interface height from upstream areas to the sill. Processes like mixing, circulation and convection contribute to these pressure gradients. The overflow velocity, then, scales as follows with the pressure gradient between the basins north and south of the ridge: formula_4 This velocity can then be used to define the total overflow flux in the FBC. A third definition is the so-called kinematic overflow: the water flux from the bottom of the channel up to the interface height, the level where the velocity in the northwestern direction measures one half of the maximum velocity in the profile. The overflow flux is then calculated through formula_5 where formula_6 is the average profile velocity, formula_7 is the interface height, formula_8 is the height of the layer below the lowest measurement station in the channel, and formula_9 is the volume flux per unit width of the channel. Lastly, overflow can also be defined on the basis of hydrographical properties: namely as water that flows through the FBC having a temperature lower than 3 °C, or having a potential density higher than 27.8 kg/m3. This definition is most often used when estimating values for the magnitude of the FBCO. Periodicity. Temperature and salinity profiles as well as current speeds in the FBC vary strongly on a day-to-day basis. The dense water forms domes that move through the channel with a period of 2.5 to 6 days. At the ocean surface, this periodicity can be observed in the form of topographic Rossby waves, which are caused by mesoscale oscillations in the velocity field. The resulting eddies are the consequence of baroclinic instabilities within the overflow water, which then induce the observed periodicity. On a greater timescale, atmospheric forcing also causes periodic changes in the FBCO. When the atmospheric circulation governing the Nordic Seas is in a cyclonic (anticyclonic) regime, the source of the deep water predominantly comes via a western (eastern) inflow path, and the FBCO will be weaker (stronger). The eastern inflow path is called the Faroe-Strait Channel Jet. This transition from a cyclonic to an anticyclonic regime takes place on an interannual timescale, but the atmospheric forcing also shows a seasonal cycle. During summer the weakened cyclonic winds are associated with a higher FBCO transport. This indicates a fast barotropic response to the wind forcing. Outflow. Faroe-Strait Channel Jet water is much colder than the water flowing into the Faroe-Bank Channel via its western entrance path. Within the FBC, water always flows along its eastern rather than its western boundary, regardless of the different inflow pathways from the Nordic Seas. Moreover, at times when the eastern inflow path is dominant, overflow waters are denser and higher in volume. After passing the primary Faroe-Bank Channel sill, the overflow bifurcates into two different branches that flow on top of each other, each with a maximum velocity of 1.35 m/s. The average thickness of the total outflow plume along its descent is 160±70 m, showing a high lateral variability, and yields a transport of ~1 Sv per branch.
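As a rough numerical illustration of the velocity- and flux-based definitions above, the following Python sketch evaluates the pressure-gradient scaling for the overflow velocity and a discretised version of the kinematic-overflow integral. All inputs (height drops, density contrast, and the cross-channel velocity and interface profiles) are assumed, illustrative values rather than measurements, and the station layout is hypothetical.

```python
import numpy as np

# --- Overflow velocity from the pressure-gradient scaling (illustrative values) ---
g = 9.81        # gravitational acceleration [m/s^2]
rho = 1027.0    # reference sea-water density [kg/m^3]
d_h = 0.05      # assumed sea-surface height drop towards the sill [m]
d_H = 200.0     # assumed interface-height drop towards the sill [m]
d_rho = 0.3     # assumed density contrast across the interface [kg/m^3]

dP_trop = g * d_h                 # barotropic term
dP_clin = g * d_H * d_rho / rho   # baroclinic term
v = np.sqrt(2.0 * (dP_trop + dP_clin))
print(f"scaled overflow velocity: {v:.2f} m/s")   # order 1 m/s, comparable to observed sill speeds

# --- Kinematic overflow: discretised version of the flux integral ---
x   = np.array([0.0, 5e3, 10e3, 15e3])        # assumed cross-channel station positions [m]
v0  = np.array([0.3, 0.8, 0.7, 0.2])          # mean velocity below the interface [m/s]
h0  = np.array([200.0, 350.0, 300.0, 150.0])  # interface height above the lowest instrument [m]
h_B = 20.0                                    # thickness of the unresolved bottom layer [m]
q_B = v0 * h_B                                # assumed bottom-layer flux per unit width [m^2/s]

Q_k = np.trapz(v0 * (h0 - h_B), x) + np.trapz(q_B, x)   # [m^3/s]
print(f"kinematic overflow: {Q_k / 1e6:.2f} Sv")
```

With these made-up profiles the flux comes out in the low single-digit Sverdrup range, the same order of magnitude as the mean transport quoted above.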
A transverse circulation actively dilutes the bottom branch of the plume. The shallow, intermediate branch transports warmer, less dense outflow water along the ridge slope towards the west. This branch mixes with oxygen-poor, fresh Modified East Icelandic Water. The deep (deeper than 1000 m) branch transports the densest, coldest water towards the deep parts of the North Atlantic. This branch entrains warmer and more saline water, mixes, and consequently obtains higher temperatures and salinity. Both branches ultimately contribute to the formation of North Atlantic Deep Water. North Atlantic overturning. The Atlantic meridional overturning circulation (AMOC) is important for Earth's climate because of its distribution of heat and salinity over the globe. The strength of the Faroe-Bank Channel overflow is an important indicator for the stability of the AMOC, since the overflow produces dense waters that contribute to a large extent to the total overturning in the North Atlantic. Parameters that can affect the AMOC are the kinematic overflow (i.e., the magnitude of the overflow transport) and the overflow density (the AMOC being a density-driven circulation). In this respect, the density characteristics of the overflow could vary even if the kinematic overflow does not. Measurements. From 1995 onwards, FBCO has been monitored by a continuous Acoustic Doppler current profiler (ADCP) mooring, measuring volume transport, hydrographic properties and the density of the overflow. The kinematic overflow, derived from the velocity field, showed a non-significant positive linear trend of 0.01±0.013 Sv/yr between 1995 and 2015, whereas the coldest part of the FBCO warmed by 0.1±0.06 °C over that same period (which made the density decrease), causing increasing transport of heat into the AMOC. This warming, however, is accompanied by an observed salinity (and therefore density) increase, which results in no net change in density. Model simulations. Climate models have shown an overall decreasing trend in the baroclinic component of the overflow between 1948 and 2005; the barotropic pressure gradient, however, shows an increasing trend of equal magnitude. These processes compensate each other; as a result the pressure difference at depth does not show a significant trend over time. Global inverse modelling, ocean hydrographic surveys, chlorofluorocarbon (CFC) inventories, and monitoring of the AMOC from 2004 to present have shown that the AMOC has slowed down in the past decades. As explained, the density of FBCO waters did not significantly change in that time period, so changes in FBCO cannot (fully) explain the changes in the AMOC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta P_{trop} = g\\Delta h" }, { "math_id": 1, "text": "\\Delta P_{clin} = g\\Delta H \\frac{\\Delta\\rho}{\\rho}" }, { "math_id": 2, "text": "\\Delta h" }, { "math_id": 3, "text": "\\Delta H" }, { "math_id": 4, "text": "v = \\sqrt{2\\left(\\Delta P_{trop} + \\Delta P_{clin}\\right)}" }, { "math_id": 5, "text": "Q_{k}(t) = \\int_{x_{1}}^{x_{2}} v_{0}(x,t) \\left(h_{0}(x,t) -\nh_{B}\\right) \\operatorname{d}\\!x + \\int_{x_{1}}^{x_{2}} q_{B}\\left(x,t\\right)\\operatorname{d}\\!x" }, { "math_id": 6, "text": "v_{0}" }, { "math_id": 7, "text": "h_{0}" }, { "math_id": 8, "text": "h_{B}" }, { "math_id": 9, "text": "q_{B}" } ]
https://en.wikipedia.org/wiki?curid=70275297
70277231
Mixed Poisson distribution
A mixed Poisson distribution is a univariate discrete probability distribution in stochastics. It results from assuming that the conditional distribution of a random variable, given the value of the rate parameter, is a Poisson distribution, and that the rate parameter itself is considered as a random variable. Hence it is a special case of a compound probability distribution. Mixed Poisson distributions can be found in actuarial mathematics as a general approach for the distribution of the number of claims and are also examined as an epidemiological model. It should not be confused with the compound Poisson distribution or the compound Poisson process. Definition. A random variable "X" satisfies the mixed Poisson distribution with density π("λ") if it has the probability distribution formula_0 If we denote the probabilities of the Poisson distribution by "q""λ"("k"), then formula_1 Properties. In the following let formula_2 be the expected value of the density formula_3 and formula_4 be the variance of the density. Expected value. The expected value of the mixed Poisson distribution is formula_5 Variance. For the variance one gets formula_6 Skewness. The skewness can be represented as formula_7 Characteristic function. The characteristic function has the form formula_8 where formula_9 is the moment generating function of the density. Probability generating function. For the probability generating function, one obtains formula_10 Moment-generating function. The moment-generating function of the mixed Poisson distribution is formula_11
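As a concrete illustration of the defining integral, the Python sketch below evaluates the mixed Poisson probabilities numerically for an assumed gamma mixing density π(λ); for that particular choice the mixture is known to be a negative binomial distribution, which is used here purely as a cross-check. The gamma form and its parameters are illustrative assumptions, not part of the general definition.

```python
import numpy as np
from scipy import integrate, stats

# Assumed mixing density: gamma with shape a and rate b, pi(lam) = b^a lam^(a-1) e^(-b lam) / Gamma(a)
a, b = 3.0, 0.5

def mixed_poisson_pmf(k, a, b):
    """P(X = k) = integral over lam of Poisson(k; lam) * pi(lam), evaluated numerically."""
    integrand = lambda lam: stats.poisson.pmf(k, lam) * stats.gamma.pdf(lam, a, scale=1.0 / b)
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return val

# For a gamma mixing density the mixed Poisson reduces to a negative binomial
# with n = a and p = b / (1 + b); used here only to check the numerical integration.
for k in range(5):
    print(k, mixed_poisson_pmf(k, a, b), stats.nbinom.pmf(k, a, b / (1.0 + b)))

# Mean and variance follow the formulas above: E(X) = mu_pi and Var(X) = mu_pi + sigma_pi^2,
# where mu_pi = a/b and sigma_pi^2 = a/b^2 for this gamma density.
mu_pi, sigma2_pi = a / b, a / b**2
print("E(X) =", mu_pi, " Var(X) =", mu_pi + sigma2_pi)
```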
[ { "math_id": 0, "text": "\\operatorname{P}(X=k) = \\int_0^\\infty \\frac{\\lambda^k}{k!}e^{-\\lambda} \\,\\,\\pi(\\lambda)\\,\\mathrm d\\lambda. " }, { "math_id": 1, "text": "\\operatorname{P}(X=k) = \\int_0^\\infty q_\\lambda(k) \\,\\,\\pi(\\lambda)\\,\\mathrm d\\lambda. " }, { "math_id": 2, "text": "\\mu_\\pi=\\int\\limits_0^\\infty \\lambda \\,\\,\\pi(\\lambda) \\, d\\lambda\\," }, { "math_id": 3, "text": "\\pi(\\lambda)\\," }, { "math_id": 4, "text": "\\sigma_\\pi^2 = \\int\\limits_0^\\infty (\\lambda-\\mu_\\pi)^2 \\,\\,\\pi(\\lambda) \\, d\\lambda\\," }, { "math_id": 5, "text": "\\operatorname{E}(X) = \\mu_\\pi." }, { "math_id": 6, "text": "\\operatorname{Var}(X) = \\mu_\\pi+\\sigma_\\pi^2. " }, { "math_id": 7, "text": "\\operatorname{v}(X) = \\Bigl(\\mu_\\pi+\\sigma_\\pi^2\\Bigr)^{-3/2} \\,\\Biggl[\\int_0^\\infty(\\lambda-\\mu_\\pi)^3\\,\\pi(\\lambda)\\,d{\\lambda}+\\mu_\\pi\\Biggr]." }, { "math_id": 8, "text": "\\varphi_X(s) = M_\\pi(e^{is}-1).\\," }, { "math_id": 9, "text": " M_\\pi " }, { "math_id": 10, "text": "m_X(s) = M_\\pi(s-1).\\," }, { "math_id": 11, "text": "M_X(s) = M_\\pi(e^s-1).\\," } ]
https://en.wikipedia.org/wiki?curid=70277231
70280311
Judges 6
Book of Judges, chapter 6 Judges 6 is the sixth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of judge Gideon, belonging to a section comprising Judges 6 to 9 and a bigger section of Judges 6:1 to 16:31. Text. This chapter was originally written in the Hebrew language. It is divided into 40 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q6 (1QJudg; &lt; 68 BCE) with extant verses 20–22, and 4Q49 (4QJudga; 50–25 BCE) with extant verses 2–6, 11–13. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Chapters 6 to 9 record the Gideon/Abimelech Cycle, which has two major parts: The Abimelech account is really a sequel of the Gideon account, resolving a number of complications originating in the Gideon narrative. In this narrative, for the first time Israel's appeal to Yahweh was met with a stern rebuke rather than immediate deliverance, and the whole cycle addresses the issue of infidelity and religious deterioration. The Gideon Narrative (6:1–8:32) consists of five sections along concentric lines — thematic parallels exist between the first (A) and fifth (A') sections as well as between the second (B) and fourth (B') sections, whereas the third section (C) stands alone — forming a symmetrical pattern as follows: A. Prologue to Gideon (6:1–10) B. God's plan of deliverance through the call of Gideon—the story of two altars (6:11–32) B1. The first altar—call and commissioning of Gideon (6:11–24) B2. The second altar—the charge to clean house (6:25–32) C.
Gideon's personal faith struggle (6:33–7:18) a. The Spirit-endowed Gideon mobilizes 4 tribes against the Midianites, though lacking confidence in God's promise (6:33–35) b. Gideon seeks a sign from God with two fleecings to confirm the promise that Yahweh will give Midian into his hand (6:36-40) c. With the fearful Israelites having departed, God directs Gideon to go down to the water for the further reduction of his force (7:1–8) c'. With fear still in Gideon himself, God directs Gideon to go down to the enemy camp to overhear the enemy (7:9–11) b'. God provides a sign to Gideon with the dream of a Midianite and its interpretation to confirm the promise that Yahweh will give Midian into his hand (7:12–14) a'. The worshiping Gideon mobilizes his force of 300 for a surprise attack against the Midianites, fully confident in God's promise (7:15–18) B'. God's deliverance from the Midianites—the story of two battles (7:19–8:21) B1'. The first battle (Cisjordan) (7:19–8:3) B2'. The second battle (Transjordan) (8:4–21) A'. Epilogue to Gideon (8:22–32) The call of Gideon (6:1–24). The Gideon narrative follows the conventionalized pattern of the judges (cf. Judges 2:11–23; 3:12–30) with a description of the oppressed Israel (as an agriculturally based community; verses 3–5), because Israel had worshipped gods other than YHWH. God's response to Israel's cry this time was different from the earlier accounts, as a prophet was sent to confront and indict the people of their unfaithfulness instead of directly sending a deliverer, implying that there would be a time when God's patience turned into judgment. Nonetheless, God, who was the rescuer of the Exodus (cf. Exodus 20:2), did send a rescuer with the call to Gideon. The account of Gideon's calling has strong similarities with that of Moses in Exodus 3 and Joshua in Joshua 1. The divine presence to Gideon, as also in the cases of Abraham, Jacob, and Manoah's wife, involved an intermediary messenger who appeared at first like a normal human being. The commissioning of Gideon (cf. Moses: Exodus 3:10; Jeremiah: Jeremiah 1:4-5; and Saul: 1 Samuel 9:20) and his humble attempt to refuse (cf. Exodus 3:11; Jeremiah 1:6; 1 Samuel 9:21) were followed by a request for a sign as an assurance that the commission was truly from God (Genesis 15:8; Exodus 4:1; also Exodus 3:12-13). The fiery consummation of Gideon's offering as evidence of the divine message follows a pattern where God's power was revealed in the fire (cf. Genesis 15:17; Exodus 3:1–6; cf. Judges 13:20). Gideon's response of building an altar placed him in a line of Israelite ancestor heroes (cf. Genesis 29:17–18; 32:30). "And the children of Israel did evil in the sight of the LORD: and the LORD delivered them into the hand of Midian seven years." Gideon destroys Baal's altar (6:25–32). Gideon's first task from God was to cut down the sacred pole or "asherah", a symbol of Baal, the Canaanite deity, and to replace the altar with an altar to YHWH, using the wood of the pole to provide the fire while offering a bull of his father's. When the people were angry at the action, Joash, Gideon's father, came to Gideon's support, by stating, 'Let Baal contend against him', which became a folk etymology for Gideon's new name, "Jerub-Baal" (meaning: "Let/May Baal contend/indict"), and this completes Gideon's transformation from 'farmer's son' to 'warrior hero'. The sign of the fleece (6:33–40).
Gideon was filled with the spirit of God (verse 34), a mark of charismatic leaders such as Samson, Jephthah, and Saul, but he still needed more confirmation for the battle and requested a sign of God's support. A fleece of wool as the material for the sign draws on Israel's agricultural world, in keeping with the tradition throughout the book of Judges. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70280311
702847
True-range multilateration
Using distance measures along a shape's edges to determine position in space True-range multilateration (also termed range-range multilateration and spherical multilateration) is a method to determine the location of a movable vehicle or stationary point in space using multiple ranges (distances) between the vehicle/point and multiple spatially-separated known locations (often termed "stations"). Energy waves may be involved in determining range, but are not required. True-range multilateration is both a mathematical topic and an applied technique used in several fields. A practical application involving a fixed location occurs in surveying. Applications involving vehicle location are termed navigation when on-board persons/equipment are informed of its location, and are termed surveillance when off-vehicle entities are informed of the vehicle's location. Two "slant ranges" from two known locations can be used to locate a third point in a two-dimensional Cartesian space (plane), which is a frequently applied technique (e.g., in surveying). Similarly, two "spherical ranges" can be used to locate a point on a sphere, which is a fundamental concept of the ancient discipline of celestial navigation — termed the "altitude intercept" problem. Moreover, if more than the minimum number of ranges are available, it is good practice to utilize those as well. This article addresses the general issue of position determination using multiple ranges. In two-dimensional geometry, it is known that if a point lies on two circles, then the circle centers and the two radii provide sufficient information to narrow the possible locations down to two – one of which is the desired solution and the other is an ambiguous solution. Additional information often narrows the possibilities down to a unique location. In three-dimensional geometry, when it is known that a point lies on the surfaces of three spheres, then the centers of the three spheres along with their radii also provide sufficient information to narrow the possible locations down to no more than two (unless the centers lie on a straight line). True-range multilateration can be contrasted to the more frequently encountered pseudo-range multilateration, which employs range differences to locate a (typically, movable) point. Pseudo range multilateration is almost always implemented by measuring times-of-arrival (TOAs) of energy waves. True-range multilateration can also be contrasted to triangulation, which involves the measurement of angles. Terminology. There is no accepted or widely-used general term for what is termed "true-range multilateration" here. That name is selected because it: (a) is an accurate description and partially familiar terminology ("multilateration" is often used in this context); (b) avoids specifying the number of ranges involved (as does, e.g., "range-range"); (c) avoids implying an application (as do, e.g., "DME/DME navigation" or "trilateration") and (d) avoids confusion with the more common pseudo-range multilateration. Obtaining ranges. For similar ranges and measurement errors, a navigation and surveillance system based on true-range multilateration provides service to a significantly larger 2-D area or 3-D volume than systems based on pseudo-range multilateration. However, it is often more difficult or costly to measure true ranges than it is to measure pseudo ranges. For distances up to a few miles and fixed locations, true-range can be measured manually.
This has been done in surveying for several thousand years – e.g., using ropes and chains. For longer distances and/or moving vehicles, a radio/radar system is generally needed. This technology was first developed circa 1940 in conjunction with radar. Since then, three methods have been employed: Solution methods. "True-range multilateration" algorithms may be partitioned based on the number of problem-space dimensions, the problem-space geometry (Cartesian or spherical) and whether redundant range measurements are available. Any pseudo-range multilateration algorithm can be specialized for use with true-range multilateration. Two Cartesian dimensions, two measured slant ranges (trilateration). An analytic solution has likely been known for over 1,000 years, and is given in several texts. Moreover, one can easily adapt algorithms for a three-dimensional Cartesian space. The simplest algorithm employs analytic geometry and a station-based coordinate frame. Thus, consider the circle centers (or stations) C1 and C2 in Fig. 1 which have known coordinates (e.g., have already been surveyed) and thus whose separation formula_0 is known. The figure 'page' contains C1 and C2. If a third 'point of interest' P (e.g., a vehicle or another point to be surveyed) is at unknown point formula_1, then Pythagoras's theorem yields formula_4 Thus, solving these two equations gives x = (r1² − r2² + U²) / (2U) and y = ±√(r1² − x²) (Equation 1). Note that formula_5 has two values (i.e., solution is ambiguous); this is usually not a problem. While there are many enhancements, Equation 1 is the most fundamental true-range multilateration relationship. Aircraft DME/DME navigation and the trilateration method of surveying are examples of its application. During World War II Oboe and during the Korean War SHORAN used the same principle to guide aircraft based on measured ranges to two ground stations. SHORAN was later used for off-shore oil exploration and for aerial surveying. The Australian Aerodist aerial survey system utilized 2-D Cartesian true-range multilateration. This 2-D scenario is sufficiently important that the term "trilateration" is often applied to all applications involving a known baseline and two range measurements. The baseline containing the centers of the circles is a line of symmetry. The correct and ambiguous solutions are perpendicular to and equally distant from (on opposite sides of) the baseline. Usually, the ambiguous solution is easily identified. For example, if P is a vehicle, any motion toward or away from the baseline will be opposite that of the ambiguous solution; thus, a crude measurement of vehicle heading is sufficient. A second example: surveyors are well aware of which side of the baseline P lies on. A third example: in applications where P is an aircraft and C1 and C2 are on the ground, the ambiguous solution is usually below ground. If needed, the interior angles of triangle C1-C2-P can be found using the trigonometric law of cosines. Also, if needed, the coordinates of P can be expressed in a second, better-known coordinate system—e.g., the Universal Transverse Mercator (UTM) system—provided the coordinates of C1 and C2 are known in that second system. Both are often done in surveying when the trilateration method is employed. Once the coordinates of P are established, lines C1-P and C2-P can be used as new baselines, and additional points surveyed. Thus, large areas or distances can be surveyed based on multiple, smaller triangles—termed a "traverse". An implied assumption for the above equation to be true is that formula_2 and formula_3 relate to the same position of P.
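As a minimal, runnable sketch of Equation 1, the Python snippet below places C1 at the origin and C2 at (U, 0) and returns both the correct and the ambiguous solution; the baseline and ranges in the example are made-up values used only for illustration.

```python
import math

def trilaterate_2d(U, r1, r2):
    """Locate P from range r1 to C1 at (0, 0) and range r2 to C2 at (U, 0).

    From r1^2 = x^2 + y^2 and r2^2 = (U - x)^2 + y^2:
        x = (r1^2 - r2^2 + U^2) / (2 U)
        y = +/- sqrt(r1^2 - x^2)      # the sign ambiguity discussed in the text
    """
    x = (r1**2 - r2**2 + U**2) / (2.0 * U)
    y_sq = r1**2 - x**2
    if y_sq < 0:
        raise ValueError("ranges are inconsistent with the baseline length")
    y = math.sqrt(y_sq)
    return (x, y), (x, -y)   # the two candidate solutions; external information picks the correct one

# Example: baseline of 10 units, P actually at (4, 3)
r1 = math.hypot(4.0, 3.0)          # 5.0
r2 = math.hypot(10.0 - 4.0, 3.0)   # ~6.708
print(trilaterate_2d(10.0, r1, r2))   # ((4.0, 3.0), (4.0, -3.0))
```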
When P is a vehicle, then typically formula_2 and formula_3 must be measured within a synchronization tolerance that depends on the vehicle speed and the allowable vehicle position error. Alternatively, vehicle motion between range measurements may be accounted for, often by dead reckoning. A trigonometric solution is also possible (side-side-side case). Also, a solution employing graphics is possible. A graphical solution is sometimes employed during real-time navigation, as an overlay on a map. Three Cartesian dimensions, three measured slant ranges. There are multiple algorithms that solve the 3-D Cartesian true-range multilateration problem directly (i.e., in closed-form) – e.g., Fang. Moreover, one can adopt closed-form algorithms developed for pseudo range multilateration. Bancroft's algorithm (adapted) employs vectors, which is an advantage in some situations. The simplest algorithm corresponds to the sphere centers in Fig. 2. The figure 'page' is the plane containing C1, C2 and C3. If P is a 'point of interest' (e.g., vehicle) at formula_6, then Pythagoras's theorem yields the slant ranges between P and the sphere centers: formula_7 Thus, the coordinates of P are: The plane containing the sphere centers is a plane of symmetry. The correct and ambiguous solutions are perpendicular to it and equally distant from it, on opposite sides. Many applications of 3-D true-range multilateration involve short ranges—e.g., precision manufacturing. Integrating range measurement from three or more radars (e.g., FAA's ERAM) is a 3-D aircraft surveillance application. 3-D true-range multilateration has been used on an experimental basis with GPS satellites for aircraft navigation. The requirement that an aircraft be equipped with an atomic clock precludes its general use. However, GPS receiver clock aiding is an area of active research, including aiding over a network. Thus, conclusions may change. 3-D true-range multilateration was evaluated by the International Civil Aviation Organization as an aircraft landing system, but another technique was found to be more efficient. Accurately measuring the altitude of aircraft during approach and landing requires many ground stations along the flight path. Two spherical dimensions, two or more measured spherical ranges. This is a classic celestial (or astronomical) navigation problem, termed the "altitude intercept" problem (Fig. 3). It's the spherical geometry equivalent of the trilateration method of surveying (although the distances involved are generally much larger). A solution at sea (not necessarily involving the Sun and Moon) was made possible by the marine chronometer (introduced in 1761) and the discovery of the 'line of position' (LOP) in 1837. The solution method now most taught at universities (e.g., U.S. Naval Academy) employs spherical trigonometry to solve an oblique spherical triangle based on sextant measurements of the 'altitude' of two heavenly bodies. This problem can also be addressed using vector analysis. Historically, graphical techniques – e.g., the intercept method – were employed. These can accommodate more than two measured 'altitudes'. Owing to the difficulty of making measurements at sea, 3 to 5 'altitudes' are often recommended. As the earth is better modeled as an ellipsoid of revolution than a sphere, iterative techniques may be used in modern implementations. 
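Returning to the three-Cartesian-dimensions case above, the coordinates of P can be obtained by eliminating variables from the three Pythagorean relations. The sketch below does exactly that; it is a straightforward derivation from those relations rather than a reproduction of the Fang or Bancroft algorithms cited in the text, and the station coordinates in the example are made up.

```python
import math

def trilaterate_3d(U, Vx, Vy, r1, r2, r3):
    """Solve
        r1^2 = x^2 + y^2 + z^2
        r2^2 = (x - U)^2 + y^2 + z^2
        r3^2 = (x - Vx)^2 + (y - Vy)^2 + z^2
    by successive elimination:
        x = (r1^2 - r2^2 + U^2) / (2 U)
        y = (r1^2 - r3^2 + Vx^2 + Vy^2 - 2 Vx x) / (2 Vy)
        z = +/- sqrt(r1^2 - x^2 - y^2)   # ambiguity about the plane of the sphere centers
    """
    x = (r1**2 - r2**2 + U**2) / (2.0 * U)
    y = (r1**2 - r3**2 + Vx**2 + Vy**2 - 2.0 * Vx * x) / (2.0 * Vy)
    z_sq = r1**2 - x**2 - y**2
    if z_sq < 0:
        raise ValueError("ranges are inconsistent with the station geometry")
    z = math.sqrt(z_sq)
    return (x, y, z), (x, y, -z)

# Example: stations at (0,0,0), (10,0,0) and (3,8,0); P actually at (4, 3, 5)
p = (4.0, 3.0, 5.0)
r = lambda c: math.dist(p, c)
print(trilaterate_3d(10.0, 3.0, 8.0, r((0, 0, 0)), r((10, 0, 0)), r((3, 8, 0))))
```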
In high-altitude aircraft and missiles, a celestial navigation subsystem is often integrated with an inertial navigation subsystem to perform automated navigation—e.g., U.S. Air Force SR-71 Blackbird and B-2 Spirit. While intended as a 'spherical' pseudo range multilateration system, Loran-C has also been used as a 'spherical' true-range multilateration system by well-equipped users (e.g., Canadian Hydrographic Service). This enabled the coverage area of a Loran-C station triad to be extended significantly (e.g., doubled or tripled) and the minimum number of available transmitters to be reduced from three to two. In modern aviation, slant ranges rather than spherical ranges are more often measured; however, when aircraft altitude is known, slant ranges are readily converted to spherical ranges. Redundant range measurements. When there are more range measurements available than there are problem dimensions, either from the same C1 and C2 (or C1, C2 and C3) stations, or from additional stations, at least these benefits accrue: The iterative Gauss–Newton algorithm for solving non-linear least squares (NLLS) problems is generally preferred when there are more 'good' measurements than the minimum necessary. An important advantage of the Gauss–Newton method over many closed-form algorithms is that it treats range errors linearly, which is often their nature, thereby reducing the effect of range errors by averaging. The Gauss–Newton method may also be used with the minimum number of measured ranges. Since it is iterative, the Gauss–Newton method requires an initial solution estimate. In 3-D Cartesian space, a fourth sphere eliminates the ambiguous solution that occurs with three ranges, provided its center is not co-planar with the first three. In 2-D Cartesian or spherical space, a third circle eliminates the ambiguous solution that occurs with two ranges, provided its center is not co-linear with the first two. One-time application versus repetitive application. This article largely describes 'one-time' application of the true-range multilateration technique, which is the most basic use of the technique. With reference to Fig. 1, the characteristic of 'one-time' situations is that point P and at least one of C1 and C2 change from one application of the true-range multilateration technique to the next. This is appropriate for surveying, celestial navigation using manual sightings, and some aircraft DME/DME navigation. However, in other situations, the true-range multilateration technique is applied repetitively (essentially continuously). In those situations, C1 and C2 (and perhaps Cn, n = 3,4...) remain constant and P is the same vehicle. Example applications (and selected intervals between measurements) are: multiple radar aircraft surveillance (5 and 12 seconds, depending upon radar coverage range), aerial surveying, Loran-C navigation with a high-accuracy user clock (roughly 0.1 seconds), and some aircraft DME/DME navigation (roughly 0.1 seconds). Generally, implementations for repetitive use: (a) employ a 'tracker' algorithm (in addition to the multilateration solution algorithm), which enables measurements collected at different times to be compared and averaged in some manner; and (b) utilize an iterative solution algorithm, as they (b1) admit varying numbers of measurements (including redundant measurements) and (b2) inherently have an initial guess each time the solution algorithm is invoked. Hybrid multilateration systems. 
Hybrid multilateration systems – those that are neither true-range nor pseudo range systems – are also possible. For example, in Fig. 1, if the circle centers are shifted to the left so that C1 is at formula_8 and C2 is at formula_9, then the point of interest P is at formula_10 This form of the solution explicitly depends on the sum and difference of formula_11 and formula_12 and does not require 'chaining' from the formula_13-solution to the formula_14-solution. It could be implemented as a true-range multilateration system by measuring formula_11 and formula_12. However, it could also be implemented as a hybrid multilateration system by measuring formula_15 and formula_16 using different equipment – e.g., for surveillance by a multistatic radar with one transmitter and two receivers (rather than two monostatic radars). While eliminating one transmitter is a benefit, there is a countervailing 'cost': the synchronization tolerance for the two stations becomes dependent on the propagation speed (typically, the speed of light) rather than the speed of point P, in order to accurately measure both formula_17. While not implemented operationally, hybrid multilateration systems have been investigated for aircraft surveillance near airports and as a GPS navigation backup system for aviation. Preliminary and final computations. The position accuracy of a true-range multilateration system—e.g., accuracy of the formula_1 coordinates of point P in Fig. 1—depends upon two factors: (1) the range measurement accuracy, and (2) the geometric relationship of P to the system's stations C1 and C2. This can be understood from Fig. 4. The two stations are shown as dots, and BLU denotes baseline units. (The measurement pattern is symmetric about both the baseline and the perpendicular bisector of the baseline, and is truncated in the figure.) As is commonly done, individual range measurement errors are taken to be independent of range, statistically independent and identically distributed. This reasonable assumption separates the effects of user-station geometry and range measurement errors on the error in the calculated formula_1 coordinates of P. Here, the measurement geometry is simply the angle at which two circles cross—or equivalently, the angle between lines P-C1 and P-C2. When point P is not on a circle, the error in its position is approximately proportional to the area bounded by the nearest two blue and nearest two magenta circles. Without redundant measurements, a true-range multilateration system can be no more accurate than the range measurements, but can be significantly less accurate if the measurement geometry is not chosen properly. Accordingly, some applications place restrictions on the location of point P. For a 2-D Cartesian (trilateration) situation, these restrictions take one of two equivalent forms:
FAA is twice the minimum possible value, or 2.828, which limits the maximum usage range (which occurs along the baseline bisector) to 1.866 times the baseline length. (The plane containing two DME ground stations and an aircraft is not strictly horizontal, but usually is nearly so.) Similarly, surveyors select point P in Fig. 1 so that C1-C2-P roughly form an equilateral triangle (where HDOP = 1.633). Errors in trilateration surveys are discussed in several documents. Generally, emphasis is placed on the effects of range measurement errors, rather than on the effects of algorithm numerical errors. Applications. Advantages and disadvantages for vehicle navigation and surveillance. Navigation and surveillance systems typically involve vehicles and require that a government entity or other organization deploy multiple stations that employ a form of radio technology (i.e., utilize electromagnetic waves). The advantages and disadvantages of employing true-range multilateration for such a system are shown in the following table. True-range multilateration is often contrasted with (pseudo range) multilateration, as both require a form of user ranges to multiple stations. Complexity and cost of user equipage are likely the most important factor in limiting use of true-range multilateration for vehicle navigation and surveillance. Some uses are not the original purpose for system deployment – e.g., DME/DME aircraft navigation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": "(x,y)" }, { "math_id": 2, "text": "r_1" }, { "math_id": 3, "text": "r_2" }, { "math_id": 4, "text": " \\begin{align}\nr_1^2 & = x^2 + y^2 \\\\[4pt]\nr_2^2 & = (U-x)^2 + y^2\n\\end{align} " }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "(x,y,z)" }, { "math_id": 7, "text": " \\begin{align}\nr_1^2 & = x^2 + y^2 + z^2 \\\\[4pt]\nr_2^2 & = (x-U)^2 + y^2 + z^2 \\\\[4pt]\nr_3^2 & = (x-V_x)^2 + (y-V_y)^2 + z^2\n\\end{align} " }, { "math_id": 8, "text": "x_1^\\prime = - \\tfrac{1}{2} U, y_1^\\prime = 0" }, { "math_id": 9, "text": "x_2^\\prime = \\tfrac{1}{2} U, y_2^\\prime = 0" }, { "math_id": 10, "text": " \\begin{align}\nx^\\prime & = \\frac { (r_1^\\prime + r_2^\\prime)(r_1^\\prime - r_2^\\prime) } {2 U} \\\\[4pt]\ny^\\prime & = \\pm \\frac { \\sqrt{ (r_1^\\prime + r_2^\\prime)^2 - U^2 } \\sqrt{ U^2 - (r_1^\\prime - r_2^\\prime)^2 } } {2 U}\n\\end{align} " }, { "math_id": 11, "text": "r_1^\\prime" }, { "math_id": 12, "text": "r_2^\\prime" }, { "math_id": 13, "text": "x^\\prime" }, { "math_id": 14, "text": "y^\\prime" }, { "math_id": 15, "text": "r_1^\\prime + r_2^\\prime" }, { "math_id": 16, "text": "r_1^\\prime - r_2^\\prime" }, { "math_id": 17, "text": "r_1^\\prime \\pm r_2^\\prime" }, { "math_id": 18, "text": "\\sqrt{2} \\approx 1.414" } ]
https://en.wikipedia.org/wiki?curid=702847
70285102
Relative wind stress
A summary of wind stress. A very important concept for large scale ocean circulation models. Relative wind stress is a shear stress that is produced by wind blowing over the surface of the ocean, or another large body of water. Relative wind stress is related to wind stress but takes the difference between the surface ocean current velocity and wind velocity into account. The units are Newton per meter squared formula_0 or Pascal formula_1. Wind stress over the ocean is important as it is a major source of kinetic energy input to the ocean which in turn drives large scale ocean circulation. The use of relative wind stress instead of wind stress, where the ocean current is assumed to be stationary, reduces the stress felt over the ocean in models. This leads to a decrease in the calculation of power input into the ocean of 20–35% and thus, results in a different simulation of the large scale ocean circulation. Mathematical formulation. The wind stress formula_2 acting on the ocean surface is usually parameterized using the turbulent drag formula formula_3. where formula_4 is the turbulent drag coefficient (usually determined empirically), formula_5 is the air density, and formula_6 is the wind velocity vector, usually taken at 10m above sea level. This parameterization is commonly referred to as resting ocean approximation. From now on we will refer to wind stress in resting ocean approximation as simply resting ocean wind stress. On the other hand, relative wind stress formula_7 makes use of the velocity of the surface wind relative to the velocity at the ocean surface formula_8, as follows, formula_9. where formula_8 is the surface ocean velocity and thus, the terms with formula_10 represent the wind velocity relative to the surface ocean velocity. Therefore, the difference between wind stress and relative wind stress is that relative wind stress takes into account the relative motion of the wind with respect to the surface ocean current. Work done by the wind on the ocean. The work wind does on the ocean can be computed by formula_11 where formula_12 is the chosen parameterization for the wind stress. Thus, in resting ocean approximation, the work done on the ocean by the wind is formula_13. Furthermore, if the relative wind stress parameterization is used, the work done on the ocean is given by formula_14 Then, assuming formula_15 is the same in both situations, the difference between work done by resting ocean wind stress and relative wind stress is given by formula_16. Analysing this expression, we first see that the term formula_17 is always positive (since formula_18 and all the other terms are positive). Next, for the term formula_19, we have: Therefore, it is always the case that formula_24, meaning the calculation of the work done is always larger when using the resting ocean wind stress. This overestimate is referred to in the literature as a "positive bias". Note that this may not be the case if the formula_15 used in the calculation of formula_25 is different from the formula_15 used in the calculations of formula_26 ("See section: Ocean currents as output of ocean models"). Wind mechanical damping effect. The mathematical explanation for the positive bias in the calculation of work using the resting ocean wind stress can also be interpreted physically through the mechanical damping effect. As seen in Figure 2, when the wind velocity and ocean current velocity are in the same direction, the relative wind stress is smaller than the resting ocean wind stress. 
In other words, less positive work is done when the relative wind stress is used. When the wind and the ocean velocities are in opposite directions, then the relative wind stress does more negative work than the resting ocean wind stress. Consequently, in both scenarios less work is being done on the ocean when the relative wind stress is used for the calculation. This physical interpretation can also be adapted to a scenario where there is an ocean eddy. As illustrated on the top part of Figure 3, in the eddy situation, the relative wind stress is smaller when the wind and ocean velocities are aligned, a similar situation to the top part of Figure 2. At the bottom part of Figure 3 we have a situation analogous to the bottom part of Figure 2, where more negative work is being performed on the system than in the resting ocean case. Therefore, at the top of the eddy less energy is being put in and at the bottom more energy is being taken out, which means the eddy is being damped more in the relative wind case. The two situations depicted in Figures 2 and 3 are the physical reason why there is a positive bias when estimating the power (work per unit time) input to the ocean when using the resting ocean stress rather than the relative wind stress. Impact on models for large-scale ocean circulation. For the computation of surface currents, a general circulation model is forced with surface winds. A study by Pacanowski (1987) shows that including ocean current velocity through relative wind stress in an Atlantic circulation model reduces the surface currents by 30%. This decrease in surface current can impact sea surface temperature and upwelling along the equator. However, the greatest impact of including ocean currents in the air-sea stress is in the calculation of power input to the general circulation, with the mechanism as described above. An additional effect of computing with relative wind stress instead of resting ocean wind stress is a lower Residual Meridional Overturning Circulation in models. Power Input. Figure 4 shows the difference between relative wind stress and resting ocean wind stress. Data for relative wind stress is obtained from scatterometers. These accurately represent the relative wind stress as they measure backscatter from small-scale structures on the ocean surface, which respond to conditions at the air-sea interface and not to the wind speed itself. Overestimations of power input into the ocean in models have been identified when using wind stress calculated from zonal mean wind instead of relative wind stress, ranging between 20 and 35%. In regions where wind speeds are relatively low and current speeds relatively high this effect is the greatest. An example is the tropical Pacific Ocean where trade winds blow at 5–9 m/s and the ocean current velocities can exceed 1 m/s. In this region, depending on whether it is in an El Niño or La Niña state, the wind stress difference (resting ocean wind stress minus relative wind stress) can vary between negative and positive, respectively.
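To make the size of this bias tangible, the Python sketch below evaluates the resting-ocean and relative formulations for a single assumed wind and surface-current pair; the drag coefficient, air density and velocities are illustrative choices, and the resulting single-point percentage is not the global estimate quoted above.

```python
import numpy as np

C_d   = 1.2e-3   # assumed turbulent drag coefficient [-]
rho_a = 1.22     # air density [kg/m^3]

u_a = np.array([7.0, 0.0])   # assumed 10 m wind velocity [m/s], trade-wind-like
u_o = np.array([1.0, 0.0])   # assumed surface current, here aligned with the wind [m/s]

# Resting-ocean wind stress and relative wind stress
tau_rest = C_d * rho_a * np.linalg.norm(u_a) * u_a
tau_rel  = C_d * rho_a * np.linalg.norm(u_a - u_o) * (u_a - u_o)

# Work done per unit time (power input per unit area) on the ocean surface
P0 = np.dot(tau_rest, u_o)   # resting-ocean approximation
P1 = np.dot(tau_rel,  u_o)   # relative wind stress

print(f"tau resting ocean = {np.linalg.norm(tau_rest):.4f} N/m^2")
print(f"tau relative      = {np.linalg.norm(tau_rel):.4f} N/m^2")
print(f"P0 = {P0:.4f} W/m^2, P1 = {P1:.4f} W/m^2, overestimate = {100 * (P0 - P1) / P0:.1f} %")
```

For wind and current in the same direction the relative stress is the smaller of the two, so P0 exceeds P1, which is the damping effect described above.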
Wind stress is taken into account through the formulation of the RMOC, which is the sum of the Eulerian mean MOC formula_27 and eddy-induced bolus overturning formula_28. The Eulerian mean MOC depends on the zonal winds, which drive Ekman transport in the meridional direction. The eddy-induced bolus overturning, which is generated by eddies, acts to restore sloping isopycnals to the horizontal. The formulation of the RMOC is given by: formula_29 with formula_30 being the zonal mean wind stress, formula_31 the reference density, formula_32 the Coriolis parameter (negative in Southern Hemisphere), formula_33 the quasi-Stokes eddy diffusivity field, equal to formula_34, the product of the eddy length and velocity scales, and formula_35 the slope of the isopycnals. Inserting a lower wind stress, by using relative wind stress instead of resting ocean wind stress, directly leads to lower residual overturning, by reducing the Eulerian mean MOC (formula_27). Furthermore, it affects the eddy-induced bolus overturning (formula_28) by damping eddies, which results in reduced eddy length and velocity scales (formula_36 and formula_37). Together, these effects lead to a lower formula_38. Ocean currents as output of ocean models. As briefly mentioned in "Section: Impact on Models for large-scale Ocean Circulation", the surface currents can be calculated by forcing a general circulation model with surface winds. The case of a model which is also forced by relative wind stress can be visualized in Figure 5. Firstly, satellite data are used to provide the 10 m wind velocity for the calculation of the relative wind stress. However, if the parameterization for relative wind stress is used, this will result in a coupled problem. The ocean model requires the relative wind stress formula_39 to output the ocean current velocity, on which the calculation of formula_39 in turn relies. This coupled system needs to be formulated as an inverse problem. Another consequence is that, depending on the parameterization used for the wind stress, a different stress field will be input to the ocean model and, consequently, a different value of formula_15 will be output by the ocean model. Therefore, if a different surface current field formula_15 is used for the calculations of formula_40 and formula_41, then it could be that formula_42. In other words, there may be a negative bias when calculating the work done on the ocean using the resting ocean approximation. On the global scale, however, the literature has found an overestimation rather than an underestimation, as previously mentioned. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[Nm^{-2}]" }, { "math_id": 1, "text": "[Pa]" }, { "math_id": 2, "text": "(\\tau)" }, { "math_id": 3, "text": " \\quad \\tau = C_d \\rho_{a} |\\vec{u}_{a}| \\vec{u}_{a} " }, { "math_id": 4, "text": "C_d" }, { "math_id": 5, "text": "\\rho_{a}" }, { "math_id": 6, "text": "\\vec{u}_{a}" }, { "math_id": 7, "text": "(\\tau_{rel})" }, { "math_id": 8, "text": "\\vec{u}_{o}" }, { "math_id": 9, "text": " \\quad \\tau_{rel} = C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| (\\vec{u}_{a}-\\vec{u}_{o}) " }, { "math_id": 10, "text": "(\\vec{u}_a - \\vec{u}_o)" }, { "math_id": 11, "text": " \\qquad P= \\tau \\cdot \\vec{u}_{o} " }, { "math_id": 12, "text": "\\tau" }, { "math_id": 13, "text": " \\begin{align}\n\\qquad P_0 &= \\tau \\cdot \\vec{u}_{o} \\\\\n &= C_d \\rho_{a} |\\vec{u}_{a}| \\vec{u}_{a} \\cdot \\vec{u}_{o}\n\\end{align} " }, { "math_id": 14, "text": " \\begin{align}\n\\qquad P_1 &= \\tau_{rel} \\cdot \\vec{u}_{o} \\\\\n &= C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| (\\vec{u}_{a}-\\vec{u}_{o}) \\cdot \\vec{u}_{o} \\\\ \n &= C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| \\vec{u}_{a} \\cdot \\vec{u}_{o} - C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| \\vec{u}_{o} \\cdot \\vec{u}_{o} \n\\end{align}" }, { "math_id": 15, "text": "\\vec{u}_o" }, { "math_id": 16, "text": " \\qquad P_0 - P_1 = C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| \\vec{u}_{o} \\cdot \\vec{u}_{o} - C_d \\rho_{a} (|\\vec{u}_{a}-\\vec{u}_{o}| - |\\vec{u}_a|) \\vec{u}_{a} \\cdot \\vec{u}_{o} " }, { "math_id": 17, "text": " C_d \\rho_{a} |\\vec{u}_{a}-\\vec{u}_{o}| \\vec{u}_{o} \\cdot \\vec{u}_{o} " }, { "math_id": 18, "text": " \\vec{u}_o \\cdot \\vec{u}_o = |\\vec{u}_o|^2 > 0 " }, { "math_id": 19, "text": " - C_d \\rho_{a} (|\\vec{u}_{a}-\\vec{u}_{o}| - |\\vec{u}_a|) \\vec{u}_{a} \\cdot \\vec{u}_{o} " }, { "math_id": 20, "text": "\\vec{u}_{a} \\cdot \\vec{u}_{o} < 0 " }, { "math_id": 21, "text": "|\\vec{u}_{a}-\\vec{u}_{o}| - |\\vec{u}_a| > 0" }, { "math_id": 22, "text": "\\vec{u}_{a} \\cdot \\vec{u}_o > 0 " }, { "math_id": 23, "text": "|\\vec{u}_{a}-\\vec{u}_{o}| - |\\vec{u}_a| < 0" }, { "math_id": 24, "text": " P_0 - P_1 > 0 " }, { "math_id": 25, "text": " P_0 " }, { "math_id": 26, "text": " P_1 " }, { "math_id": 27, "text": "\\bar{\\Psi}" }, { "math_id": 28, "text": "\\Psi^*" }, { "math_id": 29, "text": " \\begin{align}\n\\Psi_{res}= \\bar{\\Psi} + \\Psi^* = \\frac{-\\bar{\\tau}_x }{\\rho_0 f} + Ks\n\\end{align} " }, { "math_id": 30, "text": "\\bar{\\tau}_x" }, { "math_id": 31, "text": "\\rho_0" }, { "math_id": 32, "text": "f" }, { "math_id": 33, "text": "K" }, { "math_id": 34, "text": "L_{eddy} \\cdot U_{eddy}" }, { "math_id": 35, "text": "s" }, { "math_id": 36, "text": "L_{eddy}" }, { "math_id": 37, "text": "U_{eddy}" }, { "math_id": 38, "text": "\\Psi_{res}" }, { "math_id": 39, "text": "\\tau_{rel}" }, { "math_id": 40, "text": "P_0" }, { "math_id": 41, "text": "P_1" }, { "math_id": 42, "text": "P_0 - P_1 < 0 " } ]
https://en.wikipedia.org/wiki?curid=70285102
70293602
Judges 7
Book of Judges, chapter 7 Judges 7 is the seventh chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of judge Gideon, belonging to a section comprising Judges 6 to 9 and a bigger section of Judges 6:1 to 16:31. Text. This chapter was originally written in the Hebrew language. It is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Chapters 6 to 9 record the Gideon/Abimelech Cycle, which has two major parts: The Abimelech account is really a sequel of the Gideon account, resolving a number of complications originating in the Gideon narrative. In this narrative, for the first time Israel's appeal to Yahweh was met with a stern rebuke rather than immediate deliverance, and the whole cycle addresses the issue of infidelity and religious deterioration. The Gideon Narrative (6:1–8:32) consists of five sections along concentric lines — thematic parallels exist between the first (A) and fifth (A') sections as well as between the second (B) and fourth (B') sections, whereas the third section (C) stands alone — forming a symmetrical pattern as follows: A. Prologue to Gideon (6:1–10) B. God's plan of deliverance through the call of Gideon—the story of two altars (6:11–32) B1. The first altar—call and commissioning of Gideon (6:11–24) B2. The second altar—the charge to clean house (6:25–32) C. Gideon's personal faith struggle (6:33–7:18) a. The Spirit-endowed Gideon mobilizes 4 tribes against the Midianites, though lacking confidence in God's promise (6:33–35) b.
Gideon seeks a sign from God with two fleecings to confirm the promise that Yahweh will give Midian into his hand (6:36-40) c. With the fearful Israelites having departed, God directs Gideon to go down to the water for the further reduction of his force (7:1–8) c'. With fear still in Gideon himself, God directs Gideon to go down to the enemy camp to overhear the enemy (7:9–11) b'. God provides a sign to Gideon with the dream of the Midianite and its interpretation to confirm the promise that Yahweh will give Midian into his hand (7:12–14) a'. The worshiping Gideon mobilizes his force of 300 for a surprise attack against the Midianites, fully confident in God's promise (7:15–18) B'. God's deliverance from the Midianites—the story of two battles (7:19–8:21) B1'. The first battle (Cisjordan) (7:19–8:3) B2'. The second battle (Transjordan) (8:4–21) A'. Epilogue to Gideon (8:22–32) Gideon's army of three hundred (7:1–18). Following Deuteronomy 20:5–7, God ordered the Israelites to allow the fearful to return home (verse 2). This battle against the Midianites was not proof of Israelite prowess but of God's glory, so the fighting men did not have to be numerous. God used a test of the mode of drinking to reduce the force to a mere 300 men (verse 8), who were the 'lappers' (cf. 2 Chronicles 25:7-8 and the humbled stance of Israelite kings in the face of war at 2 Chronicles 14:9–15; 12:6; 20:12; 16:8). During Gideon's reconnaissance mission before the battle (a common biblical war motif, cf. Numbers 13; Joshua 2), God offered the 'always humble and hesitant hero' Gideon a positive sign before the battle: through the dream of the enemy which had divinatory significance (cf. Joseph's dreams and his interpretations of other dreams in Genesis 37:5–7; 40:8–22; 41:1–36). A section in Judges 6:36–40 about 'a pair of fleecings' complements the section in 7:12–14 (sections C.b. and C.b'. in the structure) which recounts two Midianites (one telling the dream, one interpreting) to encourage Gideon into action. "Then Jerubbaal, who is Gideon, and all the people that were with him, rose up early, and pitched beside the well of Harod: so that the host of the Midianites were on the north side of them, by the hill of Moreh, in the valley." Verse 1. The repetition of the Midianites' location in verses 1 and 8 marks off the boundaries of one section of this passage. Gideon defeats Midian (7:19–25). The detailed instructions before the battle and the mentioned instruments of war recall the battle of Jericho (Joshua 6), including the shouting, the trumpets, the torches, and the breaking jars, which led to the enemy's rout. Then as judge, Gideon called up tribe members of the Israelite confederation to pursue the Midianites (verse 23; cf. Judges 5:14–18). For the final operation, Gideon called up the tribe of Ephraim, whose army captured and beheaded the Midianite commanders Oreb and Zeeb (verses 24–25). "And they captured two princes of the Midianites, Oreb and Zeeb. They killed Oreb at the rock of Oreb, and Zeeb they killed at the winepress of Zeeb. They pursued Midian and brought the heads of Oreb and Zeeb to Gideon on the other side of the Jordan." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70293602
70293628
Judges 8
Book of Judges, chapter 8 Judges 8 is the eighth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records the activities of judge Gideon, belonging to a section comprising Judges 6 to 9 and a bigger section of Judges 6:1 to 16:31. Text. This chapter was originally written in the Hebrew language. It is divided into 35 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q6 (1QJudg; < 68 BCE) with extant verse 1. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Chapters 6 to 9 record the Gideon/Abimelech Cycle, which has two major parts: The Abimelech account is really a sequel of the Gideon account, resolving a number of complications that originated in the Gideon narrative. In this narrative, for the first time Israel's appeal to Yahweh was met with a stern rebuke rather than immediate deliverance, and the whole cycle addresses the issue of infidelity and religious deterioration. The Gideon Narrative (Judges 6:1–8:32) consists of five sections along concentric lines — thematic parallels exist between the first (A) and fifth (A') sections as well as between the second (B) and fourth (B') sections, whereas the third section (C) stands alone — forming a symmetrical pattern as follows: A. Prologue to Gideon (6:1–10) B. God's plan of deliverance through the call of Gideon—the story of two altars (6:11–32) B1. The first altar—call and commissioning of Gideon (6:11–24) B2. The second altar—the charge to clean house (6:25–32) C. Gideon's personal faith struggle (6:33–7:18) a.
The Spirit-endowed Gideon mobilizes 4 tribes against the Midianites, though lacking confidence in God's promise (6:33–35) b. Gideon seeks a sign from God with two fleecings to confirm the promise that Yahweh will give Midian into his hand (6:36-40) c. With the fearful Israelites having departed, God directs Gideon to go down to the water for the further reduction of his force (7:1–8) c'. With fear still in Gideon himself, God directs Gideon to go down to the enemy camp to overhear the enemy (7:9–11) b'. God provides a sign to Gideon with the dream of a Midianite and its interpretation to confirm the promise that Yahweh will give Midian into his hand (7:12–14) a'. The worshiping Gideon mobilizes his force of 300 for a surprise attack against the Midianites, fully confident in God's promise (7:15–18) B'. God's deliverance from the Midianites—the story of two battles (7:19–8:21) B1'. The first battle (Cisjordan) (7:19–8:3) B2'. The second battle (Transjordan) (8:4–21) A'. Epilogue to Gideon (8:22–32) The Abimelech Narrative (Judges 8:33–9:5), as the sequel (and conclusion) to the Gideon Narrative (6:1–8:32), contains a prologue (8:33–35), followed by two parts: Each of these two parts has a threefold division with interlinks between the divisions, so it displays the following structure: Prologue (8:33–35) Part 1: Abimelech's Rise (9:1–24) A. Abimelech's Treachery Against the House of Jerub-Baal (9:1–6) B. Jotham's Four-Part Plant Fable and Conditional Curse (9:7–21) a. The Fable (9:7–15) b. The Curse (9:16–21) C. The Narrator's First Assertion (9:22–24) Part 2: Abimelech's Demise (9:25–57) A. Shechem's Two Acts of Treachery Against Abimelech (9:25–41) B. The Fable's Fulfillment: Abimelech's Three Acts of Repression (9:42–55) a. First Act of Repression (9:42–45) b. Second Act of Repression (9:46–49) c. Third Act of Repression (9:50–55) C The Narrator's Second Assertion (9:56–57) Gideon appeases the Ephraimites (8:1–3). Verses 1–3 in this chapter should be read as one section with (and serve as an epilogue to) 7:19–25. The confrontation with the Ephraimites was a dangerous moment for Gideon, because the Ephraimites were not included in the initial call-up but, once called, they were able to capture and kill two Midianite leaders (Oreb and Zeeb), and it seemed to reflect a rivalry between the tribes of Ephraim and Manasseh, the two leading northern Israelite tribes. Gideon's successful diplomatic way of handling the provocations by the Ephraimites contrasts with Jephthah's lack of diplomacy in Judges 12:1–6. Gideon used a double metaphor from the motif of 'winepress': "gleanings" ('what is gathered after harvest'), which are generally more than the "vintage" ('the grape harvest itself'), to placate the Ephraimites: the capture and execution of the enemy leaders was more glorious than the early rout by Gideon. Gideon defeats Zebah and Zalmunna (8:4–21). Gideon's interactions with the people of Succoth and Penuel show similarities to David's interactions with Nabal, the first husband of Abigail (1 Samuel 25), and Ahimelech, the priest of Nob (1 Samuel 21), in that a popular hero asks for logistical support for his fighting men. As in the case of David and Nabal, Gideon's requests were denied (even accompanied by taunts; verses 6, 8) and threats ensued. Gideon did succeed in capturing the Midianite kings Zebah and Zalmunna, and then he made good his threat to punish those cities (verses 10–17).
Verses 13–14 are often cited as proof of Israelite literacy in that period, since an ordinary young man from Succoth was able to write down the names of the officers in his town. Verses 18–21 show Gideon's motivation for pursuing the two Midianite kings, that is, a personal vendetta for the killing of Gideon's brothers by the Midianites. Warriors expect to face their equals in battle (cf. Goliath's disdain for the lad David in 1 Samuel 17:42–43; also 2 Samuel 2:20–23), so when the inexperienced son of Gideon was not able to show his courage, the kings, quoting a proverb, requested that Gideon himself, as the leader, kill them, as an appropriate death for a king. " Then he said to the men of Succoth, "Please give loaves of bread to the people who follow me, for they are exhausted, and I am pursuing Zebah and Zalmunna, kings of Midian."" "Then he went up from there to Penuel and spoke to them in the same way. And the men of Penuel answered him as the men of Succoth had answered." "And he said to Jether his firstborn, "Rise, kill them!" But the youth would not draw his sword; for he was afraid, because he was still a youth." Verse 20. The introduction of Gideon's son shortly followed the mention of kingship – that the enemies saw Gideon's brothers as "sons of the king" (Hebrew: "ha-melekh") – and would be followed by the offer from the Israelites to Gideon "and his son and his grandson" to be their king (verse 22). The hesitancy of Jether, Gideon's firstborn son, to kill two "real" foreign kings would contrast with the determination of Abimelech, Gideon's last-mentioned son, to kill all his brothers in the next episode. Gideon rejects the offer of kingship (8:22–28). The Gideon Narrative formally ends at Judges 8:28 with the statement that Israel's enemies were subdued and the land had rest for 40 years. Gideon wisely rejected the hereditary kingship offered by the people of Israel (cf. 1 Samuel 8) with the theologically correct answer (verse 23). However, Gideon did not stop there: as recounted in verses 24–27, he proceeded to request gold from the people, and with it he made an ephod which would become a local cultic object (just like the golden calf episode in Exodus 32), and this tarnishes the positive assessment of Gideon. "But Gideon said to them, "I will not rule over you, nor shall my son rule over you; the LORD shall rule over you."" Transition from Gideon to Abimelech (8:29–35). Verses 29–32 serve as a transitional paragraph to introduce Abimelech's humble origins (verse 31; cf. 9:1), pointing to a distinction between him as "one" against the "seventy" previously mentioned sons of Gideon. Verses 33–35 resume the conventionalized pattern of the judges: after the death of a God-fearing leader, Israel wandered away from the covenant with YHWH, worshipping Canaanite deities, and abandoning loyalty to YHWH and the house of Gideon. "And his concubine who was in Shechem also bore him a son, whose name he called Abimelech." Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70293628
70293630
Judges 9
Book of Judges, chapter 9 Judges 9 is the ninth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records the activities of judge Gideon's son, Abimelech, belonging to a section comprising Judges 6 to 9 and a bigger section of Judges 6:1 to 16:31. Text. This chapter was originally written in the Hebrew language. It is divided into 57 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q6 (1QJudg; < 68 BCE) with extant verses 1–3, 5–6, 28–31, 40–43, 48–49. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes: Panel One A 3:7 And the children of Israel did evil in the sight of the LORD (KJV) B 3:12 And the children of Israel did evil "again" in the sight of the LORD B 4:1 And the children of Israel did evil "again" in the sight of the LORD Panel Two A 6:1 And the children of Israel did evil in the sight of the LORD B 10:6 And the children of Israel did evil "again" in the sight of the LORD B 13:1 And the children of Israel did evil "again" in the sight of the LORD Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above: Panel One 3:8 , "and he sold them," from the root , "makar" 3:12 , "and he strengthened," from the root , "khazaq" 4:2 , "and he sold them," from the root , "makar" Panel Two 6:1 , "and he gave them," from the root , "nathan" 10:7 , "and he sold them," from the root , "makar" 13:1 , "and he gave them," from the root , "nathan" Chapters 6 to 9 record the Gideon/Abimelech Cycle, which has two major parts: The Abimelech Narrative is really a sequel (and conclusion) of the Gideon account, resolving a number of complications that originated in the Gideon Narrative. It contains a prologue (8:33–35), followed by two parts: Each of these two parts has a threefold division with interlinks between the divisions, so it displays the following structure: Prologue (8:33–35) Part 1: Abimelech's Rise (9:1–24) A. Abimelech's Treachery Against the House of Jerub-Baal (9:1–6) B. Jotham's Four-Part Plant Fable and Conditional Curse (9:7–21) a. The Fable (9:7–15) b. The Curse (9:16–21) C. The Narrator's First Assertion (9:22–24) Part 2: Abimelech's Demise (9:25–57) A. Shechem's Two Acts of Treachery Against Abimelech (9:25–41) B. The Fable's Fulfillment: Abimelech's Three Acts of Repression (9:42–55) a. First Act of Repression (9:42–45) b. Second Act of Repression (9:46–49) c.
Third Act of Repression (9:50–55) C The Narrator's Second Assertion (9:56–57) The rise of Abimelech (9:1–24). After Gideon's death, his son by a concubine (a secondary wife), Abimelech, appealed to his mother's kin in Shechem for support in his plans to take over political power. Using the money from his kinsmen, Abimelech hired mercenaries (labeled as 'empty' and 'wanton' people; cf. Genesis 49:4) to kill all of Gideon's other sons, a total of 70 of them. Jotham, the youngest of Gideon's seventy sons, survived the slaughter and went to the top of Mount Gerizim to deliver a "mashal" ("parable") to the people about useful trees, which decline rulership as beneath them and allow the useless and prickly bramble to reign over them, with a disastrous ending. Jotham's speech was a righteous complaint of a wronged person that would bring about vengeance through divine intervention, as the subsequent story of Abimelech's decline shows. The parable is still recited on Tu BiShvat in Israel to this day. The bramble or "jujube" was translated from Hebrew ("atad"), that is, "Ziziphus spina-christi" ("Christ's thorn"), which according to Christian tradition was used to make the crown of thorns placed on the head of Jesus Christ during his crucifixion. "And he went to his father's house at Ophrah and killed his brothers the sons of Jerubbaal, seventy men, on one stone. But Jotham the youngest son of Jerubbaal was left, for he hid himself." Verse 5. OpenBible lists 7 possible identifications in modern places. The demise of Abimelech (9:25–57). As predicted by Jotham, the evil coup without 'good faith' was doomed to failure (verses 16–20), and those who were disloyal to Gideon (verses 17–18) would also be disloyal to Abimelech. The divine control of Abimelech's demise is stated as YHWH 'sent an evil spirit' between Abimelech and the Shechemites (cf. 1 Samuel 16:14). The Shechemite chieftains soon transferred their affections to a new strongman while attempting to undermine Abimelech's leadership credentials through the taunts of some drunken louts. Abimelech and his loyalist, Zebul, succeeded in defeating Gaal, the challenger, and then proceeded to take further vengeance on the people of Shechem (verses 42–49). Abimelech continued with his vengeance at Thebez, another fortress city, but this time an unnamed woman threw down an upper millstone (a symbol of the woman's domestic realm) and crushed Abimelech's skull. Abimelech quickly begged his armor-bearer to kill him so it wouldn't be said that a woman actually killed him (cf. 2 Samuel 11:21). Abimelech's death concludes the whole Gideon Narrative. 52"And Abimelech came to the tower and fought against it and drew near to the door of the tower to burn it with fire." 53"And a certain woman threw an upper millstone on Abimelech's head and crushed his skull." Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=70293630
7029586
Brocard points
Special points within a triangle In geometry, Brocard points are special points within a triangle. They are named after Henri Brocard (1845–1922), a French mathematician. Definition. In a triangle △"ABC" with sides a, b, c, where the vertices are labeled A, B, C in counterclockwise order, there is exactly one point P such that the line segments AP, BP, CP form the same angle, ω, with the respective sides c, a, b, namely that formula_0 Point P is called the first Brocard point of the triangle △"ABC", and the angle ω is called the Brocard angle of the triangle. This angle has the property that formula_1 There is also a second Brocard point, Q, in triangle △"ABC" such that line segments AQ, BQ, CQ form equal angles with sides b, c, a respectively. In other words, the equations formula_2 apply. Remarkably, this second Brocard point has the same Brocard angle as the first Brocard point. In other words, angle formula_3 is the same as formula_4 The two Brocard points are closely related to one another; in fact, the difference between the first and the second depends on the order in which the angles of triangle △"ABC" are taken. So for example, the first Brocard point of △"ABC" is the same as the second Brocard point of △"ACB". The two Brocard points of a triangle △"ABC" are isogonal conjugates of each other. Construction. The most elegant construction of the Brocard points goes as follows. In the following example the first Brocard point is presented, but the construction for the second Brocard point is very similar. As in the diagram above, form a circle through points A and B, tangent to edge BC of the triangle (the center of this circle is at the point where the perpendicular bisector of AB meets the line through point B that is perpendicular to BC). Symmetrically, form a circle through points B and C, tangent to edge CA, and a circle through points A and C, tangent to edge AB. These three circles have a common point, the first Brocard point of △"ABC". See also Tangent lines to circles. The three circles just constructed are also designated as epicycles of △"ABC". The second Brocard point is constructed in similar fashion. Trilinears and barycentrics of the first two Brocard points. Homogeneous trilinear coordinates for the first and second Brocard points are: formula_5 Thus their barycentric coordinates are: formula_6 The segment between the first two Brocard points. The Brocard points are an example of a bicentric pair of points, but they are not triangle centers because neither Brocard point is invariant under similarity transformations: reflecting a scalene triangle, a special case of a similarity, turns one Brocard point into the other. However, the unordered pair formed by both points is invariant under similarities. The midpoint of the two Brocard points, called the Brocard midpoint, has trilinear coordinates formula_7 and is a triangle center; it is center X(39) in the Encyclopedia of Triangle Centers. The third Brocard point, given in trilinear coordinates as formula_8 is the Brocard midpoint of the anticomplementary triangle and is also the isotomic conjugate of the symmedian point. It is center X(76) in the Encyclopedia of Triangle Centers. The distance between the first two Brocard points P and Q is always less than or equal to half the radius R of the triangle's circumcircle: formula_9 The segment between the first two Brocard points is perpendicularly bisected at the Brocard midpoint by the line connecting the triangle's circumcenter and its Lemoine point.
Moreover, the circumcenter, the Lemoine point, and the first two Brocard points are concyclic—they all fall on the same circle, of which the segment connecting the circumcenter and the Lemoine point is a diameter. Distance from circumcenter. The Brocard points P and Q are equidistant from the triangle's circumcenter O: formula_10 Similarities and congruences. The pedal triangles of the first and second Brocard points are congruent to each other and similar to the original triangle. If the lines AP, BP, CP, each through one of a triangle's vertices and its first Brocard point, intersect the triangle's circumcircle at points L, M, N, then the triangle △"LMN" is congruent with the original triangle △"ABC". The same is true if the first Brocard point P is replaced by the second Brocard point Q. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
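The formulas above lend themselves to direct numerical verification. The following Python sketch (an illustration added here, not taken from the cited references; the function names and the sample triangle are arbitrary) computes the Brocard angle from the cotangent identity and the first Brocard point from its barycentric coordinates c²a² : a²b² : b²c², then checks that the three angles ∠PAB, ∠PBC, ∠PCA all equal ω.

```python
import numpy as np

def first_brocard_point(A, B, C):
    """First Brocard point P and Brocard angle omega of triangle ABC (labelled counterclockwise)."""
    A, B, C = map(np.asarray, (A, B, C))
    a = np.linalg.norm(B - C)  # side a lies opposite vertex A
    b = np.linalg.norm(C - A)
    c = np.linalg.norm(A - B)
    # cot(omega) = cot(A) + cot(B) + cot(C) = (a^2 + b^2 + c^2) / (4 * area)
    area = 0.5 * abs((B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0]))
    omega = np.arctan(4 * area / (a**2 + b**2 + c**2))
    # barycentric coordinates of the first Brocard point: c^2 a^2 : a^2 b^2 : b^2 c^2
    w = np.array([c**2 * a**2, a**2 * b**2, b**2 * c**2])
    P = (w[0] * A + w[1] * B + w[2] * C) / w.sum()
    return P, omega

def angle_at(V, P, W):
    """Angle at vertex V between the rays V->P and V->W."""
    u, v = P - V, W - V
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
P, omega = first_brocard_point(A, B, C)
# angle PAB = angle PBC = angle PCA = omega (all about 28.6 degrees for this triangle)
print(np.degrees([angle_at(A, P, B), angle_at(B, P, C), angle_at(C, P, A), omega]))
```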
[ { "math_id": 0, "text": "\\angle PAB = \\angle PBC = \\angle PCA =\\omega.\\," }, { "math_id": 1, "text": "\\cot\\omega = \\cot\\!\\bigl(\\angle CAB\\bigr) + \\cot\\!\\bigl(\\angle ABC\\bigr) + \\cot\\!\\bigl(\\angle BCA\\bigr)." }, { "math_id": 2, "text": "\\angle QCB = \\angle QBA = \\angle QAC" }, { "math_id": 3, "text": "\\angle PBC = \\angle PCA = \\angle PAB" }, { "math_id": 4, "text": "\\angle QCB = \\angle QBA = \\angle QAC." }, { "math_id": 5, "text": "\\begin{array}{rccccc}\n P= & \\frac{c}{b} &:& \\frac{a}{c} &:& \\frac{b}{a} \\\\\n Q= & \\frac{b}{c} &:& \\frac{c}{a} &:& \\frac{a}{b}\n\\end{array}" }, { "math_id": 6, "text": "\\begin{array}{rccccc}\n P = & c^2a^2 &:& a^2b^2 &:& b^2c^2 \\\\\n Q = & a^2b^2 &:& b^2c^2 &:& c^2a^2\n\\end{array}" }, { "math_id": 7, "text": "\\sin(A +\\omega ) : \\sin(B+\\omega) : \\sin(C+\\omega)=a(b^2+c^2):b(c^2+a^2):c(a^2+b^2)," }, { "math_id": 8, "text": "\\csc (A-\\omega ) : \\csc(B-\\omega):\\csc(C-\\omega)=a^{-3}:b^{-3}:c^{-3}," }, { "math_id": 9, "text": "\\overline{PQ} = 2R\\sin \\omega \\sqrt{1-4\\sin ^2\\omega} \\le \\frac{R}{2}." }, { "math_id": 10, "text": "\\overline{PO} = \\overline{QO} = R\\sqrt{\\frac{a^4+b^4+c^4}{a^2b^2+b^2c^2+c^2a^2}-1} = R\\sqrt{1-4\\sin^2 \\omega }." } ]
https://en.wikipedia.org/wiki?curid=7029586
7030
Code coverage
Metric for source code testing In software engineering, code coverage, also called test coverage, is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high code coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite. Code coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in "Communications of the ACM", in 1963. Coverage criteria. To measure what percentage of code has been executed by a test suite, one or more "coverage criteria" are used. These are usually defined as rules or requirements, which a test suite must satisfy. Basic coverage criteria. There are a number of coverage criteria, but the main ones are: For example, consider the following C function: int foo (int x, int y) { int z = 0; if ((x > 0) && (y > 0)) { z = x; } return z; } Assume this function is a part of some bigger program and this program was run with some test suite. In programming languages that do not perform short-circuit evaluation, condition coverage does not necessarily imply branch coverage. For example, consider the following Pascal code fragment: if a and b then Condition coverage can be satisfied by two tests: However, this set of tests does not satisfy branch coverage since neither case will meet the "if" condition. Fault injection may be necessary to ensure that all conditions and branches of exception-handling code have adequate coverage during testing. Modified condition/decision coverage. A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken on all possible outcomes at least once. In this context, the decision is a boolean expression comprising conditions and zero or more boolean operators. This definition is not the same as branch coverage; however, the term "decision coverage" is sometimes used as a synonym for it. Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (such as avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently. For example, consider the following code: if (a or b) and c then The condition/decision criteria will be satisfied by the following set of tests: However, the above test set will not satisfy modified condition/decision coverage, since in the first test, the value of 'b' and in the second test the value of 'c' would not influence the output. So, the following test set is needed to satisfy MC/DC: Multiple condition coverage. This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests: Parameter value coverage.
Parameter value coverage (PVC) requires that in a method taking parameters, all the common values for such parameters be considered. The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each possible parameter value may result in a bug. Testing only one of these could result in 100% code coverage as each line is covered, but as only one of seven options is tested, there is only 14.2% PVC. Other coverage criteria. There are further coverage criteria, which are used less often: Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage. For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are up to negotiation between supplier and customer. However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons. Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing". Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch. Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of formula_0 decisions in it can have up to formula_1 paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve the halting problem). Basis path testing is, for instance, a method of achieving complete branch coverage without achieving complete path coverage. Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes. In practice. The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests. In implementing test coverage policies within a software development environment, one must consider the following: Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions.
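The difference between statement coverage and branch coverage for the foo() example above can be made concrete with a small, hand-instrumented sketch. The following Python code is purely illustrative (real coverage tools instrument the program automatically rather than relying on manual bookkeeping like this):

```python
# Hand-instrumented Python port of the foo() example, tracking which statements
# and which branch outcomes a given set of test calls exercises.
executed_statements = set()
branch_outcomes = set()

def foo(x, y):
    executed_statements.add("z = 0"); z = 0
    decision = (x > 0) and (y > 0)
    executed_statements.add("if")
    branch_outcomes.add(decision)          # records the True and/or False outcome
    if decision:
        executed_statements.add("z = x"); z = x
    executed_statements.add("return z")
    return z

def report(label):
    stmt = 100 * len(executed_statements) / 4   # foo() has 4 statements
    branch = 100 * len(branch_outcomes) / 2     # the decision has 2 outcomes
    print(f"{label}: statement coverage {stmt:.0f}%, branch coverage {branch:.0f}%")

foo(1, 1)                     # executes every statement, but only the True branch
report("after foo(1, 1)")     # -> statement 100%, branch 50%
foo(1, 0)                     # adds the False outcome of the decision
report("after foo(1, 1) and foo(1, 0)")   # -> statement 100%, branch 100%
```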
Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage. Generally, test coverage tools incur computation and logging in addition to the actual program thereby slowing down the application, so typically this analysis is not done in production. As one might expect, there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing. There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similar real time sensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code. Most professional software developers use C1 and C2 coverage. C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. Statement coverage would also cover function coverage with entry and exit, loop, path, state flow, control flow and data flow coverage. With these methods, it is possible to achieve nearly 100% code coverage in most software projects. Usage in industry. Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by the Federal Aviation Administration (FAA) is documented in DO-178B and DO-178C. Test coverage is also a requirement in part 6 of the automotive safety standard ISO 26262 "Road Vehicles - Functional Safety". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "2^n" } ]
https://en.wikipedia.org/wiki?curid=7030
70302266
Gamas's theorem
Mathematical Theorem Gamas's theorem is a result in multilinear algebra which states the necessary and sufficient conditions for a tensor symmetrized by an irreducible representation of the symmetric group formula_0 to be zero. It was proven in 1988 by Carlos Gamas. Additional proofs have been given by Pate and Berget. Statement of the theorem. Let formula_1 be a finite-dimensional complex vector space and formula_2 be a partition of formula_3. From the representation theory of the symmetric group formula_0 it is known that the partition formula_2 corresponds to an irreducible representation of formula_0. Let formula_4 be the character of this representation. The tensor formula_5 symmetrized by formula_4 is defined to be formula_6 where formula_7 is the identity element of formula_0. Gamas's theorem states that the above symmetrized tensor is non-zero if and only if it is possible to partition the set of vectors formula_8 into linearly independent sets whose sizes are in bijection with the lengths of the columns of the partition formula_2.
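As a small worked example (added here for illustration; it is not drawn from Gamas's paper), take n = 2, where the two partitions of 2 give the trivial and the sign character of S_2:

```latex
% lambda = (2): trivial character; the symmetrized tensor is nonzero iff
%   v_1 \neq 0 and v_2 \neq 0, i.e. {v_1} and {v_2} are each linearly independent,
%   matching the two columns of length 1 of (2).
% lambda = (1,1): sign character; the symmetrized tensor is nonzero iff
%   {v_1, v_2} is linearly independent, matching the single column of length 2 of (1,1).
\[
\frac{\chi^{\lambda}(e)}{2!}\sum_{\sigma \in S_2}\chi^{\lambda}(\sigma)\,
v_{\sigma(1)}\otimes v_{\sigma(2)}
=
\begin{cases}
\tfrac{1}{2}\bigl(v_1\otimes v_2 + v_2\otimes v_1\bigr), & \lambda=(2),\\[4pt]
\tfrac{1}{2}\bigl(v_1\otimes v_2 - v_2\otimes v_1\bigr), & \lambda=(1,1).
\end{cases}
\]
```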
[ { "math_id": 0, "text": "S_n" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\chi^{\\lambda}" }, { "math_id": 5, "text": "v_1 \\otimes v_2 \\otimes \\dots \\otimes v_n \\in V^{\\otimes n}" }, { "math_id": 6, "text": "\n\\frac{\\chi^{\\lambda}(e)}{n!} \\sum_{\\sigma \\in S_n} \\chi^{\\lambda}(\\sigma) v_{\\sigma(1)} \\otimes v_{\\sigma(2)} \\otimes \\dots \\otimes v_{\\sigma(n)}, \n" }, { "math_id": 7, "text": "e" }, { "math_id": 8, "text": "\\{v_i\\}" } ]
https://en.wikipedia.org/wiki?curid=70302266
70302410
4D scanning transmission electron microscopy
4D scanning transmission electron microscopy (4D STEM) is a subset of scanning transmission electron microscopy (STEM) which utilizes a pixelated electron detector to capture a convergent beam electron diffraction (CBED) pattern at each scan location. This technique captures a two-dimensional reciprocal space image associated with each scan point as the beam rasters across a two-dimensional region in real space, hence the name 4D STEM. Its development was enabled by evolution in STEM detectors and improvements in computational power. The technique has applications in virtual diffraction imaging, phase orientation and strain mapping, and phase contrast analysis, among others. The name 4D STEM is common in the literature; however, it is known by other names: 4D STEM EELS, ND STEM (N- since the number of dimensions could be higher than 4), position resolved diffraction (PRD), spatial resolved diffractometry, momentum-resolved STEM, "nanobeam precision electron diffraction", scanning electron nano diffraction (SEND), nanobeam electron diffraction (NBED), or pixelated STEM. History. The use of diffraction patterns as a function of position dates back to the earliest days of STEM, for instance the early review of John M. Cowley and John C. H. Spence in 1978 or the analysis in 1983 by Laurence D. Marks and David J. Smith of the orientation of different crystalline segments in nanoparticles. Later work includes the analysis of diffraction patterns as a function of probe position in 1995, where Peter Nellist, B.C. McCallum and John Rodenburg attempted electron ptychography analysis of crystalline silicon. There is also the fluctuation electron microscopy (FEM) technique, proposed in 1996 by Treacy and Gibson, which also included quantitative analysis of the differences in images or diffraction patterns taken at different locations on a given sample. The field of 4D STEM remained underdeveloped due to the limited capabilities of detectors available at the time. The earliest work used either Grigson coils to scan the diffraction pattern, or an optical camera pickup from a phosphor screen. Later on CCD detectors became available, but while these are commonly used in transmission electron microscopy (TEM) they had limited data acquisition rates, could not distinguish with high accuracy where on the detector an electron struck, and had low dynamic range, which made them undesirable for use in 4D STEM. In the late 2010s, the development of hybrid pixel array detectors (PAD) with single electron sensitivity, high dynamic range, and fast readout speeds allowed for practical 4D STEM experiments. Operating Principle. While the process of data collection in 4D STEM is identical to that of standard STEM, each technique utilizes different detectors and collects different data. In 4D STEM there is a pixelated electron detector located at the back focal plane which collects the CBED pattern at each scan location. An image of the sample can be constructed from the CBED patterns by selecting an area in reciprocal space and assigning the average intensity of that area in each CBED pattern to the real space pixel the pattern corresponds to. It is also possible for an ADF or HAADF image to be taken concurrently with the CBED pattern collection, depending on where the detector is located on the microscope. An annular dark-field image taken this way may be complementary to a bright-field image constructed from the captured CBED images.
The use of a hollow detector with a hole in the middle can allow for transmitted electrons to be passed to an EELS detector while scanning. This allows for the simultaneous collection of chemical spectra information and structure information. Detectors. In traditional TEM, imaging detectors use phosphorescent scintillators paired with a charge coupled device (CCD) to detect electrons. While these devices have good electron sensitivity, they lack the necessary readout speed and dynamic range necessary for 4D STEM. Additionally, the use of a scintillator can worsen the point spread function (PSF) of the detector due to the electron's interaction with the scintillator resulting in a broadening of the signal. In contrast, traditional annular STEM detectors have the necessary readout speed, but instead of collecting a full CBED pattern the detector integrates the collected intensity over a range of angles into a single data point. The development of pixelated detectors in the 2010s with single electron sensitivity, fast readout speeds, and high dynamic range has enabled 4D STEM as a viable experimental method. 4D STEM detectors are typically built as either a monolithic active pixel sensor (MAPS) or as a hybrid pixel array detector (PAD). Monolithic active pixel sensor (MAPS). A MAPS detector consists of a complementary metal–oxide–semiconductor (CMOS) chip paired with a doped epitaxial surface layer which converts high energy electrons into many lower energy electrons that travel down to the detector. MAPS detectors must be radiation hardened as their direct exposure to high energy electrons makes radiation damage a key concern. Due to its monolithic nature and straightforward design, MAPS detectors can attain high pixel densities on the order of 4000 x 4000. This high pixel density when paired with low electron doses can enable single electron counting for high efficiency imaging. Additionally, MAPS detectors tend to have electron high sensitivities and fast readout speeds, but suffer from limited dynamic range. Pixel array detector (PAD). PAD detectors consist of a photodiode bump bonded to an integrated circuit, where each solder bump represents a single pixel on the detector. These detectors typically have lower pixel densities on the order of 128 x 128 but can achieve much higher dynamic range on the order of 32 bits. These detectors can achieve relatively high readout speeds on the order of 1 ms/pixel but are still lacking compared to their annular detector counterparts in STEM which can achieve readout speeds on the order of 10 μs/pixel. Detector noise performance is often measured by its detective quantum efficiency (DQE) defined as: formula_0 where formula_1 is output signal to noise ratio squared and formula_2 is the input signal to noise ratio squared. Ideally the DQE of a sensor is 1 indicating the sensor generates zero noise. The DQE of MAPS, APS and other direct electron detectors tend to be higher than their CCD camera counterparts. Computational Methods. A major issue in 4D STEM is the large quantity of data collected by the technique. With upwards of 100s of TB of data produced over the course of an hour of scanning, finding pertinent information is challenging and requires advanced computation. Analysis of such large datasets can be quite complex and computational methods to process this data are being developed. Many code repositories for analysis of 4D STEM are currently in development including: HyperSpy, pyXem, LiberTEM, Pycroscopy, and py4DSTEM. 
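To give a sense of the data volumes these packages must handle, the following Python sketch (illustrative only; the scan geometry and the file name "scan.raw" are hypothetical) estimates the size of a single modest 4D STEM scan and memory-maps it so that individual CBED patterns can be read without loading the whole array into RAM:

```python
import numpy as np

# Hypothetical geometry: 512 x 512 scan positions, 128 x 128 detector pixels,
# stored as 32-bit counts (e.g. from a high-dynamic-range hybrid pixel detector).
scan_y, scan_x, k_y, k_x = 512, 512, 128, 128
size_gb = scan_y * scan_x * k_y * k_x * 4 / 1e9
print(f"one scan: about {size_gb:.0f} GB")   # roughly 17 GB

# Memory-map the raw file so single diffraction patterns are read on demand.
data = np.memmap("scan.raw", dtype=np.uint32, mode="r",
                 shape=(scan_y, scan_x, k_y, k_x))
cbed = data[100, 200]                  # one 128 x 128 CBED pattern
mean_pattern = data.mean(axis=(0, 1))  # scan-averaged pattern (streams through the file)
```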
AI driven analysis is possible. However, some methods require databases of information to train on, which currently do not exist. Additionally, lack of metrics for data quality, limited scalability due to poor cross-platform support across different manufacturers, and lack of standardization in analysis and experimental methods bring up questions of comparability across different datasets as well as reproducibility. Selected Applications. 4D STEM has been utilized in a wide array of applications; the most common uses include virtual diffraction imaging, orientation and strain mapping, and phase contrast analysis, which are covered below. The technique has also been applied in: medium range order measurement, higher order Laue zone (HOLZ) channeling contrast imaging, position averaged CBED, fluctuation electron microscopy, biomaterials characterization, and medical fields (microstructure of pharmaceutical materials and orientation mapping of peptide crystals). This list is in no way exhaustive, and as the field is still relatively young, more applications are actively being developed. Virtual Diffraction (Dark Field / Bright Field) Imaging. Virtual diffraction imaging is a method developed to generate real space images from diffraction patterns. This technique has been used in characterizing material structures since the 1990s but more recently has been applied in 4D STEM applications. This technique often works best with scanning electron nano diffraction (SEND), where the probe convergence angle is relatively low to give separated diffraction disks (thus also giving a resolution measured in nm, not Å). A "virtual detector" is not a detector at all but rather a method of data processing which integrates a subset of pixels in diffraction patterns at each raster position to create a bright-field or dark-field image. A region of interest is selected on some representative diffraction pattern, and only those pixels within the aperture are summed to form the image. This virtual aperture can be any size/shape desired and can be created using the 4D dataset gathered from a single scan. This ability to apply different apertures to the same dataset is possible because the whole diffraction pattern is retained at every probe position in the 4D STEM dataset. This eliminates a typical weakness of conventional STEM operation, as STEM bright-field and dark-field detectors are placed at fixed angles and cannot be changed during imaging. With a 4D dataset, bright/dark-field images can be obtained by integrating diffraction intensities from diffracted and transmitted beams respectively. Creating images from these patterns can give nanometer or atomic resolution information (depending on the pixel step size and the range of diffracted angles used to form the image) and is typically used to characterize the structure of nanomaterials. Additionally, these diffraction patterns can be indexed and analyzed using other 4D STEM techniques, such as orientation and phase mapping, or strain mapping. A key advantage of performing virtual diffraction imaging in 4D STEM is the flexibility. Any shape of aperture could be used: a circle (cognate with traditional TEM bright/dark field imaging), a rectangle, an annulus (cognate with STEM ADF/ABF imaging), or any combination of apertures in a more complex pattern.
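A minimal sketch of how such a virtual aperture can be applied to a 4D dataset is shown below (illustrative only; the array layout and names are assumptions rather than the convention of any particular package). A boolean mask over the detector plane plays the role of the aperture, and summing the masked intensities at every scan position yields the virtual image:

```python
import numpy as np

def virtual_image(data, mask):
    """Virtual bright-/dark-field image from a 4D dataset.

    data: 4D array indexed as (scan_y, scan_x, k_y, k_x)
    mask: boolean 2D array over the detector plane (k_y, k_x), acting as the aperture
    """
    return (data * mask).sum(axis=(-2, -1))

def annular_mask(shape, center, r_inner, r_outer):
    """Annular (ADF-like) virtual aperture; a disk is the special case r_inner = 0."""
    ky, kx = np.indices(shape)
    r = np.hypot(ky - center[0], kx - center[1])
    return (r >= r_inner) & (r < r_outer)

# Example with synthetic data (a real experiment would load measured frames here).
rng = np.random.default_rng(0)
data = rng.poisson(5, size=(64, 64, 128, 128)).astype(np.float32)
bf = virtual_image(data, annular_mask((128, 128), (64, 64), 0, 10))    # bright field
adf = virtual_image(data, annular_mask((128, 128), (64, 64), 30, 60))  # annular dark field
print(bf.shape, adf.shape)   # both (64, 64) real-space images
```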
The use of regular grids of apertures is particularly powerful at imaging a crystal with high signal to noise and minimising the effects of bending and has been used by McCartan et al.; this also allowed the imaging of an array of superlattice spots associated with a particular crystal ordering in part of the crystal as a result of chemical segregation. Virtual diffraction imaging has been used to map interfaces, select intensity from selected areas of the diffraction plane to form enhanced dark field images, map positions of nanoscale precipitates, create phase maps of beam sensitive battery cathode materials, and measure degree of crystallinity in metal-organic frameworks (MOFs). Recent work has further extended the possibilities of virtual diffraction imaging, by applying a more digital approach adapted from one developed for orientation and phase mapping, or strain mapping. In these methods, the diffraction spot positions in a 4D dataset are determined for each diffraction pattern and turned into a list, and operations are performed on the list, not on the whole images. For dark field imaging, the centroid positions for the list of diffraction spots can be simply compared against a list of centroid positions for where spots are expected, and intensity is only added where diffraction spot centroids agree with the selected positions. This gives far more selectivity than simply integrating all intensity in an aperture (particularly because it ignores diffuse intensity that does not fall in spots), and consequently much higher contrast in the resulting images, and has recently been submitted to arXiv. Phase Orientation Mapping. Phase orientation mapping is typically done with electron backscatter diffraction in SEM, which can give 2D maps of grain orientation in polycrystalline materials. The technique can also be done in TEM using Kikuchi lines, which is more applicable for thicker samples since formation of Kikuchi lines relies on diffuse scattering being present. Alternatively, in TEM one can utilize precession electron diffraction (PED) to record a large number of diffraction patterns and, through comparison to known patterns, the relative orientation of grains can be determined. 4D STEM can also be used to map orientations, in a technique called Bragg spot imaging. The use of traditional TEM techniques typically results in better resolution than the 4D STEM approach but can fail in regions with high strain as the DPs become too distorted. In Bragg spot imaging, a correlation analysis is first performed to group diffraction patterns (DPs), using a correlation coefficient between 0 (no correlation) and 1 (exact match); the DPs are then grouped by their correlation using a correlation threshold. A correlation image can then be obtained from each group. These are summed and averaged to obtain an overall representative diffraction template from each grouping. Different orientations can be assigned colors, which helps visualize individual grain orientations. With proper tilting and utilizing precession electron diffraction (PED) it is even possible to make 3D tomographic renderings of grain orientation and distribution. Since the technique is computationally intensive, recent efforts have been focused on a machine learning approach to analysis of diffraction patterns. Strain Mapping. TEM can measure local strains and is often used to map strain in samples using convergent beam electron diffraction (CBED).
The basis of this technique is to compare the diffraction pattern of an unstrained region of the sample with that of a strained region to see the changes in the lattice parameter. With STEM, the disc positions diffracted from an area of a specimen can provide spatial strain information. The use of this technique with 4D STEM datasets requires fairly involved calculations. Utilizing SEND, bright and dark field images can be obtained from diffraction patterns by integration of direct and diffracted beams respectively, as discussed previously. During 4D STEM operation the ADF detector can be used to visualize a particular region of interest by collecting electrons scattered to large angles, correlating probe location with diffraction during measurements. There is a tradeoff between resolution and strain information: larger probes average strain measurements over a larger volume, while smaller probe sizes give higher real space resolution. There are ways to combat this issue, such as spacing probes further apart than the resolution limit to increase the field of view. This strain mapping technique has been applied in many crystalline materials and has been extended to semi-crystalline and amorphous materials (such as metallic glasses), since they too exhibit deviations from mean atomic spacing in regions of high strain. Phase Contrast Analysis. Differential phase contrast. The differential phase contrast (DPC) imaging technique can be used in STEM to map the local electromagnetic field in samples by measuring the deflection of the electron beam caused by the field at each scan point. Traditionally the movement of the beam was tracked using segmented annular field detectors which surrounded the beam. DPC with segmented detectors has up to atomic resolution. The use of a pixelated detector in 4D STEM and a computer to track the movement of the "center of mass" of the CBED patterns was found to provide comparable results to those found using segmented detectors. 4D STEM allows phase changes along all directions to be measured without the need to rotate the segmented detector to align with specimen orientation. The ability to measure local polarization in parallel with the local electric field has also been demonstrated with 4D STEM. DPC imaging with 4D STEM is up to 2 orders of magnitude slower than DPC with segmented detectors and requires advanced analysis of large four-dimensional datasets. Ptychography. The overlapping CBED measurements present in a 4D STEM dataset allow for the construction of the complex electron probe and complex sample potential using the ptychography technique. Ptychographic reconstructions with 4D STEM data were shown to provide higher contrast than ADF, BF, ABF, and segmented DPC imaging in STEM. The high signal-to-noise ratio of this technique under 4D STEM makes it attractive for imaging radiation sensitive specimens such as biological specimens. The use of a pixelated detector with a hole in the middle to allow the unscattered electron beam to pass to a spectrometer has been shown to allow ptychographic analysis in conjunction with chemical analysis in 4D STEM. MIDI STEM. This technique, MIDI-STEM (matched illumination and detector interferometry STEM), while less common, is used with ptychography to create higher contrast phase images. The placement of a phase plate with zones of 0 and π/2 phase shift in the probe forming aperture creates a series of concentric rings in the resulting CBED pattern.
The difference in counts between the 0 and π/2 regions allows for direct measurement of local sample phase. The counts in the different regions could be measured via complex standard detector geometries or the use of a pixelated detector in 4D STEM. Pixelated detectors have been shown to utilize this technique with atomic resolution. (MIDI)-STEM produces image contrast information with less high-pass filtering than DPC or ptychography but is less efficient at high spatial frequencies than those techniques. (MIDI)-STEM used in conjunction with ptychography has been shown to be more efficient in providing contrast information than either technique individually. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
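As an illustration of the center-of-mass DPC analysis described above, the calculation reduces to a first moment evaluated on each CBED pattern. A schematic Python version is given below (illustrative only, with an assumed array layout; in practice descan errors and detector calibration also have to be handled):

```python
import numpy as np

def center_of_mass_shift(data):
    """Shift of the CBED center of mass at each scan position, a proxy for beam deflection.

    data: 4D array (scan_y, scan_x, k_y, k_x) of diffraction intensities.
    Returns two maps (shift_ky, shift_kx) in detector pixels, measured relative to the
    center of mass of the scan-averaged pattern.
    """
    ky, kx = np.indices(data.shape[-2:])
    total = data.sum(axis=(-2, -1))
    com_y = (data * ky).sum(axis=(-2, -1)) / total
    com_x = (data * kx).sum(axis=(-2, -1)) / total
    mean_pattern = data.mean(axis=(0, 1))
    ref_y = (mean_pattern * ky).sum() / mean_pattern.sum()
    ref_x = (mean_pattern * kx).sum() / mean_pattern.sum()
    return com_y - ref_y, com_x - ref_x
```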
[ { "math_id": 0, "text": "DQE=\\frac{SNR_o^2}{SNR_i^2}" }, { "math_id": 1, "text": "SNR_o^2" }, { "math_id": 2, "text": "SNR_i^2" } ]
https://en.wikipedia.org/wiki?curid=70302410
70309041
Slab (geometry)
In geometry, a slab is a region between two parallel lines in the Euclidean plane, or between two parallel planes in three-dimensional Euclidean space or between two hyperplanes in higher dimensions. Set definition. A slab can also be defined as a set of points: formula_0 where formula_1 is the normal vector of the planes formula_2 and formula_3. Or, if the slab is centered around the origin: formula_4 where formula_5 is the thickness of the slab. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
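The set definition translates directly into a membership test. A small Python sketch (illustrative, not from the cited references):

```python
import numpy as np

def in_slab(x, n, alpha, beta):
    """True if point x lies in the slab {x : alpha <= n . x <= beta}."""
    return alpha <= np.dot(n, x) <= beta

# The region between the planes z = 0 and z = 2 in R^3:
n = np.array([0.0, 0.0, 1.0])
print(in_slab(np.array([1.0, -4.0, 1.5]), n, 0.0, 2.0))   # True
print(in_slab(np.array([0.0,  0.0, 3.0]), n, 0.0, 2.0))   # False
```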
[ { "math_id": 0, "text": "\\{x \\in \\mathbb{R}^n \\mid \\alpha \\le n \\cdot x \\le \\beta \\}," }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n \\cdot x = \\alpha" }, { "math_id": 3, "text": "n \\cdot x = \\beta" }, { "math_id": 4, "text": "\\{x \\in \\mathbb{R}^n \\mid |n \\cdot x| \\le \\theta / 2 \\}," }, { "math_id": 5, "text": "\\theta = |\\alpha - \\beta|" } ]
https://en.wikipedia.org/wiki?curid=70309041