Columns: id, title, text, formulas, url
12182350
Remez inequality
In mathematics, the Remez inequality, discovered by the Soviet mathematician Evgeny Yakovlevich Remez, gives a bound on the sup norms of certain polynomials, the bound being attained by the Chebyshev polynomials. The inequality. Let "σ" be an arbitrary fixed positive number. Define the class of polynomials π"n"("σ") to be those polynomials "p" of degree "n" for which formula_0 on some set of measure ≥ 2 contained in the closed interval [−1, 1+"σ"]. Then the Remez inequality states that formula_1 where "T""n"("x") is the Chebyshev polynomial of degree "n", and the supremum norm is taken over the interval [−1, 1+"σ"]. Observe that "T""n" is increasing on formula_2, hence formula_3 The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary: If "J" ⊂ R is a finite interval, and "E" ⊂ "J" is an arbitrary measurable set, then for any polynomial "p" of degree "n" the supremum of |"p"| over "J" is bounded by (4 mes "J"/mes "E")"n" times its supremum over "E"; this bound is the inequality (⁎) referred to below. Extensions: Nazarov–Turán lemma. Inequalities similar to (⁎) have been proved for different classes of functions, and are known as Remez-type inequalities. One important example is Nazarov's inequality for exponential sums: Nazarov's inequality. Let formula_4 be an exponential sum (with arbitrary "λ""k" ∈ C), and let "J" ⊂ R be a finite interval, "E" ⊂ "J"—an arbitrary measurable set. Then formula_5 where "C" > 0 is a numerical constant. In the special case when the "λ""k" are pure imaginary and integer, and the subset "E" is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma. This inequality also extends to formula_6 in the following way formula_7 for some "A" > 0 independent of "p", "E", and "n". When formula_8, a similar inequality holds for "p" > 2. For "p" = ∞ there is an extension to multidimensional polynomials. Proof: Applying Nazarov's lemma to formula_9 leads to formula_10 thus formula_11 Now fix a set formula_12 and choose formula_13 such that formula_14, that is, formula_15 Note that this implies: formula_16 formula_17 Now formula_18 which completes the proof. Pólya inequality. One of the corollaries of the Remez inequality is the Pólya inequality, which was proved by George Pólya, and states that the Lebesgue measure of a sub-level set of a polynomial "p" of degree "n" is bounded in terms of the leading coefficient LC("p") as follows: formula_19
[ { "math_id": 0, "text": "|p(x)| \\le 1" }, { "math_id": 1, "text": "\\sup_{p \\in \\pi_n(\\sigma)} \\left\\|p\\right\\|_\\infty = \\left\\|T_n\\right\\|_\\infty" }, { "math_id": 2, "text": "[1, +\\infty]" }, { "math_id": 3, "text": " \\|T_n\\|_\\infty = T_n(1+\\sigma). " }, { "math_id": 4, "text": "p(x) = \\sum_{k=1}^n a_k e^{\\lambda_k x} " }, { "math_id": 5, "text": "\\max_{x \\in J} |p(x)| \\leq e^{\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes}J} \\left( \\frac{C \\,\\, \\operatorname{mes}J}{\\operatorname{mes}E} \\right)^{n-1} \\sup_{x \\in E} |p(x)|~, " }, { "math_id": 6, "text": "L^p(\\mathbb{T}),\\ 0\\leq p\\leq2" }, { "math_id": 7, "text": " \\|p\\|_{L^p(\\mathbb{T})} \\leq e^{A(n-1) \\operatorname{mes}(\\mathbb{T} \\setminus E)} \\|p\\|_{L^p(E)} " }, { "math_id": 8, "text": "\\operatorname{mes} E <1-\\frac{\\log n}{n}" }, { "math_id": 9, "text": "E = E_\\lambda = \\{x : |p(x)|\\leq\\lambda\\},\\ \\lambda>0" }, { "math_id": 10, "text": "\\max_{x \\in J} |p(x)| \\leq e^{\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes} J} \\left( \\frac{C \\,\\, \\operatorname{mes} J}{\\operatorname{mes} E_\\lambda} \\right)^{n-1} \\sup_{x \\in E_\\lambda} |p(x)| \\leq e^{\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes} J} \\left( \\frac{C \\,\\, \\operatorname{mes} J}{\\operatorname{mes} E_\\lambda} \\right)^{n-1} \\lambda " }, { "math_id": 11, "text": "\\operatorname{mes} E_\\lambda \\leq C \\,\\, \\operatorname{mes} J\\left(\\frac{\\lambda e^{\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes}J}}{\\max_{x \\in J} |p(x)|} \\right)^{\\frac{1}{n-1}}" }, { "math_id": 12, "text": "E" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": "\\operatorname{mes} E_\\lambda\\leq\\tfrac{1}{2}\\operatorname{mes}E" }, { "math_id": 15, "text": "\\lambda = \\left(\\frac{\\operatorname{mes} E}{2C \\operatorname{mes}J}\\right)^{n-1}e^{-\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes}J}\\max_{x \\in J} |p(x)|" }, { "math_id": 16, "text": "\\operatorname{mes}E\\setminus E_{\\lambda}\\ge \\tfrac{1}{2} \\operatorname{mes}E ." }, { "math_id": 17, "text": "\\forall x \\in E \\setminus E_{\\lambda} : |p(x)| > \\lambda ." }, { "math_id": 18, "text": "\\begin{align}\n\\int_{x\\in E}|p(x)|^p\\,\\mbox{d}x &\\geq \\int_{x\\in E \\setminus E_\\lambda}|p(x)|^p\\,\\mbox{d}x \\\\[6pt]\n&\\geq \\lambda^p\\frac{1}{2}\\operatorname{mes}E \\\\[6pt]\n&= \\frac{1}{2}\\operatorname{mes}E \\left(\\frac{\\operatorname{mes}E}{2C \\operatorname{mes}J}\\right)^{p(n-1)}e^{-p\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes}J}\\max_{x \\in J} |p(x)|^p \\\\[6pt]\n&\\geq \\frac{1}{2} \\frac{\\operatorname{mes}E}{\\operatorname{mes}J}\\left(\\frac{\\operatorname{mes} E}{2C \\operatorname{mes}J}\\right)^{p(n-1)}e^{-p\\max_k |\\Re \\lambda_k| \\, \\operatorname{mes}J}\\int_{x \\in J} |p(x)|^p\\,\\mbox{d}x,\n\\end{align}" }, { "math_id": 19, "text": "\\operatorname{mes} \\left\\{ x \\in \\R : \\left|P(x)\\right| \\leq a \\right\\} \\leq 4 \\left(\\frac{a}{2 \\mathrm{LC}(p)}\\right)^{1/n}, \\quad a>0~." } ]
https://en.wikipedia.org/wiki?curid=12182350
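A brief numerical sketch of the statement above (not part of the article): the Chebyshev polynomial "T""n" is bounded by 1 on [−1, 1], a set of measure 2 inside [−1, 1+"σ"], and its sup norm over the whole interval is "T""n"(1+"σ"), the extremal value in the Remez inequality. The degree and "σ" below are arbitrary illustrative choices.

```python
# Numerical illustration (not a proof) of the Remez inequality: T_n is bounded
# by 1 on E = [-1, 1] (measure 2 inside [-1, 1+sigma]) and its sup norm over
# the whole interval [-1, 1+sigma] equals T_n(1+sigma), the Remez bound.
import numpy as np
from numpy.polynomial import chebyshev as C

n, sigma = 5, 0.3                      # illustrative choices
Tn = C.Chebyshev.basis(n)              # Chebyshev polynomial T_n

xs_unit = np.linspace(-1.0, 1.0, 10001)          # the set E = [-1, 1]
xs_full = np.linspace(-1.0, 1.0 + sigma, 10001)  # the interval [-1, 1+sigma]

assert np.max(np.abs(Tn(xs_unit))) <= 1.0 + 1e-12   # |T_n| <= 1 on E
print("sup |T_n| on [-1, 1+sigma]:", np.max(np.abs(Tn(xs_full))))
print("T_n(1+sigma)              :", Tn(1.0 + sigma))  # the extremal value
```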
12184856
Solid harmonics
In physics and mathematics, the solid harmonics are solutions of the Laplace equation in spherical polar coordinates, assumed to be (smooth) functions formula_0. There are two kinds: the "regular solid harmonics" formula_1, which are well-defined at the origin and the "irregular solid harmonics" formula_2, which are singular at the origin. Both sets of functions play an important role in potential theory, and are obtained by rescaling spherical harmonics appropriately: formula_3 formula_4 Derivation, relation to spherical harmonics. Introducing r, θ, and φ for the spherical polar coordinates of the 3-vector r, and assuming that formula_5 is a (smooth) function formula_0, we can write the Laplace equation in the following form formula_6 where "l"2 is the square of the nondimensional angular momentum operator, formula_7 It is known that spherical harmonics "Y" are eigenfunctions of "l"2: formula_8 Substitution of Φ(r) = "F"("r") "Y" into the Laplace equation gives, after dividing out the spherical harmonic function, the following radial equation and its general solution, formula_9 The particular solutions of the total Laplace equation are regular solid harmonics: formula_10 and irregular solid harmonics: formula_11 The regular solid harmonics correspond to harmonic homogeneous polynomials, i.e. homogeneous polynomials which are solutions to Laplace's equation. Racah's normalization. Racah's normalization (also known as Schmidt's semi-normalization) is applied to both functions formula_12 (and analogously for the irregular solid harmonic) instead of normalization to unity. This is convenient because in many applications the Racah normalization factor appears unchanged throughout the derivations. Addition theorems. The translation of the regular solid harmonic gives a finite expansion, formula_13 where the Clebsch–Gordan coefficient is given by formula_14 The similar expansion for irregular solid harmonics gives an infinite series, formula_15 with formula_16. The quantity between pointed brackets is again a Clebsch-Gordan coefficient, formula_17 The addition theorems were proved in different manners by several authors. Complex form. The regular solid harmonics are homogeneous, polynomial solutions to the Laplace equation formula_18. Separating the indeterminate formula_19 and writing formula_20, the Laplace equation is easily seen to be equivalent to the recursion formula formula_21 so that any choice of polynomials formula_22 of degree formula_23 and formula_24 of degree formula_25 gives a solution to the equation. One particular basis of the space of homogeneous polynomials (in two variables) of degree formula_26 is formula_27. Note that it is the (unique up to normalization) basis of eigenvectors of the rotation group formula_28: The rotation formula_29 of the plane by formula_30 acts as multiplication by formula_31 on the basis vector formula_32. If we combine the degree formula_23 basis and the degree formula_25 basis with the recursion formula, we obtain a basis of the space of harmonic, homogeneous polynomials (in three variables this time) of degree formula_23 consisting of eigenvectors for formula_28 (note that the recursion formula is compatible with the formula_28-action because the Laplace operator is rotationally invariant). These are the complex solid harmonics: formula_33 and in general formula_34 for formula_35. 
Plugging in spherical coordinates formula_36, formula_37, formula_38 and using formula_39 one finds the usual relationship to spherical harmonics formula_40 with a polynomial formula_41, which is (up to normalization) the associated Legendre polynomial, and so formula_42 (again, up to the specific choice of normalization). Real form. By a simple linear combination of solid harmonics of ±"m" these functions are transformed into real functions, i.e. functions formula_43. The real regular solid harmonics, expressed in Cartesian coordinates, are real-valued homogeneous polynomials of order formula_23 in "x", "y", "z". The explicit form of these polynomials is of some importance. They appear, for example, in the form of spherical atomic orbitals and real multipole moments. The explicit Cartesian expression of the real regular harmonics will now be derived. Linear combination. We write in agreement with the earlier definition formula_44 with formula_45 where formula_46 is a Legendre polynomial of order ℓ. The "m"-dependent phase is known as the Condon–Shortley phase. The following expression defines the real regular solid harmonics: formula_47 and for "m" = 0: formula_48 Since the transformation is by a unitary matrix the normalization of the real and the complex solid harmonics is the same. "z"-dependent part. Upon writing "u" = cos "θ" the "m"-th derivative of the Legendre polynomial can be written as the following expansion in "u" formula_49 with formula_50 Since "z" = "r" cos "θ" it follows that this derivative, times an appropriate power of "r", is a simple polynomial in "z", formula_51 ("x","y")-dependent part. Consider next, recalling that "x" = "r" sin "θ" cos "φ" and "y" = "r" sin "θ" sin "φ", formula_52 Likewise formula_53 Further formula_54 and formula_55 In total. formula_56 formula_57 List of lowest functions. We list explicitly the lowest functions up to and including "ℓ" = 5. Here formula_58 formula_59 The lowest functions formula_60 and formula_61 are:
[ { "math_id": 0, "text": "\\mathbb{R}^3 \\to \\mathbb{C}" }, { "math_id": 1, "text": "R^m_\\ell(\\mathbf{r})" }, { "math_id": 2, "text": "I^m_{\\ell}(\\mathbf{r})" }, { "math_id": 3, "text": " \nR^m_{\\ell}(\\mathbf{r}) \\equiv \\sqrt{\\frac{4\\pi}{2\\ell+1}}\\; r^\\ell Y^m_{\\ell}(\\theta,\\varphi)\n" }, { "math_id": 4, "text": " \nI^m_{\\ell}(\\mathbf{r}) \\equiv \\sqrt{\\frac{4\\pi}{2\\ell+1}} \\; \\frac{ Y^m_{\\ell}(\\theta,\\varphi)}{r^{\\ell+1}}\n" }, { "math_id": 5, "text": "\\Phi" }, { "math_id": 6, "text": " \\nabla^2\\Phi(\\mathbf{r}) = \\left(\\frac{1}{r} \\frac{\\partial^2}{\\partial r^2}r - \\frac{\\hat l^2}{r^2}\\right)\\Phi(\\mathbf{r}) = 0 , \\qquad \\mathbf{r} \\ne \\mathbf{0},\n" }, { "math_id": 7, "text": " \\mathbf{\\hat l} = -i\\, (\\mathbf{r} \\times \\mathbf{\\nabla}) .\n" }, { "math_id": 8, "text": "\n\\hat l^2 Y^m_{\\ell}\\equiv \\left[ {\\hat l_x}^2 +\\hat l^2_y+\\hat l^2_z\\right]Y^m_{\\ell} = \\ell(\\ell+1) Y^m_{\\ell}.\n" }, { "math_id": 9, "text": "\n\\frac{1}{r}\\frac{\\partial^2}{\\partial r^2}r F(r) = \\frac{\\ell(\\ell+1)}{r^2} F(r)\n\\Longrightarrow F(r) = A r^\\ell + B r^{-\\ell-1}.\n" }, { "math_id": 10, "text": "\nR^m_{\\ell}(\\mathbf{r}) \\equiv \\sqrt{\\frac{4\\pi}{2\\ell+1}}\\; r^\\ell Y^m_{\\ell}(\\theta,\\varphi), \n" }, { "math_id": 11, "text": "\nI^m_{\\ell}(\\mathbf{r}) \\equiv \\sqrt{\\frac{4\\pi}{2\\ell+1}} \\; \\frac{ Y^m_{\\ell}(\\theta,\\varphi)}{r^{\\ell+1}} .\n" }, { "math_id": 12, "text": "\n\\int_{0}^{\\pi}\\sin\\theta\\, d\\theta \\int_0^{2\\pi} d\\varphi\\; R^m_{\\ell}(\\mathbf{r})^*\\; R^m_{\\ell}(\\mathbf{r}) \n= \\frac{4\\pi}{2\\ell+1} r^{2\\ell}\n" }, { "math_id": 13, "text": " R^m_\\ell(\\mathbf{r}+\\mathbf{a}) = \\sum_{\\lambda=0}^\\ell\\binom{2\\ell}{2\\lambda}^{1/2} \\sum_{\\mu=-\\lambda}^\\lambda R^\\mu_{\\lambda}(\\mathbf{r}) R^{m-\\mu}_{\\ell-\\lambda}(\\mathbf{a})\\;\n\\langle \\lambda, \\mu; \\ell-\\lambda, m-\\mu| \\ell m \\rangle,\n" }, { "math_id": 14, "text": "\n\\langle \\lambda, \\mu; \\ell-\\lambda, m-\\mu| \\ell m \\rangle\n= \\binom{\\ell+m}{\\lambda+\\mu}^{1/2} \\binom{\\ell-m}{\\lambda-\\mu}^{1/2} \\binom{2\\ell}{2\\lambda}^{-1/2}.\n" }, { "math_id": 15, "text": " I^m_\\ell(\\mathbf{r}+\\mathbf{a}) = \\sum_{\\lambda=0}^\\infty\\binom{2\\ell+2\\lambda+1}{2\\lambda}^{1/2} \\sum_{\\mu=-\\lambda}^\\lambda R^\\mu_{\\lambda}(\\mathbf{r}) I^{m-\\mu}_{\\ell+\\lambda}(\\mathbf{a})\\;\n\\langle \\lambda, \\mu; \\ell+\\lambda, m-\\mu| \\ell m \\rangle\n" }, { "math_id": 16, "text": " |r| \\le |a|\\," }, { "math_id": 17, "text": "\n\\langle \\lambda, \\mu; \\ell+\\lambda, m-\\mu| \\ell m \\rangle\n= (-1)^{\\lambda+\\mu}\\binom{\\ell+\\lambda-m+\\mu}{\\lambda+\\mu}^{1/2} \\binom{\\ell+\\lambda+m-\\mu}{\\lambda-\\mu}^{1/2}\n\\binom{2\\ell+2\\lambda+1}{2\\lambda}^{-1/2}.\n" }, { "math_id": 18, "text": "\\Delta R=0" }, { "math_id": 19, "text": "z" }, { "math_id": 20, "text": "R = \\sum_a p_a(x,y) z^a" }, { "math_id": 21, "text": "p_{a+2} = \\frac{-\\left(\\partial_x^2 + \\partial_y^2\\right) p_a}{\\left(a+2\\right) \\left(a+1\\right)}" }, { "math_id": 22, "text": "p_0(x,y)" }, { "math_id": 23, "text": "\\ell" }, { "math_id": 24, "text": "p_1(x,y)" }, { "math_id": 25, "text": "\\ell-1" }, { "math_id": 26, "text": "k" }, { "math_id": 27, "text": "\\left\\{(x^2+y^2)^m(x\\pm iy)^{k-2m} \\mid 0\\leq m\\leq k/2\\right\\}" }, { "math_id": 28, "text": "SO(2)" }, { "math_id": 29, "text": "\\rho_\\alpha" }, { "math_id": 30, "text": "\\alpha\\in[0,2\\pi]" }, { "math_id": 31, "text": "e^{\\pm i(k-2m)\\alpha}" }, { "math_id": 32, "text": 
"(x^2+y^2)^m (x+iy)^{k-2m}" }, { "math_id": 33, "text": "\\begin{align}\nR_\\ell^{\\pm\\ell} &= (x \\pm iy)^\\ell z^0 \\\\\nR_\\ell^{\\pm(\\ell-1)} &= (x \\pm iy)^{\\ell-1} z^1 \\\\\nR_\\ell^{\\pm(\\ell-2)} &= (x^2+y^2)(x \\pm iy)^{\\ell-2} z^0 + \\frac{-(\\partial_x^2+\\partial_y^2)\\left( (x^2+y^2)(x \\pm iy)^{\\ell-2} \\right)}{1\\cdot 2} z^2 \\\\\nR_\\ell^{\\pm(\\ell-3)} &= (x^2+y^2)(x \\pm iy)^{\\ell-3} z^1 + \\frac{-(\\partial_x^2+\\partial_y^2)\\left( (x^2+y^2)(x \\pm iy)^{\\ell-3} \\right)}{2\\cdot 3} z^3 \\\\\nR_\\ell^{\\pm(\\ell-4)} &= (x^2+y^2)^2(x \\pm iy)^{\\ell-4} z^0 + \\frac{-(\\partial_x^2+\\partial_y^2)\\left( (x^2+y^2)^2(x \\pm iy)^{\\ell-4} \\right)}{1\\cdot 2} z^2 + \\frac{ (\\partial_x^2+\\partial_y^2)^2 \\left( (x^2+y^2)^2(x \\pm iy)^{\\ell-4}\\right)}{1\\cdot 2 \\cdot 3\\cdot 4}z^4 \\\\\nR_\\ell^{\\pm(\\ell-5)} &= (x^2+y^2)^2(x \\pm iy)^{\\ell-5} z^1 + \\frac{-(\\partial_x^2+\\partial_y^2)\\left( (x^2+y^2)^2(x \\pm iy)^{\\ell-5} \\right)}{2\\cdot 3} z^3 + \\frac{ (\\partial_x^2+\\partial_y^2)^2 \\left( (x^2+y^2)^2(x \\pm iy)^{\\ell-5}\\right)}{2 \\cdot 3\\cdot 4\\cdot 5}z^5 \\\\\n&\\;\\,\\vdots\n\\end{align}" }, { "math_id": 34, "text": "R_\\ell^{\\pm m} = \\begin{cases}\n\\sum_k (\\partial_x^2+\\partial_y^2)^k \\left( (x^2+y^2)^{(\\ell-m)/2} (x\\pm iy)^m \\right) \\frac{(-1)^k z^{2k}}{ (2k)! } & \\ell-m \\text{ is even} \\\\\n\\sum_k (\\partial_x^2+\\partial_y^2)^k \\left( (x^2+y^2)^{(\\ell-1-m)/2} (x\\pm iy)^m \\right) \\frac{(-1)^k z^{2k+1}}{ (2k+1)! } & \\ell-m \\text{ is odd}\n\\end{cases}" }, { "math_id": 35, "text": "0\\leq m\\leq \\ell" }, { "math_id": 36, "text": "x = r\\cos(\\theta)\\sin(\\varphi)" }, { "math_id": 37, "text": "y = r\\sin(\\theta)\\sin(\\varphi)" }, { "math_id": 38, "text": "z = r\\cos(\\varphi)" }, { "math_id": 39, "text": "x^2+y^2=r^2 \\sin(\\varphi)^2 = r^2(1-\\cos(\\varphi)^2)" }, { "math_id": 40, "text": "R_\\ell^m = r^\\ell e^{im\\phi} P_\\ell^m(\\cos(\\vartheta))" }, { "math_id": 41, "text": "P_\\ell^m" }, { "math_id": 42, "text": "R_\\ell^m = r^\\ell Y_\\ell^m(\\theta,\\varphi)" }, { "math_id": 43, "text": "\\mathbb{R}^3 \\to \\mathbb{R}" }, { "math_id": 44, "text": "\nR_\\ell^m(r,\\theta,\\varphi) = (-1)^{(m+|m|)/2}\\; r^\\ell \\;\\Theta_{\\ell}^{|m|} (\\cos\\theta)\n e^{im\\varphi}, \\qquad -\\ell \\le m \\le \\ell,\n" }, { "math_id": 45, "text": "\n\\Theta_{\\ell}^m (\\cos\\theta) \\equiv \\left[\\frac{(\\ell-m)!}{(\\ell+m)!}\\right]^{1/2} \\,\\sin^m\\theta\\, \\frac{d^m P_\\ell(\\cos\\theta)}{d\\cos^m\\theta}, \\qquad m\\ge 0,\n" }, { "math_id": 46, "text": " P_\\ell(\\cos\\theta)" }, { "math_id": 47, "text": "\n\\begin{pmatrix}\nC_\\ell^{m} \\\\\nS_\\ell^{m}\n\\end{pmatrix}\n\\equiv \\sqrt{2} \\; r^\\ell \\; \\Theta^{m}_\\ell\n\\begin{pmatrix}\n\\cos m\\varphi\\\\ \\sin m\\varphi\n\\end{pmatrix} \n=\n\\frac{1}{\\sqrt{2}}\n\\begin{pmatrix}\n(-1)^m & \\quad 1 \\\\\n-(-1)^m i & \\quad i \n\\end{pmatrix} \n\\begin{pmatrix}\nR_\\ell^{m} \\\\\nR_\\ell^{-m}\n\\end{pmatrix},\n\\qquad m > 0.\n" }, { "math_id": 48, "text": "C_\\ell^0 \\equiv R_\\ell^0 ." 
}, { "math_id": 49, "text": "\n\\frac{d^m P_\\ell(u)}{du^m} =\n\\sum_{k=0}^{\\left \\lfloor (\\ell-m)/2\\right \\rfloor} \\gamma^{(m)}_{\\ell k}\\; u^{\\ell-2k-m}\n" }, { "math_id": 50, "text": "\n\\gamma^{(m)}_{\\ell k} = (-1)^k 2^{-\\ell} \\binom{\\ell}{k}\\binom{2\\ell-2k}{\\ell} \\frac{(\\ell-2k)!}{(\\ell-2k-m)!}.\n" }, { "math_id": 51, "text": "\n\\Pi^m_\\ell(z)\\equiv\nr^{\\ell-m} \\frac{d^m P_\\ell(u)}{du^m} =\n\\sum_{k=0}^{\\left \\lfloor (\\ell-m)/2\\right \\rfloor} \\gamma^{(m)}_{\\ell k}\\; r^{2k}\\; z^{\\ell-2k-m}.\n" }, { "math_id": 52, "text": "\nr^m \\sin^m\\theta \\cos m\\varphi = \\frac{1}{2} \\left[ (r \\sin\\theta e^{i\\varphi})^m \n+ (r \\sin\\theta e^{-i\\varphi})^m \\right] =\n\\frac{1}{2} \\left[ (x+iy)^m + (x-iy)^m \\right]\n" }, { "math_id": 53, "text": "\nr^m \\sin^m\\theta \\sin m\\varphi = \\frac{1}{2i} \\left[ (r \\sin\\theta e^{i\\varphi})^m \n- (r \\sin\\theta e^{-i\\varphi})^m \\right] =\n\\frac{1}{2i} \\left[ (x+iy)^m - (x-iy)^m \\right].\n" }, { "math_id": 54, "text": "\nA_m(x,y) \\equiv\n\\frac{1}{2} \\left[ (x+iy)^m + (x-iy)^m \\right]= \\sum_{p=0}^m \\binom{m}{p} x^p y^{m-p} \\cos (m-p) \\frac{\\pi}{2}\n" }, { "math_id": 55, "text": "\nB_m(x,y) \\equiv\n\\frac{1}{2i} \\left[ (x+iy)^m - (x-iy)^m \\right]= \\sum_{p=0}^m \\binom{m}{p} x^p y^{m-p} \\sin (m-p) \\frac{\\pi}{2}.\n" }, { "math_id": 56, "text": "\nC^m_\\ell(x,y,z) = \\left[\\frac{(2-\\delta_{m0}) (\\ell-m)!}{(\\ell+m)!}\\right]^{1/2} \\Pi^m_{\\ell}(z)\\;A_m(x,y),\\qquad m=0,1, \\ldots,\\ell\n" }, { "math_id": 57, "text": "\nS^m_\\ell(x,y,z) = \\left[\\frac{2 (\\ell-m)!}{(\\ell+m)!}\\right]^{1/2} \\Pi^m_{\\ell}(z)\\;B_m(x,y)\n,\\qquad m=1,2,\\ldots,\\ell.\n" }, { "math_id": 58, "text": "\\bar{\\Pi}^m_\\ell(z) \\equiv \\left[\\tfrac{(2-\\delta_{m0}) (\\ell-m)!}{(\\ell+m)!}\\right]^{1/2} \\Pi^m_{\\ell}(z) .\n" }, { "math_id": 59, "text": "\n \\begin{align}\n \\bar{\\Pi}^0_0 & = 1 &\n \\bar{\\Pi}^1_3 & = \\frac{1}{4}\\sqrt{6}(5z^2-r^2) &\n \\bar{\\Pi}^4_4 & = \\frac{1}{8}\\sqrt{35} \\\\\n \\bar{\\Pi}^0_1 & = z &\n \\bar{\\Pi}^2_3 & = \\frac{1}{2}\\sqrt{15}\\; z &\n \\bar{\\Pi}^0_5 & = \\frac{1}{8}z(63z^4-70z^2r^2+15r^4) \\\\\n \\bar{\\Pi}^1_1 & = 1 &\n \\bar{\\Pi}^3_3 & = \\frac{1}{4}\\sqrt{10} &\n \\bar{\\Pi}^1_5 & = \\frac{1}{8}\\sqrt{15} (21z^4-14z^2r^2+r^4) \\\\\n \\bar{\\Pi}^0_2 & = \\frac{1}{2}(3z^2-r^2) &\n \\bar{\\Pi}^0_4 & = \\frac{1}{8}(35 z^4-30 r^2 z^2 +3r^4 ) &\n \\bar{\\Pi}^2_5 & = \\frac{1}{4}\\sqrt{105}(3z^2-r^2)z \\\\\n \\bar{\\Pi}^1_2 & = \\sqrt{3}z &\n \\bar{\\Pi}^1_4 & = \\frac{\\sqrt{10}}{4} z(7z^2-3r^2) &\n \\bar{\\Pi}^3_5 & = \\frac{1}{16}\\sqrt{70} (9z^2-r^2) \\\\\n \\bar{\\Pi}^2_2 & = \\frac{1}{2}\\sqrt{3} &\n \\bar{\\Pi}^2_4 & = \\frac{1}{4}\\sqrt{5}(7z^2-r^2) &\n \\bar{\\Pi}^4_5 & = \\frac{3}{8}\\sqrt{35} z \\\\\n \\bar{\\Pi}^0_3 & = \\frac{1}{2} z(5z^2-3r^2) &\n \\bar{\\Pi}^3_4 & = \\frac{1}{4}\\sqrt{70}\\;z &\n \\bar{\\Pi}^5_5 & = \\frac{3}{16}\\sqrt{14} \\\\\n \\end{align}\n" }, { "math_id": 60, "text": "A_m(x,y)\\," }, { "math_id": 61, "text": " B_m(x,y)\\," } ]
https://en.wikipedia.org/wiki?curid=12184856
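As a small check of the statement above that the regular solid harmonics are harmonic homogeneous polynomials, the following sympy sketch (not part of the article) verifies Laplace's equation for three examples: the complex solid harmonic (x + iy)^4, the real solid harmonic C_3^0 = (1/2)z(5z^2 − 3r^2) from the list of lowest functions, and a degree-5 example built with the recursion formula of the "Complex form" section.

```python
# Sanity check: a few regular solid harmonics, written as Cartesian
# polynomials, satisfy Laplace's equation.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2

def lap3(f):          # 3D Laplacian
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

def lap2(f):          # 2D Laplacian in (x, y), used by the recursion
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

# (1) complex solid harmonic R_4^{+4} = (x + iy)^4
R44 = (x + sp.I*y)**4
# (2) real solid harmonic C_3^0 = (1/2) z (5 z^2 - 3 r^2)
C30 = sp.Rational(1, 2) * z * (5*z**2 - 3*r2)
# (3) R_5^{+3} built from the recursion p_{a+2} = -(dx^2 + dy^2) p_a / ((a+2)(a+1))
p0 = (x**2 + y**2) * (x + sp.I*y)**3      # degree-5 seed with m = 3
p2 = -lap2(p0) / (2 * 1)
R53 = p0 + p2 * z**2

for name, f in [("R_4^4", R44), ("C_3^0", C30), ("R_5^3", R53)]:
    print(name, "is harmonic:", sp.expand(lap3(f)) == 0)
```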
12186373
Macaulay's method
Macaulay's method (the double integration method) is a technique used in structural analysis to determine the deflection of Euler-Bernoulli beams. Use of Macaulay's technique is very convenient for cases of discontinuous and/or discrete loading. Typically, partial uniformly distributed loads (u.d.l.) and uniformly varying loads (u.v.l.) over the span and a number of concentrated loads are conveniently handled using this technique. The first English language description of the method was by Macaulay. The actual approach appears to have been developed by Clebsch in 1862. Macaulay's method has been generalized for Euler-Bernoulli beams with axial compression, to Timoshenko beams, to elastic foundations, and to problems in which the bending and shear stiffness changes discontinuously in a beam. Method. The starting point is the relation from Euler-Bernoulli beam theory formula_0 where formula_1 is the deflection and formula_2 is the bending moment. This equation is simpler than the fourth-order beam equation and can be integrated twice to find formula_1 if the value of formula_2 as a function of formula_3 is known. For general loadings, formula_2 can be expressed in the form formula_4 where the quantities formula_5 represent the bending moments due to point loads and the quantity formula_6 is a Macaulay bracket defined as formula_7 Ordinarily, when integrating formula_8 we get formula_9 However, when integrating expressions containing Macaulay brackets, we have formula_10 with the difference between the two expressions being contained in the constant formula_11. Using these integration rules makes the calculation of the deflection of Euler-Bernoulli beams simple in situations where there are multiple point loads and point moments. The Macaulay method predates more sophisticated concepts such as Dirac delta functions and step functions but achieves the same outcomes for beam problems. Example: Simply supported beam with point load. An illustration of the Macaulay method considers a simply supported beam with a single eccentric concentrated load as shown in the adjacent figure. The first step is to find formula_2. The reactions at the supports A and C are determined from the balance of forces and moments as formula_12 Therefore, formula_13 and the bending moment at a point D between A and B (formula_14) is given by formula_15 Using the moment-curvature relation and the Euler-Bernoulli expression for the bending moment, we have formula_16 Integrating the above equation we get, for formula_14, formula_17 At formula_18 formula_19 For a point D in the region BC (formula_20), the bending moment is formula_21 In Macaulay's approach we use the Macaulay bracket form of the above expression to represent the fact that a point load has been applied at location B, i.e., formula_22 Therefore, the Euler-Bernoulli beam equation for this region has the form formula_23 Integrating the above equation, we get for formula_20 formula_24 At formula_25 formula_26 Comparing equations (iii) & (vii) and (iv) & (viii), we notice that due to continuity at point B, formula_27 and formula_28. The above observation implies that for the two regions considered, though the equation for bending moment and hence for the curvature are different, the constants of integration obtained during successive integration of the equation for curvature for the two regions are the same. 
The above argument holds true for any number/type of discontinuities in the equations for curvature, provided that in each case the equation retains the term for the subsequent region in the form formula_29 etc. It should be remembered that, for any "x", a bracketed quantity that evaluates as negative is to be neglected (treated as zero), and the calculations should be made considering only the terms for which the quantity within the brackets is positive. Reverting to the problem, we have formula_23 Only the first term is to be considered for formula_30, and both terms for formula_31; the solution is formula_32 Note that the constants are placed immediately after the first term to indicate that they go with the first term when formula_30 and with both the terms when formula_31. The Macaulay brackets help as a reminder that the quantity on the right is zero when considering points with formula_30. Boundary conditions. As formula_33 at formula_34, formula_35. Also, as formula_33 at formula_36, formula_37 or, formula_38 Hence, formula_39 Maximum deflection. For formula_1 to be maximum, formula_40. Assuming that this happens for formula_30 we have formula_41 or formula_42 Clearly formula_43 cannot be a solution. Therefore, the maximum deflection is given by formula_44 or, formula_45 Deflection at load application point. At formula_46, i.e., at point B, the deflection is formula_47 or formula_48 Deflection at midpoint. It is instructive to examine the ratio of formula_49. At formula_50 formula_51 Therefore, formula_52 where formula_53 and for formula_54. Even when the load is as near as 0.05L from the support, the error in estimating the deflection is only 2.6%. Hence, in most cases the maximum deflection can be estimated fairly accurately, with a reasonable margin of error, by working out the deflection at the centre. Special case of symmetrically applied load. When formula_55, for formula_1 to be maximum formula_56 and the maximum deflection is formula_57
[ { "math_id": 0, "text": "\n \\pm EI\\dfrac{d^2w}{dx^2} = M \n" }, { "math_id": 1, "text": "w" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\n M = M_1(x) + P_1\\langle x - a_1\\rangle + P_2\\langle x - a_2\\rangle + P_3\\langle x - a_3\\rangle + \\dots\n " }, { "math_id": 5, "text": "P_i\\langle x - a_i\\rangle" }, { "math_id": 6, "text": "\\langle x - a_i\\rangle" }, { "math_id": 7, "text": "\n \\langle x - a_i\\rangle = \\begin{cases} 0 & \\mathrm{if}~ x < a_i \\\\ x - a_i & \\mathrm{if}~ x > a_i \\end{cases}\n " }, { "math_id": 8, "text": "P(x-a)" }, { "math_id": 9, "text": "\n \\int P(x-a)~dx = P\\left[\\cfrac{x^2}{2} - ax\\right] + C\n " }, { "math_id": 10, "text": "\n \\int P\\langle x-a \\rangle~dx = P\\cfrac{\\langle x-a \\rangle^2}{2} + C_m\n " }, { "math_id": 11, "text": "C_m" }, { "math_id": 12, "text": "\n R_A + R_C = P,~~ L R_C = P a\n " }, { "math_id": 13, "text": "R_A = Pb/L" }, { "math_id": 14, "text": " 0 < x < a" }, { "math_id": 15, "text": "\n M = R_A x = Pbx/L \n " }, { "math_id": 16, "text": "\n EI\\dfrac{d^2w}{dx^2} = \\dfrac{Pbx}{L}\n" }, { "math_id": 17, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx} &= \\dfrac{Pbx^2}{2L} +C_1 & &\\quad\\mathrm{(i)}\\\\\n EI w &= \\dfrac{Pbx^3}{6L} + C_1 x + C_2 & &\\quad\\mathrm{(ii)}\n \\end{align}\n " }, { "math_id": 18, "text": "x=a_{-}" }, { "math_id": 19, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx}(a_{-}) &= \\dfrac{Pba^2}{2L} +C_1 & &\\quad\\mathrm{(iii)} \\\\\n EI w(a_{-}) &= \\dfrac{Pba^3}{6L} + C_1 a + C_2 & &\\quad\\mathrm{(iv)}\n \\end{align}\n " }, { "math_id": 20, "text": "a < x < L" }, { "math_id": 21, "text": "\n M = R_A x - P(x-a) = Pbx/L - P(x-a)\n " }, { "math_id": 22, "text": "\n M = \\frac{Pbx}{L} - P\\langle x-a \\rangle\n " }, { "math_id": 23, "text": "\n EI\\dfrac{d^2w}{dx^2} = \\dfrac{Pbx}{L} - P\\langle x-a \\rangle\n" }, { "math_id": 24, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx} &= \\dfrac{Pbx^2}{2L} - P\\cfrac{\\langle x-a \\rangle^2}{2} + D_1 & &\\quad\\mathrm{(v)}\\\\\n EI w &= \\dfrac{Pbx^3}{6L} - P\\cfrac{\\langle x-a \\rangle^3}{6} + D_1 x + D_2 & &\\quad\\mathrm{(vi)}\n \\end{align}\n " }, { "math_id": 25, "text": "x=a_{+}" }, { "math_id": 26, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx}(a_{+}) &= \\dfrac{Pba^2}{2L} + D_1 & &\\quad\\mathrm{(vii)}\\\\\n EI w(a_{+}) &= \\dfrac{Pba^3}{6L} + D_1 a + D_2 & &\\quad\\mathrm{(viii)}\n \\end{align}\n " }, { "math_id": 27, "text": "C_1 = D_1" }, { "math_id": 28, "text": "C_2 = D_2" }, { "math_id": 29, "text": "\\langle x-a\\rangle ^n, \\langle x-b\\rangle ^n, \\langle x-c\\rangle ^n" }, { "math_id": 30, "text": "x < a" }, { "math_id": 31, "text": "x > a" }, { "math_id": 32, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx} &= \\left[\\dfrac{Pbx^2}{2L} + C_1\\right] - \\cfrac{P\\langle x-a \\rangle^2}{2} \\\\\n EI w &= \\left[\\dfrac{Pbx^3}{6L} + C_1 x + C_2\\right] - \\cfrac{P\\langle x-a \\rangle^3}{6} \n \\end{align}\n " }, { "math_id": 33, "text": "w = 0" }, { "math_id": 34, "text": "x = 0" }, { "math_id": 35, "text": "C2 = 0" }, { "math_id": 36, "text": "x = L" }, { "math_id": 37, "text": "\n \\left[\\dfrac{PbL^2}{6} + C_1 L \\right] - \\cfrac{P(L-a)^3}{6} = 0\n " }, { "math_id": 38, "text": "\n C_1 = -\\cfrac{Pb}{6L}(L^2-b^2) ~.\n " }, { "math_id": 39, "text": "\n \\begin{align}\n EI\\dfrac{dw}{dx} &= \\left[\\dfrac{Pbx^2}{2L} -\\cfrac{Pb}{6L}(L^2-b^2)\\right] - \\cfrac{P\\langle x-a \\rangle^2}{2} \\\\\n EI w &= \\left[\\dfrac{Pbx^3}{6L} -\\cfrac{Pbx}{6L}(L^2-b^2)\\right] - \\cfrac{P\\langle x-a 
\\rangle^3}{6} \n \\end{align}\n " }, { "math_id": 40, "text": "dw/dx = 0" }, { "math_id": 41, "text": "\n \\dfrac{Pbx^2}{2L} -\\cfrac{Pb}{6L}(L^2-b^2) = 0\n " }, { "math_id": 42, "text": "\n x = \\pm \\cfrac{(L^2-b^2)^{1/2}}{\\sqrt{3}}\n " }, { "math_id": 43, "text": " x < 0" }, { "math_id": 44, "text": "\n EI w_{\\mathrm{max}} = \\cfrac{1}{3}\\left[\\dfrac{Pb(L^2-b^2)^{3/2}}{6\\sqrt{3}L}\\right] -\\cfrac{Pb(L^2-b^2)^{3/2}}{6\\sqrt{3}L}\n " }, { "math_id": 45, "text": "\n w_{\\mathrm{max}} = -\\dfrac{Pb(L^2-b^2)^{3/2}}{9\\sqrt{3}EIL}~.\n " }, { "math_id": 46, "text": "x = a" }, { "math_id": 47, "text": "\n EI w_B = \\dfrac{Pba^3}{6L} -\\cfrac{Pba}{6L}(L^2-b^2) = \\frac{Pba}{6L}(a^2+b^2-L^2)\n " }, { "math_id": 48, "text": "\n w_B = -\\cfrac{Pa^2b^2}{3LEI}\n " }, { "math_id": 49, "text": "w_{\\mathrm{max}}/w(L/2)" }, { "math_id": 50, "text": "x = L/2" }, { "math_id": 51, "text": "\n EI w(L/2) = \\dfrac{PbL^2}{48} -\\cfrac{Pb}{12}(L^2-b^2) = -\\frac{Pb}{12}\\left[\\frac{3L^2}{4} -b^2\\right]\n " }, { "math_id": 52, "text": "\n \\frac{w_{\\mathrm{max}}}{w(L/2)} = \\frac{4(L^2-b^2)^{3/2}}{3\\sqrt{3}L\\left[\\frac{3L^2}{4} -b^2\\right]} \n = \\frac{4(1-\\frac{b^2}{L^2})^{3/2}}{3\\sqrt{3}\\left[\\frac{3}{4} - \\frac{b^2}{L^2}\\right]}\n = \\frac{16(1-k^2)^{3/2}}{3\\sqrt{3}\\left(3 - 4k^2\\right)}\n " }, { "math_id": 53, "text": "k = B/L" }, { "math_id": 54, "text": "a < b; 0 < k < 0.5" }, { "math_id": 55, "text": "a = b = L/2" }, { "math_id": 56, "text": "\n x = \\cfrac{[L^2-(L/2)^2]^{1/2}}{\\sqrt{3}} = \\frac{L}{2} \n " }, { "math_id": 57, "text": "\n w_{\\mathrm{max}} = -\\dfrac{P(L/2)b[L^2-(L/2)^2]^{3/2}}{9\\sqrt{3}EIL} = -\\frac{PL^3}{48EI} = w(L/2)~.\n " } ]
https://en.wikipedia.org/wiki?curid=12186373
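A short numerical sketch of the worked example above (a sanity check, not part of the article): the doubly integrated Macaulay-bracket expression for the deflection of the simply supported beam with a point load at "x" = "a" is evaluated and compared with the closed-form results for the deflection under the load and the maximum deflection. The values of "P", "L", "a", "E" and "I" are arbitrary illustrative assumptions, with "a" > "L"/2 so that the maximum occurs at "x" < "a", as assumed in the derivation.

```python
# Simply supported beam with a point load P at x = a (illustrative values).
import numpy as np

P, L, a = 10e3, 4.0, 2.5          # N, m, m (assumed)
b = L - a
EI = 210e9 * 8e-6                 # flexural rigidity E*I in N*m^2 (assumed)

def macaulay(x, c, n=1):
    """Macaulay bracket <x - c>^n: zero for x < c."""
    return np.where(x > c, (x - c)**n, 0.0)

def w(x):
    """Deflection from the doubly integrated bending moment (C2 = 0)."""
    C1 = -P*b*(L**2 - b**2) / (6*L)
    EIw = P*b*x**3/(6*L) + C1*x - P*macaulay(x, a, 3)/6
    return EIw / EI

# Closed-form results quoted in the article
w_B_closed   = -P*a**2*b**2 / (3*L*EI)                     # deflection under the load
x_max        = np.sqrt((L**2 - b**2) / 3)                  # location of maximum
w_max_closed = -P*b*(L**2 - b**2)**1.5 / (9*np.sqrt(3)*EI*L)

w_num = w(np.array([a, x_max]))
print("w(a):     Macaulay", w_num[0], " closed form", w_B_closed)
print("w(x_max): Macaulay", w_num[1], " closed form", w_max_closed)
```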
1218867
Hodograph
Vector representation of the movement of a body or fluid. A hodograph is a diagram that gives a vectorial visual representation of the movement of a body or a fluid. It is the locus of one end of a variable vector, with the other end fixed. The position of any plotted data on such a diagram is proportional to the velocity of the moving particle. It is also called a velocity diagram. It appears to have been used by James Bradley, but its practical development is mainly from Sir William Rowan Hamilton, who published an account of it in the "Proceedings of the Royal Irish Academy" in 1846. Applications. It is used in physics, astronomy, solid and fluid mechanics to plot deformation of material, motion of planets or any other data that involves the velocities of different parts of a body. Meteorology. In meteorology, hodographs are used to plot winds from soundings of the Earth's atmosphere. It is a polar diagram where wind direction is indicated by the angle from the center axis and its strength by the distance from the center. In the figure to the right, at the bottom one finds values of wind at 4 heights above ground. They are plotted by the vectors formula_0 to formula_1. One has to notice that directions are plotted as mentioned in the upper right corner. With the hodograph and thermodynamic diagrams like the tephigram, meteorologists can calculate quantities such as the vertical wind shear between levels. Distributed hodograph. It is a method of presenting the velocity field of a point in planar motion. The velocity vector, drawn at scale, is shown perpendicular rather than tangent to the point path, usually oriented away from the center of curvature of the path. Hodograph transformation. Hodograph transformation is a technique used to transform nonlinear partial differential equations into linear ones. It consists of interchanging the dependent and independent variables in the equation to achieve linearity.
[ { "math_id": 0, "text": "\\vec V_0" }, { "math_id": 1, "text": "\\vec V_4" }, { "math_id": 2, "text": "\\vec V_3" } ]
https://en.wikipedia.org/wiki?curid=1218867
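As a small illustration of how a meteorological hodograph is built (a sketch with made-up sounding values, not taken from the article's figure): the wind at each height is converted to eastward/northward components; each (u, v) pair is one point of the hodograph, and the segment between successive points is the vector wind shear between those levels.

```python
# Build hodograph points from an assumed wind sounding.
import numpy as np

heights_m   = np.array([0, 500, 1000, 1500])          # heights above ground (assumed)
speed_ms    = np.array([5.0, 8.0, 12.0, 15.0])        # wind speed at each level (assumed)
dir_from_dg = np.array([180.0, 200.0, 230.0, 260.0])  # direction wind blows FROM (assumed)

# meteorological convention: u eastward, v northward
d = np.deg2rad(dir_from_dg)
u = -speed_ms * np.sin(d)
v = -speed_ms * np.cos(d)

for i, h in enumerate(heights_m):
    print(f"z={h:5d} m  hodograph point (u, v) = ({u[i]:6.2f}, {v[i]:6.2f}) m/s")

# vector wind shear between successive levels (the segments joining the points)
du, dv = np.diff(u), np.diff(v)
print("shear magnitudes between levels (m/s):", np.hypot(du, dv))
```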
1219048
Short-rate model
A short-rate model, in the context of interest rate derivatives, is a mathematical model that describes the future evolution of interest rates by describing the future evolution of the short rate, usually written formula_0. The short rate. Under a short rate model, the stochastic state variable is taken to be the instantaneous spot rate. The short rate, formula_0, then, is the (continuously compounded, annualized) interest rate at which an entity can borrow money for an infinitesimally short period of time from time formula_1. Specifying the current short rate does not specify the entire yield curve. However, no-arbitrage arguments show that, under some fairly relaxed technical conditions, if we model the evolution of formula_0 as a stochastic process under a risk-neutral measure formula_2, then the price at time formula_1 of a zero-coupon bond maturing at time formula_3 with a payoff of 1 is given by formula_4 where formula_5 is the natural filtration for the process. The interest rates implied by the zero coupon bonds form a yield curve, or more precisely, a zero curve. Thus, specifying a model for the short rate specifies future bond prices. This means that instantaneous forward rates are also specified by the usual formula formula_6 Short rate models are often classified as endogenous and exogenous. Endogenous short rate models are short rate models where the term structure of interest rates, or of zero-coupon bond prices formula_7, is an output of the model, so it is "inside the model" (endogenous) and is determined by the model parameters. Exogenous short rate models are models where such term structure is an input, as the model involves some time dependent functions or shifts that allow for inputting a given market term structure, so that the term structure comes from outside (exogenous). Particular short-rate models. Throughout this section, formula_8 represents a standard Brownian motion under a risk-neutral probability measure and formula_9 its differential. Where the model is lognormal, a variable formula_10 is assumed to follow an Ornstein–Uhlenbeck process and formula_0 is assumed to follow formula_11. One-factor short-rate models. Following are the one-factor models, where a single stochastic factor – the short rate – determines the future evolution of all interest rates. Other than Rendleman–Bartter and Ho–Lee, which do not capture the mean reversion of interest rates, these models can be thought of as specific cases of Ornstein–Uhlenbeck processes. The Vasicek, Rendleman–Bartter and CIR models are endogenous models with only a finite number of free parameters, so it is not possible to specify the parameter values in such a way that the model coincides with more than a few observed market prices ("calibration") of zero-coupon bonds or of linear products such as forward rate agreements or swaps; typically, a best fit to these linear products is done to find the endogenous model parameters that are closest to the market prices. This does not allow for fitting options like caps, floors and swaptions, as the parameters have been used to fit linear instruments instead. This problem is overcome by allowing the parameters to vary deterministically with time, or by adding a deterministic shift to the endogenous model. In this way, exogenous models such as Ho-Lee and subsequent models can be calibrated to market data, meaning that these can exactly return the price of bonds comprising the yield curve, and the remaining parameters can be used for options calibration. 
The implementation is usually via a (binomial) short rate tree or simulation; see lattice model (finance) and Monte Carlo methods for option pricing, although some short rate models have closed form solutions for zero coupon bonds, and even caps or floors, easing the calibration task considerably. Endogenous models are listed first, followed by a number of exogenous short rate models. The idea of a deterministic shift can be applied also to other models that have desirable properties in their endogenous form. For example, one could apply the shift formula_35 to the Vasicek model, but due to linearity of the Ornstein-Uhlenbeck process, this is equivalent to making formula_17 a time dependent function, and would thus coincide with the Hull-White model. Multi-factor short-rate models. Besides the above one-factor models, there are also multi-factor models of the short rate, among which the best known are the Longstaff and Schwartz two factor model and the Chen three factor model (also called "stochastic mean and stochastic volatility model"). Note that for the purposes of risk management, "to create realistic interest rate simulations", these multi-factor short-rate models are sometimes preferred over one-factor models, as they produce scenarios which are, in general, more "consistent with actual yield curve movements". In the Longstaff–Schwartz model the factor dynamics are formula_36 where the short rate is defined as formula_37 In the Chen model the dynamics are formula_38 Other interest rate models. The other major framework for interest rate modelling is the Heath–Jarrow–Morton framework (HJM). Unlike the short rate models described above, this class of models is generally non-Markovian. This makes general HJM models computationally intractable for most purposes. The great advantage of HJM models is that they give an analytical description of the entire yield curve, rather than just the short rate. For some purposes (e.g., valuation of mortgage backed securities), this can be a big simplification. The Cox–Ingersoll–Ross and Hull–White models in one or more dimensions can both be straightforwardly expressed in the HJM framework. Other short rate models do not have any simple dual HJM representation. The HJM framework with multiple sources of randomness, including as it does the Brace–Gatarek–Musiela model and market models, is often preferred for models of higher dimension. Models based on Fischer Black's shadow rate are used when interest rates approach the zero lower bound.
[ { "math_id": 0, "text": "r_t \\," }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "Q" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": " P(t,T) = \\operatorname{E}^Q\\left[\\left. \\exp{\\left(-\\int_t^T r_s\\, ds\\right) } \\right| \\mathcal{F}_t \\right], " }, { "math_id": 5, "text": "\\mathcal{F}" }, { "math_id": 6, "text": " f(t,T) = - \\frac{\\partial}{\\partial T} \\ln(P(t,T)). " }, { "math_id": 7, "text": " T \\mapsto P(0,T)" }, { "math_id": 8, "text": "W_t\\," }, { "math_id": 9, "text": "dW_t\\," }, { "math_id": 10, "text": "X_t " }, { "math_id": 11, "text": "r_t = \\exp{X_t}\\," }, { "math_id": 12, "text": "r_t = r_{0}+at+\\sigma W^{*}_{t}" }, { "math_id": 13, "text": "W^{*}_{t}" }, { "math_id": 14, "text": "dr_t = (\\theta-\\alpha r_t)\\,dt + \\sigma \\, dW_t" }, { "math_id": 15, "text": "dr_t = a(b-r_t)\\, dt + \\sigma \\, dW_t" }, { "math_id": 16, "text": "a" }, { "math_id": 17, "text": "b" }, { "math_id": 18, "text": "\\sigma" }, { "math_id": 19, "text": "dr_t = \\theta r_t\\, dt + \\sigma r_t\\, dW_t" }, { "math_id": 20, "text": "dr_t = (\\theta-\\alpha r_t)\\,dt + \\sqrt{r_t}\\,\\sigma\\, dW_t" }, { "math_id": 21, "text": "dr_t = a(b-r_t)\\, dt + \\sqrt{r_t}\\,\\sigma\\, dW_t" }, { "math_id": 22, "text": "\\sigma \\sqrt{r_t}" }, { "math_id": 23, "text": "2 ab>\\sigma^2" }, { "math_id": 24, "text": "r_t" }, { "math_id": 25, "text": "dr_t = \\theta_t\\, dt + \\sigma\\, dW_t" }, { "math_id": 26, "text": "\\theta_t" }, { "math_id": 27, "text": "dr_t = (\\theta_t-\\alpha_t r_t)\\,dt + \\sigma_t \\, dW_t" }, { "math_id": 28, "text": "\\theta, \\alpha" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": " d\\ln(r) = [\\theta_t + \\frac{\\sigma '_t}{\\sigma_t}\\ln(r)]dt + \\sigma_t\\, dW_t " }, { "math_id": 31, "text": "d\\ln(r) = \\theta_t\\, dt + \\sigma \\, dW_t " }, { "math_id": 32, "text": " d\\ln(r) = [\\theta_t-\\phi_t \\ln(r)] \\, dt + \\sigma_t\\, dW_t " }, { "math_id": 33, "text": " d \\ln(r_t) = \\theta_t\\, dt + \\sigma\\, dW_t" }, { "math_id": 34, "text": " dx_t = a(b-x_t)\\, dt + \\sqrt{x_t}\\,\\sigma\\, dW_t, \\ \\ r_t = x_t + \\phi(t)" }, { "math_id": 35, "text": "\\phi" }, { "math_id": 36, "text": "\n\\begin{align}\ndX_t & = (a_t-b X_t)\\,dt + \\sqrt{X_t}\\,c_t\\, dW_{1t}, \\\\[3pt]\nd Y_t & = (d_t-e Y_t)\\,dt + \\sqrt{Y_t}\\,f_t\\, dW_{2t},\n\\end{align}\n" }, { "math_id": 37, "text": " dr_t = (\\mu X + \\theta Y)\\,dt + \\sigma_t \\sqrt{Y} \\,dW_{3t}. " }, { "math_id": 38, "text": "\n\\begin{align}\ndr_t & = (\\theta_t-\\alpha_t)\\,dt + \\sqrt{r_t}\\,\\sigma_t\\, dW_t, \\\\[3pt]\nd\\alpha_t & = (\\zeta_t-\\alpha_t)\\,dt + \\sqrt{\\alpha_t}\\,\\sigma_t\\, dW_t, \\\\[3pt]\nd\\sigma_t & = (\\beta_t-\\sigma_t)\\,dt + \\sqrt{\\sigma_t}\\,\\eta_t\\, dW_t.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1219048
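A self-contained numerical sketch of the basic bond-pricing relation above (illustrative, not from the article): the Vasicek short-rate dynamics dr = a(b − r) dt + σ dW are simulated under the risk-neutral measure, the zero-coupon bond price P(0, T) = E[exp(−∫ r ds)] is estimated by Monte Carlo, and the result is compared with the Vasicek closed-form bond price. All parameter values are arbitrary assumptions.

```python
# Vasicek model: Monte Carlo bond price vs closed form (illustrative parameters).
import numpy as np

a, b, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 5.0
n_steps, n_paths = 250, 100_000
dt = T / n_steps

rng = np.random.default_rng(0)
r = np.full(n_paths, r0)
integral_r = np.zeros(n_paths)
for _ in range(n_steps):
    integral_r += r * dt                       # accumulate the short-rate integral
    r = r + a*(b - r)*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths)

p_mc = np.exp(-integral_r).mean()              # P(0,T) = E[exp(-integral of r)]

# Vasicek closed form: P(0,T) = A(T) * exp(-B(T) * r0)
B = (1 - np.exp(-a*T)) / a
A = np.exp((b - sigma**2/(2*a**2)) * (B - T) - sigma**2 * B**2 / (4*a))
p_closed = A * np.exp(-B * r0)

print("Monte Carlo P(0,T):", p_mc)
print("closed form P(0,T):", p_closed)
```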
12191272
Newton's theorem of revolving orbits
Theorem in classical mechanics In classical mechanics, Newton's theorem of revolving orbits identifies the type of central force needed to multiply the angular speed of a particle by a factor "k" without affecting its radial motion (Figures 1 and 2). Newton applied his theorem to understanding the overall rotation of orbits ("apsidal precession", Figure 3) that is observed for the Moon and planets. The term "radial motion" signifies the motion towards or away from the center of force, whereas the angular motion is perpendicular to the radial motion. Isaac Newton derived this theorem in Propositions 43–45 of Book I of his "Philosophiæ Naturalis Principia Mathematica", first published in 1687. In Proposition 43, he showed that the added force must be a central force, one whose magnitude depends only upon the distance "r" between the particle and a point fixed in space (the center). In Proposition 44, he derived a formula for the force, showing that it was an inverse-cube force, one that varies as the inverse cube of "r". In Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in nearly circular orbit. As noted by astrophysicist Subrahmanyan Chandrasekhar in his 1995 commentary on Newton's "Principia", this theorem remained largely unknown and undeveloped for over three centuries. Since 1997, the theorem has been studied by Donald Lynden-Bell and collaborators. Its first exact extension came in 2000 with the work of Mahomed and Vawda. Historical context. The motion of astronomical bodies has been studied systematically for thousands of years. The stars were observed to rotate uniformly, always maintaining the same relative positions to one another. However, other bodies were observed to "wander" against the background of the fixed stars; most such bodies were called planets after the Greek word "πλανήτοι" ("planētoi") for "wanderers". Although they generally move in the same direction along a path across the sky (the ecliptic), individual planets sometimes reverse their direction briefly, exhibiting retrograde motion. To describe this forward-and-backward motion, Apollonius of Perga (c. 262 – c. 190 BC) developed the concept of deferents and epicycles, according to which the planets are carried on rotating circles that are themselves carried on other rotating circles, and so on. Any orbit can be described with a sufficient number of judiciously chosen epicycles, since this approach corresponds to a modern Fourier transform. Roughly 350 years later, Claudius Ptolemaeus published his "Almagest", in which he developed this system to match the best astronomical observations of his era. To explain the epicycles, Ptolemy adopted the geocentric cosmology of Aristotle, according to which planets were confined to concentric rotating spheres. This model of the universe was authoritative for nearly 1500 years. The modern understanding of planetary motion arose from the combined efforts of astronomer Tycho Brahe and physicist Johannes Kepler in the 16th century. Tycho is credited with extremely accurate measurements of planetary motions, from which Kepler was able to derive his laws of planetary motion. According to these laws, planets move on ellipses (not epicycles) about the Sun (not the Earth). Kepler's second and third laws make specific quantitative predictions: planets sweep out equal areas in equal time, and the square of their orbital periods equals a fixed constant times the cube of their semi-major axis. 
Subsequent observations of the planetary orbits showed that the long axis of the ellipse (the so-called "line of apsides") rotates gradually with time; this rotation is known as "apsidal precession". The apses of an orbit are the points at which the orbiting body is closest or furthest away from the attracting center; for planets orbiting the Sun, the apses correspond to the perihelion (closest) and aphelion (furthest). With the publication of his "Principia" roughly eighty years later (1687), Isaac Newton provided a physical theory that accounted for all three of Kepler's laws, a theory based on Newton's laws of motion and his law of universal gravitation. In particular, Newton proposed that the gravitational force between any two bodies was a central force "F"("r") that varied as the inverse square of the distance "r" between them. Arguing from his laws of motion, Newton showed that the orbit of any particle acted upon by one such force is always a conic section, specifically an ellipse if it does not go to infinity. However, this conclusion holds only when two bodies are present (the two-body problem); the motion of three bodies or more acting under their mutual gravitation (the "n"-body problem) remained unsolved for centuries after Newton, although solutions to a few special cases were discovered. Newton proposed that the orbits of planets about the Sun are largely elliptical because the Sun's gravitation is dominant; to first approximation, the presence of the other planets can be ignored. By analogy, the elliptical orbit of the Moon about the Earth was dominated by the Earth's gravity; to first approximation, the Sun's gravity and those of other bodies of the Solar System can be neglected. However, Newton stated that the gradual apsidal precession of the planetary and lunar orbits was due to the effects of these neglected interactions; in particular, he stated that the precession of the Moon's orbit was due to the perturbing effects of gravitational interactions with the Sun. Newton's theorem of revolving orbits was his first attempt to understand apsidal precession quantitatively. According to this theorem, the addition of a particular type of central force—the inverse-cube force—can produce a rotating orbit; the angular speed is multiplied by a factor "k", whereas the radial motion is left unchanged. However, this theorem is restricted to a specific type of force that may not be relevant; several perturbing inverse-square interactions (such as those of other planets) seem unlikely to sum exactly to an inverse-cube force. To make his theorem applicable to other types of forces, Newton found the best approximation of an arbitrary central force "F"("r") to an inverse-cube potential in the limit of nearly circular orbits, that is, elliptical orbits of low eccentricity, as is indeed true for most orbits in the Solar System. To find this approximation, Newton developed an infinite series that can be viewed as the forerunner of the Taylor expansion. This approximation allowed Newton to estimate the rate of precession for arbitrary central forces. Newton applied this approximation to test models of the force causing the apsidal precession of the Moon's orbit. However, the problem of the Moon's motion is dauntingly complex, and Newton never published an accurate gravitational model of the Moon's apsidal precession. After a more accurate model by Clairaut in 1747, analytical models of the Moon's motion were developed in the late 19th century by Hill, Brown, and Delaunay. 
However, Newton's theorem is more general than merely explaining apsidal precession. It describes the effects of adding an inverse-cube force to any central force "F"("r"), not only to inverse-square forces such as Newton's law of universal gravitation and Coulomb's law. Newton's theorem simplifies orbital problems in classical mechanics by eliminating inverse-cube forces from consideration. The radial and angular motions, "r"("t") and "θ"1("t"), can be calculated without the inverse-cube force; afterwards, its effect can be calculated by multiplying the angular speed of the particle formula_0 Mathematical statement. Consider a particle moving under an arbitrary central force "F"1("r") whose magnitude depends only on the distance "r" between the particle and a fixed center. Since the motion of a particle under a central force always lies in a plane, the position of the particle can be described by polar coordinates ("r", "θ"1), the radius and angle of the particle relative to the center of force (Figure 1). Both of these coordinates, "r"("t") and "θ"1("t"), change with time "t" as the particle moves. Imagine a second particle with the same mass "m" and with the same radial motion "r"("t"), but one whose angular speed is "k" times faster than that of the first particle. In other words, the azimuthal angles of the two particles are related by the equation "θ"2("t") = "k θ"1("t"). Newton showed that the motion of the second particle can be produced by adding an inverse-cube central force to whatever force "F"1("r") acts on the first particle formula_1 where "L"1 is the magnitude of the first particle's angular momentum, which is a constant of motion (conserved) for central forces. If "k"2 is greater than one, "F"2 − "F"1 is a negative number; thus, the added inverse-cube force is "attractive", as observed in the green planet of Figures 1–4 and 9. By contrast, if "k"2 is less than one, "F"2−"F"1 is a positive number; the added inverse-cube force is "repulsive", as observed in the green planet of Figures 5 and 10, and in the red planet of Figures 4 and 5. Alteration of the particle path. The addition of such an inverse-cube force also changes the "path" followed by the particle. The path of the particle ignores the time dependencies of the radial and angular motions, such as "r"("t") and "θ"1("t"); rather, it relates the radius and angle variables to one another. For this purpose, the angle variable is unrestricted and can increase indefinitely as the particle revolves around the central point multiple times. For example, if the particle revolves twice about the central point and returns to its starting position, its final angle is not the same as its initial angle; rather, it has increased by 2×360° = 720°. Formally, the angle variable is defined as the integral of the angular speed formula_2 A similar definition holds for "θ"2, the angle of the second particle. If the path of the first particle is described in the form "r" = "g"("θ"1), the path of the second particle is given by the function "r" = "g"("θ"2/"k"), since "θ"2 = "k θ"1. For example, let the path of the first particle be an ellipse formula_3 where "A" and "B" are constants; then, the path of the second particle is given by formula_4 Orbital precession. If "k" is close, but not equal, to one, the second orbit resembles the first, but revolves gradually about the center of force; this is known as orbital precession (Figure 3). 
If "k" is greater than one, the orbit precesses in the same direction as the orbit (Figure 3); if "k" is less than one, the orbit precesses in the opposite direction. Although the orbit in Figure 3 may seem to rotate uniformly, i.e., at a constant angular speed, this is true only for circular orbits. If the orbit rotates at an angular speed "Ω", the angular speed of the second particle is faster or slower than that of the first particle by "Ω"; in other words, the angular speeds would satisfy the equation "ω"2 = "ω"1 + Ω. However, Newton's theorem of revolving orbits states that the angular speeds are related by multiplication: "ω"2 = "kω"1, where "k" is a constant. Combining these two equations shows that the angular speed of the precession equals "Ω" = ("k" − 1)"ω"1. Hence, "Ω" is constant only if "ω"1 is constant. According to the conservation of angular momentum, "ω"1 changes with the radius "r" formula_5 where "m" and "L"1 are the first particle's mass and angular momentum, respectively, both of which are constant. Hence, "ω"1 is constant only if the radius "r" is constant, i.e., when the orbit is a circle. However, in that case, the orbit does not change as it precesses. Illustrative example: Cotes's spirals. The simplest illustration of Newton's theorem occurs when there is no initial force, i.e., "F"1("r") = 0. In this case, the first particle is stationary or travels in a straight line. If it travels in a straight line that does not pass through the origin (yellow line in Figure 6) the equation for such a line may be written in the polar coordinates ("r", "θ"1) as formula_6 where "θ"0 is the angle at which the distance is minimized (Figure 6). The distance "r" begins at infinity (when "θ"1 – "θ"0 = −90°), and decreases gradually until "θ"1 – "θ"0 = 0°, when the distance reaches a minimum, then gradually increases again to infinity at "θ"1 – "θ"0 = 90°. The minimum distance "b" is the impact parameter, which is defined as the length of the perpendicular from the fixed center to the line of motion. The same radial motion is possible when an inverse-cube central force is added. An inverse-cube central force "F"2("r") has the form formula_7 where the numerator μ may be positive (repulsive) or negative (attractive). If such an inverse-cube force is introduced, Newton's theorem says that the corresponding solutions have a shape called Cotes's spirals. These are curves defined by the equation formula_8 where the constant "k" equals formula_9 When the right-hand side of the equation is a positive real number, the solution corresponds to an epispiral. When the argument "θ"1 – "θ"0 equals ±90°×"k", the cosine goes to zero and the radius goes to infinity. Thus, when "k" is less than one, the range of allowed angles becomes small and the force is repulsive (red curve on right in Figure 7). On the other hand, when "k" is greater than one, the range of allowed angles increases, corresponding to an attractive force (green, cyan and blue curves on left in Figure 7); the orbit of the particle can even wrap around the center several times. The possible values of the parameter "k" may range from zero to infinity, which corresponds to values of μ ranging from negative infinity up to the positive upper limit, "L"12/"m". Thus, for all attractive inverse-cube forces (negative μ) there is a corresponding epispiral orbit, as for some repulsive ones (μ < "L"12/"m"), as illustrated in Figure 7. Stronger repulsive forces correspond to a faster linear motion. 
One of the other solution types is given in terms of the hyperbolic cosine: formula_10 where the constant λ satisfies formula_11 This form of Cotes's spirals corresponds to one of the two Poinsot's spirals (Figure 8). The possible values of λ range from zero to infinity, which corresponds to values of μ greater than the positive number "L"12/"m". Thus, Poinsot spiral motion only occurs for repulsive inverse-cube central forces, and applies in the case that "L" is not too large for the given μ. Taking the limit of "k" or λ going to zero yields the third form of a Cotes's spiral, the so-called "reciprocal spiral" or "hyperbolic spiral", as a solution formula_12 where "A" and "ε" are arbitrary constants. Such curves result when the strength μ of the repulsive force exactly balances the angular momentum-mass term formula_13 Closed orbits and inverse-cube central forces. Two types of central forces—those that increase linearly with distance, "F = Cr", such as Hooke's law, and inverse-square forces, "F" = "C"/"r"2, such as Newton's law of universal gravitation and Coulomb's law—have a very unusual property. A particle moving under either type of force always returns to its starting place with its initial velocity, provided that it lacks sufficient energy to move out to infinity. In other words, the path of a bound particle is always closed and its motion repeats indefinitely, no matter what its initial position or velocity. As shown by Bertrand's theorem, this property is not true for other types of forces; in general, a particle will not return to its starting point with the same velocity. However, Newton's theorem shows that an inverse-cubic force may be applied to a particle moving under a linear or inverse-square force such that its orbit remains closed, provided that "k" equals a rational number. (A number is called "rational" if it can be written as a fraction "m"/"n", where "m" and "n" are integers.) In such cases, the addition of the inverse-cubic force causes the particle to complete "m" rotations about the center of force in the same time that the original particle completes "n" rotations. This method for producing closed orbits does not violate Bertrand's theorem, because the added inverse-cubic force depends on the initial velocity of the particle. Harmonic and subharmonic orbits are special types of such closed orbits. A closed trajectory is called a "harmonic orbit" if "k" is an integer, i.e., if "n" = 1 in the formula "k" = "m"/"n". For example, if "k" = 3 (green planet in Figures 1 and 4, green orbit in Figure 9), the resulting orbit is the third harmonic of the original orbit. Conversely, the closed trajectory is called a "subharmonic orbit" if "k" is the inverse of an integer, i.e., if "m" = 1 in the formula "k" = "m"/"n". For example, if "k" = 1/3 (green planet in Figure 5, green orbit in Figure 10), the resulting orbit is called the third subharmonic of the original orbit. Although such orbits are unlikely to occur in nature, they are helpful for illustrating Newton's theorem. Limit of nearly circular orbits. In Proposition 45 of his "Principia", Newton applies his theorem of revolving orbits to develop a method for finding the force laws that govern the motions of planets. Johannes Kepler had noted that the orbits of most planets and the Moon seemed to be ellipses, and the long axis of those ellipses can be determined accurately from astronomical measurements. 
The long axis is defined as the line connecting the positions of minimum and maximum distances to the central point, i.e., the line connecting the two apses. For illustration, the long axis of the planet Mercury is defined as the line through its successive positions of perihelion and aphelion. Over time, the long axis of most orbiting bodies rotates gradually, generally no more than a few degrees per complete revolution, because of gravitational perturbations from other bodies, oblateness in the attracting body, general relativistic effects, and other effects. Newton's method uses this apsidal precession as a sensitive probe of the type of force being applied to the planets. Newton's theorem describes only the effects of adding an inverse-cube central force. However, Newton extends his theorem to an arbitrary central force "F"("r") by restricting his attention to orbits that are nearly circular, such as ellipses with low orbital eccentricity ("ε" ≤ 0.1), which is true of seven of the eight planetary orbits in the solar system. Newton also applied his theorem to the planet Mercury, which has an eccentricity "ε" of roughly 0.21, and suggested that it may pertain to Halley's comet, whose orbit has an eccentricity of roughly 0.97. A qualitative justification for this extrapolation of his method has been suggested by Valluri, Wilson and Harper. According to their argument, Newton considered the apsidal precession angle α (the angle between the vectors of successive minimum and maximum distance from the center) to be a smooth, continuous function of the orbital eccentricity ε. For the inverse-square force, α equals 180°; the vectors to the positions of minimum and maximum distances lie on the same line. If α is initially not 180° at low ε (quasi-circular orbits) then, in general, α will equal 180° only for isolated values of ε; a randomly chosen value of ε would be very unlikely to give α = 180°. Therefore, the observed slow rotation of the apsides of planetary orbits suggests that the force of gravity is an inverse-square law. Quantitative formula. To simplify the equations, Newton writes "F"("r") in terms of a new function "C"("r") formula_14 where "R" is the average radius of the nearly circular orbit. Newton expands "C"("r") in a series—now known as a Taylor expansion—in powers of the distance "r", one of the first appearances of such a series. By equating the resulting inverse-cube force term with the inverse-cube force for revolving orbits, Newton derives an equivalent angular scaling factor "k" for nearly circular orbits: formula_15 In other words, the application of an arbitrary central force "F"("r") to a nearly circular elliptical orbit can accelerate the angular motion by the factor "k" without affecting the radial motion significantly. If an elliptical orbit is stationary, the particle rotates about the center of force by 180° as it moves from one end of the long axis to the other (the two apses). Thus, the corresponding apsidal angle "α" for a general central force equals "k"×180°, using the general law "θ"2 = "k" "θ"1. Examples. Newton illustrates his formula with three examples. In the first two, the central force is a power law, "F"("r") = "r""n"−3, so "C"("r") is proportional to "r""n". The formula above indicates that the angular motion is multiplied by a factor "k" = 1/√"n", so that the apsidal angle "α" equals 180°/√"n". This angular scaling can be seen in the apsidal precession, i.e., in the gradual rotation of the long axis of the ellipse (Figure 3). 
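A short numerical illustration of this power-law formula (added for illustration; the chosen exponents are sample cases, not a reproduction of Newton's own calculations):

import math

# Apsidal angle alpha = 180 deg / sqrt(n) for a power-law force F(r) = r^(n-3).
# n = 1 is the inverse-square law, n = 4 corresponds to a linear (Hooke's-law)
# force, and n = 1 - 4/243 corresponds to the exponent 2 + 4/243 that Newton
# applied to the Moon (discussed later in this article).
for name, n in [("inverse-square force", 1.0),
                ("linear (Hooke's law) force", 4.0),
                ("Newton's lunar exponent 2 + 4/243", 1.0 - 4.0 / 243.0)]:
    k = 1.0 / math.sqrt(n)
    alpha = 180.0 * k
    print(f"{name}: k = {k:.5f}, apsidal angle = {alpha:.2f} deg")

# Apsidal advance per revolution for the lunar exponent, 2*(alpha - 180):
n = 1.0 - 4.0 / 243.0
print("lunar apsidal advance per revolution:",
      round(2.0 * (180.0 / math.sqrt(n) - 180.0), 2), "deg")

The three cases give apsidal angles of 180° (a stationary ellipse), 90° (a linear force) and roughly 181.5°; the last value corresponds to an apsidal advance of about 3° per revolution, matching the lunar precession discussed below.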
As noted above, the orbit as a whole rotates with a mean angular speed "Ω"=("k"−1)"ω", where "ω" equals the mean angular speed of the particle about the stationary ellipse. If the particle requires a time "T" to move from one apse to the other, this implies that, in the same time, the long axis will rotate by an angle "β" = Ω"T" = ("k" − 1)"ωT" = ("k" − 1)×180°. For an inverse-square law such as Newton's law of universal gravitation, where "n" equals 1, there is no angular scaling ("k" = 1), the apsidal angle "α" is 180°, and the elliptical orbit is stationary (Ω = "β" = 0). As a final illustration, Newton considers a sum of two power laws formula_16 which multiplies the angular speed by a factor formula_17 Newton applies both of these formulae (the power law and sum of two power laws) to examine the apsidal precession of the Moon's orbit. Precession of the Moon's orbit. The motion of the Moon can be measured accurately, and is noticeably more complex than that of the planets. The ancient Greek astronomers, Hipparchus and Ptolemy, had noted several periodic variations in the Moon's orbit, such as small oscillations in its orbital eccentricity and the inclination of its orbit to the plane of the ecliptic. These oscillations generally occur on a once-monthly or twice-monthly time-scale. The line of its apses precesses gradually with a period of roughly 8.85 years, while its line of nodes turns a full circle in roughly double that time, 18.6 years. This accounts for the roughly 18-year periodicity of eclipses, the so-called Saros cycle. However, both lines experience small fluctuations in their motion, again on the monthly time-scale. In 1673, Jeremiah Horrocks published a reasonably accurate model of the Moon's motion in which the Moon was assumed to follow a precessing elliptical orbit. A sufficiently accurate and simple method for predicting the Moon's motion would have solved the navigational problem of determining a ship's longitude; in Newton's time, the goal was to predict the Moon's position to 2' (two arc-minutes), which would correspond to a 1° error in terrestrial longitude. Horrocks' model predicted the lunar position with errors no more than 10 arc-minutes; for comparison, the diameter of the Moon is roughly 30 arc-minutes. Newton used his theorem of revolving orbits in two ways to account for the apsidal precession of the Moon. First, he showed that the Moon's observed apsidal precession could be accounted for by changing the force law of gravity from an inverse-square law to a power law in which the exponent was 2 + 4/243 (roughly 2.0165) formula_18 In 1894, Asaph Hall adopted this approach of modifying the exponent in the inverse-square law slightly to explain an anomalous orbital precession of the planet Mercury, which had been observed in 1859 by Urbain Le Verrier. Ironically, Hall's theory was ruled out by careful astronomical observations of the Moon. The currently accepted explanation for this precession involves the theory of general relativity, which (to first approximation) adds an inverse-quartic force, i.e., one that varies as the inverse fourth power of distance. As a second approach to explaining the Moon's precession, Newton suggested that the perturbing influence of the Sun on the Moon's motion might be approximately equivalent to an additional linear force formula_19 The first term corresponds to the gravitational attraction between the Moon and the Earth, where "r" is the Moon's distance from the Earth. 
The second term, so Newton reasoned, might represent the average perturbing force of the Sun's gravity on the Earth-Moon system. Such a force law could also result if the Earth were surrounded by a spherical dust cloud of uniform density. Using the formula for "k" for nearly circular orbits, and estimates of "A" and "B", Newton showed that this force law could not account for the Moon's precession, since the predicted apsidal angle "α" (≈ 180.76°) fell short of the observed value (≈ 181.525°). For every revolution, the long axis would rotate 1.5°, roughly half of the observed 3.0°. Generalization. Isaac Newton first published his theorem in 1687, as Propositions 43–45 of Book I of his "Philosophiæ Naturalis Principia Mathematica". However, as astrophysicist Subrahmanyan Chandrasekhar noted in his 1995 commentary on Newton's "Principia", the theorem remained largely unknown and undeveloped for over three centuries. The first generalization of Newton's theorem was discovered by Mahomed and Vawda in 2000. As Newton did, they assumed that the angular motion of the second particle was "k" times faster than that of the first particle, "θ"2 = "k" "θ"1. In contrast to Newton, however, Mahomed and Vawda did not require that the radial motion of the two particles be the same, "r"1 = "r"2. Rather, they required that the inverse radii be related by a linear equation formula_20 This transformation of the variables changes the path of the particle. If the path of the first particle is written "r"1 = "g"(θ1), the second particle's path can be written as formula_21 If the motion of the first particle is produced by a central force "F"1("r"), Mahomed and Vawda showed that the motion of the second particle can be produced by the following force formula_22 According to this equation, the second force "F"2("r") is obtained by scaling the first force and changing its argument, as well as by adding inverse-square and inverse-cube central forces. For comparison, Newton's theorem of revolving orbits corresponds to the case "a" = 1 and "b" = 0, so that "r"1 = "r"2. In this case, the original force is not scaled, and its argument is unchanged; the inverse-cube force is added, but the inverse-square term is not. Also, the path of the second particle is "r"2 = "g"(θ2/"k"), consistent with the formula given above. Derivations. Newton's derivation. Newton's derivation is found in Section IX of his "Principia", specifically Propositions 43–45. His derivations of these Propositions are based largely on geometry. "It is required to make a body move in a curve that revolves about the center of force in the same manner as another body in the same curve at rest." Newton's derivation of Proposition 43 depends on his Proposition 2, derived earlier in the "Principia". Proposition 2 provides a geometrical test for whether the net force acting on a point mass (a particle) is a central force. Newton showed that a force is central if and only if the particle sweeps out equal areas in equal times as measured from the center. Newton's derivation begins with a particle moving under an arbitrary central force "F"1("r"); the motion of this particle under this force is described by its radius "r"("t") from the center as a function of time, and also its angle θ1("t"). In an infinitesimal time "dt", the particle sweeps out an approximate right triangle whose area is formula_23 Since the force acting on the particle is assumed to be a central force, the particle sweeps out equal areas in equal times, by Newton's Proposition 2. 
Expressed another way, the "rate" of sweeping out area is constant formula_24 This constant "areal velocity" can be calculated as follows. At the periapsis and apoapsis, the positions of closest and furthest distance from the attracting center, the velocity and radius vectors are perpendicular; therefore, the angular momentum "L1" per mass "m" of the particle (written as "h"1) can be related to the rate of sweeping out areas formula_25 Now consider a second particle whose orbit is identical in its radius, but whose angular variation is multiplied by a constant factor "k" formula_26 The areal velocity of the second particle equals that of the first particle multiplied by the same factor "k" formula_27 Since "k" is a constant, the second particle also sweeps out equal areas in equal times. Therefore, by Proposition 2, the second particle is also acted upon by a central force "F"2("r"). This is the conclusion of Proposition 43. "The difference of the forces, by which two bodies may be made to move equally, one in a fixed, the other in the same orbit revolving, varies inversely as the cube of their common altitudes." To find the magnitude of "F"2("r") from the original central force "F"1("r"), Newton calculated their difference "F"2("r") − "F"1("r") using geometry and the definition of centripetal acceleration. In Proposition 44 of his "Principia", he showed that the difference is proportional to the inverse cube of the radius, specifically by the formula given above, which Newton writes in terms of the two constant areal velocities, "h"1 and "h"2 formula_28 "To find the motion of the apsides in orbits approaching very near to circles." In this Proposition, Newton derives the consequences of his theorem of revolving orbits in the limit of nearly circular orbits. This approximation is generally valid for planetary orbits and the orbit of the Moon about the Earth. This approximation also allows Newton to consider a great variety of central force laws, not merely inverse-square and inverse-cube force laws. Modern derivation. Modern derivations of Newton's theorem have been published by Whittaker (1937) and Chandrasekhar (1995). By assumption, the second angular speed is "k" times faster than the first formula_29 Since the two radii have the same behavior with time, "r"("t"), the conserved angular momenta are related by the same factor "k" formula_30 The equation of motion for a radius "r" of a particle of mass "m" moving in a central potential "V"("r") is given by Lagrange's equations formula_31 Applying the general formula to the two orbits yields the equation formula_32 which can be re-arranged to the form formula_33 This equation relating the two radial forces can be understood qualitatively as follows. The difference in angular speeds (or equivalently, in angular momenta) causes a difference in the centripetal force requirement; to offset this, the radial force must be altered with an inverse-cube force. Newton's theorem can be expressed equivalently in terms of potential energy, which is defined for central forces formula_34 The radial force equation can be written in terms of the two potential energies formula_35 Integrating with respect to the distance "r", Newton's theorem states that a "k"-fold change in angular speed results from adding an inverse-square potential energy to any given potential energy "V"1("r") formula_36 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\omega_{2} = \\frac{d\\theta_{2}}{dt} = k \\frac{d\\theta_{1}}{dt} = k \\omega_{1}.\n" }, { "math_id": 1, "text": "\nF_2(r) - F_1(r) = \\frac{L_1^2}{mr^3} \\left( 1 - k^2 \\right)\n" }, { "math_id": 2, "text": "\n\\theta_1 \\equiv \\int \\omega_1(t)\\, dt.\n" }, { "math_id": 3, "text": "\n\\frac{1}{r} = A + B \\cos \\theta_1\n" }, { "math_id": 4, "text": "\n\\frac{1}{r} = A + B \\cos \\left( \\frac{\\theta_2}{k} \\right).\n" }, { "math_id": 5, "text": "\n\\omega_{1} = \\frac{L_{1}}{m r^{2}};\n" }, { "math_id": 6, "text": "\n\\frac{1}{r} = \\frac{1}{b} \\cos\\ (\\theta_1 - \\theta_0)\n" }, { "math_id": 7, "text": "\nF_2(r) = \\frac{\\mu}{r^3}\n" }, { "math_id": 8, "text": "\n\\frac{1}{r} = \\frac{1}{b} \\cos\\ \\left(\\frac{\\theta_2 - \\theta_0}{k} \\right)\n" }, { "math_id": 9, "text": "\nk^2 = 1 - \\frac{m \\mu}{L_1^2}\n" }, { "math_id": 10, "text": "\n\\frac{1}{r} = \\frac{1}{b} \\cosh\\ \\left(\\frac{\\theta_0 - \\theta_2}{\\lambda} \\right)\n" }, { "math_id": 11, "text": "\n\\lambda^2 = \\frac{m \\mu}{L_1^{2}} - 1\n" }, { "math_id": 12, "text": "\n\\frac{1}{r} = A \\theta_2 + \\varepsilon\n" }, { "math_id": 13, "text": "\n\\mu = \\frac{L_{1}^{2}}{m}\n" }, { "math_id": 14, "text": "\nF(r) = \\frac{C(r)}{R r^3}\n" }, { "math_id": 15, "text": "\n\\frac{1}{k^{2}} = \\left( \\frac{R}{C} \\right) \\left. \\frac{dC}{dr} \\right|_{r=R}\n" }, { "math_id": 16, "text": "\nC(r) \\propto a r^m + b r^n\n" }, { "math_id": 17, "text": "\nk = \\sqrt{\\frac{a + b}{am + bn}}\n" }, { "math_id": 18, "text": "\nF(r) = - \\frac{GMm}{r^{2 + 4/243}}\n" }, { "math_id": 19, "text": "\nF(r) = \\frac{A}{r^{2}} + B r\n" }, { "math_id": 20, "text": "\n\\frac{1}{r_{2}(t)} = \\frac{a}{r_{1}(t)} + b\n" }, { "math_id": 21, "text": "\n\\frac{a r_2}{1 - b r_2} = g\\left( \\frac{\\theta_2}{k} \\right)\n" }, { "math_id": 22, "text": "\nF_2(r_2) = \\frac{a^3}{\\left( 1 - b r_2 \\right)^2} F_{1}\\left( \\frac{a r_2}{1 - b r_2} \\right) +\n\\frac{L^2}{mr^3} \\left( 1 - k^2 \\right) - \\frac{bL^2}{mr^2}\n" }, { "math_id": 23, "text": "\ndA_1 = \\frac{1}{2} r^2 d\\theta_1\n" }, { "math_id": 24, "text": "\n\\frac{dA_1}{dt} = \\frac{1}{2} r^2 \\frac{d\\theta_1}{dt} = \\mathrm{constant}\n" }, { "math_id": 25, "text": "\nh_1 = \\frac{L_1}{m} = r v_1 = r^2 \\frac{d\\theta_1}{dt} = 2 \\frac{dA_1}{dt}\n" }, { "math_id": 26, "text": "\n\\theta_2(t) = k \\theta_1(t)\\,\\!\n" }, { "math_id": 27, "text": "\nh_2 = 2 \\frac{dA_2}{dt} = r^2 \\frac{d\\theta_2}{dt} =\nk r^2 \\frac{d\\theta_1}{dt} = 2 k \\frac{dA_1}{dt} = k h_1\n" }, { "math_id": 28, "text": "\nF_2(r) - F_1(r) = m \\frac{h_1^2 - h_2^2}{r^3}\n" }, { "math_id": 29, "text": "\n\\omega_{2} = \\frac{d\\theta_{2}}{dt} = k \\frac{d\\theta_{1}}{dt} = k \\omega_{1}\n" }, { "math_id": 30, "text": "\nL_{2} = m r^{2} \\omega_{2} = m r^{2} k \\omega_{1} = k L_{1} \\,\\!\n" }, { "math_id": 31, "text": "\nm\\frac{d^2 r}{dt^2} - mr \\omega^2 =\nm\\frac{d^2 r}{dt^2} - \\frac{L^2}{mr^3} = F(r)\n" }, { "math_id": 32, "text": "\nm\\frac{d^2 r}{dt^2} = F_1(r) + \\frac{L_1^2}{mr^{3}} = F_2(r) + \\frac{L_2^2}{mr^3} = F_2(r) + \\frac{k^2 L_1^2}{mr^3}\n" }, { "math_id": 33, "text": "\nF_{2}(r) = F_1(r) + \\frac{L_1^2}{mr^3} \\left( 1 - k^2 \\right)\n" }, { "math_id": 34, "text": " F(r) = -\\frac{dV}{dr} " }, { "math_id": 35, "text": "\n- \\frac{dV_2}{dr} = - \\frac{dV_1}{dr} + \\frac{L_1^2}{mr^3} \\left( 1 - k^2 \\right)\n" }, { "math_id": 36, "text": "\nV_2(r) = V_1(r) + \\frac{L_1^2}{2mr^2} \\left( 1 - k^2 \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=12191272
12193437
Infinitesimal generator (stochastic processes)
In mathematics — specifically, in stochastic analysis — the infinitesimal generator of a Feller process (i.e. a continuous-time Markov process satisfying certain regularity conditions) is a Fourier multiplier operator that encodes a great deal of information about the process. The generator is used in evolution equations such as the Kolmogorov backward equation, which describes the evolution of statistics of the process; its "L"2 Hermitian adjoint is used in evolution equations such as the Fokker–Planck equation, also known as the Kolmogorov forward equation, which describes the evolution of the probability density functions of the process. In this notation, the Kolmogorov forward equation is just formula_0, where formula_1 is the probability density function, and formula_2 is the adjoint of the infinitesimal generator of the underlying stochastic process. The Klein–Kramers equation is a special case of that. Definition. General case. For a Feller process formula_3 with Feller semigroup formula_4 and state space formula_5 we define the generator formula_6 by formula_7 formula_8 Here formula_9 denotes the Banach space of continuous functions on formula_5 vanishing at infinity, equipped with the supremum norm, and formula_10. In general, it is not easy to describe the domain of the Feller generator. However, the Feller generator is always closed and densely defined. If formula_11 is formula_12-valued and formula_13 contains the test functions (compactly supported smooth functions) then formula_14 where formula_15, and formula_16 is a Lévy triplet for fixed formula_17. Lévy processes. The generator of a Lévy semigroup is of the form formula_18 where formula_19 is positive semidefinite and formula_20 is a Lévy measure satisfying formula_21 and formula_22 for some formula_23 such that formula_24 is bounded. If we define formula_25 for formula_26 then the generator can be written as formula_27 where formula_28 denotes the Fourier transform. So the generator of a Lévy process (or semigroup) is a Fourier multiplier operator with symbol formula_29. Stochastic differential equations driven by Lévy processes. Let formula_30 be a Lévy process with symbol formula_31 (see above). Let formula_32 be locally Lipschitz and bounded. The solution of the SDE formula_33 exists for each deterministic initial condition formula_17 and yields a Feller process with symbol formula_34 Note that in general, the solution of an SDE driven by a Feller process which is not Lévy might fail to be Feller or even Markovian. As a simple example consider formula_35 with a Brownian motion driving noise. If we assume formula_36 are Lipschitz and of linear growth, then for each deterministic initial condition there exists a unique solution, which is Feller with symbol formula_37 Mean first passage time. The mean first passage time formula_38 satisfies formula_39. This can be used to calculate, for example, the time it takes for a Brownian motion particle in a box to hit the boundary of the box, or the time it takes for a Brownian motion particle in a potential well to escape the well. Under certain assumptions, the escape time satisfies the Arrhenius equation. Generators of some common processes. For finite-state continuous time Markov chains the generator may be expressed as a transition rate matrix. The general n-dimensional diffusion process formula_40 has generator formula_41 where formula_42 is the diffusion matrix, formula_43 is the Hessian of the function formula_44, and formula_45 is the matrix trace. 
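A quick numerical sanity check of this generator formula (added for illustration, with arbitrary sample parameters) can be made with the one-dimensional geometric Brownian motion dX_t = rX_t dt + αX_t dB_t, one of the standard special cases referred to below, whose generator applied to f(x) = x² is 2rx² + α²x²:

import numpy as np

# Compare the analytic generator value A f(x) = 2*r*x^2 + a^2*x^2 for
# geometric Brownian motion dX = r*X dt + a*X dB and f(x) = x^2 with the
# Monte Carlo estimate (E[f(X_h)] - f(x)) / h over one small Euler step h.
# All numerical values are illustrative.
rng = np.random.default_rng(0)
r, a, x = 0.05, 0.20, 1.0
h, n_samples = 1e-2, 2_000_000

analytic = 2 * r * x**2 + a**2 * x**2

z = rng.standard_normal(n_samples)
x_h = x + r * x * h + a * x * np.sqrt(h) * z       # one Euler-Maruyama step
estimate = (np.mean(x_h**2) - x**2) / h

print(f"analytic A f(x) = {analytic:.4f}, Monte Carlo estimate = {estimate:.4f}")

Both numbers agree to within the Monte Carlo and time-discretisation error, illustrating the defining limit of the generator.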
Its adjoint operator is formula_46 The following are commonly used special cases for the general n-dimensional diffusion process. 
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\partial_t \\rho = \\mathcal A^* \\rho" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "\\mathcal A^*" }, { "math_id": 3, "text": "(X_t)_{t \\geq 0}" }, { "math_id": 4, "text": "T=(T_t)_{t\\geq 0}" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "(A,D(A))" }, { "math_id": 7, "text": "D(A) = \\left\\{f\\in C_0(E): \\lim_{t\\downarrow 0} \\frac{T_t f-f}{t} \\text{ exists as uniform limit}\\right\\}," }, { "math_id": 8, "text": "A f = \\lim_{t \\downarrow 0} \\frac{T_t f-f}{t} , ~~ \\text{ for any } f\\in D(A)." }, { "math_id": 9, "text": "C_{0}(E)" }, { "math_id": 10, "text": "T_t f(x) = \\mathbb{E}^x f(X_t)=\\mathbb{E}(f(X_t)|X_0=x)" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "\\mathbb{R}^d" }, { "math_id": 13, "text": "D(A)" }, { "math_id": 14, "text": "A f(x) = - c(x) f(x) + l (x) \\cdot \\nabla f(x) + \\frac{1}{2} \\text{div} Q(x) \\nabla f(x) + \\int_{\\mathbb{R}^d \\setminus{\\{0\\}}} \\left( f(x+y)-f(x)-\\nabla f(x) \\cdot y \\chi(|y|) \\right) N(x,dy)," }, { "math_id": 15, "text": "c(x) \\geq 0" }, { "math_id": 16, "text": "(l(x), Q(x),N(x,\\cdot))" }, { "math_id": 17, "text": "x \\in \\mathbb{R}^d" }, { "math_id": 18, "text": "A f(x)= l \\cdot \\nabla f(x) + \\frac{1}{2} \\text{div} Q \\nabla f(x) + \\int_{\\mathbb{R}^d \\setminus{\\{0\\}}} \\left( f(x+y)-f(x)-\\nabla f(x) \\cdot y \\chi(|y|) \\right) \\nu(dy)" }, { "math_id": 19, "text": "l \\in \\mathbb{R}^d, Q\\in \\mathbb{R}^{d\\times d} " }, { "math_id": 20, "text": "\\nu " }, { "math_id": 21, "text": "\\int_{\\mathbb{R}^d\\setminus \\{0\\}} \\min(|y|^2,1) \\nu(dy) < \\infty" }, { "math_id": 22, "text": "0 \\leq 1-\\chi(s) \\leq \\kappa \\min(s,1)" }, { "math_id": 23, "text": "\\kappa >0" }, { "math_id": 24, "text": "s \\chi(s)" }, { "math_id": 25, "text": "\\psi(\\xi)=\\psi(0)-i l \\cdot \\xi + \\frac{1}{2} \\xi \\cdot Q \\xi + \\int_{\\mathbb{R}^d \\setminus \\{0\\}} (1-e^{i y \\cdot \\xi}+i\\xi \\cdot y \\chi(|y|)) \\nu(dy )" }, { "math_id": 26, "text": "\\psi(0) \\geq 0" }, { "math_id": 27, "text": "A f (x) = - \\int e^{i x \\cdot \\xi} \\psi (\\xi) \\hat{f}(\\xi) d \\xi" }, { "math_id": 28, "text": "\\hat{f}" }, { "math_id": 29, "text": "-\\psi" }, { "math_id": 30, "text": "L" }, { "math_id": 31, "text": "\\psi" }, { "math_id": 32, "text": "\\Phi" }, { "math_id": 33, "text": "d X_t = \\Phi(X_{t-}) d L_t" }, { "math_id": 34, "text": "q(x,\\xi)=\\psi(\\Phi^\\top(x)\\xi)." }, { "math_id": 35, "text": "d X_t = l(X_t) dt+ \\sigma(X_t) dW_t" }, { "math_id": 36, "text": "l,\\sigma" }, { "math_id": 37, "text": "q(x,\\xi)=- i l(x)\\cdot \\xi + \\frac{1}{2} \\xi Q(x)\\xi." 
}, { "math_id": 38, "text": "T_1" }, { "math_id": 39, "text": "\\mathcal A T_1 = -1" }, { "math_id": 40, "text": "dX_t = \\mu(X_t, t) \\,dt + \\sigma(X_t, t) \\,dW_t" }, { "math_id": 41, "text": "\\mathcal{A}f = (\\nabla f)^T \\mu + tr( (\\nabla^2 f) D)" }, { "math_id": 42, "text": "D = \\frac 12 \\sigma\\sigma^T" }, { "math_id": 43, "text": "\\nabla^2 f" }, { "math_id": 44, "text": "f" }, { "math_id": 45, "text": "tr" }, { "math_id": 46, "text": "\\mathcal{A}^*f = -\\sum_i \\partial_i (f \\mu_i) + \\sum_{ij} \\partial_{ij} (fD_{ij})" }, { "math_id": 47, "text": "\\mathbb{R}^{n}" }, { "math_id": 48, "text": "dX_{t} = dB_{t}" }, { "math_id": 49, "text": "{1\\over{2}}\\Delta" }, { "math_id": 50, "text": "\\Delta" }, { "math_id": 51, "text": "Y" }, { "math_id": 52, "text": "\\mathrm{d} Y_{t} = { \\mathrm{d} t \\choose \\mathrm{d} B_{t} }" }, { "math_id": 53, "text": "B" }, { "math_id": 54, "text": "\\mathcal{A}f(t, x) = \\frac{\\partial f}{\\partial t} (t, x) + \\frac1{2} \\frac{\\partial^{2} f}{\\partial x^{2}} (t, x)" }, { "math_id": 55, "text": "\\mathbb{R}" }, { "math_id": 56, "text": "dX_{t} = \\theta(\\mu-X_{t})dt + \\sigma dB_{t}" }, { "math_id": 57, "text": "\\mathcal{A} f(x) = \\theta(\\mu - x) f'(x) + \\frac{\\sigma^{2}}{2} f''(x)" }, { "math_id": 58, "text": "\\mathcal{A} f(t, x) = \\frac{\\partial f}{\\partial t} (t, x) + \\theta(\\mu - x) \\frac{\\partial f}{\\partial x} (t, x) + \\frac{\\sigma^{2}}{2} \\frac{\\partial^{2} f}{\\partial x^{2}} (t, x)" }, { "math_id": 59, "text": "dX_{t} = rX_{t}dt + \\alpha X_{t}dB_{t}" }, { "math_id": 60, "text": "\\mathcal{A} f(x) = r x f'(x) + \\frac1{2} \\alpha^{2} x^{2} f''(x)" } ]
https://en.wikipedia.org/wiki?curid=12193437
12195613
Stochastic processes and boundary value problems
In mathematics, some boundary value problems can be solved using the methods of stochastic analysis. Perhaps the most celebrated example is Shizuo Kakutani's 1944 solution of the Dirichlet problem for the Laplace operator using Brownian motion. However, it turns out that for a large class of semi-elliptic second-order partial differential equations the associated Dirichlet boundary value problem can be solved using an Itō process that solves an associated stochastic differential equation. Introduction: Kakutani's solution to the classical Dirichlet problem. Let formula_0 be a domain (an open and connected set) in formula_1. Let formula_2 be the Laplace operator, let formula_3 be a bounded function on the boundary formula_4, and consider the problem: formula_5 It can be shown that if a solution formula_6 exists, then formula_7 is the expected value of formula_8 at the (random) first exit point from formula_0 for a canonical Brownian motion starting at formula_9. See theorem 3 in Kakutani 1944, p. 710. The Dirichlet–Poisson problem. Let formula_0 be a domain in formula_1 and let formula_10 be a semi-elliptic differential operator on formula_11 of the form: formula_12 where the coefficients "formula_13" and "formula_14" are continuous functions and all the eigenvalues of the matrix "formula_15" are non-negative. Let "formula_16" and "formula_17". Consider the Poisson problem: formula_18 The idea of the stochastic method for solving this problem is as follows. First, one finds an Itō diffusion formula_19 whose infinitesimal generator formula_20 coincides with formula_10 on compactly-supported formula_21 functions formula_22. For example, formula_19 can be taken to be the solution to the stochastic differential equation: formula_23 where formula_24 is "n"-dimensional Brownian motion, "formula_25" has components "formula_13" as above, and the matrix field "formula_26" is chosen so that: formula_27 For a point formula_28, let formula_29 denote the law of formula_19 given initial datum formula_30, and let formula_31denote expectation with respect to formula_29. Let "formula_32" denote the first exit time of formula_19 from formula_0. In this notation, the candidate solution for (P1) is: formula_33 provided that formula_3 is a bounded function and that: formula_34 It turns out that one further condition is required: formula_35 For all formula_9, the process formula_19 starting at formula_9 almost surely leaves formula_0 in finite time. Under this assumption, the candidate solution above reduces to: formula_36 and solves (P1) in the sense that if formula_37 denotes the characteristic operator for formula_19 (which agrees with formula_20 on formula_21 functions), then: formula_38 Moreover, if formula_39 satisfies (P2) and there exists a constant formula_40 such that, for all formula_41: formula_42 then formula_43.
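The probabilistic representation above lends itself to direct simulation. The following Python sketch (an illustration added here, with an arbitrary choice of domain, boundary data, step size and sample count) estimates the solution of the Laplace Dirichlet problem on the unit disk by running Brownian paths until they exit; for boundary data g(x, y) = x the harmonic solution is u(x, y) = x, so the estimate at the starting point (0.3, 0.4) should be close to 0.3:

import numpy as np

# Monte Carlo version of Kakutani's formula u(x) = E^x[ g(B_tau) ] for the
# Dirichlet problem on the unit disk in R^2.  Paths are advanced with small
# Gaussian increments until they leave the disk, then evaluated on the boundary.
rng = np.random.default_rng(1)
dt, n_paths = 1e-3, 2000
start = np.array([0.3, 0.4])

def g(p):                      # boundary data; its harmonic extension is u(x, y) = x
    return p[0]

values = []
for _ in range(n_paths):
    p = start.copy()
    while p @ p < 1.0:                          # run until the path exits the disk
        p += np.sqrt(dt) * rng.standard_normal(2)
    p /= np.linalg.norm(p)                      # project the small overshoot back onto the boundary
    values.append(g(p))

print("Monte Carlo estimate:", round(float(np.mean(values)), 3), "  exact value:", start[0])

The discretisation of the exit position introduces a small bias that shrinks with the step size; a finer step or a walk-on-spheres scheme would reduce it.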
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "\\mathbb{R}^{n}" }, { "math_id": 2, "text": "\\Delta" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\partial D" }, { "math_id": 5, "text": "\\begin{cases} - \\Delta u(x) = 0, & x \\in D \\\\ \\displaystyle{\\lim_{y \\to x} u(y)} = g(x), & x \\in \\partial D \\end{cases}" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "u(x)" }, { "math_id": 8, "text": "g(x)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "L" }, { "math_id": 11, "text": "C^{2}(\\mathbb{R}^{n};\\mathbb{R})" }, { "math_id": 12, "text": "L = \\sum_{i = 1}^{n} b_{i} (x) \\frac{\\partial}{\\partial x_{i}} + \\sum_{i, j = 1}^{n} a_{ij} (x) \\frac{\\partial^{2}}{\\partial x_{i} \\, \\partial x_{j}}" }, { "math_id": 13, "text": "b_{i}" }, { "math_id": 14, "text": "a_{ij}" }, { "math_id": 15, "text": "\\alpha(x) = a_{ij}(x)" }, { "math_id": 16, "text": "f\\in C(D;\\mathbb{R})" }, { "math_id": 17, "text": "g\\in C(\\partial D;\\mathbb{R})" }, { "math_id": 18, "text": "\\begin{cases} - L u(x) = f(x), & x \\in D \\\\ \\displaystyle{\\lim_{y \\to x} u(y)} = g(x), & x \\in \\partial D \\end{cases} \\quad \\mbox{(P1)}" }, { "math_id": 19, "text": "X" }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "C^{2}" }, { "math_id": 22, "text": "f:\\mathbb{R}^{n}\\rightarrow \\mathbb{R}" }, { "math_id": 23, "text": "\\mathrm{d} X_{t} = b(X_{t}) \\, \\mathrm{d} t + \\sigma (X_{t}) \\, \\mathrm{d} B_{t}" }, { "math_id": 24, "text": "B" }, { "math_id": 25, "text": "b" }, { "math_id": 26, "text": "\\sigma" }, { "math_id": 27, "text": "\\frac1{2} \\sigma (x) \\sigma(x)^{\\top} = a(x), \\quad \\forall x \\in\\mathbb{R}^{n}" }, { "math_id": 28, "text": "x\\in\\mathbb{R}^{n}" }, { "math_id": 29, "text": "\\mathbb{P}^{x}" }, { "math_id": 30, "text": "X_{0} = x" }, { "math_id": 31, "text": "\\mathbb{E}^{x}" }, { "math_id": 32, "text": "\\tau_{D}" }, { "math_id": 33, "text": "u(x) = \\mathbb{E}^{x} \\left[ g \\big( X_{\\tau_{D}} \\big) \\cdot \\chi_{\\{ \\tau_{D} < + \\infty \\}} \\right] + \\mathbb{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} f(X_{t}) \\, \\mathrm{d} t \\right]" }, { "math_id": 34, "text": "\\mathbb{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} \\big| f(X_{t}) \\big| \\, \\mathrm{d} t \\right] < + \\infty" }, { "math_id": 35, "text": "\\mathbb{P}^{x} \\big( \\tau_{D} < \\infty \\big) = 1, \\quad \\forall x \\in D" }, { "math_id": 36, "text": "u(x) = \\mathbb{E}^{x} \\left[ g \\big( X_{\\tau_{D}} \\big) \\right] + \\mathbb{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} f(X_{t}) \\, \\mathrm{d} t \\right]" }, { "math_id": 37, "text": "\\mathcal{A}" }, { "math_id": 38, "text": "\\begin{cases} - \\mathcal{A} u(x) = f(x), & x \\in D \\\\ \\displaystyle{\\lim_{t \\uparrow \\tau_{D}} u(X_{t})} = g \\big( X_{\\tau_{D}} \\big), & \\mathbb{P}^{x} \\mbox{-a.s.,} \\; \\forall x \\in D \\end{cases} \\quad \\mbox{(P2)}" }, { "math_id": 39, "text": "v \\in C^{2}(D;\\mathbb{R})" }, { "math_id": 40, "text": "C" }, { "math_id": 41, "text": "x\\in D" }, { "math_id": 42, "text": "| v(x) | \\leq C \\left( 1 + \\mathbb{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} \\big| g(X_{s}) \\big| \\, \\mathrm{d} s \\right] \\right)" }, { "math_id": 43, "text": "v=u" } ]
https://en.wikipedia.org/wiki?curid=12195613
12200213
Hajós's theorem
In group theory, Hajós's theorem states that if a finite abelian group is expressed as the Cartesian product of simplexes, that is, sets of the form formula_0 where formula_1 is the identity element, then at least one of the factors is a subgroup. The theorem was proved by the Hungarian mathematician György Hajós in 1941 using group rings. Rédei later proved the statement when the factors are only required to contain the identity element and be of prime cardinality. Rédei's proof of Hajós's theorem was simplified by Tibor Szele. An equivalent statement on homogeneous linear forms was originally conjectured by Hermann Minkowski. A consequence is Minkowski's conjecture on lattice tilings, which says that in any lattice tiling of space by cubes, there are two cubes that meet face to face. Keller's conjecture is the same conjecture for non-lattice tilings, which turns out to be false in high dimensions.
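The theorem can be checked by brute force for small groups. The following Python sketch (an illustration, with the cyclic group Z_8 chosen arbitrarily) enumerates all ways of writing Z_8, in additive notation, as a sum A + B of two simplexes in which every element is represented exactly once, and verifies that at least one factor is always a subgroup:

from itertools import product

# Brute-force check of Hajos's theorem for Z_8 (written additively, so a
# simplex is a set {0, a, 2a, ..., (s-1)a} mod 8).
n = 8

def simplex(a, s):
    return [(i * a) % n for i in range(s)]

def is_subgroup(S):
    S = set(S)
    return all((x + y) % n in S for x in S for y in S)

found = 0
for s in (1, 2, 4, 8):
    t = n // s
    for a, b in product(range(n), repeat=2):
        A, B = simplex(a, s), simplex(b, t)
        if len(set(A)) != s or len(set(B)) != t:
            continue                              # repeated elements: not a simplex of that size
        if len({(x + y) % n for x in A for y in B}) == n:
            found += 1
            assert is_subgroup(A) or is_subgroup(B)

print(f"checked {found} simplex factorizations of Z_8; each one has a subgroup factor")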
[ { "math_id": 0, "text": "\\{e,a,a^2,\\dots,a^{s-1}\\}" }, { "math_id": 1, "text": "e" } ]
https://en.wikipedia.org/wiki?curid=12200213
12200818
Dean number
Dimensionless group in fluid mechanics The Dean number ("De") is a dimensionless group in fluid mechanics, which occurs in the study of flow in curved pipes and channels. It is named after the British scientist W. R. Dean, who was the first to provide a theoretical solution of the fluid motion through curved pipes for laminar flow by using a perturbation procedure from a Poiseuille flow in a straight pipe to a flow in a pipe with very small curvature. Physical Context. If a fluid is moving along a straight pipe that after some point becomes curved, then the flow entering a curved portion develops a centrifugal force in an asymmetrical geometry. Such asymmetricity affects the parabolic velocity profile and causes a shift in the location of the maximum velocity compared to a straight pipe. Therefore, the maximum velocity shifts from the centerline towards the concave outer wall and forms an asymmetric velocity profile. There will be an adverse pressure gradient generated from the curvature with an increase in pressure, therefore a decrease in velocity close to the convex wall, and the contrary occurring towards the concave outer wall of the pipe. This gives rise to a secondary motion superposed on the primary flow, with the fluid in the centre of the pipe being swept towards the outer side of the bend and the fluid near the pipe wall will return towards the inside of the bend. This secondary motion is expected to appear as a pair of counter-rotating cells, which are called Dean vortices. Definition. The Dean number is typically denoted by "De" (or "Dn"). For a flow in a pipe or tube it is defined as: formula_0 where The Dean number is therefore the product of the Reynolds number (based on axial flow formula_3 through a pipe of diameter formula_4) and the square root of the curvature ratio. Turbulence transition. The flow is completely unidirectional for low Dean numbers (De &lt; 40~60). As the Dean number increases between 40~60 to 64~75, some wavy perturbations can be observed in the cross-section, which evidences some secondary flow. At higher Dean numbers than that (De &gt; 64~75) the pair of Dean vortices becomes stable, indicating a primary dynamic instability. A secondary instability appears for De &gt; 75~200, where the vortices present undulations, twisting, and eventually merging and pair splitting. Fully turbulent flow forms for De &gt; 400. Transition from laminar to turbulent flow has also been examined in a number of studies, even though no universal solution exists since the parameter is highly dependent on the curvature ratio. Somewhat unexpectedly, laminar flow can be maintained for larger Reynolds numbers (even by a factor of two for the highest curvature ratios studied) than for straight pipes, even though curvature is known to cause instability. The Dean equations. The Dean number appears in the so-called Dean equations. These are an approximation to the full Navier–Stokes equations for the steady axially uniform flow of a Newtonian fluid in a toroidal pipe, obtained by retaining just the leading order curvature effects (i.e. the leading-order equations for formula_7). We use orthogonal coordinates formula_8 with corresponding unit vectors formula_9 aligned with the centre-line of the pipe at each point. The axial direction is formula_10, with formula_11 being the normal in the plane of the centre-line, and formula_12 the binormal. For an axial flow driven by a pressure gradient formula_13, the axial velocity formula_14 is scaled with formula_15. 
The cross-stream velocities formula_16 are scaled with formula_17, and cross-stream pressures with formula_18. Lengths are scaled with the tube radius formula_19. In terms of these non-dimensional variables and coordinates, the Dean equations are then formula_20 formula_21 formula_22 formula_23 where formula_24 is the convective derivative. The Dean number "De" is the only parameter left in the system, and encapsulates the leading order curvature effects. Higher-order approximations will involve additional parameters. For weak curvature effects (small "De"), the Dean equations can be solved as a series expansion in "De". The first correction to the leading-order axial Poiseuille flow is a pair of vortices in the cross-section carrying flow from the inside to the outside of the bend across the centre and back around the edges. This solution is stable up to a critical Dean number formula_25. For larger "De", there are multiple solutions, many of which are unstable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathit{De} \n= \\frac{\\sqrt{\\frac{1}{2}\\,(\\text{inertial forces})(\\text{centrifugal forces})}}{\\text{viscous forces}} \n= \\frac{\\sqrt{\\frac{1}{2}\\,(\\rho\\,D^2\\,R_c\\,\\frac{v^2}{D}) (\\rho\\,D^2\\,R_c\\,\\frac{v^2}{R_c})}}{\\mu \\frac{v}{D} D\\,R_c} \n= \\frac{\\rho\\,D\\,v}{\\mu} \\sqrt{ \\frac{D}{2\\,R_c} } \n= \\textit{Re} \\, \\sqrt{ \\frac{D}{2\\,R_c} } " }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "v" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "R_c" }, { "math_id": 6, "text": "\\textit{Re}" }, { "math_id": 7, "text": "a/r \\ll 1" }, { "math_id": 8, "text": "(x,y,z)" }, { "math_id": 9, "text": "(\\hat{\\boldsymbol{x}},\\hat{\\boldsymbol{y}},\\hat{\\boldsymbol{z}})" }, { "math_id": 10, "text": "\\hat{\\boldsymbol{z}}" }, { "math_id": 11, "text": "\\hat{\\boldsymbol{x}}" }, { "math_id": 12, "text": "\\hat{\\boldsymbol{y}}" }, { "math_id": 13, "text": "G" }, { "math_id": 14, "text": "u_z" }, { "math_id": 15, "text": "U=Ga^2/\\mu" }, { "math_id": 16, "text": "u_x, u_y" }, { "math_id": 17, "text": "(a/R)^{1/2} U" }, { "math_id": 18, "text": "\\rho a U^2/L" }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "De \\left( \\frac{\\mathrm{D} u_x}{\\mathrm{D} t} - u_z^2 \\right) = -De \\frac{\\partial p}{\\partial x} + \\nabla^2 u_x " }, { "math_id": 21, "text": "De \\frac{\\mathrm{D} u_y}{\\mathrm{D} t} = -De\\frac{\\partial p}{\\partial y} + \\nabla^2 u_y" }, { "math_id": 22, "text": "De \\frac{\\mathrm{D} u_z}{\\mathrm{D} t} = 1 + \\nabla^2 u_z " }, { "math_id": 23, "text": "\\frac{\\partial u_x}{\\partial x} + \\frac{\\partial u_y}{\\partial y} = 0" }, { "math_id": 24, "text": "\\frac{\\mathrm{D}}{\\mathrm{D} t} = u_x \\frac{\\partial}{\\partial x} + u_y \\frac{\\partial}{\\partial y}" }, { "math_id": 25, "text": "De_c \\approx 956" } ]
https://en.wikipedia.org/wiki?curid=12200818
12200859
McShane's identity
In geometric topology, McShane's identity for a once punctured torus formula_0 with a complete, finite-volume hyperbolic structure is given by formula_1 where the sum is taken over all simple closed geodesics γ on the torus and ℓ(γ) denotes the hyperbolic length of γ. This identity was generalized by Maryam Mirzakhani in her PhD thesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{T}" }, { "math_id": 1, "text": "\\sum_\\gamma \\frac{1}{1 + e^{\\ell(\\gamma)}}=\\frac{1}{2}" } ]
https://en.wikipedia.org/wiki?curid=12200859
12201787
Hutchinson metric
In mathematics, the Hutchinson metric otherwise known as Kantorovich metric is a function which measures "the discrepancy between two images for use in fractal image processing" and "can also be applied to describe the similarity between DNA sequences expressed as real or complex genomic signals". Formal definition. Consider only nonempty, compact, and finite metric spaces. For such a space formula_0, let formula_1 denote the space of Borel probability measures on formula_2, with formula_3 the embedding associating to formula_4 the point measure formula_5. The support formula_6 of a measure in formula_1 is the smallest closed subset of measure 1. If formula_7 is Borel measurable then the induced map formula_8 associates to formula_9 the measure formula_10 defined by formula_11 for all formula_12 Borel in formula_13. Then the Hutchinson metric is given by formula_14 where the formula_15 is taken over all real-valued functions formula_16 with Lipschitz constant formula_17 Then formula_18 is an isometric embedding of formula_2 into formula_1, and if formula_7 is Lipschitz then formula_8 is Lipschitz with the same Lipschitz constant. Sources and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X " }, { "math_id": 1, "text": "P(X)" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\delta : X \\rightarrow P(X)" }, { "math_id": 4, "text": "x \\in X" }, { "math_id": 5, "text": "\\delta_x" }, { "math_id": 6, "text": "|\\mu|" }, { "math_id": 7, "text": "f : X_1 \\rightarrow X_2" }, { "math_id": 8, "text": "f_* : P(X_1) \\rightarrow P(X_2)" }, { "math_id": 9, "text": "\\mu" }, { "math_id": 10, "text": "f_*(\\mu)" }, { "math_id": 11, "text": "f_*(\\mu)(B)= \\mu(f^{-1}(B))" }, { "math_id": 12, "text": "B" }, { "math_id": 13, "text": "X_2 " }, { "math_id": 14, "text": "d(\\mu_1,\\mu_2) = \\sup \\left\\lbrace \\int u(x) \\, \\mu_1(dx) - \\int u(x) \\, \\mu_2(dx) \\right\\rbrace" }, { "math_id": 15, "text": "\\sup" }, { "math_id": 16, "text": "u" }, { "math_id": 17, "text": "\\le\\!1." }, { "math_id": 18, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=12201787
12202029
Gibbons–Hawking effect
In the theory of general relativity, the Gibbons–Hawking effect is the statement that a temperature can be associated with each solution of the Einstein field equations that contains a causal horizon. It is named after Gary Gibbons and Stephen Hawking. The term "causal horizon" does not necessarily refer to event horizons only, but could also stand for the horizon of the visible universe, for instance. For example, Schwarzschild spacetime contains an event horizon, and so a temperature can be associated with it. In the case of Schwarzschild spacetime this is the temperature formula_0 of a black hole of mass formula_1, satisfying formula_2 (see also Hawking radiation). A second example is de Sitter space, which contains an event horizon. In this case the temperature formula_0 is proportional to the Hubble parameter formula_3, i.e. formula_4.
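For orientation (an illustration added here; the article states only the proportionalities, while the prefactors below are the standard Hawking and Gibbons–Hawking formulas and the sample values of the mass and Hubble rate are assumptions), the associated temperatures are tiny for astrophysical parameters:

import math

# T = hbar*c^3 / (8*pi*G*M*k_B) for a Schwarzschild horizon (T ~ 1/M), and
# T = hbar*H / (2*pi*k_B) for a de Sitter horizon (T ~ H).
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # J/K

M_sun = 1.989e30         # kg, illustrative mass
T_bh = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar-mass black hole: {T_bh:.2e} K")

H0 = 2.2e-18             # s^-1, roughly the present-day Hubble rate (illustrative)
T_dS = hbar * H0 / (2 * math.pi * k_B)
print(f"Gibbons-Hawking temperature for H = H0: {T_dS:.2e} K")

This gives a few tens of nanokelvins for a solar-mass black hole and an extraordinarily small value for a de Sitter horizon with the present-day Hubble rate.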
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "T \\propto M^{-1}" }, { "math_id": 3, "text": "H" }, { "math_id": 4, "text": "T \\propto H" } ]
https://en.wikipedia.org/wiki?curid=12202029
12202917
Taylor dispersion
Taylor dispersion or Taylor diffusion is an apparent or effective diffusion of some scalar field arising on the large scale due to the presence of a strong, confined, zero-mean shear flow on the small scale. Essentially, the shear acts to smear out the concentration distribution in the direction of the flow, enhancing the rate at which it spreads in that direction. The effect is named after the British fluid dynamicist G. I. Taylor, who described the shear-induced dispersion for large Péclet numbers. The analysis was later generalized by Rutherford Aris for arbitrary values of the Péclet number. The dispersion process is sometimes also referred to as the Taylor-Aris dispersion. The canonical example is that of a simple diffusing species in uniform Poiseuille flow through a uniform circular pipe with no-flux boundary conditions. Description. We use "z" as an axial coordinate and "r" as the radial coordinate, and assume axisymmetry. The pipe has radius "a", and the fluid velocity is: formula_0 The concentration of the diffusing species is denoted "c" and its diffusivity is "D". The concentration is assumed to be governed by the linear advection–diffusion equation: formula_1 The concentration and velocity are written as the sum of a cross-sectional average (indicated by an overbar) and a deviation (indicated by a prime), thus: formula_2 formula_3 Under some assumptions (see below), it is possible to derive an equation just involving the average quantities: formula_4 Observe how the effective diffusivity multiplying the derivative on the right-hand side is greater than the original diffusion coefficient, "D". The effective diffusivity is often written as: formula_5 where formula_6 is the Péclet number, based on the channel radius formula_7. The interesting result is that for large values of the Péclet number, the effective diffusivity is inversely proportional to the molecular diffusivity. The effect of Taylor dispersion is therefore more pronounced at higher Péclet numbers. In a frame moving with the mean velocity, i.e., by introducing formula_8, the dispersion process becomes a purely diffusive process, formula_9 with diffusivity given by the effective diffusivity. The assumption is that formula_10 for given formula_11, which is the case if the length scale in the formula_11 direction is long enough to smooth the gradient in the formula_12 direction. This can be translated into the requirement that the length scale formula_13 in the formula_11 direction satisfies: formula_14. Dispersion is also a function of channel geometry. An interesting phenomenon, for example, is that the dispersion in a flow between two infinite flat plates and that in an infinitely thin rectangular channel differ by a factor of approximately 8.75. Here the very small side walls of the rectangular channel have an enormous influence on the dispersion. While the exact formula will not hold in more general circumstances, the mechanism still applies, and the effect is stronger at higher Péclet numbers. Taylor dispersion is of particular relevance for flows in porous media modelled by Darcy's law. Derivation. One may derive the Taylor equation using the method of averages, first introduced by Aris. The result can also be derived from large-time asymptotics, which is more intuitively clear. In the dimensional coordinate system formula_15, consider the fully-developed Poiseuille flow formula_16 flowing inside a pipe of radius formula_7, where formula_17 is the average velocity of the fluid. 
A species of concentration formula_18 with some arbitrary distribution is to be released somewhere inside the pipe at time formula_19. As long as this initial distribution is compact (for instance, the species/solute is not released everywhere with a finite concentration level), the species will be convected along the pipe with the mean velocity formula_17. In a frame moving with the mean velocity and scaled with the following non-dimensional scales formula_20 where formula_21 is the time required for the species to diffuse in the radial direction, formula_22 is the diffusion coefficient of the species and formula_23 is the Péclet number, the governing equations are given by formula_24 Thus in this moving frame, at times formula_25 (in dimensional variables, formula_26), the species will diffuse radially. It is clear then that when formula_27 (in dimensional variables, formula_28), diffusion in the radial direction will make the concentration uniform across the pipe, although the species is still diffusing in the formula_29 direction. Taylor dispersion quantifies this axial diffusion process for large formula_30. Suppose formula_31 (i.e., times large in comparison with the radial diffusion time formula_21), where formula_32 is a small number. Then at these times, the concentration would spread to an axial extent formula_33. To quantify large-time behavior, the following rescalings formula_34 can be introduced. The equation then becomes formula_35 If the pipe walls do not absorb or react with the species, then the boundary condition formula_36 must be satisfied at formula_37. Due to symmetry, formula_36 at formula_38. Since formula_32, the solution can be expanded in an asymptotic series, formula_39 Substituting this series into the governing equation and collecting terms of different orders will lead to a series of equations. At leading order, the equation obtained is formula_40 Integrating this equation with the boundary conditions defined before, one finds formula_41. At this order, formula_42 is still an unknown function. The fact that formula_42 is independent of formula_12 is an expected result since, as already said, at times formula_28, the radial diffusion will dominate first and make the concentration uniform across the pipe. Terms of order formula_43 lead to the equation formula_44 Integrating this equation with respect to formula_12 using the boundary conditions leads to formula_45 where formula_46 is the value of formula_47 at formula_38, an unknown function at this order. Terms of order formula_48 lead to the equation formula_49 This equation can also be integrated with respect to formula_12, but what is required is the solvability condition of the above equation. The solvability condition is obtained by multiplying the above equation by formula_50 and integrating the whole equation from formula_38 to formula_37. This is also the same as averaging the above equation over the radial direction. Using the boundary conditions and results obtained in the previous two orders, the solvability condition leads to formula_51 This is the required diffusion equation. Going back to the laboratory frame and dimensional variables, the equation becomes formula_52 By the way in which this equation is derived, it can be seen that this is valid for formula_28 in which formula_42 changes significantly over a length scale formula_53 (or more precisely on a scale formula_54. 
At the same time scale formula_28, at any small length scale about some location that moves with the mean flow, say formula_55, i.e., on the length scale formula_56, the concentration is no longer independent of formula_12, but is given by formula_57 Higher order asymptotics. Integrating the equations obtained at the second order, we find formula_58 where formula_59 is an unknown at this order. Now collecting terms of order formula_60, we find formula_61 The solvability condition of the above equation yields the governing equation for formula_62 as follows formula_63 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\boldsymbol{u} = w\\hat{\\boldsymbol{z}} = w_0 (1-r^2/a^2) \\hat{\\boldsymbol{z}}" }, { "math_id": 1, "text": "\\frac{\\partial c}{\\partial t} + \\boldsymbol{w} \\cdot \\boldsymbol{\\nabla} c = D \\nabla^2 c" }, { "math_id": 2, "text": " w(r) = \\bar{w} + w'(r)" }, { "math_id": 3, "text": " c(r,z) = \\bar{c}(z) + c'(r,z)" }, { "math_id": 4, "text": " \\frac{\\partial \\bar{c}}{\\partial t} + \\bar{w} \\frac{\\partial \\bar{c}}{\\partial z} = D \\left( 1 + \\frac{a^2 \\bar{w}^2}{48 D^2} \\right) \\frac{\\partial^2 \\bar{c}}{\\partial z ^2}" }, { "math_id": 5, "text": " D_{\\mathrm{eff}} = D \\left( 1 + \\frac{\\mathit{Pe}^{2}}{48} \\right)\\, , " }, { "math_id": 6, "text": "\\mathit{Pe}=a\\bar{w}/D" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "\\xi=z-\\bar w t" }, { "math_id": 9, "text": " \\frac{\\partial \\bar{c}}{\\partial t} = D_{\\mathrm{eff}}\\frac{\\partial^2 \\bar{c}}{\\partial \\xi ^2}" }, { "math_id": 10, "text": "c' \\ll \\bar{c}" }, { "math_id": 11, "text": "z" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "L" }, { "math_id": 14, "text": "L \\gg \\frac{a^2}{D} \\bar w = a\\mathit{Pe}" }, { "math_id": 15, "text": "(x',r',\\theta)" }, { "math_id": 16, "text": "u=2 U [1-(r'/a)^2]" }, { "math_id": 17, "text": "U" }, { "math_id": 18, "text": "c" }, { "math_id": 19, "text": "t'=0" }, { "math_id": 20, "text": "t=\\frac{t'}{a^2/D},\\quad x=\\frac{x'-Ut'}{a}, \\quad r=\\frac{r'}{a}, \\quad Pe = \\frac{Ua}{D}" }, { "math_id": 21, "text": "a^2/D" }, { "math_id": 22, "text": "D" }, { "math_id": 23, "text": "Pe" }, { "math_id": 24, "text": "\\frac{\\partial c}{\\partial t}+ Pe(1-2r^2)\\frac{\\partial c}{\\partial x} =\\frac{\\partial^2 c }{\\partial x^2} + \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c}{\\partial r}\\right)." }, { "math_id": 25, "text": "t\\sim 1" }, { "math_id": 26, "text": "t'\\sim a^2/D" }, { "math_id": 27, "text": "t\\gg 1" }, { "math_id": 28, "text": "t'\\gg a^2/D" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "t" }, { "math_id": 31, "text": "t\\sim 1/\\epsilon \\gg 1" }, { "math_id": 32, "text": "\\epsilon \\ll 1" }, { "math_id": 33, "text": "x\\sim \\sqrt t\\sim \\sqrt{1/\\epsilon}\\gg 1" }, { "math_id": 34, "text": "\\tau = \\epsilon t, \\quad \\xi = \\sqrt\\epsilon x " }, { "math_id": 35, "text": "\\epsilon\\frac{\\partial c}{\\partial \\tau}+ \\sqrt\\epsilon Pe(1-2r^2)\\frac{\\partial c}{\\partial \\xi} =\\epsilon \\frac{\\partial^2 c }{\\partial \\xi^2} + \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c}{\\partial r}\\right)." }, { "math_id": 36, "text": "\\partial c/\\partial r=0" }, { "math_id": 37, "text": "r=1" }, { "math_id": 38, "text": "r=0" }, { "math_id": 39, "text": "c=c_0 + \\sqrt\\epsilon c_1 +\\epsilon c_2 + \\cdots " }, { "math_id": 40, "text": " \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c_0}{\\partial r}\\right)=0." }, { "math_id": 41, "text": "c_0=c_0(\\xi,\\tau)" }, { "math_id": 42, "text": "c_0" }, { "math_id": 43, "text": "\\sqrt\\epsilon" }, { "math_id": 44, "text": " \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c_1}{\\partial r}\\right)=Pe (1-2r^2)\\frac{\\partial c_0}{\\partial \\xi}." 
}, { "math_id": 45, "text": "c_1(\\xi,r,\\tau) = c_{1a}(\\xi,\\tau) + \\frac{Pe}{8}(2r^2-r^4)\\frac{\\partial c_0}{\\partial \\xi}" }, { "math_id": 46, "text": "c_{1a}" }, { "math_id": 47, "text": "c_1" }, { "math_id": 48, "text": "\\epsilon" }, { "math_id": 49, "text": " \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c_2}{\\partial r}\\right)=Pe (1-2r^2)\\frac{\\partial c_1}{\\partial \\xi} + \\frac{\\partial c_0}{\\partial \\tau} - \\frac{\\partial^2 c_0}{\\partial \\xi^2}." }, { "math_id": 50, "text": "2r dr" }, { "math_id": 51, "text": "\\frac{\\partial c_0}{\\partial \\tau} =\\left(1+\\frac{Pe^2}{48}\\right) \\frac{\\partial^2 c_0}{\\partial \\xi^2} \\quad \\Rightarrow \\quad \\frac{\\partial c_0}{\\partial t} =\\left(1+\\frac{Pe^2}{48}\\right) \\frac{\\partial^2 c_0}{\\partial x^2}." }, { "math_id": 52, "text": "\\frac{\\partial c_0}{\\partial t'} + U \\frac{\\partial c_0}{\\partial x'} =D\\left(1+\\frac{U^2 a^2}{48 D^2}\\right) \\frac{\\partial^2 c_0}{\\partial x'^2}." }, { "math_id": 53, "text": "x'\\gg a" }, { "math_id": 54, "text": "x\\sim \\sqrt{Dt'})" }, { "math_id": 55, "text": "x'-Ut'=x_s'-Ut'" }, { "math_id": 56, "text": "x'-x_s'\\sim a" }, { "math_id": 57, "text": "c=c_0 + \\sqrt{\\epsilon} c_1." }, { "math_id": 58, "text": "c_2(\\xi,\\tau) = c_{2a}(\\xi,\\tau) + \\frac{Pe}{4}\\left(r^2-\\frac{r^4}{2}\\right) \\frac{\\partial c_{1a}}{\\partial\\xi} + \\frac{Pe^2}{32}\\left(\\frac{r^2}{6}+\\frac{r^4}{2}-\\frac{5r^6}{8}+\\frac{r^8}{8}\\right) \\frac{\\partial^2c_0}{\\partial\\xi^2}" }, { "math_id": 59, "text": "c_{2a}(\\xi,\\tau)" }, { "math_id": 60, "text": "\\epsilon\\sqrt\\epsilon" }, { "math_id": 61, "text": " \\frac{1}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial c_3}{\\partial r}\\right)=Pe (1-2r^2)\\frac{\\partial c_2}{\\partial \\xi} + \\frac{\\partial c_1}{\\partial \\tau} - \\frac{\\partial^2 c_1}{\\partial \\xi^2}." }, { "math_id": 62, "text": "c_{1a}(\\xi,\\tau)" }, { "math_id": 63, "text": "\\frac{\\partial c_{1a}}{\\partial \\tau} -\\left(1+\\frac{Pe^2}{48}\\right) \\frac{\\partial^2 c_{1a}}{\\partial \\xi^2} = -\\frac{Pe^3}{2880}\\frac{\\partial^3c_0}{\\partial\\xi^3}." } ]
https://en.wikipedia.org/wiki?curid=12202917
12205222
Coherence time (communications systems)
Duration a communication channel's impulse response is effectively constant In communications systems, a communication channel may change with time. Coherence time is the time duration over which the channel impulse response is considered not to vary. Such channel variation is much more significant in wireless communications systems, due to Doppler effects. Simple model. In a simple model, a signal formula_0 transmitted at time formula_1 will be received as formula_2 where formula_3 is the channel impulse response (CIR) at time formula_1. A signal transmitted at time formula_4 will be received as formula_5 Now, if formula_6 is relatively small, the channel may be considered constant within the interval formula_1 to formula_4. Coherence time (formula_7) will therefore be given by formula_8 Relation with Doppler frequency. Coherence time formula_7 is the time-domain dual of Doppler spread and is used to characterize the time-varying nature of the frequency dispersiveness of the channel in the time domain. The maximum Doppler spread and coherence time are inversely proportional to one another. That is, formula_9 where formula_10 is the maximum Doppler spread (also called the maximum Doppler frequency or maximum Doppler shift), given by formula_11 with formula_12 being the center frequency of the emitter. Coherence time is actually a statistical measure of the time duration over which the channel impulse response is essentially invariant, and quantifies the similarity of the channel response at different times. In other words, coherence time is the time duration over which two received signals have a strong potential for amplitude correlation. If the reciprocal bandwidth of the baseband signal is greater than the coherence time of the channel, then the channel will change during the transmission of the baseband message, thus causing distortion at the receiver. If the coherence time is defined as the time over which the time correlation function is above 0.5, then the coherence time is approximately formula_13 In practice, the first approximation of coherence time suggests a time duration during which a Rayleigh fading signal may fluctuate wildly, and the second approximation is often too restrictive. A popular rule of thumb for modern digital communications is to define the coherence time as the geometric mean of the two approximate values, also known as Clarke's model; from the maximum Doppler frequency formula_14 we can obtain the 50% coherence time formula_15 Usually, we use the following relation formula_16
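A worked example of these relations (with illustrative numbers, not taken from the text): for a receiver moving at v = 30 m/s with a carrier frequency of 2 GHz,

import math

# Maximum Doppler shift and the three coherence-time estimates given above.
v, c, f_c = 30.0, 3.0e8, 2.0e9          # speed (m/s), speed of light (m/s), carrier (Hz)

f_m = (v / c) * f_c                     # maximum Doppler shift
T_c_reciprocal = 1.0 / f_m              # T_c ~ 1 / f_m
T_c_50 = 9.0 / (16.0 * math.pi * f_m)   # 50% time-correlation definition
T_c_rule = 0.423 / f_m                  # geometric-mean rule of thumb

print(f"maximum Doppler shift f_m = {f_m:.0f} Hz")
print(f"T_c ~ 1/f_m           = {T_c_reciprocal * 1e3:.2f} ms")
print(f"T_c (50% correlation) = {T_c_50 * 1e3:.2f} ms")
print(f"T_c (rule of thumb)   = {T_c_rule * 1e3:.2f} ms")

which gives a maximum Doppler shift of 200 Hz and coherence times of roughly 5 ms, 0.9 ms and 2.1 ms for the three definitions respectively.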
[ { "math_id": 0, "text": "x(t)" }, { "math_id": 1, "text": "t_1" }, { "math_id": 2, "text": "y_{t_1}(t) = x(t-t_1)*h_{t_1}(t)," }, { "math_id": 3, "text": "h_{t_1}(t)" }, { "math_id": 4, "text": "t_2" }, { "math_id": 5, "text": "y_{t_2}(t) = x(t-t_2)*h_{t_2}(t)." }, { "math_id": 6, "text": "h_{t_1}(t) - h_{t_2}(t)" }, { "math_id": 7, "text": "T_c" }, { "math_id": 8, "text": "T_c = t_2 - t_1." }, { "math_id": 9, "text": "T_c\\approx\\frac{1}{f_m}" }, { "math_id": 10, "text": "(f_m)" }, { "math_id": 11, "text": "f_m=\\frac{v}{c}f_c" }, { "math_id": 12, "text": "f_c" }, { "math_id": 13, "text": "T_c\\approx\\frac{9}{16\\pi f_m}" }, { "math_id": 14, "text": "f_m" }, { "math_id": 15, "text": "T_c=\\sqrt{\\frac{9}{16\\pi f_m^2}}" }, { "math_id": 16, "text": "T_c=\\sqrt{\\frac{9}{16\\pi}}\\frac{1}{f_m}\\simeq\\frac{0.423}{f_m}" } ]
https://en.wikipedia.org/wiki?curid=12205222
12207392
Compact complement topology
In mathematics, the compact complement topology is a topology defined on the set formula_0 of real numbers, in which a subset formula_1 is declared open if and only if it is either empty or its complement formula_2 is compact in the standard Euclidean topology on formula_0.
[ { "math_id": 0, "text": "\\scriptstyle\\mathbb{R}" }, { "math_id": 1, "text": "\\scriptstyle X \\subseteq \\mathbb{R}" }, { "math_id": 2, "text": "\\scriptstyle\\mathbb{R} \\setminus X" } ]
https://en.wikipedia.org/wiki?curid=12207392
12209510
Jean Richer
French astronomer Jean Richer (1630–1696) was a French astronomer and assistant ("élève astronome") at the French Academy of Sciences, under the direction of Giovanni Domenico Cassini. Between 1671 and 1673 he performed experiments and carried out celestial observations in Cayenne, French Guiana, at the request of the French Academy. His observations and measurements of Mars during its perihelic opposition, coupled with those made simultaneously in Paris by Cassini, led to the earliest data-based estimate of the distance between Earth and Mars, which they then used to calculate the distance between the Sun and Earth (the astronomical unit). While there he also measured the length of a seconds pendulum, that is a pendulum whose half-period (a single swing) is one second, and found it to be 1.25 "lignes" (about 2.8 millimeters) shorter than at Paris. His method was to compare the oscillation of a freely decaying pendulum with the time kept by another mechanical clock and astronomical observations. It could be said that Richer was the first person to observe a change in gravitational force over the surface of the Earth, beginning the science of gravimetry. To obtain the relative difference in the pendulum's frequency from this number, note that Richer gives the Paris pendulum's length as 3 feet 8+1/3 lignes, equaling 440+1/3 lignes (993.3 mm). The 1.25 ligne discrepancy is a 0.28% difference in length, and thus a 0.14% difference in frequency. Since the seconds pendulum length is proportional to the local gravity, Richer's result means that the local gravity in Cayenne is weaker by 0.28% of gravity, or about formula_0. Isaac Newton later commented that if, as he had proposed, the force of gravity decreases with the inverse square of the distance between objects, the obvious conclusion to be drawn from Richer's work is that near-equatorial Cayenne is further from the centre of the Earth than Paris, where the first such measurements had been taken. Thus the Earth could not be spherical, as had earlier been presumed, but rather bulges at and near the equator (equatorial bulge). Newton's claim of a 2.5 minutes per day difference translates to a 0.17% difference in frequency, in fair agreement with Richer's measurement. While Newton interpreted it as due to oblateness of Earth, Christiaan Huygens interpreted it instead as due to the centrifugal force which reduces the apparent gravity at the equator. Assuming the actual gravity is a constant formula_1 across the surface of Earth, and the Earth is a perfect sphere of radius formula_2 and angular velocity formula_3, then the apparent gravity at latitude formula_4 is formula_5. Paris is at latitude 49°, and Cayenne is at latitude 5°, which gives the difference in apparent gravity as formula_6 (a quick numerical check of these figures is sketched below). Richer's 1673 return to Paris was duly celebrated, and when his data were reproduced, the findings for which we remember him could be made public. However, publication was delayed, for unknown causes, until 1679, when a work entitled "Observations Astronomiques et Physiques Faites en L'Isle de Caïenne par M. Richer, de l'Académie Royale des Sciences" was released under Richer's name. Not long thereafter, he was assigned to an engineering project in Germany. The remainder of his life is undocumented. Most biographers believe that he died at Paris in 1696.
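As a quick numerical cross-check of the figures quoted above, the short Python sketch below recomputes the relative length change and the centrifugal estimate, using the article's cosine-cubed expression; the modern values for Earth's radius and rotation rate are assumptions supplied here, not values from Richer:

import math

# Pendulum figures quoted above (in Paris lignes).
paris_length = 440 + 1 / 3          # seconds-pendulum length in Paris
shortfall = 1.25                    # how much shorter it was in Cayenne
print(f"relative change: {shortfall / paris_length:.2%}")        # ~0.28%

# Huygens-style centrifugal estimate for a spherical, rotating Earth.
R = 6.371e6                         # Earth radius, m (assumed modern value)
omega = 2 * math.pi / 86164         # rotation rate, rad/s (sidereal day)

def centrifugal_term(lat_deg):
    # Matches the expression used above: Omega^2 * R * cos^3(latitude)
    return omega ** 2 * R * math.cos(math.radians(lat_deg)) ** 3

dg = centrifugal_term(5) - centrifugal_term(49)   # Cayenne minus Paris
print(f"apparent-gravity difference: {dg:.3f} m/s^2")            # ~0.024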
[ { "math_id": 0, "text": "0.028 \\mathrm{ m/s}^2" }, { "math_id": 1, "text": "g" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "\\Omega" }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "g - \\Omega^2 R \\cos^3(\\phi)" }, { "math_id": 6, "text": "0.024 \\mathrm{ m/s}^2" } ]
https://en.wikipedia.org/wiki?curid=12209510
1221168
Hydroformylation
Chemical process for converting alkenes to aldehydes &lt;templatestyles src="Reactionbox/styles.css"/&gt; In organic chemistry, hydroformylation, also known as oxo synthesis or oxo process, is an industrial process for the production of aldehydes (RCHO) from alkenes (RCH=CH2). This chemical reaction entails the net addition of a formyl group (CHO) and a hydrogen atom to a carbon-carbon double bond. This process has undergone continuous growth since its invention: production capacity reached 6.6 million tons in 1995. It is important because aldehydes are easily converted into many secondary products. For example, the resultant aldehydes are hydrogenated to alcohols that are converted to detergents. Hydroformylation is also used in speciality chemicals, relevant to the organic synthesis of fragrances and pharmaceuticals. The development of hydroformylation is one of the premier achievements of 20th-century industrial chemistry. The process entails treatment of an alkene typically with high pressures (between 10 and 100 atmospheres) of carbon monoxide and hydrogen at temperatures between 40 and 200 °C. In one variation, formaldehyde is used in place of synthesis gas. Transition metal catalysts are required. Invariably, the catalyst dissolves in the reaction medium, i.e. hydroformylation is an example of homogeneous catalysis. History. The process was discovered by the German chemist Otto Roelen in 1938 in the course of investigations of the Fischer–Tropsch process. Aldehydes and diethylketone were obtained when ethylene was added to an F-T reactor. Through these studies, Roelen discovered the utility of cobalt catalysts. HCo(CO)4, which had been isolated only a few years prior to Roelen's work, was shown to be an excellent catalyst. The term oxo synthesis was coined by the Ruhrchemie patent department, who expected the process to be applicable to the preparation of both aldehydes and ketones. Subsequent work demonstrated that the ligand tributylphosphine (PBu3) improved the selectivity of the cobalt-catalysed process. The mechanism of Co-catalyzed hydroformylation was elucidated by Richard F. Heck and David Breslow in the 1960s. In 1968, highly active rhodium-based catalysts were reported. Since the 1970s, most hydroformylation relies on catalysts based on rhodium. Water-soluble catalysts have been developed. They facilitate the separation of the products from the catalyst. Mechanism. Selectivity. A key consideration of hydroformylation is the "normal" vs. "iso" selectivity. For example, the hydroformylation of propylene can afford two isomeric products, butyraldehyde or isobutyraldehyde: formula_0 These isomers reflect the regiochemistry of the insertion of the alkene into the M–H bond. Since both products are not equally desirable (normal is more stable than iso), much research was dedicated to the quest for catalysts that favored the normal isomer. Steric effects. Addition of the cobalt hydride to primary alkenes according to Markovnikov's rule is disfavored by steric hindrance between the cobalt centre and the secondary alkyl ligand. Bulky ligands exacerbate this steric hindrance. Hence, the mixed carbonyl/phosphine complexes offer a greater selectivity for anti-Markovnikov addition, thus favoring straight-chain ("n"-) aldehyde products. Modern catalysts rely increasingly on chelating ligands, especially diphosphites. Electronic effects. Additionally, the more electron-rich the hydride complex is, the less proton-like it is.
As a result, the electronic effects that normally favour Markovnikov addition to an alkene are less applicable, so electron-rich hydrides are more selective. Acyl formation. To suppress competing isomerization of the alkene, the rate of migratory insertion of the carbonyl into the carbon-metal bond of the alkyl must be relatively fast. The rate of insertion of the carbonyl carbon into the C-M bond is likely to be greater than the rate of beta-hydride elimination. Asymmetric hydroformylation. Hydroformylation of prochiral alkenes creates new stereocenters. Using chiral phosphine ligands, the hydroformylation can be tailored to favor one enantiomer. Thus, for example, dexibuprofen, the (+)−("S")-enantiomer of ibuprofen, can be produced by enantioselective hydroformylation followed by oxidation. Processes. The industrial processes vary depending on the chain length of the olefin to be hydroformylated, the catalyst metal and ligands, and the recovery of the catalyst. The original Ruhrchemie process produced propanal from ethene and syngas using cobalt tetracarbonyl hydride. Today, industrial processes based on cobalt catalysts are mainly used for medium- to long-chain olefins, whereas the rhodium-based catalysts are usually used for the hydroformylation of propene. The rhodium catalysts are significantly more expensive than cobalt catalysts. In the hydroformylation of higher molecular weight olefins the separation of the catalyst from the produced aldehydes is difficult. BASF-oxo process. The BASF-oxo process starts mostly with higher olefins and relies on a cobalt carbonyl-based catalyst. By conducting the reaction at low temperatures, one observes increased selectivity favoring the linear product. The process is carried out at a pressure of about 30 MPa and in a temperature range of 150 to 170 °C. The cobalt is recovered from the liquid product by oxidation to water-soluble Co2+, followed by the addition of aqueous formic or acetic acids. This process gives an aqueous phase of cobalt, which can then be recycled. Losses are compensated by the addition of cobalt salts. Exxon process. The Exxon process, also Kuhlmann- or PCUK – oxo process, is used for the hydroformylation of C6–C12 olefins. The process relies on cobalt catalysts. In order to recover the catalyst, an aqueous sodium hydroxide solution or sodium carbonate is added to the organic phase. By extraction with olefin and neutralization by addition of sulfuric acid solution under carbon monoxide pressure, the metal carbonyl hydride can be recovered. This is stripped out with syngas, absorbed by the olefin, and returned to the reactor. Similar to the BASF process, the Exxon process is carried out at a pressure of about 30 MPa and at a temperature of about 160 to 180 °C. Shell process. The Shell process uses cobalt complexes modified with phosphine ligands for the hydroformylation of C7–C14 olefins. The resulting aldehydes are directly hydrogenated to the fatty alcohols, which are separated by distillation, allowing the catalyst to be recycled. The process has good selectivity to linear products, which find use as feedstock for detergents. The process is carried out at a pressure of about 4 to 8 MPa and at a temperature range of about 150–190 °C. Union Carbide process. 
The Union Carbide (UCC) process, also known as low-pressure oxo process (LPO), relies on a rhodium catalyst dissolved in high-boiling thick oil, a higher molecular weight condensation product of the primary aldehydes, for the hydroformylation of propene. The reaction mixture is separated in a falling film evaporator from volatile components. The liquid phase is distilled and butyraldehyde is removed as head product while the catalyst-containing bottom product is recycled to the process. The process is carried out at about 1.8 MPa and 95–100 °C. Ruhrchemie/Rhone–Poulenc process. The Ruhrchemie/Rhone–Poulenc process (RCRPP) relies on a rhodium catalyst with water-soluble TPPTS as ligand (Kuntz Cornils catalyst) for the hydroformylation of propene. The tri-sulfonation of the triphenylphosphane ligand provides hydrophilic properties to the organometallic complex. The catalyst complex carries nine sulfonate-groups and is highly soluble in water (about 1 kg L⁻¹), but not in the emerging product phase. The water-soluble TPPTS is used in about 50-fold excess, whereby the leaching of the catalyst is effectively suppressed. Reactants are propene and syngas consisting of hydrogen and carbon monoxide in a ratio of 1.1:1. A mixture of butyraldehyde and isobutyraldehyde in the ratio 96:4 is generated with few by-products such as alcohols, esters and higher boiling fractions. The Ruhrchemie/Rhone-Poulenc-process is the first commercially available two-phase system in which the catalyst is present in the aqueous phase. In the course of the reaction an organic product phase is formed which is separated continuously by means of phase separation, wherein the aqueous catalyst phase remains in the reactor. The process is carried out in a stirred tank reactor where the olefin and the syngas are entrained from the bottom of the reactor through the catalyst phase under intensive stirring. The resulting crude aldehyde phase is separated at the top from the aqueous phase. The aqueous catalyst-containing solution is re-heated via a heat exchanger and pumped back into the reactor. The excess olefin and syngas are separated from the aldehyde phase in a stripper and fed back to the reactor. The generated heat is used for the generation of process steam, which is used for subsequent distillation of the organic phase to separate it into butyraldehyde and isobutyraldehyde. Potential catalyst poisons coming from the synthesis gas migrate into the organic phase and are removed from the reaction with the aldehyde. Thus, poisons do not accumulate, and the elaborate fine purification of the syngas can be omitted. A plant was built in Oberhausen in 1984, which was debottlenecked in 1988 and again in 1998 up to a production capacity of 500,000 t/a butanal. The conversion rate of propene is 98% and the selectivity to n-butanal is high. During the lifetime of a catalyst batch in the process less than 1 ppb rhodium is lost. Laboratory process. Recipes have been developed for the hydroformylation on a laboratory scale, e.g. of cyclohexene. Substrates other than alkenes. Cobalt carbonyl and rhodium complexes catalyse the hydroformylation of formaldehyde and ethylene oxide to give hydroxyacetaldehyde and 3-hydroxypropanal, which can then be hydrogenated to ethylene glycol and propane-1,3-diol, respectively. The reactions work best when the solvent is basic (such as pyridine). In the case of dicobalt octacarbonyl or Co2(CO)8 as a catalyst, pentan-3-one can arise from ethene and CO, in the absence of hydrogen. 
A proposed intermediate is the ethylene-propionyl species [CH3C(O)Co(CO)3(ethene)] which undergoes a migratory insertion to form [CH3COCH2CH2Co(CO)3]. The required hydrogen arises from the water-gas shift reaction. If the water-gas shift reaction is not operative, the reaction affords a polymer containing alternating carbon monoxide and ethylene units. Such aliphatic polyketones are more conventionally prepared using palladium catalysts. Functionalized olefins such as allyl alcohol can be hydroformylated. The target product 1,4-butanediol and its isomer are obtained with isomerization-free catalysts such as rhodium-triphenylphosphine complexes. The use of the cobalt complex leads, by isomerization of the double bond, to n-propanal. The hydroformylation of alkenyl ethers and alkenyl esters usually occurs in the α-position to the ether or ester function. The hydroformylation of acrylic acid and methacrylic acid in the rhodium-catalyzed process leads to the Markovnikov product in the first step. By variation of the reaction conditions the reaction can be directed to different products. A high reaction temperature and low carbon monoxide pressure favor the isomerization of the Markovnikov product to the thermodynamically more stable β-isomer, which leads to the n-aldehyde. Low temperatures and high carbon monoxide pressure and an excess of phosphine, which blocks free coordination sites, can lead to faster hydroformylation in the α-position to the ester group and suppress the isomerization. Side- and consecutive reactions. Tandem carbonylation-water gas shift reactions. Side reactions of the alkenes are the isomerization and hydrogenation of the double bond. While the alkanes resulting from hydrogenation of the double bond do not participate further in the reaction, the isomerization of the double bond with subsequent formation of the n-alkyl complexes is a desired reaction. The hydrogenation is usually of minor importance; however, cobalt-phosphine-modified catalysts can have an increased hydrogenation activity, where up to 15% of the alkene is hydrogenated. Tandem hydroformylation-hydrogenation. Using tandem catalysis, systems have been developed for the one-pot conversion of alkenes to alcohols. The first step is hydroformylation. Ligand degradation. Conditions for hydroformylation catalysis can induce degradation of supporting organophosphorus ligands. Triphenylphosphine is subject to hydrogenolysis, releasing benzene and diphenylphosphine. The insertion of carbon monoxide in an intermediate metal-phenyl bond can lead to the formation of benzaldehyde or, by subsequent hydrogenation, to benzyl alcohol. One of the ligand's phenyl groups can be replaced by propene, and the resulting diphenylpropylphosphine ligand can inhibit the hydroformylation reaction due to its increased basicity. Metals. Although the original hydroformylation catalysts were based on cobalt, most modern processes rely on rhodium, which is expensive. There has therefore been interest in finding alternative metal catalysts. Examples of alternative metals include iron and ruthenium. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{H2 + CO + CH3CH=CH2 ->} \\ \n\\begin{cases}\n\\ce{CH3CH2CH2CHO} & \\text{(normal)} \\\\\n& \\\\\n\\ce{(CH3)2CHCHO} & \\text{(iso)} \\\\\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=1221168
12213076
Abel's inequality
In mathematics, Abel's inequality, named after Niels Henrik Abel, supplies a simple bound on the absolute value of the inner product of two vectors in an important special case. Mathematical description. Let {"a1, a2..."} be a sequence of real numbers that is either nonincreasing or nondecreasing, and let {"b1, b2..."} be a sequence of real or complex numbers. If {"an"} is nondecreasing, it holds that formula_0 and if {"an"} is nonincreasing, it holds that formula_1 where formula_2 In particular, if the sequence {"a"n} is nonincreasing and nonnegative, it follows that formula_3 Relation to Abel's transformation. Abel's inequality follows easily from Abel's transformation, which is the discrete version of integration by parts: with the partial sums formula_2 defined as above, it holds that formula_4
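As a quick sanity check of the nonincreasing, nonnegative special case, the small Python sketch below compares the two sides of the bound on randomly generated sequences; the sequences themselves are arbitrary test data, not taken from the text:

import random

random.seed(0)
n = 12
# Nonincreasing, nonnegative a_k and arbitrary real b_k (test data only).
a = sorted((random.uniform(0, 5) for _ in range(n)), reverse=True)
b = [random.uniform(-1, 1) for _ in range(n)]

# Partial sums B_k = b_1 + ... + b_k.
B, s = [], 0.0
for bk in b:
    s += bk
    B.append(s)

lhs = abs(sum(ak * bk for ak, bk in zip(a, b)))
rhs = max(abs(Bk) for Bk in B) * a[0]   # a_1 is the largest term here
print(lhs <= rhs + 1e-12, round(lhs, 4), round(rhs, 4))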
[ { "math_id": 0, "text": "\n\\left |\\sum_{k=1}^n a_k b_k \\right | \\le \\operatorname{max}_{k=1,\\dots,n} |B_k| (|a_n| + a_n - a_1),\n" }, { "math_id": 1, "text": "\n\\left |\\sum_{k=1}^n a_k b_k \\right | \\le \\operatorname{max}_{k=1,\\dots,n} |B_k| (|a_n| - a_n + a_1),\n" }, { "math_id": 2, "text": "\nB_k =b_1+\\cdots+b_k.\n" }, { "math_id": 3, "text": "\n\\left |\\sum_{k=1}^n a_k b_k \\right | \\le \\operatorname{max}_{k=1,\\dots,n} |B_k| a_1,\n" }, { "math_id": 4, "text": "\n\\sum_{k=1}^n a_k b_k = a_n B_n - \\sum_{k=1}^{n-1} B_k (a_{k+1} - a_k).\n" } ]
https://en.wikipedia.org/wiki?curid=12213076
12213637
QM/MM
Molecular simulation method The hybrid QM/MM (quantum mechanics/molecular mechanics) approach is a molecular simulation method that combines the strengths of "ab initio" QM calculations (accuracy) and MM (speed) approaches, thus allowing for the study of chemical processes in solution and in proteins. The QM/MM approach was introduced in the 1976 paper of Warshel and Levitt. They, along with Martin Karplus, won the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". Efficiency. An important advantage of QM/MM methods is their efficiency. The cost of doing classical molecular mechanics (MM) simulations in the most straightforward case scales as "O"("N"²), where "N" is the number of atoms in the system. This is mainly due to the electrostatic interaction term (every particle interacts with everything else). However, the use of a cutoff radius, periodic pair-list updates and, more recently, variations of the particle mesh Ewald (PME) method have reduced this to between "O"("N") and "O"("N"²). In other words, if a system with twice as many atoms is simulated then it would take between twice and four times as much computing power. On the other hand, the simplest "ab initio" calculations formally scale as "O"("N"³) or worse (restricted Hartree–Fock calculations have been suggested to scale ~"O"("N"^2.7)). Here in the "ab initio" calculations, "N" stands for the number of basis functions rather than the number of atoms. Each atom has at least as many basis functions as it has electrons. To overcome the limitation, a small part of the system that is of major interest is treated quantum-mechanically (for instance, the active site of an enzyme) and the remaining system is treated classically. Calculating the energy of the combined system. The energy of the combined system may be calculated in two different ways. The simplest is referred to as the 'subtractive scheme', which was proposed by Maseras and Morokuma in 1995. In the subtractive scheme the energy of the entire system is calculated using a molecular mechanics force field, then the energy of the QM system is added (calculated using a QM method), and finally the MM energy of the QM system is subtracted. formula_0 In this equation formula_1 would refer to the energy of the QM region as calculated using molecular mechanics. In this scheme, the interaction between the two regions will only be considered at an MM level of theory. In practice, a more widely used approach is a more accurate, additive method. The equation for this is: formula_2 The index formula_3 labels the nuclei in the QM region whereas formula_4 labels the MM nuclei. The first two terms represent the interaction between the total charge density (due to electrons and cores) in the QM region and the classical charges of the MM region. The third term accounts for dispersion interactions across the QM/MM boundary. Any covalent bond-stretching potentials that cross the boundary are accounted for by the fourth term. The final two terms account for the energy across the boundary that arises from bending covalent bonds and torsional potentials. At least one of the atoms in the angles formula_5 or formula_6 will be a QM atom with the others being MM atoms.
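As an illustration of the subtractive scheme described above, the Python sketch below combines three pre-computed energies in the prescribed way; the function name and the numerical values are placeholders chosen for illustration, not an interface to any real QM or MM package:

def subtractive_qmmm_energy(e_mm_total, e_qm_inner_qm, e_qm_inner_mm):
    """Subtractive (ONIOM-style) combination: MM on the whole system,
    plus a QM correction for the inner region. All three energies are
    assumed to be in the same units."""
    return e_mm_total + e_qm_inner_qm - e_qm_inner_mm

# Placeholder numbers: MM energy of the full system, QM energy of the
# inner region, and MM energy of that same inner region.
print(subtractive_qmmm_energy(-1250.0, -830.2, -810.5))   # -1269.7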
Reducing the computational cost of calculating QM-MM interactions. Evaluating the charge-charge term in the QM/MM interaction equation given previously can be very computationally expensive (consider the number of evaluations required for a system with 10⁶ grid points for the electron density of the QM system and 10⁴ MM atoms). A method by which this issue can be mitigated is to construct three concentric spheres around the QM region and evaluate which one of these spheres the MM atoms lie within. If the MM atoms reside within the innermost sphere, their interactions with the QM system are treated as per the equation for formula_7. The MM charges that lie within the second sphere (but not the first) interact with the QM region by giving the QM nuclei constructed charges. These charges are determined by the RESP approach in an attempt to mimic electron density. Using this approach, the changing charges on the QM nuclei during the course of a simulation are accounted for. In the third, outermost region the classical charges interact with the multipole moments of the quantum charge distribution. By calculating charge-charge interactions with successively more approximate methods, it is possible to obtain a very significant reduction in computational cost whilst not suffering a significant loss in accuracy. The electrostatic QM-MM interaction. Electrostatic interactions between the QM and MM region may be considered at different levels of sophistication. These methods can be classified as either mechanical embedding, electrostatic embedding or polarized embedding. Mechanical embedding. Mechanical embedding treats the electrostatic interactions at the MM level. Though simpler than the other methods, it can suffer from certain issues, in part due to the extra difficulty in assigning appropriate MM properties such as atom-centered point charges to the QM region. The QM region being simulated is the site of the reaction; thus it is likely that during the course of the reaction the charge distribution will change, resulting in a high level of error if a single set of MM electrostatic parameters is used to describe it. Another problem is the fact that mechanical embedding will not consider the effects of electrostatic interactions with the MM system on the electronic structure of the QM system. Electronic embedding. Electrostatic embedding does not require MM electrostatic parameters for the QM region. This is because it considers the effects of the electrostatic interactions by including certain one-electron terms in the QM region's Hamiltonian. This means that polarization of the QM system by the electrostatic interactions with the MM system will now be accounted for. Though an improvement on the mechanical embedding scheme, it comes at the cost of increased complexity and hence requires more computational effort. Another issue is that it neglects the effects of the QM system on the MM system, whereas in reality both systems would polarize each other until an equilibrium is met. In order to construct the required one-electron terms for the MM region, it is possible to utilize the partial charges described by the MM calculation. This is the most popular method for constructing the QM Hamiltonian; however, it may not be suitable for all systems. Polarized embedding. Whereas electrostatic embedding accounts for the polarisation of the QM system by the MM system while neglecting the polarization of the MM system by the QM system, polarized embedding additionally accounts for the polarization of the MM system by the QM system. These models allow for flexible MM charges and fall into two categories. 
In the first category, the MM region is polarized by the QM electric field but then does not act back on the QM system. In the second category are fully self-consistent formulations which allow for mutual polarization between the QM and the MM systems. Polarized embedding schemes have scarcely been applied to bio-molecular simulations and have essentially been restricted to explicit solvation models, where the solute is treated as a QM system and the solvent by a polarizable force field. Problems involved with QM/MM. Even though QM/MM methods are often very efficient, they are still rather tricky to handle. A researcher has to limit the regions (atomic sites) which are simulated by QM; however, methods have been developed that allow particles to move between the QM and MM regions. Moving the boundary between the regions can affect both the results and the time needed to compute them. The way the QM and MM systems are coupled can differ substantially depending on the arrangement of particles in the system and their deviations from equilibrium positions in time. Usually, limits are set at carbon-carbon bonds and avoided in regions that are associated with charged groups, since such an electronically variant limit can influence the quality of the model. Covalent bonds across the QM-MM boundary. Directly connected atoms, where one is described by QM and the other by MM, are referred to as junction atoms. Having the boundary between the QM region and the MM region pass through a covalent bond may prove problematic; however, this is sometimes unavoidable. When it does occur, it is important that the bond of the QM atom be capped in order to prevent the appearance of bond cleavage in the QM system. Boundary schemes. In systems where the QM/MM boundary cuts a bond, three issues must be dealt with. First, the dangling bond of the QM system must be capped; this is because it is undesirable to truncate the QM system (treating the bond as if it has been cleaved will yield very unrealistic calculations). The second issue relates to polarisation: more specifically, for electrostatic or polarized embedding it is important to ensure that the proximity of the MM charges near the boundary does not cause over-polarisation of the QM density. The final issue is that the bonding MM terms must be carefully selected in order to prevent double counting of interactions when looking at bonds across the boundary. Overall, the goal is to obtain a good description of QM-MM interactions at the boundary between the QM and the MM system, and there are three schemes by which this can be achieved. Link atom schemes. Link atom schemes introduce an additional atomic centre (usually a hydrogen atom). This atom is not part of the real system. It is covalently bonded to the atom being described by quantum mechanics, which serves to saturate its valency (by replacing the bond that has been broken). Boundary atom schemes. In boundary atom schemes, the MM atom which is bonded across the boundary to a QM atom is replaced with a special boundary atom which appears in both the QM and the MM calculation. In the MM calculation, it simply behaves as an MM atom, but in the QM system it mimics the electronic character of the MM atom bonded across the boundary to the QM atom. Localized-orbital schemes. These schemes place hybrid orbitals at the boundary and keep some of them frozen. These orbitals cap the QM region and replace the cut bond. BuRNN. The BuRNN (Buffer Region Neural Network) approach was developed as an alternative to conventional QM/MM methods. 
Its focus is to reduce the artifacts that are created between the QM and MM regions by introducing a buffer region between them. The buffer region experiences full electronic polarization by the QM region and, together with the QM region, is described by a neural network (NN) trained on QM calculations. Substituting the NN for explicit QM calculations also speeds up the overall simulation. BuRNN was introduced in the 2022 paper of Lier, Poliak, Marquetand, Westermayr, and Oostenbrink. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E = E^{QM}(QM)+ E^{MM}(QM+MM)-E^{MM}(QM)" }, { "math_id": 1, "text": "E^{MM}(QM)" }, { "math_id": 2, "text": "\n\\begin{aligned}\nE(QM/MM) & = \\sum^{MM}_{I'=1}\\left[\\int \\operatorname{d}\\mathbf{r}{q_{I'} \\rho(\\mathbf{r}) \\over \\left \\vert \\mathbf{R}_{I'}-\\mathbf{r} \\right \\vert }+\\sum^{QM}_{I=1}\\frac{q_{I'}q_I}{\\left \\vert \\mathbf{R}_{I'}-\\mathbf{R}_I \\right \\vert}\\right] \\\\\n & + \\sum_{\\text{non-bonded pairs}}\\left( \\frac{A_{II'}}{R^{12}_{II'}} - \\frac{B_{II'}}{R^{6}_{II'}}\\right) + \\sum_{\\text{bonds}}k_r(R_{II'}-r_0)^2 \\\\\n & + \\sum_{\\text{angles}}k_{\\theta}(\\theta-\\theta_0)^2 + \\sum_{\\text{torsions}}\\sum_{n}k_{\\phi,n}[\\cos(n\\phi+\\delta_n)+1]\n\\end{aligned}" }, { "math_id": 3, "text": "I " }, { "math_id": 4, "text": "I' " }, { "math_id": 5, "text": "\\theta " }, { "math_id": 6, "text": "\\phi " }, { "math_id": 7, "text": "E(QM/MM) " } ]
https://en.wikipedia.org/wiki?curid=12213637
1221448
Prompt neutron
Immediate emission of neutrons after nuclear fission In nuclear engineering, a prompt neutron is a neutron immediately emitted (neutron emission) by a nuclear fission event, as opposed to a delayed neutron, which is emitted after beta decay of one of the fission products anytime from a few milliseconds to a few minutes later. Prompt neutrons emerge from the fission of an unstable fissionable or fissile heavy nucleus almost instantaneously. There are different definitions for how long it takes for a prompt neutron to emerge. For example, the United States Department of Energy defines a prompt neutron as a neutron born from fission within 10⁻¹³ seconds after the fission event. The U.S. Nuclear Regulatory Commission defines a prompt neutron as a neutron emerging from fission within 10⁻¹⁴ seconds. This emission is controlled by the nuclear force and is extremely fast. By contrast, so-called delayed neutrons are delayed by the time delay associated with beta decay (mediated by the weak force) to the precursor excited nuclide, after which neutron emission happens on a prompt time scale (i.e., almost immediately). Principle. Using uranium-235 as an example, this nucleus absorbs a thermal neutron, and the immediate mass products of a fission event are two large fission fragments, which are remnants of the formed uranium-236 nucleus. These fragments emit two or three free neutrons (2.5 on average), called "prompt" neutrons. A subsequent fission fragment occasionally undergoes a stage of radioactive decay that yields an additional neutron, called a "delayed" neutron. These neutron-emitting fission fragments are called "delayed neutron precursor atoms". Delayed neutrons are associated with the beta decay of the fission products. After prompt fission neutron emission the residual fragments are still neutron rich and undergo a beta decay chain. The more neutron rich the fragment, the more energetic and faster the beta decay. In some cases the available energy in the beta decay is high enough to leave the residual nucleus in such a highly excited state that neutron emission instead of gamma emission occurs. Importance in nuclear fission basic research. The standard deviation of the final kinetic energy distribution, as a function of the mass of the final fragments from low-energy fission of uranium-234 and uranium-236, presents a peak in the light-fragment mass region and another in the heavy-fragment mass region. Monte Carlo simulation of these experiments suggests that those peaks are produced by prompt neutron emission. Because of this effect, the measurements do not directly provide the primary mass and kinetic energy distributions, which are important for studying fission dynamics from the saddle point to the scission point. Importance in nuclear reactors. If a nuclear reactor happened to be prompt critical – even very slightly – the number of neutrons and power output would increase exponentially at a high rate. The response time of mechanical systems like control rods is far too slow to moderate this kind of power surge. The control of the power rise would then be left to its intrinsic physical stability factors, like the thermal dilatation of the core, or the increased resonance absorptions of neutrons, that usually tend to decrease the reactor's reactivity when temperature rises; but the reactor would run the risk of being damaged or destroyed by heat. 
However, thanks to the delayed neutrons, it is possible to leave the reactor in a subcritical state as far as only prompt neutrons are concerned: the delayed neutrons come a moment later, just in time to sustain the chain reaction when it is going to die out. In that regime, neutron production overall still grows exponentially, but on a time scale that is governed by the delayed neutron production, which is slow enough to be controlled (just as an otherwise unstable bicycle can be balanced because human reflexes are quick enough on the time scale of its instability). Thus, by widening the margins of non-operation and supercriticality and allowing more time to regulate the reactor, the delayed neutrons are essential to inherent reactor safety, even in reactors requiring active control. Fraction definitions. The factor β is defined as: formula_0 and it is equal to 0.0064 for U-235. The delayed neutron fraction (DNF) is defined as: formula_1 These two factors, β and "DNF", are not the same thing in the case of a rapid change in the number of neutrons in the reactor. Another concept is the "effective fraction of delayed neutrons", which is the fraction of delayed neutrons weighted (over space, energy, and angle) on the adjoint neutron flux. This concept arises because delayed neutrons are emitted with an energy spectrum more thermalized relative to prompt neutrons. For low enriched uranium fuel working on a thermal neutron spectrum, the difference between the average and effective delayed neutron fractions can reach 50 pcm (1 pcm = 10⁻⁵). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
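A minimal Python sketch of the two fractions defined above; the neutron counts are illustrative round numbers chosen to reproduce β ≈ 0.0064 for U-235, and in this steady-state example the two definitions coincide (as noted above, they differ only during rapid changes in the neutron population):

def beta_fraction(prompt, precursors):
    """Precursor yield fraction: precursor atoms / (prompt neutrons + precursor atoms)."""
    return precursors / (prompt + precursors)

def delayed_neutron_fraction(prompt, delayed):
    """DNF: delayed neutrons / (prompt + delayed) neutrons actually emitted."""
    return delayed / (prompt + delayed)

# Illustrative numbers: per 10,000 fission neutrons in U-235, roughly 64
# originate from delayed-neutron precursors.
print(beta_fraction(prompt=9936, precursors=64))             # ~0.0064
print(delayed_neutron_fraction(prompt=9936, delayed=64))     # ~0.0064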
[ { "math_id": 0, "text": "\n\\beta = \\frac{\\mbox{precursor atoms}}\n {\\mbox{prompt neutrons}+\\mbox{precursor atoms}}.\n" }, { "math_id": 1, "text": "\nDNF = \\frac{\\mbox{delayed neutrons}}\n {\\mbox{prompt neutrons}+\\mbox{delayed neutrons}}.\n" } ]
https://en.wikipedia.org/wiki?curid=1221448
1221457
Delayed neutron
Delayed emission of neutrons after nuclear fission In nuclear engineering, a delayed neutron is a neutron emitted after a nuclear fission event, by one of the fission products (or actually, a fission product daughter after beta decay), any time from a few milliseconds to a few minutes after the fission event. Neutrons born within 10⁻¹⁴ seconds of the fission are termed "prompt neutrons". In a nuclear reactor large nuclides fission into two neutron-rich fission products (i.e. unstable nuclides) and free neutrons (prompt neutrons). Many of these fission products then undergo radioactive decay (usually beta decay) and the resulting nuclides are unstable with respect to beta decay. A small fraction of them are excited enough to be able to beta-decay by emitting a delayed neutron in addition to the beta. The moment of beta decay of the precursor nuclides – which are the precursors of the delayed neutrons – happens orders of magnitude later compared to the emission of the prompt neutrons. Hence the neutron that originates from the precursor's decay is termed a delayed neutron. The "delay" in the neutron emission is due to the delay in beta decay (which is slower since controlled by the weak force), since neutron emission, like gamma emission, is controlled by the strong nuclear force and thus either happens at fission, or nearly simultaneously with the beta decay, immediately after it. Delayed neutrons play an important role in nuclear reactor control and safety analysis. Principle. Delayed neutrons are associated with the beta decay of the fission products. After prompt fission neutron emission the residual fragments are still neutron rich and undergo a beta decay chain. The more neutron rich the fragment, the more energetic and faster the beta decay. In some cases the available energy in the beta decay is high enough to leave the residual nucleus in such a highly excited state that neutron emission instead of gamma emission occurs. Using U-235 as an example, this nucleus absorbs thermal neutrons, and the immediate mass products of a fission event are two large fission fragments, which are remnants of the formed U-236 nucleus. These fragments emit two or three free neutrons (2.47 on average), called "prompt" neutrons. A subsequent fission fragment occasionally undergoes a stage of radioactive decay (which is a beta minus decay) that yields a new nucleus (the emitter nucleus) in an excited state that emits an additional neutron, called a "delayed" neutron, to get to ground state. These neutron-emitting fission fragments are called delayed neutron precursor atoms. (Table: Delayed Neutron Data for Thermal Fission in U-235.) Importance in nuclear reactors. If a nuclear reactor happened to be prompt critical – even very slightly – the number of neutrons would increase exponentially at a high rate, and very quickly the reactor would become uncontrollable by means of external mechanisms. The control of the power rise would then be left to its intrinsic physical stability factors, like the thermal dilatation of the core, or the increased resonance absorptions of neutrons, that usually tend to decrease the reactor's reactivity when temperature rises; but the reactor would run the risk of being damaged or destroyed by heat. 
However, thanks to the delayed neutrons, it is possible to leave the reactor in a subcritical state as far as only prompt neutrons are concerned: the delayed neutrons come a moment later, just in time to sustain the chain reaction when it is going to die out. In that regime, neutron production overall still grows exponentially, but on a time scale that is governed by the delayed neutron production, which is slow enough to be controlled (just as an otherwise unstable bicycle can be balanced because human reflexes are quick enough on the time scale of its instability). Thus, by widening the margins of non-operation and supercriticality and allowing more time to regulate the reactor, the delayed neutrons are essential to inherent reactor safety, even in reactors requiring active control. The lower delayed-neutron percentage of plutonium (compared with uranium-235) makes the use of large percentages of plutonium in nuclear reactors more challenging. Fraction definitions. The precursor yield fraction β is defined as: formula_0 and it is equal to 0.0064 for U-235. The delayed neutron fraction (DNF) is defined as: formula_1 These two factors, β and "DNF", are almost the same thing, but not quite; they differ in the case of a rapid (faster than the decay time of the precursor atoms) change in the number of neutrons in the reactor. Another concept is the "effective fraction of delayed neutrons" βeff, which is the fraction of delayed neutrons weighted (over space, energy, and angle) on the adjoint neutron flux. This concept arises because delayed neutrons are emitted with an energy spectrum more thermalized relative to prompt neutrons. For low enriched uranium fuel working on a thermal neutron spectrum, the difference between the average and effective delayed neutron fractions can reach 50 pcm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\beta = \\frac{\\mbox{precursor atoms}}\n {\\mbox{prompt neutrons}+\\mbox{precursor atoms}}.\n" }, { "math_id": 1, "text": "\nDNF = \\frac{\\mbox{delayed neutrons}}\n {\\mbox{prompt neutrons}+\\mbox{delayed neutrons}}.\n" } ]
https://en.wikipedia.org/wiki?curid=1221457
1221466
Coulometry
Method of chemical analysis In analytical electrochemistry, coulometry determines the amount of matter transformed during an electrolysis reaction by measuring the amount of electricity (in coulombs) consumed or produced. It can be used for precision measurements of charge, and the amperes even used to have a coulometric definition. However, today coulometry is mainly used for analytical applications. It is named after Charles-Augustin de Coulomb. There are two basic categories of coulometric techniques. "Potentiostatic coulometry" involves holding the electric potential constant during the reaction using a potentiostat. The other, called "coulometric titration" or "amperostatic coulometry", keeps the current (measured in amperes) constant using an amperostat. Potentiostatic coulometry. Potentiostatic coulometry is a technique most commonly referred to as "bulk electrolysis". The working electrode is kept at a constant potential and the current that flows through the circuit is measured. This constant potential is applied long enough to fully reduce or oxidize all of the electroactive species in a given solution. As the electroactive molecules are consumed, the current also decreases, approaching zero when the conversion is complete. The sample mass, molecular mass, number of electrons in the electrode reaction, and number of electrons passed during the experiment are all related by Faraday's laws. It follows that, if three of the values are known, then the fourth can be calculated. Bulk electrolysis is often used to unambiguously assign the number of electrons consumed in a reaction observed through voltammetry. It also has the added benefit of producing a solution of a species (oxidation state) which may not be accessible through chemical routes. This species can then be isolated or further characterized while in solution. The rate of such reactions is not determined by the concentration of the solution, but rather the mass transfer of the electroactive species in the solution to the electrode surface. Rates will increase when the volume of the solution is decreased, the solution is stirred more rapidly, or the area of the working electrode is increased. Since mass transfer is so important the solution is stirred during a bulk electrolysis. However, this technique is generally not considered a hydrodynamic technique, since a laminar flow of solution against the electrode is neither the objective nor outcome of the stirring. The extent to which a reaction goes to completion is also related to how much greater the applied potential is than the reduction potential of interest. In the case where multiple reduction potentials are of interest, it is often difficult to set an electrolysis potential a "safe" distance (such as 200 mV) past a redox event. The result is incomplete conversion of the substrate, or else conversion of some of the substrate to the more reduced form. This factor must be considered when analyzing the current passed and when attempting to do further analysis/isolation/experiments with the substrate solution. An advantage to this kind of analysis over electrogravimetry is that it does not require that the product of the reaction be weighed. This is useful for reactions where the product does not deposit as a solid, such as the determination of the amount of arsenic in a sample from the electrolysis of arsenous acid (H3AsO3) to arsenic acid (H3AsO4). Coulometric titration. 
Coulometric titrations use a constant current system to accurately quantify the concentration of a species. In this experiment, the applied current is equivalent to a titrant. Current is applied to the unknown solution until all of the unknown species is either oxidized or reduced to a new state, at which point the potential of the working electrode shifts dramatically. This potential shift indicates the endpoint. The magnitude of the current (in amperes) and the duration of the current (seconds) can be used to determine the moles of the unknown species in solution. When the volume of the solution is known, then the molarity of the unknown species can be determined. Advantages of coulometric titration. Coulometric titration has the advantage that constant current sources for the generation of titrants are relatively easy to make. Applications. Karl Fischer reaction. The Karl Fischer reaction uses a coulometric titration to determine the amount of water in a sample. It can determine concentrations of water on the order of milligrams per liter. It is used to find the amount of water in substances such as butter, sugar, cheese, paper, and petroleum. The reaction involves converting solid iodine into hydrogen iodide in the presence of sulfur dioxide and water. Methanol is most often used as the solvent, but ethylene glycol and diethylene glycol also work. Pyridine is often used to prevent the buildup of sulfuric acid, although the use of imidazole and diethanolamine for this role is becoming more common. All reagents must be anhydrous for the analysis to be quantitative. The balanced chemical equation, using methanol and pyridine, is: formula_0 In this reaction, a single molecule of water reacts with a molecule of iodine. Since this technique is used to determine the water content of samples, atmospheric humidity could alter the results. Therefore, the system is usually isolated with drying tubes or placed in an inert gas container. In addition, the solvent will undoubtedly have some water in it, so the solvent's water content must be measured to compensate for this inaccuracy. To determine the amount of water in the sample, analysis must first be performed using either back or direct titration. In the direct method, just enough of the reagents will be added to completely use up all of the water. At this point in the titration, the current approaches zero. It is then possible to relate the amount of reagents used to the amount of water in the system via stoichiometry. The back-titration method is similar, but involves the addition of an excess of the reagent. This excess is then consumed by adding a known amount of a standard solution with known water content. The result reflects the water content of the sample and the standard solution. Since the amount of water in the standard solution is known, the difference reflects the water content of the sample. Determination of film thickness. Coulometry can be used in the determination of the thickness of metallic coatings. This is performed by measuring the quantity of electricity needed to dissolve a well-defined area of the coating. The film thickness formula_1 is proportional to the constant current formula_2 and the molecular weight formula_3 of the metal, and inversely proportional to the density formula_4 of the metal and the surface area formula_5: formula_6 The electrodes for this reaction are often a platinum electrode and an electrode that takes part in the reaction. 
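To turn the proportionality above into a concrete estimate, the Python sketch below applies Faraday's law explicitly (thickness = Q·M / (n·F·ρ·A)); the dissolution reaction, the number of electrons n, and all numerical inputs are assumptions chosen for illustration, not values from the text:

F = 96485.0  # Faraday constant, C/mol

def film_thickness_m(current_A, time_s, molar_mass_g_mol, n_electrons,
                     density_g_cm3, area_cm2):
    """Thickness of a dissolved metal film from Faraday's law."""
    charge = current_A * time_s                       # coulombs passed
    moles = charge / (n_electrons * F)                # mol of metal dissolved
    volume_cm3 = moles * molar_mass_g_mol / density_g_cm3
    return volume_cm3 / area_cm2 / 100.0              # cm -> m

# Illustration only: a tin film stripped as Sn -> Sn(2+) + 2 e-
# (M = 118.7 g/mol, rho = 7.3 g/cm3) at 10 mA over 1 cm2 for 120 s.
t = film_thickness_m(0.010, 120.0, 118.7, 2, 7.3, 1.0)
print(f"{t * 1e6:.2f} micrometres")                   # ~1.0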
For tin coating on a copper wire, a tin electrode is used, while a sodium chloride-zinc sulfate electrode would be used to determine the zinc film on a piece of steel. Special cells have been created to adhere to the surface of the metal to measure its thickness. These are basically columns containing the internal electrodes, with magnets or weights to attach them to the surface. The results obtained by this coulometric method are similar to those achieved by other chemical and metallurgic techniques. Coulometers. Electronic coulometer. The electronic coulometer is based on the application of the operational amplifier in the "integrator"-type circuit. The current passed through the resistor R1 produces a potential drop which is integrated by the operational amplifier on the capacitor plates; the higher the current, the larger the potential drop. The current need not be constant. In such a scheme, "V"out is proportional to the passed charge. The sensitivity of the coulometer can be changed by choosing an appropriate value of "R"1. Electrochemical coulometers. There are three common types of coulometers based on electrochemical processes. "Voltameter" is a synonym for "coulometer". References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\ce{[C5H5NH]SO3CH3 + I2 + H2O + 2 C5H5N -> [C5H5NH]SO4CH3 + 2 [C5H5NH]I}\n" }, { "math_id": 1, "text": "\\Delta" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "\\triangle \\propto \\frac{iM}{A\\rho}" } ]
https://en.wikipedia.org/wiki?curid=1221466
1221571
Rescorla–Wagner model
The Rescorla–Wagner model ("R-W") is a model of classical conditioning, in which learning is conceptualized in terms of associations between conditioned (CS) and unconditioned (US) stimuli. A strong CS-US association means that the CS signals predict the US. One might say that before conditioning, the subject is surprised by the US, but after conditioning, the subject is no longer surprised, because the CS predicts the coming of the US. The model casts the conditioning processes into discrete trials, during which stimuli may be either present or absent. The strength of prediction of the US on a trial can be represented as the summed associative strengths of all CSs present during the trial. This feature of the model represented a major advance over previous models, and it allowed a straightforward explanation of important experimental phenomena, most notably the blocking effect. Failures of the model have led to modifications, alternative models, and many additional findings. The model has had some impact on neural science in recent years, as studies have suggested that the phasic activity of dopamine neurons in mesostriatal DA projections in the midbrain encodes for the type of prediction error detailed in the model. The Rescorla–Wagner model was created by Yale psychologists Robert A. Rescorla and Allan R. Wagner in 1972. Basic assumptions of the model. The first two assumptions were new in the Rescorla–Wagner model. The last three assumptions were present in previous models and are less crucial to the R-W model's novel predictions. formula_0 Equation. and formula_1 where The revised RW model by Van Hamme and Wasserman (1994). Van Hamme and Wasserman have extended the original Rescorla–Wagner (RW) model and introduced a new factor in their revised RW model in 1994: They suggested that not only conditioned stimuli physically present on a given trial can undergo changes in their associative strength, the associative value of a CS can also be altered by a within-compound-association with a CS present on that trial. A within-compound-association is established if two CSs are presented together during training (compound stimulus). If one of the two component CSs is subsequently presented alone, then it is assumed to activate a representation of the other (previously paired) CS as well. Van Hamme and Wasserman propose that stimuli indirectly activated through within-compound-associations have a negative learning parameter—thus phenomena of retrospective reevaluation can be explained. Consider the following example, an experimental paradigm called "backward blocking," indicative of retrospective revaluation, where AB is the compound stimulus A+B: Test trials: Group 1, which received both Phase 1- and 2-trials, elicits a weaker conditioned response (CR) to B compared to the Control group, which only received Phase 1-trials. The original RW model cannot account for this effect. But the revised model can: In Phase 2, stimulus B is indirectly activated through within-compound-association with A. But instead of a positive learning parameter (usually called alpha) when physically present, during Phase 2, B has a negative learning parameter. Thus during the second phase, B's associative strength declines whereas A's value increases because of its positive learning parameter. Thus, the revised RW model can explain why the CR elicited by B after backward blocking training is weaker compared with AB-only conditioning. 
It is a well-established observation that a time-out interval after completion of extinction results in partial recovery from extinction, i.e., the previously extinguished reaction or response recurs—but usually at a lower level than before extinction training. Reinstatement refers to the phenomenon that exposure to the US from training alone after completion of extinction results in partial recovery from extinction. The RW model cannot account for those phenomena. The RW model predicts that repeated presentation of a conditioned inhibitor alone (a CS with negative associative strength) results in extinction of this stimulus (a decline of its negative associative value). This is a false prediction. On the contrary, experiments show that the repeated presentation of a conditioned inhibitor alone even increases its inhibitory potential. One of the assumptions of the model is that the history of conditioning of a CS does not have any influence on its present status—only its current associative value is important. Contrary to this assumption, many experiments show that stimuli that were first conditioned and then extinguished are more easily reconditioned (i.e., fewer trials are necessary for conditioning). The RW model also assumes that excitation and inhibition are opponent features. A stimulus can either have excitatory potential (a positive associative strength) or inhibitory potential (a negative associative strength), but not both. By contrast, it is sometimes observed that stimuli can have both qualities. One example is backward excitatory conditioning in which a CS is backwardly paired with a US (US–CS instead of CS–US). This usually makes the CS become a conditioned excitor. The stimulus also has inhibitory features which can be proven by the retardation of acquisition test. This test is used to assess the inhibitory potential of a stimulus since it is observed that excitatory conditioning with a previously conditioned inhibitor is retarded. The backwardly conditioned stimulus passes this test and thus seems to have both excitatory and inhibitory features. A conditioned inhibitor is assumed to have a negative associative value. By presenting an inhibitor with a novel stimulus (i.e., its associative strength is zero), the model predicts that the novel cue should become a conditioned excitor. This is not the case in experimental situations. The predictions of the model stem from its basic term (lambda-V). Since the summed associative strength of all stimuli (V) present on the trial is negative (zero + inhibitory potential) and lambda is zero (no US present), the resulting change in the associative strength is positive, thus making the novel cue a conditioned excitor. The CS-preexposure effect (also called latent inhibition) is the well-established observation that conditioning after exposure to the stimulus later used as the CS in conditioning is retarded. The RW model does not predict any effect of presenting a novel stimulus without a US. In higher-order conditioning a previously conditioned CS is paired with a novel cue (i.e., first CS1–US then CS2–CS1). This usually makes the novel cue CS2 elicit similar reactions to the CS1. The model cannot account for this phenomenon since during CS2–CS1 trials, no US is present. But by allowing CS1 to act similarly to a US, one can reconcile the model with this effect. Sensory preconditioning refers to first pairing two novel cues (CS1–CS2) and then pairing one of them with a US (CS2–US). This turns both CS1 and CS2 into conditioned excitors. 
The RW model cannot explain this, since during the CS1–CS2 phase both stimuli have an associative value of zero and lambda is also zero (no US present), which results in no change in the associative strength of the stimuli. Success and popularity. The Rescorla–Wagner model owes its success to several factors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta V^{n+1}_X = \\alpha_X \\beta (\\lambda - V_{\\mathrm{tot}})" }, { "math_id": 1, "text": "V^{n+1}_X = V^n_X + \\Delta V^{n+1}_X" }, { "math_id": 2, "text": "\\Delta V_X" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\beta" }, { "math_id": 5, "text": "\\lambda" }, { "math_id": 6, "text": "V_X" }, { "math_id": 7, "text": "V_{\\mathrm{tot}}" } ]
https://en.wikipedia.org/wiki?curid=1221571
12216
Georg Cantor
German mathematician (1845–1918) Georg Ferdinand Ludwig Philipp Cantor (3 March [O.S. 19 February] 1845 – 6 January 1918) was a mathematician who played a pivotal role in the creation of set theory, which has become a fundamental theory in mathematics. Cantor established the importance of one-to-one correspondence between the members of two sets, defined infinite and well-ordered sets, and proved that the real numbers are more numerous than the natural numbers. Cantor's method of proof of this theorem implies the existence of an infinity of infinities. He defined the cardinal and ordinal numbers and their arithmetic. Cantor's work is of great philosophical interest, a fact he was well aware of. Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections; see Controversy over Cantor's theory. Cantor, a devout Lutheran Christian, believed the theory had been communicated to him by God. Some Christian theologians (particularly neo-Scholastics) saw Cantor's work as a challenge to the uniqueness of the absolute infinity in the nature of God – on one occasion equating the theory of transfinite numbers with pantheism – a proposition that Cantor vigorously rejected. Not all theologians were against Cantor's theory; prominent neo-scholastic philosopher Constantin Gutberlet was in favor of it and Cardinal Johann Baptist Franzelin accepted it as a valid theory (after Cantor made some important clarifications). The objections to Cantor's work were occasionally fierce: Leopold Kronecker's public opposition and personal attacks included describing Cantor as a "scientific charlatan", a "renegade" and a "corrupter of youth". Kronecker objected to Cantor's proofs that the algebraic numbers are countable, and that the transcendental numbers are uncountable, results now included in a standard mathematics curriculum. Writing decades after Cantor's death, Wittgenstein lamented that mathematics is "ridden through and through with the pernicious idioms of set theory", which he dismissed as "utter nonsense" that is "laughable" and "wrong". Cantor's recurring bouts of depression from 1884 to the end of his life have been blamed on the hostile attitude of many of his contemporaries, though some have explained these episodes as probable manifestations of a bipolar disorder. The harsh criticism has been matched by later accolades. In 1904, the Royal Society awarded Cantor its Sylvester Medal, the highest honor it can confer for work in mathematics. David Hilbert defended it from its critics by declaring, "No one shall expel us from the paradise that Cantor has created." Biography. Youth and studies. Georg Cantor, born in 1845 in Saint Petersburg, Russian Empire, was brought up in that city until the age of eleven. The oldest of six children, he was regarded as an outstanding violinist. His grandfather Franz Böhm (1788–1846) (the violinist Joseph Böhm's brother) was a well-known musician and soloist in a Russian imperial orchestra. Cantor's father had been a member of the Saint Petersburg stock exchange; when he became ill, the family moved to Germany in 1856, first to Wiesbaden, then to Frankfurt, seeking milder winters than those of Saint Petersburg.
In 1860, Cantor graduated with distinction from the Realschule in Darmstadt; his exceptional skills in mathematics, trigonometry in particular, were noted. In August 1862, he then graduated from the "Höhere Gewerbeschule Darmstadt", now the Technische Universität Darmstadt. In 1862 Cantor entered the Swiss Federal Polytechnic in Zurich. After receiving a substantial inheritance upon his father's death in June 1863, Cantor transferred to the University of Berlin, attending lectures by Leopold Kronecker, Karl Weierstrass and Ernst Kummer. He spent the summer of 1866 at the University of Göttingen, then and later a center for mathematical research. Cantor was a good student, and he received his doctoral degree in 1867. Teacher and researcher. Cantor submitted his dissertation on number theory at the University of Berlin in 1867. After teaching briefly in a Berlin girls' school, he took up a position at the University of Halle, where he spent his entire career. He was awarded the requisite habilitation for his thesis, also on number theory, which he presented in 1869 upon his appointment at Halle University. In 1874, Cantor married Vally Guttmann. They had six children, the last (Rudolph) born in 1886. Cantor was able to support a family despite his modest academic pay, thanks to his inheritance from his father. During his honeymoon in the Harz mountains, Cantor spent much time in mathematical discussions with Richard Dedekind, whom he had met at Interlaken in Switzerland two years earlier while on holiday. Cantor was promoted to extraordinary professor in 1872 and made full professor in 1879. To attain the latter rank at the age of 34 was a notable accomplishment, but Cantor desired a chair at a more prestigious university, in particular at Berlin, at that time the leading German university. However, his work encountered too much opposition for that to be possible. Kronecker, who headed mathematics at Berlin until his death in 1891, became increasingly uncomfortable with the prospect of having Cantor as a colleague, perceiving him as a "corrupter of youth" for teaching his ideas to a younger generation of mathematicians. Worse yet, Kronecker, a well-established figure within the mathematical community and Cantor's former professor, disagreed fundamentally with the thrust of Cantor's work ever since he had intentionally delayed the publication of Cantor's first major publication in 1874. Kronecker, now seen as one of the founders of the constructive viewpoint in mathematics, disliked much of Cantor's set theory because it asserted the existence of sets satisfying certain properties, without giving specific examples of sets whose members did indeed satisfy those properties. Whenever Cantor applied for a post in Berlin, he was declined, and the process usually involved Kronecker, so Cantor came to believe that Kronecker's stance would make it impossible for him ever to leave Halle. In 1881, Cantor's Halle colleague Eduard Heine died. Halle accepted Cantor's suggestion that Heine's vacant chair be offered to Dedekind, Heinrich M. Weber and Franz Mertens, in that order, but each declined the chair after being offered it. Friedrich Wangerin was eventually appointed, but he was never close to Cantor. In 1882, the mathematical correspondence between Cantor and Dedekind came to an end, apparently as a result of Dedekind's declining the chair at Halle. 
Cantor also began another important correspondence, with Gösta Mittag-Leffler in Sweden, and soon began to publish in Mittag-Leffler's journal "Acta Mathematica". But in 1885, Mittag-Leffler was concerned about the philosophical nature and new terminology in a paper Cantor had submitted to "Acta". He asked Cantor to withdraw the paper from "Acta" while it was in proof, writing that it was "... about one hundred years too soon." Cantor complied, but then curtailed his relationship and correspondence with Mittag-Leffler, writing to a third party, "Had Mittag-Leffler had his way, I should have to wait until the year 1984, which to me seemed too great a demand! ... But of course I never want to know anything again about "Acta Mathematica"." Cantor suffered his first known bout of depression in May 1884. Criticism of his work weighed on his mind: every one of the fifty-two letters he wrote to Mittag-Leffler in 1884 mentioned Kronecker. A passage from one of these letters is revealing of the damage to Cantor's self-confidence: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... I don't know when I shall return to the continuation of my scientific work. At the moment I can do absolutely nothing with it, and limit myself to the most necessary duty of my lectures; how much happier I would be to be scientifically active, if only I had the necessary mental freshness. This crisis led him to apply to lecture on philosophy rather than on mathematics. He also began an intense study of Elizabethan literature, thinking there might be evidence that Francis Bacon wrote the plays attributed to William Shakespeare (see Shakespearean authorship question); this ultimately resulted in two pamphlets, published in 1896 and 1897. Cantor recovered soon thereafter, and subsequently made further important contributions, including his diagonal argument and theorem. However, he never again attained the high level of his remarkable papers of 1874–84, even after Kronecker's death on December 29, 1891. He eventually sought, and achieved, a reconciliation with Kronecker. Nevertheless, the philosophical disagreements and difficulties dividing them persisted. In 1889, Cantor was instrumental in founding the German Mathematical Society, and he chaired its first meeting in Halle in 1891, where he first introduced his diagonal argument; his reputation was strong enough, despite Kronecker's opposition to his work, to ensure he was elected as the first president of this society. Setting aside the animosity Kronecker had displayed towards him, Cantor invited him to address the meeting, but Kronecker was unable to do so because his wife was dying from injuries sustained in a skiing accident at the time. Georg Cantor was also instrumental in the establishment of the first International Congress of Mathematicians, which took place in Zürich, Switzerland, in 1897. Later years and death. After Cantor's 1884 hospitalization there is no record that he was in any sanatorium again until 1899. Soon after that second hospitalization, Cantor's youngest son Rudolph died suddenly on December 16 (Cantor was delivering a lecture on his views on Baconian theory and William Shakespeare), and this tragedy drained Cantor of much of his passion for mathematics. Cantor was again hospitalized in 1903. One year later, he was outraged and agitated by a paper presented by Julius König at the Third International Congress of Mathematicians. The paper attempted to prove that the basic tenets of transfinite set theory were false. 
Since the paper had been read in front of his daughters and colleagues, Cantor perceived himself as having been publicly humiliated. Although Ernst Zermelo demonstrated less than a day later that König's proof had failed, Cantor remained shaken, and momentarily questioning God. Cantor suffered from chronic depression for the rest of his life, for which he was excused from teaching on several occasions and repeatedly confined to various sanatoria. The events of 1904 preceded a series of hospitalizations at intervals of two or three years. He did not abandon mathematics completely, however, lecturing on the paradoxes of set theory (Burali-Forti paradox, Cantor's paradox, and Russell's paradox) to a meeting of the "Deutsche Mathematiker-Vereinigung" in 1903, and attending the International Congress of Mathematicians at Heidelberg in 1904. In 1911, Cantor was one of the distinguished foreign scholars invited to the 500th anniversary of the founding of the University of St. Andrews in Scotland. Cantor attended, hoping to meet Bertrand Russell, whose newly published "Principia Mathematica" repeatedly cited Cantor's work, but the encounter did not come about. The following year, St. Andrews awarded Cantor an honorary doctorate, but illness precluded his receiving the degree in person. Cantor retired in 1913, and lived in poverty and suffered from malnourishment during World War I. The public celebration of his 70th birthday was canceled because of the war. In June 1917, he entered a sanatorium for the last time and continually wrote to his wife asking to be allowed to go home. Georg Cantor had a fatal heart attack on January 6, 1918, in the sanatorium where he had spent the last year of his life. Mathematical work. Cantor's work between 1874 and 1884 is the origin of set theory. Prior to this work, the concept of a set was a rather elementary one that had been used implicitly since the beginning of mathematics, dating back to the ideas of Aristotle. No one had realized that set theory had any nontrivial content. Before Cantor, there were only finite sets (which are easy to understand) and "the infinite" (which was considered a topic for philosophical, rather than mathematical, discussion). By proving that there are (infinitely) many possible sizes for infinite sets, Cantor established that set theory was not trivial, and it needed to be studied. Set theory has come to play the role of a foundational theory in modern mathematics, in the sense that it interprets propositions about mathematical objects (for example, numbers and functions) from all the traditional areas of mathematics (such as algebra, analysis, and topology) in a single theory, and provides a standard set of axioms to prove or disprove them. The basic concepts of set theory are now used throughout mathematics. In one of his earliest papers, Cantor proved that the set of real numbers is "more numerous" than the set of natural numbers; this showed, for the first time, that there exist infinite sets of different sizes. He was also the first to appreciate the importance of one-to-one correspondences (hereinafter denoted "1-to-1 correspondence") in set theory. He used this concept to define finite and infinite sets, subdividing the latter into denumerable (or countably infinite) sets and nondenumerable sets (uncountably infinite sets). Cantor developed important concepts in topology and their relation to cardinality. 
For example, he showed that the Cantor set, discovered by Henry John Stephen Smith in 1875, is nowhere dense, but has the same cardinality as the set of all real numbers, whereas the rationals are everywhere dense, but countable. He also showed that all countable dense linear orders without end points are order-isomorphic to the rational numbers. Cantor introduced fundamental constructions in set theory, such as the power set of a set "A", which is the set of all possible subsets of "A". He later proved that the size of the power set of "A" is strictly larger than the size of "A", even when "A" is an infinite set; this result soon became known as Cantor's theorem. Cantor developed an entire theory and arithmetic of infinite sets, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter formula_0 (ℵ, aleph) with a natural number subscript; for the ordinals he employed the Greek letter formula_1 (ω, omega). This notation is still in use today. The "Continuum hypothesis", introduced by Cantor, was presented by David Hilbert as the first of his twenty-three open problems in his address at the 1900 International Congress of Mathematicians in Paris. Cantor's work also attracted favorable notice beyond Hilbert's celebrated encomium. The US philosopher Charles Sanders Peirce praised Cantor's set theory and, following public lectures delivered by Cantor at the first International Congress of Mathematicians, held in Zürich in 1897, Adolf Hurwitz and Jacques Hadamard also both expressed their admiration. At that Congress, Cantor renewed his friendship and correspondence with Dedekind. From 1905, Cantor corresponded with his British admirer and translator Philip Jourdain on the history of set theory and on Cantor's religious ideas. This was later published, as were several of his expository works. Number theory, trigonometric series and ordinals. Cantor's first ten papers were on number theory, his thesis topic. At the suggestion of Eduard Heine, the Professor at Halle, Cantor turned to analysis. Heine proposed that Cantor solve an open problem that had eluded Peter Gustav Lejeune Dirichlet, Rudolf Lipschitz, Bernhard Riemann, and Heine himself: the uniqueness of the representation of a function by trigonometric series. Cantor solved this problem in 1869. It was while working on this problem that he discovered transfinite ordinals, which occurred as indices "n" in the "n"th derived set "S""n" of a set "S" of zeros of a trigonometric series. Given a trigonometric series f(x) with "S" as its set of zeros, Cantor had discovered a procedure that produced another trigonometric series that had "S"1 as its set of zeros, where "S"1 is the set of limit points of "S". If "S""k+1" is the set of limit points of "S""k", then he could construct a trigonometric series whose zeros are "S""k+1". Because the sets "S""k" were closed, they contained their limit points, and the intersection of the infinite decreasing sequence of sets "S", "S"1, "S"2, "S"3... formed a limit set, which we would now call "S""ω", and then he noticed that "S"ω would also have to have a set of limit points "S"ω+1, and so on. He had examples that went on forever, and so here was a naturally occurring infinite sequence of infinite numbers "ω", "ω" + 1, "ω" + 2, ... Between 1870 and 1872, Cantor published more papers on trigonometric series, and also a paper defining irrational numbers as convergent sequences of rational numbers. 
Dedekind, whom Cantor befriended in 1872, cited this paper later that year, in the paper where he first set out his celebrated definition of real numbers by Dedekind cuts. While extending the notion of number by means of his revolutionary concept of infinite cardinality, Cantor was paradoxically opposed to theories of infinitesimals of his contemporaries Otto Stolz and Paul du Bois-Reymond, describing them as both "an abomination" and "a cholera bacillus of mathematics". Cantor also published an erroneous "proof" of the inconsistency of infinitesimals. Set theory. The beginning of set theory as a branch of mathematics is often marked by the publication of Cantor's 1874 paper, "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"). This paper was the first to provide a rigorous proof that there was more than one kind of infinity. Previously, all infinite collections had been implicitly assumed to be equinumerous (that is, of "the same size" or having the same number of elements). Cantor proved that the collection of real numbers and the collection of positive integers are not equinumerous. In other words, the real numbers are not countable. His proof differs from the diagonal argument that he gave in 1891. Cantor's article also contains a new method of constructing transcendental numbers. Transcendental numbers were first constructed by Joseph Liouville in 1844. Cantor established these results using two constructions. His first construction shows how to write the real algebraic numbers as a sequence "a"1, "a"2, "a"3, ... In other words, the real algebraic numbers are countable. Cantor starts his second construction with any sequence of real numbers. Using this sequence, he constructs nested intervals whose intersection contains a real number not in the sequence. Since every sequence of real numbers can be used to construct a real not in the sequence, the real numbers cannot be written as a sequence – that is, the real numbers are not countable. By applying his construction to the sequence of real algebraic numbers, Cantor produces a transcendental number. Cantor points out that his constructions prove more – namely, they provide a new proof of Liouville's theorem: Every interval contains infinitely many transcendental numbers. Cantor's next article contains a construction that proves the set of transcendental numbers has the same "power" (see below) as the set of real numbers. Between 1879 and 1884, Cantor published a series of six articles in "Mathematische Annalen" that together formed an introduction to his set theory. At the same time, there was growing opposition to Cantor's ideas, led by Leopold Kronecker, who admitted mathematical concepts only if they could be constructed in a finite number of steps from the natural numbers, which he took as intuitively given. For Kronecker, Cantor's hierarchy of infinities was inadmissible, since accepting the concept of actual infinity would open the door to paradoxes which would challenge the validity of mathematics as a whole. Cantor also introduced the Cantor set during this period. The fifth paper in this series, "Grundlagen einer allgemeinen Mannigfaltigkeitslehre" ("Foundations of a General Theory of Aggregates"), published in 1883, was the most important of the six and was also published as a separate monograph. It contained Cantor's reply to his critics and showed how the transfinite numbers were a systematic extension of the natural numbers. 
It begins by defining well-ordered sets. Ordinal numbers are then introduced as the order types of well-ordered sets. Cantor then defines the addition and multiplication of the cardinal and ordinal numbers. In 1885, Cantor extended his theory of order types so that the ordinal numbers simply became a special case of order types. In 1891, he published a paper containing his elegant "diagonal argument" for the existence of an uncountable set. He applied the same idea to prove Cantor's theorem: the cardinality of the power set of a set "A" is strictly larger than the cardinality of "A". This established the richness of the hierarchy of infinite sets, and of the cardinal and ordinal arithmetic that Cantor had defined. His argument is fundamental in the solution of the halting problem and the proof of Gödel's first incompleteness theorem. Cantor wrote on the Goldbach conjecture in 1894. In 1895 and 1897, Cantor published a two-part paper in "Mathematische Annalen" under Felix Klein's editorship; these were his last significant papers on set theory. The first paper begins by defining set, subset, etc., in ways that would be largely acceptable now. The cardinal and ordinal arithmetic are reviewed. Cantor wanted the second paper to include a proof of the continuum hypothesis, but had to settle for expositing his theory of well-ordered sets and ordinal numbers. Cantor attempts to prove that if "A" and "B" are sets with "A" equivalent to a subset of "B" and "B" equivalent to a subset of "A", then "A" and "B" are equivalent. Ernst Schröder had stated this theorem a bit earlier, but his proof, as well as Cantor's, was flawed. Felix Bernstein supplied a correct proof in his 1898 PhD thesis; hence the name Cantor–Bernstein–Schröder theorem. One-to-one correspondence. Cantor's 1874 Crelle paper was the first to invoke the notion of a 1-to-1 correspondence, though he did not use that phrase. He then began looking for a 1-to-1 correspondence between the points of the unit square and the points of a unit line segment. In an 1877 letter to Richard Dedekind, Cantor proved a far stronger result: for any positive integer "n", there exists a 1-to-1 correspondence between the points on the unit line segment and all of the points in an "n"-dimensional space. About this discovery Cantor wrote to Dedekind: "I see it, but I don't believe it!" The result that he found so astonishing has implications for geometry and the notion of dimension. In 1878, Cantor submitted another paper to Crelle's Journal, in which he defined precisely the concept of a 1-to-1 correspondence and introduced the notion of "power" (a term he took from Jakob Steiner) or "equivalence" of sets: two sets are equivalent (have the same power) if there exists a 1-to-1 correspondence between them. Cantor defined countable sets (or denumerable sets) as sets which can be put into a 1-to-1 correspondence with the natural numbers, and proved that the rational numbers are denumerable. He also proved that "n"-dimensional Euclidean space R"n" has the same power as the real numbers R, as does a countably infinite product of copies of R. While he made free use of countability as a concept, he did not write the word "countable" until 1883. Cantor also discussed his thinking about dimension, stressing that his mapping between the unit interval and the unit square was not a continuous one. This paper displeased Kronecker and Cantor wanted to withdraw it; however, Dedekind persuaded him not to do so and Karl Weierstrass supported its publication.
Nevertheless, Cantor never again submitted anything to Crelle. Continuum hypothesis. Cantor was the first to formulate what later came to be known as the continuum hypothesis or CH: there exists no set whose power is greater than that of the naturals and less than that of the reals (or equivalently, the cardinality of the reals is "exactly" aleph-one, rather than just "at least" aleph-one). Cantor believed the continuum hypothesis to be true and tried for many years to prove it, in vain. His inability to prove the continuum hypothesis caused him considerable anxiety. The difficulty Cantor had in proving the continuum hypothesis has been underscored by later developments in the field of mathematics: a 1940 result by Kurt Gödel and a 1963 one by Paul Cohen together imply that the continuum hypothesis can be neither proved nor disproved using standard Zermelo–Fraenkel set theory plus the axiom of choice (the combination referred to as "ZFC"). Absolute infinite, well-ordering theorem, and paradoxes. In 1883, Cantor divided the infinite into the transfinite and the absolute. The transfinite is increasable in magnitude, while the absolute is unincreasable. For example, an ordinal α is transfinite because it can be increased to α + 1. On the other hand, the ordinals form an absolutely infinite sequence that cannot be increased in magnitude because there are no larger ordinals to add to it. In 1883, Cantor also introduced the well-ordering principle "every set can be well-ordered" and stated that it is a "law of thought". Cantor extended his work on the absolute infinite by using it in a proof. Around 1895, he began to regard his well-ordering principle as a theorem and attempted to prove it. In 1899, he sent Dedekind a proof of the equivalent aleph theorem: the cardinality of every infinite set is an aleph. First, he defined two types of multiplicities: consistent multiplicities (sets) and inconsistent multiplicities (absolutely infinite multiplicities). Next he assumed that the ordinals form a set, proved that this leads to a contradiction, and concluded that the ordinals form an inconsistent multiplicity. He used this inconsistent multiplicity to prove the aleph theorem. In 1932, Zermelo criticized the construction in Cantor's proof. Cantor avoided paradoxes by recognizing that there are two types of multiplicities. In his set theory, when it is assumed that the ordinals form a set, the resulting contradiction implies only that the ordinals form an inconsistent multiplicity. In contrast, Bertrand Russell treated all collections as sets, which leads to paradoxes. In Russell's set theory, the ordinals form a set, so the resulting contradiction implies that the theory is inconsistent. From 1901 to 1903, Russell discovered three paradoxes implying that his set theory is inconsistent: the Burali-Forti paradox (which was just mentioned), Cantor's paradox, and Russell's paradox. Russell named paradoxes after Cesare Burali-Forti and Cantor even though neither of them believed that they had found paradoxes. In 1908, Zermelo published his axiom system for set theory. He had two motivations for developing the axiom system: eliminating the paradoxes and securing his proof of the well-ordering theorem. Zermelo had proved this theorem in 1904 using the axiom of choice, but his proof was criticized for a variety of reasons. His response to the criticism included his axiom system and a new proof of the well-ordering theorem. 
His axioms support this new proof, and they eliminate the paradoxes by restricting the formation of sets. In 1923, John von Neumann developed an axiom system that eliminates the paradoxes by using an approach similar to Cantor's—namely, by identifying collections that are not sets and treating them differently. Von Neumann stated that a class is too big to be a set if it can be put into one-to-one correspondence with the class of all sets. He defined a set as a class that is a member of some class and stated the axiom: A class is not a set if and only if there is a one-to-one correspondence between it and the class of all sets. This axiom implies that these big classes are not sets, which eliminates the paradoxes since they cannot be members of any class. Von Neumann also used his axiom to prove the well-ordering theorem: Like Cantor, he assumed that the ordinals form a set. The resulting contradiction implies that the class of all ordinals is not a set. Then his axiom provides a one-to-one correspondence between this class and the class of all sets. This correspondence well-orders the class of all sets, which implies the well-ordering theorem. In 1930, Zermelo defined models of set theory that satisfy von Neumann's axiom. Philosophy, religion, literature and Cantor's mathematics. The concept of the existence of an actual infinity was an important shared concern within the realms of mathematics, philosophy and religion. Preserving the orthodoxy of the relationship between God and mathematics, although not in the same form as held by his critics, was long a concern of Cantor's. He directly addressed this intersection between these disciplines in the introduction to his "Grundlagen einer allgemeinen Mannigfaltigkeitslehre", where he stressed the connection between his view of the infinite and the philosophical one. To Cantor, his mathematical views were intrinsically linked to their philosophical and theological implications – he identified the absolute infinite with God, and he considered his work on transfinite numbers to have been directly communicated to him by God, who had chosen Cantor to reveal them to the world. He was a devout Lutheran whose explicit Christian beliefs shaped his philosophy of science. Joseph Dauben has traced the effect Cantor's Christian convictions had on the development of transfinite set theory. Debate among mathematicians grew out of opposing views in the philosophy of mathematics regarding the nature of actual infinity. Some held to the view that infinity was an abstraction which was not mathematically legitimate, and denied its existence. Mathematicians from three major schools of thought (constructivism and its two offshoots, intuitionism and finitism) opposed Cantor's theories in this matter. For constructivists such as Kronecker, this rejection of actual infinity stems from fundamental disagreement with the idea that nonconstructive proofs such as Cantor's diagonal argument are sufficient proof that something exists, holding instead that constructive proofs are required. Intuitionism also rejects the idea that actual infinity is an expression of any sort of reality, but arrive at the decision via a different route than constructivism. Firstly, Cantor's argument rests on logic to prove the existence of transfinite numbers as an actual mathematical entity, whereas intuitionists hold that mathematical entities cannot be reduced to logical propositions, originating instead in the intuitions of the mind. 
Secondly, the notion of infinity as an expression of reality is itself disallowed in intuitionism, since the human mind cannot intuitively construct an infinite set. Mathematicians such as L. E. J. Brouwer and especially Henri Poincaré adopted an intuitionist stance against Cantor's work. Finally, Wittgenstein's attacks were finitist: he believed that Cantor's diagonal argument conflated the intension of a set of cardinal or real numbers with its extension, thus conflating the concept of rules for generating a set with an actual set. Some Christian theologians saw Cantor's work as a challenge to the uniqueness of the absolute infinity in the nature of God. In particular, neo-Thomist thinkers saw the existence of an actual infinity that consisted of something other than God as jeopardizing "God's exclusive claim to supreme infinity". Cantor strongly believed that this view was a misinterpretation of infinity, and was convinced that set theory could help correct this mistake: "... the transfinite species are just as much at the disposal of the intentions of the Creator and His absolute boundless will as are the finite numbers.". Prominent neo-scholastic German philosopher Constantin Gutberlet was in favor of such theory, holding that it didn't oppose the nature of God. Cantor also believed that his theory of transfinite numbers ran counter to both materialism and determinism – and was shocked when he realized that he was the only faculty member at Halle who did "not" hold to deterministic philosophical beliefs. It was important to Cantor that his philosophy provided an "organic explanation" of nature, and in his 1883 "Grundlagen", he said that such an explanation could only come about by drawing on the resources of the philosophy of Spinoza and Leibniz. In making these claims, Cantor may have been influenced by F. A. Trendelenburg, whose lecture courses he attended at Berlin, and in turn Cantor produced a Latin commentary on Book 1 of Spinoza's "Ethica". Trendelenburg was also the examiner of Cantor's "Habilitationsschrift". In 1888, Cantor published his correspondence with several philosophers on the philosophical implications of his set theory. In an extensive attempt to persuade other Christian thinkers and authorities to adopt his views, Cantor had corresponded with Christian philosophers such as Tilman Pesch and Joseph Hontheim, as well as theologians such as Cardinal Johann Baptist Franzelin, who once replied by equating the theory of transfinite numbers with pantheism. Although later this Cardinal accepted the theory as valid, due to some clarifications from Cantor's. Cantor even sent one letter directly to Pope Leo XIII himself, and addressed several pamphlets to him. Cantor's philosophy on the nature of numbers led him to affirm a belief in the freedom of mathematics to posit and prove concepts apart from the realm of physical phenomena, as expressions within an internal reality. The only restrictions on this metaphysical system are that all mathematical concepts must be devoid of internal contradiction, and that they follow from existing definitions, axioms, and theorems. This belief is summarized in his assertion that "the essence of mathematics is its freedom." These ideas parallel those of Edmund Husserl, whom Cantor had met in Halle. Meanwhile, Cantor himself was fiercely opposed to infinitesimals, describing them as both an "abomination" and "the cholera bacillus of mathematics". 
Cantor's 1883 paper reveals that he was well aware of the opposition his ideas were encountering: "... I realize that in this undertaking I place myself in a certain opposition to views widely held concerning the mathematical infinite and to opinions frequently defended on the nature of numbers." Hence he devotes much space to justifying his earlier work, asserting that mathematical concepts may be freely introduced as long as they are free of contradiction and defined in terms of previously accepted concepts. He also cites Aristotle, René Descartes, George Berkeley, Gottfried Leibniz, and Bernard Bolzano on infinity. Instead, he always strongly rejected Immanuel Kant's philosophy, in the realms of both the philosophy of mathematics and metaphysics. He shared B. Russell's motto "Kant or Cantor", and defined Kant "yonder sophistical Philistine who knew so little mathematics." Cantor's ancestry. Cantor's paternal grandparents were from Copenhagen and fled to Russia from the disruption of the Napoleonic Wars. There is very little direct information on them. Cantor's father, Georg Waldemar Cantor, was educated in the Lutheran mission in Saint Petersburg, and his correspondence with his son shows both of them as devout Lutherans. Very little is known for sure about Georg Waldemar's origin or education. Cantor's mother, Maria Anna Böhm, was an Austro-Hungarian born in Saint Petersburg and baptized Roman Catholic; she converted to Protestantism upon marriage. However, there is a letter from Cantor's brother Louis to their mother, stating: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; ("Even if we were descended from Jews ten times over, and even though I may be, in principle, completely in favour of equal rights for Hebrews, in social life I prefer Christians...") which could be read to imply that she was of Jewish ancestry. According to biographer Eric Temple Bell, Cantor was of Jewish descent, although both parents were baptized. In a 1971 article entitled "Towards a Biography of Georg Cantor", the British historian of mathematics Ivor Grattan-Guinness mentions (Annals of Science 27, pp. 345–391, 1971) that he was unable to find evidence of Jewish ancestry. (He also states that Cantor's wife, Vally Guttmann, was Jewish). In a letter written to Paul Tannery in 1896 (Paul Tannery, Memoires Scientifique 13 Correspondence, Gauthier-Villars, Paris, 1934, p. 306), Cantor states that his paternal grandparents were members of the Sephardic Jewish community of Copenhagen. Specifically, Cantor states in describing his father: "Er ist aber in Kopenhagen geboren, von israelitischen Eltern, die der dortigen portugisischen Judengemeinde..." ("He was born in Copenhagen of Jewish (lit: 'Israelite') parents from the local Portuguese-Jewish community.") In addition, Cantor's maternal great uncle, Josef Böhm, a Hungarian violinist, has been described as Jewish, which may imply that Cantor's mother was at least partly descended from the Hungarian Jewish community. In a letter to Bertrand Russell, Cantor described his ancestry and self-perception as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Neither my father nor my mother were of German blood, the first being a Dane, borne in Kopenhagen, my mother of Austrian Hungar descension. You must know, Sir, that I am not a "regular just Germain", for I am born 3 March 1845 at Saint Peterborough, Capital of Russia, but I went with my father and mother and brothers and sister, eleven years old in the year 1856, into Germany. 
There were documented statements during the 1930s that called this Jewish ancestry into question. Biographies. Until the 1970s, the chief academic publications on Cantor were two short monographs by Arthur Moritz Schönflies (1927) – largely the correspondence with Mittag-Leffler – and Fraenkel (1930). Both were at second and third hand; neither had much on his personal life. The gap was largely filled by Eric Temple Bell's "Men of Mathematics" (1937), which one of Cantor's modern biographers describes as "perhaps the most widely read modern book on the history of mathematics"; and as "one of the worst". Bell presents Cantor's relationship with his father as Oedipal, Cantor's differences with Kronecker as a quarrel between two Jews, and Cantor's madness as Romantic despair over his failure to win acceptance for his mathematics. Grattan-Guinness (1971) found that none of these claims were true, but they may be found in many books of the intervening period, owing to the absence of any other narrative. There are other legends, independent of Bell – including one that labels Cantor's father a foundling, shipped to Saint Petersburg by unknown parents. A critique of Bell's book is contained in Joseph Dauben's biography. Writes Dauben: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Cantor devoted some of his most vituperative correspondence, as well as a portion of the "Beiträge", to attacking what he described at one point as the 'infinitesimal Cholera bacillus of mathematics', which had spread from Germany through the work of Thomae, du Bois Reymond and Stolz, to infect Italian mathematics ... Any acceptance of infinitesimals necessarily meant that his own theory of number was incomplete. Thus to accept the work of Thomae, du Bois-Reymond, Stolz and Veronese was to deny the perfection of Cantor's own creation. Understandably, Cantor launched a thorough campaign to discredit Veronese's work in every way possible. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; "Older sources on Cantor's life should be treated with caution. See section § Biographies above."
[ { "math_id": 0, "text": "\\aleph" }, { "math_id": 1, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=12216
12216711
Abel's binomial theorem
Mathematical identity involving sums of binomial coefficients Abel's binomial theorem, named after Niels Henrik Abel, is a mathematical identity involving sums of binomial coefficients. It states the following: formula_0 Example. For "m" = 2, the identity gives formula_1
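The identity can also be checked numerically for small m. The Python sketch below uses exact rational arithmetic so that the negative exponent appearing in the k = m term causes no rounding issues; the sampled values of m, w and z are arbitrary choices for this illustration.

```python
# Numerical check (not a proof) of Abel's binomial identity, using exact
# rational arithmetic; the sampled values of m, w, z below are arbitrary.
from fractions import Fraction
from math import comb

def abel_lhs(m, w, z):
    return sum(comb(m, k) * (w + m - k) ** (m - k - 1) * (z + k) ** k
               for k in range(m + 1))

def abel_rhs(m, w, z):
    return (z + w + m) ** m / w

for m in range(6):
    for w, z in [(Fraction(1), Fraction(2)),
                 (Fraction(3, 2), Fraction(-5)),
                 (Fraction(7), Fraction(1, 3))]:
        assert abel_lhs(m, w, z) == abel_rhs(m, w, z), (m, w, z)

print("identity holds for all sampled cases")
```

For instance, with m = 2, w = 1, z = 2 the left side is 3 + 6 + 16 = 25 and the right side is 5²/1 = 25, in agreement with the worked case above.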
[ { "math_id": 0, "text": "\\sum_{k=0}^m \\binom{m}{k} (w+m-k)^{m-k-1}(z+k)^k=w^{-1}(z+w+m)^m." }, { "math_id": 1, "text": "\n\\begin{align}\n& {} \\quad \\binom{2}{0}(w+2)^1(z+0)^0+\\binom{2}{1}(w+1)^0(z+1)^1+\\binom{2}{2}(w+0)^{-1}(z+2)^2 \\\\\n& = (w+2)+2(z+1)+\\frac{(z+2)^2}{w} \\\\\n& = \\frac{(z+w+2)^2}{w}.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=12216711
12217323
Bounded deformation
In mathematics, a function of bounded deformation is a function whose distributional derivatives are not quite well-behaved enough to qualify as functions of bounded variation, although the symmetric part of the derivative matrix does meet that condition. Thought of as deformations of elasto-plastic bodies, functions of bounded deformation play a major role in the mathematical study of materials, e.g. the Francfort-Marigo model of brittle crack evolution. More precisely, given an open subset Ω of R"n", a function "u" : Ω → R"n" is said to be of bounded deformation if the symmetrized gradient "ε"("u") of "u", formula_0 is a bounded, symmetric "n" × "n" matrix-valued Radon measure. The collection of all functions of bounded deformation, a space introduced essentially by P.-M. Suquet in 1978, is denoted BD(Ω; R"n"), or simply BD. BD is a strictly larger space than the space BV of functions of bounded variation. One can show that if "u" is of bounded deformation then the measure "ε"("u") can be decomposed into three parts: one absolutely continuous with respect to Lebesgue measure, denoted "e"("u") d"x"; a jump part, supported on a rectifiable ("n" − 1)-dimensional set "J""u" of points where "u" has two different approximate limits "u"+ and "u"−, together with a normal vector "ν""u"; and a "Cantor part", which vanishes on Borel sets of finite "H""n"−1-measure (where "H""k" denotes "k"-dimensional Hausdorff measure). A function "u" is said to be of special bounded deformation if the Cantor part of "ε"("u") vanishes, so that the measure can be written as formula_1 where "H" "n"−1 | "J""u" denotes the restriction of "H" "n"−1 to the jump set "J""u" and formula_2 denotes the symmetrized dyadic product: formula_3 The collection of all functions of special bounded deformation is denoted SBD(Ω; R"n"), or simply SBD.
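For a smooth displacement field the symmetrized gradient is simply the symmetric part of the Jacobian matrix, which is easy to illustrate numerically. The NumPy sketch below uses a simple-shear field chosen purely for illustration; the field and the value of gamma are assumptions of this example.

```python
# Illustration of the symmetrized gradient eps(u) = (grad u + grad u^T) / 2
# for a smooth planar displacement field; the particular field is arbitrary.
import numpy as np

def eps(grad_u):
    """Symmetric part of a displacement-gradient matrix."""
    return 0.5 * (grad_u + grad_u.T)

# Simple shear u(x, y) = (gamma * y, 0): the full gradient is not symmetric,
# but its symmetric part (the linearized strain) is.
gamma = 0.3
grad_u = np.array([[0.0, gamma],
                   [0.0, 0.0]])

print(eps(grad_u))           # [[0, gamma/2], [gamma/2, 0]]
print(grad_u - eps(grad_u))  # antisymmetric (rotation) part, discarded by eps
```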
[ { "math_id": 0, "text": "\\varepsilon(u) = \\frac{\\nabla u + \\nabla u^{\\top}}{2}" }, { "math_id": 1, "text": "\\varepsilon(u) = e(u) \\, \\mathrm{d} x + \\big( u_{+}(x) - u_{-}(x) \\big) \\odot \\nu_{u} (x) H^{n - 1} | J_{u}," }, { "math_id": 2, "text": "\\odot" }, { "math_id": 3, "text": "a \\odot b = \\frac{a \\otimes b + b \\otimes a}{2}." } ]
https://en.wikipedia.org/wiki?curid=12217323
1221919
Sturm's theorem
Counting polynomial roots in an interval In mathematics, the Sturm sequence of a univariate polynomial p is a sequence of polynomials associated with p and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of p located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of p. Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and arbitrary-precision root-finding algorithm for univariate polynomials. For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers. The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829. The theorem. The Sturm chain or Sturm sequence of a univariate polynomial "P"("x") with real coefficients is the sequence of polynomials formula_0 such that formula_1 for "i" ≥ 1, where "P"' is the derivative of P, and formula_2 is the remainder of the Euclidean division of formula_3 by formula_4 The length of the Sturm sequence is at most the degree of P. The number of sign variations at ξ of the Sturm sequence of P is the number of sign changes (ignoring zeros) in the sequence of real numbers formula_5 This number of sign variations is denoted here "V"("ξ"). Sturm's theorem states that, if P is a square-free polynomial, the number of distinct real roots of P in the half-open interval ("a", "b"] is "V"("a") − "V"("b") (here, a and b are real numbers such that "a" &lt; "b"). The theorem extends to unbounded intervals by defining the sign at +∞ of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At –∞ the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree. In the case of a non-square-free polynomial, if neither a nor b is a multiple root of p, then "V"("a") − "V"("b") is the number of "distinct" real roots of P. The proof of the theorem is as follows: when the value of x increases from a to b, it may pass through a zero of some formula_6 ("i" &gt; 0); when this occurs, the number of sign variations of formula_7 does not change. When x passes through a root of formula_8 the number of sign variations of formula_9 decreases from 1 to 0. These are the only values of x where some sign may change. Example. Suppose we wish to find the number of roots in some range for the polynomial formula_10. So formula_11 The remainder of the Euclidean division of "p"0 by "p"1 is formula_12 multiplying it by −1 we obtain formula_13. Next dividing "p"1 by "p"2 and multiplying the remainder by −1, we obtain formula_14. 
Now dividing "p"2 by "p"3 and multiplying the remainder by −1, we obtain formula_15. As this is a constant, this finishes the computation of the Sturm sequence. To find the number of real roots of formula_16 one has to evaluate the sequences of the signs of these polynomials at −∞ and ∞, which are respectively (+, −, +, +, −) and (+, +, +, −, −). Thus formula_17 where V denotes the number of sign changes in the sequence, which shows that p has two real roots. This can be verified by noting that "p"("x") can be factored as ("x"2 − 1)("x"2 + "x" + 1), where the first factor has the roots −1 and 1, and second factor has no real roots. This last assertion results from the quadratic formula, and also from Sturm's theorem, which gives the sign sequences (+, –, –) at −∞ and (+, +, –) at +∞. Generalization. Sturm sequences have been generalized in two directions. To define each polynomial in the sequence, Sturm used the negative of the remainder of the Euclidean division of the two preceding ones. The theorem remains true if one replaces the negative of the remainder by its product or quotient by a positive constant or the square of a polynomial. It is also useful (see below) to consider sequences where the second polynomial is not the derivative of the first one. A "generalized Sturm sequence" is a finite sequence of polynomials with real coefficients formula_18 such that 0 for 0 &lt; "i" &lt; "m" and ξ a real number, then "P""i" −1 ("ξ") "P""i" + 1("ξ") &lt; 0. The last condition implies that two consecutive polynomials do not have any common real root. In particular the original Sturm sequence is a generalized Sturm sequence, if (and only if) the polynomial has no multiple real root (otherwise the first two polynomials of its Sturm sequence have a common root). When computing the original Sturm sequence by Euclidean division, it may happen that one encounters a polynomial that has a factor that is never negative, such a formula_21 or formula_22. In this case, if one continues the computation with the polynomial replaced by its quotient by the nonnegative factor, one gets a generalized Sturm sequence, which may also be used for computing the number of real roots, since the proof of Sturm's theorem still applies (because of the third condition). This may sometimes simplify the computation, although it is generally difficult to find such nonnegative factors, except for even powers of x. Use of pseudo-remainder sequences. In computer algebra, the polynomials that are considered have integer coefficients or may be transformed to have integer coefficients. The Sturm sequence of a polynomial with integer coefficients generally contains polynomials whose coefficients are not integers (see above example). To avoid computation with rational numbers, a common method is to replace Euclidean division by pseudo-division for computing polynomial greatest common divisors. 
This amounts to replacing the remainder sequence of the Euclidean algorithm by a pseudo-remainder sequence, a pseudo-remainder sequence being a sequence formula_23 of polynomials such that there are constants formula_24 and formula_25 such that formula_26 is the remainder of the Euclidean division of formula_27 by formula_28 (The different kinds of pseudo-remainder sequences are defined by the choice of formula_24 and formula_29 typically, formula_24 is chosen for not introducing denominators during Euclidean division, and formula_25 is a common divisor of the coefficients of the resulting remainder; see Pseudo-remainder sequence for details.) For example, the remainder sequence of the Euclidean algorithm is a pseudo-remainder sequence with formula_30 for every i, and the Sturm sequence of a polynomial is a pseudo-remainder sequence with formula_31 and formula_32 for every i. Various pseudo-remainder sequences have been designed for computing greatest common divisors of polynomials with integer coefficients without introducing denominators (see Pseudo-remainder sequence). They can all be made generalized Sturm sequences by choosing the sign of the formula_25 to be the opposite of the sign of the formula_33 This allows the use of Sturm's theorem with pseudo-remainder sequences. Root isolation. For a polynomial with real coefficients, "root isolation" consists of finding, for each real root, an interval that contains this root, and no other roots. This is useful for root finding, allowing the selection of the root to be found and providing a good starting point for fast numerical algorithms such as Newton's method; it is also useful for certifying the result, since, if Newton's method converges outside the interval, one may immediately deduce that it converges to the wrong root. Root isolation is also useful for computing with algebraic numbers. For computing with algebraic numbers, a common method is to represent them as a pair consisting of a polynomial of which the algebraic number is a root and an isolation interval. For example, formula_34 may be unambiguously represented by formula_35 Sturm's theorem provides a way for isolating real roots that is less efficient (for polynomials with integer coefficients) than other methods involving Descartes' rule of signs. However, it remains useful in some circumstances, mainly for theoretical purposes, for example for algorithms of real algebraic geometry that involve infinitesimals. For isolating the real roots, one starts from an interval formula_36 containing all the real roots, or the roots of interest (often, typically in physical problems, only positive roots are interesting), and one computes formula_37 and formula_38 For defining this starting interval, one may use bounds on the size of the roots. Then, one divides this interval in two, by choosing c in the middle of formula_39 The computation of formula_40 provides the number of real roots in formula_41 and formula_42 and one may repeat the same operation on each subinterval. When one encounters, during this process, an interval that does not contain any root, it may be suppressed from the list of intervals to consider. When one encounters an interval containing exactly one root, one may stop dividing it, as it is an isolation interval. The process stops eventually, when only isolating intervals remain. This isolating process may be used with any method for computing the number of real roots in an interval.
Theoretical complexity analysis and practical experiences show that methods based on Descartes' rule of signs are more efficient. It follows that, nowadays, Sturm sequences are rarely used for root isolation. Application. Generalized Sturm sequences allow counting the roots of a polynomial where another polynomial is positive (or negative), without computing these roots explicitly. If one knows an isolating interval for a root of the first polynomial, this also allows finding the sign of the second polynomial at this particular root of the first polynomial, without computing a better approximation of the root. Let "P"("x") and "Q"("x") be two polynomials with real coefficients such that P and Q have no common root and P has no multiple roots. In other words, P and "P'  Q" are coprime polynomials. This restriction does not really affect the generality of what follows, as GCD computations allow reducing the general case to this case, and the cost of the computation of a Sturm sequence is the same as that of a GCD. Let "W"("a") denote the number of sign variations at a of a generalized Sturm sequence starting from P and "P'  Q". If "a" &lt; "b" are two real numbers, then "W"("a") – "W"("b") is the number of roots of P in the interval formula_36 at which Q takes a positive value, minus the number of roots in the same interval at which Q takes a negative value. Combined with the total number of roots of P in the same interval given by Sturm's theorem, this gives the number of roots of P at which Q is positive and the number of roots of P at which Q is negative. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_0, P_1, \\ldots," }, { "math_id": 1, "text": "\\begin{align}\nP_0&=P,\\\\\nP_1&=P',\\\\\nP_{i+1}&=-\\operatorname{rem}(P_{i-1},P_i),\n\\end{align}" }, { "math_id": 2, "text": "\\operatorname{rem}(P_{i-1},P_i)" }, { "math_id": 3, "text": "P_{i-1}" }, { "math_id": 4, "text": "P_{i}." }, { "math_id": 5, "text": "P_0(\\xi), P_1(\\xi),P_2(\\xi),\\ldots." }, { "math_id": 6, "text": "P_i" }, { "math_id": 7, "text": "(P_{i-1}, P_i, P_{i+1})" }, { "math_id": 8, "text": "P_0=P," }, { "math_id": 9, "text": "(P_0, P_1)" }, { "math_id": 10, "text": "p(x)=x^4+x^3-x-1" }, { "math_id": 11, "text": "\\begin{align} p_0(x) &=p(x)=x^4+x^3-x-1 \\\\\np_1(x)&=p'(x)=4x^3+3x^2-1\n\\end{align}" }, { "math_id": 12, "text": "-\\tfrac{3}{16}x^2-\\tfrac{3}{4}x-\\tfrac{15}{16};" }, { "math_id": 13, "text": "p_2(x)=\\tfrac{3}{16}x^2+\\tfrac{3}{4}x+\\tfrac{15}{16}" }, { "math_id": 14, "text": "p_3(x)=-32x-64" }, { "math_id": 15, "text": "p_4(x)=-\\tfrac{3}{16}" }, { "math_id": 16, "text": "p_0" }, { "math_id": 17, "text": "V(-\\infty)-V(+\\infty) = 3-1=2," }, { "math_id": 18, "text": "P_0, P_1, \\dots, P_m" }, { "math_id": 19, "text": "\\deg P_{i} <\\deg P_{i-1}" }, { "math_id": 20, "text": "P_m" }, { "math_id": 21, "text": "x^2" }, { "math_id": 22, "text": "x^2+1" }, { "math_id": 23, "text": "p_0, \\ldots, p_k" }, { "math_id": 24, "text": "a_i" }, { "math_id": 25, "text": "b_i" }, { "math_id": 26, "text": "b_ip_{i+1}" }, { "math_id": 27, "text": "a_ip_{i-1}" }, { "math_id": 28, "text": "p_i." }, { "math_id": 29, "text": "b_i;" }, { "math_id": 30, "text": "a_i=b_i=1" }, { "math_id": 31, "text": "a_i=1" }, { "math_id": 32, "text": "b_i=-1" }, { "math_id": 33, "text": "a_i." }, { "math_id": 34, "text": "\\sqrt 2" }, { "math_id": 35, "text": "(x^2-2, [0,2])." }, { "math_id": 36, "text": "(a,b]" }, { "math_id": 37, "text": "V(a)" }, { "math_id": 38, "text": "V(b)." }, { "math_id": 39, "text": "(a,b]." }, { "math_id": 40, "text": "V(c)" }, { "math_id": 41, "text": "(a,c]" }, { "math_id": 42, "text": "(c,b]," } ]
https://en.wikipedia.org/wiki?curid=1221919
12219719
Cellular waste product
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell, in the form of ATP. One example of cellular respiration creating cellular waste products are aerobic respiration and anaerobic respiration. Each pathway generates different waste products. Aerobic respiration. When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules. Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen. In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant. Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle. The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted. Anaerobic respiration. Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that selectively perform anaerobic respiration even in the presence of oxygen. In anaerobic respiration, weak oxidants like sulfate and nitrate serve as oxidants in the place of oxygen. Generally, in anaerobic respiration sugars are broken down into carbon dioxide and other waste products that are dictated by the oxidant the cell uses. Whereas in aerobic respiration the oxidant is always oxygen, in anaerobic respiration it varies. Each oxidant produces a different waste product, such as nitrite, succinate, sulfide, methane, and acetate. Anaerobic respiration is correspondingly less efficient than aerobic respiration. In the absence of oxygen, not all of the carbon-carbon bonds in glucose can be broken to release energy. A great deal of extractable energy is left in the waste products. Anaerobic respiration generally occurs in prokaryotes in environments that do not contain oxygen. Fermentation. Fermentation is another process by which cells can extract energy from glucose. It is not a form of cellular respiration, but it does generate ATP, break down glucose, and produce waste products. Fermentation, like aerobic respiration, begins by breaking glucose into two pyruvate molecules. From here, it proceeds using endogenous organic electron receptors, whereas cellular respiration uses exogenous receptors, such as oxygen in aerobic respiration and nitrate in anaerobic respiration. These varied organic receptors each generate different waste products. Common products are lactic acid, lactose, hydrogen, and ethanol. 
Carbon dioxide is also commonly produced. Fermentation occurs primarily in anaerobic conditions, although some organisms such as yeast use fermentation even when oxygen is plentiful. Lactic Acid Fermentation. Simplified Theoretical Reaction: C6H12O6 formula_0 2C3H6O3 + 2 ATP (120 kJ) Lactic Acid Fermentation is commonly known as the process by which mammalian muscle cells produce energy in anaerobic environments, as in instances of great physical exertion, and is the simplest type of fermentation. It starts along the same pathway as aerobic respiration, but once glucose is converted to pyruvate, it proceeds down one of two pathways and produces only two molecules of ATP from each molecule of glucose. In the homolactic pathway, it produces lactic acid as waste. In the heterolactic pathway, it produces lactic acid as well as ethanol and carbon dioxide. Lactic acid fermentation is relatively inefficient. The waste products lactic acid and ethanol have not been fully oxidized and still contain energy, but it requires the addition of oxygen to extract this energy. Generally, lactic acid fermentation occurs only when aerobic cells are lacking oxygen. However, some aerobic mammalian cells will preferentially use lactic acid fermentation over aerobic respiration. This phenomenon is called the Warburg effect and is found primarily in cancer cells. Muscle cells under great exertion will also use lactic acid fermentation to supplement aerobic respiration. Lactic acid fermentation is somewhat faster, although less efficient, than aerobic respiration, so in activities like sprinting it can help quickly provide needed energy to muscles. Secretion and effects of waste products. Cellular respiration takes place in the cristae of the mitochondria within cells. Depending on the pathways followed, the products are dealt with in different ways. CO2 is excreted from the cell via diffusion into the blood stream, where it is transported in three ways: dissolved directly in the plasma, bound to hemoglobin, or converted to bicarbonate ions. H2O also diffuses out of the cell into the bloodstream, from where it is excreted in the form of perspiration, water vapour in the breath, or urine from the kidneys. Water, along with some dissolved solutes, is removed from blood circulation in the nephrons of the kidney and eventually excreted as urine. The products of fermentation can be processed in different ways, depending on the cellular conditions. Lactic acid tends to accumulate in the muscles, which causes pain in the muscles and joints as well as fatigue. It also creates a gradient which induces water to flow out of cells and increases blood pressure. Research suggests that lactic acid may also play a role in lowering levels of potassium in the blood. It can also be converted back to pyruvate or converted back to glucose in the liver and fully metabolized by aerobic respiration. References.
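To make the efficiency contrast concrete, here is a toy Python sketch using only the per-glucose ATP yields quoted in the simplified reactions above (up to about 30 ATP for aerobic respiration, 2 ATP for lactic acid fermentation); the function and variable names are invented for the illustration:

```python
# A toy comparison of ATP yields, using the approximate figures from the
# simplified reactions above (they are not exact biochemical values).
ATP_AEROBIC_PER_GLUCOSE = 30
ATP_FERMENTATION_PER_GLUCOSE = 2

def atp_yield(glucose_molecules, aerobic=True):
    per_glucose = ATP_AEROBIC_PER_GLUCOSE if aerobic else ATP_FERMENTATION_PER_GLUCOSE
    return glucose_molecules * per_glucose

glucose = 1000
print(atp_yield(glucose, aerobic=True))    # -> 30000 ATP
print(atp_yield(glucose, aerobic=False))   # -> 2000 ATP, i.e. 15 times less per glucose molecule
```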
[ { "math_id": 0, "text": "\\to" } ]
https://en.wikipedia.org/wiki?curid=12219719
12219849
Extension by definitions
In mathematical logic, more specifically in the proof theory of first-order theories, extensions by definitions formalize the introduction of new symbols by means of a definition. For example, it is common in naive set theory to introduce a symbol formula_0 for the set that has no member. In the formal setting of first-order theories, this can be done by adding to the theory a new constant formula_0 and the new axiom formula_1, meaning "for all "x", "x" is not a member of formula_0". It can then be proved that doing so adds essentially nothing to the old theory, as should be expected from a definition. More precisely, the new theory is a conservative extension of the old one. Definition of relation symbols. Let formula_2 be a first-order theory and formula_3 a formula of formula_2 such that formula_4, ..., formula_5 are distinct and include the variables free in formula_3. Form a new first-order theory formula_6 from formula_2 by adding a new formula_7-ary relation symbol formula_8, the logical axioms featuring the symbol formula_8 and the new axiom formula_9, called the "defining axiom" of formula_8. If formula_10 is a formula of formula_6, let formula_11 be the formula of formula_2 obtained from formula_10 by replacing any occurrence of formula_12 by formula_13 (changing the bound variables in formula_14 if necessary so that the variables occurring in the formula_15 are not bound in formula_13). Then the following hold: formula_16 is provable in formula_6, and formula_6 is a conservative extension of formula_2. The fact that formula_6 is a conservative extension of formula_2 shows that the defining axiom of formula_8 cannot be used to prove new theorems. The formula formula_11 is called a "translation" of formula_10 into formula_2. Semantically, the formula formula_11 has the same meaning as formula_10, but the defined symbol formula_8 has been eliminated. Definition of function symbols. Let formula_2 be a first-order theory (with equality) and formula_17 a formula of formula_2 such that formula_18, formula_4, ..., formula_5 are distinct and include the variables free in formula_17. Assume that we can prove formula_19 in formula_2, i.e. for all formula_4, ..., formula_5, there exists a unique "y" such that formula_17. Form a new first-order theory formula_6 from formula_2 by adding a new formula_7-ary function symbol formula_20, the logical axioms featuring the symbol formula_20 and the new axiom formula_21, called the "defining axiom" of formula_20. Let formula_10 be any atomic formula of formula_6. We define a formula formula_11 of formula_2 recursively as follows. If the new symbol formula_20 does not occur in formula_10, let formula_11 be formula_10. Otherwise, choose an occurrence of formula_22 in formula_10 such that formula_20 does not occur in the terms formula_15, and let formula_23 be obtained from formula_10 by replacing that occurrence by a new variable formula_24. Then since formula_20 occurs in formula_23 one less time than in formula_10, the formula formula_25 has already been defined, and we let formula_11 be formula_26 (changing the bound variables in formula_14 if necessary so that the variables occurring in the formula_15 are not bound in formula_27). For a general formula formula_10, the formula formula_11 is formed by replacing every occurrence of an atomic subformula formula_23 by formula_25. Then the following hold: formula_16 is provable in formula_6, and formula_6 is a conservative extension of formula_2. 
The construction of this paragraph also works for constants, which can be viewed as 0-ary function symbols. Extensions by definitions. A first-order theory formula_6 obtained from formula_2 by successive introductions of relation symbols and function symbols as above is called an extension by definitions of formula_2. Then formula_6 is a conservative extension of formula_2, and for any formula formula_10 of formula_6 we can form a formula formula_11 of formula_2, called a "translation" of formula_10 into formula_2, such that formula_16 is provable in formula_6. Such a formula is not unique, but any two of them can be proved to be equivalent in "T". In practice, an extension by definitions formula_6 of "T" is not distinguished from the original theory "T". In fact, the formulas of formula_6 can be thought of as "abbreviating" their translations into "T". The manipulation of these abbreviations as actual formulas is then justified by the fact that extensions by definitions are conservative. Examples. A typical extension by the definition of a relation symbol is the introduction of the subset relation formula_30 in set theory, defined from membership formula_29 and equality formula_28. For an example involving a function symbol, let formula_2 be a first-order theory of groups in which the only primitive symbol is the binary product ×. In formula_2, one can prove that there exists an element "e" such that for every "x", "x"×"e"="e"×"x"="x". Therefore, one may add to formula_2 a new constant "e" and the axiom formula_31, and what we obtain is an extension by definitions formula_6 of formula_2. Then in formula_6 we can prove that for every "x", there exists a unique "y" such that "x"×"y"="y"×"x"="e". Consequently, the first-order theory formula_32 obtained from formula_6 by adding a unary function symbol formula_20 and the axiom formula_33 is an extension by definitions of formula_2. Usually, formula_34 is denoted formula_35.
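To make the translation step concrete, here is a toy Python sketch (the tuple-based formula representation and all names are assumptions made for this example, not part of the formal development above). It eliminates a defined relation symbol by replacing each atom R(t1, ..., tn) with the defining formula instantiated at those terms; the renaming of bound variables needed to avoid capture is omitted for brevity:

```python
# A toy sketch of translating a formula of T' back into T by eliminating a
# defined relation symbol: every atom of the defined symbol is replaced by the
# body of its defining axiom, with the parameters substituted by the atom's terms.

def substitute(formula, mapping):
    """Replace variable names occurring in a formula template by terms."""
    if isinstance(formula, str):
        return mapping.get(formula, formula)
    if isinstance(formula, tuple):
        return tuple(substitute(part, mapping) for part in formula)
    return formula

def translate(formula, symbol, params, body):
    """Replace atoms ('atom', symbol, t1, ..., tn) by body with params substituted."""
    if isinstance(formula, tuple) and formula[0] == "atom" and formula[1] == symbol:
        terms = formula[2:]
        return substitute(body, dict(zip(params, terms)))
    if isinstance(formula, tuple):
        return tuple(translate(part, symbol, params, body) for part in formula)
    return formula

# Defining axiom of the subset relation:  x sub y  <->  forall z (z in x -> z in y)
params = ("x", "y")
body = ("forall", "z", ("->", ("atom", "in", "z", "x"), ("atom", "in", "z", "y")))

# A formula of T' using the defined symbol:  (a sub b) and (b sub a)
psi = ("and", ("atom", "sub", "a", "b"), ("atom", "sub", "b", "a"))
print(translate(psi, "sub", params, body))
# -> ('and', ('forall', 'z', ('->', ('atom', 'in', 'z', 'a'), ('atom', 'in', 'z', 'b'))),
#            ('forall', 'z', ('->', ('atom', 'in', 'z', 'b'), ('atom', 'in', 'z', 'a'))))
```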
[ { "math_id": 0, "text": "\\emptyset" }, { "math_id": 1, "text": "\\forall x(x\\notin\\emptyset)" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "\\phi(x_1,\\dots,x_n)" }, { "math_id": 4, "text": "x_1" }, { "math_id": 5, "text": "x_n" }, { "math_id": 6, "text": "T'" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "\\forall x_1\\dots\\forall x_n(R(x_1,\\dots,x_n)\\leftrightarrow\\phi(x_1,\\dots,x_n))" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "\\psi^\\ast" }, { "math_id": 12, "text": "R(t_1,\\dots,t_n)" }, { "math_id": 13, "text": "\\phi(t_1,\\dots,t_n)" }, { "math_id": 14, "text": "\\phi" }, { "math_id": 15, "text": "t_i" }, { "math_id": 16, "text": "\\psi\\leftrightarrow\\psi^\\ast" }, { "math_id": 17, "text": "\\phi(y,x_1,\\dots,x_n)" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "\\forall x_1\\dots\\forall x_n\\exists !y\\phi(y,x_1,\\dots,x_n)" }, { "math_id": 20, "text": "f" }, { "math_id": 21, "text": "\\forall x_1\\dots\\forall x_n\\phi(f(x_1,\\dots,x_n),x_1,\\dots,x_n)" }, { "math_id": 22, "text": "f(t_1,\\dots,t_n)" }, { "math_id": 23, "text": "\\chi" }, { "math_id": 24, "text": "z" }, { "math_id": 25, "text": "\\chi^\\ast" }, { "math_id": 26, "text": "\\forall z(\\phi(z,t_1,\\dots,t_n)\\rightarrow\\chi^\\ast)" }, { "math_id": 27, "text": "\\phi(z,t_1,\\dots,t_n)" }, { "math_id": 28, "text": "=" }, { "math_id": 29, "text": "\\in" }, { "math_id": 30, "text": "\\subseteq" }, { "math_id": 31, "text": "\\forall x(x \\times e = x\\text{ and }e \\times x = x)" }, { "math_id": 32, "text": "T''" }, { "math_id": 33, "text": "\\forall x(x \\times f(x)=e\\text{ and }f(x) \\times x=e)" }, { "math_id": 34, "text": "f(x)" }, { "math_id": 35, "text": "x^{-1}" } ]
https://en.wikipedia.org/wiki?curid=12219849
12220009
Logic redundancy
Presence of more logic gates in a digital circuit than it theoretically requires Logic redundancy occurs in a digital gate network containing circuitry that does not affect the static logic function. There are several reasons why logic redundancy may exist. One reason is that it may have been added deliberately to suppress transient glitches (thus avoiding a race condition) in the output signals, by having two or more product terms overlap with a third one. Consider the following equation: formula_0 The third product term formula_1 is a redundant consensus term. If formula_2 switches from 1 to 0 while formula_3 and formula_4, formula_5 remains 1. During the transition of signal formula_2 in logic gates, both the first and second term may be 0 momentarily. The third term prevents a glitch since its value of 1 in this case is not affected by the transition of signal formula_2. Another reason for logic redundancy is poor design practice which unintentionally results in logically redundant terms. This causes an unnecessary increase in network complexity and possibly hampers the ability to test manufactured designs using traditional test methods (single stuck-at fault models). Testing might be possible using IDDQ models. Removing logic redundancy. Logic redundancy is, in general, not desired. Redundancy, by definition, requires extra parts (in this case: logical terms) which raises the cost of implementation (either actual cost of physical parts or CPU time to process). Logic redundancy can be removed by several well-known techniques, such as Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. Adding logic redundancy. In some cases it may be desirable to "add" logic redundancy. One of those cases is to avoid race conditions whereby an output can fluctuate because different terms are "racing" to turn off and on. To explain this in more concrete terms, the Karnaugh map to the right shows the minterms for the following function: formula_7 The boxes represent the minimal AND/OR terms needed to implement this function: formula_8 The k-map visually shows where race conditions occur in the minimal expression by having gaps between minterms, for example, the gap between the blue and green rectangles. If the input were to change from formula_9 to formula_10 then a race will occur between formula_11 turning off and formula_12 turning on. If the blue term switches off before the green turns on then the output will fluctuate and may register as 0. Another race condition is between the blue and the red for the transition from formula_9 to formula_13. The race condition is removed by adding in logic redundancy. Both minterm race conditions are covered by the addition of the yellow term formula_6. In this case, the addition of logic redundancy has stabilized the output, avoiding output fluctuations while terms race each other to change state. Notes.
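The redundancy claims above are easy to check exhaustively. The following short Python sketch (written for the two examples of this article, with invented helper names) verifies that the consensus term and the added term formula_6 do not change the static functions, and recovers the minterm list of formula_7:

```python
# Brute-force check that the redundant terms change nothing in the static functions.
from itertools import product

# Y = AB + (not A)C, with and without the consensus term BC
for A, B, C in product((0, 1), repeat=3):
    without_bc = (A and B) or ((not A) and C)
    with_bc = (A and B) or ((not A) and C) or (B and C)
    assert bool(without_bc) == bool(with_bc)

# f(A,B,C,D) = A·C' + A·B' + B·C·D', with and without the redundant term A·D'
def f_min(A, B, C, D):
    return (A and not C) or (A and not B) or (B and C and not D)

def f_red(A, B, C, D):
    return f_min(A, B, C, D) or (A and not D)

minterms = sorted(
    8 * A + 4 * B + 2 * C + D
    for A, B, C, D in product((0, 1), repeat=4)
    if f_min(A, B, C, D)
)
print(minterms)   # -> [6, 8, 9, 10, 11, 12, 13, 14], matching the Karnaugh map example
assert all(
    bool(f_min(*bits)) == bool(f_red(*bits))
    for bits in product((0, 1), repeat=4)
)
```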
[ { "math_id": 0, "text": "\nY = A B + \\overline{A} C + B C.\n" }, { "math_id": 1, "text": "BC" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "B = 1" }, { "math_id": 4, "text": "C = 1" }, { "math_id": 5, "text": "Y" }, { "math_id": 6, "text": "A\\overline{D}" }, { "math_id": 7, "text": "f(A, B, C, D) = E(6, 8, 9, 10, 11, 12, 13, 14).\\ " }, { "math_id": 8, "text": "f = A\\overline{C} + A\\overline{B} + BC\\overline{D}." }, { "math_id": 9, "text": "1110" }, { "math_id": 10, "text": "1010" }, { "math_id": 11, "text": "BC\\overline{D}" }, { "math_id": 12, "text": "A\\overline{B}" }, { "math_id": 13, "text": "1100" } ]
https://en.wikipedia.org/wiki?curid=12220009
1222578
Generative model
Model for generating observable data in probability and statistics In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsistent, but three major types can be distinguished, following : The distinction between these last two classes is not consistently made; refers to these three classes as "generative learning", "conditional learning", and "discriminative learning", but only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model. Standard examples of each, all of which are linear classifiers, are: In application to classification, one wishes to go from an observation "x" to a label "y" (or probability distribution on labels). One can compute this directly, without using a probability distribution ("distribution-free classifier"); one can estimate the probability of a label given an observation, formula_3 ("discriminative model"), and base classification on that; or one can estimate the joint distribution formula_0 ("generative model"), from that compute the conditional probability formula_3, and then base classification on that. These are increasingly indirect, but increasingly probabilistic, allowing more domain knowledge and probability theory to be applied. In practice different approaches are used, depending on the particular problem, and hybrids can combine strengths of multiple approaches. Definition. An alternative division defines these symmetrically as: Regardless of precise definition, the terminology is constitutional because a generative model can be used to "generate" random instances (outcomes), either of an observation and target formula_9, or of an observation "x" given a target value "y", while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variable "Y", given an observation "x". The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. (The term "discriminative classifier" becomes a pleonasm when "discrimination" is equivalent to "classification".) The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers. Relationships between models. In application to classification, the observable "X" is frequently a continuous variable, the target "Y" is generally a discrete variable consisting of a finite set of labels, and the conditional probability formula_6 can also be interpreted as a (non-deterministic) target function formula_5, considering "X" as inputs and "Y" as outputs. Given a finite set of labels, the two definitions of "generative model" are closely related. 
A model of the conditional distribution formula_4 is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values formula_8, together with the distribution of observations given a label, formula_7; symbolically, formula_10 Thus, while a model of the joint probability distribution is more informative than a model of the distribution of label (but without their relative frequencies), it is a relatively small step, hence these are not always distinguished. Given a model of the joint distribution, formula_0, the distribution of the individual variables can be computed as the marginal distributions formula_11 and formula_12 (considering "X" as continuous, hence integrating over it, and "Y" as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: formula_13 and formula_14. Given a model of one conditional probability, and estimated probability distributions for the variables "X" and "Y", denoted formula_15 and formula_8, one can estimate the opposite conditional probability using Bayes' rule: formula_16 For example, given a generative model for formula_7, one can estimate: formula_17 and given a discriminative model for formula_6, one can estimate: formula_18 Note that Bayes' rule (computing one conditional probability in terms of the other) and the definition of conditional probability (computing conditional probability in terms of the joint distribution) are frequently conflated as well. Contrast with discriminative classifiers. A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal. So, discriminative algorithms try to learn formula_2 directly from the data and then try to classify data. On the other hand, generative algorithms try to learn formula_19 which can be transformed into formula_2 later to classify the data. One of the advantages of generative algorithms is that you can use formula_19 to generate new data similar to existing data. On the other hand, it has been proved that some discriminative algorithms give better performance than some generative algorithms in classification tasks. Despite the fact that discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables. But in general, they don't necessarily perform better than generative models at classification and regression tasks. The two classes are seen as complementary or as different views of the same procedure. Deep generative models. With the rise of deep learning, a new family of methods, called deep generative models (DGMs), is formed through the combination of generative models and deep neural networks. An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance. Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models. 
For example, GPT-3, and its precursor GPT-2, are auto-regressive neural language models that contain billions of parameters, BigGAN and VQ-VAE which are used for image generation that can have hundreds of millions of parameters, and Jukebox is a very large generative model for musical audio that contains billions of parameters. Types. Generative models. Types of generative models are: If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the "true" distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case. Examples. Simple example. Suppose the input data is formula_20, the set of labels for formula_21 is formula_22, and there are the following 4 data points: formula_23 For the above data, estimating the joint probability distribution formula_19 from the empirical measure will be the following: while formula_2 will be following: Text generation. gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good"; which is not proper English but which will increasingly approximate it as the table is moved from word pairs to word triplets etc. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
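As a concrete rendering of the simple example above (and of the relation between the joint, the marginal, and the conditional), here is a small Python sketch; it merely counts the four data points, so the estimated joint probability works out to 0.25 for each pair and the conditional p(y|x) to 0.5 throughout. The code is illustrative only:

```python
# Estimating p(x, y), p(x) and p(y | x) from the four data points of the
# "simple example" by plain counting (the empirical measure).
from collections import Counter

data = [(1, 0), (1, 1), (2, 0), (2, 1)]

joint = Counter(data)
p_xy = {pair: count / len(data) for pair, count in joint.items()}
print(p_xy)            # {(1, 0): 0.25, (1, 1): 0.25, (2, 0): 0.25, (2, 1): 0.25}

# Marginal p(x) obtained by summing the joint distribution over y
p_x = Counter()
for (x, y), p in p_xy.items():
    p_x[x] += p
print(dict(p_x))       # {1: 0.5, 2: 0.5}

# Conditional p(y | x) from the definition of conditional probability
p_y_given_x = {(x, y): p / p_x[x] for (x, y), p in p_xy.items()}
print(p_y_given_x)     # every value is 0.5
```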
[ { "math_id": 0, "text": "P(X, Y)" }, { "math_id": 1, "text": "P(Y\\mid X = x)" }, { "math_id": 2, "text": "p(y|x)" }, { "math_id": 3, "text": "P(Y|X=x)" }, { "math_id": 4, "text": "P(X\\mid Y = y)" }, { "math_id": 5, "text": "f\\colon X \\to Y" }, { "math_id": 6, "text": "P(Y\\mid X)" }, { "math_id": 7, "text": "P(X\\mid Y)" }, { "math_id": 8, "text": "P(Y)" }, { "math_id": 9, "text": "(x, y)" }, { "math_id": 10, "text": "P(X, Y) = P(X\\mid Y)P(Y)." }, { "math_id": 11, "text": "P(X) = \\sum_y P(X , Y = y)" }, { "math_id": 12, "text": "P(Y) = \\int_x P(Y, X = x)" }, { "math_id": 13, "text": "P(X\\mid Y)=P(X, Y)/P(Y)" }, { "math_id": 14, "text": "P(Y\\mid X)=P(X, Y)/P(X)" }, { "math_id": 15, "text": "P(X)" }, { "math_id": 16, "text": "P(X\\mid Y)P(Y) = P(Y\\mid X)P(X)." }, { "math_id": 17, "text": "P(Y\\mid X) = P(X\\mid Y)P(Y)/P(X)," }, { "math_id": 18, "text": "P(X\\mid Y) = P(Y\\mid X)P(X)/P(Y)." }, { "math_id": 19, "text": "p(x,y)" }, { "math_id": 20, "text": "x \\in \\{1, 2\\}" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "y \\in \\{0, 1\\}" }, { "math_id": 23, "text": "(x,y) = \\{(1,0), (1,1), (2,0), (2,1)\\}" } ]
https://en.wikipedia.org/wiki?curid=1222578
12226619
Devil's curve
2-dimensional curve In geometry, a Devil's curve, also known as the Devil on Two Sticks, is a curve defined in the Cartesian plane by an equation of the form formula_0 The polar equation of this curve is of the form formula_1. Devil's curves were discovered in 1750 by Gabriel Cramer, who studied them extensively. The name comes from the shape its central lemniscate takes when graphed. The shape is named after the juggling game diabolo, which was named after the Devil and which involves two sticks, a string, and a spinning prop in the likeness of the lemniscate. For formula_2, the central lemniscate, often called the hourglass, is horizontal. For formula_3 it is vertical. If formula_4, the curve degenerates into a circle of radius |"a"| centered at the origin, together with the two diagonal lines "y" = "x" and "y" = −"x". The vertical hourglass intersects the y-axis at formula_5. The horizontal hourglass intersects the x-axis at formula_6. Electric Motor Curve. A special case of the Devil's curve occurs at formula_7, where the curve is called the electric motor curve. It is defined by an equation of the form formula_8. The name of the special case comes from the middle shape's resemblance to the coils of wire, which rotate from forces exerted by magnets surrounding it. References.
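The polar form lends itself to a quick numerical check. The following Python sketch (all names are invented; the parameters are chosen to match the electric motor curve above) samples points from the polar equation and verifies that they satisfy the Cartesian equation:

```python
# Sampling points of the Devil's curve from its polar form and checking them
# against the implicit Cartesian equation y^2 (y^2 - b^2) = x^2 (x^2 - a^2).
import math

def devil_points(a, b, n=2000):
    points = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        s2, c2 = math.sin(theta) ** 2, math.cos(theta) ** 2
        denom = s2 - c2
        if abs(denom) < 1e-12:
            continue                      # polar form undefined along the diagonals
        r2 = (b * b * s2 - a * a * c2) / denom
        if r2 < 0:
            continue                      # no real point in this direction
        r = math.sqrt(r2)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Electric motor curve: a^2 = 100, b^2 = 96, so a^2 / b^2 = 25 / 24.
pts = devil_points(10.0, math.sqrt(96.0))
print(len(pts), pts[0])                   # first sample is (10.0, 0.0), on the x-axis
for x, y in pts:
    lhs = y * y * (y * y - 96.0)
    rhs = x * x * (x * x - 100.0)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs), abs(rhs))
```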
[ { "math_id": 0, "text": " y^2(y^2 - b^2) = x^2(x^2 - a^2)." }, { "math_id": 1, "text": "r = \\sqrt{\\frac{b^2 \\sin^2\\theta-a^2 \\cos^2\\theta}{\\sin^2\\theta-\\cos^2\\theta}} = \\sqrt{\\frac{b^2 -a^2 \\cot^2\\theta}{1-\\cot^2\\theta}}" }, { "math_id": 2, "text": " |b|>|a| " }, { "math_id": 3, "text": " |b|<|a| " }, { "math_id": 4, "text": " |b|=|a| " }, { "math_id": 5, "text": " b,-b, 0 " }, { "math_id": 6, "text": " a,-a,0 " }, { "math_id": 7, "text": "\\frac{a^2}{b^2}=\\frac{25}{24}" }, { "math_id": 8, "text": "y^2(y^2-96) = x^2(x^2-100)" } ]
https://en.wikipedia.org/wiki?curid=12226619
12226979
Peano–Russell notation
In mathematical logic, Peano–Russell notation was Bertrand Russell's application of Giuseppe Peano's logical notation to the logical notions of Frege and was used in the writing of "Principia Mathematica" in collaboration with Alfred North Whitehead: "The notation adopted in the present work is based upon that of Peano, and the following explanations are to some extent modelled on those which he prefixes to his "Formulario Mathematico"." (Chapter I: Preliminary Explanations of Ideas and Notations, page 4) Variables. In the notation, variables are ambiguous in denotation, preserve a recognizable identity appearing in various places in logical statements within a given context, and have a range of possible determination between any two variables which is the same or different. When the possible determination is the same for both variables, then one implies the other; otherwise, the possible determination of one given to the other produces a meaningless phrase. The alphabetic symbol set for variables includes the lower and upper case Roman letters as well as many from the Greek alphabet. Fundamental functions of propositions. The four fundamental functions are the "contradictory function", the "logical sum", the "logical product", and the "implicative function". Contradictory function. The contradictory function applied to a proposition returns its negation. formula_0 Logical sum. The logical sum applied to two propositions returns their disjunction. formula_1 Logical product. The logical product applied to two propositions returns the truth-value of both propositions being simultaneously true. formula_2 Implicative function. The implicative function applied to two ordered propositions returns the truth value of the first implying the second proposition. formula_3 More complex functions of propositions. "Equivalence" is written as formula_4, standing for formula_5. "Assertion" is same as the making of a statement between two full stops. formula_6 An asserted proposition is either true or an error on the part of the writer. "Inference" is equivalent to the rule "modus ponens", where formula_7 In addition to the logical product, "dots" are also used to show groupings of functions of propositions. In the above example, the dot before the final implication function symbol groups all of the previous functions on that line together as the antecedent to the final consequent. The notation includes "definitions" as complex functions of propositions, using the equals sign "=" to separate the defined term from its symbolic definition, ending with the letters "Df". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
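Under a modern truth-functional reading (an interpretation adopted for this illustration rather than something stated in "Principia Mathematica" itself), the fundamental functions of propositions can be tabulated mechanically, as in the following Python sketch (ASCII stand-ins are used for the original symbols):

```python
# The fundamental functions of propositions as Boolean truth functions.
def contradictory(p):        # the contradictory function, ~p
    return not p

def logical_sum(p, q):       # the logical sum, p v q
    return p or q

def logical_product(p, q):   # the logical product, p . q
    return p and q

def implicative(p, q):       # the implicative function (material implication)
    return (not p) or q

def equivalent(p, q):        # equivalence, i.e. implication in both directions
    return logical_product(implicative(p, q), implicative(q, p))

print("p  q  ~p  p v q  p . q  p -> q  p <-> q")
for p in (True, False):
    for q in (True, False):
        row = [p, q, contradictory(p), logical_sum(p, q),
               logical_product(p, q), implicative(p, q), equivalent(p, q)]
        print("  ".join(str(int(v)) for v in row))
```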
[ { "math_id": 0, "text": "\\sim p" }, { "math_id": 1, "text": "p \\lor q" }, { "math_id": 2, "text": "p \\cdot q" }, { "math_id": 3, "text": "p \\supset q" }, { "math_id": 4, "text": "p \\equiv q" }, { "math_id": 5, "text": "p \\supset q \\cdot q \\supset p" }, { "math_id": 6, "text": "\\vdash p" }, { "math_id": 7, "text": "p \\cdot p \\supset q . \\supset q" } ]
https://en.wikipedia.org/wiki?curid=12226979
12230039
Isomorphism extension theorem
In field theory, a branch of mathematics, the isomorphism extension theorem is an important theorem regarding the extension of a field isomorphism to a larger field. Isomorphism extension theorem. The theorem states that given any field formula_0, an algebraic extension field formula_1 of formula_0, and an isomorphism formula_2 mapping formula_0 onto a field formula_3, the isomorphism formula_2 can be extended to an isomorphism formula_4 mapping formula_1 onto an algebraic extension formula_5 of formula_3 (a subfield of the algebraic closure of formula_3). The proof of the isomorphism extension theorem depends on Zorn's lemma.
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "\\phi" }, { "math_id": 3, "text": "F'" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "E'" } ]
https://en.wikipedia.org/wiki?curid=12230039
12230337
Rowland ring
Rowland's ring (aka Rowland ring) is an experimental arrangement for the measurement of the hysteresis curve of a sample of magnetic material. It was developed by Henry Augustus Rowland. The geometry of a Rowland's ring is usually a toroid of magnetic material around which is closely wound a magnetization coil consisting of a large number of windings to magnetize the material, and a sampling coil consisting of a smaller number of windings to sample the induced magnetic flux. The electric current flowing in the magnetization coil dictates the magnetic field intensity formula_0 in the material. The sampling coil produces a voltage proportional to the rate of change of the magnetic field formula_1 in the material. By measuring the time integral of the voltage in the sampling coil versus the current in the magnetization coil, one obtains the hysteresis curve. References. Paul Lorrain and Dale Corson, "Electromagnetic Fields and Waves, 2nd ed", W.H. Freeman and Company (1970).
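The measurement principle lends itself to a short numerical sketch. The relations used below are the standard textbook ones (Ampère's law for the field inside a toroid and Faraday's law for the sampling coil); the coil counts, dimensions and voltage samples are invented for illustration:

```python
# Recovering one branch of the hysteresis curve from Rowland's ring measurements.
import math

N_magnetizing = 500        # turns of the magnetization coil (assumed)
N_sampling = 50            # turns of the sampling coil (assumed)
mean_radius = 0.10         # mean radius of the toroid, in metres (assumed)
cross_section = 1.0e-4     # cross-sectional area of the core, in m^2 (assumed)

def field_intensity(current):
    """H inside the toroid from Ampere's law: H = N * I / (2 * pi * r)."""
    return N_magnetizing * current / (2 * math.pi * mean_radius)

def flux_density(voltages, dt, b0=0.0):
    """B from the sampling coil: |V| = N * A * dB/dt, so B = b0 + integral of V dt / (N * A)."""
    b = b0
    curve = []
    for v in voltages:
        b += v * dt / (N_sampling * cross_section)
        curve.append(b)
    return curve

# Invented measurement: a current ramp and the corresponding induced voltages.
dt = 1.0e-3
currents = [i * 0.01 for i in range(100)]
voltages = [0.002] * 100
hysteresis_branch = list(zip(map(field_intensity, currents),
                             flux_density(voltages, dt)))
print(hysteresis_branch[-1])   # (H, B) at the end of the ramp
```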
[ { "math_id": 0, "text": "\\mathbf{H}" }, { "math_id": 1, "text": "\\mathbf{B}" } ]
https://en.wikipedia.org/wiki?curid=12230337
1223245
Bibliometrics
Statistical analysis of written publications Bibliometrics is the application of statistical methods to the study of bibliographic data, especially in scientific and library and information science contexts, and is closely associated with scientometrics (the analysis of scientific metrics and indicators) to the point that both fields largely overlap. Bibliometrics studies first appeared in the late 19th century. They have known a significant development after the Second World War in a context of "periodical crisis" and new technical opportunities offered by computing tools. In the early 1960s, the Science Citation Index of Eugene Garfield and the citation network analysis of Derek John de Solla Price laid the fundamental basis of a structured research program on bibliometrics. Citation analysis is a commonly used bibliometric method based on constructing the citation graph, a network or graph representation of the citations shared by documents. Many research fields use bibliometric methods to explore the impact of their field, the impact of a set of researchers, the impact of a particular paper, or to identify particularly impactful papers within a specific field of research. Bibliometrics tools have been commonly integrated in descriptive linguistics, the development of thesauri, and evaluation of reader usage. Beyond specialized scientific use, popular web search engines, such as the pagerank algorithm implemented by Google have been largely shaped by bibliometrics methods and concepts. The emergence of the Web and the open science movement has gradually transformed the definition and the purpose of "bibliometrics." In the 2010s historical proprietary infrastructures for citation data such as the Web of Science or Scopus have been challenged by new initiatives in favor of open citation data. The "Leiden Manifesto for Research Metrics" (2015) opened a wide debate on the use and transparency of metrics. Definition. The term "bibliométrie" was first used by Paul Otlet in 1934, and defined as "the measurement of all aspects related to the publication and reading of books and documents." The anglicized version "bibliometrics" was first used by Alan Pritchard in a paper published in 1969, titled "Statistical Bibliography or Bibliometrics?" He defined the term as "the application of mathematics and statistical methods to books and other media of communication." "Bibliometrics" was conceived as a replacement for "statistical bibliography", the main label used by publications in the field until then: for Pritchard, statistical bibliography was too "clumsy" and did not make it very clear what was the main object of study. The concept of bibliometrics "stresses the material aspect of the undertaking: counting books, articles, publications, citations". In theory, bibliometrics is a distinct field from "scientometrics" (from the Russian "naukometriya"), which relies on the analysis of non-bibliographic indicators of scientific activity. In practice, bibliometrics and scientometrics studies tend to use similar data sources and methods, as citation data has become the leading standard of quantitative scientific evaluation during the mid-20th century: "insofar as bibliometric techniques are applied to scientific and technical literature, the two areas of scientometrics and bibliometrics overlap to a considerable degree." 
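As a toy illustration of citation analysis, the sketch below (papers and references are invented) stores a citation graph as a mapping from each document to the documents it cites and derives the times-cited count of each document, the raw ingredient of most citation-based indicators:

```python
# A toy citation graph: "document -> documents it cites", and the in-degree
# (times cited) of each document.
from collections import Counter

cites = {
    "paper_A": ["paper_C", "paper_D"],
    "paper_B": ["paper_C"],
    "paper_C": ["paper_D"],
    "paper_D": [],
}

times_cited = Counter(ref for refs in cites.values() for ref in refs)
for doc in cites:
    print(doc, "cited", times_cited.get(doc, 0), "times")
# paper_C and paper_D are each cited twice; paper_A and paper_B are not cited.
```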
The development of the web and the expansion of bibliometrics approach to non-scientific production has entailed the introduction of broader labels in the 1990s and the 2000s: infometrics, webometrics or cybermetrics. These terms have not been extensively adopted, as they partly overlap with pre-existing research practices, such as information retrieval. History. Scientific works, studies and researches that have a bibliometric character can be identified, depending on the definition, already for the 12th century in the form of Jewish indexes. Early experiments (1880–1914). Bibliometric analysis appeared at the turn of the 19th and the 20th century. These developments predate the first occurrence of the concept of "bibiometrics" by several decades. Alternative label were commonly used: "bibliography statistics" became especially prevalent after 1920 and continued to remain in use until the end of the 1960s. Early statistical studies of scientific metadata were motivated by the significant expansion of scientific output and the parallel development of indexing services of databases that made this information more accessible in the first place. Citation index were first applied to case law in the 1860s and their most famous example, "Shepard's Citations" (first published in 1873) will serve as a direct inspiration for the Science Citation Index one century later. The emergence of social sciences inspired new speculative research on the "science of science" and the possibility of studying science itself as a scientific object: "The belief that social activities, including science, could be reduced to quantitative laws, just as the trajectory of a cannonball and the revolutions of the heavenly bodies, traces back to the positivist sociology of Auguste Comte, William Ogburn, and Herbert Spencer." Bibliometric analysis was not conceived as a separate body studies but one of the available methods for the quantitative analysis of scientific activity in different fields of research: science history ("Histoire des sciences et des savants depuis deux siècles" of Alphonse de Candolle in 1885, "The history of comparative anatomy, a statistical analysis of the literature" by Francis Joseph Cole and Nellie B. Eales in 1917), bibliography ("The Theory of National and International Bibliography" of Francis Burburry Campbell in 1896) or sociology of science ("Statistics of American Psychologists" of James McKeen Cattell in 1903). Early bibliometrics and scientometrics work were not simply descriptive but expressed normative views of what science should be and how it could progress. The measurement of the performance of individual researchers, scientific institutions or entire countries was a major objective. The statistical analysis of James McKeen Cattell acted as a preparatory work for a large scale evaluation of American researchers with eugenicists undertones: "American Men of Science" (1906), "with its astoundingly simplistic rating system of asterisks attached to individual entries in proportion to the estimated eminence of the starred scholar." Development of bibliography statistics (1910–1945). After 1910, bibliometrics approach increasingly became the main focus in several study of scientific performance rather than one quantitative method among others. In 1917, Francis Joseph Cole and Nellie B. 
Eales argued in favor of the primary statistical value of publications as a publication "is an isolated and definite piece of work, it is permanent, accessible, and may be judged, and in most cases it is not difficult to ascertain when, where, and by whom it was done, and to plot the results on squared paper." Five years later, Edward Wyndham Hulme expanded this argument to the point that publications could be considered as the standard measure of an entire civilization: "If civilization is but the product of the human mind operating upon a shifting platform of its environment, we may claim for bibliography that it is not only a pillar in the structure of the edifice, but that it can function as a measure of the varying forces to which this structure is continuously subjected." This shift toward publication had a limited impact: well until the 1970s, national and international evaluation of scientific activities "disdained bibliometric indicators", which were deemed too simplistic, in favor of sociological and economic measures. Both the enhanced value attached to scientific publications as a measure of knowledge and the difficulties met by libraries in managing the growing flow of academic periodicals entailed the development of the first citation indexes. In 1927, P. Gross and E. M. Gross compiled the 3,633 references quoted by the "Journal of the American Chemical Society" during the year 1926 and ranked journals depending on their level of citation. The two authors created a set of tools and methods still commonly used by academic search engines, including attributing a bonus to recent citations since "the "present trend" rather than the "past performance" of a journal should be considered first." Yet the academic environment measured was markedly different: German rather than English ranked by far as the main language of the science of chemistry, with more than 50% of all references. In the same period, fundamental algorithms, metrics and methods of bibliometrics were first identified in several unrelated projects, most of them related to the structural inequalities of scientific production. Alfred Lotka introduced his law of productivity from an analysis of the authored publications in the "Chemical Abstracts" and the "Geschichtstafeln der Physik": the number of authors producing "n" contributions is equal to 1/n^2 times the number of authors that produced only one publication. Samuel Bradford, the chief librarian of the London Science Museum, derived a "law of scattering" from his experience in bibliographic indexing: there are exponentially diminishing returns in searching for references in science journals, as more and more works need to be consulted to find relevant ones. Both the Lotka and Bradford laws have been criticized as they are far from universal and rather uncover a rough power-law relationship rendered by deceptively precise equations.
In a context of rapid and dramatic change, the emerging field of bibliometrics was linked to large scale reforms of academic publishing and nearly utopian visions of the future of science. In 1934, Paul Otlet introduced under the concept of "bibliométrie" or "bibliology" an ambitious project of measuring the impact of texts on society. In contrast with the bounded definition of "bibliometrics" that will become prevalent after the 1960s, the vision of Otlet was not limited to scientific publication nor in fact to "publication" as a fundamental unit: it aimed for "by the resolution of texts into atomic elements, or ideas, which he located in the single paragraphs (alinéa, verset, articulet) composing a book." In 1939 John Desmond Bernal envisioned a network of scientific archives, which was briefly considered by the Royal Society in 1948: "The scientific paper sent to the central publication office, upon approval by an editorial board of referees, would be microfilmed, and a sort of print-on-demand system set in action thereafter." While not using the concept of "bibliometrics", Bernal had a formative influence of leading figures of the field such as Derek John de Solla Price. The emerging computing technologies were immediately considered as a potential solution to make a larger amount of scientific output readable and searchable. During the 1950s and 1960s, an uncoordinated wave of experiments in indexing technologies resulted in the rapid development of key concepts of computing research retrieval. In 1957, IBM engineer Hans Peter Luhn introduced an influential paradigm of statistical-based analysis of word frequencies, as "communication of ideas by means of words is carried out on the basis of statistical probability." Automated translation of non-English scientific work has also significantly contributed to fundamental research on natural language processing of bibliographic references, as in this period a significant amount of scientific publications were not still available in English, especially the one coming from the Soviet block. Influent members of the National Science Foundation like Joshua Ledeberg advocated for the creation of a "centralized information system", SCITEL, partly influenced by the ideas of John Desmond Bernal. This system would at first coexist with printed journals and gradually replace them altogether on account of its efficiency. In the plan laid out by Ledeberg to Eugen Garfield in November 1961, a centralized deposit would index as much as 1,000,000 scientific articles per year. Beyond full-text searching, the infrastructure would also ensure the indexation of citation and other metadata, as well as the automated translation of foreign language articles. The first working prototype on an online retrieval system developed in 1963 by Doug Engelbart and Charles Bourne at the Stanford Research Institute proved the feasibility of these theoretical assumptions, although it was heavily constrained by memory issues: no more than 10,000 words of a few documents could be indexed. The early scientific computing infrastructures were focused on more specific research areas, such as MEDLINE for medicine, NASA/RECON for space engineering or OCLC Worldcat for library search: "most of the earliest online retrieval system provided access to a bibliographic database and the rest used a file containing another sort of information—encyclopedia articles, inventory data, or chemical compounds." 
Exclusive focus on text analysis proved limitative as the digitized collections expanded: a query could yield a large number results and it was difficult to evaluate the relevancy and the accuracy of the results. The "periodical crisis" and the limitations of index retrieval technologies motivated the development of bibliometric tools and large citation index like the Science Citation Index of Eugene Garfield. Garfield's work was initially primarily concerned with the automated analysis of text work. In contrast with ongoing work largely focused on internal semantic relationship, Garfield highlighted "the importance of metatext in discourse analysis", such as introductory sentences and bibliographic references. Secondary forms of scientific production like literature reviews and bibliographic notes became central to Garfield's vision as they have already been to John Desmond Bernal's vision of scientific archives. By 1953, Garfield's attention was permanently shifted to citation analysis: in a private letter to William C. Adair, the vice-president of the publisher of the Shepard's Citation index, "he suggested a well tried solution to the problem of automatic indexing, namely to "shepardize" biomedical literature, to untangle the skein of its content by following the thread of citation links in the same way the legal citator did with court sentences." In 1955, Garfield published his seminal article "Citation Indexes for Science", that both laid out the outline of the Science Citation Index and had a large influence on the future development of bibliometrics. The general citation index envisioned by Garfield was originally one of the building block of the ambitious plan of Joshua Lederberg to computerize scientific literature. Due to lack of funding, the plan was never realized. In 1963, Eugene Garfield created the Institute for Scientific Information that aimed to transform the projects initially envisioned with Lederberg into a profitable business. Bibliometric reductionism, metrics, and structuration of a research field (1960–1990). The field of bibliometrics coalesced in parallel to the development of the Science Citation Index, that was to become its fundamental infrastructure and data resource: "while the early twentieth century contributed methods that were necessary for measuring research, the mid-twentieth century was characterized by the development of institutions that motivated and facilitated research measurement." Significant influences of the nascent field included along with John Desmond Bernal, Paul Otlet the sociology of science of Robert K. Merton, that was re-interpreted in a non-ethic manner: the Matthew Effect, that is the increasing concentration of attention given to researchers that were already notable, was no longer considered "as a derive"(?) but a feature of normal science. A follower of Bernal, the British historian of science Derek John de Solla Price has had a major impact on the disciplinary formation of bibliometrics: with "the publication of "Science Since Babylon" (1961), "Little Science, Big Science" (1963), and "Networks of Scientific Papers" (1965) by Derek Price, scientometrics already had a sound empirical and conceptual toolkit available." Price was a proponent of "bibliometric reductionism". As Francis Joseph Cole and Nellie B. 
Eales in 1917, he argued that a publication is the best possible standard to lay out a quantitative study of science: they "resemble a pile of bricks (…) to remain in perpetuity as an intellectual edifice built by skill and artifice, resting on primitive foundation." Price doubled down on this reductionist approach by limiting in turn the large set of existing bibliographic data to citation data. Price's framework, like Garfield's, takes for granted the structural inequality of science production, as a minority of researchers creates a large share of publication and an even smaller share have a real measurable impact on subsequent research (with as few as 2% of papers having 4 citations or more at the time). Despite the unprecedented growth of post-war science, Price claimed for the continued existence of an "invisible college" of elite scientists that, as in the time of Robert Boyle undertook the most valuable work. While Price was aware of the power relationships that ensured the domination of such an elite, there was a fundamental ambiguity in the bibliometrics studies, that highlighted the concentration of academic publishing and prestige but also created tools, models and metrics that normalized pre-existing inequalities. The central position of the Scientific Citation Index amplified this performative effect. In the end of the 1960s Eugene Garfield formulated a "law of concentration" that was formally a reinterpretation of the Samuel Bradford's "law of scattering", with a major difference: while Bradford talked for the perspective of a specific research project, Garfield drew a generalization of the law to the entire set of scientific publishing: "the core literature for all scientific disciplines involves a group of no more than 1000 journals, and may involve as few as 500." Such law was also a justification of the practical limitation of the citation index to a limited subset of "core" journals, with the underlying assumption that any expansion into second-tier journals would yield diminishing returns. Rather than simply observing structural trends and patterns, bibliometrics tend to amplify and stratify them even further: "Garfield's citation indexes would have brought to a logical completion, the story of a stratified scientific literature produced by (…) a few, high-quality, "must-buy" international journals owned by a decreasing number of multinational corporations ruling the roost in the global information market." Under the impulsion of Garfield and Price, bibliometrics became both a research field and a testing ground for quantitative policy evaluation of research. This second aspect was not a major focus of the Science Citation Index has been a progressive development: the famous Impact Factor was originally devised in the 1960s by Garfield and Irving Sher to select the core group of journals that were to be featured in "Current Contents" and the Science Citation Index and was only regularly published after 1975. The metric itself is a very simple ratio between the total count of citation received by the journal on the past year and its productivity on the past two years, to ponderate the prolificity of some publications. 
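As a concrete illustration of that ratio, here is a minimal Python sketch of the usual two-year impact factor (the counts are invented and are not any journal's actual figures):

```python
# The two-year impact factor: citations received in year Y to items published in
# years Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2.
def impact_factor(citations_to_prev_two_years, items_published_prev_two_years):
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical journal: 1,200 citations in the reference year to its items from the
# two preceding years, during which it published 150 citable items.
print(round(impact_factor(1200, 150), 3))   # -> 8.0
```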
For example, "Nature" had an impact factor of 41.577 in 2017: formula_0 The simplicity of the impact factor has likely been a major factor in its wide adoption by scientific institutions, journals, funders or evaluators: "none of the revised versions or substitutes of ISI IF has gained general acceptance beyond its proponents, probably because the alleged alternatives lack the degree of interpretability of the original measure." Alongside these simplified measurements, Garfield continued to support and fund fundamental research in science history and sociology of science. First published 1964, "The Use of Citation Data in Writing the History of Science" compiles several experimental case studies relying on the citation network of the Science Citation Index, including a quantitative reconstruction of the discovery of the DNA. Interest in this area persisted well after the sell of the Index to Thomson Reuters: as late as 2001, Garfield unveiled "HistCite", a software for "algorithmic historiography" created in collaboration with Alexander Pudovkin, and Vladimir S. Istomin. The Web turn (1990–…). The development of the World Wide Web and the Digital Revolution had a complex impact on bibliometrics. The web itself and some of its key components (such as search engines) were partly a product of bibliometrics theory. In its original form, it was derived from a bibliographic scientific infrastructure commissioned to Tim Berners-Lee by the CERN for the specific needs of high energy physics, ENQUIRE. The structure of ENQUIRE was closer to an internal web of data: it connected "nodes" that "could refer to a person, a software module, etc. and that could be interlined with various relations such as made, include, describes and so forth." Sharing of data and data documentation was a major focus in the initial communication of the World Wide Web when the project was first unveiled in August 1991 : "The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data." The web rapidly superseded pre-existing online infrastructure, even when they included more advanced computing features. The core value attached to hyperlinking in the design of the web seem to validate the intuitions of the funding figures of bibliometrics: "The onset of the World Wide Web in the mid-1990s made Garfield's citationist dream more likely to come true. In the world network of hypertexts, not only is the bibliographic reference one of the possible forms taken by a hyperlink inside the electronic version of a scientific article, but the Web itself also exhibits a citation structure, links between web pages being formally similar to bibliographic citations." Consequently, bibliometrics concepts have been incorporated in major communication technologies the search algorithm of Google: "the citation-driven concept of relevance applied to the network of hyperlinks between web pages would revolutionize the way Web search engines let users quickly pick useful materials out of the anarchical universe of digital information." While the web expanded the intellectual influence of bibliometrics way beyond specialized scientific research, it also shattered the core tenets of the field. 
In contrast with the wide utopian visions of Bernal and Otlet that partly inspired it, the Science Citation Index was always conceived as a closed infrastructure, not only from the perspective of its users but also from the perspective of the indexed collection: the logical conclusion of Price's theory of the "invisible college" and Garfield's law of concentration was to focus exclusively on a limited set of core scientific journals. With the rapid expansion of the Web, numerous forms of publications (notably preprints), scientific activities and communities suddenly became visible and highlighted, by contrast, the limitations of applied bibliometrics. The other fundamental aspect of bibliometric reductionism, the exclusive focus on citations, has also been increasingly weakened by the multiplication of alternative data sources and by unprecedented access to full-text corpora, which made it possible to revive the large-scale semantic analysis first envisioned by Garfield in the early 1950s: "Links alone, then, just like bibliographic citations alone, do not seem sufficient to pin down critical communication patterns on the Web, and their statistical analysis will probably follow, in the years to come, the same path of citation analysis, establishing fruitful alliances with other emerging qualitative and quantitative outlooks over the web landscape." The close relationship between bibliometrics and commercial vendors of citation data and indicators has become more strained since the 1990s. Leading scientific publishers have diversified their activities beyond publishing and moved "from a content-provision to a data analytics business." By 2019, Elsevier had either acquired or built a large portfolio of platforms, tools, databases and indicators covering all aspects and stages of scientific research: "the largest supplier of academic journals is also in charge of evaluating and validating research quality and impact (e.g., Pure, Plum Analytics, Sci Val), identifying academic experts for potential employers (e.g., Expert Lookup), managing the research networking platforms through which to collaborate (e.g., SSRN, Hivebench, Mendeley), managing the tools through which to find funding (e.g., Plum X, Mendeley, Sci Val), and controlling the platforms through which to analyze and store researchers' data (e.g., Hivebench, Mendeley)." Metrics and indicators are key components of this vertical integration: "Elsevier's further move to offering metrics-based decision making is simultaneously a move to gain further influence in the entirety of the knowledge production process, as well as to further monetize its disproportionate ownership of content." The new market for scientific publication and scientific data has been compared with the business models of social networks, search engines and other forms of "platform capitalism": while content access is free, it is indirectly paid for through data extraction and surveillance. In 2020, Rafael Ball envisioned a bleak future for bibliometricians in which their research contributes to the emergence of a highly invasive form of "surveillance capitalism": scientists would "be given a whole series of scores which not only provide a more comprehensive picture of the academic performance, but also the perception, behaviour, demeanour, appearance and (subjective) credibility (…) In China, this kind of personal data analysis is already being implemented and used simultaneously as an incentive and penalty system." 
The "Leiden manifesto for research metrics" (2015) highlighted the growing rift between the commercial providers of scientific metrics and bibliometric communities. The signatories stressed the potential social damage of uncontrolled metric-based evaluation and surveillance: "as scientometricians, social scientists and research administrators, we have watched with increasing alarm the pervasive misapplication of indicators to the evaluation of scientific performance." Several structural reforms of bibliometric research and research evaluation are proposed, including a stronger reliance on qualitative assessment and the reliance on "open, transparent and simple" data collection. The Leiden Manifesto has stirred an important debate in bibliometrics/scientometrics/infometrics with some critics arguing that the elaboration of quantitative metrics bears no responsibility on their misuse in commercial platforms and research evaluation. Usage. Historically, bibliometric methods have been used to trace relationships amongst academic journal citations. Citation analysis, which involves examining an item's referring documents, is used in searching for materials and analyzing their merit. Citation indices, such as Institute for Scientific Information's Web of Science, allow users to search forward in time from a known article to more recent publications which cite the known item. Data from citation indexes can be analyzed to determine the popularity and impact of specific articles, authors, and publications. Using citation analysis to gauge the importance of one's work, for example, has been common in hiring practices of the late 20th century. Information scientists also use citation analysis to quantitatively assess the core journal titles and watershed publications in particular disciplines; interrelationships between authors from different institutions and schools of thought; and related data about the sociology of academia. Some more pragmatic applications of this information includes the planning of retrospective bibliographies, "giving some indication both of the age of material used in a discipline, and of the extent to which more recent publications supersede the older ones"; indicating through high frequency of citation which documents should be archived; comparing the coverage of secondary services which can help publishers gauge their achievements and competition, and can aid librarians in evaluating "the effectiveness of their stock." There are also some limitations to the value of citation data. They are often incomplete or biased; data has been largely collected by hand (which is expensive), though citation indexes can also be used; incorrect citing of sources occurs continually; thus, further investigation is required to truly understand the rationale behind citing to allow it to be confidently applied. Bibliometrics can be used for understanding the research hot topics, for example, in housing Bibliometrics, the results show that Keywords such as influencing factors of housing prices, supply and demand analysis, policy impact on housing prices, and regional city trends are commonly found in housing price research literature. Recent popular keywords include regression analysis and house price predictions. The USA has been a pioneer in housing price research, with well-established means and methods leading the way in this field. Developing countries, on the other hand, need to adopt innovative research approaches and focus more on sustainability in their housing price studies. 
Research indicates a strong correlation between housing prices and the economy, with keywords like gross domestic product, interest rates, and currency frequently appearing in economy-related cluster analyses. Bibliometrics is now used in quantitative research assessment exercises of academic output, a practice which is starting to threaten practice-based research. The UK government has considered using bibliometrics as a possible auxiliary tool in its Research Excellence Framework, a process which will assess the quality of the research output of UK universities and, on the basis of the assessment results, allocate research funding. This has met with significant skepticism and, after a pilot study, looks unlikely to replace the current peer review process. Furthermore, excessive use of bibliometrics in the assessment of the value of academic research encourages gaming the system in various ways, including publishing large quantities of work with little new content (see least publishable unit), publishing premature research to satisfy the numbers, and focusing on the popularity of a topic rather than on scientific value and the author's interest, often with a detrimental effect on research. Some of these phenomena are addressed in a number of recent initiatives, including the San Francisco Declaration on Research Assessment. Guidelines have been written on the use of bibliometrics in academic research, in disciplines such as Management, Education, and Information Science. Other bibliometric applications include: creating thesauri; measuring term frequencies; serving as metrics in scientometric analysis; exploring grammatical and syntactical structures of texts; measuring usage by readers; quantifying the value of online media of communication; supporting technological trend analyses; measuring Jaccard distance in cluster analysis; and text mining based on binary logistic regression. In the context of the big deal cancellations by several library systems in the world, data analysis tools like Unpaywall Journals are used by libraries to assist with such cancellations: libraries can avoid subscriptions for materials already served by instant open access via open archives like PubMed Central. Bibliometrics and open science. The open science movement has been acknowledged as the most important transformation faced by bibliometrics since the emergence of the field in the 1960s. The free sharing of a wide variety of scientific outputs on the web affected the practice of bibliometrics at all levels: the definition and the collection of the data, infrastructure, and metrics. Before the crystallization of the field around the Science Citation Index and the reductionist theories of Derek de Solla Price, bibliometrics had been largely influenced by utopian projects of enhanced knowledge sharing beyond specialized academic communities. The scientific networks envisioned by Paul Otlet or John Desmond Bernal have gained a new relevancy with the development of the Web: "The philosophical inspiration of the pioneers in pursuing the above lines of inquiry, however, faded gradually into the background (…) Whereas Bernal's input would eventually find an ideal continuation in the open access movement, the citation machine set into motion by Garfield and Small led to the proliferation of sectorial studies of a fundamentally empirical nature." From altmetrics to open metrics. 
In its early developments, the open science movement partly co-opted the standard tools of bibliometrics and quantitative evaluation: "the fact that no reference was made to metadata in the main OA declarations (Budapest, Berlin, Bethesda) has led to a paradoxical situation (…) it was through the use of the Web of Science that OA advocates were eager to show how much accessibility led to a citation advantage compared to paywalled articles." After 2000, an important bibliometric literature was devoted to the citation advantage of open access publications. By the end of the 2000s, the impact factor and other metrics were increasingly held responsible for a systemic "lock-in" of prestigious, non-accessible sources. Key figures of the open science movement like Stevan Harnad called for the creation of "open access scientometrics" that would take "advantage of the wealth of usage and impact metrics enabled by the multiplication of online, full-text, open access digital archives." As the audience of open science expanded beyond academic circles, new metrics were expected to aim at "measuring the broader societal impacts of scientific research." The concept of "alt-metrics" was introduced in 2009 by Cameron Neylon and Shirley Wu as "article-level metrics". In contrast with the focus of leading metrics on journals (impact factor) or, more recently, on individual researchers (h-index), article-level metrics make it possible to track the circulation of individual publications: "(an) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero – where we can see and count it". As such, they are more compatible with the diversity of publication strategies that has characterized open science: preprints, reports or even non-textual outputs like datasets or software may also have associated metrics. In their original research proposition, Neylon and Wu favored the use of data from reference management software like Zotero or Mendeley. The concept of "altmetrics" evolved and came to cover data extracted "from social media applications, like blogs, Twitter, ResearchGate and Mendeley." Social media sources proved to be more reliable on a long-term basis, as specialized academic tools like Mendeley came to be integrated into a proprietary ecosystem developed by leading scientific publishers. Major altmetrics indicators that emerged in the 2010s include Altmetric.com, PLUMx and ImpactStory. As the meaning of altmetrics shifted, the debate over the positive impact of the metrics evolved toward their redefinition in an open science ecosystem: "Discussions on the misuse of metrics and their interpretation put metrics themselves in the center of open science practices." While altmetrics were initially conceived for open science publications and their expanded circulation beyond academic circles, their compatibility with the emerging requirements for open metrics has been brought into question: social network data, in particular, is far from transparent and readily accessible. In 2016, Ulrich Herb published a systematic assessment of leading publication metrics with regard to open science principles and concluded that "neither citation-based impact metrics nor alternative metrics can be labeled open metrics. They all lack scientific foundation, transparency and verifiability." Herb laid out an alternative program for open metrics that has yet to be developed. 
The main criteria included: This definition has been implemented in research programs, like ROSI ("Reference implementation for open scientometric indicators"). In 2017, the European Commission Expert Group on Altmetrics expanded the open metrics program of Ulrich Herb under a new concept, "Next-generation metrics". These metrics should be managed by an "open, transparent and linked data infrastructure". The expert group underlined that not everything should be measured and that not all metrics are relevant: "Measure what matters: the next generation of metrics should begin with those qualities and impacts that European societies most value and need indices for, rather than those which are most easily collected and measure". Infrastructure for open citation data. Until the 2010s, the impact of the open science movement was largely limited to scientific publications: it "has tended to overlook the importance of social structures and systemic constraints in the design of new forms of knowledge infrastructures." In 1997, Robert D. Cameron called for the development of an open database of citations that would completely alter the conditions of science communication: "Imagine a universal bibliographic and citation database linking every scholarly work ever written—no matter how published—to every work that it cites and every work that cites it. Imagine that such a citation database was freely available over the Internet and was updated every day with all the new works published that day, including papers in traditional and electronic journals, conference papers, theses, technical reports, working papers, and preprints." Despite the development of specific indexes focused on open access works like CiteSeer, a large open alternative to the Science Citation Index failed to materialize. The collection of citation data remained dominated by large commercial structures such as the direct descendant of the Science Citation Index, the Web of Science. This had the effect of maintaining the emerging ecosystem of open resources at the periphery of academic networks: "common pool of resources is not governed or managed by the current scholarly commons initiative. There is no dedicated hard infrastructure and though there may be a nascent community, there is no formal membership." Since 2015, open science infrastructures, platforms and journals have converged toward the creation of digital academic commons; an increasingly structured, shared ecosystem of services and standards has emerged through the network of dependencies from one infrastructure to another. This movement stems from an increasingly critical stance toward leading proprietary databases. In 2012, the San Francisco Declaration on Research Assessment (DORA) called for "ending the use of journal impact factors in funding, hiring and promotion decisions." The Leiden Manifesto for research metrics (2015) encouraged the development of "open, transparent and simple" data collection. Collaboration between academic and non-academic actors collectively committed to the creation and maintenance of knowledge commons has been a determining factor in the creation of new infrastructures for open citation data. Since 2010, a dataset of open citation data, the "Open Citation Corpus", has been collected by several researchers from a variety of open access sources (including PLOS and PubMed). This collection was the initial kernel of the Initiative for Open Citations, launched in 2017 in response to issues of data accessibility faced by a Wikimedia project, Wikidata. 
A conference talk given by Dario Taraborelli, head of research at the Wikimedia Foundation, showed that only 1% of papers in Crossref had freely available citation metadata, so that references stored on Wikidata were unable to include the very large segment of non-free data. This coverage expanded to more than half of the recorded papers when Elsevier finally joined the initiative in January 2021. Since 2021, OpenAlex has become a major open infrastructure for scientific metadata. Initially created as a replacement for the discontinued "Microsoft Academic Graph", OpenAlex indexed, as of 2022, 209 million scholarly works from 213 million authors, as well as their associated institutions, venues and concepts, in a knowledge graph integrated into the semantic web (and Wikidata). Due to its large coverage and the large amount of data properly migrated from the Microsoft Academic Graph (MAG), OpenAlex "seems to be at least as suited for bibliometric analyses as MAG for publication years before 2021." In 2023, a study on the coverage of data journals in scientific indexes found that OpenAlex, along with Dimensions, "enjoy a strong advantage over the two more traditional databases, WoS and Scopus", and that it is overall especially suited for the indexation of non-journal publications like books, or of works from researchers in non-western countries. The opening of scientific data has been a major topic of debate in the bibliometrics and scientometrics community and has had wide-ranging social and intellectual consequences. In 2019, the entire scientific board of the "Journal of Informetrics" resigned and created a new open access journal, "Quantitative Science Studies". The journal had been published by Elsevier since 2007, and the members of the board were increasingly critical of the publisher's lack of progress in the open sharing of citation data: "Our field depends on high-quality scientific metadata. To make our science more robust and reproducible, these data must be as open as possible. Therefore, our editorial board was deeply concerned with the refusal of Elsevier to participate in the Initiative for Open Citations (I4OC)." Bibliometrics without evaluation: the shift to quantitative science studies. The unprecedented availability of a wide range of scientific productions (publications, data, software, conferences, reviews...) has entailed a more dramatic redefinition of the bibliometrics project. For new alternative works anchored in the open science landscape, the principles of bibliometrics as defined by Garfield and Price in the 1960s need to be rethought. The pre-selection of a limited corpus of important journals seems neither necessary nor appropriate. In 2019, the proponents of the Matilda project stated that they "do not want to just "open" the existing closed information, but wish to give back a fair place to the whole academic content that has been excluded from such tools, in a "all texts are born equal" fashion." They aim to "redefine bibliometrics tools as a technology" by focusing on the exploration and mapping of scientific corpora. Issues of inclusivity and a more critical approach to structural inequalities in science have become more prevalent in scientometrics and bibliometrics, especially in relation to gender imbalance. After 2020, one of the most heated debates in the field revolved around the reception of a study on gender imbalance in fundamental physics. The structural shift in the definition of bibliometrics, scientometrics or informetrics has entailed the need for alternative labels. 
The concept of "Quantitative Science Studies" was originally introduced in the late 2000s in the context of a renewed critical assessment of classic bibliometric findings. It has become more prevalent in the late 2010s. After leaving Elsevier, the editors of the "Journal of Infometrics" opted for this new label and created a journal for "Quantitative Science Studies". The first editorial removed all references to metric and aimed for a wider inclusion of quantitative and qualitative research on the science of science: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;We hope that those who identify under labels such as scientometrics, science of science, and metascience will all find a home in QSS. We also recognize the diverse range of disciplines for whom science is an object of study: We welcome historians of science, philosophers of science, and sociologists of science to our journal. While we bear the moniker of quantitative, we are inclusive of a breadth of epistemological perspectives. Quantitative science studies cannot operate in isolation: Robust empirical work requires the integration of theories and insights from all metasciences. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. A Guide to Utilize Qualitative Data Analysis Service for PhD Research
[ { "math_id": 0, "text": "\\text{IF}_{2017} = \\frac{\\text{Citations}_{2017}}{\\text{Publications}_{2016} + \\text{Publications}_{2015}} = \\frac{74090}{880 + 902} = 41.577." } ]
https://en.wikipedia.org/wiki?curid=1223245
12234623
Constant phase element
Circuit component which represents double-layer capacitance In electronics, a constant phase element is an equivalent electrical circuit component that models the behaviour of a double layer, that is, an imperfect capacitor (see double-layer capacitance). Constant phase elements are also used in equivalent circuit modeling and data fitting of electrochemical impedance spectroscopy data. A constant phase element also appears in models of the behavior of imperfect dielectrics. The generalization to the fields of imperfect electrical resistances, capacitances, and inductances leads to the general "phasance" concept: http://fr.scribd.com/doc/71923015/The-Phasance-Concept General equation. The electrical impedance can be calculated as: formula_0 where the CPE admittance is: formula_1 and Q0 and n (0 < n < 1) are frequency independent. Q0 = 1/|Z| at ω = 1 rad/s. The constant phase is always −(90·n)°, with n ranging from 0 to 1. The case n = 1 describes an ideal capacitor, while the case n = 0 describes a pure resistor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
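As an illustration (not part of the original article), the impedance expression above can be evaluated numerically; the parameter values below (Q0 = 1e-5, n = 0.8) are arbitrary assumptions chosen only to show that the phase angle stays fixed at −90·n degrees across frequencies. A minimal Python sketch:

import numpy as np

def cpe_impedance(omega, q0, n):
    # Z = 1 / (Q0 * (j*omega)^n); the phase is constant at -90*n degrees.
    return 1.0 / (q0 * (1j * omega) ** n)

omega = np.logspace(-1, 5, 7)              # angular frequencies in rad/s
z = cpe_impedance(omega, q0=1e-5, n=0.8)   # assumed, illustrative parameters
for w, zi in zip(omega, z):
    print(f"omega={w:9.2e} rad/s  |Z|={abs(zi):9.3e}  phase={np.degrees(np.angle(zi)):6.1f} deg")

For n = 0.8 the printed phase is −72° at every frequency, which is the defining property of the element.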
[ { "math_id": 0, "text": "Z_{CPE}=\\frac{1}{Y_{CPE}}=\\frac{1}{Q_0\\omega^n}e^{-\\frac{\\pi}{2}ni}" }, { "math_id": 1, "text": "Y_{CPE}=Q_0(\\omega i)^n" } ]
https://en.wikipedia.org/wiki?curid=12234623
12234732
Randles circuit
Equivalent circuit for an electrochemical reaction In electrochemistry, a Randles circuit is an equivalent electrical circuit that consists of an active electrolyte resistance "R"S in series with the parallel combination of the double-layer capacitance "C"dl and an impedance ("Z"w) of a faradaic reaction. It is commonly used in electrochemical impedance spectroscopy (EIS) for interpretation of impedance spectra, often with a constant phase element (CPE) replacing the double layer capacity. The Randles equivalent circuit is one of the simplest possible models describing processes at the electrochemical interface. In real electrochemical systems, impedance spectra are usually more complicated and, thus, the Randles circuit may not give appropriate results. Explanation. Figure 1 shows the equivalent circuit initially proposed by John Edward Brough Randles for modeling of interfacial electrochemical reactions in the presence of semi-infinite linear diffusion of electroactive particles to flat electrodes. A simple model for an electrode immersed in an electrolyte is simply the series combination of the ionic resistance, "R"S, with the double layer capacitance, "C"dl. If a faradaic reaction is taking place, then that reaction occurs in parallel with the charging of the double layer – so the charge transfer resistance, "R"ct, associated with the faradaic reaction is in parallel with "C"dl. The key assumption is that the rate of the faradaic reaction is controlled by diffusion of the reactants to the electrode surface. The diffusional resistance element (the Warburg impedance, "Z"W) is therefore in series with "R"ct. In this model, the impedance of a faradaic reaction consists of an active charge transfer resistance "R"ct and a specific electrochemical element of diffusion "Z"W, represented by a Warburg element formula_0 where "A"W is the Warburg coefficient, j is the imaginary unit and ω is the angular frequency. Identifying the Warburg element. In a simple situation, the Warburg element manifests itself in EIS spectra by a line with an angle of 45 degrees in the low-frequency region. Figure 2 shows an example of an EIS spectrum (presented as a Nyquist plot) simulated using the following parameters: "R"S = 20 Ω, "C"dl = 25 μF, "R"ct = 100 Ω, "A"W = 300 Ω•s−0.5. Values of the charge transfer resistance and the Warburg coefficient depend on the physico-chemical parameters of the system under investigation. To obtain the Randles circuit parameters, the fitting of the model to the experimental data should be performed using complex nonlinear least-squares procedures available in numerous EIS data fitting computer programs.
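A minimal numerical sketch (not part of the original article) shows how such a spectrum can be simulated, assuming the standard series/parallel combination described above, Z(ω) = RS + 1/(jωCdl + 1/(Rct + ZW)), and reusing the parameter values quoted for the simulated spectrum; an actual analysis would then fit this expression to measured data with complex nonlinear least squares.

import numpy as np

def randles_impedance(omega, rs=20.0, cdl=25e-6, rct=100.0, aw=300.0):
    zw = aw / np.sqrt(1j * omega)                    # Warburg (diffusion) impedance
    zf = rct + zw                                    # charge transfer in series with diffusion
    return rs + 1.0 / (1j * omega * cdl + 1.0 / zf)  # double layer in parallel with Zf, plus Rs

freq = np.logspace(-2, 5, 8)                         # frequencies in Hz
omega = 2 * np.pi * freq
for f, z in zip(freq, randles_impedance(omega)):
    # A Nyquist plot displays Re(Z) against -Im(Z); the 45-degree Warburg line
    # appears at the low-frequency end of the spectrum.
    print(f"f={f:9.2e} Hz  Re(Z)={z.real:8.1f} ohm  -Im(Z)={-z.imag:8.1f} ohm")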
[ { "math_id": 0, "text": "Z_\\mathrm{w} = \\frac{A_\\mathrm{w}}{\\sqrt{j\\omega}}," } ]
https://en.wikipedia.org/wiki?curid=12234732
12235870
Rainbow option
Rainbow option is a derivative exposed to two or more sources of uncertainty, as opposed to a simple option that is exposed to one source of uncertainty, such as the price of the underlying asset. The name "rainbow" comes from Rubinstein (1991), who emphasises that this option was based on a combination of various assets, like a rainbow is a combination of various colors. More generally, rainbow options are multiasset options, also referred to as correlation options or basket options. Rainbow options can take various other forms, but the unifying idea is to have a payoff that depends on the assets sorted by their performance at maturity. When the rainbow only pays the best (or worst) performing asset of the basket, it is also called "best-of" (or "worst-of"). Other popular options that can be reformulated as a rainbow option are spread and exchange options. Overview. Rainbow options are usually calls or puts on the best or worst of "n" underlying assets. Like the basket option, which is written on a group of assets and pays out on a weighted-average gain on the basket as a whole, a rainbow option also considers a group of assets, but usually pays out on the level of one of them. A simple example is a call rainbow option written on the FTSE 100, Nikkei and S&amp;P 500 which will pay out the difference between the strike price and the level of the index that has risen by the largest amount of the three. Another example is an option that includes more than one strike on more than one underlying asset, with a payoff equivalent to the largest in-the-money portion of any of the strike prices. Alternatively, in a more complex scenario, the assets are sorted by their performance at maturity; for instance, a rainbow call with weights 50%, 30%, 20%, with a basket including the FTSE 100, Nikkei and S&amp;P 500, pays 50% of the best return (at maturity) between the three indices, 30% of the second best and 20% of the third best. The options are often considered a correlation trade since the value of the option is sensitive to the correlation between the various basket components. Rainbow options are used, for example, to value natural resources deposits. Such assets are exposed to two uncertainties—price and quantity. Some simple options can be transformed into more complex instruments if the underlying risk model that the option reflected does not match a future reality. In particular, derivatives in the currency and mortgage markets have been subject to liquidity risk that was not reflected in the pricing of the option when sold. Payoff. Rainbow options refer to all options whose payoff depends on more than one underlying risky asset; each asset is referred to as a color of the rainbow. Examples of these include the "best of assets or cash" option, the call on the maximum, the call on the minimum, the put on the maximum, the put on the minimum, and the exchange option. Thus, the payoffs at expiry for rainbow European options are: formula_0 for the best of assets or cash option, which delivers the maximum of the asset prices or a fixed cash amount; formula_1 for the call on the maximum; formula_2 for the call on the minimum; formula_3 for the put on the maximum; formula_4 for the put on the minimum; and formula_5 for an option to exchange asset 2 for asset 1 (a put on asset 2 combined with a call on asset 1). Pricing and valuation. Rainbow options are usually priced using an appropriate industry-standard model (such as Black–Scholes) for each individual basket component, and a matrix of correlation coefficients applied to the underlying stochastic drivers for the various models. While there are some closed-form solutions for simpler cases (e.g. two-color European rainbows), semi-analytic solutions, and analytical approximations, the general case must be approached with Monte Carlo or binomial lattice methods. For bibliography see Lyden (1996). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
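As an illustration of the Monte Carlo approach mentioned above (a sketch, not the article's own method), the following Python snippet prices a European two-color "call on max" rainbow option under correlated geometric Brownian motions; all numerical inputs are arbitrary assumptions.

import numpy as np

def call_on_max_mc(s1, s2, k, r, sig1, sig2, rho, t, n_paths=200_000, seed=0):
    # Simulate correlated terminal prices under risk-neutral GBM dynamics,
    # then discount the average payoff max(max(S1, S2) - K, 0).
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    st1 = s1 * np.exp((r - 0.5 * sig1**2) * t + sig1 * np.sqrt(t) * z1)
    st2 = s2 * np.exp((r - 0.5 * sig2**2) * t + sig2 * np.sqrt(t) * z2)
    payoff = np.maximum(np.maximum(st1, st2) - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Assumed inputs: both assets at 100, strike 100, 1-year maturity, 20%/30% vols, correlation 0.5.
print(f"call-on-max price: {call_on_max_mc(100, 100, 100, r=0.03, sig1=0.2, sig2=0.3, rho=0.5, t=1.0):.2f}")

Consistent with the description of rainbow options as correlation trades, re-running the sketch with a higher rho lowers the call-on-max value, since the maximum of two highly correlated assets is less dispersed.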
[ { "math_id": 0, "text": "\\max(S_1, S_2, . . . , S_n, K)" }, { "math_id": 1, "text": "\\max(\\max(S_1, S_2, . . . ,S_n)-K,0)" }, { "math_id": 2, "text": "\\max(\\min(S_1, S_2, . . . ,S_n)-K,0)" }, { "math_id": 3, "text": "\\max(K-\\max(S_1, S_2, . . . ,S_n),0)" }, { "math_id": 4, "text": "\\max(K-\\min(S_1, S_2, . . . , S_n),0)" }, { "math_id": 5, "text": "\\max(S_1 - S_2, 0)" } ]
https://en.wikipedia.org/wiki?curid=12235870
12237167
Sokhotski–Plemelj theorem
Complex analysis theorem The Sokhotski–Plemelj theorem (Polish spelling is "Sochocki") is a theorem in complex analysis, which helps in evaluating certain integrals. The real-line version of it (see below) is often used in physics, although rarely referred to by name. The theorem is named after Julian Sochocki, who proved it in 1868, and Josip Plemelj, who rediscovered it as a main ingredient of his solution of the Riemann–Hilbert problem in 1908. Statement of the theorem. Let "C" be a smooth closed simple curve in the plane, and formula_0 an analytic function on "C". Note that the Cauchy-type integral formula_1 cannot be evaluated for any "z" on the curve "C". However, on the interior and exterior of the curve, the integral produces analytic functions, which will be denoted formula_2 inside "C" and formula_3 outside. The Sokhotski–Plemelj formulas relate the limiting boundary values of these two analytic functions at a point "z" on "C" and the Cauchy principal value formula_4 of the integral: formula_5 formula_6 Subsequent generalizations relax the smoothness requirements on curve "C" and the function "φ". Version for the real line. Especially important is the version for integrals over the real line. formula_7 where formula_8 is the Dirac delta function and formula_4 denotes the Cauchy principal value. One may take the difference of these two equalities to obtain formula_9 These formulae should be interpreted as integral equalities, as follows: Let f be a complex-valued function which is defined and continuous on the real line, and let a and b be real constants with formula_10. Then formula_11 and formula_12 Note that this version makes no use of analyticity. Proof of the real version. A simple proof is as follows. formula_13 For the first term, we note that "ε"⁄(π("x"2 + "ε"2)) is a nascent delta function, and therefore approaches a Dirac delta function in the limit. Therefore, the first term equals ∓"i"π "f"(0). For the second term, we note that the factor "x"2⁄("x"2 + "ε"2) approaches 1 for |"x"| ≫ "ε", approaches 0 for |"x"| ≪ "ε", and is exactly symmetric about 0. Therefore, in the limit, it turns the integral into a Cauchy principal value integral. Physics application. In quantum mechanics and quantum field theory, one often has to evaluate integrals of the form formula_14 where "E" is some energy and "t" is time. This expression, as written, is undefined (since the time integral does not converge), so it is typically modified by adding a negative real term to "-iEt" in the exponential, and then taking that to zero, i.e.: formula_15 where the latter step uses the real version of the theorem. Heitler function. In theoretical quantum optics, the derivation of a master equation in Lindblad form often requires the following integral function, which is a direct consequence of the Sokhotski–Plemelj theorem and is often called the Heitler function: formula_16 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
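As a numerical illustration (not part of the original article), the real-line formula can be checked for an assumed test function, here f(x) = exp(−x²) on [a, b] = [−1, 2] with a small ε; the left-hand side is integrated directly, while the right-hand side combines SciPy's Cauchy principal value weight with the iπf(0) term from the delta function.

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)     # assumed test function
a, b, eps = -1.0, 2.0, 1e-4

# Left-hand side: real and imaginary parts of the integral of f(x)/(x - i*eps),
# using 1/(x - i*eps) = (x + i*eps)/(x**2 + eps**2).
re_lhs, _ = quad(lambda x: f(x) * x / (x**2 + eps**2), a, b, points=[0.0], limit=200)
im_lhs, _ = quad(lambda x: f(x) * eps / (x**2 + eps**2), a, b, points=[0.0], limit=200)

# Right-hand side: Cauchy principal value of f(x)/x plus i*pi*f(0).
pv, _ = quad(f, a, b, weight="cauchy", wvar=0.0)
print("LHS:", re_lhs + 1j * im_lhs)
print("RHS:", pv + 1j * np.pi * f(0.0))

The two printed values agree to within the accuracy set by the finite ε, which is the content of the real-line Sokhotski–Plemelj formula for the sign choice 1/(x − iε).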
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": " \\phi(z) = \\frac{1}{2\\pi i} \\int_C\\frac{\\varphi(\\zeta) \\, d\\zeta}{\\zeta-z}, " }, { "math_id": 2, "text": "\\phi_i" }, { "math_id": 3, "text": "\\phi_e" }, { "math_id": 4, "text": "\\mathcal{P}" }, { "math_id": 5, "text": "\\lim_{w \\to z}\\phi_i(w) = \\frac{1}{2\\pi i}\\mathcal{P}\\int_C\\frac{\\varphi(\\zeta) \\, d\\zeta}{\\zeta-z} + \\frac{1}{2}\\varphi(z)," }, { "math_id": 6, "text": "\\lim_{w \\to z}\\phi_e(w) = \\frac{1}{2\\pi i}\\mathcal{P}\\int_C\\frac{\\varphi(\\zeta) \\, d\\zeta}{\\zeta-z}-\\frac{1}{2}\\varphi(z). " }, { "math_id": 7, "text": " \\lim_{\\varepsilon\\to0^{+}} \\frac{1}{x\\pm i\\varepsilon}= \\mp i\\pi\\delta(x) + {\\mathcal{P}} {\\Big(\\frac{1}{x}\\Big)}." }, { "math_id": 8, "text": "\\delta(x)" }, { "math_id": 9, "text": " \\lim_{\\varepsilon \\to 0^+} \\left[ \\frac{1}{x-i\\varepsilon} - \\frac{1}{x+i\\varepsilon} \\right] = 2\\pi i \\delta(x)." }, { "math_id": 10, "text": "a < 0 < b" }, { "math_id": 11, "text": "\\lim_{\\varepsilon\\to 0^+} \\int_a^b \\frac{f(x)}{x\\pm i \\varepsilon}\\,dx = \\mp i \\pi f(0) + \\mathcal{P}\\int_a^b \\frac{f(x)}{x}\\, dx" }, { "math_id": 12, "text": "\\lim_{\\varepsilon \\to 0^+ } \\int_a^b \\left[ \\frac{f(x)}{x- i \\varepsilon} - \\frac{f(x)}{x+ i \\varepsilon} \\right] \\, dx = 2 \\pi i f(0)" }, { "math_id": 13, "text": "\n\\lim_{\\varepsilon\\to 0^+} \\int_a^b \\frac{f(x)}{x\\pm i \\varepsilon}\\,dx = \\mp i \\pi \\lim_{\\varepsilon\\to 0^+} \\int_a^b \\frac{\\varepsilon}{\\pi(x^2+\\varepsilon^2)}f(x)\\,dx + \\lim_{\\varepsilon\\to 0^+} \\int_a^b \\frac{x^2}{x^2+\\varepsilon^2} \\, \\frac{f(x)}{x}\\, dx." }, { "math_id": 14, "text": "\\int_{-\\infty}^\\infty dE\\, \\int_0^\\infty dt\\, f(E)\\exp(-iEt)" }, { "math_id": 15, "text": "\\lim_{\\varepsilon\\to 0^+} \\int_{-\\infty}^\\infty dE\\, \\int_0^\\infty dt\\, f(E)\\exp(-iEt-\\varepsilon t) = -i \\lim_{\\varepsilon\\to 0^+} \\int_{-\\infty}^\\infty \\frac{f(E)}{E-i\\varepsilon}\\,dE = \\pi f(0)-i \\mathcal{P}\\int_{-\\infty}^{\\infty} \\frac{f(E)}{E}\\,dE," }, { "math_id": 16, "text": " \\int_0^\\infty d\\tau\\, \\exp(-i(\\omega \\pm \\nu)\\tau) = \\pi \\delta(\\omega \\pm \\nu) - i \\mathcal{P} \\Big(\\frac{1}{\\omega \\pm \\nu}\\Big)" } ]
https://en.wikipedia.org/wiki?curid=12237167
1223782
Triflate
Chemical group (–OSO2CF3) or anion (charge –1) In organic chemistry, triflate (systematic name: trifluoromethanesulfonate) is a functional group with the formula and structure –OSO2CF3. The triflate group is often represented by −OTf, as opposed to −Tf, which is the triflyl group, −SO2CF3. For example, "n"-butyl triflate can be written as C4H9OTf. The corresponding triflate anion, CF3SO3−, is an extremely stable polyatomic ion; this comes from the fact that triflic acid (CF3SO3H) is a superacid; i.e. it is more acidic than pure sulfuric acid, already one of the strongest acids known. Applications. A triflate group is an excellent leaving group used in certain organic reactions such as nucleophilic substitution, Suzuki couplings and Heck reactions. Since alkyl triflates are extremely reactive in SN2 reactions, they must be stored in conditions free of nucleophiles (such as water). The anion owes its stability to resonance stabilization, which causes the negative charge to be spread symmetrically over the three oxygen atoms. An additional stabilization is achieved by the trifluoromethyl group, which acts as a strong electron-withdrawing group using the sulfur atom as a bridge. Triflates have also been applied as ligands for group 11 and 13 metals along with lanthanides. Lithium triflate is used in some lithium ion batteries as a component of the electrolyte. A mild triflating reagent is phenyl triflimide or "N","N"-bis(trifluoromethanesulfonyl)aniline, where the by-product is [CF3SO2N−Ph]−. Triflate salts. Triflate salts are thermally very stable, with melting points up to 350 °C for the sodium, boron and silver salts, especially in water-free form. They can be obtained directly from triflic acid and the metal hydroxide or metal carbonate in water. Alternatively, they can be obtained by reacting metal chlorides with neat triflic acid or silver triflate, or by reacting barium triflate with metal sulfates in water: formula_0 Metal triflates are used as Lewis acid catalysts in organic chemistry. Especially useful are the lanthanide triflates of the type Ln(OTf)3 (where Ln is a lanthanoid). A related popular catalyst, scandium triflate, is used in such reactions as aldol reactions and Diels–Alder reactions. An example is the Mukaiyama aldol addition reaction between benzaldehyde and the silyl enol ether of cyclohexanone with an 81% chemical yield; the corresponding reaction with the yttrium salt fails. Triflate is a commonly used weakly coordinating anion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{align}\n \\ce{MCl}_n + n \\, \\ce{HOTf} &\\longrightarrow \\ce{M(OTf)}_n + n \\, \\ce{HCl} \\\\\n \\ce{MCl}_n + n \\, \\ce{AgOTf} &\\longrightarrow \\ce{M(OTf)}_n + n \\, \\ce{AgCl \\, v} \\\\\n \\ce{M(SO4)}_n + n \\, \\ce{Ba(OTf)2} &\\longrightarrow \\ce{M(OTf)}_{2n} + n \\, \\ce{BaSO4 v}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1223782
12240
Gold
Chemical element with atomic number 79 (Au) Gold is a chemical element; it has symbol Au (from Latin "aurum") and atomic number 79. In its pure form, it is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal. Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series. It is solid under standard conditions. Gold often occurs in its free elemental (native) state, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as in electrum), naturally alloyed with other metals like copper and palladium, and as mineral inclusions such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides). Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term 'acid test'. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction. A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other works of art throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971. In 2020, the world's largest gold producer was China, followed by Russia and Australia. As of 2020, a total of around 201,296 tonnes of gold exist above ground. This is equal to a cube with each side measuring roughly 21.8 metres. The world's consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, as well as conductivity of electricity, have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, the production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatory agents in medicine. Characteristics. Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via the formation, reorientation, and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of about one square metre, and an avoirdupois ounce into roughly 300 square feet. Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in the visors of heat-resistant suits and in sun visors for spacesuits. Gold is a good conductor of heat and electricity. 
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in the counterfeiting of gold bars, such as by plating a tungsten bar with gold. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is 22.59 g/cm3. Color. Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium. Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications. Colloidal gold, used by electron-microscopists, is red if the particles are small; larger particles of colloidal gold are blue. Isotopes. Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is 195Au with a half-life of 186.1 days. The least stable is 171Au, which decays by proton emission with a half-life of 30 μs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are 195Au, which decays by electron capture, and 196Au, which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay. At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only 178Au, 180Au, 181Au, 182Au, and 188Au do not have isomers. Gold's most stable isomer is 198m2Au with a half-life of 2.27 days. Gold's least stable isomer is 177m2Au with a half-life of only 7 ns. 184m1Au has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths. Synthesis. The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaoka's prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. 
In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced. Chemistry. Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is , which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives. Au(III) (referred to as auric) is a common oxidation state, and is illustrated by gold(III) chloride, . The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex. Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone: formula_0 formula_1 Some free halogens react to form the corresponding gold halides. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride . Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride . Gold reacts with bromine at 140 °C to form a combination of gold(III) bromide and gold(I) bromide AuBr, but reacts very slowly with iodine to form gold(I) iodide AuI: &lt;chem display=block&gt;2 Au{} + 3 F2 -&gt;[{}\atop\Delta] 2 AuF3&lt;/chem&gt; &lt;chem display=block&gt;2 Au{} + 3 Cl2 -&gt;[{}\atop\Delta] 2 AuCl3&lt;/chem&gt; &lt;chem display=block&gt;2 Au{} + 2 Br2 -&gt;[{}\atop\Delta] AuBr3{} + AuBr&lt;/chem&gt; &lt;chem display=block&gt;2 Au{} + I2 -&gt;[{}\atop\Delta] 2 AuI&lt;/chem&gt; Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chlorauric acid. Unlike sulfur, phosphorus reacts directly with gold at elevated temperatures to produce gold phosphide (Au2P3). Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors. Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming ions, or chloroauric acid, thereby enabling further oxidation: &lt;chem display=block&gt;2 Au{} + 6 H2SeO4 -&gt;[{}\atop{200^\circ\text{C}}] Au2(SeO4)3{} + 3 H2SeO3{} + 3 H2O&lt;/chem&gt; &lt;chem display=block&gt;Au{} + 4HCl{} + HNO3 -&gt; HAuCl4{} + NO\uparrow + 2H2O &lt;/chem&gt; Gold is similarly unaffected by most bases. 
It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present to form soluble complexes. Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate. Rare oxidation states. Less common oxidation states of gold include −1, +2, and +5. The −1 oxidation state occurs in aurides, compounds containing the anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making a stable species, analogous to the halides. Gold also has a –1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride. Gold(II) compounds are usually diamagnetic with Au–Au bonds such as [. The evaporation of a solution of in concentrated produces red crystals of gold(II) sulfate, . Originally thought to be a mixed-valence compound, it has been shown to contain cations, analogous to the better-known mercury(I) ion, . A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in . In September 2023, a novel type of metal-halide perovskite material consisting of Au3+ and Au2+ cations in its crystal structure has been found. It has been shown to be unexpectedly stable at normal conditions. Gold pentafluoride, along with its derivative anion, , and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state. Some gold compounds exhibit "aurophilic bonding", which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond. Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species . Origin. Gold production in the universe. Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed. Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. 
This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe. Asteroid origin theories. Because the Earth was molten when it was formed, almost all of the gold present in the early Earth probably sank into the planetary core. Therefore, as hypothesized in one model, most of the gold in the Earth's crust and mantle is thought to have been delivered to Earth by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago. Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, and thus the gold did not actually arrive in the asteroid/meteorite. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original diameter crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold that is ascertained to exist today on Earth has been extracted from these Witwatersrand rocks. Mantle return theories. Much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the mantle. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, as evidenced by their findings at Deseado Massif in the Argentinian Patagonia. Occurrence. On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver, and is commonly known as white gold. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity. Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is a pyrite. These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering; and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets. Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite () and antimonide aurostibite (). 
Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (), novodneprite () and weishanite (). A 2004 research paper suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits. A 2013 study has claimed water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces. Seawater. The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), which is attributed to wind-blown dust or rivers. At 10 parts per quadrillion, the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data. A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project. History. The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC. The oldest gold artifacts in the world are from Bulgaria and are dating back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" finding of gold artifacts in history. Several prehistoric Bulgarian finds are considered no less old – the golden treasures of Hotnitsa, Durankulak, artifacts from the Kurgan settlement of Yunatsite near Pazardzhik, the golden treasure Sakar, as well as beads and gold jewelry found in the Kurgan settlement of Provadia – Solnitsata ("salt pit"). However, Varna gold is most often called the oldest since this treasure is the largest and most diverse. 
Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age. The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia. Gold is mentioned in the Amarna letters numbered 19 and 26 from around the 14th century BC. Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC. The legend of the golden fleece dating from eighth century BCE may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the Chu (state) circulated the Ying Yuan, one kind of square gold coin. In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia "Naturalis Historia" written towards the end of the first century AD. During Mansa Musa's (ruler of the Mali Empire from 1312 to 1337) hajj to Mecca in 1324, he passed through Cairo in July 1324, and was reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels where he gave away so much gold that it depressed the price in Egypt for over a decade, causing high inflation. 
A contemporary Arab historian remarked: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Gold was at a high price in Egypt until they came in that year. The mithqal did not go below 25 dirhams and was generally above, but from that time its value fell and it cheapened in price and has remained cheap till now. The mithqal does not exceed 22 dirhams or less. This has been the state of affairs for about twelve years until this day by reason of the large amount of gold which they brought into Egypt and spent there [...]. The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" ("teocuitlatl" in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, for the indigenous peoples of North America gold was considered useless and they saw much greater value in other minerals which were directly related to their utility, such as obsidian, flint, and slate. El Dorado is applied to a legendary story in which precious stones were found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the previous myth were also combined with those of a legendary lost city. El Dorado, was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged in Lake Guatavita. The legends surrounding El Dorado changed over time, as it went from being a man, to a city, to a kingdom, and then finally to an empire. Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused in gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain. Gold played a role in western culture, as a cause for desire and of corruption, as told in children's fables such as Rumpelstiltskin—where Rumpelstiltskin turns hay into gold for the peasant's daughter in return for her child when she becomes a princess—and the stealing of the hen that lays golden eggs in Jack and the Beanstalk. The top prize at the Olympic Games and many other sports competitions is the gold medal. 75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950. One main goal of the alchemists was to produce gold from other substances, such as lead — presumably by the interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by using nuclear transmutation. 
Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun. The Dome of the Rock is covered with an ultra-thin layer of gold. The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly, the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for bridal crowns since antiquity. An ancient Talmudic text circa 100 AD describes Rachel, wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave circa 370 BC. Etymology. "Gold" is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *"gulþą" from Proto-Indo-European *"ǵʰelh₃-" 'to shine, to gleam; to be yellow or green'. The symbol "Au" is from "aurum", the Latin word for 'gold'. The Proto-Indo-European ancestor of "aurum" was "*h₂é-h₂us-o-", meaning 'glow'. This word is derived from the same root (Proto-Indo-European "*h₂u̯es-" 'to dawn') as "*h₂éu̯sōs", the ancestor of the Latin word "aurora", 'dawn'. This etymological relationship is presumably behind the frequent claim in scientific publications that "aurum" meant 'shining dawn'. Culture. In popular culture, gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards). Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the Golden Rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years" or a "golden jubilee". The height of a civilization is referred to as a golden age. Religion. The first known prehistoric human usages of gold were religious in nature. In some forms of Christianity and Judaism, gold has been associated with both the sacred and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography, the halos of Christ, the Virgin Mary and the saints are often golden. In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted. In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gemstones. 
According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and a substance that could even help souls to paradise. Wedding rings are typically made of gold. It is long lasting and unaffected by the passage of time, and may aid in the ring symbolism of eternal vows before God and the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths, instead) during the ceremony, an amalgamation of symbolic rites. On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they were from the late 9th century, dating to the Abbasid Caliphate around 1,100 years ago. Production. According to the United States Geological Survey in 2016, of all the gold that has ever been accounted for, about 85% remains in active use. Mining and prospecting. Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted for is from South Africa. Production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest. In 2020, China was the world's leading gold-mining country, followed in order by Russia, Australia, the United States, Canada, and Ghana. In South America, the controversial Pascua Lama project aims at exploitation of rich fields in the high mountains of the Atacama Desert, at the border between Chile and Argentina. It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small-scale mining. The city of Johannesburg, located in South Africa, was founded as a result of the Witwatersrand Gold Rush, which resulted in the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a thick layer of archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome which lies close to the center of the Witwatersrand basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at very great depths, making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where archean rivers from the north and north-west formed extensive pebbly braided-river deltas before draining into the "Witwatersrand sea" where the rest of the Witwatersrand sediments were deposited. The Second Boer War of 1899–1901 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa. During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. 
The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada. The Grasberg mine, located in Papua, Indonesia, is the largest gold mine in the world. Extraction and refining. Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible. The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes. After initial production, gold is often subsequently refined industrially by the Wohlwill process, which is based on electrolysis, or by the Miller process, which is chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation as well as cupellation, or refining methods based on the dissolution of gold in aqua regia. Recycling. In 1997, recycled gold accounted for approximately 20% of the 2700 tons of gold supplied to the market. Jewelry companies such as Generation Collection and computer companies including Dell conduct recycling. As of 2020, the amount of carbon dioxide produced in mining a kilogram of gold is 16 tonnes, while recycling a kilogram of gold produces 53 kilograms of carbon dioxide equivalent. Approximately 30 percent of the global gold supply is recycled and not mined as of 2020. Consumption. The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry. According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India. Pollution. Gold production is associated with hazardous pollution. Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical, which can kill living creatures exposed to even minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters. Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid, which in turn dissolves these heavy metals, facilitating their passage into surface water and ground water. This process is called acid mine drainage. These gold ore dumps contain long-term, highly hazardous waste. It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. 
Mercury can then enter into the human food chain in the form of methylmercury. Mercury poisoning in humans causes incurable brain function damage and severe retardation. Gold extraction is also a highly energy-intensive industry, extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction requires nearly 25 kWh of electricity per gram of gold produced. Monetary use. Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity. The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The "talent" coin of gold in use during the periods of Grecian history both before and during the time of the life of Homer weighed between 8.42 and 8.75 grams. From an earlier preference in using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries. Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th century industrial economies. In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort. Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations. After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999. Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold future contract. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices. The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed "fine" gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6 kt). Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. 
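The karat, decimal-fraction, and millesimal-fineness figures quoted in this section are the same proportion expressed on different scales, so converting between them is simple arithmetic. The following short Python sketch is illustrative only (the function names are hypothetical, not part of any standard library) and reproduces the values cited above:

def karat_to_fineness(karat):
    """Convert a karat rating to millesimal fineness (parts of gold per 1000; 24k = 1000)."""
    return round(karat / 24 * 1000)

def fraction_to_karat(fraction):
    """Convert a decimal gold fraction (0 to 1) to karats."""
    return fraction * 24

print(karat_to_fineness(24))      # 1000 -> pure "fine" gold
print(karat_to_fineness(22))      # 917  -> crown gold
print(fraction_to_karat(0.900))   # 21.6 -> US circulating coin alloy after 1837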
Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise "Utopia". On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party. The ISO 4217 currency code of gold is XAU. Many holders of gold store it in the form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92). The "special issue" Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the "popular issue" Canadian Gold Maple Leaf coin has a purity of 99.99%. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda. Price. Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by "karat" (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, with 0.995 being nearly pure. The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open. As of September 2017, gold was valued at around $42 per gram ($1,300 per troy ounce). History. Historically, gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($1.125/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox. 
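Because a troy ounce is defined as exactly 31.1034768 grams, the dollar-per-gram equivalents quoted in this section follow directly from the per-ounce quotes. A minimal, illustrative Python sketch of the conversion (hypothetical helper name, not taken from any pricing library):

TROY_OUNCE_IN_GRAMS = 31.1034768  # definition of the troy ounce

def price_per_gram(price_per_troy_ounce):
    """Convert a gold price quoted per troy ounce to the equivalent price per gram."""
    return price_per_troy_ounce / TROY_OUNCE_IN_GRAMS

for quote in (1300.00, 20.67, 35.00):
    print(quote, "->", round(price_per_gram(quote), 3))
# 1300.0 -> 41.796  (about $42 per gram)
# 20.67  -> 0.665
# 35.0   -> 1.125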
In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes. After 15 August 1971 Nixon shock, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980, to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1023.50 per troy ounce ($32.91/g). On 2 December 2009, gold reached a new high closing at $1,217.23. Gold further rallied hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East. From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1200 per troy ounce in late 2014 and 2015. In August 2020, the gold price picked up to US$2060 per ounce after a total growth of 59% from August 2018 to October 2020, a period during which it outplaced the Nasdaq total return of 54%. Gold futures are traded on the COMEX exchange. These contacts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams). Below are the CQG contract specifications outlining the futures contracts: Other applications. Jewelry. Because of the softness of pure (24k) gold, it is usually alloyed with other metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper, silver, palladium or other base metals in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects. By 2014, the gold jewelry industry was escalating despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion according to a World Gold Council report. Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery. Electronics. 
Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about three dollars. But since nearly one billion cell phones are produced each year, a gold value of US$2.82 in each phone adds to US$2.82 billion in gold from just this application. (Prices updated to November 2022) Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications in electronic sliding contacts in highly humid or corrosive atmospheres, and in use for contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines) remains very common. Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding. The concentration of free electrons in gold metal is 5.91×1022 cm−3. Gold is highly conductive to electricity and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project. It is estimated that 16% of the world's presently-accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan. Medicine. Metallic and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and known to Dioscorides. In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy. Even some modern esotericists and forms of alternative medicine assign metallic gold a healing power. In the 19th century gold had a reputation as an anxiolytic, a therapy for nervous disorders. Depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence were treated, and most notably alcoholism (Keeley, 1897). The apparent paradox of the actual toxicology of the substance suggests the possibility of serious gaps in the understanding of the action of gold in physiology. 
Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid). Some gold salts do have anti-inflammatory properties and at present two are still used as pharmaceuticals in the treatment of arthritis and other similar conditions in the US (sodium aurothiomalate and auranofin). These drugs have been explored as a means to help to reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites. Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others. Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen. Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source used in the scanning electron microscope. The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases. Cuisine. Cake with edible gold decoration Toxicity. Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body. 
Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol. Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel. A sample of the fungus "Aspergillus niger" was found growing in gold mining solution and was found to contain cyano complexes of metals such as gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{Au + O2 -> }(\\text{no reaction})" }, { "math_id": 1, "text": "\\ce{Au{} + O3 ->[{}\\atop{t<100^\\circ\\text{C}}] }(\\text{no reaction})" } ]
https://en.wikipedia.org/wiki?curid=12240
1224067
Stille reaction
Chemical reaction used in organic synthesis &lt;templatestyles src="Reactionbox/styles.css"/&gt; The Stille reaction is a chemical reaction widely used in organic synthesis. The reaction involves the coupling of two organic groups, one of which is carried as an organotin compound (also known as organostannanes). A variety of organic electrophiles provide the other coupling partner. The Stille reaction is one of many palladium-catalyzed coupling reactions. formula_0 *formula_1: Allyl, alkenyl, aryl, benzyl, acyl *formula_2: halides (Cl, Br, I), pseudohalides (OTf, OPO(OR)2), OAc The R1 group attached to the trialkyltin is normally sp2-hybridized, including vinyl, and aryl groups. These organostannanes are also stable to both air and moisture, and many of these reagents either are commercially available or can be synthesized from literature precedent. However, these tin reagents tend to be highly toxic. X is typically a halide, such as Cl, Br, or I, yet pseudohalides such as triflates and sulfonates and phosphates can also be used. Several reviews have been published. History. The first example of a palladium catalyzed coupling of aryl halides with organotin reagents was reported by Colin Eaborn in 1976. This reaction yielded from 7% to 53% of diaryl product. This process was expanded to the coupling of acyl chlorides with alkyl-tin reagents in 1977 by Toshihiko Migita, yielding 53% to 87% ketone product. In 1977, Migita published further work on the coupling of allyl-tin reagents with both aryl (C) and acyl (D) halides. The greater ability of allyl groups to migrate to the palladium catalyst allowed the reactions to be performed at lower temperatures. Yields for aryl halides ranged from 4% to 100%, and for acyl halides from 27% to 86%. Reflecting the early contributions of Migita and Kosugi, the Stille reaction is sometimes called the Migita–Kosugi–Stille coupling. John Kenneth Stille subsequently reported the coupling of a variety of alkyl tin reagents in 1978 with numerous aryl and acyl halides under mild reaction conditions with much better yields (76%–99%). Stille continued his work in the 1980s on the synthesis of a multitude of ketones using this broad and mild process and elucidated a mechanism for this transformation. By the mid-1980s, over 65 papers on the topic of coupling reactions involving tin had been published, continuing to explore the substrate scope of this reaction. While initial research in the field focused on the coupling of alkyl groups, most future work involved the much more synthetically useful coupling of vinyl, alkenyl, aryl, and allyl organostannanes to halides. Due to these organotin reagent's stability to air and their ease of synthesis, the Stille reaction became common in organic synthesis. Mechanism. The mechanism of the Stille reaction has been extensively studied. The catalytic cycle involves an oxidative addition of a halide or pseudohalide (2) to a palladium catalyst (1), transmetalation of 3 with an organotin reagent (4), and reductive elimination of 5 to yield the coupled product (7) and the regenerated palladium catalyst (1). However, the detailed mechanism of the Stille coupling is extremely complex and can occur via numerous reaction pathways. Like other palladium-catalyzed coupling reactions, the active palladium catalyst is believed to be a 14-electron Pd(0) complex, which can be generated in a variety of ways. Use of an 18- or 16- electron Pd(0) source , can undergo ligand dissociation to form the active species. 
Second, phosphines can be added to ligandless palladium(0). Finally, as pictured, reduction of a Pd(II) source (8) , , , , etc.) by added phosphine ligands or organotin reagents is also common Oxidative addition. Oxidative addition to the 14-electron Pd(0) complex is proposed. This process gives a 16-electron Pd(II) species. It has been suggested that anionic ligands, such as OAc, accelerate this step by the formation of [Pd(OAc)(PR3)n]−, making the palladium species more nucleophillic. In some cases, especially when an sp3-hybridized organohalide is used, an SN2 type mechanism tends to prevail, yet this is not as commonly seen in the literature. However, despite normally forming a "cis"-intermediate after a concerted oxidative addition, this product is in rapid equilibrium with its "trans"-isomer. There are multiple reasons why isomerization is favored here. First, a bulky ligand set is usually used in these processes, such as phosphines, and it is highly unfavorable for them to adopt a "cis" orientation relative to each other, resulting in isomerization to the more favorable trans product. An alternative explanation for this phenomenon, dubbed antisymbiosis or transphobia, is by invocation of the sdn model. Under this theory, palladium is a hypervalent species. Hence R1 and the trans ligand, being trans to each other, will compete with one palladium orbital for bonding. This 4-electron 3-center bond is weakest when two strong donating groups are present, which heavily compete for the palladium orbital. Relative to any ligand normally used, the C-donor R1 ligand has a much higher trans effect. This trans influence is a measure of how competitive ligands trans to each other will compete for palladium's orbital. The usual ligand set, phosphines, and C-donors (R1) are both soft ligands, meaning that they will form strong bonds to palladium, and heavily compete with each other for bonding. Since halides or pseudohalides are significantly more electronegative, their bonding with palladium will be highly polarized, with most of the electron density on the X group, making them low trans effect ligands. Hence, it will be highly favorable for R1 to be trans to X, since the R1 group will be able to form a stronger bond to the palladium. Transmetallation. The transmetallation of the "trans" intermediate from the oxidative addition step is believed to proceed via a variety of mechanisms depending on the substrates and conditions. The most common type of transmetallation for the Stille coupling involves an associative mechanism. This pathway implies that the organostannane, normally a tin atom bonded to an allyl, alkenyl, or aryl group, can coordinate to the palladium via one of these double bonds. This produces a fleeting pentavalent, 18-electron species, which can then undergo ligand detachment to form a square planar complex again. Despite the organostannane being coordinated to the palladium through the R2 group, R2 must be formally transferred to the palladium (the R2-Sn bond must be broken), and the X group must leave with the tin, completing the transmetalation. This is believed to occur through two mechanisms. First, when the organostannane initially adds to the trans metal complex, the X group can coordinate to the tin, in addition to the palladium, producing a cyclic transition state. Breakdown of this adduct results in the loss of R3Sn-X and a trivalent palladium complex with R1 and R2 present in a "cis" relationship. 
Another commonly seen mechanism involves the same initial addition of the organostannane to the "trans" palladium complex as seen above; however, in this case, the X group does not coordinate to the tin, producing an open transition state. After the α-carbon relative to tin attacks the palladium, the tin complex will leave with a net positive charge. In the scheme below, please note that the double bond coordinating to tin denotes R2, so any alkenyl, allyl, or aryl group. Furthermore, the X group can dissociate at any time during the mechanism and bind to the Sn+ complex at the end. Density functional theory calculations predict that an open mechanism will prevail if the 2 ligands remain attached to the palladium and the X group leaves, while the cyclic mechanism is more probable if a ligand dissociates prior to the transmetalation. Hence, good leaving groups such as triflates in polar solvents favor the cyclic transition state, while bulky phosphine ligands will favor the open transition state. A less common pathway for transmetalation is through a dissociative or solvent assisted mechanism. Here, a ligand from the tetravalent palladium species dissociates, and a coordinating solvent can add onto the palladium. When the solvent detaches, to form a 14-electron trivalent intermediate, the organostannane can add to the palladium, undergoing an open or cyclic type process as above. Reductive elimination step. In order for R1-R2 to reductively eliminate, these groups must occupy mutually "cis" coordination sites. Any "trans"-adducts must therefore isomerize to the "cis" intermediate or the coupling will be frustrated. A variety of mechanisms exist for reductive elimination and these are usually considered to be concerted. First, the 16-electron tetravalent intermediate from the transmetalation step can undergo unassisted reductive elimination from a square planar complex. This reaction occurs in two steps: first, the reductive elimination is followed by coordination of the newly formed sigma bond between R1 and R2 to the metal, with ultimate dissociation yielding the coupled product. The previous process, however, is sometimes slow and can be greatly accelerated by dissociation of a ligand to yield a 14-electron T shaped intermediate. This intermediate can then rearrange to form a Y-shaped adduct, which can undergo faster reductive elimination. Finally, an extra ligand can associate to the palladium to form an 18-electron trigonal bipyramidal structure, with R1 and R2 cis to each other in equatorial positions. The geometry of this intermediate makes it similar to the Y-shaped above. The presence of bulky ligands can also increase the rate of elimination. Ligands such as phosphines with large bite angles cause steric repulsion between L and R1 and R2, resulting in the angle between L and the R groups to increase and the angle between R1 and R2 to hence decrease, allowing for quicker reductive elimination. Kinetics. The rate at which organostannanes transmetalate with palladium catalysts is shown below. Sp2-hybridized carbon groups attached to tin are the most commonly used coupling partners, and sp3-hybridized carbons require harsher conditions and terminal alkynes may be coupled via a C-H bond through the Sonogashira reaction. As the organic tin compound, a trimethylstannyl or tributylstannyl compound is normally used. 
Although trimethylstannyl compounds show higher reactivity compared with tributylstannyl compounds and have much simpler 1H-NMR spectra, the toxicity of the former is much larger. Optimizing which ligands are best at carrying out the reaction with high yield and turnover rate can be difficult. This is because the oxidative addition requires an electron rich metal, hence favoring electron donating ligands. However, an electron deficient metal is more favorable for the transmetalation and reductive elimination steps, making electron withdrawing ligands the best here. Therefore, the optimal ligand set heavily depends on the individual substrates and conditions used. These can change the rate determining step, as well as the mechanism for the transmetalation step. Normally, ligands of intermediate donicity, such as phosphines, are utilized. Rate enhancements can be seen when moderately electron-poor ligands, such as tri-2-furylphosphine or triphenylarsenine are used. Likewise, ligands of high donor number can slow down or inhibit coupling reactions. These observations imply that normally, the rate-determining step for the Stille reaction is transmetalation. Additives. The most common additive to the Stille reaction is stoichiometric or co-catalytic copper(I), specifically copper iodide, which can enhance rates up by &gt;103 fold. It has been theorized that in polar solvents copper transmetalate with the organostannane. The resulting organocuprate reagent could then transmetalate with the palladium catalyst. Furthermore, in ethereal solvents, the copper could also facilitate the removal of a phosphine ligand, activating the Pd center. Lithium chloride has been found to be a powerful rate accelerant in cases where the X group dissociates from palladium (i.e. the open mechanism). The chloride ion is believed to either displace the X group on the palladium making the catalyst more active for transmetalation or by coordination to the Pd(0) adduct to accelerate the oxidative addition. Also, LiCl salt enhances the polarity of the solvent, making it easier for this normally anionic ligand (–Cl, –Br, –OTf, etc.) to leave. This additive is necessary when a solvent like THF is used; however, utilization of a more polar solvent, such as NMP, can replace the need for this salt additive. However, when the coupling's transmetalation step proceeds via the cyclic mechanism, addition of lithium chloride can actually decrease the rate. As in the cyclic mechanism, a neutral ligand, such as phosphine, must dissociate instead of the anionic X group. Finally, sources of fluoride ions, such as cesium fluoride, also effect on the catalytic cycle. First, fluoride can increase the rates of reactions of organotriflates, possibly by the same effect as lithium chloride. Furthermore, fluoride ions can act as scavengers for tin byproducts, making them easier to remove via filtration. Competing side reactions. The most common side reactivity associated with the Stille reaction is homocoupling of the stannane reagents to form an R2-R2 dimer. It is believed to proceed through two possible mechanisms. First, reaction of two equivalents of organostannane with the Pd(II) precatalyst will yield the homocoupled product after reductive elimination. Second, the Pd(0) catalyst can undergo a radical process to yield the dimer. The organostannane reagent used is traditionally tetravalent at tin, normally consisting of the sp2-hybridized group to be transferred and three "non-transferable" alkyl groups. 
As seen above, alkyl groups are normally the slowest at migrating onto the palladium catalyst. It has also been found that at temperatures as low as 50 °C, aryl groups on both palladium and a coordinated phosphine can exchange. While normally not detected, they can be a potential minor product in many cases. Finally, a rather rare and exotic side reaction is known as cine substitution. Here, after initial oxidative addition of an aryl halide, this Pd-Ar species can insert across a vinyl tin double bond. After β-hydride elimination, migratory insertion, and protodestannylation, a 1,2-disubstituted olefin can be synthesized. Numerous other side reactions can occur, and these include E/Z isomerization, which can potentially be a problem when an alkenylstannane is utilized. The mechanism of this transformation is currently unknown. Normally, organostannanes are quite stable to hydrolysis, yet when very electron-rich aryl stannanes are used, this can become a significant side reaction. Scope. Electrophile. Vinyl halides are common coupling partners in the Stille reaction, and reactions of this type are found in numerous natural product total syntheses. Normally, vinyl iodides and bromides are used. Vinyl chlorides are insufficiently reactive toward oxidative addition to Pd(0). Iodides are normally preferred: they will typically react faster and under milder conditions than will bromides. This difference is demonstrated below by the selective coupling of a vinyl iodide in the presence of a vinyl bromide. Normally, the stereochemistry of the alkene is retained throughout the reaction, except under harsh reaction conditions. A variety of alkenes may be used, and these include both α- and β-halo-α,β unsaturated ketones, esters, and sulfoxides (which normally need a copper (I) additive to proceed), and more (see example below). Vinyl triflates are also sometimes used. Some reactions require the addition of LiCl and others are slowed down, implying that two mechanistic pathways are present. Another class of common electrophiles are aryl and heterocyclic halides. As for the vinyl substrates, bromides and iodides are more common despite their greater expense. A multitude of aryl groups can be chosen, including rings substituted with electron donating substituents, biaryl rings, and more. Halogen-substituted heterocycles have also been used as coupling partners, including pyridines, furans, thiophenes, thiazoles, indoles, imidazoles, purines, uracil, cytosines, pyrimidines, and more (See below for table of heterocycles; halogens can be substituted at a variety of positions on each). Below is an example of the use of Stille coupling to build complexity on heterocycles of nucleosides, such as purines. Aryl triflates and sulfonates are also couple to a wide variety of organostannane reagents. Triflates tend to react comparably to bromides in the Stille reaction. Acyl chlorides are also used as coupling partners and can be used with a large range of organostannane, even alkyl-tin reagents, to produce ketones (see example below). However, it is sometimes difficult to introduce acyl chloride functional groups into large molecules with sensitive functional groups. An alternative developed to this process is the Stille-carbonylative cross-coupling reaction, which introduces the carbonyl group via carbon monoxide insertion. Allylic, benzylic, and propargylic halides can also be coupled. 
While commonly employed, allylic halides proceed via an η3 transition state, allowing for coupling with the organostannane at either the α or γ position, occurring predominantly at the least substituted carbon (see example below). Alkenyl epoxides (adjacent epoxides and alkenes) can also undergo this same coupling through an η3 transition state as, opening the epoxide to an alcohol. While allylic and benzylic acetates are commonly used, propargylic acetates are unreactive with organostannanes. Stannane. Organostannane reagents are common. Several are commercially available. Stannane reagents can be synthesized by the reaction of a Grignard or organolithium reagent with trialkyltin chlorides. For example, vinyltributyltin is prepared by the reaction of vinylmagnesium bromide with tributyltin chloride. Hydrostannylation of alkynes or alkenes provides many derivatives. Organotin reagents are air and moisture stable. Some reactions can even take place in water. They can be purified by chromatography. They are tolerant to most functional groups. Some organotin compounds are heavily toxic, especially trimethylstannyl derivatives. The use of vinylstannane, or alkenylstannane reagents is widespread. In regards to limitations, both very bulky stannane reagents and stannanes with substitution on the α-carbon tend to react sluggishly or require optimization. For example, in the case below, the α-substituted vinylstannane only reacts with a terminal iodide due to steric hindrance. Arylstannane reagents are also common and both electron donating and electron withdrawing groups actually increase the rate of the transmetalation. This again implies that two mechanisms of transmetalation can occur. The only limitation to these reagents are substituents at the ortho-position as small as methyl groups can decrease the rate of reaction. A wide variety of heterocycles (see Electrophile section) can also be used as coupling partners (see example with a thiazole ring below). Alkynylstannanes, the most reactive of stannanes, have also been used in Stille couplings. They are not usually needed as terminal alkynes can couple directly to palladium catalysts through their C-H bond via Sonogashira coupling. Allylstannanes have been reported to have worked, yet difficulties arise, like with allylic halides, with the difficulty in control regioselectivity for α and γ addition. Distannane and acyl stannane reagents have also been used in Stille couplings. Applications. The Stille reaction has been used in the synthesis of a variety of polymers. However, the most widespread use of the Stille reaction is its use in organic syntheses, and specifically, in the synthesis of natural products. Natural product total synthesis. Larry Overman's 19-step enantioselective total synthesis of quadrigemine C involves a double Stille cross metathesis reaction. The complex organostannane is coupled onto two aryl iodide groups. After a double Heck cyclization, the product is achieved. Panek's 32 step enantioselective total synthesis of ansamycin antibiotic (+)-mycotrienol makes use of a late stage tandem Stille type macrocycle coupling. Here, the organostannane has two terminal tributyl tin groups attacked to an alkene. This organostannane "stitches" the two ends of the linear starting material into a macrocycle, adding the missing two methylene units in the process. After oxidation of the aromatic core with ceric ammonium nitrate (CAN) and deprotection with hydrofluoric acid yields the natural product in 54% yield for the 3 steps. 
Stephen F. Martin and coworkers' 21 step enantioselective total synthesis of the manzamine antitumor alkaloid Ircinal A makes use of a tandem one-pot Stille/Diels-Alder reaction. An alkene group is added to vinyl bromide, followed by an "in situ" Diels-Alder cycloaddition between the added alkene and the alkene in the pyrrolidine ring. Numerous other total syntheses utilize the Stille reaction, including those of oxazolomycin, lankacidin C, onamide A, calyculin A, lepicidin A, ripostatin A, and lucilactaene. The image below displays the final natural product, the organohalide (blue), the organostannane (red), and the bond being formed (green and circled). From these examples, it is clear that the Stille reaction can be used both at the early stages of the synthesis (oxazolomycin and calyculin A), at the end of a convergent route (onamide A, lankacidin C, ripostatin A), or in the middle (lepicidin A and lucilactaene). The synthesis of ripostatin A features two concurrent Stille couplings followed by a ring-closing metathesis. The synthesis of lucilactaene features a middle subunit, having a borane on one side and a stannane on the other, allowing for Stille reactionfollowed by a subsequent Suzuki coupling. Variations. In addition to performing the reaction in a variety of organic solvents, conditions have been devised which allow for a broad range of Stille couplings in aqueous solvent. In the presence of Cu(I) salts, palladium-on-carbon has been shown to be an effective catalyst. In the realm of green chemistry a Stille reaction is reported taking place in a low melting and highly polar mixture of a sugar such as mannitol, a urea such as dimethylurea and a salt such as ammonium chloride . The catalyst system is tris(dibenzylideneacetone)dipalladium(0) with triphenylarsine: Stille–carbonylative cross-coupling. A common alteration to the Stille coupling is the incorporation of a carbonyl group between R1 and R2, serving as an efficient method to form ketones. This process is extremely similar to the initial exploration by Migita and Stille (see History) of coupling organostannane to acyl chlorides. However, these moieties are not always readily available and can be difficult to form, especially in the presence of sensitive functional groups. Furthermore, controlling their high reactivity can be challenging. The Stille-carbonylative cross-coupling employs the same conditions as the Stille coupling, except with an atmosphere of carbon monoxide (CO) being used. The CO can coordinate to the palladium catalyst (9) after initial oxidative addition, followed by CO insertion into the Pd-R1 bond (10), resulting in subsequent reductive elimination to the ketone (12). The transmetalation step is normally the rate-determining step. Larry Overman and coworkers make use of the Stille-carbonylative cross-coupling in their 20-step enantioselective total synthesis of strychnine. The added carbonyl is later converted to a terminal alkene via a Wittig reaction, allowing for the key tertiary nitrogen and the pentacyclic core to be formed via an aza-Cope-Mannich reaction. Giorgio Ortar et al. explored how the Stille-carbonylative cross-coupling could be used to synthesize benzophenone phosphores. These were embedded into 4-benzoyl-L-phenylalanine peptides and used for their photoaffinity labelling properties to explore various peptide-protein interactions. 
Louis Hegedus' 16-step racemic total synthesis of jatrophone involved a Stille-carbonylative cross-coupling as its final step to form the 11-membered macrocycle. Instead of a halide, a vinyl triflate is used as the coupling partner. Stille–Kelly coupling. Building on the seminal 1976 publication by Eaborn, in which arylstannanes are formed from aryl halides and distannanes, T. Ross Kelly applied this process to the intramolecular coupling of aryl halides. This tandem stannylation/aryl halide coupling was used for the syntheses of a variety of dihydrophenanthrenes. Most of the internal rings formed are limited to 5 or 6 members; however, some cases of macrocyclization have been reported. Unlike in a normal Stille coupling, chlorine does not work as the halogen, possibly due to its lower reactivity in the halogen sequence (its shorter bond length and stronger bond dissociation energy make it more difficult to break via oxidative addition). Starting in the middle of the scheme below and going clockwise, the palladium catalyst (1) oxidatively adds to the most reactive C-X bond (13) to form 14, followed by transmetalation with distannane (15) to yield 16 and reductive elimination to yield an arylstannane (18). The regenerated palladium catalyst (1) can oxidatively add to the second C-X bond of 18 to form 19, followed by intramolecular transmetalation to yield 20, followed by reductive elimination to yield the coupled product (22). Jie Jack Li et al. made use of the Stille-Kelly coupling in their synthesis of a variety of benzo[4,5]furopyridine ring systems. They employ a three-step process involving a Buchwald-Hartwig amination, another palladium-catalyzed coupling reaction, and finally an intramolecular Stille-Kelly coupling. Note that the aryl-iodide bond will oxidatively add to the palladium faster than either of the aryl-bromide bonds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n{\\color{Blue}\\ce{R^1-Sn(Alkyl)3}} + {\\color{Red}\\ce{R^2-X}} \n\\ \\ce{->[{\\color{Green}\\ce{Pd^0}}\\text{ (catalytic)}][\\text{ligand set}]} \\ \n\\overbrace{{\\color{Blue}\\ce{R^1}}\\!-\\!{\\color{Red}\\ce{R^2}}}^{coupled\\ product} + {\\color{Red}\\ce{X}}\\!-\\!{\\color{Blue}\\ce{Sn(Alkyl)3}}" }, { "math_id": 1, "text": "{\\color{Blue}\\ce{R^1}}\\!,\\ {\\color{Red}\\ce{R^2}}" }, { "math_id": 2, "text": "{\\color{Red}\\ce{X}}" } ]
https://en.wikipedia.org/wiki?curid=1224067
1224131
Hull–White model
Model of future interest rates In financial mathematics, the Hull–White model is a model of future interest rates. In its most generic formulation, it belongs to the class of no-arbitrage models that are able to fit today's term structure of interest rates. It is relatively straightforward to translate the mathematical description of the evolution of future interest rates onto a tree or lattice and so interest rate derivatives such as bermudan swaptions can be valued in the model. The first Hull–White model was described by John C. Hull and Alan White in 1990. The model is still popular in the market today. The model. One-factor model. The model is a short-rate model. In general, it has the following dynamics: formula_0 There is a degree of ambiguity among practitioners about exactly which parameters in the model are time-dependent or what name to apply to the model in each case. The most commonly accepted naming convention is the following: Two-factor model. The two-factor Hull–White model contains an additional disturbance term whose mean reverts to zero, and is of the form: formula_3 where formula_4 is a deterministic function, typically the identity function (extension of the one-factor version, analytically tractable, and with potentially negative rates), the natural logarithm (extension of Black–Karasinski, not analytically tractable, and with positive interest rates), or combinations (proportional to the natural logarithm on small rates and proportional to the identity function on large rates); and formula_5 has an initial value of 0 and follows the process: formula_6 Analysis of the one-factor model. For the rest of this article we assume only formula_7 has "t"-dependence. Neglecting the stochastic term for a moment, notice that for formula_8 the change in "r" is negative if "r" is currently "large" (greater than formula_9 and positive if the current value is small. That is, the stochastic process is a mean-reverting Ornstein–Uhlenbeck process. θ is calculated from the initial yield curve describing the current term structure of interest rates. Typically α is left as a user input (for example it may be estimated from historical data). σ is determined via calibration to a set of caplets and swaptions readily tradeable in the market. When formula_2, formula_1, and formula_10 are constant, Itô's lemma can be used to prove that formula_11 which has distribution formula_12 where formula_13 is the normal distribution with mean formula_14 and variance formula_15. When formula_16 is time-dependent, formula_17 which has distribution formula_18 Bond pricing using the Hull–White model. It turns out that the time-"S" value of the "T"-maturity discount bond has distribution (note the affine term structure here!) formula_19 where formula_20 formula_21 Note that the terminal value formula_22 is log-normally distributed. Derivative pricing. By selecting as numeraire the time-"S" bond (which corresponds to switching to the "S"-forward measure), we have from the fundamental theorem of arbitrage-free pricing, the value at time "t" of a derivative which has payoff at time "S": formula_23 Here, formula_24 is the expectation taken with respect to the forward measure. Moreover, standard arbitrage arguments show that the time "T" forward price formula_25 for a payoff at time "T" given by "V(T)" must satisfy formula_26, thus formula_27 Thus it is possible to value many derivatives "V" dependent solely on a single bond formula_22 analytically when working in the Hull–White model. 
For example, in the case of a bond put formula_28 Because formula_22 is lognormally distributed, the general calculation used for the Black–Scholes model shows that formula_29 where formula_30 and formula_31 Thus today's value (with the "P"(0,"S") multiplied back in and "t" set to 0) is: formula_32 Here formula_33 is the standard deviation (relative volatility) of the log-normal distribution for formula_22. A fairly substantial amount of algebra shows that it is related to the original parameters via formula_34 Note that this expectation was done in the "S"-bond measure, whereas we did not specify a measure at all for the original Hull–White process. This does not matter — the volatility is all that matters and is measure-independent. Because interest rate caps/floors are equivalent to bond puts and calls respectively, the above analysis shows that caps and floors can be priced analytically in the Hull–White model. Jamshidian's trick applies to Hull–White (as today's value of a swaption in the Hull–White model is a monotonic function of today's short rate). Thus knowing how to price caps is also sufficient for pricing swaptions. In the event that the underlying is a compounded backward-looking rate rather than a (forward-looking) LIBOR term rate, Turfus (2020) shows how this formula can be straightforwardly modified to take into account the additional convexity. Swaptions can also be priced directly as described in Henrard (2003). Direct implementations are usually more efficient. Monte-Carlo simulation, trees and lattices. However, valuing vanilla instruments such as caps and swaptions is useful primarily for calibration. The real use of the model is to value somewhat more exotic derivatives such as bermudan swaptions on a lattice, or other derivatives in a multi-currency context such as Quanto Constant Maturity Swaps, as explained for example in Brigo and Mercurio (2001). The efficient and exact Monte-Carlo simulation of the Hull–White model with time dependent parameters can be easily performed, see Ostrovski (2013) and (2016). An open-source implementation of the exact Monte-Carlo simulation following Fries (2016) can be found in finmath lib. Forecasting. Even though single-factor models such as Vasicek, CIR and Hull–White have been devised for pricing, recent research has shown their potential with regard to forecasting. Orlando et al. (2018, 2019) provided a new methodology to forecast future interest rates, called CIR#. The idea, apart from turning a short-rate model used for pricing into a forecasting tool, lies in an appropriate partitioning of the dataset into subgroups according to a given distribution. There it was shown how the said partitioning enables capturing statistically significant time changes in the volatility of interest rates. Following the said approach, Orlando et al. (2021) compare the Hull–White model with the CIR model in terms of forecasting and prediction of interest rate directionality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
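As a concrete illustration of the bond-option result above, the following Python sketch prices a European put on the "T"-maturity discount bond using the expressions for "d"1, "d"2 and the volatility σP√S quoted in the text. It assumes constant α and σ and takes today's discount factors "P"(0,"S") and "P"(0,"T") as inputs; the function name and the numerical values in the example are illustrative assumptions, not part of the original description.

```python
import numpy as np
from scipy.stats import norm

def hw_zcb_put(P0S, P0T, K, S, T, alpha, sigma):
    """Time-0 price of a European put with expiry S and strike K on the
    T-maturity zero-coupon bond, under one-factor Hull-White with constant
    alpha and sigma (the analytic formula quoted in the text above)."""
    # sigma_P * sqrt(S): standard deviation of log P(S,T) under the S-forward measure
    sigP_sqrtS = (sigma / alpha) * (1.0 - np.exp(-alpha * (T - S))) * \
                 np.sqrt((1.0 - np.exp(-2.0 * alpha * S)) / (2.0 * alpha))
    F = P0T / P0S                                   # forward bond price
    d1 = (np.log(F / K) + 0.5 * sigP_sqrtS ** 2) / sigP_sqrtS
    d2 = d1 - sigP_sqrtS
    return P0S * K * norm.cdf(-d2) - P0T * norm.cdf(-d1)

# Illustrative numbers only: flat 3% curve, alpha = 0.1, sigma = 0.01
P0S, P0T = np.exp(-0.03 * 1.0), np.exp(-0.03 * 2.0)
print(hw_zcb_put(P0S, P0T, K=0.97, S=1.0, T=2.0, alpha=0.1, sigma=0.01))
```

Because a caplet is equivalent to a put on a zero-coupon bond (with suitably rescaled strike and notional), a routine of this kind can also be reused when calibrating the model to cap prices.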
[ { "math_id": 0, "text": "dr(t) = \\left[\\theta(t) - \\alpha(t) r(t)\\right]\\,dt + \\sigma(t)\\, dW(t)." }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "d\\,f(r(t)) = \\left [\\theta(t) + u - \\alpha(t)\\,f(r(t))\\right ]dt + \\sigma_1(t)\\, dW_1(t)," }, { "math_id": 4, "text": "\\displaystyle f" }, { "math_id": 5, "text": "\\displaystyle u" }, { "math_id": 6, "text": "du = -bu\\,dt + \\sigma_2\\,dW_2(t)" }, { "math_id": 7, "text": "\\theta " }, { "math_id": 8, "text": "\\alpha > 0" }, { "math_id": 9, "text": "\\theta(t)/\\alpha)" }, { "math_id": 10, "text": "\\sigma" }, { "math_id": 11, "text": " r(t) = e^{-\\alpha t}r(0) + \\frac{\\theta}{\\alpha} \\left(1- e^{-\\alpha t}\\right) + \\sigma e^{-\\alpha t}\\int_0^t e^{\\alpha u}\\,dW(u)," }, { "math_id": 12, "text": "r(t) \\sim \\mathcal{N}\\left(e^{-\\alpha t} r(0) + \\frac{\\theta}{\\alpha} \\left(1- e^{-\\alpha t}\\right), \\frac{\\sigma^2}{2\\alpha} \\left(1-e^{-2\\alpha t}\\right)\\right)," }, { "math_id": 13, "text": "\\mathcal{N}( \\mu ,\\sigma^2 )" }, { "math_id": 14, "text": "\\mu" }, { "math_id": 15, "text": "\\sigma^2" }, { "math_id": 16, "text": "\\theta(t)" }, { "math_id": 17, "text": " r(t) = e^{-\\alpha t}r(0) + \\int_{0}^{t}e^{\\alpha(s-t)}\\theta(s)ds + \\sigma e^{-\\alpha t}\\int_0^t e^{\\alpha u}\\,dW(u)," }, { "math_id": 18, "text": "r(t) \\sim \\mathcal{N}\\left(e^{-\\alpha t} r(0) + \\int_{0}^{t}e^{\\alpha(s-t)}\\theta(s)ds, \\frac{\\sigma^2}{2\\alpha} \\left(1-e^{-2\\alpha t}\\right)\\right)." }, { "math_id": 19, "text": "P(S,T) = A(S,T)\\exp(-B(S,T)r(S))," }, { "math_id": 20, "text": " B(S,T) = \\frac{1-\\exp(-\\alpha(T-S))}{\\alpha} ," }, { "math_id": 21, "text": " A(S,T) = \\frac{P(0,T)}{P(0,S)}\\exp\\left( \\, -B(S,T) \\frac{\\partial\\log(P(0,S))}{\\partial S} - \\frac{\\sigma^2(\\exp(-\\alpha T)-\\exp(-\\alpha S))^2(\\exp(2\\alpha S)-1)}{4\\alpha^3}\\right) ." }, { "math_id": 22, "text": "P(S,T)" }, { "math_id": 23, "text": "V(t) = P(t,S)\\mathbb{E}_S[V(S) \\mid \\mathcal{F}(t)]." }, { "math_id": 24, "text": "\\mathbb{E}_S" }, { "math_id": 25, "text": "F_V(t,T)" }, { "math_id": 26, "text": "F_V(t,T) = V(t)/P(t,T)" }, { "math_id": 27, "text": "F_V(t,T) = \\mathbb{E}_T[V(T)\\mid\\mathcal{F}(t)]." }, { "math_id": 28, "text": "V(S) = (K-P(S,T))^+." }, { "math_id": 29, "text": "{E}_S[(K-P(S,T))^{+}] = KN(-d_2) - F(t,S,T)N(-d_1)," }, { "math_id": 30, "text": "d_1 = \\frac{\\log(F/K) + \\sigma_P^2S/2}{\\sigma_P \\sqrt{S}}" }, { "math_id": 31, "text": "d_2 = d_1 - \\sigma_P \\sqrt{S}." }, { "math_id": 32, "text": "P(0,S)KN(-d_2) - P(0,T)N(-d_1)." }, { "math_id": 33, "text": "\\sigma_P" }, { "math_id": 34, "text": "\\sqrt{S}\\sigma_P\n=\\frac{\\sigma}{\\alpha}(1-\\exp(-\\alpha(T-S)))\\sqrt{\\frac{1-\\exp(-2\\alpha S)}{2\\alpha}}." } ]
https://en.wikipedia.org/wiki?curid=1224131
12241533
8-simplex
In geometry, an 8-simplex is a self-dual regular 8-polytope. It has 9 vertices, 36 edges, 84 triangle faces, 126 tetrahedral cells, 126 5-cell 4-faces, 84 5-simplex 5-faces, 36 6-simplex 6-faces, and 9 7-simplex 7-faces. Its dihedral angle is cos−1(1/8), or approximately 82.82°. It can also be called an enneazetton, or ennea-8-tope, as a 9-facetted polytope in eight-dimensions. The name "enneazetton" is derived from "ennea" for nine facets in Greek and "-zetta" for having seven-dimensional facets, and "-on". As a configuration. This configuration matrix represents the 8-simplex. The rows and columns correspond to vertices, edges, faces, cells, 4-faces, 5-faces, 6-faces and 7-faces. The diagonal numbers say how many of each element occur in the whole 8-simplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual simplex's matrix is identical to its 180 degree rotation. formula_0 Coordinates. The Cartesian coordinates of the vertices of an origin-centered regular enneazetton having edge length 2 are: formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8 More simply, the vertices of the "8-simplex" can be positioned in 9-space as permutations of (0,0,0,0,0,0,0,0,1). This construction is based on facets of the 9-orthoplex. Another origin-centered construction uses (1,1,1,1,1,1,1,1)/3 and permutations of (1,1,1,1,1,1,1,-11)/12 for edge length √2. Related polytopes and honeycombs. This polytope is a facet in the uniform tessellations: 251, and 521 with respective Coxeter-Dynkin diagrams: This polytope is one of 135 uniform 8-polytopes with A8 symmetry.
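The 9-space construction mentioned above, taking the vertices as the permutations of (0,0,0,0,0,0,0,0,1), can be checked with a few lines of Python. The snippet below is only an illustrative sketch: it builds the 9 standard basis vectors of R9 and verifies that every pair of vertices lies at the same distance √2, consistent with a regular 8-simplex having 36 edges.

```python
import numpy as np
from itertools import combinations

# The 9 permutations of (0,...,0,1) in 9-space are the standard basis vectors.
vertices = np.eye(9)

# All pairwise distances should equal sqrt(2) for a regular simplex.
dists = {round(float(np.linalg.norm(a - b)), 12) for a, b in combinations(vertices, 2)}
print(dists)                                   # {1.414213562373}
print(len(list(combinations(range(9), 2))))    # 36 vertex pairs = 36 edges
```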
[ { "math_id": 0, "text": "\\begin{bmatrix}\\begin{matrix}\n9 & 8 & 28 & 56 & 70 & 56 & 28 & 8\n\\\\ 2 & 36 & 7 & 21 & 35 & 35 & 21 & 7\n\\\\ 3 & 3 & 84 & 6 & 15 & 20 & 15 & 6\n\\\\ 4 & 6 & 4 & 126 & 5 & 10 & 10 & 5\n\\\\ 5 & 10 & 10 & 5 & 126 & 4 & 6 & 4\n\\\\ 6 & 15 & 20 & 15 & 6 & 84 & 3 & 3\n\\\\ 7 & 21 & 35 & 35 & 21 & 7 & 36 & 2\n\\\\ 8 & 28 & 56 & 70 & 56 & 28 & 8 & 9\n\\end{matrix}\\end{bmatrix}" }, { "math_id": 1, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ \\sqrt{1/3},\\ \\pm1\\right)" }, { "math_id": 2, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ -2\\sqrt{1/3},\\ 0\\right)" }, { "math_id": 3, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ -\\sqrt{3/2},\\ 0,\\ 0\\right)" }, { "math_id": 4, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ -2\\sqrt{2/5},\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 5, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ -\\sqrt{5/3},\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 6, "text": "\\left(1/6,\\ \\sqrt{1/28},\\ -\\sqrt{12/7},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 7, "text": "\\left(1/6,\\ -\\sqrt{7/4},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 8, "text": "\\left(-4/3,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" } ]
https://en.wikipedia.org/wiki?curid=12241533
12242
Germanium
Chemical element with atomic number 32 (Ge) Germanium is a chemical element; it has symbol Ge and atomic number 32. It is lustrous, hard-brittle, grayish-white and similar in appearance to silicon. It is a metalloid (more rarely considered a metal) in the carbon group that is chemically similar to its group neighbors silicon and tin. Like silicon, germanium naturally reacts and forms complexes with oxygen in nature. Because it seldom appears in high concentration, germanium was found comparatively late in the discovery of the elements. Germanium ranks 50th in abundance of the elements in the Earth's crust. In 1869, Dmitri Mendeleev predicted its existence and some of its properties from its position on his periodic table, and called the element ekasilicon. On February 6, 1886, Clemens Winkler at Freiberg University found the new element, along with silver and sulfur, in the mineral argyrodite. Winkler named the element after his country of birth, Germany. Germanium is mined primarily from sphalerite (the primary ore of zinc), though germanium is also recovered commercially from silver, lead, and copper ores. Elemental germanium is used as a semiconductor in transistors and various other electronic devices. Historically, the first decade of semiconductor electronics was based entirely on germanium. Presently, the major end uses are fibre-optic systems, infrared optics, solar cell applications, and light-emitting diodes (LEDs). Germanium compounds are also used for polymerization catalysts and have most recently found use in the production of nanowires. This element forms a large number of organogermanium compounds, such as tetraethylgermanium, useful in organometallic chemistry. Germanium is considered a technology-critical element. Germanium is not thought to be an essential element for any living organism. Similar to silicon and aluminium, naturally-occurring germanium compounds tend to be insoluble in water and thus have little oral toxicity. However, synthetic soluble germanium salts are nephrotoxic, and synthetic chemically reactive germanium compounds with halogens and hydrogen are irritants and toxins. History. In his report on "The Periodic Law of the Chemical Elements" in 1869, the Russian chemist Dmitri Mendeleev predicted the existence of several unknown chemical elements, including one that would fill a gap in the carbon family, located between silicon and tin. Because of its position in his periodic table, Mendeleev called it "ekasilicon (Es)", and he estimated its atomic weight to be 70 (later 72). In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named "argyrodite" because of its high silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element. Winkler was able to isolate the new element in 1886 and found it similar to antimony. He initially considered the new element to be eka-antimony, but was soon convinced that it was instead eka-silicon. Before Winkler published his results on the new element, he decided that he would name his element "neptunium", since the recent discovery of planet Neptune in 1846 had similarly been preceded by mathematical predictions of its existence. However, the name "neptunium" had already been given to another proposed chemical element (though not the element that today bears the name neptunium, which was discovered in 1940). 
So instead, Winkler named the new element "germanium" (from the Latin word, "Germania", for Germany) in honor of his homeland. Argyrodite proved empirically to be Ag8GeS6. Because this new element showed some similarities with the elements arsenic and antimony, its proper place in the periodic table was under consideration, but its similarities with Dmitri Mendeleev's predicted element "ekasilicon" confirmed that place on the periodic table. With further material from 500 kg of ore from the mines in Saxony, Winkler confirmed the chemical properties of the new element in 1887. He also determined an atomic weight of 72.32 by analyzing pure germanium tetrachloride (GeCl4), while Lecoq de Boisbaudran deduced 72.3 by a comparison of the lines in the spark spectrum of the element. Winkler was able to prepare several new compounds of germanium, including fluorides, chlorides, sulfides, dioxide, and tetraethylgermane (Ge(C2H5)4), the first organogermane. The physical data from those compounds—which corresponded well with Mendeleev's predictions—made the discovery an important confirmation of Mendeleev's idea of element periodicity. Here is a comparison between the prediction and Winkler's data: Until the late 1930s, germanium was thought to be a poorly conducting metal. Germanium did not become economically significant until after 1945 when its properties as an electronic semiconductor were recognized. During World War II, small amounts of germanium were used in some special electronic devices, mostly diodes. The first major use was the point-contact Schottky diodes for radar pulse detection during the War. The first silicon–germanium alloys were obtained in 1955. Before 1945, only a few hundred kilograms of germanium were produced in smelters each year, but by the end of the 1950s, the annual worldwide production had reached . The development of the germanium transistor in 1948 opened the door to countless applications of solid state electronics. From 1950 through the early 1970s, this area provided an increasing market for germanium, but then high-purity silicon began replacing germanium in transistors, diodes, and rectifiers. For example, the company that became Fairchild Semiconductor was founded in 1957 with the express purpose of producing silicon transistors. Silicon has superior electrical properties, but it requires much greater purity that could not be commercially achieved in the early years of semiconductor electronics. Meanwhile, the demand for germanium for fiber optic communication networks, infrared night vision systems, and polymerization catalysts increased dramatically. These end uses represented 85% of worldwide germanium consumption in 2000. The US government even designated germanium as a strategic and critical material, calling for a 146 ton (132 tonne) supply in the national defense stockpile in 1987. Germanium differs from silicon in that the supply is limited by the availability of exploitable sources, while the supply of silicon is limited only by production capacity since silicon comes from ordinary sand and quartz. While silicon could be bought in 1998 for less than $10 per kg, the price of germanium was almost $800 per kg. Characteristics. Under standard conditions, germanium is a brittle, silvery-white, metalloid element. This form constitutes an allotrope known as "α-germanium", which has a metallic luster and a diamond cubic crystal structure, the same structure as silicon and diamond. In this form, germanium has a threshold displacement energy of formula_0. 
At pressures above 120 kbar, germanium becomes the allotrope "β-germanium" with the same structure as β-tin. Like silicon, gallium, bismuth, antimony, and water, germanium is one of the few substances that expands as it solidifies (i.e. freezes) from the molten state. Germanium is a semiconductor having an indirect bandgap, as is crystalline silicon. Zone refining techniques have led to the production of crystalline germanium for semiconductors that has an impurity of only one part in 10^10, making it one of the purest materials ever obtained. The first semi-metallic material discovered (in 2005) to become a superconductor in the presence of an extremely strong electromagnetic field was an alloy of germanium, uranium, and rhodium. Pure germanium is known to spontaneously extrude very long screw dislocations, referred to as "germanium whiskers". The growth of these whiskers is one of the primary reasons for the failure of older diodes and transistors made from germanium, as, depending on what they eventually touch, they may lead to an electrical short. Chemistry. Elemental germanium starts to oxidize slowly in air at around 250 °C, forming GeO2. Germanium is insoluble in dilute acids and alkalis but dissolves slowly in hot concentrated sulfuric and nitric acids and reacts violently with molten alkalis to produce germanates ([GeO3]^2−). Germanium occurs mostly in the oxidation state +4 although many +2 compounds are known. Other oxidation states are rare: +3 is found in compounds such as Ge2Cl6, and +3 and +1 are found on the surface of oxides, or negative oxidation states in germanides, such as −4 in Mg2Ge. Germanium cluster anions (Zintl ions) such as Ge4^2−, Ge9^4−, Ge9^2−, [(Ge9)2]^6− have been prepared by the extraction from alloys containing alkali metals and germanium in liquid ammonia in the presence of ethylenediamine or a cryptand. The oxidation states of the element in these ions are not integers—similar to the ozonides O3^−. Two oxides of germanium are known: germanium dioxide (GeO2, germania) and germanium monoxide (GeO). The dioxide, GeO2, can be obtained by roasting germanium disulfide (GeS2), and is a white powder that is only slightly soluble in water but reacts with alkalis to form germanates. The monoxide, germanous oxide, can be obtained by the high temperature reaction of GeO2 with elemental Ge. The dioxide (and the related oxides and germanates) exhibits the unusual property of having a high refractive index for visible light, but transparency to infrared light. Bismuth germanate, Bi4Ge3O12 (BGO), is used as a scintillator. Binary compounds with other chalcogens are also known, such as the disulfide (GeS2) and diselenide (GeSe2), and the monosulfide (GeS), monoselenide (GeSe), and monotelluride (GeTe). GeS2 forms as a white precipitate when hydrogen sulfide is passed through strongly acid solutions containing Ge(IV). The disulfide is appreciably soluble in water and in solutions of caustic alkalis or alkaline sulfides. Nevertheless, it is not soluble in acidic water, which allowed Winkler to discover the element. By heating the disulfide in a current of hydrogen, the monosulfide (GeS) is formed, which sublimes in thin plates of a dark color and metallic luster, and is soluble in solutions of the caustic alkalis. Upon melting with alkaline carbonates and sulfur, germanium compounds form salts known as thiogermanates. Four tetrahalides are known. 
Under normal conditions germanium tetraiodide (GeI4) is a solid, germanium tetrafluoride (GeF4) a gas and the others volatile liquids. For example, germanium tetrachloride, GeCl4, is obtained as a colorless fuming liquid boiling at 83.1 °C by heating the metal with chlorine. All the tetrahalides are readily hydrolyzed to hydrated germanium dioxide. GeCl4 is used in the production of organogermanium compounds. All four dihalides are known and in contrast to the tetrahalides are polymeric solids. Additionally Ge2Cl6 and some higher compounds of formula Ge"n"Cl2"n"+2 are known. The unusual compound Ge6Cl16 has been prepared, which contains the Ge5Cl12 unit with a neopentane structure. Germane (GeH4) is a compound similar in structure to methane. Polygermanes—compounds that are similar to alkanes—with formula Ge"n"H2"n"+2 containing up to five germanium atoms are known. The germanes are less volatile and less reactive than their corresponding silicon analogues. GeH4 reacts with alkali metals in liquid ammonia to form white crystalline MGeH3, which contains the GeH3^− anion. The germanium hydrohalides with one, two and three halogen atoms are colorless reactive liquids. The first organogermanium compound was synthesized by Winkler in 1887; the reaction of germanium tetrachloride with diethylzinc yielded tetraethylgermane (Ge(C2H5)4). Organogermanes of the type R4Ge (where R is an alkyl) such as tetramethylgermane (Ge(CH3)4) and tetraethylgermane are accessed through the cheapest available germanium precursor, germanium tetrachloride, and alkyl nucleophiles. Organic germanium hydrides such as isobutylgermane ((CH3)2CHCH2GeH3) were found to be less hazardous and may be used as a liquid substitute for toxic germane gas in semiconductor applications. Many germanium reactive intermediates are known: germyl free radicals, germylenes (similar to carbenes), and germynes (similar to carbynes). The organogermanium compound 2-carboxyethylgermasesquioxane was first reported in the 1970s, and for a while was used as a dietary supplement and thought to possibly have anti-tumor qualities. Using a ligand called Eind (1,1,3,3,5,5,7,7-octaethyl-s-hydrindacen-4-yl), germanium is able to form a double bond with oxygen (germanone). Germanium hydride and germanium tetrahydride are very flammable and even explosive when mixed with air. Isotopes. Germanium occurs in five natural isotopes: 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with a half-life of . 74Ge is the most common isotope, having a natural abundance of approximately 36%. 76Ge is the least common with a natural abundance of approximately 7%. When bombarded with alpha particles, the isotope Ge will generate stable selenium-77, releasing high energy electrons in the process. Because of this, it is used in combination with radon for nuclear batteries. At least 27 radioisotopes have also been synthesized, ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, decaying by electron capture with a half-life of about 271 days. The least stable is Ge, with a half-life of . While most of germanium's radioisotopes decay by beta decay, Ge and Ge decay by delayed proton emission. Ge through Ge isotopes also exhibit minor delayed neutron emission decay paths. Occurrence. Germanium is created by stellar nucleosynthesis, mostly by the s-process in asymptotic giant branch stars. The s-process is a slow neutron capture of lighter elements inside pulsating red giant stars. 
Germanium has been detected in some of the most distant stars and in the atmosphere of Jupiter. Germanium's abundance in the Earth's crust is approximately 1.6 ppm. Only a few minerals like argyrodite, briartite, germanite, renierite and sphalerite contain appreciable amounts of germanium. Only few of them (especially germanite) are, very rarely, found in mineable amounts. Some zinc–copper–lead ore bodies contain enough germanium to justify extraction from the final ore concentrate. An unusual natural enrichment process causes a high content of germanium in some coal seams, discovered by Victor Moritz Goldschmidt during a broad survey for germanium deposits. The highest concentration ever found was in Hartley coal ash with as much as 1.6% germanium. The coal deposits near Xilinhaote, Inner Mongolia, contain an estimated 1600 tonnes of germanium. Production. About 118 tonnes of germanium were produced in 2011 worldwide, mostly in China (80 t), Russia (5 t) and United States (3 t). Germanium is recovered as a by-product from sphalerite zinc ores where it is concentrated in amounts as great as 0.3%, especially from low-temperature sediment-hosted, massive Zn–Pb–Cu(–Ba) deposits and carbonate-hosted Zn–Pb deposits. A recent study found that at least 10,000 t of extractable germanium is contained in known zinc reserves, particularly those hosted by Mississippi-Valley type deposits, while at least 112,000 t will be found in coal reserves. In 2007 35% of the demand was met by recycled germanium. While it is produced mainly from sphalerite, it is also found in silver, lead, and copper ores. Another source of germanium is fly ash of power plants fueled from coal deposits that contain germanium. Russia and China used this as a source for germanium. Russia's deposits are located in the far east of Sakhalin Island, and northeast of Vladivostok. The deposits in China are located mainly in the lignite mines near Lincang, Yunnan; coal is also mined near Xilinhaote, Inner Mongolia. The ore concentrates are mostly sulfidic; they are converted to the oxides by heating under air in a process known as roasting: GeS2 + 3 O2 → GeO2 + 2 SO2 Some of the germanium is left in the dust produced, while the rest is converted to germanates, which are then leached (together with zinc) from the cinder by sulfuric acid. After neutralization, only the zinc stays in solution while germanium and other metals precipitate. After removing some of the zinc in the precipitate by the Waelz process, the residing Waelz oxide is leached a second time. The dioxide is obtained as precipitate and converted with chlorine gas or hydrochloric acid to germanium tetrachloride, which has a low boiling point and can be isolated by distillation: GeO2 + 4 HCl → GeCl4 + 2 H2O GeO2 + 2 Cl2 → GeCl4 + O2 Germanium tetrachloride is either hydrolyzed to the oxide (GeO2) or purified by fractional distillation and then hydrolyzed. The highly pure GeO2 is now suitable for the production of germanium glass. It is reduced to the element by reacting it with hydrogen, producing germanium suitable for infrared optics and semiconductor production: GeO2 + 2 H2 → Ge + 2 H2O The germanium for steel production and other industrial processes is normally reduced using carbon: GeO2 + C → Ge + CO2 Applications. The major end uses for germanium in 2007, worldwide, were estimated to be: 35% for fiber-optics, 30% infrared optics, 15% polymerization catalysts, and 15% electronics and solar electric applications. 
The remaining 5% went into such uses as phosphors, metallurgy, and chemotherapy. Optics. The notable properties of germania (GeO2) are its high index of refraction and its low optical dispersion. These make it especially useful for wide-angle camera lenses, microscopy, and the core part of optical fibers. It has replaced titania as the dopant for silica fiber, eliminating the subsequent heat treatment that made the fibers brittle. At the end of 2002, the fiber optics industry consumed 60% of the annual germanium use in the United States, but this is less than 10% of worldwide consumption. GeSbTe is a phase change material used for its optic properties, such as that used in rewritable DVDs. Because germanium is transparent in the infrared wavelengths, it is an important infrared optical material that can be readily cut and polished into lenses and windows. It is especially used as the front optic in thermal imaging cameras working in the 8 to 14 micron range for passive thermal imaging and for hot-spot detection in military, mobile night vision, and fire fighting applications. It is used in infrared spectroscopes and other optical equipment that require extremely sensitive infrared detectors. It has a very high refractive index (4.0) and must be coated with anti-reflection agents. Particularly, a very hard special antireflection coating of diamond-like carbon (DLC), refractive index 2.0, is a good match and produces a diamond-hard surface that can withstand much environmental abuse. Electronics. Germanium can be alloyed with silicon, and silicon–germanium alloys are rapidly becoming an important semiconductor material for high-speed integrated circuits. Circuits utilizing the properties of Si-SiGe heterojunctions can be much faster than those using silicon alone. The SiGe chips, with high-speed properties, can be made with low-cost, well-established production techniques of the silicon chip industry. High efficiency solar panels are a major use of germanium. Because germanium and gallium arsenide have nearly identical lattice constant, germanium substrates can be used to make gallium-arsenide solar cells. Germanium is the substrate of the wafers for high-efficiency multijunction photovoltaic cells for space applications, such as the Mars Exploration Rovers, which use triple-junction gallium arsenide on germanium cells. High-brightness LEDs, used for automobile headlights and to backlight LCD screens, are also an important application. Germanium-on-insulator (GeOI) substrates are seen as a potential replacement for silicon on miniaturized chips. CMOS circuit based on GeOI substrates has been reported recently. Other uses in electronics include phosphors in fluorescent lamps and solid-state light-emitting diodes (LEDs). Germanium transistors are still used in some effects pedals by musicians who wish to reproduce the distinctive tonal character of the "fuzz"-tone from the early rock and roll era, most notably the Dallas Arbiter Fuzz Face. Germanium has been studied as a potential material for implantable bioelectronic sensors that are resorbed in the body without generating harmful hydrogen gas, replacing zinc oxide- and indium gallium zinc oxide-based implementations. Other uses. Germanium dioxide is also used in catalysts for polymerization in the production of polyethylene terephthalate (PET). The high brilliance of this polyester is especially favored for PET bottles marketed in Japan. In the United States, germanium is not used for polymerization catalysts. 
Due to the similarity between silica (SiO2) and germanium dioxide (GeO2), the silica stationary phase in some gas chromatography columns can be replaced by GeO2. In recent years germanium has seen increasing use in precious metal alloys. In sterling silver alloys, for instance, it reduces firescale, increases tarnish resistance, and improves precipitation hardening. A tarnish-proof silver alloy trademarked Argentium contains 1.2% germanium. Semiconductor detectors made of single crystal high-purity germanium can precisely identify radiation sources—for example in airport security. Germanium is useful for monochromators for beamlines used in single crystal neutron scattering and synchrotron X-ray diffraction. The reflectivity has advantages over silicon in neutron and high energy X-ray applications. Crystals of high purity germanium are used in detectors for gamma spectroscopy and the search for dark matter. Germanium crystals are also used in X-ray spectrometers for the determination of phosphorus, chlorine and sulfur. Germanium is emerging as an important material for spintronics and spin-based quantum computing applications. In 2010, researchers demonstrated room temperature spin transport and more recently donor electron spins in germanium has been shown to have very long coherence times. Strategic importance. Due to its use in advanced electronics and optics, Germanium is considered a technology-critical element (by e.g. the European Union), essential to fulfill the green and digital transition. As China controls 60% of global Germanium production it holds a dominant position over the world's supply chains. On 3 July 2023 China suddenly imposed restrictions on the exports of germanium (and gallium), ratcheting up trade tensions with Western allies. Invoking "national security interests," the Chinese Ministry of Commerce informed that companies that intend to sell products containing germanium would need an export licence. The products/compounds targeted are: germanium dioxide, germanium epitaxial growth substrate, germanium ingot, germanium metal, germanium tetrachloride and zinc germanium phosphide. It sees such products as "dual-use" items that may have military purposes and therefore warrant an extra layer of oversight. The new dispute opened a new chapter in the increasingly fierce technology race that has pitted the United States, and to a lesser extent Europe, against China. The US wants its allies to heavily curb, or downright prohibit, advanced electronic components bound to the Chinese market in order to prevent Beijing from securing global technology supremacy. China denied any tit-for-tat intention behind the Germanium export restrictions. Following China's export restrictions, Russian state-owned company Rostec announced an increase in germanium production to meet domestic demand. Germanium and health. Germanium is not considered essential to the health of plants or animals. Germanium in the environment has little or no health impact. This is primarily because it usually occurs only as a trace element in ores and carbonaceous materials, and the various industrial and electronic applications involve very small quantities that are not likely to be ingested. For similar reasons, end-use germanium has little impact on the environment as a biohazard. Some reactive intermediate compounds of germanium are poisonous (see precautions, below). 
Germanium supplements, made from both organic and inorganic germanium, have been marketed as an alternative medicine capable of treating leukemia and lung cancer. There is, however, no medical evidence of benefit; some evidence suggests that such supplements are actively harmful. U.S. Food and Drug Administration (FDA) research has concluded that inorganic germanium, when used as a nutritional supplement, "presents potential human health hazard". Some germanium compounds have been administered by alternative medical practitioners as non-FDA-allowed injectable solutions. Soluble inorganic forms of germanium used at first, notably the citrate-lactate salt, resulted in some cases of renal dysfunction, hepatic steatosis, and peripheral neuropathy in individuals using them over a long term. Plasma and urine germanium concentrations in these individuals, several of whom died, were several orders of magnitude greater than endogenous levels. A more recent organic form, beta-carboxyethylgermanium sesquioxide (propagermanium), has not exhibited the same spectrum of toxic effects. Certain compounds of germanium have low toxicity to mammals, but have toxic effects against certain bacteria. Precautions for chemically reactive germanium compounds. While use of germanium itself does not require precautions, some of germanium's artificially produced compounds are quite reactive and present an immediate hazard to human health on exposure. For example, Germanium tetrachloride and germane (GeH4) are a liquid and gas, respectively, that can be very irritating to the eyes, skin, lungs, and throat. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "19.7^{+0.6}_{-0.5}~\\text{eV}" } ]
https://en.wikipedia.org/wiki?curid=12242
12242175
9-simplex
In geometry, a 9-simplex is a self-dual regular 9-polytope. It has 10 vertices, 45 edges, 120 triangle faces, 210 tetrahedral cells, 252 5-cell 4-faces, 210 5-simplex 5-faces, 120 6-simplex 6-faces, 45 7-simplex 7-faces, and 10 8-simplex 8-faces. Its dihedral angle is cos−1(1/9), or approximately 83.62°. It can also be called a decayotton, or deca-9-tope, as a 10-facetted polytope in 9 dimensions. The name "decayotton" is derived from "deca" for ten facets in Greek and yotta (a variation of "oct" for eight), having 8-dimensional facets, and "-on". Coordinates. The Cartesian coordinates of the vertices of an origin-centered regular decayotton having edge length 2 are: formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8 More simply, the vertices of the "9-simplex" can be positioned in 10-space as permutations of (0,0,0,0,0,0,0,0,0,1). These are the vertices of one facet of the 10-orthoplex.
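The element counts quoted above follow from the fact that every choice of "k"+1 of the 10 vertices spans a "k"-face. The short Python sketch below (illustrative only) reproduces those counts as binomial coefficients and evaluates the dihedral angle cos−1(1/9).

```python
from math import comb, acos, degrees

# Every choice of k+1 of the 10 vertices spans a k-dimensional face.
counts = [comb(10, k + 1) for k in range(9)]
print(counts)                     # [10, 45, 120, 210, 252, 210, 120, 45, 10]

# Dihedral angle of the regular n-simplex is arccos(1/n); here n = 9.
print(degrees(acos(1.0 / 9.0)))   # about 83.62 degrees
```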
[ { "math_id": 0, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ \\sqrt{1/3},\\ \\pm1\\right)" }, { "math_id": 1, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ -2\\sqrt{1/3},\\ 0\\right)" }, { "math_id": 2, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ -\\sqrt{3/2},\\ 0,\\ 0\\right)" }, { "math_id": 3, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ -2\\sqrt{2/5},\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 4, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ -\\sqrt{5/3},\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 5, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ -\\sqrt{12/7},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 6, "text": "\\left(\\sqrt{1/45},\\ 1/6,\\ -\\sqrt{7/4},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 7, "text": "\\left(\\sqrt{1/45},\\ -4/3,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 8, "text": "\\left(-3\\sqrt{1/5},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" } ]
https://en.wikipedia.org/wiki?curid=12242175
1224250
Breakdown voltage
Voltage at which insulator becomes conductive The breakdown voltage of an insulator is the minimum voltage that causes a portion of an insulator to experience electrical breakdown and become electrically conductive. For diodes, the breakdown voltage is the minimum reverse voltage that makes the diode conduct appreciably in reverse. Some devices (such as TRIACs) also have a "forward breakdown voltage". Electrical breakdown. Materials are often classified as conductors or insulators based on their resistivity. A conductor is a substance which contains many mobile charged particles called charge carriers which are free to move about inside the material. An electric field is created across a piece of the material by applying a voltage difference between electrical contacts on different sides of the material. The force of the field causes the charge carriers within the material to move, creating an electric current from the positive contact to the negative contact. For example, in metals one or more of the negatively charged electrons in each atom, called conduction electrons, are free to move about the crystal lattice. An electric field causes a large current to flow, so metals have low resistivity, making them good conductors. In contrast in materials like plastics and ceramics all the electrons are tightly bound to atoms, so under normal conditions there are very few mobile charge carriers in the material. Applying a voltage causes only a very small current to flow, giving the material a very high resistivity, and these are classed as insulators. However, if a strong enough electric field is applied, all insulators become conductors. If the voltage applied across a piece of insulator is increased, at a certain electric field strength the number of charge carriers in the material suddenly increases enormously and its resistivity drops, causing a strong current to flow through it. This is called electrical breakdown. Breakdown occurs when the electric field becomes strong enough to pull electrons from the molecules of the material, ionizing them. The released electrons are accelerated by the field and strike other atoms, creating more free electrons and ions in a chain reaction, flooding the material with charged particles. This occurs at a characteristic electric field strength in each material, measured in volts per centimeter, called its dielectric strength. When a voltage is applied across a piece of insulator, the electric field at each point is equal to the gradient of the voltage. The voltage gradient may vary at different points across the object, due to its shape or local variations in composition. Electrical breakdown occurs when the field first exceeds the dielectric strength of the material in some region of the object. Once one area has broken down and become conductive, that area has almost no voltage drop and the full voltage is applied across the remaining length of the insulator, resulting in a higher gradient and electric field, causing additional areas in the insulator to break down. The breakdown quickly spreads in a conductive path through the insulator until it extends from the positive to the negative contact. The voltage at which this occurs is called the "breakdown voltage" of that object. Breakdown voltage varies with the material composition, shape of an object, and the length of material between the electrical contacts. Solids. 
Breakdown voltage is a characteristic of an insulator that defines the maximum voltage difference that can be applied across the material before the insulator conducts. In solid insulating materials, this usually creates a weakened path within the material by creating permanent molecular or physical changes by the sudden current. Within rarefied gases found in certain types of lamps, breakdown voltage is also sometimes called the "striking voltage". The breakdown voltage of a material is not a definite value because it is a form of failure and there is a statistical probability whether the material will fail at a given voltage. When a value is given it is usually the mean breakdown voltage of a large sample. Another term is "withstand voltage", where the probability of failure at a given voltage is so low it is considered, when designing insulation, that the material will not fail at this voltage. Two different breakdown voltage measurements of a material are the AC and impulse breakdown voltages. The AC voltage is the line frequency of the mains. The impulse breakdown voltage is simulating lightning strikes, and usually uses a 1.2 microsecond rise for the wave to reach 90% amplitude, then drops back down to 50% amplitude after 50 microseconds. Two technical standards governing performing these tests are ASTM D1816 and ASTM D3300 published by ASTM. Gases and vacuum. In standard conditions at atmospheric pressure, air serves as an excellent insulator, requiring the application of a significant voltage of 3.0 kV/mm before breaking down (e.g., lightning, or sparking across plates of a capacitor, or the electrodes of a spark plug). Using other gases, this breakdown potential may decrease to an extent that two uninsulated surfaces with different potentials might induce the electrical breakdown of the surrounding gas. This may damage an apparatus, as a breakdown is analogous to a short circuit. In a gas, the breakdown voltage can be determined by Paschen's law. The breakdown voltage in a partial vacuum is represented as formula_0 where formula_1 is the breakdown potential in volts DC, formula_2 and formula_3 are constants that depend on the surrounding gas, formula_4 represents the pressure of the surrounding gas, formula_5 represents the distance in centimetres between the electrodes, and formula_6 represents the Secondary Electron Emission Coefficient. A detailed derivation, and some background information, is given in the article about Paschen's law. Diodes and other semiconductors. Breakdown voltage is a parameter of a diode that defines the largest reverse voltage that can be applied without causing an exponential increase in the leakage current in the diode. Exceeding the breakdown voltage of a diode, per se, is not destructive; although, exceeding its current capacity will be. In fact, Zener diodes are essentially just heavily doped normal diodes that exploit the breakdown voltage of a diode to provide regulation of voltage levels. Rectifier diodes (semiconductor or tube/valve) may have several voltage ratings, such as the peak inverse voltage (PIV) across the diode, and the maximum RMS input voltage to the rectifier circuit (which will be much less). Many small-signal transistors need to have any breakdown currents limited to much lower values to avoid excessive heating. 
To avoid damage to the device, and to limit the effects excessive leakage current may have on the surrounding circuit, the following bipolar transistor maximum ratings are often specified: Field-effect transistors have similar maximum ratings; the most important one for junction FETs is the gate-drain voltage rating. Some devices may also have a "maximum rate of change" of voltage specified. Electrical apparatus. Power transformers, circuit breakers, switchgear and other electrical apparatus connected to overhead transmission lines are exposed to transient lightning surge voltages induced on the power circuit. Electrical apparatus will have a "basic lightning impulse level" (BIL) specified. This is the crest value of an impulse waveform with a standardized wave shape, intended to simulate the electrical stress of a lightning surge or a surge induced by circuit switching. The BIL is coordinated with the typical operating voltage of the apparatus. For high-voltage transmission lines, the impulse level is related to the clearance to ground of energized components. As an example, a transmission line rated 138 kV would be designed for a BIL of 650 kV. A higher BIL than the minimum may be specified where the exposure to lightning is severe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
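The Paschen's-law expression for formula_0 given earlier can be evaluated numerically. The Python sketch below is illustrative only: the constants A, B and γse are assumed example values of roughly the right order of magnitude for air (they differ between sources and gases), so the output should be read as a rough estimate rather than reference data.

```python
import math

# Assumed illustrative constants for air (values vary between sources):
A = 15.0          # saturation ionisation constant, 1/(cm*Torr)
B = 365.0         # excitation/ionisation energy constant, V/(cm*Torr)
gamma_se = 0.01   # secondary electron emission coefficient

def breakdown_voltage(p_torr, d_cm):
    """Paschen's law: V_b = B*p*d / (ln(A*p*d) - ln(ln(1 + 1/gamma_se)))."""
    pd = p_torr * d_cm
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma_se))
    if denom <= 0.0:
        raise ValueError("formula not applicable for this p*d (left of the Paschen minimum)")
    return B * pd / denom

# Roughly a 1 cm gap at atmospheric pressure (760 Torr):
print(breakdown_voltage(760.0, 1.0))   # on the order of a few tens of kilovolts
```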
[ { "math_id": 0, "text": "\nV_\\mathrm{b} = \\frac {B\\,p\\,d}{\\ln \\left(A\\,p\\,d\\right) - \\ln\\left[\\ln\\left(1 + \\frac {1}{\\gamma_\\mathrm{se}}\\right)\\right]}\n" }, { "math_id": 1, "text": "V_\\mathrm{b}" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": " d " }, { "math_id": 6, "text": " \\gamma_\\mathrm{se} " } ]
https://en.wikipedia.org/wiki?curid=1224250
12247661
Volume operator
Operator whose expectation value gives the volume A quantum field theory of general relativity provides operators that measure the geometry of spacetime. The volume operator formula_0 of a region formula_1 is defined as the operator whose expectation value, in a state formula_2 of quantum General Relativity, gives the result of a volume measurement of the region formula_1. That is, formula_3 is the expectation value for the volume of formula_1. Loop Quantum Gravity, for example, provides volume operators, area operators and length operators for regions, surfaces and paths respectively.
[ { "math_id": 0, "text": "V(R)" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\\psi" }, { "math_id": 3, "text": "\\lang \\psi, V(R) \\psi \\rang" } ]
https://en.wikipedia.org/wiki?curid=12247661
12247909
Siegel disc
A Siegel disc or Siegel disk is a connected component in the Fatou set where the dynamics is analytically conjugate to an irrational rotation. Description. Given a holomorphic endomorphism formula_0 on a Riemann surface formula_1, we consider the dynamical system generated by the iterates of formula_2 denoted by formula_3. We then define the orbit formula_4 of formula_5 as the set of forward iterates of formula_5. We are interested in the asymptotic behavior of the orbits in formula_1 (which will usually be formula_6, the complex plane, or formula_7, the Riemann sphere), and we call formula_1 the phase plane or "dynamical plane". One possible asymptotic behavior for a point formula_5 is to be a fixed point, or in general a "periodic point". In this last case formula_8 where formula_9 is the period and formula_10 means formula_5 is a fixed point. We can then define the "multiplier" of the orbit as formula_11, and this enables us to classify periodic orbits as "attracting" if formula_12, "superattracting" if formula_13, "repelling" if formula_14, and indifferent if formula_15. Indifferent periodic orbits can be either "rationally indifferent" or "irrationally indifferent", depending on whether formula_16 for some formula_17 or formula_18 for all formula_17, respectively. Siegel discs are one of the possible cases of connected components in the Fatou set (the complementary set of the Julia set), according to the Classification of Fatou components, and can occur around irrationally indifferent periodic points. The Fatou set is, roughly, the set of points where the iterates behave similarly to their neighbours (they form a normal family). Siegel discs correspond to points where the dynamics of formula_2 are analytically conjugate to an irrational rotation of the complex unit disc. Name. The Siegel disc is named in honor of Carl Ludwig Siegel. Formal definition. Let formula_19 be a holomorphic endomorphism where formula_1 is a Riemann surface, and let U be a connected component of the Fatou set formula_20. We say U is a Siegel disc of f around the point formula_5 if there exists a biholomorphism formula_21 where formula_22 is the unit disc and such that formula_23 for some formula_24 and formula_25. Siegel's theorem proves the existence of Siegel discs for irrational numbers satisfying a "strong irrationality condition" (a Diophantine condition), thus solving a problem that had been open since Fatou conjectured his theorem on the Classification of Fatou components. Later Alexander D. Brjuno improved this condition on the irrationality, enlarging it to the Brjuno numbers. This is part of the result from the Classification of Fatou components. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
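The classification of periodic orbits by the modulus of the multiplier described above is easy to experiment with numerically. The Python sketch below is only illustrative: it builds the classical candidate example, a quadratic polynomial whose fixed point has multiplier e2πiθ with θ the golden mean (a Diophantine, hence Brjuno, number), classifies that fixed point as indifferent, and iterates a nearby seed to suggest the bounded, rotation-like behaviour expected on a Siegel disc. The seed and iteration count are arbitrary choices.

```python
import cmath

def classify(rho, tol=1e-9):
    """Classify a periodic orbit from its multiplier rho = (f^p)'(z0)."""
    m = abs(rho)
    if m < tol:
        return "superattracting"
    if m < 1.0 - tol:
        return "attracting"
    if m > 1.0 + tol:
        return "repelling"
    return "indifferent"

# Quadratic map f(z) = z^2 + c chosen so that the fixed point z0 = lam/2 has
# multiplier lam = exp(2*pi*i*theta), with theta the golden mean.
theta = (5 ** 0.5 - 1) / 2
lam = cmath.exp(2j * cmath.pi * theta)
z0 = lam / 2
c = z0 - z0 ** 2                 # guarantees f(z0) = z0 and f'(z0) = 2*z0 = lam
print(classify(lam))             # indifferent (irrationally, since theta is irrational)

# Iterating a nearby seed stays bounded, as expected inside a Siegel disc.
z = z0 + 0.05
for _ in range(5000):
    z = z * z + c
print(abs(z - z0))               # remains small for this particular seed
```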
[ { "math_id": 0, "text": "f:S\\to S" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "f^n=f\\circ\\stackrel{\\left(n\\right)}{\\cdots}\\circ f" }, { "math_id": 4, "text": "\\mathcal{O}^+(z_0)" }, { "math_id": 5, "text": "z_0" }, { "math_id": 6, "text": "\\mathbb{C}" }, { "math_id": 7, "text": "\\mathbb{\\hat C}=\\mathbb{C}\\cup\\{\\infty\\}" }, { "math_id": 8, "text": "f^p(z_0)=z_0" }, { "math_id": 9, "text": "p" }, { "math_id": 10, "text": "p=1" }, { "math_id": 11, "text": "\\rho=(f^p)'(z_0)" }, { "math_id": 12, "text": "|\\rho|<1" }, { "math_id": 13, "text": "|\\rho|=0" }, { "math_id": 14, "text": "|\\rho|>1" }, { "math_id": 15, "text": "\\rho=1" }, { "math_id": 16, "text": "\\rho^n=1" }, { "math_id": 17, "text": "n\\in\\mathbb{Z}" }, { "math_id": 18, "text": "\\rho^n\\neq1" }, { "math_id": 19, "text": "f\\colon S\\to S" }, { "math_id": 20, "text": "\\mathcal{F}(f)" }, { "math_id": 21, "text": "\\phi:U\\to\\mathbb{D}" }, { "math_id": 22, "text": "\\mathbb{D}" }, { "math_id": 23, "text": "\\phi(f^n(\\phi^{-1}(z)))=e^{2\\pi i\\alpha n}z" }, { "math_id": 24, "text": "\\alpha\\in\\mathbb{R}\\backslash\\mathbb{Q}" }, { "math_id": 25, "text": "\\phi(z_0)=0" } ]
https://en.wikipedia.org/wiki?curid=12247909
1225337
Principal part
Widely-used term in mathematics In mathematics, the principal part has several independent meanings but usually refers to the negative-power portion of the Laurent series of a function. Laurent series definition. The principal part at formula_0 of a function formula_1 is the portion of the Laurent series consisting of terms with negative degree. That is, formula_2 is the principal part of formula_3 at formula_4. If the Laurent series has an inner radius of convergence of formula_5, then formula_6 has an essential singularity at formula_7 if and only if the principal part is an infinite sum. If the inner radius of convergence is not formula_5, then formula_6 may be regular at formula_7 despite the Laurent series having an infinite principal part. Other definitions. Calculus. Consider the difference between the function differential and the actual increment: formula_8 formula_9 The differential "dy" is sometimes called the principal (linear) part of the function increment "Δy". Distribution theory. The term principal part is also used for certain kinds of distributions having a singular support at a single point. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
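As a concrete check of the Laurent-series definition, the coefficients of the principal part can be recovered numerically from the contour-integral formula for Laurent coefficients. The Python sketch below is illustrative only: it approximates the integral by averaging sample points on the unit circle and compares the result with the known principal part of exp(1/z), whose coefficient of z^(−k) is 1/k!.

```python
import numpy as np
from math import factorial

def laurent_coefficient(f, n, radius=1.0, samples=4096):
    """a_n = (1/(2*pi*i)) * contour integral of f(z)/z^(n+1) dz over |z| = radius,
    approximated by equally spaced samples (the trapezoidal rule for a periodic integrand)."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = radius * np.exp(1j * theta)
    return np.mean(f(z) * z ** (-n))

# Principal part of exp(1/z) at 0: sum over k >= 1 of z^(-k) / k!
f = lambda z: np.exp(1.0 / z)
for k in range(1, 6):
    approx = laurent_coefficient(f, -k)
    print(k, approx.real, 1.0 / factorial(k))   # the two columns agree closely
```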
[ { "math_id": 0, "text": "z=a" }, { "math_id": 1, "text": "f(z) = \\sum_{k=-\\infty}^\\infty a_k (z-a)^k" }, { "math_id": 2, "text": "\\sum_{k=1}^\\infty a_{-k} (z-a)^{-k}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": " a " }, { "math_id": 5, "text": "0" }, { "math_id": 6, "text": "f(z)" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "\\frac{\\Delta y}{\\Delta x}=f'(x)+\\varepsilon " }, { "math_id": 9, "text": " \\Delta y=f'(x)\\Delta x +\\varepsilon \\Delta x = dy+\\varepsilon \\Delta x" } ]
https://en.wikipedia.org/wiki?curid=1225337
12253446
X and Y bosons
Hypothetical elementary particles In particle physics, the X and Y bosons (sometimes collectively called "X bosons") are hypothetical elementary particles analogous to the W and Z bosons, but corresponding to a unified force predicted by the Georgi–Glashow model, a grand unified theory (GUT). Since the X and Y bosons mediate the grand unified force, they would have unusually high masses, requiring more energy to create than any current particle collider experiment can reach. Significantly, the X and Y bosons couple quarks (constituents of protons and others) to leptons (such as positrons), allowing violation of the conservation of baryon number and thus permitting proton decay. However, the Hyper-Kamiokande has put a lower bound on the proton's half-life of around 10³⁴ years. Since some grand unified theories such as the Georgi–Glashow model predict a half-life "less" than this, the existence of X and Y bosons, as formulated by this particular model, remains hypothetical. Details. An X boson would have the following two decay modes: X⁺ → u_L + u_R and X⁺ → d̄_L + e⁺_R, where the two decay products in each process have opposite chirality, u is an up quark, d̄ is a down antiquark, and e⁺ is a positron. A Y boson would have the following three decay modes: Y⁺ → ū_L + e⁺_R, Y⁺ → d̄_L + ν̄_R and Y⁺ → u_L + d_R, where ū is an up antiquark, d is a down quark and ν̄ is an electron antineutrino. The first product of each decay has left-handed chirality and the second has right-handed chirality, which always produces one fermion with the same handedness that would be produced by the decay of a W boson, and one fermion with contrary handedness ("wrong handed"). Similar decay products exist for the other quark-lepton generations. In these reactions, neither the lepton number (L) nor the baryon number (B) is separately conserved, but the combination B − L is. Different branching ratios between the X boson and its antiparticle (as is the case with the K-meson) would explain baryogenesis. For instance, if an X⁺/X⁻ pair is created out of energy, and they follow the two branches described above: X⁺ → u_L + u_R, X⁻ → d_L + e⁻_R; re-grouping the result (two up quarks, a down quark, and an electron) shows it to be a hydrogen atom. Origin. The X± and Y± bosons are defined respectively as the six Q = ±4/3 and the six Q = ±1/3 components of the final two terms of the adjoint 24 representation of SU(5) as it transforms under the standard model's group: formula_0. The positively-charged X and Y carry anti-color charges (equivalent to having two different normal color charges), while the negatively-charged X and Y carry normal color charges, and the signs of the Y bosons' weak isospins are always opposite the signs of their electric charges. In terms of their action on formula_1 X bosons rotate between a color index and the weak isospin-up index, while Y bosons rotate between a color index and the weak isospin-down index. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
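A small bookkeeping sketch (an added illustration; the quantum-number assignments below are standard, but the code itself is not from the cited literature) makes the conservation statement concrete: the two X-boson channels above lead to final states with the same electric charge and the same B − L but different B and L, which is exactly what lets X exchange turn quarks into antiquarks and antileptons while conserving B − L.

    from fractions import Fraction as F

    # (electric charge Q, baryon number B, lepton number L)
    props = {
        'u':     (F(2, 3), F(1, 3), 0),    # up quark
        'd_bar': (F(1, 3), F(-1, 3), 0),   # down antiquark
        'e+':    (F(1), 0, -1),            # positron
    }

    def totals(*names):
        q = sum(props[n][0] for n in names)
        b = sum(props[n][1] for n in names)
        l = sum(props[n][2] for n in names)
        return q, b, l, b - l

    # Q, B, L, B-L for the two final states of the X boson
    print(totals('u', 'u'))        # (4/3, 2/3, 0, 2/3)
    print(totals('d_bar', 'e+'))   # (4/3, -1/3, -1, 2/3)

Both channels carry charge 4/3 and B − L = 2/3, while B and L individually differ, so neither is separately conserved across X-mediated processes.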
[ { "math_id": 0, "text": "\\mathbf{24}\\rightarrow (8,1)_0\\oplus (1,3)_0\\oplus (1,1)_0\\oplus (3,2)_{-\\frac{5}{6}}\\oplus (\\bar{3},2)_{\\frac{5}{6}}" }, { "math_id": 1, "text": "\\ \\mathbb{C}^5\\ ," } ]
https://en.wikipedia.org/wiki?curid=12253446
12253897
Sliding (motion)
Relative motion of two surfaces in contact or separated by a thin film of fluid Sliding is a type of motion between two surfaces in contact. This can be contrasted to "rolling" motion. Both types of motion may occur in bearings. The relative motion or tendency toward such motion between two surfaces is resisted by friction. This means that the force of friction always acts on an object in the direction opposite to its velocity (relative to the surface it's sliding on). Friction may damage or "wear" the surfaces in contact. However, wear can be reduced by lubrication. The science and technology of friction, lubrication, and wear is known as "tribology". Sliding may occur between two objects of arbitrary shape, whereas rolling friction is the frictional force associated with the rotational movement of a somewhat disclike or other circular object along a surface. Generally, the frictional force of rolling friction is less than that associated with sliding kinetic friction. Typical values of the coefficient of rolling friction are smaller than those of sliding friction. Correspondingly, sliding friction typically produces more sound and heat as by-products. One of the most common examples of sliding friction is the movement of braking motor vehicle tires on a roadway, a process which generates considerable heat and sound, and is typically taken into account in assessing the magnitude of roadway noise pollution. Sliding friction. Sliding friction (also called kinetic friction) is a contact force that resists the sliding motion of two objects or an object and a surface. Sliding friction is almost always smaller than static friction; this is why it is easier to keep an object moving than to set it in motion from rest. formula_0 where Fk is the force of kinetic friction, μk is the coefficient of kinetic friction, and N is the normal force. Motion of sliding friction. The motion of sliding friction can be modelled (in simple systems of motion) by Newton's Second Law formula_1 formula_2 where formula_3 is the external force. Motion on an inclined plane. A common problem presented in introductory physics classes is a block subject to friction as it slides up or down an inclined plane. This is shown in the free body diagram to the right. The component of the force of gravity in the direction of the incline is given by: formula_4 The normal force (perpendicular to the surface) is given by: formula_5 Therefore, since the force of friction opposes the motion of the block, formula_6 To find the coefficient of kinetic friction on an inclined plane, one must find the angle at which the force of kinetic friction equals the component of gravity parallel to the plane; this occurs when the block is moving at a constant velocity at some angle formula_7 formula_8 formula_9 or formula_10 Here it is found that: formula_11 where formula_7 is the angle at which the block slides at a constant velocity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
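A short worked example (an added illustration; the mass and friction coefficient are arbitrary sample values): on a level surface the kinetic friction force is F_k = μ_k N, and on an incline the block slides at constant velocity exactly when the incline angle satisfies μ_k = tan θ.

    import math

    m, g, mu_k = 2.0, 9.81, 0.3      # sample mass (kg), gravity (m/s^2), friction coefficient

    # level surface: normal force N = m*g, kinetic friction F_k = mu_k * N
    N = m * g
    F_k = mu_k * N
    print(F_k)                       # about 5.9 N, opposing the sliding motion

    # inclined plane: constant velocity when mu_k*m*g*cos(theta) = m*g*sin(theta)
    theta = math.degrees(math.atan(mu_k))
    print(theta)                     # about 16.7 degrees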
[ { "math_id": 0, "text": "\nF_{k} = \\mu_{k} \\cdot N\n" }, { "math_id": 1, "text": "\\sum F = ma \n" }, { "math_id": 2, "text": "F_E - F_k = ma" }, { "math_id": 3, "text": "F_E" }, { "math_id": 4, "text": "F_g = mg\\sin{\\theta}" }, { "math_id": 5, "text": "N = mg\\cos{\\theta}\n\n" }, { "math_id": 6, "text": "F_k =\\mu_k \\cdot mg\\cos{\\theta}" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\sum F = ma = 0\n\n" }, { "math_id": 9, "text": "F_k = F_g" }, { "math_id": 10, "text": "\\mu_k mg\\cos{\\theta} = mg\\sin{\\theta}" }, { "math_id": 11, "text": "\\mu_k = \\frac{mg\\sin{\\theta}}{mg\\cos{\\theta}} = \\tan{\\theta}" } ]
https://en.wikipedia.org/wiki?curid=12253897
12254238
Kabsch algorithm
Type of algorithm The Kabsch algorithm, also known as the Kabsch-Umeyama algorithm, named after Wolfgang Kabsch and Shinji Umeyama, is a method for calculating the optimal rotation matrix that minimizes the RMSD (root mean squared deviation) between two paired sets of points. It is useful for point-set registration in computer graphics, and in cheminformatics and bioinformatics to compare molecular and protein structures (in particular, see root-mean-square deviation (bioinformatics)). The algorithm only computes the rotation matrix, but it also requires the computation of a translation vector. When both the translation and rotation are actually performed, the algorithm is sometimes called partial Procrustes superimposition (see also orthogonal Procrustes problem). Description. Let P and Q be two sets, each containing N points in formula_0. We want to find the transformation from Q to P. For simplicity, we will consider the three-dimensional case (formula_1). The sets P and Q can each be represented by "N" × 3 matrices with the first row containing the coordinates of the first point, the second row containing the coordinates of the second point, and so on, as shown in this matrix: formula_2 The algorithm works in three steps: a translation, the computation of a covariance matrix, and the computation of the optimal rotation matrix. Translation. Both sets of coordinates must be translated first, so that their centroid coincides with the origin of the coordinate system. This is done by subtracting the centroid coordinates from the point coordinates. Computation of the covariance matrix. The second step consists of calculating a matrix H. In matrix notation, formula_3 or, using summation notation, formula_4 which is a cross-covariance matrix when P and Q are seen as data matrices. Computation of the optimal rotation matrix. It is possible to calculate the optimal rotation R based on the matrix formula formula_5 but implementing a numerical solution to this formula becomes complicated when all special cases are accounted for (for example, the case of H not having an inverse). If singular value decomposition (SVD) routines are available the optimal rotation, R, can be calculated using the following simple algorithm. First, calculate the SVD of the covariance matrix H, formula_6 where U and V are orthogonal and formula_7 is diagonal. Next, record if the orthogonal matrices contain a reflection, formula_8 Finally, calculate our optimal rotation matrix R as formula_9 This R minimizes formula_10, where formula_11 and formula_12 are rows in Q and P respectively. Alternatively, optimal rotation matrix can also be directly evaluated as quaternion. This alternative description has been used in the development of a rigorous method for removing rigid-body motions from molecular dynamics trajectories of flexible molecules. In 2002 a generalization for the application to probability distributions (continuous or not) was also proposed. Generalizations. The algorithm was described for points in a three-dimensional space. The generalization to D dimensions is immediate. External links. This SVD algorithm is described in more detail at https://web.archive.org/web/20140225050055/http://cnx.org/content/m11608/latest/ A Matlab function is available at http://www.mathworks.com/matlabcentral/fileexchange/25746-kabsch-algorithm A C++ implementation (and unit test) using Eigen A Python script is available at https://github.com/charnley/rmsd. Another implementation can be found in SciPy. 
A free PyMol plugin easily implementing Kabsch is . (This previously linked to CEalign , but this uses the Combinatorial Extension (CE) algorithm.) VMD uses the Kabsch algorithm for its alignment. The FoldX modeling toolsuite incorporates the Kabsch algorithm to measure RMSD between Wild Type and Mutated protein structures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
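A minimal NumPy sketch of the three steps described above (an illustrative implementation, not one of the linked packages; the function name and the test data are invented for the example): translate both point sets so their centroids sit at the origin, form the covariance matrix H = PᵀQ, take its SVD, correct for a possible reflection, and assemble the rotation.

    import numpy as np

    def kabsch_rotation(P, Q):
        """Optimal rotation R minimizing the RMSD between R q_k and p_k; P, Q are (N, 3) arrays."""
        P = P - P.mean(axis=0)               # translation step: move centroids to the origin
        Q = Q - Q.mean(axis=0)
        H = P.T @ Q                          # covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(U @ Vt))   # -1 if the orthogonal matrices contain a reflection
        D = np.diag([1.0, 1.0, d])
        return U @ D @ Vt                    # optimal rotation matrix

    # usage: recover a known rotation from noiseless paired points
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(10, 3))
    c, s = np.cos(0.3), np.sin(0.3)
    R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    P = Q @ R_true.T                          # p_k = R_true q_k
    print(np.allclose(kabsch_rotation(P, Q), R_true))   # True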
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "n = 3" }, { "math_id": 2, "text": "\\begin{pmatrix}\nx_1 & y_1 & z_1 \\\\\nx_2 & y_2 & z_2 \\\\\n\\vdots & \\vdots & \\vdots \\\\\nx_N & y_N & z_N \\end{pmatrix}" }, { "math_id": 3, "text": " H = P^\\mathsf{T}Q \\, " }, { "math_id": 4, "text": " H_{ij} = \\sum_{k = 1}^N P_{ki} Q_{kj}, " }, { "math_id": 5, "text": " R = \\left(H^\\mathsf{T} H\\right)^\\frac12 H^{-1}, " }, { "math_id": 6, "text": " H = U \\Sigma V^\\mathsf{T} " }, { "math_id": 7, "text": "\\Sigma" }, { "math_id": 8, "text": " d = \\det\\left(U V^\\mathsf{T}\\right) = \\det(U) \\det(V)." }, { "math_id": 9, "text": " R = U \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & d \\end{pmatrix} V^\\mathsf{T}. " }, { "math_id": 10, "text": "\\sum_{k = 1}^N|R q_k - p_k|" }, { "math_id": 11, "text": "q_k" }, { "math_id": 12, "text": "p_k" } ]
https://en.wikipedia.org/wiki?curid=12254238
12257
Gamma
Third letter of the Greek alphabet Gamma (; uppercase Γ, lowercase γ; ) is the third letter of the Greek alphabet. In the system of Greek numerals it has a value of 3. In Ancient Greek, the letter gamma represented a voiced velar stop . In Modern Greek, this letter normally represents a voiced velar fricative , except before either of the two front vowels (/e/, /i/), where it represents a voiced palatal fricative ; while /g/ in foreign words is instead commonly transcribed as γκ). In the International Phonetic Alphabet and other modern Latin-alphabet based phonetic notations, it represents the voiced velar fricative. History. The Greek letter Gamma Γ is a grapheme derived from the Phoenician letter 𐤂‎ ("gīml") which was rotated from the right-to-left script of Canaanite to accommodate the Greek language's writing system of left-to-right. The Canaanite grapheme represented the /g/ phoneme in the Canaanite language, and as such is cognate with "gimel" ג of the Hebrew alphabet. Based on its name, the letter has been interpreted as an abstract representation of a camel's neck, but this has been criticized as contrived, and it is more likely that the letter is derived from an Egyptian hieroglyph representing a club or throwing stick. In Archaic Greece, the shape of gamma was closer to a classical lambda (Λ), while lambda retained the Phoenician L-shape (𐌋‎). Letters that arose from the Greek gamma include Etruscan (Old Italic) 𐌂, Roman C and G, Runic "kaunan" ᚲ, Gothic "geuua" 𐌲, the Coptic Ⲅ, and the Cyrillic letters Г and Ґ. Greek phoneme. The Ancient Greek /g/ phoneme was the voiced velar stop, continuing the reconstructed proto-Indo-European "*g", "*ǵ". The modern Greek phoneme represented by gamma is realized either as a voiced palatal fricative () before a front vowel (/e/, /i/), or as a voiced velar fricative in all other environments. Both in Ancient and in Modern Greek, before other velar consonants (κ, χ, ξ – that is, "k, kh, ks"), gamma represents a velar nasal . A double gamma γγ (e.g., άγγελος, "angel") represents the sequence (phonetically varying ) or . Phonetic transcription. Lowercase Greek gamma is used in the Americanist phonetic notation and Uralic Phonetic Alphabet to indicate voiced consonants. The gamma was also added to the Latin alphabet, as Latin gamma, in the following forms: majuscule Ɣ, minuscule ɣ, and superscript modifier letter ˠ. In the International Phonetic Alphabet the minuscule letter is used to represent a voiced velar fricative and the superscript modifier letter is used to represent velarization. It is not to be confused with the character , which looks like a lowercase Latin gamma that lies above the baseline rather than crossing, and which represents the close-mid back unrounded vowel. In certain nonstandard variations of the IPA, the uppercase form is used. It is as a full-fledged majuscule and minuscule letter in the alphabets of some of languages of Africa such as Dagbani, Dinka, Kabye, and Ewe, and Berber languages using the Berber Latin alphabet. It is sometimes also used in the romanization of Pashto. Mathematics and science. Lowercase. The lowercase letter formula_0 is used as a symbol for: The lowercase Latin gamma ɣ can also be used in contexts (such as chemical or molecule nomenclature) where gamma must not be confused with the letter y, which can occur in some computer typefaces. Uppercase. The uppercase letter formula_2 is used as a symbol for: Encoding. HTML. The HTML entities for uppercase and lowercase gamma are codice_0 and codice_1. 
Unicode. These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": " \\dot \\gamma " }, { "math_id": 2, "text": "\\Gamma" }, { "math_id": 3, "text": "\\Gamma_0" } ]
https://en.wikipedia.org/wiki?curid=12257
1225755
Limb darkening
Optical effect seen at the edges of stars from an astronomer's perspective Limb darkening is an optical effect seen in stars (including the Sun) and planets, where the central part of the disk appears brighter than the edge, or "limb". Its understanding offered early solar astronomers an opportunity to construct models with such temperature gradients. This encouraged the development of the theory of radiative transfer. Basic theory. Optical depth, a measure of the opacity of an object or part of an object, combines with effective temperature gradients inside the star to produce limb darkening. The light seen is approximately the integral of all emission along the line of sight modulated by the optical depth to the viewer (i.e. 1/e times the emission at 1 optical depth, 1/e² times the emission at 2 optical depths, etc.). Near the center of the star, optical depth is effectively infinite, causing approximately constant brightness. However, the effective optical depth decreases with increasing radius due to lower gas density and a shorter line of sight distance through the star, producing a gradual dimming, until it becomes zero at the apparent edge of the star. The effective temperature of the photosphere also decreases with increasing distance from the center of the star. The radiation emitted from a gas is approximately black-body radiation, the intensity of which is proportional to the fourth power of the temperature. Therefore, even in line of sight directions where the optical depth is effectively infinite, the emitted energy comes from cooler parts of the photosphere, resulting in less total energy reaching the viewer. The temperature in the atmosphere of a star does not always decrease with increasing height. For certain spectral lines, the optical depth is greatest in regions of increasing temperature. In this scenario, the phenomenon of "limb brightening" is seen instead. In the Sun, the existence of a temperature minimum region means that limb brightening should start to dominate at far-infrared or radio wavelengths. Above the lower atmosphere, and well above the temperature-minimum region, the Sun is surrounded by the million-kelvin solar corona. For most wavelengths this region is optically thin, i.e. has small optical depth, and must, therefore, be limb-brightened if it is spherically symmetric. Calculation of limb darkening. In the figure shown here, as long as the observer at point P is outside the stellar atmosphere, the intensity seen in the direction θ will be a function only of the angle of incidence "ψ". This is most conveniently approximated as a polynomial in cos "ψ": formula_0 where "I"("ψ") is the intensity seen at P along a line of sight forming angle "ψ" with respect to the stellar radius, and "I"(0) is the central intensity. In order that the ratio be unity for "ψ" = 0, we must have formula_1 For example, for a Lambertian radiator (no limb darkening) we will have all "a"k = 0 except "a"1 = 1. As another example, for the Sun at 550 nm, the limb darkening is well expressed by "N" = 2 and formula_2 The equation for limb darkening is sometimes more conveniently written as formula_3 which now has "N" independent coefficients rather than "N" + 1 coefficients that must sum to unity. The "a""k" constants can be related to the "A""k" constants. For "N" = 2, formula_4 For the Sun at 550 nm, we then have formula_5 This model gives an intensity at the edge of the Sun's disk of only 30% of the intensity at the center of the disk.
We can convert these formulas to functions of "θ" by using the substitution formula_6 where Ω is the angle from the observer to the limb of the star. For small "θ" we have formula_7 We see that the derivative of cos ψ is infinite at the edge. The above approximation can be used to derive an analytic expression for the ratio of the mean intensity to the central intensity. The mean intensity "I"m is the integral of the intensity over the disk of the star divided by the solid angle subtended by the disk: formula_8 where "dω" = sin "θ" "dθ" "dφ" is a solid angle element, and the integrals are over the disk: 0 ≤ "φ" ≤ 2"π" and 0 ≤ "θ" ≤ Ω. We may rewrite this as formula_9 Although this equation can be solved analytically, it is rather cumbersome. However, for an observer at infinite distance from the star, formula_10 can be replaced by formula_11, so we have formula_12 which gives formula_13 For the Sun at 550 nm, this says that the average intensity is 80.5% of the intensity at the center. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
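A brief numerical check (an added illustration, using only the solar coefficients a₀ = 0.3, a₁ = 0.93, a₂ = −0.23 quoted in the text): the polynomial law gives 100% of the central intensity at the disk centre, 30% at the limb, and a disk-averaged intensity of about 80.5% of the central value, reproducing the figures above.

    a = [0.3, 0.93, -0.23]           # solar coefficients at 550 nm, N = 2

    def limb_darkening(cos_psi):
        """I(psi)/I(0) as a polynomial in cos(psi)."""
        return sum(ak * cos_psi**k for k, ak in enumerate(a))

    print(limb_darkening(1.0))                                # 1.0  (disk centre)
    print(limb_darkening(0.0))                                # 0.30 (limb)
    print(2 * sum(ak / (k + 2) for k, ak in enumerate(a)))    # ~0.805 (mean / central intensity)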
[ { "math_id": 0, "text": "\n\\frac{I(\\psi)}{I(0)} = \\sum_{k=0}^N a_k \\cos^k \\psi,\n" }, { "math_id": 1, "text": "\n\\sum_{k=0}^N a_k = 1.\n" }, { "math_id": 2, "text": "\\begin{align}\na_0 &= 1 - a_1 - a_2 = 0.3, \\\\\na_1 &= 0.93, \\\\\na_2 &= -0.23\n\\end{align}" }, { "math_id": 3, "text": "\n\\frac{I(\\psi)}{I(0)} = 1 + \\sum_{k=1}^N A_k (1 - \\cos \\psi)^k,\n" }, { "math_id": 4, "text": "\\begin{align}\nA_1 &= - (a_1 + 2a_2), \\\\\nA_2 &= a_2.\n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\nA_1 &= -0.47,\\\\\nA_2 &= -0.23.\n\\end{align}" }, { "math_id": 6, "text": "\n\\cos \\psi =\n\\frac{\\sqrt{\\cos^2 \\theta - \\cos^2 \\Omega}}{\\sin \\Omega} = \\sqrt{1 - \\left(\\frac{\\sin \\theta}{\\sin \\Omega}\\right)^2},\n" }, { "math_id": 7, "text": "\\cos\\psi \\approx \\sqrt{1 - \\left(\\frac{\\theta}{\\sin \\Omega}\\right)^2}." }, { "math_id": 8, "text": "I_m = \\frac{\\int I(\\psi)\\,d\\omega}{\\int d\\omega}," }, { "math_id": 9, "text": "I_m = \\frac{\\int_{\\cos\\Omega}^1 I(\\psi) \\,d\\cos\\theta}{\\int_{\\cos\\Omega}^1 d\\cos\\theta} =\n\\frac{\\int_{\\cos\\Omega}^1 I(\\psi) \\,d\\cos\\theta}{1 - \\cos\\Omega}.\n" }, { "math_id": 10, "text": "d\\cos\\theta" }, { "math_id": 11, "text": "\\sin^2\\Omega \\cos\\psi \\,d\\cos\\psi" }, { "math_id": 12, "text": "I_m = \\frac{\\int_0^1 I(\\psi) \\cos\\psi \\,d\\cos\\psi}{\\int_0^1 \\cos\\psi \\,d\\cos\\psi} = 2\\int_0^1 I(\\psi) \\cos\\psi \\,d\\cos\\psi," }, { "math_id": 13, "text": "\\frac{I_m}{I(0)} = 2 \\sum_{k=0}^N \\frac{a_k}{k + 2}." } ]
https://en.wikipedia.org/wiki?curid=1225755
12259232
Order dual (functional analysis)
In mathematics, specifically in order theory and functional analysis, the order dual of an ordered vector space formula_0 is the set formula_1 where formula_2 denotes the set of all positive linear functionals on formula_0, where a linear function formula_3 on formula_0 is called positive if for all formula_4 formula_5 implies formula_6 The order dual of formula_0 is denoted by formula_7. Along with the related concept of the order bound dual, this space plays an important role in the theory of ordered topological vector spaces. Canonical ordering. An element formula_3 of the order dual of formula_0 is called positive if formula_5 implies formula_8 The positive elements of the order dual form a cone that induces an ordering on formula_7 called the canonical ordering. If formula_0 is an ordered vector space whose positive cone formula_9 is generating (that is, formula_10) then the order dual with the canonical ordering is an ordered vector space. The order dual is the span of the set of positive linear functionals on formula_0. Properties. The order dual is contained in the order bound dual. If the positive cone of an ordered vector space formula_0 is generating and if formula_11 holds for all positive formula_12 and formula_13, then the order dual is equal to the order bound dual, which is an order complete vector lattice under its canonical ordering. The order dual of a vector lattice is an order complete vector lattice. The order dual of a vector lattice formula_0 can be finite dimension (possibly even formula_14) even if formula_0 is infinite-dimensional. Order bidual. Suppose that formula_0 is an ordered vector space such that the canonical order on formula_7 makes formula_7 into an ordered vector space. Then the order bidual is defined to be the order dual of formula_7 and is denoted by formula_15. If the positive cone of an ordered vector space formula_0 is generating and if formula_11 holds for all positive formula_12 and formula_13, then formula_15 is an order complete vector lattice and the evaluation map formula_16 is order preserving. In particular, if formula_0 is a vector lattice then formula_15 is an order complete vector lattice. Minimal vector lattice. If formula_0 is a vector lattice and if formula_17 is a solid subspace of formula_7 that separates points in formula_0, then the evaluation map formula_18 defined by sending formula_19 to the map formula_20 given by formula_21, is a lattice isomorphism of formula_0 onto a vector sublattice of formula_22. However, the image of this map is in general not order complete even if formula_0 is order complete. Indeed, a regularly ordered, order complete vector lattice need not be mapped by the evaluation map onto a band in the order bidual. An order complete, regularly ordered vector lattice whose canonical image in its order bidual is order complete is called minimal and is said to be of minimal type. Examples. For any formula_23, the Banach lattice formula_24 is order complete and of minimal type; in particular, the norm topology on this space is the finest locally convex topology for which every order convergent filter converges. Properties. Let formula_0 be an order complete vector lattice of minimal type. For any formula_19 such that formula_25 the following are equivalent: Related concepts. An ordered vector space formula_0 is called regularly ordered and its order is said to be regular if it is Archimedean ordered and formula_7 distinguishes points in formula_0. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\operatorname{Pos}\\left(X^*\\right) - \\operatorname{Pos}\\left(X^*\\right)" }, { "math_id": 2, "text": "\\operatorname{Pos}\\left(X^*\\right)" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "x \\in X," }, { "math_id": 5, "text": "x \\geq 0" }, { "math_id": 6, "text": "f(x) \\geq 0." }, { "math_id": 7, "text": "X^+" }, { "math_id": 8, "text": "\\operatorname{Re} f(x) \\geq 0." }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "X = C - C" }, { "math_id": 11, "text": "[0, x] + [0, y] = [0, x + y]" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "y" }, { "math_id": 14, "text": "\\{ 0 \\}" }, { "math_id": 15, "text": "X^{++}" }, { "math_id": 16, "text": "X \\to X^{++}" }, { "math_id": 17, "text": "G" }, { "math_id": 18, "text": "X \\to G^{+}" }, { "math_id": 19, "text": "x \\in X" }, { "math_id": 20, "text": "E_x : G^{+} \\to \\Complex" }, { "math_id": 21, "text": "E_x(f) := f(x)" }, { "math_id": 22, "text": "G^+" }, { "math_id": 23, "text": "1 < p < \\infty" }, { "math_id": 24, "text": "L^p(\\mu)" }, { "math_id": 25, "text": "x > 0," }, { "math_id": 26, "text": "f(x) > 0." }, { "math_id": 27, "text": "\\tau" }, { "math_id": 28, "text": "(X, \\tau)" } ]
https://en.wikipedia.org/wiki?curid=12259232
12259899
Pressure-correction method
The pressure-correction method is a class of methods used in computational fluid dynamics for numerically solving the Navier–Stokes equations, normally for incompressible flows. Common properties. The equations solved in this approach arise from the implicit time integration of the incompressible Navier–Stokes equations. formula_0 Due to the non-linearity of the convective term in the momentum equation that is written above, this problem is solved with a nested-loop approach. While the so-called "global" or "inner" iterations represent the real time-steps and are used to update the variables formula_1 and formula_2 based on a linearized system and the boundary conditions, there is also an "outer loop" for updating the coefficients of the linearized system. The outer iterations comprise two steps: first the momentum equation is solved for an intermediate velocity field, and then the velocity correction is obtained from the second equation one has for incompressible flow, the non-divergence criterion or continuity equation formula_3. The correction is computed by first calculating a residual value formula_4, resulting from spurious "mass flux", and then using this "mass imbalance" to get a new pressure value. The pressure value one attempts to compute is such that, when plugged into the momentum equations, a divergence-free velocity field results. The mass imbalance is often also used for control of the outer loop. The name of this class of methods stems from the fact that the correction of the velocity field is computed through the pressure field. The discretization of this is typically done with either the finite element method or the finite volume method. With the latter, one might also encounter the dual mesh, i.e. the computation grid obtained from connecting the centers of the cells that the initial subdivision of the computation domain into finite elements yielded. Implicit split-update procedures. Another approach which is typically used in FEM is the following. The aim of the correction step is to ensure "conservation of mass". In continuous form, for compressible substances, conservation of mass is expressed by formula_5 where formula_6 is the square of the "speed of sound". For low Mach numbers and incompressible media, formula_7 is assumed to be infinite, which is the reason why the above continuity equation reduces to formula_8 The way to obtain a velocity field satisfying the above is to compute a pressure which, when substituted into the momentum equation, leads to the desired correction of a preliminarily computed intermediate velocity. Applying the divergence operator to the compressible momentum equation yields formula_9 formula_10 then provides the governing equation for pressure computation. The idea of pressure-correction also exists in the case of variable density and high Mach numbers, although in this case there is a real physical meaning behind the coupling of dynamic pressure and velocity as arising from the "continuity equation" formula_11 With compressibility, formula_2 is still an additional variable that can be eliminated with algebraic operations, but its variability is not a pure artifice as in the incompressible case, and the methods for its computation differ significantly from those with formula_12. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
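The following compact sketch (an added illustration, assuming NumPy is available; it is a deliberately simplified explicit projection step rather than the implicit, iteratively coupled schemes described above, and the grid size, viscosity and time step are arbitrary sample values) shows the essential predictor–corrector structure on a doubly periodic 2-D domain: a tentative velocity is advanced without the pressure gradient, a Poisson equation for the pressure is solved from the divergence of that field, and the pressure gradient then corrects the velocity back onto a divergence-free field.

    import numpy as np

    N, nu, dt = 64, 0.01, 1e-3                     # grid size, viscosity, time step (sample values)
    x = np.linspace(0, 2 * np.pi, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing='ij')
    k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers on the periodic domain
    KX, KY = np.meshgrid(k, k, indexing='ij')
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0                                 # avoid division by zero for the mean mode

    def ddx(f, kdir):                              # spectral derivative in one direction
        return np.real(np.fft.ifft2(1j * kdir * np.fft.fft2(f)))

    # initial velocity: a Taylor-Green vortex plus a small divergent perturbation
    u = np.cos(X) * np.sin(Y) + 0.05 * np.sin(X)
    v = -np.sin(X) * np.cos(Y)

    def rhs(f, u, v):                              # convection + diffusion, no pressure gradient
        lap = ddx(ddx(f, KX), KX) + ddx(ddx(f, KY), KY)
        return -(u * ddx(f, KX) + v * ddx(f, KY)) + nu * lap

    # predictor: tentative (intermediate) velocity ignoring the pressure term
    u_star = u + dt * rhs(u, u, v)
    v_star = v + dt * rhs(v, u, v)

    # pressure from the mass imbalance of the tentative field:  lap(p) = div(u*) / dt
    div = ddx(u_star, KX) + ddx(v_star, KY)
    p_hat = np.fft.fft2(div / dt) / (-K2)
    p_hat[0, 0] = 0.0
    p = np.real(np.fft.ifft2(p_hat))

    # corrector: subtracting the pressure gradient restores a divergence-free velocity
    u_new = u_star - dt * ddx(p, KX)
    v_new = v_star - dt * ddx(p, KY)
    print(np.max(np.abs(ddx(u_new, KX) + ddx(v_new, KY))))   # near machine precision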
[ { "math_id": 0, "text": "\n\\overbrace{\\rho \\Big(\n\\underbrace{\\frac{\\partial \\mathbf{v}}{\\partial t}}_{\n\\begin{smallmatrix}\n \\text{Unsteady}\\\\\n \\text{acceleration}\n\\end{smallmatrix}} + \n\\underbrace{\\left(\\mathbf{v} \\cdot \\nabla\\right) \\mathbf{v}}_{\n\\begin{smallmatrix}\n \\text{Convective} \\\\\n \\text{acceleration}\n\\end{smallmatrix}}\\Big)}^{\\text{Inertia}} =\n\\underbrace{-\\nabla p}_{\n\\begin{smallmatrix}\n \\text{Pressure} \\\\\n \\text{gradient}\n\\end{smallmatrix}} + \n\\underbrace{\\mu \\nabla^2 \\mathbf{v}}_{\\text{Viscosity}} + \n\\underbrace{\\mathbf{f}}_{\n\\begin{smallmatrix}\n \\text{Other} \\\\\n \\text{forces}\n\\end{smallmatrix}}\n" }, { "math_id": 1, "text": "\\mathbf{v}" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "\n\\nabla\\cdot\\mathbf{v} = 0\n" }, { "math_id": 4, "text": "\\dot{m}" }, { "math_id": 5, "text": "\n \\nabla\\cdot\\left(\\rho(\\mathbf{x})\\mathbf{v}(\\mathbf{x})\\right) = \\frac{\\frac{d}{dt}p(\\mathbf{x})}{c^2}\n" }, { "math_id": 6, "text": "c^2" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "\n\\begin{align}\n \\nabla\\cdot\\mathbf{v} &= 0\n\\end{align}\n" }, { "math_id": 9, "text": "\n\\begin{align}\n \\nabla\\cdot\\partial_t \\mathbf{v} &= -\\nabla\\cdot(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla\\cdot\\nabla^2\\mathbf{v} - \\nabla^2 p\\\\\n \\partial_t \\nabla\\cdot\\mathbf{v} &= -\\nabla\\cdot(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla^2\\nabla\\cdot\\mathbf{v} - \\nabla^2 p\\\\\n 0 &= -\\nabla\\cdot(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} - \\nabla^2 p\\\\\n \\nabla^2 p &= -\\nabla\\cdot(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} & (\\ast)\n\\end{align}\n" }, { "math_id": 10, "text": "(\\ast)" }, { "math_id": 11, "text": "\n\\begin{align}\n \\partial_t \\rho &= \\nabla\\cdot(\\rho \\mathbf{v})\\\\\n \\partial_t \\rho &= \\frac{1}{c^2}\\partial_t p\n\\end{align}\n" }, { "math_id": 12, "text": "\\rho = \\text{constant}." } ]
https://en.wikipedia.org/wiki?curid=12259899
12261058
Doob's martingale convergence theorems
Theorems concerning stochastic processes In mathematics – specifically, in the theory of stochastic processes – Doob's martingale convergence theorems are a collection of results on the limits of supermartingales, named after the American mathematician Joseph L. Doob. Informally, the martingale convergence theorem typically refers to the result that any supermartingale satisfying a certain boundedness condition must converge. One may think of supermartingales as the random variable analogues of non-increasing sequences; from this perspective, the martingale convergence theorem is a random variable analogue of the monotone convergence theorem, which states that any bounded monotone sequence converges. There are symmetric results for submartingales, which are analogous to non-decreasing sequences. Statement for discrete-time martingales. A common formulation of the martingale convergence theorem for discrete-time martingales is the following. Let formula_0 be a supermartingale. Suppose that the supermartingale is bounded in the sense that formula_1 where formula_2 is the negative part of formula_3, defined by formula_4. Then the sequence converges almost surely to a random variable formula_5 with finite expectation. There is a symmetric statement for submartingales with bounded expectation of the positive part. A supermartingale is a stochastic analogue of a non-increasing sequence, and the condition of the theorem is analogous to the condition in the monotone convergence theorem that the sequence be bounded from below. The condition that the martingale is bounded is essential; for example, an unbiased formula_6 random walk is a martingale but does not converge. As intuition, there are two reasons why a sequence may fail to converge. It may go off to infinity, or it may oscillate. The boundedness condition prevents the former from happening. The latter is impossible by a "gambling" argument. Specifically, consider a stock market game in which at time formula_7, the stock has price formula_3. There is no strategy for buying and selling the stock over time, always holding a non-negative amount of stock, which has positive expected profit in this game. The reason is that at each time the expected change in stock price, given all past information, is at most zero (by definition of a supermartingale). But if the prices were to oscillate without converging, then there would be a strategy with positive expected profit: loosely, buy low and sell high. This argument can be made rigorous to prove the result. Proof sketch. The proof is simplified by making the (stronger) assumption that the supermartingale is uniformly bounded; that is, there is a constant formula_8 such that formula_9 always holds. In the event that the sequence formula_10 does not converge, then formula_11 and formula_12 differ. If also the sequence is bounded, then there are some real numbers formula_13 and formula_14 such that formula_15 and the sequence crosses the interval formula_16 infinitely often. That is, the sequence is eventually less than formula_13, and at a later time exceeds formula_14, and at an even later time is less than formula_13, and so forth ad infinitum. These periods where the sequence starts below formula_13 and later exceeds formula_14 are called "upcrossings". Consider a stock market game in which at time formula_7, one may buy or sell shares of the stock at price formula_3. 
On the one hand, it can be shown from the definition of a supermartingale that for any formula_17 there is no strategy which maintains a non-negative amount of stock and has positive expected profit after playing this game for formula_18 steps. On the other hand, if the prices cross a fixed interval formula_16 very often, then the following strategy seems to do well: buy the stock when the price drops below formula_13, and sell it when the price exceeds formula_14. Indeed, if formula_19 is the number of upcrossings in the sequence by time formula_20, then the profit at time formula_20 is at least formula_21: each upcrossing provides at least formula_22 profit, and if the last action was a "buy", then in the worst case the buying price was formula_23 and the current price is formula_24. But any strategy has expected profit at most formula_25, so necessarily formula_26 By the monotone convergence theorem for expectations, this means that formula_27 so the expected number of upcrossings in the whole sequence is finite. It follows that the infinite-crossing event for interval formula_16 occurs with probability formula_25. By a union bound over all rational formula_13 and formula_14, with probability formula_28, no interval exists which is crossed infinitely often. If for all formula_29 there are finitely many upcrossings of interval formula_30, then the limit inferior and limit superior of the sequence must agree, so the sequence must converge. This shows that the martingale converges with probability formula_28. Failure of convergence in mean. Under the conditions of the martingale convergence theorem given above, it is not necessarily true that the supermartingale formula_31 converges in mean (i.e. that formula_32). As an example, let formula_33 be a formula_34 random walk with formula_35. Let formula_20 be the first time when formula_36, and let formula_37 be the stochastic process defined by formula_38. Then formula_20 is a stopping time with respect to the martingale formula_33, so formula_37 is also a martingale, referred to as a stopped martingale. In particular, formula_39 is a supermartingale which is bounded below, so by the martingale convergence theorem it converges pointwise almost surely to a random variable formula_40. But if formula_41 then formula_42, so formula_43 is almost surely zero. This means that formula_44. However, formula_45 for every formula_46, since formula_47 is a random walk which starts at formula_48 and subsequently makes mean-zero moves (alternately, note that formula_49 since formula_37 is a martingale). Therefore formula_47 cannot converge to formula_40 in mean. Moreover, if formula_50 were to converge in mean to any random variable formula_51, then some subsequence converges to formula_51 almost surely. So by the above argument formula_52 almost surely, which contradicts convergence in mean. Statements for the general case. In the following, formula_53 will be a filtered probability space where formula_54, and formula_55 will be a right-continuous supermartingale with respect to the filtration formula_56; in other words, for all formula_57, formula_58 Doob's first martingale convergence theorem. Doob's first martingale convergence theorem provides a sufficient condition for the random variables formula_59 to have a limit as formula_60 in a pointwise sense, i.e. for each formula_61 in the sample space formula_62 individually. 
For formula_63, let formula_64 and suppose that formula_65 Then the pointwise limit formula_66 exists and is finite for formula_67-almost all formula_68. Doob's second martingale convergence theorem. It is important to note that the convergence in Doob's first martingale convergence theorem is pointwise, not uniform, and is unrelated to convergence in mean square, or indeed in any "Lp" space. In order to obtain convergence in "L"1 (i.e., convergence in mean), one requires uniform integrability of the random variables formula_59. By Chebyshev's inequality, convergence in "L"1 implies convergence in probability and convergence in distribution. The following are equivalent: formula_70 formula_75 Doob's upcrossing inequality. The following result, called Doob's upcrossing inequality or, sometimes, Doob's upcrossing lemma, is used in proving Doob's martingale convergence theorems. A "gambling" argument shows that for uniformly bounded supermartingales, the number of upcrossings is bounded; the upcrossing lemma generalizes this argument to supermartingales with bounded expectation of their negative parts. Let formula_20 be a natural number. Let formula_33 be a supermartingale with respect to a filtration formula_76. Let formula_13, formula_14 be two real numbers with formula_15. Define the random variables formula_77 so that formula_78 is the maximum number of disjoint intervals formula_79 with formula_80, such that formula_81. These are called upcrossings with respect to interval formula_30. Then formula_82 where formula_83 is the negative part of formula_84, defined by formula_85. Applications. Convergence in "L""p". Let formula_86 be a continuous martingale such that formula_87 for some formula_88. Then there exists a random variable formula_89 such that formula_90 as formula_91 both formula_67-almost surely and in formula_92. The statement for discrete-time martingales is essentially identical, with the obvious difference that the continuity assumption is no longer necessary. Lévy's zero–one law. Doob's martingale convergence theorems imply that conditional expectations also have a convergence property. Let formula_93 be a probability space and let formula_5 be a random variable in formula_94. Let formula_95 be any filtration of formula_96, and define formula_97 to be the minimal "σ"-algebra generated by formula_98. Then formula_99 both formula_67-almost surely and in formula_94. This result is usually called Lévy's zero–one law or Levy's upwards theorem. The reason for the name is that if formula_100 is an event in formula_97, then the theorem says that formula_101 almost surely, i.e., the limit of the probabilities is 0 or 1. In plain language, if we are learning gradually all the information that determines the outcome of an event, then we will become gradually certain what the outcome will be. This sounds almost like a tautology, but the result is still non-trivial. For instance, it easily implies Kolmogorov's zero–one law, since it says that for any tail event "A", we must have formula_102 almost surely, hence formula_103. Similarly we have the Levy's downwards theorem : Let formula_93 be a probability space and let formula_5 be a random variable in formula_94. Let formula_98 be any decreasing sequence of sub-sigma algebras of formula_96, and define formula_97 to be the intersection. Then formula_99 both formula_67-almost surely and in formula_94. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
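A small simulation sketch of the stopped random walk used above to show failure of convergence in mean (an added illustration; the number of steps and trials are arbitrary): almost every path of the stopped walk is eventually absorbed at 0, yet the expectation stays equal to 1, so the convergence is almost sure but not in mean.

    import random

    def stopped_walk(n_steps, rng):
        """Simulate Y_{n_steps} for the +/-1 walk started at 1 and stopped on hitting 0."""
        y = 1
        for _ in range(n_steps):
            if y == 0:               # the walk is stopped once it hits 0
                break
            y += rng.choice((-1, 1))
        return y

    rng = random.Random(0)
    n_steps, trials = 10_000, 20_000
    samples = [stopped_walk(n_steps, rng) for _ in range(trials)]

    print(sum(1 for y in samples if y == 0) / trials)   # close to 1: most paths already absorbed
    print(sum(samples) / trials)                        # close to 1: E[Y_n] = E[Y_0] = 1 for every n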
[ { "math_id": 0, "text": " X_1, X_2, X_3, \\dots" }, { "math_id": 1, "text": " \\sup_{t \\in \\mathbf{N}} \\operatorname{E}[X_t^-] < \\infty " }, { "math_id": 2, "text": " X_t^- " }, { "math_id": 3, "text": " X_t " }, { "math_id": 4, "text": " X_t^- = -\\min(X_t, 0) " }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\pm 1" }, { "math_id": 7, "text": "t" }, { "math_id": 8, "text": " M " }, { "math_id": 9, "text": " |X_n| \\leq M " }, { "math_id": 10, "text": "X_1,X_2,\\dots" }, { "math_id": 11, "text": " \\liminf X_n " }, { "math_id": 12, "text": " \\limsup X_n " }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "a < b" }, { "math_id": 16, "text": "[a,b]" }, { "math_id": 17, "text": " N \\in \\mathbf{N} " }, { "math_id": 18, "text": " N " }, { "math_id": 19, "text": "u_N" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "(b-a)u_N - 2M" }, { "math_id": 22, "text": "b-a" }, { "math_id": 23, "text": "a \\leq M" }, { "math_id": 24, "text": " -M " }, { "math_id": 25, "text": "0" }, { "math_id": 26, "text": " \\operatorname{E} \\big[u_N\\big] \\leq \\frac{2M}{b-a}. " }, { "math_id": 27, "text": " \\operatorname{E} \\big[\\lim_{N \\to \\infty} u_N \\big]\\leq \\frac{2M}{b-a} ," }, { "math_id": 28, "text": "1" }, { "math_id": 29, "text": " a, b \\in \\mathbf{Q} " }, { "math_id": 30, "text": " [a,b] " }, { "math_id": 31, "text": " (X_n)_{n \\in \\mathbf{N}} " }, { "math_id": 32, "text": " \\lim_{n \\to \\infty} \\operatorname{E}[|X_n - X|] = 0 " }, { "math_id": 33, "text": "(X_n)_{n \\in \\mathbf{N}}" }, { "math_id": 34, "text": "\\pm 1 " }, { "math_id": 35, "text": "X_0 = 1" }, { "math_id": 36, "text": "X_n = 0" }, { "math_id": 37, "text": "(Y_n)_{n \\in \\mathbf{N}}" }, { "math_id": 38, "text": "Y_n := X_{\\min(N, n)}" }, { "math_id": 39, "text": " (Y_n)_{n \\in \\mathbf{N}}" }, { "math_id": 40, "text": "Y" }, { "math_id": 41, "text": "Y_n > 0 " }, { "math_id": 42, "text": "Y_{n+1} = Y_n \\pm 1" }, { "math_id": 43, "text": " Y " }, { "math_id": 44, "text": " \\operatorname{E}[Y] = 0 " }, { "math_id": 45, "text": " \\operatorname{E}[Y_n] = 1 " }, { "math_id": 46, "text": " n \\geq 1" }, { "math_id": 47, "text": " (Y_n)_{n \\in \\mathbf{N}} " }, { "math_id": 48, "text": " 1 " }, { "math_id": 49, "text": " \\operatorname{E}[Y_n] = \\operatorname{E}[Y_0] = 1 " }, { "math_id": 50, "text": " (Y_n)_{n \\in \\mathbb{N}} " }, { "math_id": 51, "text": " R " }, { "math_id": 52, "text": " R = 0 " }, { "math_id": 53, "text": " (\\Omega, F, F_*, \\mathbf{P}) " }, { "math_id": 54, "text": " F_* = (F_t)_{t \\geq 0} " }, { "math_id": 55, "text": " N: [0,\\infty) \\times \\Omega \\to \\mathbf{R} " }, { "math_id": 56, "text": " F_* " }, { "math_id": 57, "text": " 0 \\leq s \\leq t < +\\infty " }, { "math_id": 58, "text": "N_s \\geq \\operatorname{E} \\big[ N_t \\mid F_s \\big]." }, { "math_id": 59, "text": "N_t" }, { "math_id": 60, "text": "t\\to+\\infty" }, { "math_id": 61, "text": "\\omega" }, { "math_id": 62, "text": "\\Omega" }, { "math_id": 63, "text": "t\\geq 0" }, { "math_id": 64, "text": "N_t^- = \\max(-N_t,0)" }, { "math_id": 65, "text": "\\sup_{t > 0} \\operatorname{E} \\big[ N_t^{-} \\big] < + \\infty." 
}, { "math_id": 66, "text": "N(\\omega) = \\lim_{t \\to + \\infty} N_t (\\omega)" }, { "math_id": 67, "text": "\\mathbf{P}" }, { "math_id": 68, "text": "\\omega \\in \\Omega" }, { "math_id": 69, "text": "(N_t)_{t>0}" }, { "math_id": 70, "text": "\\lim_{C \\to \\infty} \\sup_{t > 0} \\int_{\\{ \\omega \\in \\Omega \\, \\mid \\, | N_t (\\omega) | > C \\}} \\left| N_t (\\omega) \\right| \\, \\mathrm{d} \\mathbf{P} (\\omega) = 0;" }, { "math_id": 71, "text": "N \\in L^1(\\Omega,\\mathbf{P};\\mathbf{R})" }, { "math_id": 72, "text": "N_t \\to N" }, { "math_id": 73, "text": "t\\to\\infty" }, { "math_id": 74, "text": "L^1(\\Omega,\\mathbf{P};\\mathbf{R})" }, { "math_id": 75, "text": "\\operatorname{E} \\left[ \\left| N_t - N \\right| \\right] = \\int_\\Omega \\left| N_t (\\omega) - N (\\omega) \\right| \\, \\mathrm{d} \\mathbf{P} (\\omega) \\to 0 \\text{ as } t \\to + \\infty." }, { "math_id": 76, "text": "(\\mathcal{F}_n)_{n \\in \\mathbf{N}}" }, { "math_id": 77, "text": "(U_n)_{n \\in \\mathbf{N}}" }, { "math_id": 78, "text": "U_n" }, { "math_id": 79, "text": " [n_{i_1}, n_{i_2}] " }, { "math_id": 80, "text": " n_{i_2} \\leq n " }, { "math_id": 81, "text": "X_{n_{i_1}} < a < b < X_{n_{i_2}} " }, { "math_id": 82, "text": "(b - a) \\operatorname{E}[U_n] \\le \\operatorname{E}[(X_n - a)^-].\\quad" }, { "math_id": 83, "text": " X^- " }, { "math_id": 84, "text": " X " }, { "math_id": 85, "text": " X^- = -\\min(X, 0) " }, { "math_id": 86, "text": "M:[0,\\infty) \\times \\Omega \\to \\mathbf{R}" }, { "math_id": 87, "text": "\\sup_{t > 0} \\operatorname{E} \\big[ \\big| M_t \\big|^p \\big] < + \\infty" }, { "math_id": 88, "text": "p>1" }, { "math_id": 89, "text": "M \\in L^p(\\Omega,\\mathbf{P};\\mathbf{R})" }, { "math_id": 90, "text": "M_t \\to M" }, { "math_id": 91, "text": "t\\to +\\infty" }, { "math_id": 92, "text": "L^p(\\Omega,\\mathbf{P};\\mathbf{R})" }, { "math_id": 93, "text": "(\\Omega,F,\\mathbf{P})" }, { "math_id": 94, "text": "L^1" }, { "math_id": 95, "text": "F_* = (F_k)_{k \\in \\mathbf{N}}" }, { "math_id": 96, "text": "F" }, { "math_id": 97, "text": "F_\\infty" }, { "math_id": 98, "text": "(F_k)_{k \\in \\mathbf{N}}" }, { "math_id": 99, "text": "\\operatorname{E} \\big[ X \\mid F_k \\big] \\to \\operatorname{E} \\big[ X \\mid F_\\infty \\big] \\text{ as } k \\to \\infty" }, { "math_id": 100, "text": "A" }, { "math_id": 101, "text": "\\mathbf{P}[ A \\mid F_k ] \\to \\mathbf{1}_A " }, { "math_id": 102, "text": "\\mathbf{P}[ A ] = \\mathbf{1}_A " }, { "math_id": 103, "text": "\\mathbf{P}[ A ] \\in \\{0,1\\} " } ]
https://en.wikipedia.org/wiki?curid=12261058
12261835
Variety (cybernetics)
Number of states of a cybernetic system In cybernetics, the term variety denotes the total number of distinguishable elements of a set, most often the set of states, inputs, or outputs of a finite-state machine or transformation, or the binary logarithm of the same quantity. Variety is used in cybernetics as an information theory that is easily related to deterministic finite automata, and less formally as a conceptual tool for thinking about organization, regulation, and stability. It is an early theory of complexity in automata, complex systems, and operations research. Overview. The term "variety" was introduced by W. Ross Ashby to extend his analysis of machines to their set of possible behaviors. Ashby says: The word variety, in relation to a set of distinguishable elements, will be used to mean either (i) the number of distinct elements, or (ii) the logarithm to the base 2 of the number, the context indicating the sense used. In the second case, variety is measured in bits. For example, a machine with states formula_0 has a variety of four states or two bits. The variety of a sequence or multiset is the number of distinct symbols in it. For example, the sequence formula_1 has a variety of four. As a measure of uncertainty, variety is directly related to information: formula_2. Since the number of distinguishable elements depends on both the observer and the set, "the observer and his powers of discrimination may have to be specified if the variety is to be well defined". Gordon Pask distinguished between the variety of the chosen reference frame and the variety of the system the observer builds up within the reference frame. The reference frame consists of a state space and the set of measurements available to the observer, which have total variety formula_3, where formula_4 is the number of states in the state space. The system the observer builds up begins with the full variety formula_3, which is reduced as the observer loses uncertainty about the state by learning to predict the system. If the observer can perceive the system as a deterministic machine in the given reference frame, observation may reduce the variety to zero as the machine becomes completely predictable. Laws of nature constrain the variety of phenomena by disallowing certain behavior. Ashby made two observations he considered laws of nature, the law of experience and the law of requisite variety. The law of experience holds that machines under input tend to lose information about their original state, and the law of requisite variety states a necessary, though not sufficient, condition for a regulator to exert anticipatory control by responding to its current input (rather than the previous output as in error-controlled regulation). Law of experience. The "law of experience" refers to the observation that the variety of states exhibited by a deterministic machine in isolation cannot increase, and a set of identical machines fed the same inputs cannot exhibit increasing variety of states, and tend to synchronize instead. Some name is necessary by which this phenomenon can be referred to. I shall call it the law of Experience. It can be described more vividly by the statement that information put in by change at a parameter tends to destroy and replace information about the system's initial state. This is a consequence of the "decay of variety": a deterministic transformation cannot increase the variety of a set. 
As a result, an observer's uncertainty about the state of the machine either remains constant or decreases with time. Ashby shows that this holds for machines with inputs as well. Under any constant input formula_5 the machines' states move toward any attractors that exist in the corresponding transformation and some may synchronize at these points. If the input changes to some other input formula_6 and the machines' behavior enacts a different transformation, more than one of these attractors may sit in the same basin of attraction under formula_6. States which arrived and possibly synchronized at those attractors under formula_5 then synchronize further under formula_6. "In other words," Ashby says, "changes at the input of a transducer tend to make the system's state (at a given moment) less dependent on the transducer's individual initial state and more dependent on the particular sequence of parameter-values used as input." While there is a law of non-increase, there is only a tendency to decrease, since the variety can hold steady without decreasing if the set undergoes a one-to-one transformation, or if the states have synchronized into a subset for which this is the case. In the formal language analysis of finite machines, an input sequence that synchronizes identical machines (no matter the variety of their initial states) is called a synchronizing word. Law of requisite variety. Ashby used variety to analyze the problem of regulation by considering a two-player game, where one player, formula_7, supplies disturbances which another player, formula_8, must regulate to ensure acceptable outcomes. formula_7 and formula_8 each have a set of available moves, which choose the outcome from a table with as many rows as formula_7 has moves and as many columns as formula_8 has moves. formula_8 is allowed full knowledge of formula_7's move, and must pick moves in response so that the outcome is acceptable. Since many games pose no difficulty for formula_8, the table is chosen so that no outcome is repeated in any column, which ensures that in the corresponding game any change in formula_7's move means a change in outcome, unless formula_8 has a move to keep the outcome from changing. With this restriction, if formula_8 never changes moves, the outcome fully depends on formula_7's choice, while if multiple moves are available to formula_8 it can reduce the variety of outcomes, if the table allows it, dividing by as much as its own variety of moves. formula_9 The "law of requisite variety" is that a deterministic strategy for formula_8 can at best limit the variety in outcomes to formula_10, and only adding variety in formula_8's moves can reduce the variety of outcomes: "only variety can destroy variety". For example, in the table above, formula_8 has a strategy (shown in bold) to reduce the variety in outcomes to formula_11, which is formula_10 in this case. Ashby considered this a fundamental observation to the theory of regulation. It is not possible for formula_8 to reduce the outcomes any further and still respond to all potential moves from formula_7, but it is possible that another table of the same shape would not allow formula_8 to do so well. Requisite variety is necessary, but not sufficient to control the outcomes. If formula_8 and formula_7 are machines, they cannot possibly choose more moves than they have states. Thus, a perfect regulator must have at least as many distinguishable states as the phenomenon it is intended to regulate (the table must be square, or wider). 
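A brute-force check of the game above (an added illustration; the move labels are simply written out as strings): enumerating all 3⁶ deterministic strategies for R confirms that the variety of outcomes cannot be pushed below 6/3 = 2, the bound given by the law of requisite variety.

    from itertools import product

    table = {1: {'alpha': 'a', 'beta': 'f', 'gamma': 'd'},
             2: {'alpha': 'b', 'beta': 'e', 'gamma': 'c'},
             3: {'alpha': 'c', 'beta': 'd', 'gamma': 'b'},
             4: {'alpha': 'd', 'beta': 'c', 'gamma': 'a'},
             5: {'alpha': 'e', 'beta': 'b', 'gamma': 'f'},
             6: {'alpha': 'f', 'beta': 'a', 'gamma': 'e'}}
    moves_R = ('alpha', 'beta', 'gamma')

    # a strategy assigns one of R's moves to each of D's six moves
    best = min(
        len({table[d][strategy[i]] for i, d in enumerate(table)})
        for strategy in product(moves_R, repeat=len(table))
    )
    print(best)   # 2 = D's variety (6) divided by R's variety (3)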
Stated in bits, the law is formula_12. In Shannon's information theory, formula_7, formula_8, and formula_13 are information sources. The condition that if formula_8 never changes moves, the uncertainty in outcomes is no less than the uncertainty in formula_7's move is expressed as formula_14, and since formula_8's strategy is a deterministic function of formula_7 set formula_15. With the rules of the game expressed this way, it can be shown that formula_16. Ashby described the law of requisite variety as related to the tenth theorem in Shannon's Mathematical Theory of Communication (1948): This law (of which Shannon's theorem 10 relating to the suppression of noise is a special case) says that if a certain quantity of disturbance is prevented by a regulator from reaching some essential variables, then that regulator must be capable of exerting at least that quantity of selection. Ashby also postulated that the law of requisite variety allows for the measurement of regulation, namely that the requirement for a well-functioning regulation is that the regulator or regulators in place are designed to account for all the possible states in which the variable or variables to be regulated may fall within, so as to ensure that the outcome is always within acceptable range. Ashby saw this law as relevant to problems in biology such as homeostasis, and a "wealth of possible applications". Later, in 1970, Conant working with Ashby produced the good regulator theorem which required autonomous systems to acquire an internal model of their environment to persist and achieve stability (e.g. Nyquist stability criterion) or dynamic equilibrium. Boisot and McKelvey updated this law to the "law of requisite complexity", that holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts. A further practical application of this law is the view that information systems (IS) alignment is a continuous coevolutionary process that reconciles top-down ‘rational designs’ and bottom-up ‘emergent processes’ of consciously and coherently interrelating all components of the Business/IS relationships in order to contribute to an organization’s performance over time. The application in project management of the law of requisite complexity is the model of positive, appropriate and negative complexity proposed by Stefan Morcov. Applications. Applications to organization and management were immediately apparent to Ashby. One implication is that individuals have a finite capacity for processing information, and beyond this limit what matters is the organization between individuals. Thus the limitation which holds over a team of "n" men may be much higher, perhaps "n" times as high, as the limitation holding over the individual man. To make use of the higher limit, however, the team must be efficiently organized; and until recently our understanding of organization has been pitifully small. Stafford Beer took up this analysis in his writings on management cybernetics. Beer defines variety as "the total number of "possible" states of a system, or of an element of a system". Beer restates the Law of Requisite Variety as "Variety absorbs variety." Stated more simply, the logarithmic measure of variety represents the minimum number of choices (by binary chop) needed to resolve uncertainty. Beer used this to allocate the management resources necessary to maintain process viability. 
The cybernetician Frank George discussed the variety of teams competing in games like football or rugby to produce goals or tries. A winning chess player might be said to have more variety than his losing opponent. Here a simple ordering is implied. The attenuation and amplification of variety were major themes in Stafford Beer's work in management (the profession of control, as he called it). The number of staff needed to answer telephones, control crowds or tend to patients are clear examples. The application of natural and analogue signals to variety analysis require an estimate of Ashby's "powers of discrimination" (see above quote). Given the butterfly effect of dynamical systems care must be taken before quantitative measures can be produced. Small quantities, which might be overlooked, can have big effects. In his "Designing Freedom" Stafford Beer discusses the patient in a hospital with a temperature denoting fever. Action must be taken immediately to isolate the patient. Here no amount of variety recording the "patients' average temperature" would detect this small signal which might have a big effect. Monitoring is required on individuals thus amplifying variety (see "Algedonic alerts" in the viable system model or VSM). Beer's work in management cybernetics and VSM is largely based on variety engineering. Further applications involving Ashby's view of state counting include the analysis of digital bandwidth requirements, redundancy and software bloat, the bit representation of data types and indexes, analogue to digital conversion, the bounds on finite state machines and data compression. See also, e.g., Excited state, State (computer science), State pattern, State (controls) and Cellular automaton. Requisite Variety can be seen in Chaitin's Algorithmic information theory where a longer, higher variety program or finite state machine produces incompressible output with more variety or information content. In general a description of the required inputs and outputs is established then encoded with the minimum variety necessary. The mapping of input bits to output bits can then produce an estimate of the minimum hardware or software components necessary to produce the desired control behaviour; for example, in a piece of computer software or computer hardware. Variety is one of nine requisites that are required by an ethical regulator.
[ { "math_id": 0, "text": "\\{a,b,c,d\\}" }, { "math_id": 1, "text": "a,b,c,c,c,d" }, { "math_id": 2, "text": "\\text{Uncertainty} = - \\text{Information}" }, { "math_id": 3, "text": "\\log_2(n)" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "P_1" }, { "math_id": 6, "text": "P_2" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "\n\\begin{array} {c c | c}\n& & R \\\\\n& & \\begin{array} { c c c }\n\\alpha & \\beta & \\gamma\n\\end{array}\n\\\\ \\hline\nD & \\begin{array}{ c | c c c }\n 1 \\\\ 2 \\\\ 3 \\\\ 4 \\\\ 5 \\\\ 6\n \\end{array}\n& \\begin{array}{c c c }\n \\mathbf{a} & f & d \\\\\n \\mathbf{b} & e & c \\\\\n c & d & \\mathbf{b} \\\\\n d & c & \\mathbf{a} \\\\\n e & \\mathbf{b} & f \\\\\n f & \\mathbf{a} & e \\\\\n \\end{array}\n\\end{array}\n" }, { "math_id": 10, "text": "\\tfrac{D\\text{'s variety}}{R\\text{'s variety}}" }, { "math_id": 11, "text": "|\\{a,b\\}| = 2 = \\tfrac{6}{3}" }, { "math_id": 12, "text": "V_O \\ge V_D - V_R" }, { "math_id": 13, "text": "E" }, { "math_id": 14, "text": "H(E|R) \\ge H(D|R)" }, { "math_id": 15, "text": "H(R|D) = 0" }, { "math_id": 16, "text": "H(E) \\ge H(D) - H(R)" } ]
https://en.wikipedia.org/wiki?curid=12261835
12264442
Sea surface microlayer
Boundary layer where all exchange occurs between the atmosphere and the ocean The sea surface microlayer (SML) is the boundary interface between the atmosphere and ocean, covering about 70% of Earth's surface. With an operationally defined thickness between 1 and , the SML has physicochemical and biological properties that are measurably distinct from underlying waters. Recent studies now indicate that the SML covers the ocean to a significant extent, and evidence shows that it is an aggregate-enriched biofilm environment with distinct microbial communities. Because of its unique position at the air-sea interface, the SML is central to a range of global marine biogeochemical and climate-related processes. The sea surface microlayer is the boundary layer where all exchange occurs between the atmosphere and the ocean. The chemical, physical, and biological properties of the SML differ greatly from the sub-surface water just a few centimeters beneath. Despite the huge extent of the ocean's surface, until now relatively little attention has been paid to the sea surface microlayer (SML) as the ultimate interface where heat, momentum and mass exchange between the ocean and the atmosphere takes place. Via the SML, large-scale environmental changes in the ocean such as warming, acidification, deoxygenation, and eutrophication potentially influence cloud formation, precipitation, and the global radiation balance. Due to the deep connectivity between biological, chemical, and physical processes, studies of the SML may reveal multiple sensitivities to global and regional changes. Understanding the processes at the ocean's surface, in particular involving the SML as an important and determinant interface, could provide an essential contribution to the reduction of uncertainties regarding ocean-climate feedbacks. As of 2017, processes occurring within the SML, as well as the associated rates of material exchange through the SML, remained poorly understood and were rarely represented in marine and atmospheric numerical models. Overview. The sea surface microlayer (SML) is the boundary interface between the atmosphere and ocean, covering about 70% of the Earth's surface. The SML has physicochemical and biological properties that are measurably distinct from underlying waters. Because of its unique position at the air-sea interface, the SML is central to a range of global biogeochemical and climate-related processes. Although known for the last six decades, the SML often has remained in a distinct research niche, primarily as it was not thought to exist under typical oceanic conditions. Recent studies now indicate that the SML covers the ocean to a significant extent, highlighting its global relevance as the boundary layer linking two major components of the Earth system – the ocean and the atmosphere. In 1983, Sieburth hypothesised that the SML was a hydrated gel-like layer formed by a complex mixture of carbohydrates, proteins, and lipids. In recent years, his hypothesis has been confirmed, and scientific evidence indicates that the SML is an aggregate-enriched biofilm environment with distinct microbial communities. In 1999 Ellison et al. estimated that 200 Tg C yr−1 (200 million tonnes of carbon per year) accumulates in the SML, similar to sedimentation rates of carbon to the ocean's seabed, though the accumulated carbon in the SML probably has a very short residence time. 
Although the total volume of the microlayer is very small compared to the ocean's volume, Carlson suggested in his seminal 1993 paper that unique interfacial reactions may occur in the SML that may not occur in the underlying water or at a much slower rate there. He therefore hypothesised that the SML plays an important role in the diagenesis of carbon in the upper ocean. Biofilm-like properties and highest possible exposure to solar radiation leads to an intuitive assumption that the SML is a biochemical microreactor. Historically, the SML has been summarized as being a microhabitat composed of several layers distinguished by their ecological, chemical and physical properties with an operational total thickness of between 1 and 1000 μm. In 2005 Hunter defined the SML as a "microscopic portion of the surface ocean which is in contact with the atmosphere and which may have physical, chemical or biological properties that are measurably different from those of adjacent sub-surface waters". He avoids a definite range of thickness as it depends strongly on the feature of interest. A thickness of 60 μm has been measured based on sudden changes of the pH, and could be meaningfully used for studying the physicochemical properties of the SML. At such thickness, the SML represents a laminar layer, free of turbulence, and greatly affecting the exchange of gases between the ocean and atmosphere. As a habitat for neuston (surface-dwelling organisms ranging from bacteria to larger siphonophores), the thickness of the SML in some ways depends on the organism or ecological feature of interest. In 2005, Zaitsev described the SML and associated near-surface layer (down to 5 cm) as an incubator or nursery for eggs and larvae for a wide range of aquatic organisms. Hunter's definition includes all interlinked layers from the laminar layer to the nursery without explicit reference to defined depths. In 2017, Wurl "et al." proposed Hunter's definition be validated with a redeveloped SML paradigm that includes its global presence, biofilm-like properties and role as a nursery. The new paradigm pushes the SML into a new and wider context relevant to many ocean and climate sciences. According to Wurl "et al.", the SML can never be devoid of organics due to the abundance of surface-active substances (e.g., surfactants) in the upper ocean  and the phenomenon of surface tension at air-liquid interfaces. The SML is analogous to the thermal boundary layer, and remote sensing of the sea surface temperature shows ubiquitous anomalies between the sea surface skin and bulk temperature. Even so, the differences in both are driven by different processes. Enrichment, defined as concentration ratios of an analyte in the SML to the underlying bulk water, has been used for decades as evidence for the existence of the SML. Consequently, depletions of organics in the SML are debatable; however, the question of enrichment or depletion is likely to be a function of the thickness of the SML (which varies with sea state; including losses via sea spray, the concentrations of organics in the bulk water, and the limitations of sampling techniques to collect thin layers . Enrichment of surfactants, and changes in the sea surface temperature and salinity, serve as universal indicators for the presence of the SML. Organisms are perhaps less suitable as indicators of the SML because they can actively avoid the SML and/or the harsh conditions in the SML may reduce their populations. 
However, the thickness of the SML remains "operational" in field experiments because the thickness of the collected layer is governed by the sampling method. Advances in SML sampling technology are needed to improve our understanding of how the SML influences air-sea interactions. Marine surface habitats sit at the interface between the atmosphere and the ocean. The biofilm-like habitat at the surface of the ocean harbours surface-dwelling microorganisms, commonly referred to as neuston. The sea surface microlayer (SML) constitutes the uppermost layer of the ocean, only 1–1000 μm thick, with unique chemical and biological properties that distinguish it from the underlying water (ULW). Due to the location at the air-sea interface, the SML can influence exchange processes across this boundary layer, such as air-sea gas exchange and the formation of sea spray aerosols. Due to its exclusive position between the atmosphere and the hydrosphere and by spanning about 70% of the Earth's surface, the sea-surface microlayer (sea-SML) is regarded as a fundamental component in air–sea exchange processes and in biogeochemical cycling. Although having a minor thickness of &lt;1000 μm, the elusive SML is long known for its distinct physicochemical characteristics compared to the underlying water, e.g., by featuring the accumulation of dissolved and particulate organic matter, transparent exopolymer particles (TEP), and surface-active molecules. Therefore, the SML is a gelatinous biofilm, maintaining physical stability through surface tension forces. It also forms a vast habitat for different organisms, collectively termed as neuston  with a recent global estimate of 2 × 1023 microbial cells for the sea-SML. Life at air–water interfaces has never been considered easy, mainly because of the harsh environmental conditions that influence the SML. However, high abundances of microorganisms, especially of bacteria and picophytoplankton, accumulating in the SML compared to the underlying water were frequently reported, accompanied by a predominant heterotrophic activity. This is because primary production at the immediate air–water interface is often hindered by photoinhibition. However, some exceptions of photosynthetic organisms, e.g., Trichodesmium, Synechococcus, or Sargassum, show more tolerance towards high light intensities and, hence, can become enriched in the SML. Previous research has provided evidence that neustonic organisms can cope with wind and wave energy, solar and ultraviolet (UV) radiation, fluctuations in temperature and salinity, and a higher potential predation risk by the zooneuston. Furthermore, wind action promoting sea spray formation and bubbles rising from deeper water and bursting at the surface release SML-associated microbes into the atmosphere. In addition to being more concentrated compared to planktonic counterparts, the bacterioneuston, algae, and protists display distinctive community compositions compared to the underlying water, in both marine  and freshwater habitats. Furthermore, the bacterial community composition was often dependent on the SML sampling device being used. While being well defined with respect to bacterial community composition, little is known about viruses in the SML, i.e., the virioneuston. This review has its focus on virus–bacterium dynamics at air–water interfaces, even if viruses likely interact with other SML microbes, including archaea and the phytoneuston, as can be deduced from viral interference with their planktonic counterparts. 
Although viruses were briefly mentioned as pivotal SML components in a recent review on this unique habitat, a synopsis of the emerging knowledge and the major research gaps regarding bacteriophages at air–water interfaces is still missing in the literature. Properties. Organic compounds such as amino acids, carbohydrates, fatty acids, and phenols are highly enriched in the SML interface. Most of these come from biota in the sub-surface waters, which decay and become transported to the surface, though other sources exist also such as atmospheric deposition, coastal runoff, and anthropogenic nutrification. The relative concentration of these compounds is dependent on the nutrient sources as well as climate conditions such as wind speed and precipitation. These organic compounds on the surface create a "film," referred to as a "slick" when visible, which affects the physical and optical properties of the interface. These films occur because of the hydrophobic tendencies of many organic compounds, which causes them to protrude into the air-interface. The existence of organic surfactants on the ocean surface impedes wave formation for low wind speeds. For increasing concentrations of surfactant there is an increasing critical wind speed necessary to create ocean waves. Increased levels of organic compounds at the surface also hinders air-sea gas exchange at low wind speeds. One way in which particulates and organic compounds on the surface are transported into the atmosphere is the process called "bubble bursting". Bubbles generate the major portion of marine aerosols. They can be dispersed to heights of several meters, picking up whatever particles latch on to their surface. However, the major supplier of materials comes from the SML. Processes. Surfaces and interfaces are critical zones where major physical, chemical, and biological exchanges occur. As the ocean covers 362 million km2, about 71% of the Earth's surface, the ocean-atmosphere interface is plausibly one of the largest and most important interfaces on the planet. Every substance entering or leaving the ocean from or to the atmosphere passes through this interface, which on the water-side -and to a lesser extent on the air-side- shows distinct physical, chemical, and biological properties. On the water side the uppermost 1 to 1000 μm of this interface are referred to as the sea surface microlayer (SML). Like a skin, the SML is expected to control the rates of exchange of energy and matter between air and sea, thereby potentially exerting both short-term and long-term impacts on various Earth system processes, including biogeochemical cycling, production and uptake of radiately active gases like CO2 or DMS, thus ultimately climate regulation. As of 2017, processes occurring within the SML, as well as the associated rates of material exchange through the SML, remained poorly understood and were rarely represented in marine and atmospheric numerical models. An improved understanding of the biological, chemical, and physical processes at the ocean's upper surface could provide an essential contribution to the reduction of uncertainties regarding ocean-climate feedbacks. Due to its positioning between atmosphere and ocean, the SML is the first to be exposed to climate changes including temperature, climate relevant trace gases, wind speed, and precipitation as well as to pollution by human waste, including nutrients, toxins, nanomaterials, and plastic debris. Bacterioneuston. 
The term neuston describes the organisms in the SML and was first suggested by Naumann in 1917. As in other marine ecosystems, bacterioneuston communities have important roles in SML functioning. Bacterioneuston community composition of the SML has been analysed and compared to the underlying water in different habitats with varying results, and has primarily focused on coastal waters and shelf seas, with limited study of the open ocean . In the North Sea, a distinct bacterial community was found in the SML with Vibrio spp. and Pseudoalteromonas spp. dominating the bacterioneuston. During an artificially induced phytoplankton bloom in a fjord mesocosm experiment, the most dominant denaturing gradient gel electrophoresis (DGGE) bands of the bacterioneuston consisted of two bacterial families: Flavobacteriaceae and Alteromonadaceae. Other studies have however, found little or no differences in the bacterial community composition of the SML and the ULW. Difficulties in direct comparisons between studies can arise because of the different methods used to sample the SML, which result in varied sampling depths. Even less is known about the community control mechanisms in the SML and how the bacterial community assembles at the air-sea interface. The bacterioneuston community could be altered by differing wind conditions and radiation levels, with high wind speeds inhibiting the formation of a distinct bacterioneuston community. Wind speed and radiation levels refer to external controls, however, bacterioneuston community composition might also be influenced by internal factors such as nutrient availability and organic matter (OM) produced either in the SML or in the ULW. One of the principal OM components consistently enriched in the SML are transparent exopolymer particles (TEP), which are rich in carbohydrates and form by the aggregation of dissolved precursors excreted by phytoplankton in the euphotic zone. Higher TEP formation rates in the SML, facilitated through wind shear and dilation of the surface water, have been proposed as one explanation for the observed enrichment in TEP. Also, due to their natural positive buoyancy, when not ballasted by other particles sticking to them, TEP ascend through the water column and ultimately end up at the SML . A second possible pathway of TEP from the water column to the SML is by bubble scavenging. Next to rising bubbles, another potential transport mechanism for bacteria from the ULW to the SML could be ascending particles  or more specifically TEP. Bacteria readily attach to TEP in the water column. TEP can serve as microbial hotspots and can be used directly as a substrate for bacterial degradation, and as grazing protection for attached bacteria, e.g., by acting as an alternate food source for zooplankton. TEP have also been suggested to serve as light protection for microorganisms in environments with high irradiation. Virioneuston. Viruses in the sea surface microlayer, the so-called "virioneuston", have recently become of interest to researchers as enigmatic biological entities in the boundary surface layers with potentially important ecological impacts. 
Given this vast air–water interface sits at the intersection of major air–water exchange processes spanning more than 70% of the global surface area, it is likely to have profound implications for marine biogeochemical cycles, on the microbial loop and gas exchange, as well as the marine food web structure, the global dispersal of airborne viruses originating from the sea surface microlayer, and human health. Viruses are the most abundant biological entities in the water column of the world's oceans. In the free water column, the virioplankton typically outnumbers the bacterioplankton by one order of magnitude reaching typical bulk water concentrations of 107 viruses mL−1. Moreover, they are known as integral parts of global biogeochemical cycles to shape and drive microbial diversity  and to structure trophic networks. Like other neuston members, the virioneuston likely originates from the bulk seawater. For instance, in 1977 Baylor et al. postulated adsorption of viruses onto air bubbles as they rise to the surface, or viruses can stick to organic particles  also being transported to the SML via bubble scavenging. Within the SML, viruses interacting with the bacterioneuston will probably induce the viral shunt, a phenomenon that is well known for marine pelagic systems. The term viral shunt describes the release of organic carbon and other nutritious compounds from the virus-mediated lysis of host cells, and its addition to the local dissolved organic matter (DOM) pool. The enriched and densely packed bacterioneuston forms an excellent target for viruses compared to the bacterioplankton populating the subsurface. This is because high host-cell numbers will increase the probability of host–virus encounters. The viral shunt might effectively contribute to the SML's already high DOM content enhancing bacterial production as previously suggested for pelagic ecosystems  and in turn replenishing host cells for viral infections. By affecting the DOM pool, viruses in the SML might directly interfere with the microbial loop being initiated when DOM is microbially recycled, converted into biomass, and passed along the food web. In addition, the release of DOM from lysed host cells by viruses contributes to organic particle generation. However, the role of the virioneuston for the microbial loop has never been investigated. Measurement. Devices used to sample the concentrations of particulates and compounds of the SML include a glass fabric, metal mesh screens, and other hydrophobic surfaces. These are placed on a rotating cylinder which collects surface samples as it rotates on top of the ocean surface. The glass plate sampler is commonly used. It was first described in 1972 by Harvey and Burzell as a simple but effective method of collecting small sea surface microlayer samples.  A clean glass plate is immersed vertically into the water and then withdrawn in a controlled manner. Harvey and Burzell used a plate which was 20 cm square and 4 mm thick. They withdrew it from the sea at the rate of 20 cm per second. Typically the uppermost 20–150 μm of the surface microlayer adheres to the plate as it is withdrawn. The sample is then wiped from both sides of the plate into a sampling vial. For a plate of the size used by Harvey and Burzel, the resulting sample volumes are between about 3 and 12 cubic centimetres. 
The sampled SML thickness "h" in micrometres is given by: formula_0 where "V" is the sample volume in cm3, "A" is the total immersed plate area of both sides in cm2, and "N" is the number of times the sample was dipped. Remote sensing. Ocean surface habitats sit at the interface between the ocean and the atmosphere. The biofilm-like habitat at the surface of the ocean harbours surface-dwelling microorganisms, commonly referred to as neuston. This vast air–water interface sits at the intersection of major air–water exchange processes spanning more than 70% of the global surface area . Bacteria in the surface microlayer of the ocean, called "bacterioneuston", are of interest due to practical applications such as air-sea gas exchange of greenhouse gases, production of climate-active marine aerosols, and remote sensing of the ocean. Of specific interest is the production and degradation of surfactants (surface active materials) via microbial biochemical processes. Major sources of surfactants in the open ocean include phytoplankton, terrestrial runoff, and deposition from the atmosphere. Unlike coloured algal blooms, surfactant-associated bacteria may not be visible in ocean colour imagery. Having the ability to detect these "invisible" surfactant-associated bacteria using synthetic aperture radar has immense benefits in all-weather conditions, regardless of cloud, fog, or daylight. This is particularly important in very high winds, because these are the conditions when the most intense air-sea gas exchanges and marine aerosol production take place. Therefore, in addition to colour satellite imagery, SAR satellite imagery may provide additional insights into a global picture of biophysical processes at the boundary between the ocean and atmosphere, air-sea greenhouse gas exchanges and production of climate-active marine aerosols. Aeroplankton. A stream of airborne microorganisms, including marine viruses, bacteria and protists, circles the planet above weather systems but below commercial air lanes. Some peripatetic microorganisms are swept up from terrestrial dust storms, but most originate from marine microorganisms in sea spray. In 2018, scientists reported that hundreds of millions of these viruses and tens of millions of bacteria are deposited daily on every square meter around the planet. Compared to the sub-surface waters, the sea surface microlayer contains elevated concentration of bacteria and viruses, as well as toxic metals and organic pollutants. These materials can be transferred from the sea-surface to the atmosphere in the form of wind-generated aqueous aerosols due to their high vapor tension and a process known as volatilisation. When airborne, these microbes can be transported long distances to coastal regions. If they hit land they can have detrimental effects on animals, vegetation and human health. Marine aerosols that contain viruses can travel hundreds of kilometers from their source and remain in liquid form as long as the humidity is high enough (over 70%). These aerosols are able to remain suspended in the atmosphere for about 31 days. Evidence suggests that bacteria can remain viable after being transported inland through aerosols. Some reached as far as 200 meters at 30 meters above sea level. It was also noted that the process which transfers this material to the atmosphere causes further enrichment in both bacteria and viruses in comparison to either the SML or sub-surface waters (up to three orders of magnitude in some locations). References. 
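A quick numerical illustration of the glass-plate thickness estimate from the Measurement section above (the plate size and sample volume below are hypothetical, chosen to match the ranges quoted by Harvey and Burzell):
def sml_thickness_micrometres(volume_cm3, plate_area_cm2, dips):
    # h = 10^4 * V / (A * N): V in cm^3, A = total immersed area of both plate sides in cm^2, N dips.
    return 1e4 * volume_cm3 / (plate_area_cm2 * dips)

# A 20 cm x 20 cm plate has about 800 cm^2 of immersed area counting both sides;
# a 6 cm^3 sample from a single dip then corresponds to a sampled layer of 75 micrometres,
# within the 20-150 micrometre range cited above.
print(sml_thickness_micrometres(6.0, 800.0, 1))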
[ { "math_id": 0, "text": "\\mathrm{h} = \\frac{10^4 V}{A N}" } ]
https://en.wikipedia.org/wiki?curid=12264442
12265304
Point distribution model
The point distribution model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes. Background. The point distribution model concept has been developed by Cootes, Taylor "et al." and became a standard in computer vision for the statistical study of shape and for segmentation of medical images where shape priors really help interpretation of noisy and low-contrasted pixels/voxels. The latter point leads to active shape models (ASM) and active appearance models (AAM). Point distribution models rely on landmark points. A landmark is an annotating point posed by an anatomist onto a given locus for every shape instance across the training set population. For instance, the same landmark will designate the tip of the index finger in a training set of 2D hands outlines. Principal component analysis (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks among the training set population. Typically, it might detect that all the landmarks located along the same finger move exactly together across the training set examples showing different finger spacing for a flat-posed hands collection. Details. First, a set of training images are manually landmarked with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes. These landmarks are aligned using the generalized procrustes analysis, which minimizes the least squared error between the points. formula_0 aligned landmarks in two dimensions are given as formula_1. It's important to note that each landmark formula_2 should represent the same anatomical location. For example, landmark #3, formula_3 might represent the tip of the ring finger across all training images. Now the shape outlines are reduced to sequences of formula_0 landmarks, so that a given training shape is defined as the vector formula_4. Assuming the scattering is gaussian in this space, PCA is used to compute normalized eigenvectors and eigenvalues of the covariance matrix across all training shapes. The matrix of the top formula_5 eigenvectors is given as formula_6, and each eigenvector describes a principal mode of variation along the set. Finally, a linear combination of the eigenvectors is used to define a new shape formula_7, mathematically defined as: formula_8 where formula_9 is defined as the mean shape across all training images, and formula_10 is a vector of scaling values for each principal component. Therefore, by modifying the variable formula_10 an infinite number of shapes can be defined. To ensure that the new shapes are all within the variation seen in the training set, it is common to only allow each element of formula_10 to be within formula_113 standard deviations, where the standard deviation of a given principal component is defined as the square root of its corresponding eigenvalue. PDM's can be extended to any arbitrary number of dimensions, but are typically used in 2D image and 3D volume applications (where each landmark point is formula_12 or formula_13). Discussion. An eigenvector, interpreted in euclidean space, can be seen as a sequence of formula_0 euclidean vectors associated to corresponding landmark and designating a compound move for the whole shape. Global nonlinear variation is usually well handled provided nonlinear variation is kept to a reasonable level. Typically, a twisting nematode worm is used as an example in the teaching of kernel PCA-based methods. 
Due to the PCA properties: eigenvectors are mutually orthogonal, form a basis of the training set cloud in the shape space, and cross at the 0 in this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation. The idea behind PDMs is that eigenvectors can be linearly combined to create an infinity of new shape instances that will 'look like' the one in the training set. The coefficients are bounded alike the values of the corresponding eigenvalues, so as to ensure the generated 2n/3n-dimensional dot will remain into the hyper-ellipsoidal allowed domain—allowable shape domain (ASD).
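As a rough illustration of the construction described above, the following Python/NumPy sketch (illustrative only; the function and variable names are not from the cited literature) builds a point distribution model from pre-aligned landmark vectors and generates a new shape from a parameter vector bounded to ±3 standard deviations per mode.
import numpy as np

def build_pdm(shapes, d):
    # shapes: (N, 2k) array of N pre-aligned training shapes (flattened landmark coordinates).
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:d]            # keep the d largest modes of variation
    return mean, eigvecs[:, order], eigvals[order]

def new_shape(mean, P, eigvals, b):
    # X' = mean + P b, with each b_i limited to +/- 3 standard deviations of its mode.
    limit = 3.0 * np.sqrt(eigvals)
    return mean + P @ np.clip(b, -limit, limit)

# Toy data: 30 noisy copies of a 5-landmark shape, i.e. vectors of length 2k = 10.
rng = np.random.default_rng(0)
base = rng.normal(size=10)
training = base + 0.1 * rng.normal(size=(30, 10))
mean, P, lam = build_pdm(training, d=3)
print(new_shape(mean, P, lam, b=np.array([1.0, -0.5, 0.2])))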
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\mathbf{X} = (x_1, y_1, \\ldots, x_k, y_k)" }, { "math_id": 2, "text": "i \\in \\lbrace 1, \\ldots k \\rbrace " }, { "math_id": 3, "text": "(x_3, y_3)" }, { "math_id": 4, "text": "\\mathbf{X} \\in \\mathbb{R}^{2k}" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": "\\mathbf{P} \\in \\mathbb{R}^{2k \\times d}" }, { "math_id": 7, "text": "\\mathbf{X}'" }, { "math_id": 8, "text": "\\mathbf{X}' = \\overline{\\mathbf{X}} + \\mathbf{P} \\mathbf{b}" }, { "math_id": 9, "text": "\\overline{\\mathbf{X}}" }, { "math_id": 10, "text": "\\mathbf{b}" }, { "math_id": 11, "text": "\\pm" }, { "math_id": 12, "text": "\\mathbb{R}^2" }, { "math_id": 13, "text": "\\mathbb{R}^3" } ]
https://en.wikipedia.org/wiki?curid=12265304
12266333
Modern Arabic mathematical notation
Mathematical notation based on the Arabic script Modern Arabic mathematical notation is a mathematical notation based on the Arabic script, used especially at pre-university levels of education. Its form is mostly derived from Western notation, but has some notable features that set it apart from its Western counterpart. The most remarkable of those features is the fact that it is written from right to left following the normal direction of the Arabic script. Other differences include the replacement of the Greek and Latin alphabet letters for symbols with Arabic letters and the use of Arabic names for functions and relations. Variations. Notation differs slightly from one region to another. In tertiary education, most regions use the Western notation. The notation mainly differs in numeral system used, and in mathematical symbols used. Numeral systems. There are three numeral systems used in right to left mathematical notation. Written numerals are arranged with their lowest-value digit to the right, with higher value positions added to the left. That is identical to the arrangement used by Western texts using Hindu-Arabic numerals even though Arabic script is read from right to left. The symbols "٫" and "٬" may be used as the decimal mark and the thousands separator respectively when writing with Eastern Arabic numerals, e.g. "3.14159265358", "1,000,000,000". Negative signs are written to the left of magnitudes, e.g. "−3". In-line fractions are written with the numerator and denominator on the left and right of the fraction slash respectively, e.g. "2/7". Symbols. Sometimes, symbols used in Arabic mathematical notation differ according to the region: &lt;templatestyles src="Refbegin/styles.css" /&gt; Sometimes, mirrored Latin and Greek symbols are used in Arabic mathematical notation (especially in western Arabic regions): &lt;templatestyles src="Refbegin/styles.css" /&gt; However, in Iran, usually Latin and Greek symbols are used. Examples. Trigonometric and hyperbolic functions. Hyperbolic functions. The letter ( "zayn", from the first letter of the second word of "hyperbolic function") is added to the end of trigonometric functions to express hyperbolic functions. This is similar to the way formula_0 is added to the end of trigonometric functions in Latin-based notation. Inverse trigonometric functions. For inverse trigonometric functions, the superscript in Arabic notation is similar in usage to the superscript formula_1 in Latin-based notation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{h}" }, { "math_id": 1, "text": "-1" } ]
https://en.wikipedia.org/wiki?curid=12266333
1226666
Adams operation
In mathematics, an Adams operation, denoted ψ"k" for natural numbers "k", is a cohomology operation in topological K-theory, or any allied operation in algebraic K-theory or other types of algebraic construction, defined on a pattern introduced by Frank Adams. The basic idea is to implement some fundamental identities in symmetric function theory, at the level of vector bundles or other representing object in more abstract theories. Adams operations can be defined more generally in any λ-ring. Adams operations in K-theory. Adams operations ψ"k" on K theory (algebraic or topological) are characterized by the following properties. The fundamental idea is that for a vector bundle "V" on a topological space "X", there is an analogy between Adams operators and exterior powers, in which ψ"k"("V") is to Λ"k"("V") as the power sum Σ α"k" is to the "k"-th elementary symmetric function σ"k" of the roots α of a polynomial "P"("t"). (Cf. Newton's identities.) Here Λ"k" denotes the "k"-th exterior power. From classical algebra it is known that the power sums are certain integral polynomials "Q""k" in the σ"k". The idea is to apply the same polynomials to the Λ"k"("V"), taking the place of σ"k". This calculation can be defined in a "K"-group, in which vector bundles may be formally combined by addition, subtraction and multiplication (tensor product). The polynomials here are called Newton polynomials (not, however, the Newton polynomials of interpolation theory). Justification of the expected properties comes from the line bundle case, where "V" is a Whitney sum of line bundles. In this special case the result of any Adams operation is naturally a vector bundle, not a linear combination of ones in "K"-theory. Treating the line bundle direct factors formally as roots is something rather standard in algebraic topology (cf. the Leray–Hirsch theorem). In general a mechanism for reducing to that case comes from the splitting principle for vector bundles. Adams operations in group representation theory. The Adams operation has a simple expression in group representation theory. Let "G" be a group and ρ a representation of "G" with character χ. The representation ψ"k"(ρ) has character formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
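To make the Newton-polynomial recipe concrete, the following Python sketch (an illustration, not part of the standard treatment) treats a split bundle as a list of "roots", computes their elementary symmetric functions, which stand in for the exterior powers Λ"k"("V"), and recovers the power sums, which stand in for ψ"k"("V"), through Newton's identities, checking them against the direct sums Σ α"k".
def elementary_symmetric(roots):
    # e_0 .. e_n of the roots; these play the role of the exterior powers Lambda^k(V).
    e = [1.0] + [0.0] * len(roots)
    for r in roots:
        for j in range(len(roots), 0, -1):
            e[j] += r * e[j - 1]
    return e

def power_sums(e, rank, kmax):
    # Newton's identities: p_k = e_1 p_{k-1} - e_2 p_{k-2} + ... + (-1)^(k-1) k e_k.
    # Applied to exterior powers, p_k plays the role of the Adams operation psi^k(V).
    p = [float(rank)]
    for k in range(1, kmax + 1):
        s = sum((-1) ** (i - 1) * e[i] * p[k - i] for i in range(1, k))
        e_k = e[k] if k < len(e) else 0.0
        p.append(s + (-1) ** (k - 1) * k * e_k)
    return p

roots = [2.0, 3.0, 5.0]                     # stand-ins for the line-bundle classes of a split bundle
e = elementary_symmetric(roots)
p = power_sums(e, len(roots), kmax=4)
for k in range(1, 5):
    print(k, p[k], sum(r ** k for r in roots))   # psi^k on a sum of line bundles is the sum of alpha_i^k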
[ { "math_id": 0, "text": "\\chi_{\\psi^k(\\rho)}(g) = \\chi_\\rho(g^k) \\ . " } ]
https://en.wikipedia.org/wiki?curid=1226666
12267066
Narayana number
Triangular array of natural numbers In combinatorics, the Narayana numbers formula_1 form a triangular array of natural numbers, called the Narayana triangle, that occur in various counting problems. They are named after Canadian mathematician T. V. Narayana (1930–1987). Formula. The Narayana numbers can be expressed in terms of binomial coefficients: formula_0 Numerical values. The first eight rows of the Narayana triangle read:
1
1 1
1 3 1
1 6 6 1
1 10 20 10 1
1 15 50 50 15 1
1 21 105 175 105 21 1
1 28 196 490 490 196 28 1
Combinatorial interpretations. Dyck words. An example of a counting problem whose solution can be given in terms of the Narayana numbers formula_2 is the number of words containing n pairs of parentheses which are correctly matched (known as Dyck words) and which contain k distinct nestings. For instance, formula_3, since with four pairs of parentheses, six sequences can be created which each contain two occurrences of the sub-pattern ():
()((())) (())(()) (()(())) ((()))() ((())()) ((()()))
From this example it should be obvious that formula_4, since the only way to get a single sub-pattern () is to have all the opening parentheses in the first n positions, followed by all the closing parentheses. Also formula_5, as n distinct nestings can be achieved only by the repetitive pattern ()()...(). More generally, it can be shown that the Narayana triangle is symmetric: formula_6 The sums of the rows in this triangle equal the Catalan numbers: formula_7 Monotonic lattice paths. The Narayana numbers also count the number of lattice paths from formula_8 to formula_9, with steps only northeast and southeast, not straying below the x-axis, with k peaks. The following figures represent the Narayana numbers formula_10, illustrating the above-mentioned symmetries. The sum of formula_10 is 1 + 6 + 6 + 1 = 14, which is the 4th Catalan number, formula_11. This sum coincides with the interpretation of Catalan numbers as the number of monotonic paths along the edges of an formula_12 grid that do not pass above the diagonal. Rooted trees. The number of unlabeled ordered rooted trees with formula_13 edges and formula_14 leaves is equal to formula_2. This is analogous to the Dyck word and lattice path interpretations above. Partitions. In the study of partitions, we see that in a set containing n elements, we may partition that set in formula_16 different ways, where formula_16 is the n-th Bell number. Furthermore, to count the number of ways to partition a set into exactly k blocks we use the Stirling numbers formula_17. Both of these concepts are a bit off-topic, but a necessary foundation for understanding the use of the Narayana numbers. In both of the above notions, crossing partitions are accounted for. To reject the crossing partitions and count only the non-crossing partitions, we may use the Catalan numbers to count the non-crossing partitions of all n elements of the set, formula_18. To count the non-crossing partitions in which the set is partitioned into exactly k blocks, we use the Narayana number formula_2. Generating function. The generating function for the Narayana numbers is formula_19 Citations.
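A short Python check of the closed form and the row-sum property (a sketch added for illustration):
from math import comb

def narayana(n, k):
    # N(n, k) = (1/n) * C(n, k) * C(n, k - 1)
    return comb(n, k) * comb(n, k - 1) // n

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 9):
    row = [narayana(n, k) for k in range(1, n + 1)]
    print(n, row, sum(row) == catalan(n))   # every row sums to the corresponding Catalan number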
[ { "math_id": 0, "text": "\\operatorname{N}(n, k) = \\frac{1}{n} {n \\choose k} {n \\choose k-1}" }, { "math_id": 1, "text": "\\operatorname{N}(n, k), n \\in \\mathbb{N}^+, 1 \\le k \\le n" }, { "math_id": 2, "text": "\\operatorname{N}(n, k)" }, { "math_id": 3, "text": "\\operatorname{N}(4, 2) = 6" }, { "math_id": 4, "text": "\\operatorname{N}(n, 1) = 1" }, { "math_id": 5, "text": "\\operatorname{N}(n, n) = 1" }, { "math_id": 6, "text": "\\operatorname{N}(n, k) = \\operatorname{N}(n, n-k+1)" }, { "math_id": 7, "text": "\\operatorname{N}(n, 1) + \\operatorname{N}(n, 2) + \\operatorname{N}(n, 3) + \\cdots + \\operatorname{N}(n, n) = C_n" }, { "math_id": 8, "text": "(0, 0)" }, { "math_id": 9, "text": "(2n, 0)" }, { "math_id": 10, "text": "\\operatorname{N}(4, k)" }, { "math_id": 11, "text": "C_4" }, { "math_id": 12, "text": "n \\times n" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "\\operatorname{N}(4, 3)" }, { "math_id": 16, "text": "B_n" }, { "math_id": 17, "text": "S(n, k)" }, { "math_id": 18, "text": "C_n" }, { "math_id": 19, "text": "\n\\sum_{n=1}^\\infty \\sum_{k=1}^n \\operatorname{N}(n, k) z^n t^{k-1} = \\frac{1-z(t+1) - \\sqrt{1-2z(t+1)+z^2(t-1)^2}}{2tz} \\;.\n" } ]
https://en.wikipedia.org/wiki?curid=12267066
1226719
Cohomology operation
In mathematics, the cohomology operation concept became central to algebraic topology, particularly homotopy theory, from the 1950s onwards, in the shape of the simple definition that if "F" is a functor defining a cohomology theory, then a cohomology operation should be a natural transformation from "F" to itself. Throughout there have been two basic points: The origin of these studies was the work of Pontryagin, Postnikov, and Norman Steenrod, who first defined the Pontryagin square, Postnikov square, and Steenrod square operations for singular cohomology, in the case of mod 2 coefficients. The combinatorial aspect there arises as a formulation of the failure of a natural diagonal map, at cochain level. The general theory of the Steenrod algebra of operations has been brought into close relation with that of the symmetric group. In the Adams spectral sequence the "bicommutant" aspect is implicit in the use of Ext functors, the derived functors of Hom-functors; if there is a bicommutant aspect, taken over the Steenrod algebra acting, it is only at a "derived" level. The convergence is to groups in stable homotopy theory, about which information is hard to come by. This connection established the deep interest of the cohomology operations for homotopy theory, and has been a research topic ever since. An extraordinary cohomology theory has its own cohomology operations, and these may exhibit a richer set on constraints. Formal definition. A cohomology operation formula_0 of type formula_1 is a natural transformation of functors formula_2 defined on CW complexes. Relation to Eilenberg–MacLane spaces. Cohomology of CW complexes is representable by an Eilenberg–MacLane space, so by the Yoneda lemma a cohomology operation of type formula_3 is given by a homotopy class of maps formula_4. Using representability once again, the cohomology operation is given by an element of formula_5. Symbolically, letting formula_6 denote the set of homotopy classes of maps from formula_7 to formula_8, formula_9
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "(n,q,\\pi,G)\\," }, { "math_id": 2, "text": "\\theta:H^{n}(-,\\pi)\\to H^{q}(-,G)\\," }, { "math_id": 3, "text": "(n,q,\\pi,G)" }, { "math_id": 4, "text": "K(\\pi,n) \\to K(G,q)" }, { "math_id": 5, "text": "H^{q}(K(\\pi,n),G)" }, { "math_id": 6, "text": "[A,B]" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "B" }, { "math_id": 9, "text": "\\begin{align}\\displaystyle\\mathrm{Nat}(H^n(-,\\pi),H^q(-,G)) &= \\mathrm{Nat}([-,K(\\pi,n)],[-,K(G,q)])\\\\ &= [K(\\pi,n),K(G,q)]\\\\ &= H^q(K(\\pi,n);G).\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1226719
12267953
6-sphere coordinates
3D coordinate system used in mathematics In mathematics, 6-sphere coordinates are a coordinate system for three-dimensional space obtained by inverting the 3D Cartesian coordinates across the unit 2-sphere formula_0. They are so named because the loci where one coordinate is constant form spheres tangent to the origin from one of six sides (depending on which coordinate is held constant and whether its value is positive or negative). This coordinate system exists independently from and has no relation to the 6-sphere. The three coordinates are formula_1 Since inversion is an involution, the equations for "x", "y", and "z" in terms of "u", "v", and "w" are similar: formula_2 This coordinate system is formula_3-separable for the 3-variable Laplace equation.
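A minimal Python sketch of the transformation (valid for any point other than the origin), showing that applying the inversion twice returns the original coordinates:
def to_6sphere(x, y, z):
    # Inversion across the unit 2-sphere: (x, y, z) -> (u, v, w).
    s = x * x + y * y + z * z
    return x / s, y / s, z / s

p = (1.0, 2.0, 2.0)
print(to_6sphere(*to_6sphere(*p)))   # (1.0, 2.0, 2.0): the map is an involution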
[ { "math_id": 0, "text": "x^2+y^2+z^2=1" }, { "math_id": 1, "text": "u = \\frac{x}{x^2+y^2+z^2},\\quad v = \\frac{y}{x^2+y^2+z^2},\\quad w = \\frac{z}{x^2+y^2+z^2}." }, { "math_id": 2, "text": "x = \\frac{u}{u^2+v^2+w^2},\\quad y = \\frac{v}{u^2+v^2+w^2},\\quad z = \\frac{w}{u^2+v^2+w^2}." }, { "math_id": 3, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=12267953
12269538
RevPAR
RevPAR, or revenue per available room, is a performance metric in the hotel industry that is calculated by dividing a hotel's total guestroom revenue by the room count and the number of days in the period being measured. A few data broker companies compile RevPAR information across markets via voluntary survey and provide compiled blinded information back to the industry. The STAR report is one such widely used report, and is provided by STR. Caveats. Since RevPAR is a measurement for a particular period of time (say a day, or month or year) it is most often compared to the same time frame. It is often used in comparison to competitors within a custom defined market, trading area, or advertising region or a self-selected competitive set as defined by the hotel's owner or manager, which is referred to as RevPAR Index or RGI (Revenue Generating Index). Comparisons are usually most meaningful when made between hotels of the same type, or with similar target customers, as different hotel types may have different operational costs and customer expectations. Other caveats: formula_0 Other metrics. TRevPAR (Total Revenue Per Available Room) is another closely related performance metric in the hotel industry. It is calculated by dividing total hotel revenue (instead of guestroom revenue) by available rooms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
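A minimal illustration of the two calculations, using hypothetical figures (the function names and numbers below are examples, not industry-standard code):
def revpar(guestroom_revenue, room_count, days):
    # RevPAR = total guestroom revenue / (room count * days in the period)
    return guestroom_revenue / (room_count * days)

def trevpar(total_revenue, room_count, days):
    # TRevPAR uses total hotel revenue instead of guestroom revenue
    return total_revenue / (room_count * days)

# Hypothetical 200-room hotel over a 30-day month.
print(revpar(600_000, 200, 30))    # 100.0 per available room-night
print(trevpar(840_000, 200, 30))   # 140.0 including non-room revenue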
[ { "math_id": 0, "text": "\\operatorname{RevPAR} = \\frac{\\text{Rooms Revenue}}{\\text{Rooms Available}} \\," } ]
https://en.wikipedia.org/wiki?curid=12269538
1227105
RKKY interaction
In the physical theory of spin glass magnetization, the Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction models the coupling of nuclear magnetic moments or localized inner d- or f-shell electron spins through conduction electrons. It is named after Malvin Ruderman, Charles Kittel, Tadao Kasuya, and Kei Yosida, the physicists who first proposed and developed the model. Malvin Ruderman and Charles Kittel of the University of California, Berkeley first proposed the model to explain unusually broad nuclear spin resonance lines in natural metallic silver. The theory is an indirect exchange coupling: the hyperfine interaction couples the nuclear spin of one atom to a conduction electron also coupled to the spin of a different nucleus. The assumption of hyperfine interaction turns out to be unnecessary, and can be replaced equally well with the exchange interaction. The simplest treatment assumes a Bloch wavefunction basis and therefore only applies to crystalline systems; the resulting correlation energy, computed with perturbation theory, takes the following form: formula_0 where H represents the Hamiltonian, "Rij" is the distance between the nuclei i and j, "Ii" is the nuclear spin of atom i, "Δkmkm" is a matrix element that represents the strength of the hyperfine interaction, "m"* is the effective mass of the electrons in the crystal, and "km" is the Fermi momentum. Intuitively, we may picture this as when one magnetic atom scatters an electron wave, which then scatters off another magnetic atom many atoms away, thus coupling the two atoms' spins. Tadao Kasuya from Nagoya University later proposed that a similar indirect exchange coupling could occur with localized inner d-electron spins instead of nuclei. This theory was expanded more completely by Kei Yosida of the UC Berkeley, to give a Hamiltonian that describes (d-electron spin)–(d-electron spin), (nuclear spin)–(nuclear spin), and (d-electron spin)–(nuclear spin) interactions. J.H. Van Vleck clarified some subtleties of the theory, particularly the relationship between the first- and second-order perturbative contributions. Perhaps the most significant application of the RKKY theory has been to the theory of giant magnetoresistance (GMR). GMR was discovered when the coupling between thin layers of magnetic materials separated by a non-magnetic spacer material was found to oscillate between ferromagnetic and antiferromagnetic as a function of the distance between the layers. This ferromagnetic/antiferromagnetic oscillation is one prediction of the RKKY theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
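The oscillation between ferromagnetic and antiferromagnetic coupling comes from the bracketed range function in the expression above. A small Python sketch (dimensionless units, numerical prefactors omitted) that evaluates it for a few values of x = 2·"k""m"·"R":
import math

def rkky_range_function(x):
    # F(x) = x cos(x) - sin(x), with x = 2 k_m R; the coupling strength scales as F(x) / R^4.
    return x * math.cos(x) - math.sin(x)

# Sign changes of F with increasing distance give the alternating ferromagnetic /
# antiferromagnetic coupling exploited in giant-magnetoresistance multilayers.
for x in (1.0, 3.0, 5.0, 7.0, 9.0):
    print(x, rkky_range_function(x))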
[ { "math_id": 0, "text": "H(\\mathbf{R}_{ij}) = \\frac{\\mathbf{I}_i \\cdot \\mathbf{I}_j}{4} \\frac{\\left| \\Delta_{k_m k_m} \\right|^2 m^*}{(2 \\pi )^3 R_{ij}^4 \\hbar^2} \\left[ 2 k_m R_{ij} \\cos( 2 k_m R_{ij} ) - \\sin( 2 k_m R_{ij} ) \\right]\\text{,}" } ]
https://en.wikipedia.org/wiki?curid=1227105
1227155
Sort-merge join
Algorithm used in relational databases The sort-merge join (also known as merge join) is a join algorithm and is used in the implementation of a relational database management system. The basic problem of a join algorithm is to find, for each distinct value of the join attribute, the set of tuples in each relation which display that value. The key idea of the sort-merge algorithm is to first sort the relations by the join attribute, so that interleaved linear scans will encounter these sets at the same time. In practice, the most expensive part of performing a sort-merge join is arranging for both inputs to the algorithm to be presented in sorted order. This can be achieved via an explicit sort operation (often an external sort), or by taking advantage of a pre-existing ordering in one or both of the join relations. The latter condition, called interesting order, can occur because an input to the join might be produced by an index scan of a tree-based index, another merge join, or some other plan operator that happens to produce output sorted on an appropriate key. Interesting orders need not be serendipitous: the optimizer may seek out this possibility and choose a plan that is suboptimal for a specific preceding operation if it yields an interesting order that one or more downstream nodes can exploit. Complexity. Let formula_0 and formula_1 be relations where formula_2. formula_0 fits in formula_3 pages memory and formula_1 fits in formula_4 pages memory. In the worst case, a sort-merge join will run in formula_5 I/O operations. In the case that formula_0 and formula_1 are not ordered the worst case time cost will contain additional terms of sorting time: formula_6, which equals formula_7 (as linearithmic terms outweigh the linear terms, see Big O notation – Orders of common functions). Pseudocode. For simplicity, the algorithm is described in the case of an inner join of two relations "left" and "right". Generalization to other join types is straightforward. The output of the algorithm will contain only rows contained in the "left" and "right" relation and duplicates form a Cartesian product. 
function Sort-Merge Join(left: Relation, right: Relation, comparator: Comparator) {
    result = new Relation()

    // Ensure that at least one element is present
    if (!left.hasNext() || !right.hasNext()) {
        return result
    }

    // Sort left and right relation with comparator
    left.sort(comparator)
    right.sort(comparator)

    // Start Merge Join algorithm
    leftRow = left.next()
    rightRow = right.next()

    outerForeverLoop:
    while (true) {
        while (comparator.compare(leftRow, rightRow) != 0) {
            if (comparator.compare(leftRow, rightRow) < 0) {
                // Left row is less than right row
                if (left.hasNext()) {
                    // Advance to next left row
                    leftRow = left.next()
                } else {
                    break outerForeverLoop
                }
            } else {
                // Left row is greater than right row
                if (right.hasNext()) {
                    // Advance to next right row
                    rightRow = right.next()
                } else {
                    break outerForeverLoop
                }
            }
        }

        // Mark position of left row and keep copy of current left row
        left.mark()
        markedLeftRow = leftRow

        while (true) {
            while (comparator.compare(leftRow, rightRow) == 0) {
                // Left row and right row are equal
                // Add rows to result
                result = add(leftRow, rightRow)

                // Advance to next left row
                leftRow = left.next()

                // Check if left row exists
                if (!leftRow) {
                    // Continue with inner forever loop
                    break
                }
            }

            if (right.hasNext()) {
                // Advance to next right row
                rightRow = right.next()
            } else {
                break outerForeverLoop
            }

            if (comparator.compare(markedLeftRow, rightRow) == 0) {
                // Restore left to stored mark
                left.restoreMark()
                leftRow = markedLeftRow
            } else {
                // Check if left row exists
                if (!leftRow) {
                    break outerForeverLoop
                } else {
                    // Continue with outer forever loop
                    break
                }
            }
        }
    }

    return result
}
Since the comparison logic is not the central aspect of this algorithm, it is hidden behind a generic comparator and can also consist of several comparison criteria (e.g. multiple columns). The compare function should return if a row is "less(-1)", "equal(0)" or "bigger(1)" than another row:
function compare(leftRow: RelationRow, rightRow: RelationRow): number {
    // Return -1 if leftRow is less than rightRow
    // Return 0 if leftRow is equal to rightRow
    // Return 1 if leftRow is greater than rightRow
}
Note that a relation in terms of this pseudocode supports some basic operations:
interface Relation {
    // Returns true if relation has a next row (otherwise false)
    hasNext(): boolean

    // Returns the next row of the relation (if any)
    next(): RelationRow

    // Sorts the relation with the given comparator
    sort(comparator: Comparator): void

    // Marks the current row index
    mark(): void

    // Restores the current row index to the marked row index
    restoreMark(): void
}
Simple C# implementation. Note that this implementation assumes the join attributes are unique, i.e., there is no need to output multiple tuples for a given value of the key.
public class MergeJoin
{
    // Assume that left and right are already sorted
    public static Relation Merge(Relation left, Relation right)
    {
        Relation output = new Relation();

        while (!left.IsPastEnd() && !right.IsPastEnd())
        {
            if (left.Key == right.Key)
            {
                output.Add(left.Key);
                left.Advance();
                right.Advance();
            }
            else if (left.Key < right.Key)
                left.Advance();
            else // if (left.Key > right.Key)
                right.Advance();
        }
        return output;
    }
}

public class Relation
{
    private List<int> list;
    public const int ENDPOS = -1;
    public int position = 0;
    public int Position => position;
    public int Key => list[position];

    public bool Advance()
    {
        if (position == list.Count - 1 || position == ENDPOS)
        {
            position = ENDPOS;
            return false;
        }
        position++;
        return true;
    }

    public void Add(int key)
    {
        list.Add(key);
    }

    public bool IsPastEnd()
    {
        return position == ENDPOS;
    }

    public void Print()
    {
        foreach (int key in list)
            Console.WriteLine(key);
    }

    public Relation(List<int> list)
    {
        this.list = list;
    }

    public Relation()
    {
        this.list = new List<int>();
    }
}
References.
External links. C# Implementations of Various Join Algorithms
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": " |R|<|S| " }, { "math_id": 3, "text": "P_{r}" }, { "math_id": 4, "text": "P_{s}" }, { "math_id": 5, "text": "O(P_{r}+P_{s})" }, { "math_id": 6, "text": "O(P_{r}+P_{s}+P_{r}\\log(P_{r})+ P_{s}\\log(P_{s}))" }, { "math_id": 7, "text": "O(P_{r}\\log(P_{r})+ P_{s}\\log(P_{s}))" } ]
https://en.wikipedia.org/wiki?curid=1227155
1227275
T-square (fractal)
Two-dimensional fractal In mathematics, the T-square is a two-dimensional fractal. It has a boundary of infinite length bounding a finite area. Its name comes from the drawing instrument known as a T-square. Algorithmic description. It can be generated using the following algorithm: start with a square; at each convex corner of the current image, place another square centered at that corner, with half the side length of the most recently added squares; take the union with the existing image and repeat. The method of creation is rather similar to the ones used to create a Koch snowflake or a Sierpinski triangle, "both based on recursively drawing equilateral triangles and the Sierpinski carpet." Properties. The T-square fractal has a fractal dimension of ln(4)/ln(2) = 2. The black surface extent is "almost" everywhere in the bigger square, for once a point has been darkened, it remains black for every other iteration; however some points remain white. The fractal dimension of the boundary equals formula_0. Using mathematical induction one can prove that for each n ≥ 2 the number of new squares that are added at stage n equals formula_1. The T-Square and the chaos game. The T-square fractal can also be generated by an adaptation of the chaos game, in which a point jumps repeatedly half-way towards the randomly chosen vertices of a square. The T-square appears when the jumping point is unable to target the vertex directly opposite the vertex previously chosen. That is, if the current vertex is "v"[i] and the previous vertex was "v"[i-1], then "v"[i] ≠ "v"[i-1] + "vinc", where "vinc" = 2 and the addition is modulo 4, so that 3 + 2 = 1 and 4 + 2 = 2. If "vinc" is given different values, allomorphs of the T-square appear that are computationally equivalent to the T-square but very different in appearance. T-square fractal and Sierpiński triangle. The T-square fractal can be derived from the Sierpiński triangle, and vice versa, by adjusting the angle at which sub-elements of the original fractal are added from the center outwards. References.
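The restricted chaos game just described is straightforward to simulate. The following C# sketch (not part of the original article; the class name, grid resolution, random seed, and iteration count are arbitrary choices) accumulates the visited points on a boolean grid; rendering the grid as an image reproduces the T-square pattern:

using System;

class TSquareChaosGame
{
    static void Main()
    {
        const int size = 512;            // resolution of the output grid
        var hit = new bool[size, size];  // cells visited by the jumping point

        // Vertices of the square, numbered 1..4 going around the square.
        double[,] vertex = { { 0, 0 }, { size - 1, 0 }, { size - 1, size - 1 }, { 0, size - 1 } };

        var rng = new Random(1);
        double x = size / 2.0, y = size / 2.0;
        int previous = 1;                // 1-based index of the previously chosen vertex

        for (long i = 0; i < 2_000_000; i++)
        {
            // Forbidden choice: the vertex diagonally opposite the previous one,
            // i.e. previous + 2 with wrap-around (so 3 + 2 = 1 and 4 + 2 = 2).
            int forbidden = ((previous + 2 - 1) % 4) + 1;

            int v;
            do { v = rng.Next(1, 5); } while (v == forbidden);

            // Jump halfway towards the chosen vertex and mark the cell.
            x = (x + vertex[v - 1, 0]) / 2.0;
            y = (y + vertex[v - 1, 1]) / 2.0;
            hit[(int)x, (int)y] = true;

            previous = v;
        }

        // 'hit' now approximates the T-square; write it out as an image to view it.
        Console.WriteLine("Done: " + size + "x" + size + " grid filled.");
    }
}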
[ { "math_id": 0, "text": "\\textstyle{\\frac{\\log{3}}{\\log{2}}=1.5849...}" }, { "math_id": 1, "text": "4*3^{(n-1)}" } ]
https://en.wikipedia.org/wiki?curid=1227275
1227284
Pulsar
Rapidly rotating neutron star A pulsar (from "pulsating radio source") is a highly magnetized rotating neutron star that emits beams of electromagnetic radiation out of its magnetic poles. This radiation can be observed only when a beam of emission is pointing toward Earth (similar to the way a lighthouse can be seen only when the light is pointed in the direction of an observer), and is responsible for the pulsed appearance of emission. Neutron stars are very dense and have short, regular rotational periods. This produces a very precise interval between pulses that ranges from milliseconds to seconds for an individual pulsar. Pulsars are one of the candidates for the source of ultra-high-energy cosmic rays. (See also centrifugal mechanism of acceleration.) The periods of pulsars make them very useful tools for astronomers. Observations of a pulsar in a binary neutron star system were used to indirectly confirm the existence of gravitational radiation. The first extrasolar planets were discovered in 1992 around a pulsar, specifically PSR B1257+12. In 1983, certain types of pulsars were detected that, at that time, exceeded the accuracy of atomic clocks in keeping time. History of observation. Discovery. Signals from the first discovered pulsar were initially observed by Jocelyn Bell while analyzing data recorded on August 6, 1967, from a newly commissioned radio telescope that she helped build. Initially dismissed as radio interference by her supervisor and developer of the telescope, Antony Hewish, the fact that the signals always appeared at the same declination and right ascension soon ruled out a terrestrial source. On November 28, 1967, Bell and Hewish, using a fast strip chart recorder, resolved the signals as a series of pulses, evenly spaced every 1.337 seconds. No astronomical object of this nature had ever been observed before. On December 21, Bell discovered a second pulsar, quashing speculation that these might be signals beamed at Earth from an extraterrestrial intelligence. When observations with another telescope confirmed the emission, it eliminated any sort of instrumental effects. At this point, Bell said of herself and Hewish that "we did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission. It is an interesting problem—if one thinks one may have detected life elsewhere in the universe, how does one announce the results responsibly?" Even so, they nicknamed the signal "LGM-1", for "little green men" (a playful name for intelligent beings of extraterrestrial origin). It was not until a second pulsating source was discovered in a different part of the sky that the "LGM hypothesis" was entirely abandoned. Their pulsar was later dubbed CP 1919, and is now known by a number of designators including PSR B1919+21 and PSR J1921+2153. Although CP 1919 emits in radio wavelengths, pulsars have subsequently been found to emit in visible light, X-ray, and gamma ray wavelengths. The word "pulsar" first appeared in print in 1968. The existence of neutron stars was first proposed by Walter Baade and Fritz Zwicky in 1934, when they argued that a small, dense star consisting primarily of neutrons would result from a supernova. Based on the idea of magnetic flux conservation from magnetic main sequence stars, Lodewijk Woltjer proposed in 1964 that such neutron stars might contain magnetic fields as large as 10^14 to 10^16 gauss (= 10^10 to 10^12 tesla).
In 1967, shortly before the discovery of pulsars, Franco Pacini suggested that a rotating neutron star with a magnetic field would emit radiation, and even noted that such energy could be pumped into a supernova remnant around a neutron star, such as the Crab Nebula. After the discovery of the first pulsar, Thomas Gold independently suggested a rotating neutron star model similar to that of Pacini, and explicitly argued that this model could explain the pulsed radiation observed by Bell Burnell and Hewish. In 1968, Richard V. E. Lovelace with collaborators discovered period formula_0 ms of the Crab Nebula pulsar using Arecibo Observatory. The discovery of the Crab pulsar provided confirmation of the rotating neutron star model of pulsars. The Crab pulsar 33-millisecond pulse period was too short to be consistent with other proposed models for pulsar emission. Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky. In 1974, Antony Hewish and Martin Ryle, who had developed revolutionary radio telescopes, became the first astronomers to be awarded the Nobel Prize in Physics, with the Royal Swedish Academy of Sciences noting that Hewish played a "decisive role in the discovery of pulsars". Considerable controversy is associated with the fact that Hewish was awarded the prize while Bell, who made the initial discovery while she was his PhD student, was not. Bell claims no bitterness upon this point, supporting the decision of the Nobel prize committee. Milestones. In 1974, Joseph Hooton Taylor, Jr. and Russell Hulse discovered for the first time a pulsar in a binary system, PSR B1913+16. This pulsar orbits another neutron star with an orbital period of just eight hours. Einstein's theory of general relativity predicts that this system should emit strong gravitational radiation, causing the orbit to continually contract as it loses orbital energy. Observations of the pulsar soon confirmed this prediction, providing the first ever evidence of the existence of gravitational waves. As of 2010, observations of this pulsar continues to agree with general relativity. In 1993, the Nobel Prize in Physics was awarded to Taylor and Hulse for the discovery of this pulsar. In 1982, Don Backer led a group that discovered PSR B1937+21, a pulsar with a rotation period of just 1.6 milliseconds (38,500 rpm). Observations soon revealed that its magnetic field was much weaker than ordinary pulsars, while further discoveries cemented the idea that a new class of object, the "millisecond pulsars" (MSPs) had been found. MSPs are believed to be the end product of X-ray binaries. Owing to their extraordinarily rapid and stable rotation, MSPs can be used by astronomers as clocks rivaling the stability of the best atomic clocks on Earth. Factors affecting the arrival time of pulses at Earth by more than a few hundred nanoseconds can be easily detected and used to make precise measurements. Physical parameters accessible through pulsar timing include the 3D position of the pulsar, its proper motion, the electron content of the interstellar medium along the propagation path, the orbital parameters of any binary companion, the pulsar rotation period and its evolution with time. (These are computed from the raw timing data by Tempo, a computer program specialized for this task.) 
After these factors have been taken into account, deviations between the observed arrival times and predictions made using these parameters can be found and attributed to one of three possibilities: intrinsic variations in the spin period of the pulsar, errors in the realization of Terrestrial Time against which arrival times were measured, or the presence of background gravitational waves. Scientists are currently attempting to resolve these possibilities by comparing the deviations seen between several different pulsars, forming what is known as a pulsar timing array. The goal of these efforts is to develop a pulsar-based time standard precise enough to make the first ever direct detection of gravitational waves. In 2006, a team of astronomers at LANL proposed a model to predict the likely date of pulsar glitches with observational data from the Rossi X-ray Timing Explorer. They used observations of the pulsar PSR J0537−6910, which is known to be a quasi-periodic glitching pulsar. However, no general scheme for glitch forecasting is known to date. In 1992, Aleksander Wolszczan discovered the first extrasolar planets around PSR B1257+12. This discovery presented important evidence concerning the widespread existence of planets outside the Solar System, although it is very unlikely that any life form could survive in the environment of intense radiation near a pulsar. Pulsar-like white dwarfs. White dwarfs can also act as pulsars. Because the moment of inertia of a white dwarf is much higher than that of a neutron star, the white-dwarf pulsars rotate once every several minutes, far slower than neutron-star pulsars. By 2024, three pulsar-like white dwarfs have been identified. There is an alternative tentative explanation of the pulsar-like properties of these white dwarfs. In 2019, a numerical magnetohydrodynamic model explaining these pulsar-like properties was developed at Cornell University. According to this model, AE Aqr is an intermediate polar-type star, where the magnetic field is relatively weak and an accretion disc may form around the white dwarf. The star is in the propeller regime, and many of its observational properties are determined by the disc-magnetosphere interaction. A similar model for eRASSU J191213.9−441044 is supported by the results of its observations at ultraviolet wavelengths, which showed that its magnetic field strength does not exceed 50 MG. Nomenclature. Initially pulsars were named with letters of the discovering observatory followed by their right ascension (e.g. CP 1919). As more pulsars were discovered, the letter code became unwieldy, and so the convention then arose of using the letters PSR (Pulsating Source of Radio) followed by the pulsar's right ascension and degrees of declination (e.g. PSR 0531+21) and sometimes declination to a tenth of a degree (e.g. PSR 1913+16.7). Pulsars appearing very close together sometimes have letters appended (e.g. PSR 0021−72C and PSR 0021−72D). The modern convention prefixes the older numbers with a B (e.g. PSR B1919+21), with the B meaning the coordinates are for the 1950.0 epoch. All new pulsars have a J indicating 2000.0 coordinates and also have declination including minutes (e.g. PSR J1921+2153). Pulsars that were discovered before 1993 tend to retain their B names rather than use their J names (e.g. PSR J1921+2153 is more commonly known as PSR B1919+21). Recently discovered pulsars only have a J name (e.g. PSR J0437−4715).
All pulsars have a J name that provides more precise coordinates of its location in the sky. Formation, mechanism, turn off. The events leading to the formation of a pulsar begin when the core of a massive star is compressed during a supernova, which collapses into a neutron star. The neutron star retains most of its angular momentum, and since it has only a tiny fraction of its progenitor's radius (and therefore its moment of inertia is sharply reduced), it is formed with very high rotation speed. A beam of radiation is emitted along the magnetic axis of the pulsar, which spins along with the rotation of the neutron star. The magnetic axis of the pulsar determines the direction of the electromagnetic beam, with the magnetic axis not necessarily being the same as its rotational axis. This misalignment causes the beam to be seen once for every rotation of the neutron star, which leads to the "pulsed" nature of its appearance. In rotation-powered pulsars, the beam is the result of the rotational energy of the neutron star, which generates an electrical field and very strong magnetic field, resulting in the acceleration of protons and electrons on the star surface and the creation of an electromagnetic beam emanating from the poles of the magnetic field. Observations by NICER of PSR J0030+0451 indicate that both beams originate from hotspots located on the south pole and that there may be more than two such hotspots on that star. This rotation slows down over time as electromagnetic power is emitted. When a pulsar's spin period slows down sufficiently, the radio pulsar mechanism is believed to turn off (the so-called "death line"). This turn-off seems to take place after about 10–100 million years, which means of all the neutron stars born in the 13.6-billion-year age of the universe, around 99% no longer pulsate. Though the general picture of pulsars as rapidly rotating neutron stars is widely accepted, Werner Becker of the Max Planck Institute for Extraterrestrial Physics said in 2006, "The theory of how pulsars emit their radiation is still in its infancy, even after nearly forty years of work." Categories. Three distinct classes of pulsars are currently known to astronomers, according to the source of the power of the electromagnetic radiation: Although all three classes of objects are neutron stars, their observable behavior and the underlying physics are quite different. There are, however, some connections. For example, X-ray pulsars are probably old rotationally-powered pulsars that have already lost most of their power, and have only become visible again after their binary companions had expanded and begun transferring matter on to the neutron star. The process of accretion can, in turn, transfer enough angular momentum to the neutron star to "recycle" it as a rotation-powered millisecond pulsar. As this matter lands on the neutron star, it is thought to "bury" the magnetic field of the neutron star (although the details are unclear), leaving millisecond pulsars with magnetic fields 1000–10,000 times weaker than average pulsars. This low magnetic field is less effective at slowing the pulsar's rotation, so millisecond pulsars live for billions of years, making them the oldest known pulsars. Millisecond pulsars are seen in globular clusters, which stopped forming neutron stars billions of years ago. Of interest to the study of the state of the matter in a neutron star are the "glitches" observed in the rotation velocity of the neutron star. 
This velocity decreases slowly but steadily, except for an occasional sudden variation – a "glitch". One model put forward to explain these glitches is that they are the result of "starquakes" that adjust the crust of the neutron star. Models where the glitch is due to a decoupling of the possibly superconducting interior of the star have also been advanced. In both cases, the star's moment of inertia changes, but its angular momentum does not, resulting in a change in rotation rate. Disrupted recycled pulsar. When two massive stars are born close together from the same cloud of gas, they can form a binary system and orbit each other from birth. If those two stars are at least a few times as massive as the Sun, their lives will both end in supernova explosions. The more massive star explodes first, leaving behind a neutron star. If the explosion does not kick the second star away, the binary system survives. The neutron star can now be visible as a radio pulsar, and it slowly loses energy and spins down. Later, the second star can swell up, allowing the neutron star to suck up its matter. The matter falling onto the neutron star spins it up and reduces its magnetic field. This is called "recycling" because it returns the neutron star to a quickly-spinning state. Finally, the second star also explodes in a supernova, producing another neutron star. If this second explosion also fails to disrupt the binary, a double neutron star binary is formed. Otherwise, the spun-up neutron star is left with no companion and becomes a "disrupted recycled pulsar", spinning between a few and 50 times per second. Applications. The discovery of pulsars allowed astronomers to study an object never observed before, the neutron star. This kind of object is the only place where the behavior of matter at nuclear density can be observed (though not directly). Also, millisecond pulsars have allowed a test of general relativity in conditions of an intense gravitational field. Maps. Pulsar maps have been included on the two "Pioneer" plaques as well as the "Voyager" Golden Record. They show the position of the Sun, relative to 14 pulsars, which are identified by the unique timing of their electromagnetic pulses, so that Earth's position both in space and time can be calculated by potential extraterrestrial intelligence. Because pulsars are emitting very regular pulses of radio waves, its radio transmissions do not require daily corrections. Moreover, pulsar positioning could create a spacecraft navigation system independently, or be used in conjunction with satellite navigation. Pulsar navigation. "X-ray pulsar-based navigation and timing (XNAV)" or simply "pulsar navigation" is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations have been reported in 2018. Precise clocks. Generally, the regularity of pulsar emission does not rival the stability of atomic clocks. They can still be used as external reference. For example, J0437−4715 has a period of  s with an error of . 
This stability allows millisecond pulsars to be used in establishing ephemeris time or in building pulsar clocks. "Timing noise" is the name for rotational irregularities observed in all pulsars. This timing noise is observable as random wandering in the pulse frequency or phase. It is unknown whether timing noise is related to pulsar glitches. According to a study published in 2023, the timing noise observed in pulsars is believed to be caused by background gravitational waves. Alternatively, it may be caused by stochastic fluctuations in both the internal (related to the presence of superfluids or turbulence) and external (due to magnetospheric activity) torques in a pulsar. Probes of the interstellar medium. The radiation from pulsars passes through the interstellar medium (ISM) before reaching Earth. Free electrons in the warm (8000 K), ionized component of the ISM and H II regions affect the radiation in two primary ways. The resulting changes to the pulsar's radiation provide an important probe of the ISM itself. Because of the dispersive nature of the interstellar plasma, lower-frequency radio waves travel through the medium slower than higher-frequency radio waves. The resulting delay in the arrival of pulses at a range of frequencies is directly measurable as the "dispersion measure" of the pulsar. The dispersion measure is the total column density of free electrons between the observer and the pulsar: formula_1 where formula_2 is the distance from the pulsar to the observer, and formula_3 is the electron density of the ISM. The dispersion measure is used to construct models of the free electron distribution in the Milky Way. Additionally, density inhomogeneities in the ISM cause scattering of the radio waves from the pulsar. The resulting scintillation of the radio waves—the same effect as the twinkling of a star in visible light due to density variations in the Earth's atmosphere—can be used to reconstruct information about the small scale variations in the ISM. Due to the high velocity (up to several hundred km/s) of many pulsars, a single pulsar scans the ISM rapidly, which results in changing scintillation patterns over timescales of a few minutes. The exact cause of these density inhomogeneities remains an open question, with possible explanations ranging from turbulence to current sheets. Probes of space-time. Pulsars orbiting within the curved space-time around Sgr A*, the supermassive black hole at the center of the Milky Way, could serve as probes of gravity in the strong-field regime. Arrival times of the pulses would be affected by special- and general-relativistic Doppler shifts and by the complicated paths that the radio waves would travel through the strongly curved space-time around the black hole. In order for the effects of general relativity to be measurable with current instruments, pulsars with orbital periods less than about 10 years would need to be discovered; such pulsars would orbit at distances inside 0.01 pc from Sgr A*. Searches are currently underway; at present, five pulsars are known to lie within 100 pc from Sgr A*. Gravitational wave detectors. There are four consortia around the world which use pulsars to search for gravitational waves: the European Pulsar Timing Array (EPTA) in Europe, the Parkes Pulsar Timing Array (PPTA) in Australia, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) in Canada and the US, and the Indian Pulsar Timing Array (InPTA) in India. 
Together, the consortia form the International Pulsar Timing Array (IPTA). The pulses from Millisecond Pulsars (MSPs) are used as a system of galactic clocks. Disturbances in the clocks will be measurable at Earth. A disturbance from a passing gravitational wave will have a particular signature across the ensemble of pulsars, and will be thus detected. Significant pulsars. The pulsars listed here were either the first discovered of its type, or represent an extreme of some type among the known pulsar population, such as having the shortest measured period. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P \\approx 33" }, { "math_id": 1, "text": "\\mathrm{DM} = \\int_0^D n_e(s) \\,ds," }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "n_e" } ]
https://en.wikipedia.org/wiki?curid=1227284
1227519
Horn-satisfiability
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn. A Horn clause is a clause with at most one positive literal, called the "head" of the clause, and any number of negative literals, forming the "body" of the clause. A Horn formula is a propositional formula formed by conjunction of Horn clauses. Horn satisfiability is actually one of the "hardest" or "most expressive" problems that are known to be computable in polynomial time, in the sense that it is a P-complete problem. The Horn satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, but some are polynomial; see Hähnle (2001 or 2003) for a survey. Algorithm. The problem of Horn satisfiability is solvable in linear time. The problem of deciding the truth of quantified Horn formulae can also be solved in polynomial time. A polynomial-time algorithm for Horn satisfiability is recursive and based on unit propagation: if the formula contains no unit clause, it is satisfiable, since setting every remaining variable to false satisfies each remaining clause (each of which still contains a negative literal); otherwise, pick a unit clause, assign its literal the value that satisfies it, remove the clauses that are thereby satisfied, delete the falsified literal from the remaining clauses, and recurse; if an empty clause is ever produced, the formula is unsatisfiable. This algorithm also allows determining a truth assignment of satisfiable Horn formulae: all variables contained in a unit clause are set to the value satisfying that unit clause; all other literals are set to false. The resulting assignment is the minimal model of the Horn formula, that is, the assignment having a minimal set of variables assigned to true, where comparison is made using set containment. Using a linear algorithm for unit propagation, the algorithm is linear in the size of the formula. Examples. Trivial case. In the Horn formula (¬"a" ∨ ¬"b" ∨ "c") ∧ (¬"b" ∨ ¬"c" ∨ "d") ∧ (¬"f" ∨ ¬"a" ∨ "b") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ (¬"e" ∨ "f") ∧ (¬"d" ∨ "e") ∧ (¬"b" ∨ ¬"c"), each clause has a negated literal. Therefore, setting each variable to false satisfies all clauses, hence it is a solution. Solvable case. In the Horn formula (¬"a" ∨ ¬"b" ∨ "c") ∧ (¬"b" ∨ ¬"c" ∨ "f") ∧ (¬"f" ∨ "b") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ ("f") ∧ (¬"d" ∨ "e") ∧ (¬"b" ∨ ¬"c"), one clause forces "f" to be true. Setting "f" to true and simplifying gives (¬"a" ∨ ¬"b" ∨ "c") ∧ ("b") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ (¬"d" ∨ "e") ∧ (¬"b" ∨ ¬"c"). Now "b" must be true. Simplification gives (¬"a" ∨ "c") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ (¬"d" ∨ "e") ∧ (¬"c"). Now it is a trivial case, so the remaining variables can all be set to false. Thus, a satisfying assignment is "a" = false, "b" = true, "c" = false, "d" = false, "e" = false, "f" = true. Unsolvable case. In the Horn formula (¬"a" ∨ ¬"b" ∨ "c") ∧ (¬"b" ∨ ¬"c" ∨ "f") ∧ (¬"f" ∨ "b") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ ("f") ∧ (¬"d" ∨ "e") ∧ (¬"b"), one clause forces "f" to be true. Subsequent simplification gives (¬"a" ∨ ¬"b" ∨ "c") ∧ ("b") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ (¬"d" ∨ "e") ∧ (¬"b"). Now "b" has to be true. Simplification gives (¬"a" ∨ "c") ∧ (¬"e" ∨ ¬"c" ∨ "a") ∧ (¬"d" ∨ "e") ∧ (). We obtained an empty clause, hence the formula is unsatisfiable. Generalization. A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.
Horn satisfiability and renamable Horn satisfiability provide one of two important subclasses of satisfiability that are solvable in polynomial time; the other such subclass is 2-satisfiability. Dual-Horn SAT. A dual variant of Horn SAT is Dual-Horn SAT, in which each clause has at most one negative literal. Negating all variables transforms an instance of Dual-Horn SAT into Horn SAT. It was proven in 1951 by Horn that Dual-Horn SAT is in P.
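The unit-propagation procedure from the Algorithm section above can be sketched in a few lines. The following C# sketch (not from the article; the clause representation and all class, field, and method names are illustrative, and it is an iterative rather than recursive formulation of the same idea) stores each Horn clause as its set of negated variables (the body) plus an optional positive head, and repeatedly fires clauses whose bodies are already satisfied:

using System.Collections.Generic;
using System.Linq;

class HornClause
{
    public HashSet<string> Body = new HashSet<string>(); // variables that appear negated
    public string Head;                                   // the positive literal, or null if absent
}

static class HornSat
{
    // Returns the minimal model (the set of variables forced to true), or null if unsatisfiable.
    public static HashSet<string> Solve(List<HornClause> clauses)
    {
        var trueVars = new HashSet<string>();
        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var c in clauses)
            {
                // A clause "fires" once every variable in its body has been forced to true.
                if (c.Body.All(v => trueVars.Contains(v)))
                {
                    if (c.Head == null)
                        return null;              // a purely negative clause is violated: unsatisfiable
                    if (trueVars.Add(c.Head))
                        changed = true;           // a new fact was derived; propagate again
                }
            }
        }
        return trueVars;                          // every variable not in the set is false
    }
}

Applied to the solvable example above, Solve returns the set { f, b }, exactly the minimal model found by hand; on the unsolvable example the purely negative clause (¬"b") fires once "b" has been derived, and null is returned.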
[ { "math_id": 0, "text": "l" }, { "math_id": 1, "text": "\\neg l" } ]
https://en.wikipedia.org/wiki?curid=1227519
12277357
Swift–Hohenberg equation
The Swift–Hohenberg equation (named after Jack B. Swift and Pierre Hohenberg) is a partial differential equation noted for its pattern-forming behaviour. It takes the form formula_0 where "u" = "u"("x", "t") or "u" = "u"("x", "y", "t") is a scalar function defined on the line or the plane, "r" is a real bifurcation parameter, and "N"("u") is some smooth nonlinearity. The equation is named after the authors of the paper, where it was derived from the equations for thermal convection. Another example where the equation appears is in the study of wrinkling morphology and pattern selection in curved elastic bilayer materials. The Swift–Hohenberg equation leads to the Ginzburg–Landau equation.
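As an illustration of the pattern-forming behaviour, the following C# sketch (not from the article; the grid size, parameter values, time step, and the cubic nonlinearity N(u) = −u³ are illustrative assumptions) integrates the one-dimensional equation with an explicit finite-difference scheme on a periodic domain; after enough steps the solution settles into stripes with wavelength near 2π, the fastest-growing mode of the linear operator:

using System;

class SwiftHohenberg1D
{
    // Explicit finite-difference sketch of u_t = r*u - (1 + d^2/dx^2)^2 u - u^3
    // on a periodic one-dimensional domain.
    static void Main()
    {
        int n = 256;              // grid points
        double length = 100.0;    // domain size
        double dx = length / n;
        double dt = 0.001;        // small step: the explicit scheme needs dt on the order of dx^4
        double r = 0.3;           // bifurcation parameter (> 0 so patterns grow)

        var u = new double[n];
        var rng = new Random(0);
        for (int i = 0; i < n; i++) u[i] = 0.01 * (rng.NextDouble() - 0.5); // small random noise

        for (int step = 0; step < 50000; step++)
        {
            var next = new double[n];
            for (int i = 0; i < n; i++)
            {
                int im1 = (i - 1 + n) % n, im2 = (i - 2 + n) % n;
                int ip1 = (i + 1) % n,     ip2 = (i + 2) % n;

                double uxx   = (u[ip1] - 2 * u[i] + u[im1]) / (dx * dx);
                double uxxxx = (u[ip2] - 4 * u[ip1] + 6 * u[i] - 4 * u[im1] + u[im2])
                               / (dx * dx * dx * dx);

                // (1 + d^2/dx^2)^2 u = u + 2*u_xx + u_xxxx
                double linear = r * u[i] - (u[i] + 2 * uxx + uxxxx);
                next[i] = u[i] + dt * (linear - u[i] * u[i] * u[i]);
            }
            u = next;
        }

        // u now exhibits a nearly periodic stripe pattern with wavelength close to 2*pi.
        for (int i = 0; i < n; i += 16) Console.WriteLine($"u[{i}] = {u[i]:F3}");
    }
}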
[ { "math_id": 0, "text": "\n\\frac{\\partial u}{\\partial t} = r u - (1+\\nabla^2)^2u + N(u)\n" } ]
https://en.wikipedia.org/wiki?curid=12277357
12278528
Deoxidization
Metallurgy/steelmaking method Deoxidization is a method used in metallurgy to remove the rest of oxygen content from previously reduced iron ore during steel manufacturing. In contrast, antioxidants are used for stabilization, such as in the storage of food. Deoxidation is important in the steelmaking process as oxygen is often detrimental to the quality of steel produced. Deoxidization is mainly achieved by adding a separate chemical species to neutralize the effects of oxygen or by directly removing the oxygen. Oxidation. Oxidation is the process of an element losing electrons. For example, iron will transfer two of its electrons to oxygen, forming an oxide. This occurs all throughout as an unintended part of the steelmaking process. Oxygen blowing is a method of steelmaking where oxygen is blown through pig iron to lower the carbon content. Oxygen forms oxides with the unwanted elements, such as carbon, silicon, phosphorus, and manganese, which appear from various stages of the manufacturing process. These oxides will float to the top of the steel pool and remove themselves from the pig iron. However, some of the oxygen will also react with the iron itself. Due to the high temperatures involved in smelting, oxygen in the air may dissolve into the molten iron while it is being poured. Slag, a byproduct left over after the smelting process, is used to further absorb impurities such as sulfur or oxides and protect steel from further oxidation. However, it can still be responsible for some oxidation. Some processes, while still able to lead to oxidation, are not relevant to the oxygen content of steel during its manufacture. For example, rust is a red iron oxide that forms when the iron in steel reacts with the oxygen or water in the air. This usually only occurs once the steel has been in use for varying lengths of time. Some physical components of the steelmaking process itself, such as the electric arc furnace, may also wear down and oxidize. This problem is typically dealt with by the use of refractory metals, which resist environmental conditions. If steel is not properly deoxidized, it will have lost various properties such as tensile strength, ductility, toughness, weldability, polishability, and machinability. This is due to forming non-metallic inclusions and gas pores, bubbles of gas that get trapped during the solidification process of steel. Types of deoxidizers. Metallic deoxidizers. This method of deoxidization involves adding specific metals into the steel. These metals will react with the unwanted oxygen, forming a strong oxide that, compared to pure oxygen, will reduce the steel's strength and qualities by a lesser amount. The chemical equation for deoxidization is represented by: formula_0 where n and m are coefficients, D is the deoxidizing agent, and O is oxygen. Thus, the chemical equilibrium equation involved is: formula_1 where aox is the activity, or concentration, of the oxide in the steel, aD is the activity of the deoxidizing agent, and aO is the activity of the oxygen. An increase in the equilibrium constant Keq will cause an increase in aox, and thus more of the oxide product. Keq can be manipulated by the steel temperature via the following equation: formula_2 where AD and BD are parameters specific to different deoxidizers and T is the temperature in K°. Below are the values for certain deoxidizers at a temperature of 1873 K°. Below is a list of commonly used metallic deoxidizers: Vacuum deoxidation. 
Vacuum deoxidation is a method which involves using a vacuum to remove impurities. A portion of the carbon and oxygen in steel will react, forming carbon monoxide. CO gas will float up to the top of the liquid steel and be removed by a vacuum system. As the chemical reaction involved in vacuum deoxidation is: formula_3 the reaction between carbon and oxygen is represented by the following chemical equilibrium equation: formula_4 where PCO is the partial pressure of the carbon monoxide formed. Because KCO is fixed at a given temperature, lowering PCO forces the product of the carbon and oxygen activities (aC · aO) to fall; that is, dissolved carbon and oxygen are consumed to form more CO. This is achieved by subjecting the pool of steel to vacuum treatment, which decreases the value of PCO and allows more CO gas to be produced and removed. Diffusion deoxidation. This method relies on the idea that deoxidation of slag will lead to the deoxidation of steel. The chemical equilibrium equation used for this process is: formula_5 where a[O] is the activity of the oxygen in the slag, and a(O) is the activity of oxygen in the steel. Reducing the activity in the slag (a[O]) will lower the oxygen levels in the slag. Afterwards, oxygen will diffuse from the steel into the lesser concentrated slag. This method is done by using deoxidizing agents on the slag, such as coke or silicon. As these agents do not come into direct contact with the steel, non-metallic inclusions will not form in the steel itself. References.
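A small numerical illustration of the relations above (not from the article: the AD, BD, KCO, and activity values below are made-up placeholders, since the article's table of deoxidizer constants is not reproduced here, and a base-10 logarithm is assumed): the snippet evaluates log Keq = AD/T − BD for a hypothetical deoxidizer, then shows how, at fixed KCO, lowering PCO lowers the dissolved-oxygen activity that can coexist with a given carbon activity.

using System;

class DeoxidationExample
{
    static void Main()
    {
        // Hypothetical deoxidizer parameters (placeholders, not real data).
        double A_D = 45000.0, B_D = 12.0;
        double T = 1873.0; // temperature in K

        // log10 K_eq = A_D / T - B_D
        double logK = A_D / T - B_D;
        Console.WriteLine($"log10 K_eq = {logK:F2}  (K_eq = {Math.Pow(10, logK):E2})");

        // Vacuum deoxidation: K_CO = P_CO / (a_C * a_O)  =>  a_O = P_CO / (K_CO * a_C).
        double K_CO = 500.0;  // placeholder equilibrium constant
        double a_C = 0.05;    // placeholder carbon activity
        foreach (double P_CO in new[] { 1.0, 0.1, 0.01 })  // pressure dropping under vacuum
        {
            double a_O = P_CO / (K_CO * a_C);
            Console.WriteLine($"P_CO = {P_CO} atm  ->  equilibrium a_O = {a_O:E2}");
        }
    }
}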
[ { "math_id": 0, "text": "nD + mO \\longrightarrow D_ nO_m" }, { "math_id": 1, "text": "K_{eq} = a_{ox} / (a_D^n * a_O^m )" }, { "math_id": 2, "text": "log K_{eq} = A_D / T - B_D " }, { "math_id": 3, "text": "C + O \\longrightarrow CO" }, { "math_id": 4, "text": "K_{CO} = P_{CO}/(a_C * a_O)" }, { "math_id": 5, "text": "K_{FeO} = a_{[O]} / a_{(O)}" } ]
https://en.wikipedia.org/wiki?curid=12278528
12278602
Physical theories modified by general relativity
"This article will use the Einstein summation convention." The theory of general relativity required the adaptation of existing theories of physical, electromagnetic, and quantum effects to account for non-Euclidean geometries. These physical theories modified by general relativity are described below. Classical mechanics and special relativity. Classical mechanics and special relativity are lumped together here because special relativity is in many ways intermediate between general relativity and classical mechanics, and shares many attributes with classical mechanics. In the following discussion, the mathematics of general relativity is used heavily. Also, under the principle of minimal coupling, the physical equations of special relativity can be turned into their general relativity counterparts by replacing the Minkowski metric ("ηab") with the relevant metric of spacetime ("gab") and by replacing any partial derivatives with covariant derivatives. In the discussions that follow, the change of metrics is implied. Inertia. Inertial motion is motion free of all forces. In Newtonian mechanics, the force "F" acting on a particle with mass "m" is given by Newton's second law, formula_0, where the acceleration is given by the second derivative of position "r" with respect to time "t" . Zero force means that inertial motion is just motion with zero acceleration: formula_1 The idea is the same in special relativity. Using Cartesian coordinates, inertial motion is described mathematically as: formula_2 where "formula_3" is the position coordinate and "τ" is proper time. (In Newtonian mechanics, "τ ≡ t", the coordinate time). In both Newtonian mechanics and special relativity, space and then spacetime are assumed to be flat, and we can construct a global Cartesian coordinate system. In general relativity, these restrictions on the shape of spacetime and on the coordinate system to be used are lost. Therefore, a different definition of inertial motion is required. In relativity, inertial motion occurs along timelike or null geodesics as parameterized by proper time. This is expressed mathematically by the geodesic equation: formula_4 where formula_5 is a Christoffel symbol. Since general relativity describes four-dimensional spacetime, this represents four equations, with each one describing the second derivative of a coordinate with respect to proper time. In the case of flat space in Cartesian coordinates, we have formula_6, so this equation reduces to the special relativity form. Gravitation. For gravitation, the relationship between Newton's theory of gravity and general relativity is governed by the correspondence principle: General relativity must produce the same results as gravity does for the cases where Newtonian physics has been shown to be accurate. Around a spherically symmetric object, the Newtonian theory of gravity predicts that objects will be physically accelerated towards the center on the object by the rule formula_7 where "G" is Newton's Gravitational constant, "M" is the mass of the gravitating object, "r" is the distance to the gravitation object, and formula_8 is a unit vector identifying the direction to the massive object. In the weak-field approximation of general relativity, an identical coordinate acceleration must exist. 
For the Schwarzschild solution (which is the simplest possible spacetime surrounding a massive object), the same acceleration as that which (in Newtonian physics) is created by gravity is obtained when a constant of integration is set equal to "2MG/c2"). For more information, see Deriving the Schwarzschild solution. Transition from Newtonian mechanics to general relativity. Some of the basic concepts of general relativity can be outlined outside the relativistic domain. In particular, the idea that mass/energy generates curvature in space and that curvature affects the motion of masses can be illustrated in a Newtonian setting. General relativity generalizes the geodesic equation and the field equation to the relativistic realm in which trajectories in space are replaced with Fermi–Walker transport along world lines in spacetime. The equations are also generalized to more complicated curvatures. Transition from special relativity to general relativity. The basic structure of general relativity, including the geodesic equation and Einstein field equation, can be obtained from special relativity by examining the kinetics and dynamics of a particle in a circular orbit about the earth. In terms of symmetry, the transition involves replacing global Lorentz covariance with local Lorentz covariance. Conservation of energy–momentum. In classical mechanics, conservation laws for energy and momentum are handled separately in the two principles of conservation of energy and conservation of momentum. With the advent of special relativity, these two conservation principles were united through the concept of mass-energy equivalence. Mathematically, the general relativity statement of energy–momentum conservation is: formula_9 where formula_10 is the stress–energy tensor, the comma indicates a partial derivative and the semicolon indicates a covariant derivative. The terms involving the Christoffel symbols are absent in the special relativity statement of energy–momentum conservation. Unlike classical mechanics and special relativity, it is not usually possible to unambiguously define the total energy and momentum in general relativity, so the tensorial conservation laws are "local" statements only (see ADM energy, though). This often causes confusion in time-dependent spacetimes which apparently do not conserve energy, although the local law is always satisfied. Exact formulation of energy–momentum conservation on an arbitrary geometry requires use of a non-unique stress–energy–momentum pseudotensor. Electromagnetism. General relativity modifies the description of electromagnetic phenomena by employing a new version of Maxwell's equations. These differ from the special relativity form in that the Christoffel symbols make their presence in the equations via the covariant derivative. The source equations of electrodynamics in curved spacetime are (in cgs units) formula_11 where "Fab" is the electromagnetic field tensor representing the electromagnetic field and "Ja" is a four-current representing the sources of the electromagnetic field. The source-free equations are the same as their special relativity counterparts. The effect of an electromagnetic field on a charged object is then modified to formula_12, where "q" is the charge on the object, "m" is the rest mass of the object and "P a" is the four-momentum of the charged object. Maxwell's equations in flat spacetime are recovered in rectangular coordinates by reverting the covariant derivatives to partial derivatives. 
Maxwell's equations in flat spacetime can likewise be written in curvilinear coordinates.
[ { "math_id": 0, "text": "F=m \\ddot{r}" }, { "math_id": 1, "text": "\\frac{\\mathrm{d}^2 r}{\\mathrm{d}t^2}=0" }, { "math_id": 2, "text": "\\frac{\\mathrm{d}^2 x^a}{\\mathrm{d}\\tau^2} = 0" }, { "math_id": 3, "text": "x^a" }, { "math_id": 4, "text": "\\frac{\\mathrm{d}^2 x^a}{\\mathrm{d}\\tau^2} + \\Gamma^a_{bc} \\, \\frac{\\mathrm{d} x^b}{\\mathrm{d}\\tau} \\,\\frac{\\mathrm{d} x^c}{\\mathrm{d}\\tau} = 0" }, { "math_id": 5, "text": "\\Gamma^a_{bc}" }, { "math_id": 6, "text": "\\Gamma^a_{bc}=0" }, { "math_id": 7, "text": "\\mathbf{\\ddot r} = GM \\mathbf{\\hat{r}}/r^2" }, { "math_id": 8, "text": "\\mathbf{\\hat{r}}" }, { "math_id": 9, "text": "{T_a}^b{}_{; b} = {T_a}^b{}_{,b} + {\\Gamma^b}_{cb} \\, {T_a}^c - {\\Gamma^c}_{ab} \\, {T_c}^b = 0" }, { "math_id": 10, "text": "{T_a}^b" }, { "math_id": 11, "text": " F^{\\,ab}{}_{;b} = {4\\pi \\over c }\\,J^{\\,a}" }, { "math_id": 12, "text": " P^{\\, a} {}_{\\, ;\\tau} = (q/m)\\,F^{\\,ab}P_b" } ]
https://en.wikipedia.org/wiki?curid=12278602
12281
Gottfried Wilhelm Leibniz
German mathematician and philosopher (1646–1716) Gottfried Wilhelm Leibniz (1 July 1646 [O.S. 21 June] – 14 November 1716) was a German polymath active as a mathematician, philosopher, scientist and diplomat who invented calculus in addition to many other branches of mathematics, such as binary arithmetic, and statistics. Leibniz has been called the "last universal genius" due to his knowledge and skills in different fields and because such people became much less common after his lifetime with the coming of the Industrial Revolution and the spread of specialized labor. He is a prominent figure in both the history of philosophy and the history of mathematics. He wrote works on philosophy, theology, ethics, politics, law, history, philology, games, music, and other studies. Leibniz also made major contributions to physics and technology, and anticipated notions that surfaced much later in probability theory, biology, medicine, geology, psychology, linguistics and computer science. In addition, he contributed to the field of library science by devising a cataloguing system whilst working at the Herzog August Library in Wolfenbüttel, Germany, that would have served as a guide for many of Europe's largest libraries. Leibniz's contributions to a wide range of subjects were scattered in various learned journals, in tens of thousands of letters and in unpublished manuscripts. He wrote in several languages, primarily in Latin, French and German. As a philosopher, he was a leading representative of 17th-century rationalism and idealism. As a mathematician, his major achievement was the development of the main ideas of differential and integral calculus, independently of Isaac Newton's contemporaneous developments. Mathematicians have consistently favored Leibniz's notation as the conventional and more exact expression of calculus. In the 20th century, Leibniz's notions of the law of continuity and transcendental law of homogeneity found a consistent mathematical formulation by means of non-standard analysis. He was also a pioneer in the field of mechanical calculators. While working on adding automatic multiplication and division to Pascal's calculator, he was the first to describe a pinwheel calculator in 1685 and invented the Leibniz wheel, later used in the arithmometer, the first mass-produced mechanical calculator. In philosophy and theology, Leibniz is most noted for his optimism, i.e. his conclusion that our world is, in a qualified sense, the best possible world that God could have created, a view sometimes lampooned by other thinkers, such as Voltaire in his satirical novella "Candide". Leibniz, along with René Descartes and Baruch Spinoza, was one of the three influential early modern rationalists. His philosophy also assimilates elements of the scholastic tradition, notably the assumption that some substantive knowledge of reality can be achieved by reasoning from first principles or prior definitions. The work of Leibniz anticipated modern logic and still influences contemporary analytic philosophy, such as its adopted use of the term "possible world" to define modal notions. Biography. Early life. Gottfried Leibniz was born on July 1 [OS: June 21], 1646, in Leipzig, Saxony, to Friedrich Leibniz (1597–1652) and Catharina Schmuck (1621–1664). He was baptized two days later at St. Nicholas Church, Leipzig; his godfather was the Lutheran theologian Martin Geier. His father died when he was six years old, and Leibniz was raised by his mother. 
Leibniz's father had been a Professor of Moral Philosophy at the University of Leipzig, where he also served as dean of philosophy. The boy inherited his father's personal library. He was given free access to it from the age of seven, shortly after his father's death. While Leibniz's schoolwork was largely confined to the study of a small canon of authorities, his father's library enabled him to study a wide variety of advanced philosophical and theological works—ones that he would not have otherwise been able to read until his college years. Access to his father's library, largely written in Latin, also led to his proficiency in the Latin language, which he achieved by the age of 12. At the age of 13 he composed 300 hexameters of Latin verse in a single morning for a special event at school. In April 1661 he enrolled in his father's former university at age 14. There he was guided, among others, by Jakob Thomasius, previously a student of Friedrich. Leibniz completed his bachelor's degree in Philosophy in December 1662. He defended his "Disputatio Metaphysica de Principio Individui" ("Metaphysical Disputation on the Principle of Individuation"), which addressed the principle of individuation, on 9 June 1663 [O.S. 30 May], presenting an early version of monadic substance theory. Leibniz earned his master's degree in Philosophy on 7 February 1664. In December 1664 he published and defended a dissertation "Specimen Quaestionum Philosophicarum ex Jure collectarum" ("An Essay of Collected Philosophical Problems of Right"), arguing for both a theoretical and a pedagogical relationship between philosophy and law. After one year of legal studies, he was awarded his bachelor's degree in Law on 28 September 1665. His dissertation was titled "De conditionibus" ("On Conditions"). In early 1666, at age 19, Leibniz wrote his first book, "De Arte Combinatoria" ("On the Combinatorial Art"), the first part of which was also his habilitation thesis in Philosophy, which he defended in March 1666. "De Arte Combinatoria" was inspired by Ramon Llull's "Ars Magna" and contained a proof of the existence of God, cast in geometrical form, and based on the argument from motion. His next goal was to earn his license and Doctorate in Law, which normally required three years of study. In 1666, the University of Leipzig turned down Leibniz's doctoral application and refused to grant him a Doctorate in Law, most likely due to his relative youth. Leibniz subsequently left Leipzig. Leibniz then enrolled in the University of Altdorf and quickly submitted a thesis, which he had probably been working on earlier in Leipzig. The title of his thesis was "Disputatio Inauguralis de Casibus Perplexis in Jure" ("Inaugural Disputation on Ambiguous Legal Cases"). Leibniz earned his license to practice law and his Doctorate in Law in November 1666. He next declined the offer of an academic appointment at Altdorf, saying that "my thoughts were turned in an entirely different direction". As an adult, Leibniz often introduced himself as "Gottfried von Leibniz". Many posthumously published editions of his writings presented his name on the title page as "Freiherr G. W. von Leibniz." However, no document has ever been found from any contemporary government that stated his appointment to any form of nobility. 1666–1676. Leibniz's first position was as a salaried secretary to an alchemical society in Nuremberg. He knew fairly little about the subject at that time but presented himself as deeply learned. 
He soon met Johann Christian von Boyneburg (1622–1672), the dismissed chief minister of the Elector of Mainz, Johann Philipp von Schönborn. Von Boyneburg hired Leibniz as an assistant, and shortly thereafter reconciled with the Elector and introduced Leibniz to him. Leibniz then dedicated an essay on law to the Elector in the hope of obtaining employment. The stratagem worked; the Elector asked Leibniz to assist with the redrafting of the legal code for the Electorate. In 1669, Leibniz was appointed assessor in the Court of Appeal. Although von Boyneburg died late in 1672, Leibniz remained under the employment of his widow until she dismissed him in 1674. Von Boyneburg did much to promote Leibniz's reputation, and the latter's memoranda and letters began to attract favorable notice. After Leibniz's service to the Elector there soon followed a diplomatic role. He published an essay, under the pseudonym of a fictitious Polish nobleman, arguing (unsuccessfully) for the German candidate for the Polish crown. The main force in European geopolitics during Leibniz's adult life was the ambition of Louis XIV of France, backed by French military and economic might. Meanwhile, the Thirty Years' War had left German-speaking Europe exhausted, fragmented, and economically backward. Leibniz proposed to protect German-speaking Europe by distracting Louis as follows: France would be invited to take Egypt as a stepping stone towards an eventual conquest of the Dutch East Indies. In return, France would agree to leave Germany and the Netherlands undisturbed. This plan obtained the Elector's cautious support. In 1672, the French government invited Leibniz to Paris for discussion, but the plan was soon overtaken by the outbreak of the Franco-Dutch War and became irrelevant. Napoleon's failed invasion of Egypt in 1798 can be seen as an unwitting, late implementation of Leibniz's plan, after the Eastern hemisphere colonial supremacy in Europe had already passed from the Dutch to the British. Thus Leibniz went to Paris in 1672. Soon after arriving, he met Dutch physicist and mathematician Christiaan Huygens and realised that his own knowledge of mathematics and physics was patchy. With Huygens as his mentor, he began a program of self-study that soon pushed him to making major contributions to both subjects, including discovering his version of the differential and integral calculus. He met Nicolas Malebranche and Antoine Arnauld, the leading French philosophers of the day, and studied the writings of Descartes and Pascal, unpublished as well as published. He befriended a German mathematician, Ehrenfried Walther von Tschirnhaus; they corresponded for the rest of their lives. When it became clear that France would not implement its part of Leibniz's Egyptian plan, the Elector sent his nephew, escorted by Leibniz, on a related mission to the English government in London, early in 1673. There Leibniz came into acquaintance of Henry Oldenburg and John Collins. He met with the Royal Society where he demonstrated a calculating machine that he had designed and had been building since 1670. The machine was able to execute all four basic operations (adding, subtracting, multiplying, and dividing), and the society quickly made him an external member. The mission ended abruptly when news of the Elector's death (12 February 1673) reached them. Leibniz promptly returned to Paris and not, as had been planned, to Mainz. 
The sudden deaths of his two patrons in the same winter meant that Leibniz had to find a new basis for his career. In this regard, a 1669 invitation from Duke John Frederick of Brunswick to visit Hanover proved to have been fateful. Leibniz had declined the invitation, but had begun corresponding with the duke in 1671. In 1673, the duke offered Leibniz the post of counsellor. Leibniz very reluctantly accepted the position two years later, only after it became clear that no employment was forthcoming in Paris, whose intellectual stimulation he relished, or with the Habsburg imperial court. In 1675 he tried to get admitted to the French Academy of Sciences as a foreign honorary member, but it was considered that there were already enough foreigners there and so no invitation came. He left Paris in October 1676. House of Hanover, 1676–1716. Leibniz managed to delay his arrival in Hanover until the end of 1676 after making one more short journey to London, where Newton accused him of having seen his unpublished work on calculus in advance. This was alleged to be evidence supporting the accusation, made decades later, that he had stolen calculus from Newton. On the journey from London to Hanover, Leibniz stopped in The Hague where he met van Leeuwenhoek, the discoverer of microorganisms. He also spent several days in intense discussion with Spinoza, who had just completed, but had not published, his masterwork, the "Ethics". Spinoza died very shortly after Leibniz's visit. In 1677, he was promoted, at his request, to Privy Counselor of Justice, a post he held for the rest of his life. Leibniz served three consecutive rulers of the House of Brunswick as historian, political adviser, and most consequentially, as librarian of the ducal library. He thenceforth employed his pen on all the various political, historical, and theological matters involving the House of Brunswick; the resulting documents form a valuable part of the historical record for the period. Leibniz began promoting a project to use windmills to improve the mining operations in the Harz Mountains. This project did little to improve mining operations and was shut down by Duke Ernst August in 1685. Among the few people in north Germany to accept Leibniz were the Electress Sophia of Hanover (1630–1714), her daughter Sophia Charlotte of Hanover (1668–1705), the Queen of Prussia and his avowed disciple, and Caroline of Ansbach, the consort of her grandson, the future George II. To each of these women he was correspondent, adviser, and friend. In turn, they all approved of Leibniz more than did their spouses and the future king George I of Great Britain. The population of Hanover was only about 10,000, and its provinciality eventually grated on Leibniz. Nevertheless, to be a major courtier to the House of Brunswick was quite an honor, especially in light of the meteoric rise in the prestige of that House during Leibniz's association with it. In 1692, the Duke of Brunswick became a hereditary Elector of the Holy Roman Empire. The British Act of Settlement 1701 designated the Electress Sophia and her descent as the royal family of England, once both King William III and his sister-in-law and successor, Queen Anne, were dead. Leibniz played a role in the initiatives and negotiations leading up to that Act, but not always an effective one. For example, something he published anonymously in England, thinking to promote the Brunswick cause, was formally censured by the British Parliament. 
The Brunswicks tolerated the enormous effort Leibniz devoted to intellectual pursuits unrelated to his duties as a courtier, pursuits such as perfecting calculus, writing about other mathematics, logic, physics, and philosophy, and keeping up a vast correspondence. He began working on calculus in 1674; the earliest evidence of its use in his surviving notebooks is 1675. By 1677 he had a coherent system in hand, but did not publish it until 1684. Leibniz's most important mathematical papers were published between 1682 and 1692, usually in a journal which he and Otto Mencke founded in 1682, the "Acta Eruditorum". That journal played a key role in advancing his mathematical and scientific reputation, which in turn enhanced his eminence in diplomacy, history, theology, and philosophy. The Elector Ernest Augustus commissioned Leibniz to write a history of the House of Brunswick, going back to the time of Charlemagne or earlier, hoping that the resulting book would advance his dynastic ambitions. From 1687 to 1690, Leibniz traveled extensively in Germany, Austria, and Italy, seeking and finding archival materials bearing on this project. Decades went by but no history appeared; the next Elector became quite annoyed at Leibniz's apparent dilatoriness. Leibniz never finished the project, in part because of his huge output on many other fronts, but also because he insisted on writing a meticulously researched and erudite book based on archival sources, when his patrons would have been quite happy with a short popular book, one perhaps little more than a genealogy with commentary, to be completed in three years or less. They never knew that he had in fact carried out a fair part of his assigned task: when the material Leibniz had written and collected for his history of the House of Brunswick was finally published in the 19th century, it filled three volumes. Leibniz was appointed Librarian of the Herzog August Library in Wolfenbüttel, Lower Saxony, in 1691. In 1708, John Keill, writing in the journal of the Royal Society and with Newton's presumed blessing, accused Leibniz of having plagiarised Newton's calculus. Thus began the calculus priority dispute which darkened the remainder of Leibniz's life. A formal investigation by the Royal Society (in which Newton was an unacknowledged participant), undertaken in response to Leibniz's demand for a retraction, upheld Keill's charge. Historians of mathematics writing since 1900 or so have tended to acquit Leibniz, pointing to important differences between Leibniz's and Newton's versions of calculus. In 1712, Leibniz began a two-year residence in Vienna, where he was appointed Imperial Court Councillor to the Habsburgs. On the death of Queen Anne in 1714, Elector George Louis became King George I of Great Britain, under the terms of the 1701 Act of Settlement. Even though Leibniz had done much to bring about this happy event, it was not to be his hour of glory. Despite the intercession of the Princess of Wales, Caroline of Ansbach, George I forbade Leibniz to join him in London until he completed at least one volume of the history of the Brunswick family his father had commissioned nearly 30 years earlier. Moreover, for George I to include Leibniz in his London court would have been deemed insulting to Newton, who was seen as having won the calculus priority dispute and whose standing in British official circles could not have been higher. Finally, his dear friend and defender, the Dowager Electress Sophia, died in 1714. 
In 1716, while traveling in northern Europe, the Russian Tsar Peter the Great stopped in Bad Pyrmont and met Leibniz, who had taken an interest in Russian matters since 1708 and had been appointed advisor in 1711. Death. Leibniz died in Hanover in 1716. At the time, he was so out of favor that neither George I (who happened to be near Hanover at that time) nor any fellow courtier other than his personal secretary attended the funeral. Even though Leibniz was a life member of the Royal Society and the Berlin Academy of Sciences, neither organization saw fit to honor his death. His grave went unmarked for more than 50 years. He was, however, eulogized by Fontenelle, before the French Academy of Sciences in Paris, which had admitted him as a foreign member in 1700. The eulogy was composed at the behest of the Duchess of Orleans, a niece of the Electress Sophia. Personal life. Leibniz never married. He proposed to an unknown woman at age 50, but changed his mind when she took too long to decide. He complained on occasion about money, but the fair sum he left to his sole heir, his sister's stepson, proved that the Brunswicks had paid him fairly well. In his diplomatic endeavors, he at times verged on the unscrupulous, as was often the case with professional diplomats of his day. On several occasions, Leibniz backdated and altered personal manuscripts, actions which put him in a bad light during the calculus controversy. He was charming, well-mannered, and not without humor and imagination. He had many friends and admirers all over Europe. He was identified as a Protestant and a philosophical theist. Leibniz remained committed to Trinitarian Christianity throughout his life. Philosophy. Leibniz's philosophical thinking appears fragmented because his philosophical writings consist mainly of a multitude of short pieces: journal articles, manuscripts published long after his death, and letters to correspondents. He wrote two book-length philosophical treatises, of which only the "Théodicée" of 1710 was published in his lifetime. Leibniz dated his beginning as a philosopher to his "Discourse on Metaphysics", which he composed in 1686 as a commentary on a running dispute between Nicolas Malebranche and Antoine Arnauld. This led to an extensive correspondence with Arnauld; it and the "Discourse" were not published until the 19th century. In 1695, Leibniz made his public entrée into European philosophy with a journal article titled "New System of the Nature and Communication of Substances". Between 1695 and 1705, he composed his "New Essays on Human Understanding", a lengthy commentary on John Locke's 1690 "An Essay Concerning Human Understanding", but upon learning of Locke's 1704 death, lost the desire to publish it, so that the "New Essays" were not published until 1765. The "Monadologie", composed in 1714 and published posthumously, consists of 90 aphorisms. Leibniz also wrote a short paper, "Primae veritates" ("First Truths"), first published by Louis Couturat in 1903 (pp. 518–523) summarizing his views on metaphysics. The paper is undated; that he wrote it while in Vienna in 1689 was determined only in 1999, when the ongoing critical edition finally published Leibniz's philosophical writings for the period 1677–1690. Couturat's reading of this paper influenced much 20th-century thinking about Leibniz, especially among analytic philosophers. 
After a meticulous study (informed by the 1999 additions to the critical edition) of all of Leibniz's philosophical writings up to 1688, Mercer (2001) disagreed with Couturat's reading. Leibniz met Baruch Spinoza in 1676, read some of his unpublished writings, and had since been influenced by some of Spinoza's ideas. While Leibniz befriended him and admired Spinoza's powerful intellect, he was also dismayed by Spinoza's conclusions, especially when these were inconsistent with Christian orthodoxy. Unlike Descartes and Spinoza, Leibniz had a university education in philosophy. He was influenced by his Leipzig professor Jakob Thomasius, who also supervised his BA thesis in philosophy. Leibniz also read Francisco Suárez, a Spanish Jesuit respected even in Lutheran universities. Leibniz was deeply interested in the new methods and conclusions of Descartes, Huygens, Newton, and Boyle, but the established philosophical ideas in which he was educated influenced his view of their work. Principles. Leibniz variously invoked one or another of seven fundamental philosophical Principles: Leibniz would on occasion give a rational defense of a specific principle, but more often took them for granted. Monads. Leibniz's best known contribution to metaphysics is his theory of monads, as exposited in "Monadologie". He proposes his theory that the universe is made of an infinite number of simple substances known as monads. Monads can also be compared to the corpuscles of the mechanical philosophy of René Descartes and others. These simple substances or monads are the "ultimate units of existence in nature". Monads have no parts but still exist by the qualities that they have. These qualities are continuously changing over time, and each monad is unique. They are also not affected by time and are subject to only creation and annihilation. Monads are centers of force; substance is force, while space, matter, and motion are merely phenomenal. He argued, against Newton, that space, time, and motion are completely relative: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." Einstein, who called himself a "Leibnizian", wrote in the introduction to Max Jammer's book "Concepts of Space" that Leibnizianism was superior to Newtonianism, and his ideas would have dominated over Newton's had it not been for the poor technological tools of the time; Joseph Agassi argues that Leibniz paved the way for Einstein's theory of relativity. Leibniz's proof of God can be summarized in the "Théodicée". Reason is governed by the principle of contradiction and the principle of sufficient reason. Using the principle of reasoning, Leibniz concluded that the first reason of all things is God. All that we see and experience is subject to change, and the fact that this world is contingent can be explained by the possibility of the world being arranged differently in space and time. The contingent world must have some necessary reason for its existence. Leibniz uses a geometry book as an example to explain his reasoning. If this book was copied from an infinite chain of copies, there must be some reason for the content of the book. Leibniz concluded that there must be the "monas monadum" or God. The ontological essence of a monad is its irreducible simplicity. Unlike atoms, monads possess no material or spatial character. 
They also differ from atoms by their complete mutual independence, so that interactions among monads are only apparent. Instead, by virtue of the principle of pre-established harmony, each monad follows a pre-programmed set of "instructions" peculiar to itself, so that a monad "knows" what to do at each moment. By virtue of these intrinsic instructions, each monad is like a little mirror of the universe. Monads need not be "small"; e.g., each human being constitutes a monad, in which case free will is problematic. Monads are purported to have gotten rid of the problematic: Theodicy and optimism. The "Theodicy" tries to justify the apparent imperfections of the world by claiming that it is optimal among all possible worlds. It must be the best possible and most balanced world, because it was created by an all powerful and all knowing God, who would not choose to create an imperfect world if a better world could be known to him or possible to exist. In effect, apparent flaws that can be identified in this world must exist in every possible world, because otherwise God would have chosen to create the world that excluded those flaws. Leibniz asserted that the truths of theology (religion) and philosophy cannot contradict each other, since reason and faith are both "gifts of God" so that their conflict would imply God contending against himself. The "Theodicy" is Leibniz's attempt to reconcile his personal philosophical system with his interpretation of the tenets of Christianity. This project was motivated in part by Leibniz's belief, shared by many philosophers and theologians during the Enlightenment, in the rational and enlightened nature of the Christian religion. It was also shaped by Leibniz's belief in the perfectibility of human nature (if humanity relied on correct philosophy and religion as a guide), and by his belief that metaphysical necessity must have a rational or logical foundation, even if this metaphysical causality seemed inexplicable in terms of physical necessity (the natural laws identified by science). In the view of Leibniz, because reason and faith must be entirely reconciled, any tenet of faith which could not be defended by reason must be rejected. Leibniz then approached one of the central criticisms of Christian theism: if God is all good, all wise, and all powerful, then how did evil come into the world? The answer (according to Leibniz) is that, while God is indeed unlimited in wisdom and power, his human creations, as creations, are limited both in their wisdom and in their will (power to act). This predisposes humans to false beliefs, wrong decisions, and ineffective actions in the exercise of their free will. God does not arbitrarily inflict pain and suffering on humans; rather he permits both "moral evil" (sin) and "physical evil" (pain and suffering) as the necessary consequences of "metaphysical evil" (imperfection), as a means by which humans can identify and correct their erroneous decisions, and as a contrast to true good. Further, although human actions flow from prior causes that ultimately arise in God and therefore are known to God as metaphysical certainties, an individual's free will is exercised within natural laws, where choices are merely contingently necessary and to be decided in the event by a "wonderful spontaneity" that provides individuals with an escape from rigorous predestination. "Discourse on Metaphysics". For Leibniz, "God is an absolutely perfect being". 
He describes this perfection later in section VI as the simplest form of something with the most substantial outcome (VI). Along these lines, he declares that every type of perfection "pertains to him (God) in the highest degree" (I). Even though his types of perfections are not specifically drawn out, Leibniz highlights the one thing that, to him, does certify imperfections and proves that God is perfect: "that one acts imperfectly if he acts with less perfection than he is capable of", and since God is a perfect being, he cannot act imperfectly (III). Because God cannot act imperfectly, the decisions he makes pertaining to the world must be perfect. Leibniz also comforts readers, stating that because he has done everything to the most perfect degree; those who love him cannot be injured. However, to love God is a subject of difficulty as Leibniz believes that we are "not disposed to wish for that which God desires" because we have the ability to alter our disposition (IV). In accordance with this, many act as rebels, but Leibniz says that the only way we can truly love God is by being content "with all that comes to us according to his will" (IV). Because God is "an absolutely perfect being" (I), Leibniz argues that God would be acting imperfectly if he acted with any less perfection than what he is able of (III). His syllogism then ends with the statement that God has made the world perfectly in all ways. This also affects how we should view God and his will. Leibniz states that, in lieu of God's will, we have to understand that God "is the best of all masters" and he will know when his good succeeds, so we, therefore, must act in conformity to his good will—or as much of it as we understand (IV). In our view of God, Leibniz declares that we cannot admire the work solely because of the maker, lest we mar the glory and love God in doing so. Instead, we must admire the maker for the work he has done (II). Effectively, Leibniz states that if we say the earth is good because of the will of God, and not good according to some standards of goodness, then how can we praise God for what he has done if contrary actions are also praiseworthy by this definition (II). Leibniz then asserts that different principles and geometry cannot simply be from the will of God, but must follow from his understanding. Leibniz wrote: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself." Martin Heidegger called this question "the fundamental question of metaphysics". Symbolic thought and rational resolution of disputes. Leibniz believed that much of human reasoning could be reduced to calculations of a sort, and that such calculations could resolve many differences of opinion: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right. Leibniz's calculus ratiocinator, which resembles symbolic logic, can be viewed as a way of making such calculations feasible. Leibniz wrote memoranda that can now be read as groping attempts to get symbolic logic—and thus his "calculus"—off the ground. These writings remained unpublished until the appearance of a selection edited by Carl Immanuel Gerhardt (1859). 
Louis Couturat published a selection in 1901; by this time the main developments of modern logic had been created by Charles Sanders Peirce and by Gottlob Frege. Leibniz thought symbols were important for human understanding. He attached so much importance to the development of good notations that he attributed all his discoveries in mathematics to this. His notation for calculus is an example of his skill in this regard. Leibniz's passion for symbols and notation, as well as his belief that these are essential to a well-running logic and mathematics, made him a precursor of semiotics. But Leibniz took his speculations much further. Defining a character as any written sign, he then defined a "real" character as one that represents an idea directly and not simply as the word embodying the idea. Some real characters, such as the notation of logic, serve only to facilitate reasoning. Many characters well known in his day, including Egyptian hieroglyphics, Chinese characters, and the symbols of astronomy and chemistry, he deemed not real. Instead, he proposed the creation of a "characteristica universalis" or "universal characteristic", built on an alphabet of human thought in which each fundamental concept would be represented by a unique "real" character: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is obvious that if we could find characters or signs suited for expressing all our thoughts as clearly and as exactly as arithmetic expresses numbers or geometry expresses lines, we could do in all matters "insofar as they are subject to reasoning" all that we can do in arithmetic and geometry. For all investigations which depend on reasoning would be carried out by transposing these characters and by a species of calculus. Complex thoughts would be represented by combining characters for simpler thoughts. Leibniz saw that the uniqueness of prime factorization suggests a central role for prime numbers in the universal characteristic, a striking anticipation of Gödel numbering. Granted, there is no intuitive or mnemonic way to number any set of elementary concepts using the prime numbers. Because Leibniz was a mathematical novice when he first wrote about the "characteristic", at first he did not conceive it as an algebra but rather as a universal language or script. Only in 1676 did he conceive of a kind of "algebra of thought", modeled on and including conventional algebra and its notation. The resulting "characteristic" included a logical calculus, some combinatorics, algebra, his "analysis situs" (geometry of situation), a universal concept language, and more. What Leibniz actually intended by his "characteristica universalis" and calculus ratiocinator, and the extent to which modern formal logic does justice to calculus, may never be established. Leibniz's idea of reasoning through a universal language of symbols and calculations remarkably foreshadows great 20th-century developments in formal systems, such as Turing completeness, where computation was used to define equivalent universal languages (see Turing degree). Formal logic. Leibniz has been noted as one of the most important logicians between the times of Aristotle and Gottlob Frege. Leibniz enunciated the principal properties of what we now call conjunction, disjunction, negation, identity, set inclusion, and the empty set. 
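As a modern illustration of these operations (an editorial sketch in present-day notation, not Leibniz's own symbolism), the following short Python fragment checks a few elementary laws of conjunction, disjunction, and negation by running through every assignment of truth values:

```python
# Illustrative check of some elementary laws of conjunction, disjunction
# and negation, verified over all Boolean assignments of two variables.
from itertools import product

def holds(law):
    return all(law(p, q) for p, q in product([False, True], repeat=2))

print(holds(lambda p, q: (p and q) == (q and p)))                    # conjunction commutes
print(holds(lambda p, q: (p or q) == (q or p)))                      # disjunction commutes
print(holds(lambda p, q: (not (p and q)) == ((not p) or (not q))))   # duality of "and"/"or"
print(holds(lambda p, q: (not (not p)) == p))                        # double negation
```

Each line prints True; the same brute-force style of verification extends to any identity over a finite set of truth values.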
The principles of Leibniz's logic and, arguably, of his whole philosophy, reduce to two: The formal logic that emerged early in the 20th century also requires, at minimum, unary negation and quantified variables ranging over some universe of discourse. Leibniz published nothing on formal logic in his lifetime; most of what he wrote on the subject consists of working drafts. In his "History of Western Philosophy", Bertrand Russell went so far as to claim that Leibniz had developed logic in his unpublished writings to a level which was reached only 200 years later. Russell's principal work on Leibniz found that many of Leibniz's most startling philosophical ideas and claims (e.g., that each of the fundamental monads mirrors the whole universe) follow logically from Leibniz's conscious choice to reject "relations" between things as unreal. He regarded such relations as (real) "qualities" of things (Leibniz admitted unary predicates only): For him, "Mary is the mother of John" describes separate qualities of Mary and of John. This view contrasts with the relational logic of De Morgan, Peirce, Schröder and Russell himself, now standard in predicate logic. Notably, Leibniz also declared space and time to be inherently relational. Leibniz's 1690 discovery of his algebra of concepts (deductively equivalent to the Boolean algebra) and the associated metaphysics, are of interest in present-day computational metaphysics. Mathematics. Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular (see History of the function concept). In the 18th century, "function" lost these geometrical associations. Leibniz was also one of the pioneers in actuarial science, calculating the purchase price of life annuities and the liquidation of a state's debt. Leibniz's research into formal logic, also relevant to mathematics, is discussed in the preceding section. The best overview of Leibniz's writings on calculus may be found in Bos (1974). Leibniz, who invented one of the earliest mechanical calculators, said of calculation: "For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used." Linear systems. Leibniz arranged the coefficients of a system of linear equations into an array, now called a matrix, in order to find a solution to the system if it existed. This method was later called Gaussian elimination. Leibniz laid down the foundations and theory of determinants, although the Japanese mathematician Seki Takakazu also discovered determinants independently of Leibniz. His works include calculations of determinants using cofactors; the expression of the determinant as a signed sum over all permutations of the entries is now named the Leibniz formula. Finding the determinant of a matrix by this method proves impractical for large "n", since it requires the calculation of "n"! products, one for each permutation of "n" elements. He also solved systems of linear equations using determinants, a method now called Cramer's rule. This method for solving systems of linear equations based on determinants was found in 1684 by Leibniz (Cramer published his findings in 1750). Although Gaussian elimination requires formula_0 arithmetic operations, linear algebra textbooks still teach cofactor expansion before LU factorization.
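To see concretely why the permutation expansion scales so badly, here is a small illustrative Python sketch (an editorial addition; the function names are ours, not Leibniz's notation). It evaluates the Leibniz formula directly, summing one signed product per permutation, which is practical only for very small matrices:

```python
# Illustrative sketch: determinant via the Leibniz (permutation) expansion.
# It sums n! signed products, so it is usable only for very small n.
from itertools import permutations

def sign(perm):
    # Parity of a permutation, obtained by counting inversions.
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i, j in enumerate(perm):
            prod *= a[i][j]
        total += sign(perm) * prod
    return total

print(det_leibniz([[1, 2], [3, 4]]))                   # -2
print(det_leibniz([[2, 0, 1], [1, 3, 2], [0, 1, 1]]))  # 3
```

Gaussian elimination reaches the same values in roughly "n"3 operations, which is why it, rather than the expansion above, is what is used in practice. Geometry.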
The Leibniz formula for π states that formula_1 Leibniz wrote that circles "can most simply be expressed by this series, that is, the aggregate of fractions alternately added and subtracted". However, this formula converges very slowly: it takes on the order of 10,000,000 terms to obtain the value of π/4 correct to 8 decimal places. Leibniz attempted to create a definition for a straight line while trying to prove the parallel postulate. While most mathematicians defined a straight line as the shortest line between two points, Leibniz believed that this was merely a property of a straight line rather than the definition. Calculus. Leibniz is credited, along with Isaac Newton, with the discovery of calculus (differential and integral calculus). According to Leibniz's notebooks, a critical breakthrough occurred on 11 November 1675, when he employed integral calculus for the first time to find the area under the graph of a function y = f(x). He introduced several notations used to this day, for instance the integral sign ∫ (formula_2), representing an elongated S, from the Latin word "summa", and the d used for differentials (formula_3), from the Latin word "differentia". Leibniz did not publish anything about his calculus until 1684. Leibniz expressed the inverse relation of integration and differentiation, later called the fundamental theorem of calculus, by means of a figure in his 1693 paper "Supplementum geometriae dimensoriae...". However, James Gregory is credited with the theorem's discovery in geometric form, Isaac Barrow proved a more generalized geometric version, and Newton developed supporting theory. The concept became more transparent as developed through Leibniz's formalism and new notation. The product rule of differential calculus is still called "Leibniz's law". In addition, the theorem that tells how and when to differentiate under the integral sign is called the Leibniz integral rule. Leibniz exploited infinitesimals in developing calculus, manipulating them in ways suggesting that they had paradoxical algebraic properties. George Berkeley, in a tract called "The Analyst" and also in "De Motu", criticized these. A recent study argues that Leibnizian calculus was free of contradictions, and was better grounded than Berkeley's empiricist criticisms. From 1711 until his death, Leibniz was engaged in a dispute with John Keill, Newton and others, over whether Leibniz had invented calculus independently of Newton. The use of infinitesimals in mathematics was frowned upon by followers of Karl Weierstrass, but survived in science and engineering, and even in rigorous mathematics, via the fundamental computational device known as the differential. Beginning in 1960, Abraham Robinson worked out a rigorous foundation for Leibniz's infinitesimals, using model theory, in the context of a field of hyperreal numbers. The resulting non-standard analysis can be seen as a belated vindication of Leibniz's mathematical reasoning. Robinson's transfer principle is a mathematical implementation of Leibniz's heuristic law of continuity, while the standard part function implements the Leibnizian transcendental law of homogeneity. Topology. Leibniz was the first to use the term "analysis situs", later used in the 19th century to refer to what is now known as topology. There are two takes on this situation.
On the one hand, Mates, citing a 1954 paper in German by Jacob Freudenthal, argues: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Although for Leibniz the situs of a sequence of points is completely determined by the distance between them and is altered if those distances are altered, his admirer Euler, in the famous 1736 paper solving the Königsberg Bridge Problem and its generalizations, used the term "geometria situs" in such a sense that the situs remains unchanged under topological deformations. He mistakenly credits Leibniz with originating this concept. ... [It] is sometimes not realized that Leibniz used the term in an entirely different sense and hence can hardly be considered the founder of that part of mathematics. But Hideaki Hirano argues differently, quoting Mandelbrot: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;To sample Leibniz' scientific works is a sobering experience. Next to calculus, and to other thoughts that have been carried out to completion, the number and variety of premonitory thrusts is overwhelming. We saw examples in "packing", ... My Leibniz mania is further reinforced by finding that for one moment its hero attached importance to geometric scaling. In "Euclidis Prota" ..., which is an attempt to tighten Euclid's axioms, he states ...: "I have diverse definitions for the straight line. The straight line is a curve, any part of which is similar to the whole, and it alone has this property, not only among curves but among sets." This claim can be proved today. Thus the fractal geometry promoted by Mandelbrot drew on Leibniz's notions of self-similarity and the principle of continuity: "Natura non facit saltus". We also see that when Leibniz wrote, in a metaphysical vein, that "the straight line is a curve, any part of which is similar to the whole", he was anticipating topology by more than two centuries. As for "packing", Leibniz told his friend and correspondent Des Bosses to imagine a circle, then to inscribe within it three congruent circles with maximum radius; the latter smaller circles could be filled with three even smaller circles by the same procedure. This process can be continued infinitely, from which arises a good idea of self-similarity. Leibniz's improvement of Euclid's axiom contains the same concept. Science and engineering. Leibniz's writings are currently discussed, not only for their anticipations and possible discoveries not yet recognized, but as ways of advancing present knowledge. Much of his writing on physics is included in Gerhardt's "Mathematical Writings". Physics. Leibniz contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his "Specimen Dynamicum" of 1695. Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense. For instance, he anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." 
Leibniz held a relationist notion of space and time, against Newton's substantivalist views. According to Newton's substantivalism, space and time are entities in their own right, existing independently of things. Leibniz's relationism, in contrast, describes space and time as systems of relations that exist between objects. The rise of general relativity and subsequent work in the history of physics has put Leibniz's stance in a more favorable light. One of Leibniz's projects was to recast Newton's theory as a vortex theory. However, his project went beyond vortex theory, since at its heart there was an attempt to explain one of the most difficult problems in physics, that of the origin of the cohesion of matter. The principle of sufficient reason has been invoked in recent cosmology, and his identity of indiscernibles in quantum mechanics, a field some even credit him with having anticipated in some sense. In addition to his theories about the nature of reality, Leibniz's contributions to the development of calculus have also had a major impact on physics. The "vis viva". Leibniz's "vis viva" (Latin for "living force") is "mv"2, twice the modern kinetic energy. He realized that the total energy would be conserved in certain mechanical systems, so he considered it an innate motive characteristic of matter. Here too his thinking gave rise to another regrettable nationalistic dispute. His "vis viva" was seen as rivaling the conservation of momentum championed by Newton in England and by Descartes and Voltaire in France; hence academics in those countries tended to neglect Leibniz's idea. Leibniz knew of the validity of conservation of momentum. In reality, both energy and momentum are conserved (in closed systems), so both approaches are valid. Other natural science. By proposing that the earth has a molten core, he anticipated modern geology. In embryology, he was a preformationist, but also proposed that organisms are the outcome of a combination of an infinite number of possible microstructures and of their powers. In the life sciences and paleontology, he revealed an amazing transformist intuition, fueled by his study of comparative anatomy and fossils. One of his principal works on this subject, "Protogaea", unpublished in his lifetime, has recently been published in English for the first time. He worked out a primal organismic theory. In medicine, he exhorted the physicians of his time—with some results—to ground their theories in detailed comparative observations and verified experiments, and to distinguish firmly scientific and metaphysical points of view. Psychology. Psychology had been a central interest of Leibniz. He appears to be an "underappreciated pioneer of psychology". He wrote on topics which are now regarded as fields of psychology: attention and consciousness, memory, learning (association), motivation (the act of "striving"), emergent individuality, the general dynamics of development (evolutionary psychology). His discussions in the "New Essays" and "Monadology" often rely on everyday observations such as the behaviour of a dog or the noise of the sea, and he develops intuitive analogies (the synchronous running of clocks or the balance spring of a clock).
He also devised postulates and principles that apply to psychology: the continuum of the unnoticed "petites perceptions" to the distinct, self-aware apperception, and psychophysical parallelism from the point of view of causality and of purpose: "Souls act according to the laws of final causes, through aspirations, ends and means. Bodies act according to the laws of efficient causes, i.e. the laws of motion. And these two realms, that of efficient causes and that of final causes, harmonize with one another." This idea refers to the mind-body problem, stating that the mind and brain do not act upon each other, but act alongside each other separately but in harmony. Leibniz, however, did not use the term "psychologia". Leibniz's epistemological position—against John Locke and English empiricism (sensualism)—was made clear: "Nihil est in intellectu quod non fuerit in sensu, nisi ipse intellectus." – "Nothing is in the intellect that was not first in the senses, except the intellect itself." Principles that are not present in sensory impressions can be recognised in human perception and consciousness: logical inferences, categories of thought, the principle of causality and the principle of purpose (teleology). Leibniz found his most important interpreter in Wilhelm Wundt, founder of psychology as a discipline. Wundt used the "… nisi ipse intellectus" quotation in 1862 on the title page of his "Beiträge zur Theorie der Sinneswahrnehmung" (Contributions on the Theory of Sensory Perception) and published a detailed and ambitious monograph on Leibniz. Wundt shaped the term apperception, introduced by Leibniz, into an experimental psychologically based apperception psychology that included neuropsychological modelling – an excellent example of how a concept created by a great philosopher could stimulate a psychological research program. One principle in the thinking of Leibniz played a fundamental role: "the principle of equality of separate but corresponding viewpoints." Wundt characterized this style of thought (perspectivism) in a way that also applied for him—viewpoints that "supplement one another, while also being able to appear as opposites that only resolve themselves when considered more deeply." Much of Leibniz's work went on to have a great impact on the field of psychology. Leibniz thought that there are many petites perceptions, or small perceptions that we register but of which we are unaware. He believed that, by the principle that phenomena found in nature are continuous by default, it was likely that the transition between conscious and unconscious states had intermediary steps. For this to be true, there must also be a portion of the mind of which we are unaware at any given time. His theory regarding consciousness in relation to the principle of continuity can be seen as an early theory regarding the stages of sleep. In this way, Leibniz's theory of perception can be viewed as one of many theories leading up to the idea of the unconscious. Leibniz was a direct influence on Ernst Platner, who is credited with originally coining the term Unbewußtseyn (unconscious). Additionally, the idea of subliminal stimuli can be traced back to his theory of small perceptions. Leibniz's ideas regarding music and tonal perception went on to influence the laboratory studies of Wilhelm Wundt. Social science. In public health, he advocated establishing a medical administrative authority, with powers over epidemiology and veterinary medicine.
He worked to set up a coherent medical training program, oriented towards public health and preventive measures. In economic policy, he proposed tax reforms and a national insurance program, and discussed the balance of trade. He even proposed something akin to what much later emerged as game theory. In sociology he laid the ground for communication theory. Technology. In 1906, Garland published a volume of Leibniz's writings bearing on his many practical inventions and engineering work. To date, few of these writings have been translated into English. Nevertheless, it is well understood that Leibniz was a serious inventor, engineer, and applied scientist, with great respect for practical life. Following the motto "theoria cum praxi", he urged that theory be combined with practical application, and thus has been claimed as the father of applied science. He designed wind-driven propellers and water pumps, mining machines to extract ore, hydraulic presses, lamps, submarines, clocks, etc. With Denis Papin, he created a steam engine. He even proposed a method for desalinating water. From 1680 to 1685, he struggled to overcome the chronic flooding that afflicted the ducal silver mines in the Harz Mountains, but did not succeed. Computation. Leibniz may have been the first computer scientist and information theorist. Early in life, he documented the binary numeral system (base 2), then revisited that system throughout his career. While Leibniz was examining other cultures to compare his metaphysical views, he encountered the ancient Chinese book "I Ching". Leibniz interpreted a diagram which showed yin and yang and read them as corresponding to zero and one. More information can be found in the Sinophology section. Leibniz's binary system had similarities with the work of Juan Caramuel y Lobkowitz and Thomas Harriot, who had developed binary notation independently; Leibniz was familiar with their works on the binary system. Juan Caramuel y Lobkowitz worked extensively on logarithms, including logarithms with base 2. Thomas Harriot's manuscripts contained a table of binary numbers and their notation, which demonstrated that any number could be written in a base 2 system. Regardless, Leibniz simplified the binary system and articulated logical properties such as conjunction, disjunction, negation, identity, inclusion, and the empty set. He anticipated Lagrangian interpolation and algorithmic information theory. His calculus ratiocinator anticipated aspects of the universal Turing machine. In 1961, Norbert Wiener suggested that Leibniz should be considered the patron saint of cybernetics. Wiener is quoted as saying: "Indeed, the general idea of a computing machine is nothing but a mechanization of Leibniz's Calculus Ratiocinator." In 1671, Leibniz began to invent a machine that could execute all four arithmetic operations, gradually improving it over a number of years. This "stepped reckoner" attracted fair attention and was the basis of his election to the Royal Society in 1673. A number of such machines were made during his years in Hanover by a craftsman working under his supervision. They were not an unambiguous success because they did not fully mechanize the carry operation. Couturat reported finding an unpublished note by Leibniz, dated 1674, describing a machine capable of performing some algebraic operations. Leibniz also devised a (now reproduced) cipher machine, recovered by Nicholas Rescher in 2010. In 1693, Leibniz described a design of a machine which could, in theory, integrate differential equations, which he called "integraph".
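To make the binary arithmetic concrete, here is a small illustrative Python sketch (an editorial addition in modern notation, not Leibniz's own): it prints a few six-bit binary expansions of the kind that, as discussed in the Sinophology section below, Leibniz matched with the 64 "I Ching" hexagrams, together with the binary form of an ordinary decimal number.

```python
# Illustrative sketch: six-bit binary strings 000000..111111, the pattern
# Leibniz associated with the 64 I Ching hexagrams, plus an ordinary
# decimal-to-binary conversion using Python's built-in formatting.
def six_bit(n):
    return format(n, "06b")          # e.g. 5 -> "000101"

for n in (0, 1, 2, 5, 63):
    print(n, "->", six_bit(n))

print(bin(1703))                     # 0b11010100111
```

The same positional idea, with 0 and 1 as the only digits, is what modern digital hardware implements.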
Leibniz was groping towards hardware and software concepts worked out much later by Charles Babbage and Ada Lovelace. In 1679, while mulling over his binary arithmetic, Leibniz imagined a machine in which binary numbers were represented by marbles, governed by a rudimentary sort of punched cards. Modern electronic digital computers replace Leibniz's marbles moving by gravity with shift registers, voltage gradients, and pulses of electrons, but otherwise they run roughly as Leibniz envisioned in 1679. Librarian. Later in Leibniz's career (after the death of von Boyneburg), Leibniz moved to Paris and accepted a position as a librarian in the Hanoverian court of Johann Friedrich, Duke of Brunswick-Luneburg. Leibniz's predecessor, Tobias Fleischer, had already created a cataloging system for the Duke's library but it was a clumsy attempt. At this library, Leibniz focused more on advancing the library than on the cataloging. For instance, within a month of taking the new position, he developed a comprehensive plan to expand the library. He was one of the first to consider developing a core collection for a library and felt "that a library for display and ostentation is a luxury and indeed superfluous, but a well-stocked and organized library is important and useful for all areas of human endeavor and is to be regarded on the same level as schools and churches". Leibniz lacked the funds to develop the library in this manner. After working at this library, by the end of 1690 Leibniz was appointed as privy-councilor and librarian of the Bibliotheca Augusta at Wolfenbüttel. It was an extensive library with at least 25,946 printed volumes. At this library, Leibniz sought to improve the catalog. He was not allowed to make complete changes to the existing closed catalog, but was allowed to improve upon it so he started on that task immediately. He created an alphabetical author catalog and had also created other cataloging methods that were not implemented. While serving as librarian of the ducal libraries in Hanover and Wolfenbüttel, Leibniz effectively became one of the founders of library science. Seemingly, Leibniz paid a good deal of attention to the classification of subject matter, favoring a well-balanced library covering a host of numerous subjects and interests. Leibniz, for example, proposed the following classification system in the Otivm Hanoveranvm Sive Miscellanea (1737): He also designed a book indexing system in ignorance of the only other such system then extant, that of the Bodleian Library at Oxford University. He also called on publishers to distribute abstracts of all new titles they produced each year, in a standard form that would facilitate indexing. He hoped that this abstracting project would eventually include everything printed from his day back to Gutenberg. Neither proposal met with success at the time, but something like them became standard practice among English language publishers during the 20th century, under the aegis of the Library of Congress and the British Library. He called for the creation of an empirical database as a way to further all sciences. His "characteristica universalis", calculus ratiocinator, and a "community of minds"—intended, among other things, to bring political and religious unity to Europe—can be seen as distant unwitting anticipations of artificial languages (e.g., Esperanto and its rivals), symbolic logic, even the World Wide Web. Advocate of scientific societies. Leibniz emphasized that research was a collaborative endeavor. 
Hence he warmly advocated the formation of national scientific societies along the lines of the British Royal Society and the French Académie Royale des Sciences. More specifically, in his correspondence and travels he urged the creation of such societies in Dresden, Saint Petersburg, Vienna, and Berlin. Only one such project came to fruition; in 1700, the Berlin Academy of Sciences was created. Leibniz drew up its first statutes, and served as its first President for the remainder of his life. That Academy evolved into the German Academy of Sciences, the publisher of the ongoing critical edition of his works. Law and Morality. Leibniz's writings on law, ethics, and politics were long overlooked by English-speaking scholars, but this has changed of late. While Leibniz was no apologist for absolute monarchy like Hobbes, or for tyranny in any form, neither did he echo the political and constitutional views of his contemporary John Locke, views invoked in support of liberalism, in 18th-century America and later elsewhere. The following excerpt from a 1695 letter to Baron J. C. Boyneburg's son Philipp is very revealing of Leibniz's political sentiments: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;As for ... the great question of the power of sovereigns and the obedience their peoples owe them, I usually say that it would be good for princes to be persuaded that their people have the right to resist them, and for the people, on the other hand, to be persuaded to obey them passively. I am, however, quite of the opinion of Grotius, that one ought to obey as a rule, the evil of revolution being greater beyond comparison than the evils causing it. Yet I recognize that a prince can go to such excess, and place the well-being of the state in such danger, that the obligation to endure ceases. This is most rare, however, and the theologian who authorizes violence under this pretext should take care against excess; excess being infinitely more dangerous than deficiency. In 1677, Leibniz called for a European confederation, governed by a council or senate, whose members would represent entire nations and would be free to vote their consciences; this is sometimes considered an anticipation of the European Union. He believed that Europe would adopt a uniform religion. He reiterated these proposals in 1715. But at the same time, he arrived to propose an interreligious and multicultural project to create a universal system of justice, which required from him a broad interdisciplinary perspective. In order to propose it, he combined linguistics (especially sinology), moral and legal philosophy, management, economics, and politics. Law. Leibniz trained as a legal academic, but under the tutelage of Cartesian-sympathiser Erhard Weigel we already see an attempt to solve legal problems by rationalist mathematical methods (Weigel's influence being most explicit in the Specimen Quaestionum Philosophicarum ex Jure collectarum (An Essay of Collected Philosophical Problems of Right)). For example, the Inaugural Disputation on Perplexing Cases uses early combinatorics to solve some legal disputes, while the 1666 Dissertation on the Combinatorial Art includes simple legal problems by way of illustration. The use of combinatorial methods to solve legal and moral problems seems, via Athanasius Kircher and Daniel Schwenter to be of Llullist inspiration: Ramón Llull attempted to solve ecumenical disputes through recourse to a combinatorial mode of reasoning he regarded as universal (a mathesis universalis). 
In the late 1660s the enlightened Prince-Bishop of Mainz Johann Philipp von Schönborn announced a review of the legal system and made available a position to support his current law commissioner. Leibniz left Franconia and made for Mainz before even winning the role. On reaching Frankfurt am Main Leibniz penned The New Method of Teaching and Learning the Law, by way of application. The text proposed a reform of legal education and is characteristically syncretic, integrating aspects of Thomism, Hobbesianism, Cartesianism and traditional jurisprudence. Leibniz's argument that the function of legal teaching was not to impress rules as one might train a dog, but to aid the student in discovering their own public reason, evidently impressed von Schönborn as he secured the job. Leibniz's next major attempt to find a universal rational core to law and so found a legal "science of right", came when Leibniz worked in Mainz from 1667–72. Starting initially from Hobbes' mechanistic doctrine of power, Leibniz reverted to logico-combinatorial methods in an attempt to define justice. As Leibniz's so-called Elementa Juris Naturalis advanced, he built in modal notions of right (possibility) and obligation (necessity) in which we see perhaps the earliest elaboration of his possible worlds doctrine within a deontic frame. While ultimately the Elementa remained unpublished, Leibniz continued to work on his drafts and promote their ideas to correspondents up until his death. Ecumenism. Leibniz devoted considerable intellectual and diplomatic effort to what would now be called an ecumenical endeavor, seeking to reconcile the Roman Catholic and Lutheran churches. In this respect, he followed the example of his early patrons, Baron von Boyneburg and the Duke John Frederick—both cradle Lutherans who converted to Catholicism as adults—who did what they could to encourage the reunion of the two faiths, and who warmly welcomed such endeavors by others. (The House of Brunswick remained Lutheran, because the Duke's children did not follow their father.) These efforts included corresponding with French bishop Jacques-Bénigne Bossuet, and involved Leibniz in some theological controversy. He evidently thought that the thoroughgoing application of reason would suffice to heal the breach caused by the Reformation. Philology. Leibniz the philologist was an avid student of languages, eagerly latching on to any information about vocabulary and grammar that came his way. In 1710, he applied ideas of gradualism and uniformitarianism to linguistics in a short essay. He refuted the belief, widely held by Christian scholars of the time, that Hebrew was the primeval language of the human race. At the same time, he rejected the idea of unrelated language groups and considered them all to have a common source. He also refuted the argument, advanced by Swedish scholars in his day, that a form of proto-Swedish was the ancestor of the Germanic languages. He puzzled over the origins of the Slavic languages and was fascinated by classical Chinese. Leibniz was also an expert in the Sanskrit language. He published the "princeps editio" (first modern edition) of the late medieval "Chronicon Holtzatiae", a Latin chronicle of the County of Holstein. Sinophology. Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. 
He apparently read "Confucius Sinarum Philosophus" in the first year of its publication. He came to the conclusion that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted how the "I Ching" hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China, hoping it would convert him. Leibniz was one of the western philosophers of the time who attempted to accommodate Confucian ideas to prevailing European beliefs. Leibniz's attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz's ideas of "simple substance" and "pre-established harmony" were directly influenced by Confucianism, pointing to the fact that they were conceived during the period when he was reading "Confucius Sinarum Philosophus". Polymath. While making his grand tour of European archives to research the Brunswick family history that he never completed, Leibniz stopped in Vienna between May 1688 and February 1689, where he did much legal and diplomatic work for the Brunswicks. He visited mines, talked with mine engineers, and tried to negotiate export contracts for lead from the ducal mines in the Harz mountains. His proposal that the streets of Vienna be lit with lamps burning rapeseed oil was implemented. During a formal audience with the Austrian Emperor and in subsequent memoranda, he advocated reorganizing the Austrian economy, reforming the coinage of much of central Europe, negotiating a Concordat between the Habsburgs and the Vatican, and creating an imperial research library, official archive, and public insurance fund. He wrote and published an important paper on mechanics. Posthumous reputation. When Leibniz died, his reputation was in decline. He was remembered for only one book, the "Théodicée", whose supposed central argument Voltaire lampooned in his popular book "Candide", which concludes with the character Candide saying, "Non liquet" (it is not clear), a term that was applied during the Roman Republic to a legal verdict of "not proven". Voltaire's depiction of Leibniz's ideas was so influential that many believed it to be an accurate description. Thus Voltaire and his "Candide" bear some of the blame for the lingering failure to appreciate and understand Leibniz's ideas. Leibniz had an ardent disciple, Christian Wolff, whose dogmatic and facile outlook did Leibniz's reputation much harm. Leibniz also influenced David Hume, who read his "Théodicée" and used some of his ideas. In any event, philosophical fashion was moving away from the rationalism and system building of the 17th century, of which Leibniz had been such an ardent proponent. His work on law, diplomacy, and history was seen as of ephemeral interest. The vastness and richness of his correspondence went unrecognized. Leibniz's reputation began to recover with the 1765 publication of the "Nouveaux Essais". In 1768, Louis Dutens edited the first multi-volume edition of Leibniz's writings, followed in the 19th century by a number of editions, including those edited by Erdmann, Foucher de Careil, Gerhardt, Gerland, Klopp, and Mollat. 
Publication of Leibniz's correspondence with notables such as Antoine Arnauld, Samuel Clarke, Sophia of Hanover, and her daughter Sophia Charlotte of Hanover, began. In 1900, Bertrand Russell published a critical study of Leibniz's metaphysics. Shortly thereafter, Louis Couturat published an important study of Leibniz, and edited a volume of Leibniz's heretofore unpublished writings, mainly on logic. They made Leibniz somewhat respectable among 20th-century analytical and linguistic philosophers in the English-speaking world (Leibniz had already been of great influence to many Germans such as Bernhard Riemann). For example, Leibniz's phrase "salva veritate", meaning interchangeability without loss of or compromising the truth, recurs in Willard Quine's writings. Nevertheless, the secondary literature on Leibniz did not really blossom until after World War II. This is especially true of English speaking countries; in Gregory Brown's bibliography fewer than 30 of the English language entries were published before 1946. American Leibniz studies owe much to Leroy Loemker (1904–1985) through his translations and his interpretive essays in LeClerc (1973). Leibniz's philosophy was also highly regarded by Gilles Deleuze, who in 1988 published , an important part of Deleuze's own corpus. Nicholas Jolley has surmised that Leibniz's reputation as a philosopher is now perhaps higher than at any time since he was alive. Analytic and contemporary philosophy continue to invoke his notions of identity, individuation, and possible worlds. Work in the history of 17th- and 18th-century ideas has revealed more clearly the 17th-century "Intellectual Revolution" that preceded the better-known Industrial and commercial revolutions of the 18th and 19th centuries. In Germany, various important institutions were named after Leibniz. In Hanover in particular, he is the namesake for some of the most important institutions in the town: outside of Hanover: Awards: In 1985, the German government created the Leibniz Prize, offering an annual award of 1.55 million euros for experimental results and 770,000 euros for theoretical ones. It was the world's largest prize for scientific achievement prior to the Fundamental Physics Prize. The collection of manuscript papers of Leibniz at the Gottfried Wilhelm Leibniz Bibliothek – Niedersächische Landesbibliothek was inscribed on UNESCO's Memory of the World Register in 2007. Cultural references. Leibniz still receives popular attention. The Google Doodle for 1 July 2018 celebrated Leibniz's 372nd birthday. Using a quill, his hand is shown writing "Google" in binary ASCII code. One of the earliest popular but indirect expositions of Leibniz was Voltaire's satire "Candide", published in 1759. Leibniz was lampooned as Professor Pangloss, described as "the greatest philosopher of the Holy Roman Empire". Leibniz also appears as one of the main historical figures in Neal Stephenson's series of novels "The Baroque Cycle". Stephenson credits readings and discussions concerning Leibniz for inspiring him to write the series. Leibniz also stars in Adam Ehrlich Sachs's novel "The Organs of Sense". The German biscuit Choco Leibniz is named after Leibniz, a famous resident of Hanover where the manufacturer Bahlsen is based. Writings and publication. Leibniz mainly wrote in three languages: scholastic Latin, French and German. During his lifetime, he published many pamphlets and scholarly articles, but only two "philosophical" books, the "Combinatorial Art" and the "Théodicée". 
(He published numerous pamphlets, often anonymous, on behalf of the House of Brunswick-Lüneburg, most notably the "De jure suprematum" a major consideration of the nature of sovereignty.) One substantial book appeared posthumously, his "Nouveaux essais sur l'entendement humain", which Leibniz had withheld from publication after the death of John Locke. Only in 1895, when Bodemann completed his catalogue of Leibniz's manuscripts and correspondence, did the enormous extent of Leibniz's "Nachlass" become clear: about 15,000 letters to more than 1000 recipients plus more than 40,000 other items. Moreover, quite a few of these letters are of essay length. Much of his vast correspondence, especially the letters dated after 1700, remains unpublished, and much of what is published has appeared only in recent decades. The more than 67,000 records of the Leibniz Edition's Catalogue cover almost all of his known writings and the letters from him and to him. The amount, variety, and disorder of Leibniz's writings are a predictable result of a situation he described in a letter as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I cannot tell you how extraordinarily distracted and spread out I am. I am trying to find various things in the archives; I look at old papers and hunt up unpublished documents. From these I hope to shed some light on the history of the [House of] Brunswick. I receive and answer a huge number of letters. At the same time, I have so many mathematical results, philosophical thoughts, and other literary innovations that should not be allowed to vanish that I often do not know where to begin. The extant parts of the critical edition of Leibniz's writings are organized as follows: The systematic cataloguing of all of Leibniz's "Nachlass" began in 1901. It was hampered by two world wars and then by decades of German division into two states with the Cold War's "iron curtain" in between, separating scholars, and also scattering portions of his literary estates. The ambitious project has had to deal with writings in seven languages, contained in some 200,000 written and printed pages. In 1985 it was reorganized and included in a joint program of German federal and state ("Länder") academies. Since then the branches in Potsdam, Münster, Hanover and Berlin have jointly published 57 volumes of the critical edition, with an average of 870 pages, and prepared index and concordance works. Selected works. The year given is usually that in which the work was completed, not of its eventual publication. Collections. Six important collections of English translations are Wiener (1951), Parkinson (1966), Loemker (1969), Ariew and Garber (1989), Woolhouse and Francks (1998), and Strickland (2006). The ongoing critical edition of all of Leibniz's writings is "Sämtliche Schriften und Briefe". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Bibliographies. An updated bibliography of more than 25.000 titles is available at Leibniz Bibliographie.
[ { "math_id": 0, "text": "O(n^3)" }, { "math_id": 1, "text": "1 \\,-\\, \\frac{1}{3} \\,+\\, \\frac{1}{5} \\,-\\, \\frac{1}{7} \\,+\\, \\cdots \\,=\\, \\frac{\\pi}{4}." }, { "math_id": 2, "text": "\\displaystyle\\int f(x)\\,dx" }, { "math_id": 3, "text": "\\frac{dy}{dx}" } ]
https://en.wikipedia.org/wiki?curid=12281
12281541
Lumer–Phillips theorem
In mathematics, the Lumer–Phillips theorem, named after Günter Lumer and Ralph Phillips, is a result in the theory of strongly continuous semigroups that gives a necessary and sufficient condition for a linear operator in a Banach space to generate a contraction semigroup. Statement of the theorem. Let "A" be a linear operator defined on a linear subspace "D"("A") of the Banach space "X". Then "A" generates a contraction semigroup if and only if "D"("A") is dense in "X", "A" is closed, "A" is dissipative, and "A" − "λ"0"I" is surjective for some "λ"0 &gt; 0. An operator satisfying the last two conditions is called maximally dissipative. Variants of the theorem. Reflexive spaces. Let "A" be a linear operator defined on a linear subspace "D"("A") of the reflexive Banach space "X". Then "A" generates a contraction semigroup if and only if "A" is dissipative and "A" − "λ"0"I" is surjective for some "λ"0 &gt; 0. Note that the conditions that "D"("A") is dense and that "A" is closed are dropped in comparison to the non-reflexive case. This is because in the reflexive case they follow from the other two conditions. Dissipativity of the adjoint. Let "A" be a linear operator defined on a dense linear subspace "D"("A") of the reflexive Banach space "X". Then "A" generates a contraction semigroup if and only if "A" is closed and both "A" and its adjoint operator "A"* are dissipative. In case that "X" is not reflexive, then this condition for "A" to generate a contraction semigroup is still sufficient, but not necessary. Quasicontraction semigroups. Let "A" be a linear operator defined on a linear subspace "D"("A") of the Banach space "X". Then "A" generates a quasi contraction semigroup if and only if "D"("A") is dense in "X", "A" is closed, and there is an "ω" ≥ 0 such that "A" − "ωI" is dissipative and "A" − "λ"0"I" is surjective for some "λ"0 &gt; "ω". Examples. Consider "H" = "L"2([0, 1]; R) with its usual inner product, and the operator "Au" = "u"' with domain "D"("A") consisting of those functions "u" in the Sobolev space "H"1([0, 1]; R) with "u"(1) = 0. Then "D"("A") is dense, and integration by parts gives formula_0 so that "A" is dissipative. The ordinary differential equation "u'" − "λu" = "f", "u"(1) = 0 has a unique solution u in "H"1([0, 1]; R) for any "f" in "L"2([0, 1]; R), namely formula_1 so that the surjectivity condition is satisfied. Hence, by the reflexive version of the Lumer–Phillips theorem "A" generates a contraction semigroup. There are many more examples where a direct application of the Lumer–Phillips theorem gives the desired result. In conjunction with translation, scaling and perturbation theory the Lumer–Phillips theorem is the main tool for showing that certain operators generate strongly continuous semigroups. The following is an example in point. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
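The finite-dimensional case of the theorem is easy to check numerically. The sketch below (Python with NumPy and SciPy; the particular matrix is an illustrative choice, not taken from this article) uses a matrix whose symmetric part is negative definite, so that it is dissipative for the Euclidean inner product, and confirms the resolvent bound and the contraction property of the semigroup it generates.

import numpy as np
from scipy.linalg import expm

# Illustrative matrix (not from the article): its symmetric part is negative definite,
# so A is dissipative for the Euclidean inner product on R^2.
A = np.array([[-1.0, 1.0],
              [-1.0, -1.0]])
print(np.linalg.eigvalsh((A + A.T) / 2))              # eigenvalues of the symmetric part, all <= 0

for lam in (0.5, 1.0, 10.0):
    resolvent = np.linalg.inv(lam * np.eye(2) - A)    # surjectivity: lam*I - A is invertible
    print(np.linalg.norm(resolvent, 2) <= 1.0 / lam)  # resolvent bound, prints True

for t in (0.1, 1.0, 5.0):
    print(np.linalg.norm(expm(t * A), 2) <= 1.0)      # ||exp(tA)|| <= 1: a contraction semigroup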
[ { "math_id": 0, "text": "\\langle u, A u \\rangle = \\int_0^1 u(x) u'(x) \\, \\mathrm{d} x = - \\frac1{2} u(0)^2 \\leq 0," }, { "math_id": 1, "text": "u(x)={\\rm e}^{\\lambda x}\\int_1^x {\\rm e}^{-\\lambda t}f(t)\\,dt " } ]
https://en.wikipedia.org/wiki?curid=12281541
12281671
Dissipative operator
In mathematics, a dissipative operator is a linear operator "A" defined on a linear subspace "D"("A") of Banach space "X", taking values in "X" such that for all "λ" &gt; 0 and all "x" ∈ "D"("A") formula_0 A couple of equivalent definitions are given below. A dissipative operator is called maximally dissipative if it is dissipative and for all "λ" &gt; 0 the operator "λI" − "A" is surjective, meaning that the range when applied to the domain "D" is the whole of the space "X". An operator that obeys a similar condition but with a plus sign instead of a minus sign (that is, the negation of a dissipative operator) is called an accretive operator. The main importance of dissipative operators is their appearance in the Lumer–Phillips theorem which characterizes maximally dissipative operators as the generators of contraction semigroups. Properties. A dissipative operator has the following property: for every "λ" &gt; 0 the operator "λI" − "A" is injective, and formula_4 for all "z" in the range of "λI" − "A". This is the same inequality as that given at the beginning of this article, with formula_5 (We could equally well write these as formula_6 which must hold for any positive κ.) Equivalent characterizations. Define the duality set of "x" ∈ "X", a subset of the dual space "X"' of "X", by formula_7 By the Hahn–Banach theorem this set is nonempty. In the Hilbert space case (using the canonical duality between a Hilbert space and its dual) it consists of the single element "x". More generally, if "X" is a Banach space with a strictly convex dual, then "J"("x") consists of a single element. Using this notation, "A" is dissipative if and only if for all "x" ∈ "D"("A") there exists a "x"' ∈ "J"("x") such that formula_8 In the case of Hilbert spaces, this becomes formula_9 for all "x" in "D"("A"). Since this is non-positive, we have formula_10 formula_11 Since "I−A" has an inverse, this implies that formula_12 is a contraction, and more generally, formula_13 is a contraction for any positive λ. The utility of this formulation is that if this operator is a contraction for some positive λ then "A" is dissipative. It is not necessary to show that it is a contraction for all positive λ (though this is true), in contrast to (λI−A)−1 which must be proved to be a contraction for all positive values of λ. Examples. For "A" = −"I" on the Euclidean space R"n", so that "Ax" = −"x", one has formula_14 so "A" is a dissipative operator. Next, consider "H" = "L"2([0, 1]; R) with its usual inner product, and let "Au" = "u"' with domain "D"("A") equal to those functions "u" in the Sobolev space formula_20 with "u"(1) = 0. Then, integrating by parts, formula_21 Hence, "A" is a dissipative operator. Furthermore, since there is a solution (almost everywhere) in "D" to formula_22 for any "f" in "H", the operator "A" is maximally dissipative. Note that in a case of infinite dimensionality like this, the range can be the whole Banach space even though the domain is only a proper subspace thereof. Finally, let "A" = Δ, the Laplace operator, defined on a domain of functions in "H" = "L"2(Ω; R) that are sufficiently regular and vanish on the boundary of a bounded open domain Ω. Then integration by parts gives formula_23 so the Laplacian is a dissipative operator. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
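As a minimal numerical sketch of the definition (Python with NumPy; the matrix and the random sampling are illustrative assumptions, not taken from this article), one can check both the Hilbert-space characterization, that the inner product of "Ax" with "x" is non-positive, and the norm inequality ‖("λI" − "A")"x"‖ ≥ "λ"‖"x"‖ on sample vectors:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative matrix (not from the article): its symmetric part is negative definite,
# so <Ax, x> <= 0 for every x and A is dissipative on the Euclidean space R^2.
A = np.array([[-1.0, 1.0],
              [-1.0, -2.0]])

for _ in range(5):
    x = rng.standard_normal(2)
    lam = rng.uniform(0.1, 10.0)
    inner = x @ (A @ x)                                  # <Ax, x> for the real inner product
    lhs = np.linalg.norm((lam * np.eye(2) - A) @ x)      # ||(lam I - A) x||
    print(inner <= 0.0, lhs >= lam * np.linalg.norm(x))  # both print True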
[ { "math_id": 0, "text": "\\|(\\lambda I-A)x\\|\\geq\\lambda\\|x\\|." }, { "math_id": 1, "text": "\\|(\\lambda I-A)x\\|\\ne 0," }, { "math_id": 2, "text": "\\|(\\lambda I-A)x\\|>\\lambda\\|x\\|" }, { "math_id": 3, "text": "\\|\\lambda x\\|+\\|Ax\\|\\ge\\|(\\lambda I-A)x\\|>\\lambda\\|x\\|," }, { "math_id": 4, "text": "\\|(\\lambda I-A)^{-1}z\\|\\leq\\frac{1}{\\lambda}\\|z\\|" }, { "math_id": 5, "text": "z=(\\lambda I-A)x." }, { "math_id": 6, "text": "\\|(I-\\kappa A)^{-1}z\\|\\leq\\|z\\|\\text{ or }\\|(I-\\kappa A)x\\|\\geq\\|x\\|" }, { "math_id": 7, "text": "J(x):=\\left\\{x'\\in X':\\|x'\\|_{X'}^2=\\|x\\|_{X}^2=\\langle x',x\\rangle \\right\\}." }, { "math_id": 8, "text": "{\\rm Re}\\langle Ax,x'\\rangle\\leq0." }, { "math_id": 9, "text": "{\\rm Re}\\langle Ax,x\\rangle\\leq0" }, { "math_id": 10, "text": "\\|x-Ax\\|^2=\\|x\\|^2+\\|Ax\\|^2-2{\\rm Re}\\langle Ax,x\\rangle\\geq\\|x\\|^2+\\|Ax\\|^2+2{\\rm Re}\\langle Ax,x\\rangle=\\|x+Ax\\|^2" }, { "math_id": 11, "text": "\\therefore\\|x-Ax\\|\\geq\\|x+Ax\\|" }, { "math_id": 12, "text": "(I+A)(I-A)^{-1}" }, { "math_id": 13, "text": "(\\lambda I+A)(\\lambda I-A)^{-1}" }, { "math_id": 14, "text": "x \\cdot A x = x \\cdot (-x) = - \\| x \\|^{2} \\leq 0," }, { "math_id": 15, "text": "x^{*}Ax," }, { "math_id": 16, "text": "x^{*}\\frac{A+A^{*}}2x." }, { "math_id": 17, "text": "A=\\begin{pmatrix}-1 & 3 \\\\0 & -1\\end{pmatrix}" }, { "math_id": 18, "text": "\\lambda, \\lambda-A" }, { "math_id": 19, "text": "(\\lambda+A)(\\lambda-A)^{-1}" }, { "math_id": 20, "text": "H^1([0,\\;1];\\;\\mathbf{R})" }, { "math_id": 21, "text": "\\langle u, A u \\rangle = \\int_{0}^{1} u(x) u'(x) \\, \\mathrm{d} x = - \\frac1{2} u(0)^{2} \\leq 0." }, { "math_id": 22, "text": "u-\\lambda u'=f" }, { "math_id": 23, "text": "\\langle u, \\Delta u \\rangle = \\int_\\Omega u(x) \\Delta u(x) \\, \\mathrm{d} x = - \\int_\\Omega \\big| \\nabla u(x) \\big|^{2} \\, \\mathrm{d} x = - \\| \\nabla u \\|^2_{L^{2} (\\Omega; \\mathbf{R})} \\leq 0," } ]
https://en.wikipedia.org/wiki?curid=12281671
1228320
Energy return on investment
Ratio of usable energy from a resource In energy economics and ecological energetics, energy return on investment (EROI), also sometimes called energy returned on energy invested (ERoEI), is the ratio of the amount of usable energy (the "exergy") delivered from a particular energy resource to the amount of exergy used to obtain that energy resource. Arithmetically the EROI can be defined as: formula_0. When the EROI of a source of energy is less than or equal to one, that energy source becomes a net "energy sink", and can no longer be used as a source of energy. A related measure, called energy stored on energy invested (ESOEI), is used to analyse storage systems. To be considered viable as a prominent fuel or energy source, a fuel or energy source must have an EROI ratio of at least 3:1. History. The energy analysis field of study is credited with being popularized by Charles A. S. Hall, a systems ecology and biophysical economics professor at the State University of New York. Hall applied the biological methodology, developed at an Ecosystems Marine Biological Laboratory, and then adapted that method to research human industrial civilization. The concept would have its greatest exposure in 1984, with a paper by Hall that appeared on the cover of the journal "Science". Application to various technologies. Photovoltaic. Global PV market by technology in 2013: multi-Si (54.9%), mono-Si (36.0%), CdTe (5.1%), a-Si (2.0%), CIGS (2.0%). The issue is still the subject of numerous studies and continues to prompt academic argument. That is mainly because the "energy invested" critically depends on technology, methodology, and system boundary assumptions, resulting in a range from a maximum of 2000 kWh/m2 of module area down to a minimum of 300 kWh/m2 with a median value of 585 kWh/m2 according to a meta-study from 2013. The output, in turn, depends on the local insolation, not just the system itself, so assumptions have to be made. Some studies (see below) take into account that photovoltaics produce electricity, while the invested energy may be lower-grade primary energy. A 2015 review in Renewable and Sustainable Energy Reviews assessed the energy payback time and EROI of a variety of PV module technologies. In this study, which uses an insolation of 1700 kWh/m2/yr and a system lifetime of 30 years, mean harmonized EROIs between 8.7 and 34.2 were found. Mean harmonized energy payback time varied from 1.0 to 4.1 years. In 2021, the Fraunhofer Institute for Solar Energy Systems calculated an energy payback time of around 1 year for European PV installations (0.9 years for Catania in Southern Italy, 1.1 years for Brussels) with wafer-based silicon PERC cells. Wind turbines. In the scientific literature, the EROI of wind turbines is around 16 unbuffered and 4 buffered. Data collected in 2018 found that the EROI of operational wind turbines averaged 19.8 with high variability depending on wind conditions and wind turbine size. EROIs tend to be higher for recent wind turbines compared to older technology wind turbines. Vestas reports an EROI of 31 for its V150 model wind turbine. Hydropower plants. The EROI for hydropower plants averages about 110 when they are run for about 100 years. Oil sands. 
Because much of the energy required for producing oil from oil sands (bitumen) comes from low value fractions separated out by the upgrading process, there are two ways to calculate EROI, the higher value given by considering only the external energy inputs and the lower by considering all energy inputs, including self generated. One study found that in 1970 oil sands net energy returns was about 1.0 but by 2010 had increased to about 5.23. Conventional oil. Conventional sources of oil have a rather large variation depending on various geologic factors. The EROI for refined fuel from conventional oil sources varies from around 18 to 43. Oil Shale. Due to the process heat input requirements for oil shale harvesting, the EROI is low. Typically natural gas is used, either directly combusted for process heat or used to power an electricity generating turbine, which then uses electrical heating elements to heat the underground layers of shale to produce oil from the kerogen. Resulting EROI is typically around 1.4-1.5. Economically, oil shale might be viable due to the effectively free natural gas on site used for heating the kerogen, but opponents have debated that the natural gas could be extracted directly and used for relatively inexpensive transportation fuel rather than heating shale for a lower EROI and higher carbon emissions. Oil liquids. The weighted average standard EROI of all oil liquids (including coal-to-liquids, gas-to-liquids, biofuels, etc.) is expected to decrease from 44.4 in 1950 to a plateau of 6.7 in 2050. Natural gas. The standard EROI for natural gas is estimated to decrease from 141.5 in 1950 to an apparent plateau of 16.8 in 2050. Nuclear plants. The EROI for nuclear plants ranges from 20 to 81. Non-manmade energy inputs. The natural or primary energy sources are not included in the calculation of energy invested, only the human-applied sources. For example, in the case of biofuels the solar insolation driving photosynthesis is not included, and the energy used in the stellar synthesis of fissile elements is not included for nuclear fission. The energy returned includes only human usable energy and not wastes such as waste heat. Nevertheless, heat of any form can be counted where it is actually used for heating. However the use of waste heat in district heating and water desalination in cogeneration plants is rare, and in practice it is often excluded in EROI analysis of energy sources. Competing methodology. In a 2010 paper by Murphy and Hall, the advised extended ["Ext"] boundary protocol, for all future research on EROI, was detailed. In order to produce, what they consider, a more realistic assessment and generate greater consistency in comparisons, than what Hall and others view as the "weak points" in a competing methodology. In more recent years, however, a source of continued controversy is the creation of a different methodology endorsed by certain members of the IEA which for example most notably in the case of photovoltaic solar panels, controversially generates more favorable values. In the case of photovoltaic solar panels, the IEA method tends to focus on the energy used in the factory process alone. In 2016, Hall observed that much of the published work in this field is produced by advocates or persons with a connection to business interests among the competing technologies, and that government agencies had not yet provided adequate funding for rigorous analysis by more neutral observers. Relationship to net energy gain. 
EROI and "Net energy (gain)" measure the same quality of an energy source or sink in numerically different ways. Net energy describes the amounts, while EROI measures the ratio or efficiency of the process. They are related simply by formula_1 or formula_2 For example, given a process with an EROI of 5, expending 1 unit of energy yields a net energy gain of 4 units. The break-even point happens with an EROI of 1 or a net energy gain of 0. The time to reach this break-even point is called energy payback period (EPP) or energy payback time (EPBT). Economic influence. Although many qualities of an energy source matter (for example oil is energy-dense and transportable, while wind is variable), when the EROI of the main sources of energy for an economy fall that energy becomes more difficult to obtain and its relative price may increase. In regard to fossil fuels, when oil was originally discovered, it took on average one barrel of oil to find, extract, and process about 100 barrels of oil. The ratio, for discovery of fossil fuels in the United States, has declined steadily over the last century from about 1000:1 in 1919 to only 5:1 in the 2010s. Since the invention of agriculture, humans have increasingly used exogenous sources of energy to multiply human muscle-power. Some historians have attributed this largely to more easily exploited (i.e. higher EROI) energy sources, which is related to the concept of energy slaves. Thomas Homer-Dixon argues that a falling EROI in the Later Roman Empire was one of the reasons for the collapse of the Western Empire in the fifth century CE. In "The Upside of Down" he suggests that EROI analysis provides a basis for the analysis of the rise and fall of civilisations. Looking at the maximum extent of the Roman Empire, (60 million) and its technological base the agrarian base of Rome was about 1:12 per hectare for wheat and 1:27 for alfalfa (giving a 1:2.7 production for oxen). One can then use this to calculate the population of the Roman Empire required at its height, on the basis of about 2,500–3,000 calories per day per person. It comes out roughly equal to the area of food production at its height. But ecological damage (deforestation, soil fertility loss particularly in southern Spain, southern Italy, Sicily and especially north Africa) saw a collapse in the system beginning in the 2nd century, as EROI began to fall. It bottomed in 1084 when Rome's population, which had peaked under Trajan at 1.5 million, was only 15,000. Evidence also fits the cycle of Mayan and Cambodian collapse too. Joseph Tainter suggests that diminishing returns of the EROI is a chief cause of the collapse of complex societies, which has been suggested as caused by peak wood in early societies. Falling EROI due to depletion of high quality fossil fuel resources also poses a difficult challenge for industrial economies, and could potentially lead to declining economic output and challenge the concept (which is very recent when considered from a historical perspective) of perpetual economic growth. Criticism of EROI. EROI is calculated by dividing the energy output by the energy input. Measuring total energy output is often easy, especially in the case for an electrical output where some appropriate electricity meter can be used. However, researchers disagree on how to determine energy input accurately and therefore arrive at different numbers for the same source of energy. How deep should the probing in the supply chain of the tools being used to generate energy go? 
For example, if steel is being used to drill for oil or construct a nuclear power plant, should the energy input of the steel be taken into account? Should the energy input into building the factory being used to construct the steel be taken into account and amortized? Should the energy input of the roads which are used to ferry the goods be taken into account? What about the energy used to cook the steelworkers' breakfasts? These are complex questions evading simple answers. A full accounting would require considerations of opportunity costs and comparing total energy expenditures in the presence and absence of this economic activity. However, when comparing two energy sources a standard practice for the supply chain energy input can be adopted. For example, consider the steel, but don't consider the energy invested in factories deeper than the first level in the supply chain. It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability, while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art. Richards and Watt propose an "Energy Yield Ratio" for photovoltaic systems as an alternative to EROI (which they refer to as "Energy Return Factor"). The difference is that it uses the design lifetime of the system, which is known in advance, rather than the actual lifetime. This also means that it can be adapted to multi-component systems where the components have different lifetimes. Another issue with EROI that many studies attempt to tackle is that the energy returned can be in different forms, and these forms can have different utility. For example, electricity can be converted more efficiently than thermal energy into motion, due to electricity's lower entropy. In addition, the form of energy of the input can be completely different from the output. For example, energy in the form of coal could be used in the production of ethanol. This might have an EROI of less than one, but could still be desirable due to the benefits of liquid fuels (assuming the latters are not used in the processes of extraction and transformation). Additional EROI calculations. There are three prominent expanded EROI calculations, they are point of use, extended and societal. Point of Use EROI expands the calculation to include the cost of refining and transporting the fuel during the refining process. Since this expands the bounds of the calculation to include more production process EROI will decrease. Extended EROI includes point of use expansions as well as including the cost of creating the infrastructure needed for transportation of the energy or fuel once refined. Societal EROI is a sum of all the EROIs of all the fuels used in a society or nation. A societal EROI has never been calculated and researchers believe it may currently be impossible to know all variables necessary to complete the calculation, but attempted estimates have been made for some nations. Calculations are done by summing all of the EROIs for domestically produced and imported fuels and comparing the result to the Human Development Index (HDI), a tool often used to understand well-being in a society. 
According to this calculation, the amount of energy a society has available to them increases the quality of life for the people living in that country, and countries with less energy available also have a harder time satisfying citizens' basic needs. This is to say that societal EROI and overall quality of life are very closely linked. EROI and payback periods of some types of power plants. The following table is a compilation of sources of energy. The minimum requirement is a breakdown of the cumulative energy expenses according to material data. Frequently in literature harvest factors are reported, for which the origin of the values is not completely transparent. These are not included in this table. The bold numbers are those given in the respective literature source, the normal printed ones are derived (see Mathematical Description). (a) The cost of fuel transportation is taken into account (b) The values refer to the total energy output. The expense for storage power plants, seasonal reserves or conventional load balancing power plants is not taken into account. (c) The data for the E-82 come from the manufacturer, but are confirmed by TÜV Rheinland. ESOEI. ESOEI (or ESOIe) is used when EROI is below 1. "ESOIe is the ratio of electrical energy stored over the lifetime of a storage device to the amount of embodied electrical energy required to build the device." One of the notable outcomes of the Stanford University team's assessment on ESOI, was that if pumped storage was not available, the combination of wind energy and the commonly suggested pairing with battery technology as it presently exists, would not be sufficiently worth the investment, suggesting instead curtailment. EROI under rapid growth. A related recent concern is energy cannibalism where energy technologies can have a limited growth rate if climate neutrality is demanded. Many energy technologies are capable of replacing significant volumes of fossil fuels and concomitant green house gas emissions. Unfortunately, neither the enormous scale of the current fossil fuel energy system nor the necessary growth rate of these technologies is well understood within the limits imposed by the net energy produced for a growing industry. This technical limitation is known as energy cannibalism and refers to an effect where rapid growth of an entire energy producing or energy efficiency industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants or production plants. The &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;solar breeder overcomes some of these problems. A solar breeder is a photovoltaic panel manufacturing plant which can be made energy-independent by using energy derived from its own roof using its own panels. Such a plant becomes not only energy self-sufficient but a major supplier of new energy, hence the name solar breeder. Research on the concept was conducted by Centre for Photovoltaic Engineering, University of New South Wales, Australia. The reported investigation establishes certain mathematical relationships for the solar breeder which clearly indicate that a vast amount of net energy is available from such a plant for the indefinite future. The solar module processing plant at Frederick, Maryland was originally planned as such a solar breeder. 
In 2009 the Sahara Solar Breeder Project was proposed by the "Science Council of Japan" as a cooperation between Japan and Algeria with the highly ambitious goal of creating hundreds of GW of capacity within 30 years. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
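The arithmetic behind EROI, net energy and energy payback time is elementary and can be sketched in a few lines of Python. The helper names and the assumed annual yield of 200 kWh/m2 are illustrative; the EROI-of-5 example and the 585 kWh/m2 median embodied energy are the figures quoted earlier in this article.

def eroi(energy_delivered, energy_invested):
    return energy_delivered / energy_invested

def net_energy(energy_delivered, energy_invested):
    return energy_delivered - energy_invested

# Relation quoted in the text: an EROI of 5 means 1 unit invested returns 4 units net.
delivered, invested = 5.0, 1.0
print(eroi(delivered, invested), net_energy(delivered, invested))   # 5.0 4.0

# Rough energy payback time for a PV module, using the median embodied energy quoted
# above and an assumed, site-dependent annual yield (not a figure from the article).
embodied_kwh_per_m2 = 585.0
annual_yield_kwh_per_m2 = 200.0
print(embodied_kwh_per_m2 / annual_yield_kwh_per_m2, "years")       # about 2.9 years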
[ { "math_id": 0, "text": " EROI = \\frac{\\hbox{Energy Delivered}}{\\hbox{Energy Required to Deliver that Energy}}" }, { "math_id": 1, "text": " \\hbox{GrossEnergyYield} \\div \\hbox{EnergyExpended} = EROI " }, { "math_id": 2, "text": "(\\hbox{NetEnergy} \\div \\hbox{EnergyExpended} ) + 1 = EROI " } ]
https://en.wikipedia.org/wiki?curid=1228320
12285032
Base runs
Baseball statistic Base runs (BsR) is a baseball statistic invented by sabermetrician David Smyth to estimate the number of runs a team "should have" scored given their component offensive statistics, as well as the number of runs a hitter or pitcher creates or allows. It measures essentially the same thing as Bill James' runs created, but as sabermetrician Tom M. Tango points out, base runs models the reality of the run-scoring process "significantly better than any other run estimator". Purpose and formula. Base runs has multiple variations, but all take the form formula_0 Smyth detailed the following forms of the statistic: The simplest, uses only the most common batting statistics A = H + BB - HR B = (1.4 * TB - .6 * H - 3 * HR + .1 * BB) * 1.02 C = AB - H D = HR An offshoot includes significantly more batting statistics A = H + BB + HBP - HR - .5 * IBB B = (1.4 * TB - .6 * H - 3 * HR + .1 * (BB + HBP - IBB) + .9 * (SB - CS - GIDP)) * 1.1 C = AB - H + CS + GIDP D = HR A third formula uses pitching statistics A = H + BB - HR B = (1.4 * (1.12 * H + 4 * HR) - .6 * H - 3 * HR + .1 * BB) * 1.1 C = 3 * IP D = HR Other sabermetricians have developed their own formulas using Smyth's general form, mainly by tinkering with the B factor. Because the base runs statistic attempts to model the team run scoring process, a formula cannot be applied directly to an individual player's statistics. Doing this would result in a run estimate for an entire team that puts out the individual's statistics. A workaround for this issue is to find the team's base runs with the player in the lineup and the team's base runs with a replacement level player in the lineup. The difference between these values approximates the individual's base runs statistic. Advantages of base runs. Base runs was primarily designed to provide an accurate model of the run scoring process at the Major League Baseball level, and it accomplishes that goal: in recent seasons, base runs has the lowest RMSE of any of the major run estimation methods. In addition, its accuracy holds up in even the most extreme of circumstances and leagues. For instance, when a solo home run is hit, base runs will correctly predict one run having been scored by the batting team. By contrast, when runs created assesses a solo HR, it predicts four runs to be scored; likewise, most linear weights-based formulas will predict a number close to 1.4 runs having been scored on a solo HR. This is because each of these models were developed to fit the sample of a 162-game MLB season; they work well when applied to that sample, of course, but are inaccurate when taken out of the environment for which they were designed. Base runs, on the other hand, can be applied to any sample at any level of baseball (provided it is possible to calculate the B multiplier), because it models the way the game of baseball operates, and not just for a 162-game season at the highest professional level. This means that base runs can be applied to high school or even little league statistics. Weaknesses of base runs. From the TangoTiger wiki "Base runs adheres to more of the fundamental constraints on run scoring than most other run estimators, but it is by no means perfectly compliant. Some examples of shortcomings: One avenue for possible improvement in the model is the scoring rate estimator B/(B + C). There is no deep theory behind this construct--it was chosen because it worked empirically. 
It is possible that a better score rate estimator could be developed, although it would most likely have to be more complex than the current one." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
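A minimal implementation of the simplest form of the statistic given above (Python; the team stat line used in the example is made up for illustration) is:

def base_runs(h, bb, hr, tb, ab):
    """Smyth's simplest Base Runs form, using only common batting statistics."""
    a = h + bb - hr
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02
    c = ab - h
    d = hr
    return a * b / (b + c) + d

# Made-up team season line: 1400 H, 500 BB, 180 HR, 2250 TB, 5500 AB.
print(round(base_runs(1400, 500, 180, 2250, 5500), 1))   # about 716 runs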
[ { "math_id": 0, "text": "{{A * B \\over B + C}} + D" } ]
https://en.wikipedia.org/wiki?curid=12285032
1228638
Chemical shift
Variation in resonant frequency of identical atomic nuclei in a magnetic field In nuclear magnetic resonance (NMR) spectroscopy, the chemical shift is the resonant frequency of an atomic nucleus relative to a standard in a magnetic field. Often the position and number of chemical shifts are diagnostic of the structure of a molecule. Chemical shifts are also used to describe signals in other forms of spectroscopy such as photoemission spectroscopy. Some atomic nuclei possess a magnetic moment (nuclear spin), which gives rise to different energy levels and resonance frequencies in a magnetic field. The total magnetic field experienced by a nucleus includes local magnetic fields induced by currents of electrons in the molecular orbitals (electrons have a magnetic moment themselves). The electron distribution of the same type of nucleus (e.g. ) usually varies according to the local geometry (binding partners, bond lengths, angles between bonds, and so on), and with it the local magnetic field at each nucleus. This is reflected in the spin energy levels (and resonance frequencies). The variations of nuclear magnetic resonance frequencies of the same kind of nucleus, due to variations in the electron distribution, is called the chemical shift. The size of the chemical shift is given with respect to a reference frequency or reference sample (see also chemical shift referencing), usually a molecule with a barely distorted electron distribution. Operating frequency. The operating (or Larmor) frequency formula_0 of a magnet (usually quoted as absolute value in MHz) is calculated from the Larmor equation formula_1 where "B"0 is the induction of the magnet (SI units of Tesla), and formula_2 is the magnetogyric ratio of the nucleus — an empirically measured fundamental constant determined by the details of the structure of each nucleus. For example, the proton operating frequency for a 1 Tesla magnet is calculated as: formula_3 MRI scanners are often referred to by their field strengths "B"0 (e.g. "a 7 T scanner"), whereas NMR spectrometers are commonly referred to by the corresponding proton Larmor frequency (e.g. "a 300 MHz spectrometer", which has a "B"0 of 7 T ). While chemical shift is referenced in order that the units are equivalent across different field strengths, the actual frequency separation in Hertz scales with field strength ("B"0). As a result, the difference of chemical shift between two signals (ppm) represents a larger number of Hertz on machines that have larger "B"0 and therefore the signals are less likely to be overlapping in the resulting spectrum. This increased resolution is a significant advantage for analysis. (Larger field machines are also favoured on account of having intrinsically higher signal arising from the Boltzmann distribution of magnetic spin states.) Chemical shift referencing. Chemical shift δ is usually expressed in parts per million (ppm) by frequency, because it is calculated from: formula_4 where "ν"sample is the absolute resonance frequency of the sample and "ν"ref is the absolute resonance frequency of a standard reference compound, measured in the same applied magnetic field "B"0. Since the numerator is usually expressed in hertz, and the denominator in megahertz, δ is expressed in ppm. The detected frequencies (in Hz) for 1H, 13C, and 29Si nuclei are usually referenced against TMS (tetramethylsilane), TSP (Trimethylsilylpropanoic acid), or DSS, which by the definition above have a chemical shift of zero if chosen as the reference. 
Other standard materials are used for setting the chemical shift for other nuclei. Thus, an NMR signal observed at a frequency 300 Hz higher than the signal from TMS, where the TMS resonance frequency is 300 MHz, has a chemical shift of: formula_5 Although the absolute resonance frequency depends on the applied magnetic field, the chemical shift is independent of external magnetic field strength. On the other hand, the resolution of NMR will increase with applied magnetic field. Referencing methods. Practically speaking, diverse methods may be used to reference chemical shifts in an NMR experiment, which can be subdivided into "indirect" and "direct" referencing methods. Indirect referencing uses a channel other than the one of interest to adjust the chemical shift scale correctly, i.e. the solvent signal in the deuterium (lock) channel can be used to reference a 1H NMR spectrum. Both indirect and direct referencing can be done as three different procedures: internal referencing, where the reference compound is added directly to the system under study; external referencing, where the sample and the reference compound are contained separately in coaxial tubes; and the substitution method, where the sample and the reference compound are measured in separate tubes. Modern NMR spectrometers commonly make use of the absolute scale, which defines the 1H signal of TMS as 0 ppm in proton NMR and the center frequencies of all other nuclei as a percentage of the TMS resonance frequency: formula_6 The use of the deuterium (lock) channel, so the 2H signal of the deuterated solvent, and the Ξ value of the absolute scale is a form of internal referencing and is particularly useful in heteronuclear NMR spectroscopy as local reference compounds may not always be available or easily used (i.e. liquid NH3 for 15N NMR spectroscopy). This system, however, relies on accurately determined 2H NMR chemical shifts listed in the spectrometer software and correctly determined Ξ values by IUPAC. A recent study for 19F NMR spectroscopy revealed that the use of the absolute scale and lock-based internal referencing led to errors in chemical shifts. These may be negated by inclusion of calibrated reference compounds. The induced magnetic field. The electrons around a nucleus will circulate in a magnetic field and create a secondary induced magnetic field. This field opposes the applied field as stipulated by Lenz's law, and atoms with higher induced fields (i.e., higher electron density) are therefore called "shielded", relative to those with lower electron density. Electron-donating alkyl groups, for example, lead to increased shielding whereas electron-withdrawing substituents such as nitro groups lead to "deshielding" of the nucleus. Substituents are not the only cause of local induced fields; bonding electrons can also lead to shielding and deshielding effects. A striking example of this is the pi bonds in benzene. Circular current through the conjugated pi system causes a shielding effect at the molecule's center and a deshielding effect at its edges. Trends in chemical shift are explained based on the degree of shielding or deshielding. Nuclei are found to resonate in a wide range to the left (or more rarely to the right) of the internal standard. When a signal is found with a higher chemical shift, the effective applied magnetic field at resonance is lower (if the resonance frequency is fixed), the nucleus is more deshielded, and the signal or shift is described as downfield, at low field, or paramagnetic. Conversely a lower chemical shift is called a diamagnetic shift, and is upfield and more shielded. Diamagnetic shielding. In real molecules protons are surrounded by a cloud of charge due to adjacent bonds and atoms. In an applied magnetic field (B0) electrons circulate and produce an induced field (Bi) which opposes the applied field. The effective field at the nucleus will be "B" = "B"0 − "B"i. The nucleus is said to be experiencing a diamagnetic shielding. Factors causing chemical shifts. 
Important factors influencing chemical shift are electron density, electronegativity of neighboring groups and anisotropic induced magnetic field effects. Electron density shields a nucleus from the external field. For example, in proton NMR the electron-poor tropylium ion has its protons downfield at 9.17 ppm, those of the electron-rich cyclooctatetraenyl anion move upfield to 6.75 ppm and its dianion even more upfield to 5.56 ppm. A nucleus in the vicinity of an electronegative atom experiences reduced electron density and the nucleus is therefore deshielded. In proton NMR of methyl halides (CH3X) the chemical shift of the methyl protons increase in the order I &lt; Br &lt; Cl &lt; F from 2.16 ppm to 4.26 ppm reflecting this trend. In carbon NMR the chemical shift of the carbon nuclei increase in the same order from around −10 ppm to 70 ppm. Also when the electronegative atom is removed further away the effect diminishes until it can be observed no longer. Anisotropic induced magnetic field effects are the result of a local induced magnetic field experienced by a nucleus resulting from circulating electrons that can either be paramagnetic when it is parallel to the applied field or diamagnetic when it is opposed to it. It is observed in alkenes where the double bond is oriented perpendicular to the external field with pi electrons likewise circulating at right angles. The induced magnetic field lines are parallel to the external field at the location of the alkene protons which therefore shift downfield to a 4.5 ppm to 7.5 ppm range. The three-dimensional space where a diamagnetic shift is called the shielding zone with a cone-like shape aligned with the external field. The protons in aromatic compounds are shifted downfield even further with a signal for benzene at 7.73 ppm as a consequence of a diamagnetic ring current. Alkyne protons by contrast resonate at high field in a 2–3 ppm range. For alkynes the most effective orientation is the external field in parallel with electrons circulation around the triple bond. In this way the acetylenic protons are located in the cone-shaped shielding zone hence the upfield shift. Magnetic properties of most common nuclei. 1H and 13C are not the only nuclei susceptible to NMR experiments. A number of different nuclei can also be detected, although the use of such techniques is generally rare due to small relative sensitivities in NMR experiments (compared to 1H) of the nuclei in question, the other factor for rare use being their slender representation in nature and organic compounds. 1H, 13C, 15N, 19F and 31P are the five nuclei that have the greatest importance in NMR experiments: Chemical shift manipulation. In general, the associated increased signal-to-noise and resolution has driven a move towards increasingly high field strengths. In limited cases, however, lower fields are preferred; examples are for systems in chemical exchange, where the speed of the exchange relative to the NMR experiment can cause additional and confounding linewidth broadening. Similarly, while avoidance of second order coupling is generally preferred, this information can be useful for elucidation of chemical structures. Using refocussing pulses placed between recording of successive points of the free induction decay, in an analogous fashion to the spin echo technique in MRI, the chemical shift evolution can be scaled to provide apparent low-field spectra on a high-field spectrometer. 
In a similar fashion, it is possible to upscale the effect of J-coupling relative to the chemical shift using pulse sequences that include additional J-coupling evolution periods interspersed with conventional spin evolutions. Other chemical shifts. The Knight shift (first reported in 1949) and Shoolery's rule are observed with pure metals and methylene groups, respectively. The NMR chemical shift in its present-day meaning first appeared in journals in 1950. Chemical shifts with a different meaning appear in X-ray photoelectron spectroscopy as the shift in atomic core-level energy due to a specific chemical environment. The term is also used in Mössbauer spectroscopy, where similarly to NMR it refers to a shift in peak position due to the local chemical bonding environment. As is the case for NMR the chemical shift reflects the electron density at the atomic nucleus. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
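The defining ratio can be evaluated directly. The short Python sketch below (the function name is an arbitrary choice) reproduces the worked example given above, in which a signal 300 Hz above TMS on a 300 MHz spectrometer corresponds to 1 ppm, and shows how the same shift in ppm corresponds to more hertz at higher field.

def chemical_shift_ppm(nu_sample_hz, nu_ref_hz):
    """delta = (nu_sample - nu_ref) / nu_ref, expressed in parts per million."""
    return (nu_sample_hz - nu_ref_hz) / nu_ref_hz * 1e6

nu_ref = 300.0e6                                    # TMS resonance on a 300 MHz spectrometer
print(chemical_shift_ppm(nu_ref + 300.0, nu_ref))   # 1.0 ppm, as in the worked example

# The same 1 ppm separation corresponds to more hertz on a higher-field instrument.
for spectrometer_mhz in (300.0, 600.0, 900.0):
    hz_per_ppm = spectrometer_mhz * 1e6 * 1.0e-6    # 1 ppm of the operating frequency
    print(spectrometer_mhz, "MHz instrument:", hz_per_ppm, "Hz per ppm")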
[ { "math_id": 0, "text": "\\omega_{0}" }, { "math_id": 1, "text": "\\omega_{0} = -\\gamma B_0\\,," }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "{{\\omega }_{0}}=-4.258\\cdot {{10}^{7}}\\frac{\\text{Hz}}{\\text{T}}\\times 1.000\\text{ T}=-42.58\\text{ MHz}" }, { "math_id": 4, "text": "\\delta = \\frac{ \\nu_\\mathrm{sample} - \\nu_\\mathrm{ref}}{ \\nu_\\mathrm{ref}}\\,," }, { "math_id": 5, "text": "\\frac{300\\,\\rm Hz}{300\\times10^6\\,\\rm Hz}=1\\times10^{-6}= 1\\,\\rm ppm \\,." }, { "math_id": 6, "text": "\\Xi [\\%] = 100 (\\upsilon^{obs}_X / \\upsilon^{obs}_{TMS})" } ]
https://en.wikipedia.org/wiki?curid=1228638
1228679
Gyromagnetic ratio
Ratio of magnetic moment to angular momentum In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol γ, gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1). The term "gyromagnetic ratio" is often used as a synonym for a "different" but closely related quantity, the g-factor. The g-factor only differs from the gyromagnetic ratio in being dimensionless. For a classical rotating body. Consider a nonconductive charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment due to the movement of charge and an angular momentum due to the movement of mass arising from its rotation. It can be shown that as long as its charge and mass density and flow are distributed identically and rotationally symmetric, its gyromagnetic ratio is formula_0 where formula_1 is its charge and formula_2 is its mass. The derivation of this relation is as follows. It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result then follows from an integration. Suppose the ring has radius r, area "A" = "πr"2, mass m, charge q, and angular momentum "L" = "mvr". Then the magnitude of the magnetic dipole moment is formula_3 For an isolated electron. An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by the absolute value of the electron's g-factor, which is denoted "g"e: formula_4 where "μ"B is the Bohr magneton. The gyromagnetic ratio due to electron spin is twice that due to the orbiting of an electron. In the framework of relativistic quantum mechanics, formula_5 where formula_6 is the fine-structure constant. Here the small corrections to the relativistic result "g" = 2 come from the quantum field theory calculations of the anomalous magnetic dipole moment. The electron g-factor is known to twelve decimal places by measuring the electron magnetic moment in a one-electron cyclotron: formula_7 The electron gyromagnetic ratio is formula_8 formula_9 The electron g-factor and γ are in excellent agreement with theory; see "Precision tests of QED" for details. Gyromagnetic factor not as a consequence of relativity. Since a gyromagnetic factor equal to 2 follows from Dirac's equation, it is a frequent misconception to think that a g-factor 2 is a consequence of relativity; it is not. The factor 2 can be obtained from the linearization of both the Schrödinger equation and the relativistic Klein–Gordon equation (which leads to Dirac's). In both cases a 4-spinor is obtained and for both linearizations the g-factor is found to be equal to 2; Therefore, the factor 2 is a consequence of the minimal coupling and of the fact of having the same order of derivatives for space and time. 
Physical spin particles which cannot be described by the linear gauged Dirac equation satisfy the gauged Klein–Gordon equation extended by the term according to, formula_10 Here, "σ""μν" and "F""μν" stand for the Lorentz group generators in the Dirac space, and the electromagnetic tensor respectively, while "A""μ" is the electromagnetic four-potential. An example for such a particle, is the spin companion to spin in the "D"(½,1) ⊕ "D"(1,½) representation space of the Lorentz group. This particle has been shown to be characterized by and consequently to behave as a truly quadratic fermion. For a nucleus. Protons, neutrons, and many nuclei carry nuclear spin, which gives rise to a gyromagnetic ratio as above. The ratio is conventionally written in terms of the proton mass and charge, even for neutrons and for other nuclei, for the sake of simplicity and consistency. The formula is: formula_11 where formula_12 is the nuclear magneton, and formula_13 is the g-factor of the nucleon or nucleus in question. The ratio formula_14 equal to formula_15, is 7.622593285(47) MHz/T. The gyromagnetic ratio of a nucleus plays a role in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). These procedures rely on the fact that bulk magnetization due to nuclear spins precess in a magnetic field at a rate called the Larmor frequency, which is simply the product of the gyromagnetic ratio with the magnetic field strength. With this phenomenon, the sign of γ determines the sense (clockwise vs counterclockwise) of precession. Most common nuclei such as 1H and 13C have positive gyromagnetic ratios. Approximate values for some common nuclei are given in the table below. Larmor precession. Any free system with a constant gyromagnetic ratio, such as a rigid system of charges, a nucleus, or an electron, when placed in an external magnetic field B (measured in teslas) that is not aligned with its magnetic moment, will precess at a frequency f (measured in hertz), that is proportional to the external field: formula_16 For this reason, values of , in units of hertz per tesla (Hz/T), are often quoted instead of γ. Heuristic derivation. The derivation of this relation is as follows: First we must prove that the torque resulting from subjecting a magnetic moment formula_17 to a magnetic field formula_18 is formula_19 The identity of the functional form of the stationary electric and magnetic fields has led to defining the magnitude of the magnetic dipole moment equally well as formula_20, or in the following way, imitating the moment p of an electric dipole: The magnetic dipole can be represented by a needle of a compass with fictitious magnetic charges formula_21 on the two poles and vector distance between the poles formula_22 under the influence of the magnetic field of earth formula_23 By classical mechanics the torque on this needle is formula_24 But as previously stated formula_25 so the desired formula comes up. formula_26 is the unit distance vector. The model of the spinning electron we use in the derivation has an evident analogy with a gyroscope. For any rotating body the rate of change of the angular momentum formula_27 equals the applied torque formula_28: formula_29 Note as an example the precession of a gyroscope. The earth's gravitational attraction applies a force or torque to the gyroscope in the vertical direction, and the angular momentum vector along the axis of the gyroscope rotates slowly about a vertical line through the pivot. 
In the place of the gyroscope imagine a sphere spinning around the axis and with its center on the pivot of the gyroscope, and along the axis of the gyroscope two oppositely directed vectors both originated in the center of the sphere, upwards formula_30 and downwards formula_31 Replace the gravity with a magnetic flux density formula_32 formula_33 represents the linear velocity of the pike of the arrow formula_34 along a circle whose radius is formula_35 where formula_36 is the angle between formula_34 and the vertical. Hence the angular velocity of the rotation of the spin is formula_37 Consequently, formula_38 This relationship also explains an apparent contradiction between the two equivalent terms, gyromagnetic ratio versus magnetogyric ratio: whereas it is a ratio of a magnetic property (i.e. dipole moment) to a "gyric" (rotational, from , "turn") property (i.e. angular momentum), it is also, "at the same time", a ratio between the angular precession frequency (another "gyric" property) "ω" = 2"πf" and the magnetic field. The angular precession frequency has an important physical meaning: It is the "angular cyclotron frequency", the resonance frequency of an ionized plasma being under the influence of a static finite magnetic field, when we superimpose a high frequency electromagnetic field. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
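A small numerical sketch of the Larmor relation f = ("γ"/2π)"B" follows (Python; the dictionary layout and function name are illustrative, and the constants are the approximate values quoted in this article and in the chemical shift article above).

# Approximate gamma/(2*pi) values in MHz per tesla, consistent with the figures quoted
# in this article and in the chemical shift article above.
GAMMA_BAR_MHZ_PER_T = {"1H": 42.58, "electron": -28024.9514242}

def larmor_frequency_mhz(species, b_tesla):
    """f = (gamma / 2 pi) * B; the sign encodes the sense of precession."""
    return GAMMA_BAR_MHZ_PER_T[species] * b_tesla

print(larmor_frequency_mhz("1H", 1.0))         # ~42.6 MHz, the proton at 1 T
print(larmor_frequency_mhz("1H", 7.0))         # ~298 MHz, i.e. a "300 MHz" NMR magnet
print(larmor_frequency_mhz("electron", 1.0))   # about -28,000 MHz for the electron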
[ { "math_id": 0, "text": " \\gamma = \\frac{q}{2m} " }, { "math_id": 1, "text": "{q}" }, { "math_id": 2, "text": "{m}" }, { "math_id": 3, "text": " \\mu = I A = \\frac{q v}{2 \\pi r} \\, \\pi r^2 = \\frac{q}{2m} \\, m v r = \\frac{q}{2m} L ~." }, { "math_id": 4, "text": " \\gamma_\\mathrm{e} = \\frac{-e}{ 2 m_\\mathrm{e}} \\, |g_\\mathrm{e}| = \\frac{ g_\\mathrm{e} \\mu_\\mathrm{B} }{ \\hbar } \\, ," }, { "math_id": 5, "text": " g_\\mathrm{e} = -2 \\left(1 + \\frac{\\alpha}{\\,2\\pi\\,} + \\cdots\\right)~," }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "g_\\mathrm{e} = -2.002\\,319\\,304\\,361\\,18(27)." }, { "math_id": 8, "text": " \\gamma_\\mathrm{e} = \\mathrm{-1.760\\,859\\,630\\,23(53) \\times 10^{11} \\,rad{\\cdot}s^{-1}{\\cdot}T^{-1}}" }, { "math_id": 9, "text": " \\frac{\\gamma_\\mathrm{e}}{2\\pi} = \\mathrm{-28\\,024.951\\,4242(85) \\,MHz{\\cdot}T^{-1}} ." }, { "math_id": 10, "text": "\n\\left[\\, \\left( \\partial^\\mu \\, u + i\\, e\\, A^\\mu \\right)\\, \\left( \\partial_\\mu + i\\, e\\, A_\\mu \\right) + g \\, \n\\frac{e}{\\, 4\\,} \\, \\sigma^{\\mu\\nu} \\, F_{\\mu\\nu} + m^2 \\,\\right] \\; \\psi \\; = \\; 0 ~, \\quad g \\ne 2 ~. \n" }, { "math_id": 11, "text": " \\gamma_\\text{n} = \\frac{e}{\\, 2m_\\text{p}\\,} \\, g_{\\rm n} = g_{\\rm n}\\, \\frac{\\,\\mu_\\mathrm{N} \\,}{\\hbar}~," }, { "math_id": 12, "text": "\\mu_\\mathrm{N}" }, { "math_id": 13, "text": " g_{\\rm n} " }, { "math_id": 14, "text": "\\,\\frac{\\gamma_n}{\\, 2 \\pi \\, g_{\\rm n}\\,}\\, ," }, { "math_id": 15, "text": "\\mu_\\mathrm{N}/h" }, { "math_id": 16, "text": "f=\\frac{\\gamma}{2\\pi}B." }, { "math_id": 17, "text": "\\mathbf{m}" }, { "math_id": 18, "text": "\\mathbf{B}" }, { "math_id": 19, "text": "\\, \\boldsymbol{\\Tau}=\\mathbf{m}\\times\\mathbf{B}\\, ." }, { "math_id": 20, "text": "m=I\\pi r^2" }, { "math_id": 21, "text": "\\pm q_{\\rm m}" }, { "math_id": 22, "text": "\\mathbf{d}" }, { "math_id": 23, "text": "\\, \\mathbf{B} \\, ." }, { "math_id": 24, "text": "\\, \\boldsymbol{\\Tau} = q_{\\rm m} (\\mathbf{d}\\times\\mathbf{B}) \\, ." }, { "math_id": 25, "text": "\\, q_{\\rm m}\\mathbf{d}=I\\pi r^2\\hat{\\mathbf{d}} = \\mathbf{m} \\, ," }, { "math_id": 26, "text": "\\hat{\\mathbf{d}}" }, { "math_id": 27, "text": "\\, \\mathbf{J} \\," }, { "math_id": 28, "text": "\\mathbf{T}" }, { "math_id": 29, "text": "\\frac{d\\mathbf{J}}{dt}=\\mathbf{T}~." }, { "math_id": 30, "text": "\\mathbf{J}" }, { "math_id": 31, "text": "\\mathbf{m}." }, { "math_id": 32, "text": "\\, \\mathbf{B} ~." }, { "math_id": 33, "text": "\\frac{\\,\\operatorname{d} \\mathbf{J}\\,}{\\,\\operatorname{d} t \\,}" }, { "math_id": 34, "text": "\\,\\mathbf{J}\\," }, { "math_id": 35, "text": "\\, J\\sin{\\phi}\\, ," }, { "math_id": 36, "text": "\\,\\phi\\," }, { "math_id": 37, "text": "\\omega = 2\\pi \\,f = \\frac{1}{ \\, J \\, \\sin{\\phi}\\,}\\,\\left|\\frac{\\,\\rm{d}\\,\\mathbf{J}\\,}{\\,\\rm{d}\\,t\\,}\\right| = \\frac{\\,\\left| \\mathbf{T} \\right| \\,}{\\, J \\, \\sin{\\phi}\\,} = \\frac{\\,\\left| \\mathbf{m} \\times \\mathbf{B} \\right| \\,}{\\, J \\,\\sin{\\phi} \\,} = \\frac{\\,m\\,B\\sin{\\phi}\\,}{\\, J \\,\\sin{\\phi}\\,} = \\frac{\\, m\\, B\\,}{J} = \\gamma\\, B ~." }, { "math_id": 38, "text": "f=\\frac{\\gamma}{\\,2\\pi\\,}\\,B~.\\quad \\text{q.e.d.}" } ]
https://en.wikipedia.org/wiki?curid=1228679
12288578
Calderón–Zygmund lemma
In mathematics, the Calderón–Zygmund lemma is a fundamental result in Fourier analysis, harmonic analysis, and singular integrals. It is named for the mathematicians Alberto Calderón and Antoni Zygmund. Given an integrable function  "f"  : R"d" → C, where R"d" denotes Euclidean space and C denotes the complex numbers, the lemma gives a precise way of partitioning R"d" into two sets: one where  "f"  is essentially small; the other a countable collection of cubes where  "f"  is essentially large, but where some control of the function is retained. This leads to the associated Calderón–Zygmund decomposition of  "f" , wherein  "f"  is written as the sum of "good" and "bad" functions, using the above sets. Covering lemma. Let  "f"  : R"d" → C be integrable and α be a positive constant. Then there exists an open set Ω such that: (1) Ω is a disjoint union of open cubes, Ω = ∪"k" "Qk", such that for each "Qk", formula_0 (2) | "f" ("x")| ≤ "α" almost everywhere in the complement F of Ω. Here, formula_1 denotes the measure of the set formula_2. Calderón–Zygmund decomposition. Given  "f"  as above, we may write  "f"  as the sum of a "good" function g and a "bad" function b,  "f"  = "g" + "b". To do this, we define formula_3 and let "b" =  "f"  − "g". Consequently we have that formula_4 formula_5 for each cube "Qj". The function b is thus supported on a collection of cubes where  "f"  is allowed to be "large", but has the beneficial property that its average value is zero on each of these cubes. Meanwhile, |"g"("x")| ≤ "α" for almost every x in F, and on each cube in Ω, g is equal to the average value of  "f"  over that cube, which by the covering chosen is not more than 2"d""α".
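The stopping-time construction behind the covering lemma can be illustrated on sampled data in one dimension. The Python sketch below (the function name, the sample function and the threshold are illustrative choices) subdivides dyadic intervals only while the average of |"f"| stays below α, selects an interval as soon as its average exceeds α, and then forms the good/bad splitting; on each selected interval the average lies between α and 2α (here 2"d" = 2), and "b" has mean zero there.

import numpy as np

def cz_decomposition(f, alpha):
    """Dyadic Calderon-Zygmund decomposition of samples of f on [0, 1).

    alpha is assumed to be at least the global average of |f|, so the whole
    interval can serve as the starting cube. Returns (good, bad, intervals),
    where intervals are half-open index ranges with alpha < average(|f|) <= 2*alpha.
    """
    f = np.asarray(f, dtype=float)
    good = f.copy()
    intervals = []

    def visit(lo, hi):
        if np.mean(np.abs(f[lo:hi])) > alpha:
            intervals.append((lo, hi))        # parent average was <= alpha, so this is <= 2*alpha
            good[lo:hi] = np.mean(f[lo:hi])   # g equals the average of f on the selected cube
            return
        if hi - lo > 1:
            mid = (lo + hi) // 2
            visit(lo, mid)
            visit(mid, hi)

    visit(0, len(f))
    return good, f - good, intervals

x = np.linspace(0.0, 1.0, 256, endpoint=False)
f = 0.3 + 10.0 * (np.abs(x - 0.5) < 0.02)       # small everywhere except a tall bump
g, b, cubes = cz_decomposition(f, alpha=1.0)
print(cubes)                                    # the selected dyadic intervals around the bump
print(np.max(np.abs(g)) <= 2.0 * 1.0)           # |g| <= 2^d * alpha (d = 1 here)
print([abs(b[lo:hi].mean()) < 1e-12 for lo, hi in cubes])   # b has mean zero on each cube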
[ { "math_id": 0, "text": "\\alpha\\le \\frac{1}{m(Q_k)} \\int_{Q_k} |f(x)| \\, dx \\leq 2^d \\alpha." }, { "math_id": 1, "text": "m(Q_k)" }, { "math_id": 2, "text": "Q_k" }, { "math_id": 3, "text": "g(x) = \\begin{cases}f(x), & x \\in F, \\\\ \\frac{1}{m(Q_j)}\\int_{Q_j}f(t)\\,dt, & x \\in Q_j,\\end{cases}" }, { "math_id": 4, "text": "b(x) = 0,\\ x\\in F" }, { "math_id": 5, "text": "\\frac{1}{m(Q_j)}\\int_{Q_j} b(x)\\, dx = 0" } ]
https://en.wikipedia.org/wiki?curid=12288578
1229368
Asymmetric relation
A binary relation which never occurs in both directions In mathematics, an asymmetric relation is a binary relation formula_0 on a set formula_1 where for all formula_2 if formula_3 is related to formula_4 then formula_4 is "not" related to formula_5 Formal definition. Preliminaries. A binary relation on formula_1 is any subset formula_0 of formula_6 Given formula_2 write formula_7 if and only if formula_8 which means that formula_7 is shorthand for formula_9 The expression formula_7 is read as "formula_3 is related to formula_4 by formula_10" Definition. The binary relation formula_0 is called asymmetric if for all formula_2 if formula_7 is true then formula_11 is false; that is, if formula_12 then formula_13 This can be written in the notation of first-order logic as formula_14 A logically equivalent definition is: for all formula_2 at least one of formula_7 and formula_11 is false, which in first-order logic can be written as: formula_15 A relation is asymmetric if and only if it is both antisymmetric and irreflexive, so this may also be taken as a definition. Examples. An example of an asymmetric relation is the "less than" relation formula_16 between real numbers: if formula_17 then necessarily formula_18 is not less than formula_19 More generally, any strict partial order is an asymmetric relation. Not all asymmetric relations are strict partial orders. An example of an asymmetric non-transitive, even antitransitive relation is the rock paper scissors relation: if formula_1 beats formula_20 then formula_21 does not beat formula_22 and if formula_1 beats formula_21 and formula_21 beats formula_23 then formula_1 does not beat formula_24 Restrictions and converses of asymmetric relations are also asymmetric. For example, the restriction of formula_16 from the reals to the integers is still asymmetric, and the converse or dual formula_25 of formula_16 is also asymmetric. An asymmetric relation need not have the connex property. For example, the strict subset relation formula_26 is asymmetric, and neither of the sets formula_27 and formula_28 is a strict subset of the other. A relation is connex if and only if its complement is asymmetric. A non-example is the "less than or equal" relation formula_29. This is not asymmetric, because reversing for example, formula_30 produces formula_30 and both are true. The less-than-or-equal relation is an example of a relation that is neither symmetric nor asymmetric, showing that asymmetry is not the same thing as "not symmetric". The empty relation is the only relation that is (vacuously) both symmetric and asymmetric. Properties. The following conditions are sufficient for a relation formula_0 to be asymmetric: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
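For finite relations represented as sets of ordered pairs, the definition can be checked mechanically. A short Python sketch (the helper name is an arbitrary choice) illustrates the examples above: the strict "less than" order is asymmetric, the "less than or equal" relation is not, and the empty relation is vacuously asymmetric.

def is_asymmetric(relation):
    """A relation, given as a set of ordered pairs, is asymmetric
    exactly when (a, b) in R implies (b, a) not in R."""
    return all((b, a) not in relation for (a, b) in relation)

elements = range(4)
less_than = {(a, b) for a in elements for b in elements if a < b}
less_equal = {(a, b) for a in elements for b in elements if a <= b}

print(is_asymmetric(less_than))    # True: a strict order is asymmetric
print(is_asymmetric(less_equal))   # False: the pairs (x, x) break asymmetry
print(is_asymmetric(set()))        # True: the empty relation is vacuously asymmetric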
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "a, b \\in X," }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "a." }, { "math_id": 6, "text": "X \\times X." }, { "math_id": 7, "text": "a R b" }, { "math_id": 8, "text": "(a, b) \\in R," }, { "math_id": 9, "text": "(a, b) \\in R." }, { "math_id": 10, "text": "R." }, { "math_id": 11, "text": "b R a" }, { "math_id": 12, "text": "(a, b) \\in R" }, { "math_id": 13, "text": "(b, a) \\not\\in R." }, { "math_id": 14, "text": "\\forall a, b \\in X: a R b \\implies \\lnot(b R a)." }, { "math_id": 15, "text": "\\forall a, b \\in X: \\lnot(a R b \\wedge b R a)." }, { "math_id": 16, "text": "\\,<\\," }, { "math_id": 17, "text": "x < y" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "x." }, { "math_id": 20, "text": "Y," }, { "math_id": 21, "text": "Y" }, { "math_id": 22, "text": "X;" }, { "math_id": 23, "text": "Z," }, { "math_id": 24, "text": "Z." }, { "math_id": 25, "text": "\\,>\\," }, { "math_id": 26, "text": "\\,\\subsetneq\\," }, { "math_id": 27, "text": "\\{1, 2\\}" }, { "math_id": 28, "text": "\\{3, 4\\}" }, { "math_id": 29, "text": "\\leq" }, { "math_id": 30, "text": "x \\leq x" }, { "math_id": 31, "text": "aRb" }, { "math_id": 32, "text": "bRa," }, { "math_id": 33, "text": "aRa," } ]
https://en.wikipedia.org/wiki?curid=1229368
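The defining condition, and its equivalence with "antisymmetric and irreflexive", can be checked mechanically for finite relations. Below is a small illustrative sketch (not part of the article) that treats a relation as a Python set of ordered pairs; the helper names are made up for the example.

```python
def is_asymmetric(R):
    """aRb implies not bRa, for all pairs (this also forces irreflexivity)."""
    return all((b, a) not in R for (a, b) in R)

def is_antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

def is_irreflexive(R):
    return all(a != b for (a, b) in R)

# "Less than" on a small set: asymmetric.
X = range(4)
less_than = {(a, b) for a in X for b in X if a < b}

# "Less than or equal": neither symmetric nor asymmetric.
less_equal = {(a, b) for a in X for b in X if a <= b}

print(is_asymmetric(less_than))    # True
print(is_asymmetric(less_equal))   # False
# Asymmetric  <=>  antisymmetric and irreflexive:
print(is_asymmetric(less_equal) ==
      (is_antisymmetric(less_equal) and is_irreflexive(less_equal)))  # True
```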
1229416
Hyperbolic motion
Isometric automorphisms of a hyperbolic space In geometry, hyperbolic motions are isometric automorphisms of a hyperbolic space. Under composition of mappings, the hyperbolic motions form a continuous group. This group is said to characterize the hyperbolic space. Such an approach to geometry was cultivated by Felix Klein in his Erlangen program. The idea of reducing geometry to its characteristic group was developed particularly by Mario Pieri in his reduction of the primitive notions of geometry to merely point and "motion". Hyperbolic motions are often taken from inversive geometry: these are mappings composed of reflections in a line or a circle (or in a hyperplane or a hypersphere for hyperbolic spaces of more than two dimensions). To distinguish the hyperbolic motions, a particular line or circle is taken as the absolute. The proviso is that the absolute must be an invariant set of all hyperbolic motions. The absolute divides the plane into two connected components, and hyperbolic motions must "not" permute these components. One of the most prevalent contexts for inversive geometry and hyperbolic motions is in the study of mappings of the complex plane by Möbius transformations. Textbooks on complex functions often mention two common models of hyperbolic geometry: the Poincaré half-plane model where the absolute is the real line on the complex plane, and the Poincaré disk model where the absolute is the unit circle in the complex plane. Hyperbolic motions can also be described on the hyperboloid model of hyperbolic geometry. This article exhibits these examples of the use of hyperbolic motions: the extension of the metric formula_0 to the half-plane, and the location of a quasi-sphere of a hypercomplex number system. Motions on the hyperbolic plane. Every motion (transformation or isometry) of the hyperbolic plane to itself can be realized as the composition of at most three reflections. In "n"-dimensional hyperbolic space, up to "n"+1 reflections might be required. (These are also true for Euclidean and spherical geometries, but the classification below is different.) All the isometries of the hyperbolic plane can be classified into these classes: Introduction of metric in the Poincaré half-plane model. The points of the Poincaré half-plane model HP are given in Cartesian coordinates as {("x","y"): "y" &gt; 0} or in polar coordinates as {("r" cos "a", "r" sin "a"): 0 &lt; "a" &lt; π, "r" &gt; 0 }. The hyperbolic motions will be taken to be a composition of three fundamental hyperbolic motions. Let "p" = ("x,y") or "p" = ("r" cos "a", "r" sin "a"), "p" ∈ HP. The fundamental motions are: "p" → "q" = ("x" + "c", "y" ), "c" ∈ R (left or right shift) "p" → "q" = ("sx", "sy" ), "s" &gt; 0 (dilation) "p" → "q" = ( "r"⁻¹ cos "a", "r"⁻¹ sin "a" ) (inversion in unit semicircle). Note: the shift and dilation are mappings from inversive geometry composed of a pair of reflections in vertical lines or concentric circles respectively. Use of semi-circle Z. Consider the triangle {(0,0),(1,0),(1,tan "a")}. Since 1 + tan²"a" = sec²"a", the length of the triangle hypotenuse is sec "a", where sec denotes the secant function. Set "r" = sec "a" and apply the third fundamental hyperbolic motion to obtain "q" = ("r" cos "a", "r" sin "a") where "r" = (sec "a")⁻¹ = cos "a". Now |"q" – (½, 0)|² = (cos²"a" – ½)² + cos²"a" sin²"a" = ¼ so that "q" lies on the semicircle "Z" of radius ½ and center (½, 0). Thus the tangent ray at (1, 0) gets mapped to "Z" by the third fundamental hyperbolic motion. 
Any semicircle can be re-sized by a dilation to radius ½ and shifted to "Z", then the inversion carries it to the tangent ray. So the collection of hyperbolic motions permutes the semicircles with diameters on "y" = 0, sometimes exchanging them with vertical rays, and vice versa. Suppose one agrees to measure length on vertical rays by using logarithmic measure: "d"(("x","y"),("x","z")) = |log("z"/"y")|. Then by means of hyperbolic motions one can measure distances between points on semicircles too: first move the points to "Z" with appropriate shift and dilation, then place them by inversion on the tangent ray where the logarithmic distance is known. For "m" and "n" in HP, let "b" be the perpendicular bisector of the line segment connecting "m" and "n". If "b" is parallel to the abscissa, then "m" and "n" are connected by a vertical ray, otherwise "b" intersects the abscissa so there is a semicircle centered at this intersection that passes through "m" and "n". The set HP becomes a metric space when equipped with the distance "d"("m","n") for "m","n" ∈ HP as found on the vertical ray or semicircle. One calls the vertical rays and semicircles the "hyperbolic lines" in HP. The geometry of points and hyperbolic lines in HP is an example of a non-Euclidean geometry; nevertheless, the construction of the line and distance concepts for HP relies heavily on the original geometry of Euclid. Disk model motions. Consider the disk D = {"z" ∈ C : "z z"* &lt; 1 } in the complex plane C. The geometric plane of Lobachevsky can be displayed in D with circular arcs perpendicular to the boundary of D signifying "hyperbolic lines". Using the arithmetic and geometry of complex numbers, and Möbius transformations, there is the Poincaré disc model of the hyperbolic plane: Suppose "a" and "b" are complex numbers with "a a"* − "b b"* = 1. Note that |"bz" + "a"*|² − |"az" + "b"*|² = ("aa"* − "bb"*)(1 − |"z"|²), so that |"z"| &lt; 1 implies |("az" + "b"*)/("bz" + "a"*)| &lt; 1. Hence the disk D is an invariant set of the Möbius transformation f("z") = ("az" + "b"*)/("bz" + "a"*). Since it also permutes the hyperbolic lines, we see that these transformations are motions of the D model of hyperbolic geometry. Such a transformation corresponds to a complex matrix formula_1 with "aa"* − "bb"* = 1, which is an element of the special unitary group SU(1,1). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d(a,b) = \\vert \\log(b/a) \\vert" }, { "math_id": 1, "text": "q = \\begin{pmatrix} a & b \\\\ b^* & a^* \\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=1229416
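The distance construction described above (shift and dilate the geodesic semicircle onto "Z", invert onto the tangent ray, then take the logarithmic measure) can be carried out numerically. The following sketch is illustrative only; it assumes the two points do not lie on a common vertical line, and it cross-checks the result against the standard closed-form half-plane distance arcosh(1 + |m − n|²/(2 yₘ yₙ)), a formula that is not derived in the article.

```python
import math

def hp_distance_by_motions(m, n):
    """Hyperbolic distance in the half-plane via the article's construction:
    map the geodesic semicircle through m, n to Z, invert onto the tangent
    ray x = 1, then use the logarithmic measure on that vertical ray."""
    (x1, y1), (x2, y2) = m, n
    if math.isclose(x1, x2):                       # already on a vertical ray
        return abs(math.log(y2 / y1))
    # Center c and radius rho of the semicircle through m and n (diameter on y = 0).
    c = (x2**2 + y2**2 - x1**2 - y1**2) / (2.0 * (x2 - x1))
    rho = math.hypot(x1 - c, y1)

    def to_tangent_ray(p):
        x, y = p
        # Shift so the semicircle meets the origin, then dilate to radius 1/2 ...
        x, y = (x - (c - rho)) / (2.0 * rho), y / (2.0 * rho)
        r2 = x * x + y * y
        # ... and invert in the unit semicircle; the image lands on the ray x = 1.
        return (x / r2, y / r2)

    _, h1 = to_tangent_ray(m)
    _, h2 = to_tangent_ray(n)
    return abs(math.log(h2 / h1))

def hp_distance_closed_form(m, n):
    (x1, y1), (x2, y2) = m, n
    return math.acosh(1.0 + ((x2 - x1)**2 + (y2 - y1)**2) / (2.0 * y1 * y2))

m, n = (0.3, 1.2), (2.0, 0.4)
print(hp_distance_by_motions(m, n))
print(hp_distance_closed_form(m, n))   # the two printed values agree (about 2.22)
```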
12295512
F number (chemistry)
F number is a correlation number used in the analysis of polycyclic aromatic hydrocarbons (PAHs) as a descriptor of their hydrophobicity and molecular size. It was proposed by Robert Hurtubise and co-workers in 1977. Calculation. The F number is calculated using the formula: formula_0 where "B"2 is the number of double bonds, "C"12 is the number of primary and secondary carbon atoms, and "R" is the number of non-aromatic rings. Example. For fluorene, there are 6 apparent double bonds (three pi bonds in each of the two benzene-like side rings); the central ring has one secondary carbon and is non-aromatic. Therefore: formula_1 Correlation. It has been found that the F number linearly correlates with the log k' value (logarithm of the retention factor) in aqueous reversed-phase liquid chromatography. This relationship can be used to understand the influence of different aspects of molecular architecture on separation using different stationary phases. This size analysis is complementary to the length-to-breadth (L/B) ratio, which classifies molecules according to their "rodlike" or "squarelike" shape. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F = B_2 + C_{12} - \\frac{1}{2}R" }, { "math_id": 1, "text": "F = 6 + 1 - \\frac{1}{2}1 = 6.5" } ]
https://en.wikipedia.org/wiki?curid=12295512
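The calculation is simple enough to script directly. The sketch below (illustrative only, with made-up function names) encodes the formula and reproduces the fluorene example from the article.

```python
def f_number(double_bonds, c1_c2_atoms, nonaromatic_rings):
    """F = B2 + C12 - R/2."""
    return double_bonds + c1_c2_atoms - 0.5 * nonaromatic_rings

# Fluorene: 6 apparent double bonds, 1 secondary carbon, 1 non-aromatic ring.
print(f_number(6, 1, 1))   # 6.5
```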
1229844
Discount function
Economics concept A discount function is used in economic models to describe the weights placed on rewards received at different points in time. For example, if time is discrete and utility is time-separable, with the discount function formula_0 having a negative first derivative and with formula_1 (or formula_2 in continuous time) defined as consumption at time "t", total utility from an infinite stream of consumption is given by formula_3. Total utility in the continuous-time case is given by formula_4 provided that this integral exists. Exponential discounting and hyperbolic discounting are the two most commonly used examples.
[ { "math_id": 0, "text": "f(t)" }, { "math_id": 1, "text": "c_t" }, { "math_id": 2, "text": "c(t)" }, { "math_id": 3, "text": "U(\\{c_t\\}_{t=0}^\\infty)=\\sum_{t=0}^\\infty {f(t)u(c_t)}" }, { "math_id": 4, "text": "U(\\{c(t)\\}_{t=0}^\\infty)=\\int_{0}^\\infty {f(t)u(c(t)) dt}" } ]
https://en.wikipedia.org/wiki?curid=1229844
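As a numerical illustration (not taken from the article), the sketch below evaluates a truncated version of the discounted-utility sum for two common choices of discount function: exponential discounting f(t) = δ^t and hyperbolic discounting f(t) = 1/(1 + kt). Logarithmic utility u(c) = ln(c), the parameter values, and the consumption stream are assumptions made purely for the example.

```python
import math

def total_utility(consumption, discount, utility=math.log):
    """Truncated version of U = sum_t f(t) u(c_t)."""
    return sum(discount(t) * utility(c) for t, c in enumerate(consumption))

delta, k = 0.95, 0.8
exponential = lambda t: delta ** t           # exponential discounting
hyperbolic  = lambda t: 1.0 / (1.0 + k * t)  # hyperbolic discounting

stream = [2.0] * 50                          # constant consumption for 50 periods
print(total_utility(stream, exponential))    # equals ln(2) * (1 - 0.95**50) / 0.05
print(total_utility(stream, hyperbolic))
```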
1230064
Mpongwe people
Bantu ethnic group of northwest Gabon The Mpongwe are an ethnic group in Gabon, notable as the earliest known dwellers around the estuary where Libreville is now located. History. The Mpongwe language identifies them as a subgroup of the Myènè people of the Bantus, who are believed to have been in the area for some 2,000 years, although the Mpongwe clans likely began arriving in only the 16th century, possibly in order to take advantage of trading opportunities offered by visiting Europeans. The Mpongwe gradually became the middlemen between the coast and the interior peoples such as the Bakèlè and Séké. From about the 1770s, the Mpongwe also became involved in the slave trade. In the 1830s, Mpongwe trade consisted of slaves, dyewood, ebony, rubber, ivory, and gum copal in exchange for cloth, iron, firearms, and various forms of alcoholic drink. In the 1840s, at the time of the arrival of American missionaries and French naval forces, the Mpongwe consisted of 6,000-7,000 free persons and 6,000 slaves, organized into about two dozen clans. Four of these clans were preeminent; the Asiga and Agulamba on the south shore, and the Agekaza-Glass and Agekaza-Quaben on the north shore. Each of these clans was ruled by an "oga", translated as "king" by Europeans, although clan leadership was largely oligarchic. The Mpongwe engaged in extensive coastal trade across the Central African coast. An account of this trade includes that of Paul Du Chaillu in the mid-19th century. Mpongwe boats could be 60 feet in length, formula_0 in breadth and 3 feet deep. Larger vessels included masts and sails made of woven palm fronds, with a load capacity of 8–10 tons. French colonial rule. The French took advantage of longstanding inter-clan rivalry to establish a foothold; while "King Denis" (Antchouwé Kowe Rapontchombo) of the Asigas talked the French out of using his clan's area, "King Glass" (R'Ogouarowe) of the Agekaza-Glass submitted only after a bombardment in 1845, and "King Louis" (Anguilè Ré-Dowé) of Agekaza-Quaben ceded his village of Okolo and moved, leaving the French to establish Fort d'Aumale on the village's site in 1843. The combination of slave trade suppression and direct contact by Europeans with the interior reduced Mpongwe fortunes, but at the same time missionary schools enabled young Mpongwe to work in the colonial government and enterprise. The population declined greatly as a result of smallpox, and an 1884 estimate lists only about 3,000 Mpongwe. Fang migration pressure converted many Mpongwe to urban life in the early 20th century, and they came to be leaders in both the French colony and independent Gabon. Social relations with Europeans. As African and European communities converged along the coast, the Mpongwé adjusted traditional practices to incorporate interracial relationships between Mpongwé women and European men. By mid 19th century, it was commonplace for Mpongwé women to engage in sexual and domestic acts with European men in exchange for a bridewealth. As a result of centuries of contact with the Europeans, a mixed-race population emerged: the métis. Métis could be found in almost every Mpongwé family during this time. Mpongwé families even encouraged their daughters to engage with European men. Such unions were not considered legitimate marriages under French law, but were in Mpongwé communities as long as family consent and a bridewealth were given. These marriages provided an avenue for women to acquire property and to obtain French citizenship. 
As these interracial unions continued into the 20th century, African and French societies sought to restrict these unions as Mpongwé women began to claim their European ancestry as a means to assert their voice in society. The métis population not only confronted gender roles within the African community, but also challenged the permeability of social and legal hierarchies under colonial rule. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "3\\tfrac{1}{2}" } ]
https://en.wikipedia.org/wiki?curid=1230064
12305030
Anderson impurity model
Hamiltonian used in quantum physics The Anderson impurity model, named after Philip Warren Anderson, is a Hamiltonian that is used to describe magnetic impurities embedded in metals. It is often applied to the description of Kondo effect-type problems, such as heavy fermion systems and Kondo insulators. In its simplest form, the model contains a term describing the kinetic energy of the conduction electrons, a two-level term with an on-site Coulomb repulsion that models the impurity energy levels, and a hybridization term that couples conduction and impurity orbitals. For a single impurity, the Hamiltonian takes the form formula_0, where the formula_1 operator is the annihilation operator of a conduction electron, and formula_2 is the annihilation operator for the impurity, formula_3 is the conduction electron wavevector, and formula_4 labels the spin. The on-site Coulomb repulsion is formula_5, and formula_6 gives the hybridization. Regimes. The model yields several regimes that depend on the relationship of the impurity energy levels to the Fermi level formula_7: the empty orbital regime, for formula_8 or formula_9, which has no local moment; the intermediate (mixed valence) regime, for formula_10 or formula_11; and the local moment regime, for formula_12. In the local moment regime, the magnetic moment is present at the impurity site. However, for low enough temperature, the moment is Kondo screened to give a non-magnetic many-body singlet state. Heavy-fermion systems. For heavy-fermion systems, a lattice of impurities is described by the periodic Anderson model. The one-dimensional model is formula_13, where formula_14 is the position of impurity site formula_15, and formula_16 is the impurity creation operator (used instead of formula_2 by convention for heavy-fermion systems). The hybridization term allows "f"-orbital electrons in heavy fermion systems to interact, although they are separated by a distance greater than the Hill limit. Other variants. There are other variants of the Anderson model, such as the SU(4) Anderson model, which is used to describe impurities which have an orbital, as well as a spin, degree of freedom. This is relevant in carbon nanotube quantum dot systems. The SU(4) Anderson model Hamiltonian is formula_17, where formula_18 and formula_19 label the orbital degree of freedom (which can take one of two values), and formula_20 represents the number operator for the impurity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H = \\sum_{k,\\sigma}\\epsilon_k c^{\\dagger}_{k\\sigma}c_{k\\sigma} \n+ \\sum_{\\sigma}\\epsilon_{\\sigma} d^{\\dagger}_{\\sigma}d_{\\sigma} \n+ Ud^{\\dagger}_{\\uparrow}d_{\\uparrow}d^{\\dagger}_{\\downarrow}d_{\\downarrow}\n+ \\sum_{k,\\sigma}V_k(d^{\\dagger}_{\\sigma}c_{k\\sigma} + c^{\\dagger}_{k\\sigma}d_{\\sigma})" }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "\\sigma" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": "E_{\\rm F}" }, { "math_id": 8, "text": "\\epsilon_d \\gg E_{\\rm F}" }, { "math_id": 9, "text": "\\epsilon_d+U \\gg E_{\\rm F}" }, { "math_id": 10, "text": "\\epsilon_d\\approx E_{\\rm F}" }, { "math_id": 11, "text": "\\epsilon_d+U\\approx E_{\\rm F}" }, { "math_id": 12, "text": "\\epsilon_d \\ll E_{\\rm F} \\ll \\epsilon_d+U" }, { "math_id": 13, "text": "H = \\sum_{k,\\sigma}\\epsilon_k c^{\\dagger}_{k\\sigma}c_{k\\sigma} \n+ \\sum_{j,\\sigma}\\epsilon_f f^{\\dagger}_{j\\sigma}f_{j\\sigma} \n+ U\\sum_{j}f^{\\dagger}_{j\\uparrow}f_{j\\uparrow}f^{\\dagger}_{j\\downarrow}f_{j\\downarrow}\n+ \\sum_{j,k,\\sigma}V_{jk}(e^{ikx_j}f^{\\dagger}_{j\\sigma}c_{k\\sigma} + e^{-ikx_j}c^{\\dagger}_{k\\sigma}f_{j\\sigma})" }, { "math_id": 14, "text": "x_j" }, { "math_id": 15, "text": "j" }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": "H = \\sum_{k,\\sigma}\\epsilon_k c^{\\dagger}_{k\\sigma}c_{k\\sigma} \n+ \\sum_{i,\\sigma}\\epsilon_d d^{\\dagger}_{i\\sigma}d_{i\\sigma} \n+ \\sum_{i,\\sigma,i'\\sigma '} \\frac{U}{2}n_{i\\sigma}n_{i'\\sigma '}\n+ \\sum_{i,k,\\sigma}V_k(d^{\\dagger}_{i\\sigma}c_{k\\sigma} + c^{\\dagger}_{k\\sigma}d_{i\\sigma})" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "i'" }, { "math_id": 20, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12305030
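For a very small system the single-impurity Hamiltonian above can be written down explicitly and diagonalized. The sketch below is an illustration, not part of the article: it keeps only one spinful bath level plus the impurity, builds the four fermionic modes with a Jordan–Wigner construction, and diagonalizes the resulting 16×16 many-body matrix; the parameter values are arbitrary.

```python
import numpy as np

# Fermionic annihilation operators for 4 modes (impurity up/down, bath up/down)
# via the Jordan-Wigner construction.
I2 = np.eye(2)
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation
sz = np.diag([1.0, -1.0])                  # Jordan-Wigner string factor

def mode_op(j, n_modes=4):
    ops = [sz] * j + [a] + [I2] * (n_modes - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

d_up, d_dn, c_up, c_dn = (mode_op(j) for j in range(4))
dag = lambda M: M.conj().T

eps_d, eps_k, U, V = -0.5, 0.2, 1.0, 0.3   # illustrative parameters
H  = eps_k * (dag(c_up) @ c_up + dag(c_dn) @ c_dn)          # conduction level
H += eps_d * (dag(d_up) @ d_up + dag(d_dn) @ d_dn)          # impurity level
H += U * (dag(d_up) @ d_up @ dag(d_dn) @ d_dn)              # on-site repulsion
H += V * (dag(d_up) @ c_up + dag(c_up) @ d_up
          + dag(d_dn) @ c_dn + dag(c_dn) @ d_dn)            # hybridization

assert np.allclose(H, dag(H))              # the Hamiltonian is Hermitian
energies = np.linalg.eigvalsh(H)
print("lowest many-body energies:", np.round(energies[:4], 4))
```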
12306
Geotechnical engineering
Scientific study of earth materials in engineering problems Geotechnical engineering, also known as geotechnics, is the branch of civil engineering concerned with the engineering behavior of earth materials. It uses the principles of soil mechanics and rock mechanics to solve its engineering problems. It also relies on knowledge of geology, hydrology, geophysics, and other related sciences. Geotechnical engineering has applications in military engineering, mining engineering, petroleum engineering, coastal engineering, and offshore construction. The fields of geotechnical engineering and engineering geology have overlapping knowledge areas. However, while geotechnical engineering is a specialty of civil engineering, engineering geology is a specialty of geology. History. Humans have historically used soil as a material for flood control, irrigation purposes, burial sites, building foundations, and construction materials for buildings. Dykes, dams, and canals dating back to at least 2000 BCE—found in parts of ancient Egypt, ancient Mesopotamia, the Fertile Crescent, and the early settlements of Mohenjo Daro and Harappa in the Indus valley—provide evidence for early activities linked to irrigation and flood control. As cities expanded, structures were erected and supported by formalized foundations. The ancient Greeks notably constructed pad footings and strip-and-raft foundations. Until the 18th century, however, no theoretical basis for soil design had been developed, and the discipline was more of an art than a science, relying on experience. Several foundation-related engineering problems, such as the Leaning Tower of Pisa, prompted scientists to begin taking a more scientific-based approach to examining the subsurface. The earliest advances occurred in the development of earth pressure theories for the construction of retaining walls. Henri Gautier, a French royal engineer, recognized the "natural slope" of different soils in 1717, an idea later known as the soil's angle of repose. Around the same time, a rudimentary soil classification system was also developed based on a material's unit weight, which is no longer considered a good indication of soil type. The application of the principles of mechanics to soils was documented as early as 1773 when Charles Coulomb, a physicist and engineer, developed improved methods to determine the earth pressures against military ramparts. Coulomb observed that, at failure, a distinct slip plane would form behind a sliding retaining wall and suggested that the maximum shear stress on the slip plane, for design purposes, was the sum of the soil cohesion, formula_0, and friction formula_1 formula_2, where formula_1 is the normal stress on the slip plane and formula_3 is the friction angle of the soil. By combining Coulomb's theory with Christian Otto Mohr's 2D stress state, the theory became known as Mohr-Coulomb theory. Although it is now recognized that precise determination of cohesion is impossible because formula_0 is not a fundamental soil property, the Mohr-Coulomb theory is still used in practice today. In the 19th century, Henry Darcy developed what is now known as Darcy's Law, describing the flow of fluids in a porous media. Joseph Boussinesq, a mathematician and physicist, developed theories of stress distribution in elastic solids that proved useful for estimating stresses at depth in the ground. William Rankine, an engineer and physicist, developed an alternative to Coulomb's earth pressure theory. 
Albert Atterberg developed the clay consistency indices that are still used today for soil classification. In 1885, Osborne Reynolds recognized that shearing causes volumetric dilation of dense materials and contraction of loose granular materials. Modern geotechnical engineering is said to have begun in 1925 with the publication of "Erdbaumechanik" by Karl von Terzaghi, a mechanical engineer and geologist. Considered by many to be the father of modern soil mechanics and geotechnical engineering, Terzaghi developed the principle of effective stress, and demonstrated that the shear strength of soil is controlled by effective stress. Terzaghi also developed the framework for theories of bearing capacity of foundations, and the theory for prediction of the rate of settlement of clay layers due to consolidation. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity. In his 1948 book, Donald Taylor recognized that the interlocking and dilation of densely packed particles contributed to the peak strength of the soil. Roscoe, Schofield, and Wroth, with the publication of "On the Yielding of Soils" in 1958, established the interrelationships between the volume change behavior (dilation, contraction, and consolidation) and shearing behavior with the theory of plasticity using critical state soil mechanics. Critical state soil mechanics is the basis for many contemporary advanced constitutive models describing the behavior of soil. In 1960, Alec Skempton carried out an extensive review of the available formulations and experimental data in the literature about the effective stress validity in soil, concrete, and rock in order to reject some of these expressions, as well as clarify what expressions were appropriate according to several working hypotheses, such as stress-strain or strength behavior, saturated or non-saturated media, and rock, concrete or soil behavior. Roles. Geotechnical investigation. Geotechnical engineers investigate and determinate the properties of subsurface conditions and materials. They also design corresponding earthworks and retaining structures, tunnels, and structure foundations, and may supervise and evaluate sites, which may further involve site monitoring as well as the risk assessment and mitigation of natural hazards. Geotechnical engineers and engineering geologists perform geotechnical investigations to obtain information on the physical properties of soil and rock underlying, and adjacent to, a site to design earthworks and foundations for proposed structures and for the repair of distress to earthworks and structures caused by subsurface conditions. Geotechnical investigations involve both surface and subsurface exploration of a site, often including subsurface sampling and laboratory testing of soil samples retrieved. Sometimes, geophysical methods are also used to obtain data, which include measurement of seismic waves (pressure, shear, and Rayleigh waves), surface-wave methods and downhole methods, and electromagnetic surveys (magnetometer, resistivity, and ground-penetrating radar). Electrical tomography can be used to survey soil and rock properties and existing underground infrastructure in construction projects. Surface exploration can include on-foot surveys, geologic mapping, geophysical methods, and photogrammetry. 
Geologic mapping and interpretation of geomorphology are typically completed in consultation with a geologist or engineering geologist. Subsurface exploration usually involves in-situ testing (for example, the standard penetration test and cone penetration test). The digging of test pits and trenching (particularly for locating faults and slide planes) may also be used to learn about soil conditions at depth. Large-diameter borings are rarely used due to safety concerns and expense but are sometimes used to allow a geologist or engineer to be lowered into the borehole for direct visual and manual examination of the soil and rock stratigraphy. A variety of soil samplers exists to meet the needs of different engineering projects. The standard penetration test, which uses a thick-walled split spoon sampler, is the most common way to collect disturbed samples. Piston samplers, employing a thin-walled tube, are most commonly used for the collection of less disturbed samples. More advanced methods, such as the Sherbrooke block sampler, are superior, but expensive. Coring frozen ground provides high-quality undisturbed samples from any ground conditions, such as fill, sand, moraine, and rock fracture zones. Geotechnical centrifuge modeling is another method of testing physical scale models of geotechnical problems. The use of a centrifuge enhances the similarity of the scale model tests involving soil because the strength and stiffness of soil are very sensitive to the confining pressure. The centrifugal acceleration allows a researcher to obtain large (prototype-scale) stresses in small physical models. Foundation design. The foundation of a structure's infrastructure transmits loads from the structure to the earth. Geotechnical engineers design foundations based on the load characteristics of the structure and the properties of the soils and bedrock at the site. In general, geotechnical engineers first estimate the magnitude and location of loads to be supported, before developing an investigation plan to explore the subsurface and also determining the necessary soil parameters through field and lab testing. Following which, they may begin the design of an engineering foundation. The primary considerations for a geotechnical engineer in foundation design are bearing capacity, settlement, and ground movement beneath the foundations. Earthworks. Geotechnical engineers are also involved in the planning and execution of earthworks, which include ground improvement, slope stabilization, and stope stability analysis. Ground improvement. Various geotechnical engineering methods can be used for ground improvement, including reinforcement geosynthetics such as geocells and geogrids, which disperse loads over a larger area, increasing the load-bearing capacity of soil. Through these methods, geotechnical engineers can reduce direct and long-term costs. Slope stabilization. Geotechnical engineers can analyze and improve the stability of slopes using engineering methods. Slope stability is determined by the balance of shear stress and shear strength. A previously stable slope may be initially affected by various factors, making the slope unstable. Nonetheless, geotechnical engineers can design and implement engineered slopes to increase stability. Slope stability analysis. 
Stability analysis is needed for the design of engineered slopes and for estimating the risk of slope failure in natural or designed slopes by determining the conditions under which the topmost mass of soil will slip relative to the base of soil and lead to slope failure. If the interface between the mass and the base of a slope has a complex geometry, slope stability analysis is difficult and numerical solution methods are required. Typically, the exact geometry of the interface is not known and a simplified interface geometry is assumed. Finite slopes require three-dimensional models to be analyzed, so most slopes are analyzed assuming that they are infinitely wide and can be represented by two-dimensional models. Sub-disciplines. Geosynthetics. Geosynthetics are a type of plastic polymer products used in geotechnical engineering that improve engineering performance while reducing costs. This includes geotextiles, geogrids, geomembranes, geocells, and geocomposites. The synthetic nature of the products make them suitable for use in the ground where high levels of durability are required. Their main functions include drainage, filtration, reinforcement, separation, and containment. Geosynthetics are available in a wide range of forms and materials, each to suit a slightly different end-use, although they are frequently used together. Some reinforcement geosynthetics, such as geogrids and more recently, cellular confinement systems, have shown to improve bearing capacity, modulus factors and soil stiffness and strength. These products have a wide range of applications and are currently used in many civil and geotechnical engineering applications including roads, airfields, railroads, embankments, piled embankments, retaining structures, reservoirs, canals, dams, landfills, bank protection and coastal engineering. Offshore. "Offshore" (or "marine") "geotechnical engineering" is concerned with foundation design for human-made structures in the sea, away from the coastline (in opposition to "onshore" or "nearshore" engineering). Oil platforms, artificial islands and submarine pipelines are examples of such structures. There are a number of significant differences between onshore and offshore geotechnical engineering. Notably, site investigation and ground improvement on the seabed are more expensive; the offshore structures are exposed to a wider range of geohazards; and the environmental and financial consequences are higher in case of failure. Offshore structures are exposed to various environmental loads, notably wind, waves and currents. These phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan and need to be taken into account in offshore design. In subsea geotechnical engineering, seabed materials are considered a two-phase material composed of rock or mineral particles and water. Structures may be fixed in place in the seabed—as is the case for piers, jetties and fixed-bottom wind turbines—or may comprise a floating structure that remains roughly fixed relative to its geotechnical anchor point. Undersea mooring of human-engineered floating structures include a large number of offshore oil and gas platforms and, since 2008, a few floating wind turbines. Two common types of engineered design for anchoring floating structures include tension-leg and catenary loose mooring systems. Observational method. First proposed by Karl Terzaghi and later discussed in a paper by Ralph B. 
Peck, the observational method is a managed process of construction control, monitoring, and review, which enables modifications to be incorporated during and after construction. The objective of the method is to achieve a greater overall economy, without compromising safety, by creating designs based on the most probable conditions rather than the most unfavorable. Using the observational method, gaps in available information are filled by measurements and investigation, which aid in assessing the behavior of the structure during construction, which in turn can be modified in accordance with the findings. The method was described by Peck as "learn-as-you-go". The observational method may be described as follows: The observational method is suitable for construction that has already begun when an unexpected development occurs, or when a failure or accident looms or has already occurred. It is unsuitable for projects whose design cannot be altered during construction. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Col-begin/styles.css"/&gt;
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "\\sigma\\,\\!" }, { "math_id": 2, "text": " \\tan(\\phi\\,\\!)" }, { "math_id": 3, "text": "\\phi\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=12306
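The Mohr–Coulomb relation discussed above (shear strength as cohesion plus a friction term in the normal stress) is often illustrated with the textbook infinite-slope model of slope stability. The short sketch below uses that model as an assumption; it is not taken from this article, and the soil parameters are placeholders.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg):
    """Factor of safety of a dry, infinite slope with Mohr-Coulomb strength:
    FS = [c + gamma*z*cos^2(beta)*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving   = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative values: c = 5 kPa, phi = 30 deg, unit weight 18 kN/m^3,
# failure surface 2 m deep, slope angle 25 deg.
print(round(infinite_slope_fs(5.0, 30.0, 18.0, 2.0, 25.0), 2))
```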
12307231
Isotopy of loops
In the mathematical field of abstract algebra, isotopy is an equivalence relation used to classify the algebraic notion of loop. Isotopy for loops and quasigroups was introduced by Albert (1943), based on his slightly earlier definition of isotopy for algebras, which was in turn inspired by work of Steenrod. Isotopy of quasigroups. Each quasigroup is isotopic to a loop. Let formula_0 and formula_1 be quasigroups. A quasigroup homotopy from "Q" to "P" is a triple ("α", "β", "γ") of maps from "Q" to "P" such that formula_2 for all "x", "y" in "Q". A quasigroup homomorphism is just a homotopy for which the three maps are equal. An isotopy is a homotopy for which each of the three maps ("α", "β", "γ") is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy ("α", "β", "γ") is given by a permutation of rows "α", a permutation of columns "β", and a permutation on the underlying element set "γ". An autotopy is an isotopy from a quasigroup formula_0 to itself. The set of all autotopies of a quasigroup forms a group with the automorphism group as a subgroup. A principal isotopy is an isotopy for which "γ" is the identity map on "Q". In this case the underlying sets of the quasigroups must be the same but the multiplications may differ. Isotopy of loops. Let formula_3 and formula_4 be loops and let formula_5 be an isotopy. Then it is the product of the principal isotopy formula_6 from formula_3 and formula_7 and the isomorphism formula_8 between formula_7 and formula_4. Indeed, put formula_9, formula_10 and define the operation formula_11 by formula_12. Let formula_3 and formula_13 be loops and let "e" be the neutral element of formula_3. Let formula_14 be a principal isotopy from formula_3 to formula_13. Then formula_15 and formula_16 where formula_17 and formula_18. A loop "L" is a G-loop if it is isomorphic to all its loop isotopes. Pseudo-automorphisms of loops. Let "L" be a loop and "c" an element of "L". A bijection "α" of "L" is called a right pseudo-automorphism of "L" with companion element "c" if for all "x", "y" the identity formula_19 holds. One defines left pseudo-automorphisms analogously. Universal properties. We say that a loop property "P" is universal if it is isotopy invariant, that is, "P" holds for a loop "L" if and only if "P" holds for all loop isotopes of "L". Clearly, it is enough to check if "P" holds for all principal isotopes of "L". For example, since the isotopes of a commutative loop need not be commutative, commutativity is not universal. However, associativity and being an abelian group are universal properties. In fact, every group is a G-loop. The geometric interpretation of isotopy. Given a loop "L", one can define an incidence geometric structure called a 3-net. Conversely, after fixing an origin and an order of the line classes, a 3-net gives rise to a loop. Choosing a different origin or exchanging the line classes may result in nonisomorphic coordinate loops. However, the coordinate loops are always isotopic. In other words, two loops are isotopic if and only if they are equivalent from a "geometric point of view". The dictionary between algebraic and geometric concepts is as follows.
[ { "math_id": 0, "text": "(Q,\\cdot)" }, { "math_id": 1, "text": "(P,\\circ)" }, { "math_id": 2, "text": "\\alpha(x)\\circ\\beta(y) = \\gamma(x\\cdot y)\\," }, { "math_id": 3, "text": "(L,\\cdot)" }, { "math_id": 4, "text": "(K,\\circ)" }, { "math_id": 5, "text": "(\\alpha,\\beta,\\gamma):L \\to K" }, { "math_id": 6, "text": "(\\alpha_0,\\beta_0,id)" }, { "math_id": 7, "text": "(L,*)" }, { "math_id": 8, "text": "\\gamma" }, { "math_id": 9, "text": "\\alpha_0=\\gamma^{-1} \\alpha" }, { "math_id": 10, "text": "\\beta_0=\\gamma^{-1} \\beta" }, { "math_id": 11, "text": " * " }, { "math_id": 12, "text": "x*y=\\alpha^{-1}\\gamma(x)\\cdot \\beta^{-1}\\gamma(y)" }, { "math_id": 13, "text": "(L,\\circ)" }, { "math_id": 14, "text": "(\\alpha,\\beta,id)" }, { "math_id": 15, "text": "\\alpha=R_b^{-1}" }, { "math_id": 16, "text": "\\beta=L_a^{-1}" }, { "math_id": 17, "text": "a=\\alpha(e)" }, { "math_id": 18, "text": "b=\\beta(e)" }, { "math_id": 19, "text": "\\alpha(xy)c=\\alpha(x)(\\alpha(y)c)" } ]
https://en.wikipedia.org/wiki?curid=12307231
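The statement that every quasigroup is isotopic to a loop can be made concrete with a principal isotope. The sketch below is illustrative code, not taken from the article: it takes a finite quasigroup given by its Cayley table, forms the principal isotope x ∘ y = R_b⁻¹(x) · L_a⁻¹(y), and checks that a·b is a two-sided identity for the new operation. The example quasigroup is an arbitrary choice.

```python
def principal_isotope(table, a, b):
    """Given a quasigroup (Q, .) as a Cayley table (table[x][y] = x.y), return
    the table of x*y = Rb^{-1}(x) . La^{-1}(y), a loop with identity a.b."""
    n = len(table)
    Rb_inv = {table[x][b]: x for x in range(n)}   # invert x -> x.b
    La_inv = {table[a][y]: y for y in range(n)}   # invert y -> a.y
    return [[table[Rb_inv[x]][La_inv[y]] for y in range(n)] for x in range(n)]

# A quasigroup on {0, 1, 2} that is not a loop: x . y = (2*x + y) mod 3.
Q = [[(2 * x + y) % 3 for y in range(3)] for x in range(3)]

a, b = 1, 2
L = principal_isotope(Q, a, b)
e = Q[a][b]                                       # expected identity element a.b
print("identity:", e)
print(all(L[e][y] == y and L[x][e] == x
          for x in range(3) for y in range(3)))   # True
```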
12308932
Quantum stirring, ratchets, and pumping
A pump is an alternating current-driven device that generates a direct current (DC). In the simplest configuration a pump has two leads connected to two reservoirs. In such open geometry, the pump takes particles from one reservoir and emits them into the other. Accordingly, a current is produced even if the reservoirs have the same temperature and chemical potential. Stirring is the operation of inducing a circulating current with a non-vanishing DC component in a closed system. The simplest geometry is obtained by integrating a pump in a closed circuit. More generally one can consider any type of stirring mechanism such as moving a spoon in a cup of coffee. Main observations. Pumping and stirring effects in quantum physics have counterparts in purely classical stochastic and dissipative processes. The studies of quantum pumping and of quantum stirring emphasize the role of quantum interference in the analysis of the induced current. A major objective is to calculate the amount formula_0 of transported particles per a driving cycle. There are circumstances in which formula_0 is an integer number due to the topology of parameter space. More generally formula_0 is affected by inter-particle interactions, disorder, chaos, noise and dissipation. Electric stirring explicitly breaks time-reversal symmetry. This property can be used to induce spin polarization in conventional semiconductors by purely electric means. Strictly speaking, stirring is a non-linear effect, because in linear response theory (LRT) an AC driving induces an AC current with the same frequency. Still an adaptation of the LRT Kubo formalism allows the analysis of stirring. The quantum pumping problem (where we have an open geometry) can be regarded as a special limit of the quantum stirring problem (where we have a closed geometry). Optionally the latter can be analyzed within the framework of scattering theory. Pumping and Stirring devices are close relatives of ratchet systems. The latter are defined in this context as AC driven spatially periodic arrays, where DC current is induced. It is possible to induce a DC current by applying a bias, or if the particles are charged then by applying an electro-motive-force. In contrast to that a quantum pumping mechanism produces a DC current in response to a cyclic deformation of the confining potential. In order to have a DC current from an AC driving, time reversal symmetry (TRS) should be broken. In the absence of magnetic field and dissipation it is the driving itself that can break TRS. Accordingly, an adiabatic pump operation is based on varying more than one parameter, while for non-adiabatic pumps modulation of a single parameter may suffice for DC current generation. The best known example is the peristaltic mechanism that combines a cyclic squeezing operation with on/off switching of entrance/exit valves. Adiabatic quantum pumping is closely related to a class of current-driven nanomotors named Adiabatic quantum motor. While in a quantum pump, the periodic movement of some classical parameters pumps quantum particles from one reservoir to another, in a quantum motor a DC current of quantum particles induce the cyclic motion of the classical device. Said relation is due to the Onsager reciprocal relations between electric currents formula_1 and current-induced forces formula_2, taken as generalized fluxes on one hand, and the chemical potentials biases formula_3 and the velocity of the control parameters formula_4, taken as generalized forces on the other hand., formula_5. 
where formula_6 and formula_7 are indexes over the mechanical degrees of freedom and the leads respectively, and the subindex "formula_8" implies that the quantities should be evaluated at equilibrium, i.e. formula_9 and formula_10. Integrating the above equation for a system with two leads yields the well-known relation between the pumped charge per cycle formula_0, the work done by the motor formula_11, and the voltage bias formula_12, formula_13. The Kubo approach to quantum stirring. Consider a closed system which is described by a Hamiltonian formula_14 that depends on some control parameters formula_15. If formula_16 is an Aharonov–Bohm magnetic flux through the ring, then by Faraday's law formula_17 is the electromotive force. If linear response theory applies we have the proportionality formula_18, where formula_19 is called the Ohmic conductance. In complete analogy, if we change formula_20 the current is formula_21, and if we change formula_22 the current is formula_23, where formula_24 and formula_25 are elements of a conductance matrix. Accordingly, for a full pumping cycle: formula_26 The conductance can be calculated and analyzed using the Kubo formula approach to quantum pumping, which is based on the theory of adiabatic processes. Here we write the expression that applies in the case of a low-frequency "quasi-static" driving process (the popular terms "DC driving" and "adiabatic driving" turn out to be misleading so we do not use them): formula_27 where formula_28 is the current operator, and formula_29 is the generalized force that is associated with the control parameter formula_30. Though this formula is written using quantum mechanical notation, it also holds classically if the commutator is replaced by Poisson brackets. In general formula_31 can be written as a sum of two terms: one has to do with dissipation, while the other, denoted as formula_32, has to do with geometry. The dissipative part vanishes in the strict quantum adiabatic limit, while the geometrical part formula_32 might be non-zero. It turns out that in the strict adiabatic limit formula_32 is the "Berry curvature" (mathematically known as a "two-form"). Using the notations formula_33 and formula_34 we can rewrite the formula for the amount of pumped particles as formula_35 where we define the normal vector formula_36 as illustrated. The advantage of this point of view is in the intuition that it gives for the result: formula_0 is related to the flux of a field formula_37 which is created (so to say) by "magnetic charges" in formula_38 space. In practice the calculation of formula_37 is done using the following formula: formula_39 This formula can be regarded as the quantum adiabatic limit of the Kubo formula. The eigenstates of the system are labeled by the index formula_40. These are in general many body states, and the energies are in general many body energies. At finite temperatures a thermal average over formula_41 is implicit. The field formula_32 can be regarded as the rotor of "vector potential" formula_42 (mathematically known as the "one-form"). Namely, formula_43. The "Berry phase" which is acquired by a wavefunction at the end of a closed cycle is formula_44 Accordingly, one can argue that the "magnetic charge" that generates (so to say) the formula_32 field consists of quantized "Dirac monopoles". It follows from gauge invariance that the degeneracies of the system are arranged as vertical Dirac chains. 
The "Dirac monopoles" are situated at formula_38 points where formula_41 has a degeneracy with another level. The Dirac monopoles picture is useful for charge transport analysis: the amount of transported charge is determined by the number of the Dirac chains encircled by the pumping cycle. Optionally it is possible to evaluate the transported charge per pumping cycle from the Berry phase by differentiating it with respect to the Aharonov–Bohm flux through the device. The scattering approach to quantum pumping. The Ohmic conductance of a mesoscopic device that is connected by leads to reservoirs is given by the Landauer formula: in dimensionless units the Ohmic conductance of an open channel equals its transmission. The extension of this scattering point of view in the context of quantum pumping leads to the Brouwer-Buttiker-Pretre-Thomas (BPT) formula which relates the geometric conductance to the formula_45 matrix of the pump. In the low temperature limit it yields formula_46 Here formula_47 is a projector that restrict the trace operations to the open channels of the lead where the current is measured. This BPT formula has been originally derived using a scattering approach, but later its relation to the Kubo formula has been worked out. The effect of interactions. A very recent work considers the role of interactions in the stirring of Bose condensed particles. Otherwise the rest of the literature concerns primarily electronic devices. Typically the pump is modeled as a quantum dot. The effect of electron–electron interactions within the dot region is taken into account in the Coulomb blockade regime or in the Kondo regime. In the former case charge transport is quantized even in the case of small backscattering. Deviation from the exact quantized value is related to dissipation. In the Kondo regime, as the temperature is lowered, the pumping effect is modified. There are also works that consider interactions over the whole system (including the leads) using the Luttinger liquid model. Quantum pumping in deformable mesoscopic systems. A quantum pump, when coupled to classical mechanical degrees of freedom, may also induce cyclic variations of the mechanical degrees of freedom coupled to it. In such a configuration, the pump works similarly to an Adiabatic quantum motor.  A paradigmatic example of this class of systems is a quantum pump coupled to an elastically deformable quantum dot. The mentioned paradigm has been generalized to include non-linear effects and stochastic fluctuations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Q" }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "\\delta \\mu" }, { "math_id": 4, "text": "\\dot X" }, { "math_id": 5, "text": " \\left . \\frac{\\partial F_j}{\\partial \\left ( \\delta \\mu_\\alpha \\right )} \\right |_{eq} = \n \\left . \\frac{\\partial I_\\alpha}{\\partial \\dot X_j} \\right |_{eq} " }, { "math_id": 6, "text": " j " }, { "math_id": 7, "text": " \\alpha " }, { "math_id": 8, "text": " eq " }, { "math_id": 9, "text": " \\dot X = 0 " }, { "math_id": 10, "text": " \\delta \\mu = 0 " }, { "math_id": 11, "text": "W" }, { "math_id": 12, "text": " V " }, { "math_id": 13, "text": " W=Q V" }, { "math_id": 14, "text": "\\mathcal{H}(X)" }, { "math_id": 15, "text": "X=(X_1,X_2,X_3)" }, { "math_id": 16, "text": "X_3" }, { "math_id": 17, "text": "-\\dot{X_3}" }, { "math_id": 18, "text": "I=-G^{33}\\dot{X}_3" }, { "math_id": 19, "text": "G^{33}" }, { "math_id": 20, "text": "X_1" }, { "math_id": 21, "text": "I=-G^{31}\\dot{X}_1" }, { "math_id": 22, "text": "X_2" }, { "math_id": 23, "text": "I=-G^{32}\\dot{X}_2" }, { "math_id": 24, "text": "G^{31}" }, { "math_id": 25, "text": "G^{32}" }, { "math_id": 26, "text": "Q = \\oint\\limits_{\\text{cycle}} I \\, dt = -\\oint (G^{31} \\, dX_1 + G^{32} \\, dX_2)" }, { "math_id": 27, "text": "G^{3j}= \\frac{i}{\\hbar}\\int_0^\\infty \\left\\langle\\left[\\mathcal{I}(t),\\mathcal{F}^j(0)\\right]\\right\\rangle \\, t \\, dt" }, { "math_id": 28, "text": "\\mathcal{I}" }, { "math_id": 29, "text": "\\mathcal{F}^j=-\\partial \\mathcal{H}/\\partial X_j" }, { "math_id": 30, "text": "X_j" }, { "math_id": 31, "text": "G" }, { "math_id": 32, "text": "B" }, { "math_id": 33, "text": " B_1=-G^{32}" }, { "math_id": 34, "text": "B_2=G^{31}" }, { "math_id": 35, "text": " Q = \\oint \\vec{B} \\cdot \\vec{ds} " }, { "math_id": 36, "text": "\\vec{ds}=(dX_2,-dX_1)" }, { "math_id": 37, "text": "\\vec{B}" }, { "math_id": 38, "text": "X" }, { "math_id": 39, "text": "{B}_j= \\sum_{n (\\ne n_0)} \\frac{2\\hbar \\ {\\rm Im}[\\mathcal{I}_{n_0n} \\ \\mathcal{F}^j_{nn_0}]}{(E_n-E_{n_0})^2} " }, { "math_id": 40, "text": "n" }, { "math_id": 41, "text": "n_0" }, { "math_id": 42, "text": "A" }, { "math_id": 43, "text": " \\vec{B}=\\nabla\\wedge\\vec{A} " }, { "math_id": 44, "text": "\\text{Berry phase} = \\frac{1}{\\hbar} \\oint \\vec{A} \\cdot d\\vec{X}" }, { "math_id": 45, "text": "S" }, { "math_id": 46, "text": "G^{3j}=\\frac{1}{2\\pi i}\\mathrm{trace} \\left(P_A\\frac{\\partial S}{\\partial X_j} S^\\dagger \\right)" }, { "math_id": 47, "text": "P_A" } ]
https://en.wikipedia.org/wiki?curid=12308932
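The spectral formula for the two-form B given above (a sum over states of current-operator and generalized-force matrix elements divided by the squared energy gap) can be evaluated for a toy model. The sketch below is only an illustration under made-up assumptions: a three-site tight-binding ring with ħ = 1, two gate potentials X₁ and X₂ as pumping parameters, an Aharonov–Bohm flux held at zero, a single particle occupying the ground state, and a circular pumping cycle; none of these choices come from the article. It computes B₁ and B₂ from the spectral formula and accumulates Q = ∮ B⋅ds along the cycle.

```python
import numpy as np

def hamiltonian(x1, x2, phi, t=1.0):
    """3-site ring: on-site energies (x1, x2, 0), hopping t with AB phase phi/3."""
    h = np.diag([x1, x2, 0.0]).astype(complex)
    hop = -t * np.exp(1j * phi / 3.0)
    for j in range(3):
        h[(j + 1) % 3, j] += hop
        h[j, (j + 1) % 3] += np.conj(hop)
    return h

def two_form(x1, x2, phi=0.0, d=1e-5):
    """B_j = sum_n 2 Im[ I_{0n} F^j_{n0} ] / (E_n - E_0)^2   (hbar = 1),
    with I = -dH/dPhi and F^j = -dH/dX_j, for the ground state n0 = 0."""
    H = hamiltonian(x1, x2, phi)
    E, V = np.linalg.eigh(H)
    dH_dphi = (hamiltonian(x1, x2, phi + d) - hamiltonian(x1, x2, phi - d)) / (2 * d)
    dH_dx = [(hamiltonian(x1 + d, x2, phi) - hamiltonian(x1 - d, x2, phi)) / (2 * d),
             (hamiltonian(x1, x2 + d, phi) - hamiltonian(x1, x2 - d, phi)) / (2 * d)]
    I_op = -dH_dphi
    B = []
    for dHj in dH_dx:
        F = -dHj
        val = 0.0
        for n in (1, 2):
            I_0n = V[:, 0].conj() @ I_op @ V[:, n]
            F_n0 = V[:, n].conj() @ F @ V[:, 0]
            val += 2.0 * np.imag(I_0n * F_n0) / (E[n] - E[0]) ** 2
        B.append(val)
    return B  # [B_1, B_2]

# Circular pumping cycle in the (X1, X2) plane; Q = oint (B1 dX2 - B2 dX1).
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x1c, x2c, r = 1.0, 1.0, 0.8
Q = 0.0
for th, dth in zip(theta, np.diff(np.append(theta, 2.0 * np.pi))):
    x1, x2 = x1c + r * np.cos(th), x2c + r * np.sin(th)
    dx1, dx2 = -r * np.sin(th) * dth, r * np.cos(th) * dth
    B1, B2 = two_form(x1, x2)
    Q += B1 * dx2 - B2 * dx1
print("pumped charge per cycle (toy model):", Q)
```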
12312576
Analytic Fredholm theorem
In mathematics, the analytic Fredholm theorem is a result concerning the existence of bounded inverses for a family of bounded linear operators on a Hilbert space. It is the basis of two classical and important theorems, the Fredholm alternative and the Hilbert–Schmidt theorem. The result is named after the Swedish mathematician Erik Ivar Fredholm. Statement of the theorem. Let "G" ⊆ C be a domain (an open and connected set). Let ("H", ⟨ , ⟩) be a real or complex Hilbert space and let Lin("H") denote the space of bounded linear operators from "H" into itself; let I denote the identity operator. Let "B" : "G" → Lin("H") be a mapping such that "B" is analytic on "G", in the sense that the limit formula_0 exists for every "λ"0 ∈ "G", and such that "B"("λ") is a compact operator for each "λ" ∈ "G". Then either (I − "B"("λ"))⁻¹ exists for no "λ" ∈ "G", or (I − "B"("λ"))⁻¹ exists for every "λ" ∈ "G" \ "S", where "S" is a discrete subset of "G" (i.e. "S" has no limit points in "G"); in the latter case, for each "λ" ∈ "S" the equation formula_1 has a nonzero solution "ψ" ∈ "H".
[ { "math_id": 0, "text": "\\lim_{\\lambda \\to \\lambda_{0}} \\frac{B(\\lambda) - B(\\lambda_{0})}{\\lambda - \\lambda_{0}}" }, { "math_id": 1, "text": "B(\\lambda) \\psi = \\psi" } ]
https://en.wikipedia.org/wiki?curid=12312576
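A finite-dimensional stand-in can make the dichotomy concrete. In the sketch below (illustrative only; a random matrix is used in place of a compact operator on a Hilbert space), the family is B(λ) = λK for a fixed matrix K, so I − B(λ) fails to be invertible exactly on the discrete set of reciprocals of the nonzero eigenvalues of K, and at those points B(λ)ψ = ψ has a nonzero solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def B(lam):
    return lam * K                     # analytic (in fact linear) in lambda

# The exceptional set S: lambda = 1/mu for each nonzero eigenvalue mu of K.
S = 1.0 / np.linalg.eigvals(K)

lam_good = 0.123 + 0.456j              # generic point: I - B(lam) is invertible
resolvent = np.linalg.inv(np.eye(n) - B(lam_good))
print(np.allclose((np.eye(n) - B(lam_good)) @ resolvent, np.eye(n)))   # True

lam_bad = S[0]                         # at a point of S, B(lam) psi = psi has
w, V = np.linalg.eig(B(lam_bad))       # a nonzero solution psi
psi = V[:, np.argmin(np.abs(w - 1.0))]
print(np.allclose(B(lam_bad) @ psi, psi))                               # True
```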
12312868
Hilbert–Schmidt theorem
In mathematical analysis, the Hilbert–Schmidt theorem, also known as the eigenfunction expansion theorem, is a fundamental result concerning compact, self-adjoint operators on Hilbert spaces. In the theory of partial differential equations, it is very useful in solving elliptic boundary value problems. Statement of the theorem. Let ("H", ⟨ , ⟩) be a real or complex Hilbert space and let "A" : "H" → "H" be a bounded, compact, self-adjoint operator. Then there is a sequence of non-zero real eigenvalues "λ""i", "i" = 1, …, "N", with "N" equal to the rank of "A", such that |"λ""i"| is monotonically non-increasing and, if "N" = +∞, formula_0 Furthermore, if each eigenvalue of "A" is repeated in the sequence according to its multiplicity, then there exists an orthonormal set "φ""i", "i" = 1, …, "N", of corresponding eigenfunctions, i.e., formula_1 Moreover, the functions "φ""i" form an orthonormal basis for the range of "A" and "A" can be written as formula_2
[ { "math_id": 0, "text": "\\lim_{i \\to + \\infty} \\lambda_{i} = 0." }, { "math_id": 1, "text": "A \\varphi_{i} = \\lambda_{i} \\varphi_{i} \\mbox{ for } i = 1, \\dots, N." }, { "math_id": 2, "text": "A u = \\sum_{i = 1}^{N} \\lambda_{i} \\langle \\varphi_{i}, u \\rangle \\varphi_{i} \\mbox{ for all } u \\in H." } ]
https://en.wikipedia.org/wiki?curid=12312868
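A discretized integral operator gives a simple numerical illustration of the expansion A u = Σ λᵢ ⟨φᵢ, u⟩ φᵢ. The sketch below is illustrative only: it approximates the operator with kernel min(x, y) on [0, 1] (a kernel chosen for the example, not taken from the article) by a symmetric matrix, checks the eigenfunction expansion on a test vector, and compares the largest eigenvalues with the known values (2/((2k − 1)π))² for this kernel.

```python
import numpy as np

n, h = 200, 1.0 / 200
x = (np.arange(n) + 0.5) * h
# Symmetric (self-adjoint) discretization of (Au)(x) = integral_0^1 min(x,y) u(y) dy
A = np.minimum.outer(x, x) * h

lam, phi = np.linalg.eigh(A)            # eigenvalues ascending, eigenvectors orthonormal
u = np.sin(3 * np.pi * x) + x**2        # arbitrary test vector

# Reconstruct Au from the spectral expansion  Au = sum_i lam_i <phi_i, u> phi_i
Au_spectral = (phi * lam) @ (phi.T @ u)
print(np.allclose(A @ u, Au_spectral))                       # True

# The largest eigenvalues approach (2/((2k-1)pi))^2 and decrease towards zero,
# as the theorem requires for a compact self-adjoint operator.
print(lam[::-1][:3])
print([(2.0 / ((2 * k - 1) * np.pi)) ** 2 for k in (1, 2, 3)])
```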
12313191
Limits of integration
Upper and lower limits applied in definite integration In calculus and mathematical analysis the limits of integration (or bounds of integration) of the integral formula_0 of a Riemann integrable function formula_1 defined on a closed and bounded interval are the real numbers formula_2 and formula_3, in which formula_2 is called the lower limit and formula_3 the upper limit. The region that is bounded can be seen as the area inside formula_2 and formula_3. For example, the function formula_4 is defined on the interval formula_5 formula_6 with the limits of integration being formula_7 and formula_8. Integration by Substitution (U-Substitution). In Integration by substitution, the limits of integration will change due to the new function being integrated. With the function that is being derived, formula_2 and formula_3 are solved for formula_9. In general, formula_10 where formula_11 and formula_12. Thus, formula_2 and formula_3 will be solved in terms of formula_13; the lower bound is formula_14 and the upper bound is formula_15. For example, formula_16 where formula_17 and formula_18. Thus, formula_19 and formula_20. Hence, the new limits of integration are formula_21 and formula_22. The same applies for other substitutions. Improper integrals. Limits of integration can also be defined for improper integrals, with the limits of integration of both formula_23 and formula_24 again being "a" and "b". For an improper integral formula_25 or formula_26 the limits of integration are "a" and ∞, or −∞ and "b", respectively. Definite Integrals. If formula_27, then formula_28 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\int_a^b f(x) \\, dx " }, { "math_id": 1, "text": " f " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " b " }, { "math_id": 4, "text": " f(x)=x^3 " }, { "math_id": 5, "text": " [2, 4] " }, { "math_id": 6, "text": " \\int_2^4 x^3 \\, dx" }, { "math_id": 7, "text": " 2" }, { "math_id": 8, "text": " 4" }, { "math_id": 9, "text": " f(u)" }, { "math_id": 10, "text": " \\int_a^b f(g(x))g'(x) \\ dx = \\int_{g(a)}^{g(b)} f(u) \\ du " }, { "math_id": 11, "text": " u=g(x) " }, { "math_id": 12, "text": " du=g'(x)\\ dx " }, { "math_id": 13, "text": " u " }, { "math_id": 14, "text": "g(a)" }, { "math_id": 15, "text": "g(b)" }, { "math_id": 16, "text": "\\int_0^2 2x\\cos(x^2)dx = \\int_0^4\\cos(u) \\, du" }, { "math_id": 17, "text": "u=x^2" }, { "math_id": 18, "text": "du=2xdx" }, { "math_id": 19, "text": "f(0)=0^2=0" }, { "math_id": 20, "text": "f(2)=2^2=4" }, { "math_id": 21, "text": "0" }, { "math_id": 22, "text": "4" }, { "math_id": 23, "text": " \\lim_{z \\to a^+} \\int_z^b f(x) \\, dx" }, { "math_id": 24, "text": " \\lim_{z \\to b^-} \\int_a^z f(x) \\, dx" }, { "math_id": 25, "text": " \\int_a^\\infty f(x) \\, dx " }, { "math_id": 26, "text": " \\int_{-\\infty}^b f(x) \\, dx " }, { "math_id": 27, "text": "c\\in(a,b)" }, { "math_id": 28, "text": "\\int_a^b f(x)\\ dx = \\int_a^c f(x)\\ dx \\ + \\int_c^b f(x)\\ dx." } ]
https://en.wikipedia.org/wiki?curid=12313191
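The worked substitution above can be checked numerically. The following sketch is illustrative; it uses a simple midpoint rule rather than any particular library routine, and confirms that ∫₀² 2x cos(x²) dx and ∫₀⁴ cos(u) du agree, both being equal to sin(4).

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

original    = midpoint_integral(lambda x: 2 * x * math.cos(x * x), 0.0, 2.0)
substituted = midpoint_integral(math.cos, 0.0, 4.0)   # u = x^2, du = 2x dx, limits 0..4

print(original, substituted, math.sin(4.0))   # all three agree (about -0.7568)
```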
12316
Gamma function
Extension of the factorial function In mathematics, the gamma function (represented by Γ, the capital letter gamma from the Greek alphabet) is one commonly used extension of the factorial function to complex numbers. The gamma function is defined for all complex numbers except the non-positive integers. For every positive integer n, formula_0 Derived by Daniel Bernoulli, for complex numbers with a positive real part, the gamma function is defined via a convergent improper integral: formula_1 The gamma function then is defined as the analytic continuation of this integral function to a meromorphic function that is holomorphic in the whole complex plane except zero and the negative integers, where the function has simple poles. The gamma function has no zeros, so the reciprocal gamma function is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: formula_2 Other extensions of the factorial function do exist, but the gamma function is the most popular and useful. It is a component in various probability-distribution functions, and as such it is applicable in the fields of probability and statistics, as well as combinatorics. Motivation. The gamma function can be seen as a solution to the interpolation problem of finding a smooth curve formula_3 that connects the points of the factorial sequence: formula_4 for all positive integer values of formula_5. The simple formula for the factorial, "x"! = 1 × 2 × ⋯ × "x" is only valid when x is a positive integer, and no elementary function has this property, but a good solution is the gamma function formula_6. The gamma function is not only smooth but analytic (except at the non-positive integers), and it can be defined in several explicit ways. However, it is not the only analytic function that extends the factorial, as one may add any analytic function that is zero on the positive integers, such as formula_7 for an integer formula_8. Such a function is known as a pseudogamma function, the most famous being the Hadamard function. A more restrictive requirement is the functional equation which interpolates the shifted factorial formula_9 : formula_10 But this still does not give a unique solution, since it allows for multiplication by any periodic function formula_11 with formula_12 and formula_13, such as formula_14. One way to resolve the ambiguity is the Bohr–Mollerup theorem, which shows that formula_15 is the unique interpolating function over the positive reals which is logarithmically convex (super-convex), meaning that formula_16 is convex (where formula_17 is the natural logarithm). Definition. Main definition. The notation formula_18 is due to Legendre. If the real part of the complex number z is strictly positive (formula_19), then the integral formula_20 converges absolutely, and is known as the Euler integral of the second kind. (Euler's integral of the first kind is the beta function.) Using integration by parts, one sees that: formula_21 Recognizing that formula_22 as formula_23 formula_24 We can calculate formula_25: formula_26 Thus we can show that formula_27 for any positive integer n by induction. Specifically, the base case is that formula_28, and the induction step is that formula_29 The identity formula_30 can be used (or, yielding the same result, analytic continuation can be used) to uniquely extend the integral formulation for formula_18 to a meromorphic function defined for all complex numbers z, except integers less than or equal to zero. 
It is this extended version that is commonly referred to as the gamma function. Alternative definitions. There are many equivalent definitions. Euler's definition as an infinite product. For a fixed integer formula_8, as the integer formula_5 increases, we have that formula_31 If formula_8 is not an integer then it is not possible to say whether this equation is true because we have not yet (in this section) defined the factorial function for non-integers. However, we do get a unique extension of the factorial function to the non-integers by insisting that this equation continue to hold when the arbitrary integer formula_8 is replaced by an arbitrary complex number formula_32, formula_33 Multiplying both sides by formula_34 gives formula_35 This infinite product, which is due to Euler, converges for all complex numbers formula_32 except the non-positive integers, which fail because of a division by zero. Intuitively, this formula indicates that formula_36 is approximately the result of computing formula_37 for some large integer formula_5, multiplying by formula_38 to approximate formula_39, and using the relationship formula_40 backwards formula_41 times to get an approximation for formula_36; and furthermore that this approximation becomes exact as formula_5 increases to infinity. The infinite product for the reciprocal formula_42 is an entire function, converging for every complex number z. Weierstrass's definition. The definition for the gamma function due to Weierstrass is also valid for all complex numbers z except the non-positive integers: formula_43 where formula_44 is the Euler–Mascheroni constant. This is the Hadamard product of formula_45 in a rewritten form. The utility of this definition cannot be overstated as it appears in a certain identity involving pi. Properties. General. Besides the fundamental property discussed above: formula_47 other important functional equations for the gamma function are Euler's reflection formula formula_48 which implies formula_49 and the Legendre duplication formula formula_50 The duplication formula is a special case of the multiplication theorem (see  Eq. 5.5.6): formula_51 A simple but useful property, which can be seen from the limit definition, is: formula_52 In particular, with "z" = "a" + "bi", this product is formula_53 If the real part is an integer or a half-integer, this can be finitely expressed in closed form: formula_54 Perhaps the best-known value of the gamma function at a non-integer argument is formula_55 which can be found by setting formula_56 in the reflection or duplication formulas, by using the relation to the beta function given below with formula_57, or simply by making the substitution formula_58 in the integral definition of the gamma function, resulting in a Gaussian integral. In general, for non-negative integer values of formula_5 we have: formula_59 where the double factorial formula_60. See Particular values of the gamma function for calculated values. It might be tempting to generalize the result that formula_61 by looking for a formula for other individual values formula_62 where formula_63 is rational, especially because according to Gauss's digamma theorem, it is possible to do so for the closely related digamma function at every rational value. However, these numbers formula_62 are not known to be expressible by themselves in terms of elementary functions. 
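As with the integral form, the product definitions lend themselves to a quick numerical experiment. The following sketch (ours, and a slow direct truncation rather than a practical algorithm) truncates Euler's product after a large number of factors and also verifies the reflection formula and the value Γ(1/2) = √π mentioned above:

```python
# Illustration (ours, not from the article's sources): truncate Euler's infinite
# product Gamma(z) = (1/z) * prod_{n>=1} (1 + 1/n)^z / (1 + z/n) and compare the
# result with math.gamma; then check the reflection formula at a sample point.
import math

def gamma_euler_product(z, terms=100_000):
    """Truncation of Euler's product definition, for real z > 0."""
    result = 1.0 / z
    for n in range(1, terms + 1):
        result *= (1.0 + 1.0 / n) ** z / (1.0 + z / n)
    return result

for z in (0.5, 1.5, 3.25):
    # the product converges slowly (error of order 1/terms), so the tolerance is loose
    assert math.isclose(gamma_euler_product(z), math.gamma(z), rel_tol=1e-3)

# reflection formula Gamma(1-z) Gamma(z) = pi / sin(pi z), and Gamma(1/2) = sqrt(pi)
z = 0.3
assert math.isclose(math.gamma(1 - z) * math.gamma(z), math.pi / math.sin(math.pi * z))
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

print("Euler's product and the reflection formula check out")
```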
It has been proved that formula_64 is a transcendental number and algebraically independent of formula_65 for any integer formula_5 and each of the fractions formula_66. In general, when computing values of the gamma function, we must settle for numerical approximations. The derivatives of the gamma function are described in terms of the polygamma function, "ψ"(0)("z"): formula_67 For a positive integer m, the derivative of the gamma function can be calculated as follows: formula_68 where H(m) is the mth harmonic number and "γ" is the Euler–Mascheroni constant. For formula_69, the formula_5th derivative of the gamma function is: formula_70 Using the identity formula_71 where formula_72 is the Riemann zeta function, and formula_73 is the formula_5-th Bell polynomial, we have in particular the Laurent series expansion of the gamma function formula_74 Inequalities. When restricted to the positive real numbers, the gamma function is a strictly logarithmically convex function. This property may be stated in any of the following three equivalent ways: for any two positive real numbers formula_75 and formula_76, and for any formula_77, formula_78; for any two positive real numbers formula_75 and formula_76 with formula_76 > formula_75, formula_79; and for any positive real number formula_80, formula_81. The last of these statements is, essentially by definition, the same as the statement that formula_82, where formula_83 is the polygamma function of order 1. To prove the logarithmic convexity of the gamma function, it therefore suffices to observe that formula_83 has a series representation which, for positive real x, consists of only positive terms. Logarithmic convexity and Jensen's inequality together imply, for any positive real numbers formula_84 and formula_85, formula_86 There are also bounds on ratios of gamma functions. The best-known is Gautschi's inequality, which says that for any positive real number x and any "s" ∈ (0, 1), formula_87 Stirling's formula. The behavior of formula_88 for an increasing positive real variable is given by Stirling's formula formula_89 where the symbol formula_90 means asymptotic convergence: the ratio of the two sides converges to 1 in the limit formula_93. This growth is faster than exponential, formula_91, for any fixed value of formula_92. Another useful limit for asymptotic approximations for formula_93 is: formula_94 When writing the error term as an infinite product, Stirling's formula can be used to define the gamma function: formula_95 Residues. The behavior for non-positive formula_32 is more intricate. Euler's integral does not converge for formula_96, but the function it defines in the positive complex half-plane has a unique analytic continuation to the negative half-plane. One way to find that analytic continuation is to use Euler's integral for positive arguments and extend the domain to negative numbers by repeated application of the recurrence formula, formula_97 choosing formula_5 such that formula_98 is positive. The product in the denominator is zero when formula_32 equals any of the integers formula_99. Thus, the gamma function must be undefined at those points to avoid division by zero; it is a meromorphic function with simple poles at the non-positive integers. For a function formula_100 of a complex variable formula_32, at a simple pole formula_101, the residue of formula_100 is given by: formula_102 For the simple pole formula_103 we rewrite the recurrence formula as: formula_104 The numerator at formula_103 is formula_105 and the denominator is formula_106 So the residues of the gamma function at those points are: formula_107 The gamma function is non-zero everywhere along the real line, although it comes arbitrarily close to zero as "z" → −∞. 
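Before moving on, the residue formula just derived can be checked numerically. The sketch below (ours, purely illustrative) approximates the limit defining the residue by evaluating (z + n)Γ(z) a small distance away from each pole:

```python
# Numerical illustration (ours): the residue of Gamma at z = -n equals (-1)^n / n!.
# Approximate Res(Gamma, -n) = lim_{z -> -n} (z + n) * Gamma(z) by stepping a small
# distance eps off the pole.
import math

eps = 1e-7
for n in range(6):
    z = -n + eps                        # just to the right of the pole at z = -n
    numeric = (z + n) * math.gamma(z)
    exact = (-1) ** n / math.factorial(n)
    assert abs(numeric - exact) < 1e-5, (n, numeric, exact)

print("residues at the non-positive integers match (-1)^n / n!")
```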
There is in fact no complex number formula_32 for which formula_108, and hence the reciprocal gamma function formula_109 is an entire function, with zeros at formula_110. Minima and maxima. On the real line, the gamma function has a local minimum at "z"min ≈ where it attains the value Γ("z"min) ≈. The gamma function rises to either side of this minimum. The solution to Γ("z" − 0.5) = Γ("z" + 0.5) is "z" = +1.5 and the common value is Γ(1) = Γ(2) = +1. The positive solution to Γ("z" − 1) = Γ("z" + 1) is "z" = "φ" ≈ +1.618, the golden ratio, and the common value is Γ("φ" − 1) = Γ("φ" + 1) = "φ"! ≈. The gamma function must alternate sign between its poles at the non-positive integers because the product in the forward recurrence contains an odd number of negative factors if the number of poles between formula_32 and formula_98 is odd, and an even number if the number of poles is even. The values at the local extrema of the gamma function along the real axis between the non-positive integers are: Γ() =, Γ() =, Γ() =, Γ() =, Γ() =, etc. Integral representations. There are many formulas, besides the Euler integral of the second kind, that express the gamma function as an integral. For instance, when the real part of z is positive, formula_111 and formula_112 formula_113 where the three integrals respectively follow from the substitutions formula_114, formula_115 and formula_116 in Euler's second integral. The last integral in particular makes clear the connection between the gamma function at half integer arguments and the Gaussian integral: if we let formula_117 we get formula_118. Binet's first integral formula for the gamma function states that, when the real part of z is positive, then: formula_119 The integral on the right-hand side may be interpreted as a Laplace transform. That is, formula_120 Binet's second integral formula states that, again when the real part of z is positive, then: formula_121 Let "C" be a Hankel contour, meaning a path that begins and ends at the point ∞ on the Riemann sphere, whose unit tangent vector converges to −1 at the start of the path and to 1 at the end, which has winding number 1 around 0, and which does not cross [0, ∞). Fix a branch of formula_122 by taking a branch cut along [0, ∞) and by taking formula_122 to be real when t is on the negative real axis. Assume z is not an integer. Then Hankel's formula for the gamma function is: formula_123 where formula_124 is interpreted as formula_125. The reflection formula leads to the closely related expression formula_126 again valid whenever "z" is not an integer. Continued fraction representation. The gamma function can also be represented by a sum of two continued fractions: formula_127 where formula_128. Fourier series expansion. The logarithm of the gamma function has the following Fourier series expansion for formula_129 formula_130 which was for a long time attributed to Ernst Kummer, who derived it in 1847. However, Iaroslav Blagouchine discovered that Carl Johan Malmsten first derived this series in 1842. Raabe's formula. In 1840 Joseph Ludwig Raabe proved that formula_131 In particular, if formula_132 then formula_133 The latter can be derived taking the logarithm in the above multiplication formula, which gives an expression for the Riemann sum of the integrand. Taking the limit for formula_134 gives the formula. Pi function. 
An alternative notation that was originally introduced by Gauss is the formula_135-function, which, in terms of the gamma function, is formula_136 so that formula_137 for every non-negative integer formula_5. Using the pi function the reflection formula takes on the form formula_138 where sinc is the normalized sinc function, while the multiplication theorem takes on the form formula_139 We also sometimes find formula_140 which is an entire function, defined for every complex number, just like the reciprocal gamma function. That formula_141 is entire entails it has no poles, so formula_142, like formula_143, has no zeros. The volume of an "n"-ellipsoid with radii "r"1, …, "r""n" can be expressed as formula_144 Particular values. Including up to the first 20 digits after the decimal point, some particular values of the gamma function are: formula_154 The complex-valued gamma function is undefined for non-positive integers, but in these cases the value can be defined in the Riemann sphere as ∞. The reciprocal gamma function is well defined and analytic at these values (and in the entire complex plane): formula_155 Log-gamma function. Because the gamma and factorial functions grow so rapidly for moderately large arguments, many computing environments include a function that returns the natural logarithm of the gamma function (often given the name codice_0 or codice_1 in programming environments or codice_2 in spreadsheets); this grows much more slowly, and for combinatorial calculations allows adding and subtracting logs instead of multiplying and dividing very large values. It is often defined as formula_156 The digamma function, which is the derivative of this function, is also commonly seen. In the context of technical and physical applications, e.g. with wave propagation, the functional equation formula_157 is often used since it allows one to determine function values in one strip of width 1 in z from the neighbouring strip. In particular, starting with a good approximation for a z with large real part one may go step by step down to the desired z. Following an indication of Carl Friedrich Gauss, Rocktaeschel (1922) proposed for logΓ("z") an approximation for large Re("z"): formula_158 This can be used to accurately approximate logΓ("z") for z with a smaller Re("z") via (P.E.Böhmer, 1939) formula_159 A more accurate approximation can be obtained by using more terms from the asymptotic expansions of logΓ("z") and Γ("z"), which are based on Stirling's approximation. formula_160 as at constant . (See sequences and in the OEIS.) In a more "natural" presentation: formula_161 as at constant . (See sequences and in the OEIS.) The coefficients of the terms with "k" &gt; 1 of "z"1−"k" in the last expansion are simply formula_162 where the "Bk" are the Bernoulli numbers. The gamma function also has Stirling Series (derived by Charles Hermite in 1900) equal to formula_163 Properties. The Bohr–Mollerup theorem states that among all functions extending the factorial functions to the positive real numbers, only the gamma function is log-convex, that is, its natural logarithm is convex on the positive real axis. Another characterisation is given by the Wielandt theorem. The gamma function is the unique function that simultaneously satisfies In a certain sense, the log-gamma function is the more natural form; it makes some intrinsic attributes of the function clearer. A striking example is the Taylor series of logΓ around 1: formula_167 with "ζ"("k") denoting the Riemann zeta function at k. 
So, using the following property: formula_168 we can find an integral representation for the log-gamma function: formula_169 or, setting "z" = 1 to obtain an integral for "γ", we can replace the "γ" term with its integral and incorporate that into the above formula, to get: formula_170 There also exist special formulas for the logarithm of the gamma function for rational z. For instance, if formula_46 and formula_5 are integers with formula_171 and formula_172 then formula_173This formula is sometimes used for numerical computation, since the integrand decreases very quickly. Integration over log-gamma. The integral formula_174 can be expressed in terms of the Barnes "G"-function (see Barnes "G"-function for a proof): formula_175 where Re("z") &gt; −1. It can also be written in terms of the Hurwitz zeta function: formula_176 When formula_177 it follows that formula_178 and this is a consequence of Raabe's formula as well. O. Espinosa and V. Moll derived a similar formula for the integral of the square of formula_179: formula_180 where formula_181 is formula_182. D. H. Bailey and his co-authors gave an evaluation for formula_183 when formula_184 in terms of the Tornheim–Witten zeta function and its derivatives. In addition, it is also known that formula_185 Approximations. Complex values of the gamma function can be approximated using Stirling's approximation or the Lanczos approximation, formula_186 This is precise in the sense that the ratio of the approximation to the true value approaches 1 in the limit as goes to infinity. The gamma function can be computed to fixed precision for formula_187 by applying integration by parts to Euler's integral. For any positive number x the gamma function can be written formula_188 When Re("z") ∈ [1,2] and formula_189, the absolute value of the last integral is smaller than formula_190. By choosing a large enough formula_80, this last expression can be made smaller than formula_191 for any desired value formula_192. Thus, the gamma function can be evaluated to formula_192 bits of precision with the above series. A fast algorithm for calculation of the Euler gamma function for any algebraic argument (including rational) was constructed by E.A. Karatsuba. For arguments that are integer multiples of , the gamma function can also be evaluated quickly using arithmetic–geometric mean iterations (see particular values of the gamma function). Practical implementations. Unlike many other functions, such as a Normal Distribution, no obvious fast, accurate implementation that is easy to implement for the Gamma Function formula_36 is easily found. Therefore, it is worth investigating potential solutions. For the case that speed is more important than accuracy, published tables for formula_36 are easily found in an Internet search, such as the Online Wiley Library. Such tables may be used with linear interpolation. Greater accuracy is obtainable with the use of cubic interpolation at the cost of more computational overhead. Since formula_36 tables are usually published for argument values between 1 and 2, the property formula_193 may be used to quickly and easily translate all real values formula_194 and formula_195 into the range formula_196, such that only tabulated values of formula_32 between 1 and 2 need be used. If interpolation tables are not desirable, then the Lanczos approximation mentioned above works well for 1 to 2 digits of accuracy for small, commonly used values of z. 
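To make the table-plus-recurrence recipe described above concrete, here is a minimal sketch (our own illustration; the tabulated values are generated with Python's math.gamma purely as a stand-in for a published table, and the helper names are ours): values of Γ are tabulated on [1, 2], every other positive argument is shifted into that range with the recurrence, and the tabulated values are linearly interpolated.

```python
# Sketch of the table-plus-recurrence approach described above (illustrative only).
# A real application would read tabulated values of Gamma on [1, 2] from a published
# table; here Python's math.gamma merely stands in for such a table.
import math

STEP = 0.01
TABLE = [math.gamma(1.0 + i * STEP) for i in range(101)]   # x = 1.00, 1.01, ..., 2.00

def gamma_from_table(x):
    """Gamma(x) for real x > 0: shift into [1, 2] via Gamma(z+1) = z Gamma(z), then interpolate."""
    factor = 1.0
    while x > 2.0:        # Gamma(x) = (x - 1) * Gamma(x - 1)
        x -= 1.0
        factor *= x
    while x < 1.0:        # Gamma(x) = Gamma(x + 1) / x
        factor /= x
        x += 1.0
    i = min(int((x - 1.0) / STEP), len(TABLE) - 2)
    t = (x - 1.0) / STEP - i
    return factor * ((1.0 - t) * TABLE[i] + t * TABLE[i + 1])

for x in (0.5, 1.234, 3.7, 6.25):
    assert abs(gamma_from_table(x) - math.gamma(x)) / math.gamma(x) < 1e-3

print("table lookup + recurrence agrees with math.gamma to three or more digits")
```

Replacing the linear interpolation with cubic interpolation, or using a finer step, improves the accuracy at the cost of more storage and arithmetic, as noted above.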
If the Lanczos approximation is not sufficiently accurate, the Stirling's formula for the Gamma Function may be used. Applications. One author describes the gamma function as "Arguably, the most common special function, or the least 'special' of them. The other transcendental functions […] are called 'special' because you could conceivably avoid some of them by staying away from many specialized mathematical topics. On the other hand, the gamma function Γ("z") is most difficult to avoid." Integration problems. The gamma function finds application in such diverse areas as quantum physics, astrophysics and fluid dynamics. The gamma distribution, which is formulated in terms of the gamma function, is used in statistics to model a wide range of processes; for example, the time between occurrences of earthquakes. The primary reason for the gamma function's usefulness in such contexts is the prevalence of expressions of the type formula_197 which describe processes that decay exponentially in time or space. Integrals of such expressions can occasionally be solved in terms of the gamma function when no elementary solution exists. For example, if "f" is a power function and "g" is a linear function, a simple change of variables formula_198 gives the evaluation formula_199 The fact that the integration is performed along the entire positive real line might signify that the gamma function describes the cumulation of a time-dependent process that continues indefinitely, or the value might be the total of a distribution in an infinite space. It is of course frequently useful to take limits of integration other than 0 and ∞ to describe the cumulation of a finite process, in which case the ordinary gamma function is no longer a solution; the solution is then called an incomplete gamma function. (The ordinary gamma function, obtained by integrating across the entire positive real line, is sometimes called the "complete gamma function" for contrast.) An important category of exponentially decaying functions is that of Gaussian functions formula_200 and integrals thereof, such as the error function. There are many interrelations between these functions and the gamma function; notably, the factor formula_201 obtained by evaluating formula_202 is the "same" as that found in the normalizing factor of the error function and the normal distribution. The integrals we have discussed so far involve transcendental functions, but the gamma function also arises from integrals of purely algebraic functions. In particular, the arc lengths of ellipses and of the lemniscate, which are curves defined by algebraic equations, are given by elliptic integrals that in special cases can be evaluated in terms of the gamma function. The gamma function can also be used to calculate "volume" and "area" of "n"-dimensional hyperspheres. Calculating products. The gamma function's ability to generalize factorial products immediately leads to applications in many areas of mathematics; in combinatorics, and by extension in areas such as probability theory and the calculation of power series. Many expressions involving products of successive integers can be written as some combination of factorials, the most important example perhaps being that of the binomial coefficient. 
For example, for any complex numbers z and n, with , we can write formula_203 which closely resembles the binomial coefficient when n is a non-negative integer, formula_204 The example of binomial coefficients motivates why the properties of the gamma function when extended to negative numbers are natural. A binomial coefficient gives the number of ways to choose k elements from a set of n elements; if "k" &gt; "n", there are of course no ways. If "k" &gt; "n", ("n" − "k")! is the factorial of a negative integer and hence infinite if we use the gamma function definition of factorials—dividing by infinity gives the expected value of 0. We can replace the factorial by a gamma function to extend any such formula to the complex numbers. Generally, this works for any product wherein each factor is a rational function of the index variable, by factoring the rational function into linear expressions. If "P" and "Q" are monic polynomials of degree m and n with respective roots "p"1, …, "pm" and "q"1, …, "qn", we have formula_205 If we have a way to calculate the gamma function numerically, it is very simple to calculate numerical values of such products. The number of gamma functions in the right-hand side depends only on the degree of the polynomials, so it does not matter whether "b" − "a" equals 5 or 105. By taking the appropriate limits, the equation can also be made to hold even when the left-hand product contains zeros or poles. By taking limits, certain rational products with infinitely many factors can be evaluated in terms of the gamma function as well. Due to the Weierstrass factorization theorem, analytic functions can be written as infinite products, and these can sometimes be represented as finite products or quotients of the gamma function. We have already seen one striking example: the reflection formula essentially represents the sine function as the product of two gamma functions. Starting from this formula, the exponential function as well as all the trigonometric and hyperbolic functions can be expressed in terms of the gamma function. More functions yet, including the hypergeometric function and special cases thereof, can be represented by means of complex contour integrals of products and quotients of the gamma function, called Mellin–Barnes integrals. Analytic number theory. An application of the gamma function is the study of the Riemann zeta function. A fundamental property of the Riemann zeta function is its functional equation: formula_206 Among other things, this provides an explicit form for the analytic continuation of the zeta function to a meromorphic function in the complex plane and leads to an immediate proof that the zeta function has infinitely many so-called "trivial" zeros on the real line. Borwein "et al." call this formula "one of the most beautiful findings in mathematics". Another contender for that title might be formula_207 Both formulas were derived by Bernhard Riemann in his seminal 1859 paper "Ueber die Anzahl der Primzahlen unter einer gegebenen Größe" ("On the Number of Primes Less Than a Given Magnitude"), one of the milestones in the development of analytic number theory—the branch of mathematics that studies prime numbers using the tools of mathematical analysis. History. The gamma function has caught the interest of some of the most prominent mathematicians of all time. Its history, notably documented by Philip J. 
Davis in an article that won him the 1963 Chauvenet Prize, reflects many of the major developments within mathematics since the 18th century. In the words of Davis, "each generation has found something of interest to say about the gamma function. Perhaps the next generation will also." 18th century: Euler and Stirling. The problem of extending the factorial to non-integer arguments was apparently first considered by Daniel Bernoulli and Christian Goldbach in the 1720s. In particular, in a letter from Bernoulli to Goldbach dated 6 October 1729 Bernoulli introduced the product representation formula_208 which is well defined for real values of "x" other than the negative integers. Leonhard Euler later gave two different definitions: the first was not his integral but an infinite product that is well defined for all complex numbers "n" other than the negative integers, formula_209 of which he informed Goldbach in a letter dated 13 October 1729. He wrote to Goldbach again on 8 January 1730, to announce his discovery of the integral representation formula_210 which is valid when the real part of the complex number "n" is strictly greater than −1 (i.e., formula_211). By the change of variables "t" = −ln "s", this becomes the familiar Euler integral. Euler published his results in the paper "De progressionibus transcendentibus seu quarum termini generales algebraice dari nequeunt" ("On transcendental progressions, that is, those whose general terms cannot be given algebraically"), submitted to the St. Petersburg Academy on 28 November 1729. Euler further discovered some of the gamma function's important functional properties, including the reflection formula. James Stirling, a contemporary of Euler, also attempted to find a continuous expression for the factorial and came up with what is now known as Stirling's formula. Although Stirling's formula gives a good estimate of "n"!, also for non-integers, it does not provide the exact value. Extensions of his formula that correct the error were given by Stirling himself and by Jacques Philippe Marie Binet. 19th century: Gauss, Weierstrass and Legendre. Carl Friedrich Gauss rewrote Euler's product as formula_212 and used this formula to discover new properties of the gamma function. Although Euler was a pioneer in the theory of complex variables, he does not appear to have considered the factorial of a complex number, as instead Gauss first did. Gauss also proved the multiplication theorem of the gamma function and investigated the connection between the gamma function and elliptic integrals. Karl Weierstrass further established the role of the gamma function in complex analysis, starting from yet another product representation, formula_213 where "γ" is the Euler–Mascheroni constant. Weierstrass originally wrote his product as one for , in which case it is taken over the function's zeros rather than its poles. Inspired by this result, he proved what is known as the Weierstrass factorization theorem—that any entire function can be written as a product over its zeros in the complex plane; a generalization of the fundamental theorem of algebra. The name gamma function and the symbol Γ were introduced by Adrien-Marie Legendre around 1811; Legendre also rewrote Euler's integral definition in its modern form. Although the symbol is an upper-case Greek gamma, there is no accepted standard for whether the function name should be written "gamma function" or "Gamma function" (some authors simply write "Γ-function"). 
The alternative "pi function" notation Π("z") = "z"! due to Gauss is sometimes encountered in older literature, but Legendre's notation is dominant in modern works. It is justified to ask why we distinguish between the "ordinary factorial" and the gamma function by using distinct symbols, and particularly why the gamma function should be normalized to Γ("n" + 1) = "n"! instead of simply using "Γ("n") = "n"!". Consider that the notation for exponents, "xn", has been generalized from integers to complex numbers "xz" without any change. Legendre's motivation for the normalization does not appear to be known, and has been criticized as cumbersome by some (the 20th-century mathematician Cornelius Lanczos, for example, called it "void of any rationality" and would instead use "z"!). Legendre's normalization does simplify some formulae, but complicates others. From a modern point of view, the Legendre normalization of the gamma function is the integral of the additive character "e"−"x" against the multiplicative character "xz" with respect to the Haar measure formula_214 on the Lie group R+. Thus this normalization makes it clearer that the gamma function is a continuous analogue of a Gauss sum. 19th–20th centuries: characterizing the gamma function. It is somewhat problematic that a large number of definitions have been given for the gamma function. Although they describe the same function, it is not entirely straightforward to prove the equivalence. Stirling never proved that his extended formula corresponds exactly to Euler's gamma function; a proof was first given by Charles Hermite in 1900. Instead of finding a specialized proof for each formula, it would be desirable to have a general method of identifying the gamma function. One way to prove equivalence would be to find a differential equation that characterizes the gamma function. Most special functions in applied mathematics arise as solutions to differential equations, whose solutions are unique. However, the gamma function does not appear to satisfy any simple differential equation. Otto Hölder proved in 1887 that the gamma function at least does not satisfy any "algebraic" differential equation by showing that a solution to such an equation could not satisfy the gamma function's recurrence formula, making it a transcendentally transcendental function. This result is known as Hölder's theorem. A definite and generally applicable characterization of the gamma function was not given until 1922. Harald Bohr and Johannes Mollerup then proved what is known as the "Bohr–Mollerup theorem": that the gamma function is the unique solution to the factorial recurrence relation that is positive and "logarithmically convex" for positive z and whose value at 1 is 1 (a function is logarithmically convex if its logarithm is convex). Another characterisation is given by the Wielandt theorem. The Bohr–Mollerup theorem is useful because it is relatively easy to prove logarithmic convexity for any of the different formulas used to define the gamma function. Taking things further, instead of defining the gamma function by any particular formula, we can choose the conditions of the Bohr–Mollerup theorem as the definition, and then pick any formula we like that satisfies the conditions as a starting point for studying the gamma function. This approach was used by the Bourbaki group. Borwein &amp; Corless review three centuries of work on the gamma function. Reference tables and software. 
Although the gamma function can be calculated virtually as easily as any mathematically simpler function with a modern computer—even with a programmable pocket calculator—this was of course not always the case. Until the mid-20th century, mathematicians relied on hand-made tables; in the case of the gamma function, notably a table computed by Gauss in 1813 and one computed by Legendre in 1825. Tables of complex values of the gamma function, as well as hand-drawn graphs, were given in "Tables of Functions With Formulas and Curves" by Jahnke and Emde, first published in Germany in 1909. According to Michael Berry, "the publication in J&E of a three-dimensional graph showing the poles of the gamma function in the complex plane acquired an almost iconic status." There was in fact little practical need for anything but real values of the gamma function until the 1930s, when applications for the complex gamma function were discovered in theoretical physics. As electronic computers became available for the production of tables in the 1950s, several extensive tables for the complex gamma function were published to meet the demand, including a table accurate to 12 decimal places from the U.S. National Bureau of Standards. Double-precision floating-point implementations of the gamma function and its logarithm are now available in most scientific computing software and special functions libraries, for example TK Solver, Matlab, GNU Octave, and the GNU Scientific Library. The gamma function was also added to the C standard library (math.h). Arbitrary-precision implementations are available in most computer algebra systems, such as Mathematica and Maple. PARI/GP, MPFR and MPFUN contain free arbitrary-precision implementations. In some software calculators, e.g. Windows Calculator and GNOME Calculator, the factorial function returns Γ("x" + 1) when the input "x" is a non-integer value.
[ { "math_id": 0, "text": "\\Gamma(n) = (n-1)!\\,." }, { "math_id": 1, "text": " \\Gamma(z) = \\int_0^\\infty t^{z-1} e^{-t}\\text{ d}t, \\ \\qquad \\Re(z) > 0\\,." }, { "math_id": 2, "text": " \\Gamma(z) = \\mathcal M \\{e^{-x} \\} (z)\\,." }, { "math_id": 3, "text": "y=f(x)" }, { "math_id": 4, "text": "(x,y) = (n, n!) " }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "f(x) = \\Gamma(x+1) " }, { "math_id": 7, "text": "k\\sin(m\\pi x)" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "f(n) = (n{-}1)! " }, { "math_id": 10, "text": "f(x+1) = x f(x)\\ \\text{ for any } x>0, \\qquad f(1) = 1." }, { "math_id": 11, "text": "g(x)" }, { "math_id": 12, "text": "g(x) = g(x+1)" }, { "math_id": 13, "text": "g(0)=1" }, { "math_id": 14, "text": "g(x) = e^{k\\sin(m\\pi x)}" }, { "math_id": 15, "text": "f(x) = \\Gamma(x)" }, { "math_id": 16, "text": "y = \\log f(x) " }, { "math_id": 17, "text": "\\log" }, { "math_id": 18, "text": "\\Gamma (z)" }, { "math_id": 19, "text": "\\Re (z) > 0" }, { "math_id": 20, "text": " \\Gamma(z) = \\int_0^\\infty t^{z-1} e^{-t}\\, dt" }, { "math_id": 21, "text": "\\begin{align}\n \\Gamma(z+1) & = \\int_0^\\infty t^{z} e^{-t} \\, dt \\\\\n&= \\Bigl[-t^z e^{-t}\\Bigr]_0^\\infty + \\int_0^\\infty z t^{z-1} e^{-t}\\, dt \\\\\n&= \\lim_{t\\to \\infty}\\left(-t^z e^{-t}\\right) - \\left(-0^z e^{-0}\\right) + z\\int_0^\\infty t^{z-1} e^{-t}\\, dt.\n\\end{align}" }, { "math_id": 22, "text": "-t^z e^{-t}\\to 0" }, { "math_id": 23, "text": "t\\to \\infty," }, { "math_id": 24, "text": "\\begin{align}\n \\Gamma(z+1) & = z\\int_0^\\infty t^{z-1} e^{-t}\\, dt \\\\\n &= z\\Gamma(z).\n\\end{align}" }, { "math_id": 25, "text": "\\Gamma(1)" }, { "math_id": 26, "text": "\\begin{align}\n \\Gamma(1) & = \\int_0^\\infty t^{1-1} e^{-t}\\,dt \\\\\n & = \\int_0^\\infty e^{-t} \\, dt \\\\\n & = 1.\n\\end{align}" }, { "math_id": 27, "text": "\\Gamma(n) = (n-1)!" }, { "math_id": 28, "text": "\\Gamma(1) = 1 = 0!" }, { "math_id": 29, "text": "\\Gamma(n+1) = n\\Gamma(n) = n(n-1)! = n!." }, { "math_id": 30, "text": "\\Gamma(z) = \\frac {\\Gamma(z + 1)} {z}" }, { "math_id": 31, "text": "\\lim_{n \\to \\infty} \\frac{n! \\, \\left(n+1\\right)^m}{(n+m)!} = 1\\,." }, { "math_id": 32, "text": "z" }, { "math_id": 33, "text": "\\lim_{n \\to \\infty} \\frac{n! \\, \\left(n+1\\right)^z}{(n+z)!} = 1\\,." }, { "math_id": 34, "text": "(z-1)!" }, { "math_id": 35, "text": "\\begin{align}\n\\Gamma(z)\n &= (z-1)! \\\\[8pt]\n &= \\frac{1}{z} \\lim_{n \\to \\infty} n!\\frac{z!}{(n+z)!} (n+1)^z \\\\[8pt]\n &= \\frac{1}{z} \\lim_{n \\to \\infty} (1 \\cdots n)\\frac{1}{(1+z) \\cdots (n+z)} \\left(\\frac{2}{1} \\cdot \\frac{3}{2} \\cdots \\frac{n+1}{n}\\right)^z \\\\[8pt]\n &= \\frac{1}{z} \\prod_{n=1}^\\infty \\left[ \\frac{1}{1+\\frac{z}{n}} \\left(1 + \\frac{1}{n}\\right)^z \\right].\n\\end{align}" }, { "math_id": 36, "text": "\\Gamma(z)" }, { "math_id": 37, "text": "\\Gamma(n+1)=n!" 
}, { "math_id": 38, "text": "(n+1)^z" }, { "math_id": 39, "text": "\\Gamma(n+z+1)" }, { "math_id": 40, "text": "\\Gamma(x+1) = x \\Gamma(x)" }, { "math_id": 41, "text": "n+1" }, { "math_id": 42, "text": "\\frac{1}{\\Gamma(z)} = z \\prod_{n=1}^\\infty \\left[ \\left(1+\\frac{z}{n}\\right) / {\\left(1 + \\frac{1}{n}\\right)^z} \\right]" }, { "math_id": 43, "text": "\\Gamma(z) = \\frac{e^{-\\gamma z}} z \\prod_{n=1}^\\infty \\left(1 + \\frac z n \\right)^{-1} e^{z/n}," }, { "math_id": 44, "text": "\\gamma \\approx 0.577216" }, { "math_id": 45, "text": "1/\\Gamma(z)" }, { "math_id": 46, "text": "k" }, { "math_id": 47, "text": "\\Gamma(z+1) = z\\ \\Gamma(z)" }, { "math_id": 48, "text": "\\Gamma(1-z) \\Gamma(z) = \\frac{\\pi}{\\sin \\pi z}, \\qquad z \\not\\in \\Z" }, { "math_id": 49, "text": "\\Gamma(z - n) = (-1)^{n-1} \\; \\frac{\\Gamma(-z) \\Gamma(1+z)}{\\Gamma(n+1-z)}, \\qquad n \\in \\Z" }, { "math_id": 50, "text": "\\Gamma(z) \\Gamma\\left(z + \\tfrac12\\right) = 2^{1-2z} \\; \\sqrt{\\pi} \\; \\Gamma(2z)." }, { "math_id": 51, "text": "\\prod_{k=0}^{m-1}\\Gamma\\left(z + \\frac{k}{m}\\right) = (2 \\pi)^{\\frac{m-1}{2}} \\; m^{\\frac12 - mz} \\; \\Gamma(mz)." }, { "math_id": 52, "text": "\\overline{\\Gamma(z)} = \\Gamma(\\overline{z}) \\; \\Rightarrow \\; \\Gamma(z)\\Gamma(\\overline{z}) \\in \\mathbb{R} ." }, { "math_id": 53, "text": "|\\Gamma(a+bi)|^2 = |\\Gamma(a)|^2 \\prod_{k=0}^\\infty \\frac{1}{1+\\frac{b^2}{(a+k)^2}}" }, { "math_id": 54, "text": "\n\\begin{align}\n|\\Gamma(bi)|^2 & = \\frac{\\pi}{b\\sinh \\pi b} \\\\[1ex]\n\\left|\\Gamma\\left(\\tfrac{1}{2}+bi\\right)\\right|^2 & = \\frac{\\pi}{\\cosh \\pi b} \\\\[1ex]\n\\left|\\Gamma\\left(1+bi\\right)\\right|^2 & = \\frac{\\pi b}{\\sinh \\pi b} \\\\[1ex]\n\\left|\\Gamma\\left(1+n+bi\\right)\\right|^2 & = \\frac{\\pi b}{\\sinh \\pi b} \\prod_{k=1}^n \\left(k^2 + b^2 \\right), \\quad n \\in \\N \\\\[1ex]\n\\left|\\Gamma\\left(-n+bi\\right)\\right|^2 & = \\frac{\\pi}{b \\sinh \\pi b} \\prod_{k=1}^n \\left(k^2 + b^2 \\right)^{-1}, \\quad n \\in \\N \\\\[1ex]\n\\left|\\Gamma\\left(\\tfrac{1}{2} \\pm n+bi\\right)\\right|^2 & = \\frac{\\pi}{\\cosh \\pi b} \\prod_{k=1}^n \\left(\\left( k-\\tfrac{1}{2}\\right)^2 + b^2 \\right)^{\\pm 1}, \\quad n \\in \\N\n\\\\[-1ex]&\n\\end{align}\n" }, { "math_id": 55, "text": "\\Gamma\\left(\\tfrac12\\right)=\\sqrt{\\pi}," }, { "math_id": 56, "text": "z = \\frac{1}{2}" }, { "math_id": 57, "text": "z_1 = z_2 = \\frac{1}{2}" }, { "math_id": 58, "text": "u = \\sqrt{z}" }, { "math_id": 59, "text": "\\begin{align}\n\\Gamma\\left(\\tfrac 1 2 + n\\right) &= {(2n)! \\over 4^n n!} \\sqrt{\\pi} = \\frac{(2n-1)!!}{2^n} \\sqrt{\\pi} = \\binom{n-\\frac{1}{2}}{n} n! \\sqrt{\\pi} \\\\[8pt]\n\\Gamma\\left(\\tfrac 1 2 - n\\right) &= {(-4)^n n! \\over (2n)!} \\sqrt{\\pi} = \\frac{(-2)^n}{(2n-1)!!} \\sqrt{\\pi} = \\frac{\\sqrt{\\pi}}{\\binom{-1/2}{n} n!}\n\\end{align}" }, { "math_id": 60, "text": "(2n-1)!! = (2n-1)(2n-3)\\cdots(3)(1)" }, { "math_id": 61, "text": "\\Gamma \\left( \\frac{1}{2} \\right) = \\sqrt\\pi" }, { "math_id": 62, "text": "\\Gamma(r)" }, { "math_id": 63, "text": "r" }, { "math_id": 64, "text": "\\Gamma (n + r)" }, { "math_id": 65, "text": "\\pi" }, { "math_id": 66, "text": "r = \\frac{1}{6}, \\frac{1}{4}, \\frac{1}{3}, \\frac{2}{3}, \\frac{3}{4}, \\frac{5}{6}" }, { "math_id": 67, "text": "\\Gamma'(z)=\\Gamma(z)\\psi^{(0)}(z)." }, { "math_id": 68, "text": "\\Gamma'(m+1) = m! \\left( - \\gamma + \\sum_{k=1}^m\\frac{1}{k} \\right)= m! 
\\left( - \\gamma + H(m) \\right)\\,," }, { "math_id": 69, "text": "\\Re(z) > 0" }, { "math_id": 70, "text": "\\frac{d^n}{dz^n}\\Gamma(z) = \\int_0^\\infty t^{z-1} e^{-t} (\\log t)^n \\, dt." }, { "math_id": 71, "text": "\\Gamma^{(n)}(1)=(-1)^n B_n(\\gamma, 1! \\zeta(2), \\ldots, (n-1)! \\zeta(n))" }, { "math_id": 72, "text": "\\zeta(z)" }, { "math_id": 73, "text": "B_n" }, { "math_id": 74, "text": "\\Gamma(z) = \\frac1z - \\gamma + \\frac12\\left(\\gamma^2 + \\frac{\\pi^2}6\\right)z - \\frac16\\left(\\gamma^3 + \\frac{\\gamma\\pi^2}2 + 2 \\zeta(3)\\right)z^2 + O(z^3)." }, { "math_id": 75, "text": "x_1" }, { "math_id": 76, "text": "x_2" }, { "math_id": 77, "text": "t \\in [0, 1]" }, { "math_id": 78, "text": "\\Gamma(tx_1 + (1 - t)x_2) \\le \\Gamma(x_1)^t\\Gamma(x_2)^{1 - t}." }, { "math_id": 79, "text": " \\left(\\frac{\\Gamma(x_2)}{\\Gamma(x_1)}\\right)^{\\frac{1}{x_2 - x_1}} > \\exp\\left(\\frac{\\Gamma'(x_1)}{\\Gamma(x_1)}\\right)." }, { "math_id": 80, "text": "x" }, { "math_id": 81, "text": " \\Gamma''(x) \\Gamma(x) > \\Gamma'(x)^2." }, { "math_id": 82, "text": "\\psi^{(1)}(x) > 0" }, { "math_id": 83, "text": "\\psi^{(1)}" }, { "math_id": 84, "text": "x_1, \\ldots, x_n" }, { "math_id": 85, "text": "a_1, \\ldots, a_n" }, { "math_id": 86, "text": "\\Gamma\\left(\\frac{a_1x_1 + \\cdots + a_nx_n}{a_1 + \\cdots + a_n}\\right) \\le \\bigl(\\Gamma(x_1)^{a_1} \\cdots \\Gamma(x_n)^{a_n}\\bigr)^{\\frac{1}{a_1 + \\cdots + a_n}}." }, { "math_id": 87, "text": "x^{1 - s} < \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < \\left(x + 1\\right)^{1 - s}." }, { "math_id": 88, "text": "\\Gamma(x)" }, { "math_id": 89, "text": "\\Gamma(x+1)\\sim\\sqrt{2\\pi x}\\left(\\frac{x}{e}\\right)^x," }, { "math_id": 90, "text": "\\sim" }, { "math_id": 91, "text": "\\exp(\\beta x)" }, { "math_id": 92, "text": "\\beta" }, { "math_id": 93, "text": "x \\to + \\infty" }, { "math_id": 94, "text": " {\\Gamma(x+\\alpha)}\\sim{\\Gamma(x)x^\\alpha}, \\qquad \\alpha \\in \\Complex. " }, { "math_id": 95, "text": " \\Gamma\\left(x\\right)=\\sqrt{\\frac{2\\pi}{x}}\\left( \\frac{x}{e} \\right)^{x}\\prod_{n=0}^{\\infty}e^{-1}\\left(1+\\frac{1}{x+n}\\right)^{\\frac{1}{2}+x+n} " }, { "math_id": 96, "text": "\\Re(z) \\le 0" }, { "math_id": 97, "text": "\\Gamma(z)=\\frac{\\Gamma(z+n+1)}{z(z+1)\\cdots(z+n)}," }, { "math_id": 98, "text": "z + n" }, { "math_id": 99, "text": "0, -1, -2, \\ldots" }, { "math_id": 100, "text": "f" }, { "math_id": 101, "text": "c" }, { "math_id": 102, "text": "\\operatorname{Res}(f,c)=\\lim_{z\\to c}(z-c)f(z)." }, { "math_id": 103, "text": "z = -n," }, { "math_id": 104, "text": "(z+n) \\Gamma(z)=\\frac{\\Gamma(z+n+1)}{z(z+1)\\cdots(z+n-1)}." }, { "math_id": 105, "text": "\\Gamma(z+n+1) = \\Gamma(1) = 1" }, { "math_id": 106, "text": "z(z+1)\\cdots(z+n-1) = -n(1-n)\\cdots(n-1-n) = (-1)^n n!." }, { "math_id": 107, "text": "\\operatorname{Res}(\\Gamma,-n)=\\frac{(-1)^n}{n!}." 
}, { "math_id": 108, "text": "\\Gamma (z) = 0" }, { "math_id": 109, "text": "\\frac {1}{\\Gamma (z)}" }, { "math_id": 110, "text": "z = 0, -1, -2, \\ldots" }, { "math_id": 111, "text": "\\Gamma (z)=\\int_{-\\infty}^\\infty e^{zt-e^t}\\, dt" }, { "math_id": 112, "text": "\\Gamma(z) = \\int_0^1 \\left(\\log \\frac{1}{t}\\right)^{z-1}\\,dt," }, { "math_id": 113, "text": "\\Gamma(z) = 2\\int_{0}^{\\infty}t^{2z-1}e^{-t^{2}}\\,dt" }, { "math_id": 114, "text": "t=e^{-x}" }, { "math_id": 115, "text": "t=-\\log x" }, { "math_id": 116, "text": "t=x^2" }, { "math_id": 117, "text": "z=1/2" }, { "math_id": 118, "text": "\\Gamma(1/2)=\\sqrt{\\pi}=2\\int_{0}^{\\infty}e^{-t^{2}}\\,dt" }, { "math_id": 119, "text": "\\operatorname{log\\Gamma}(z) = \\left(z - \\frac{1}{2}\\right)\\log z - z + \\frac{1}{2}\\log (2\\pi) + \\int_0^\\infty \\left(\\frac{1}{2} - \\frac{1}{t} + \\frac{1}{e^t - 1}\\right)\\frac{e^{-tz}}{t}\\,dt." }, { "math_id": 120, "text": "\\log\\left(\\Gamma(z)\\left(\\frac{e}{z}\\right)^z\\sqrt{\\frac{z}{2\\pi}}\\right) = \\mathcal{L}\\left(\\frac{1}{2t} - \\frac{1}{t^2} + \\frac{1}{t(e^t - 1)}\\right)(z)." }, { "math_id": 121, "text": "\\operatorname{log\\Gamma}(z) = \\left(z - \\frac{1}{2}\\right)\\log z - z + \\frac{1}{2}\\log(2\\pi) + 2\\int_0^\\infty \\frac{\\arctan(t/z)}{e^{2\\pi t} - 1}\\,dt." }, { "math_id": 122, "text": "\\log(-t)" }, { "math_id": 123, "text": "\\Gamma(z) = -\\frac{1}{2i\\sin \\pi z}\\int_C (-t)^{z-1}e^{-t}\\,dt," }, { "math_id": 124, "text": "(-t)^{z-1}" }, { "math_id": 125, "text": "\\exp((z-1)\\log(-t))" }, { "math_id": 126, "text": "\\frac{1}{\\Gamma(z)} = \\frac{i}{2\\pi}\\int_C (-t)^{-z}e^{-t}\\,dt," }, { "math_id": 127, "text": "\\begin{aligned}\n \\Gamma (z) &= \\cfrac{e^{-1}}{\n 2 + 0 - z + 1\\cfrac{z-1}{\n 2 + 2 - z + 2\\cfrac{z-2}{\n 2 + 4 - z + 3\\cfrac{z-3}{\n 2 + 6 - z + 4\\cfrac{z-4}{\n 2 + 8 - z + 5\\cfrac{z-5}{\n 2 + 10 - z + \\ddots\n }\n }\n }\n }\n }\n } \\\\\n &+\\ \\cfrac{e^{-1}}{\n z + 0 - \\cfrac{z+0}{\n z + 1 + \\cfrac{1}{\n z + 2 - \\cfrac{z+1}{\n z + 3 + \\cfrac{2}{\n z + 4 - \\cfrac{z+2}{\n z + 5 + \\cfrac{3}{\n z + 6 - \\ddots\n }\n }\n }\n }\n }\n }\n }\n\\end{aligned}" }, { "math_id": 128, "text": "z\\in\\mathbb{C}" }, { "math_id": 129, "text": "0 < z < 1:" }, { "math_id": 130, "text": "\\operatorname{log\\Gamma}(z) = \\left(\\frac{1}{2} - z\\right)(\\gamma + \\log 2) + (1 - z)\\log\\pi - \\frac{1}{2}\\log\\sin(\\pi z) + \\frac{1}{\\pi}\\sum_{n=1}^\\infty \\frac{\\log n}{n} \\sin (2\\pi n z)," }, { "math_id": 131, "text": "\\int_a^{a+1}\\operatorname{log\\Gamma}(z)\\, dz = \\tfrac12\\log2\\pi + a\\log a - a,\\quad a>0." }, { "math_id": 132, "text": "a = 0" }, { "math_id": 133, "text": "\\int_0^1\\operatorname{log\\Gamma}(z)\\, dz = \\tfrac12\\log2\\pi." }, { "math_id": 134, "text": "a \\to \\infty" }, { "math_id": 135, "text": "\\Pi" }, { "math_id": 136, "text": "\\Pi(z) = \\Gamma(z+1) = z \\Gamma(z) = \\int_0^\\infty e^{-t} t^z\\, dt," }, { "math_id": 137, "text": "\\Pi(n) = n!" }, { "math_id": 138, "text": "\\Pi(z) \\Pi(-z) = \\frac{\\pi z}{\\sin( \\pi z)} = \\frac{1}{\\operatorname{sinc}(z)}" }, { "math_id": 139, "text": "\\Pi\\left(\\frac{z}{m}\\right) \\, \\Pi\\left(\\frac{z-1}{m}\\right) \\cdots \\Pi\\left(\\frac{z-m+1}{m}\\right) = (2 \\pi)^{\\frac{m-1}{2}} m^{-z-\\frac12} \\Pi(z)\\ ." 
}, { "math_id": 140, "text": "\\pi(z) = \\frac{1}{\\Pi(z)}\\ ," }, { "math_id": 141, "text": "\\pi(z)" }, { "math_id": 142, "text": "\\Pi\\left(z\\right)" }, { "math_id": 143, "text": "\\Gamma\\left(z\\right)" }, { "math_id": 144, "text": "V_n(r_1,\\dotsc,r_n)=\\frac{\\pi^{\\frac{n}{2}}}{\\Pi\\left(\\frac{n}{2}\\right)} \\prod_{k=1}^n r_k." }, { "math_id": 145, "text": "\\Beta(z_1,z_2) = \\int_0^1 t^{z_1-1}(1-t)^{z_2-1}\\,dt = \\frac{\\Gamma(z_1)\\,\\Gamma(z_2)}{\\Gamma(z_1+z_2)}." }, { "math_id": 146, "text": "\\zeta (z)" }, { "math_id": 147, "text": "\\pi^{-\\frac{z}{2}} \\; \\Gamma\\left(\\frac{z}{2}\\right) \\zeta(z) = \\pi^{-\\frac{1-z}{2}} \\; \\Gamma\\left(\\frac{1-z}{2}\\right) \\; \\zeta(1-z)." }, { "math_id": 148, "text": "\\zeta(z) \\Gamma(z) = \\int_0^\\infty \\frac{u^{z}}{e^u - 1} \\, \\frac{du}{u}," }, { "math_id": 149, "text": "\\Re (z) > 1" }, { "math_id": 150, "text": "\\operatorname{log\\Gamma}(z) = \\zeta_H'(0,z) - \\zeta'(0)," }, { "math_id": 151, "text": "\\zeta_H" }, { "math_id": 152, "text": "\\zeta" }, { "math_id": 153, "text": "\\langle\\tau^n\\rangle \\equiv \\int_0^\\infty t^{n-1}\\, e^{ - \\left( \\frac{t}{\\tau} \\right)^\\beta} \\, \\mathrm{d}t = \\frac{\\tau^n}{\\beta}\\Gamma \\left({n \\over \\beta }\\right)." }, { "math_id": 154, "text": "\\begin{array}{rcccl}\n\\Gamma\\left(-\\tfrac{3}{2}\\right) &=& \\tfrac{4\\sqrt{\\pi}}{3} &\\approx& +2.36327\\,18012\\,07354\\,70306 \\\\\n\\Gamma\\left(-\\tfrac{1}{2}\\right) &=& -2\\sqrt{\\pi} &\\approx& -3.54490\\,77018\\,11032\\,05459 \\\\\n\\Gamma\\left(\\tfrac{1}{2}\\right) &=& \\sqrt{\\pi} &\\approx& +1.77245\\,38509\\,05516\\,02729 \\\\\n\\Gamma(1) &=& 0! &=& +1 \\\\\n\\Gamma\\left(\\tfrac{3}{2}\\right) &=& \\tfrac{\\sqrt{\\pi}}{2} &\\approx& +0.88622\\,69254\\,52758\\,01364 \\\\\n\\Gamma(2) &=& 1! &=& +1 \\\\\n\\Gamma\\left(\\tfrac{5}{2}\\right) &=& \\tfrac{3\\sqrt{\\pi}}{4} &\\approx& +1.32934\\,03881\\,79137\\,02047 \\\\\n\\Gamma(3) &=& 2! &=& +2 \\\\\n\\Gamma\\left(\\tfrac{7}{2}\\right) &=& \\tfrac{15\\sqrt{\\pi}}{8} &\\approx& +3.32335\\,09704\\,47842\\,55118 \\\\\n\\Gamma(4) &=& 3! &=& +6\n\\end{array}" }, { "math_id": 155, "text": "\\frac{1}{\\Gamma(-3)} = \\frac{1}{\\Gamma(-2)} = \\frac{1}{\\Gamma(-1)} = \\frac{1}{\\Gamma(0)} = 0." }, { "math_id": 156, "text": "\\operatorname{log\\Gamma}(z) = - \\gamma z - \\log z + \\sum_{k = 1}^\\infty \\left[ \\frac z k - \\log \\left( 1 + \\frac z k \\right) \\right]." }, { "math_id": 157, "text": " \\operatorname{log\\Gamma}(z) = \\operatorname{log\\Gamma}(z+1) - \\log z" }, { "math_id": 158, "text": " \\operatorname{log\\Gamma}(z) \\approx (z - \\tfrac{1}{2}) \\log z - z + \\tfrac{1}{2}\\log(2\\pi)." }, { "math_id": 159, "text": " \\operatorname{log\\Gamma}(z-m) = \\operatorname{log\\Gamma}(z) - \\sum_{k=1}^m \\log(z-k)." }, { "math_id": 160, "text": "\\Gamma(z)\\sim z^{z - \\frac12} e^{-z} \\sqrt{2\\pi} \\left( 1 + \\frac{1}{12z} + \\frac{1}{288z^2} - \\frac{139}{51\\,840 z^3} - \\frac{571}{2\\,488\\,320 z^4} \\right) " }, { "math_id": 161, "text": "\\operatorname{log\\Gamma}(z) = z \\log z - z - \\tfrac12 \\log z + \\tfrac12 \\log 2\\pi + \\frac{1}{12z} - \\frac{1}{360z^3} +\\frac{1}{1260 z^5} +o\\left(\\frac1{z^5}\\right)" }, { "math_id": 162, "text": "\\frac{B_k}{k(k-1)}" }, { "math_id": 163, "text": "\\operatorname{log\\Gamma}(1+x)=\\frac{x(x-1)}{2!} \\log(2)+\\frac{x(x-1)(x-2)}{3!} (\\log(3)-2\\log(2))+\\cdots,\\quad\\Re (x)> 0." 
}, { "math_id": 164, "text": "\\Gamma(1) = 1" }, { "math_id": 165, "text": "\\Gamma(z+1) = z \\Gamma(z)" }, { "math_id": 166, "text": "\\lim_{n \\to \\infty} \\frac{\\Gamma(n+z)}{\\Gamma(n)\\;n^z} = 1" }, { "math_id": 167, "text": "\\operatorname{log\\Gamma}(z+1)= -\\gamma z +\\sum_{k=2}^\\infty \\frac{\\zeta(k)}{k} \\, (-z)^k \\qquad \\forall\\; |z| < 1" }, { "math_id": 168, "text": "\\zeta(s) \\Gamma(s) = \\int_0^\\infty \\frac{t^s}{e^t-1} \\, \\frac{dt}{t}" }, { "math_id": 169, "text": "\\operatorname{log\\Gamma}(z+1)= -\\gamma z + \\int_0^\\infty \\frac{e^{-zt} - 1 + z t}{t \\left(e^t - 1\\right)} \\, dt " }, { "math_id": 170, "text": "\\operatorname{log\\Gamma}(z+1)= \\int_0^\\infty \\frac{e^{-zt} - ze^{-t} - 1 + z}{t \\left(e^t -1\\right)} \\, dt\\,. " }, { "math_id": 171, "text": "k<n" }, { "math_id": 172, "text": "k\\neq n/2 \\,," }, { "math_id": 173, "text": "\n\\begin{align}\n\\operatorname{log\\Gamma} \\left(\\frac{k}{n}\\right) = {} & \\frac{\\,(n-2k)\\log2\\pi\\,}{2n} + \\frac{1}{2}\\left\\{\\,\\log\\pi-\\log\\sin\\frac{\\pi k}{n} \\,\\right\\} + \\frac{1}{\\pi}\\!\\sum_{r=1}^{n-1}\\frac{\\,\\gamma+\\log r\\,}{r}\\cdot\\sin\\frac{\\,2\\pi r k\\,}{n} \\\\\n& {} - \\frac{1}{2\\pi}\\sin\\frac{2\\pi k}{n}\\cdot\\!\\int_0^\\infty \\!\\!\\frac{\\,e^{-nx}\\!\\cdot\\log x\\,}{\\,\\cosh x -\\cos( 2\\pi k/n )\\,}\\,{\\mathrm d}x.\n\\end{align}\n" }, { "math_id": 174, "text": " \\int_0^z \\operatorname{log\\Gamma} (x) \\, dx" }, { "math_id": 175, "text": "\\int_0^z \\operatorname{log\\Gamma}(x) \\, dx = \\frac{z}{2} \\log (2 \\pi) + \\frac{z(1-z)}{2} + z \\operatorname{log\\Gamma}(z) - \\log G(z+1)" }, { "math_id": 176, "text": "\\int_0^z \\operatorname{log\\Gamma}(x) \\, dx = \\frac{z}{2} \\log(2 \\pi) + \\frac{z(1-z)}{2} - \\zeta'(-1) + \\zeta'(-1,z) ." }, { "math_id": 177, "text": "z=1" }, { "math_id": 178, "text": " \\int_0^1 \\operatorname{log\\Gamma}(x) \\, dx = \\frac 1 2 \\log(2\\pi), " }, { "math_id": 179, "text": "\\operatorname{log\\Gamma}" }, { "math_id": 180, "text": "\\int_{0}^{1} \\log ^{2} \\Gamma(x) d x=\\frac{\\gamma^{2}}{12}+\\frac{\\pi^{2}}{48}+\\frac{1}{3} \\gamma L_{1}+\\frac{4}{3} L_{1}^{2}-\\left(\\gamma+2 L_{1}\\right) \\frac{\\zeta^{\\prime}(2)}{\\pi^{2}}+\\frac{\\zeta^{\\prime \\prime}(2)}{2 \\pi^{2}}," }, { "math_id": 181, "text": "L_1" }, { "math_id": 182, "text": "\\frac12\\log(2\\pi)" }, { "math_id": 183, "text": "L_n:=\\int_0^1 \\log^n \\Gamma(x) \\, dx" }, { "math_id": 184, "text": "n=1,2" }, { "math_id": 185, "text": "\n\\lim_{n\\to\\infty} \\frac{L_n}{n!}=1.\n" }, { "math_id": 186, "text": "\\Gamma(z) \\sim \\sqrt{2\\pi}z^{z-1/2}e^{-z}\\quad\\hbox{as }z\\to\\infty\\hbox{ in } \\left|\\arg(z)\\right|<\\pi." 
}, { "math_id": 187, "text": "\\operatorname{Re} (z) \\in [1, 2]" }, { "math_id": 188, "text": "\\begin{align}\n\\Gamma(z) &= \\int_0^x e^{-t} t^z \\, \\frac{dt}{t} + \\int_x^\\infty e^{-t} t^z\\, \\frac{dt}{t} \\\\\n&= x^z e^{-x} \\sum_{n=0}^\\infty \\frac{x^n}{z(z+1) \\cdots (z+n)} + \\int_x^\\infty e^{-t} t^z \\, \\frac{dt}{t}.\n\\end{align}" }, { "math_id": 189, "text": "x \\geq 1" }, { "math_id": 190, "text": "(x + 1)e^{-x}" }, { "math_id": 191, "text": "2^{-N}" }, { "math_id": 192, "text": "N" }, { "math_id": 193, "text": "\\Gamma(z+1) = z\\ \\Gamma(z)" }, { "math_id": 194, "text": "z <1 " }, { "math_id": 195, "text": "z>2" }, { "math_id": 196, "text": "1\\leq z \\leq 2" }, { "math_id": 197, "text": "f(t)e^{-g(t)}" }, { "math_id": 198, "text": "u:=a\\cdot t" }, { "math_id": 199, "text": "\\int_0^\\infty t^b e^{-at} \\,dt = \\frac{1}{a^b} \\int_0^\\infty u^b e^{-u} d\\left(\\frac{u}{a}\\right) = \\frac{\\Gamma(b+1)}{a^{b+1}}." }, { "math_id": 200, "text": "ae^{-\\frac{(x-b)^2}{c^2}}" }, { "math_id": 201, "text": "\\sqrt{\\pi}" }, { "math_id": 202, "text": "\\Gamma \\left( \\frac{1}{2} \\right)" }, { "math_id": 203, "text": "(1 + z)^n = \\sum_{k=0}^\\infty \\frac{\\Gamma(n+1)}{k!\\Gamma(n-k+1)} z^k," }, { "math_id": 204, "text": "(1 + z)^n = \\sum_{k=0}^n \\frac{n!}{k!(n-k)!} z^k = \\sum_{k=0}^n \\binom{n}{k} z^k." }, { "math_id": 205, "text": "\\prod_{i=a}^b \\frac{P(i)}{Q(i)} = \\left( \\prod_{j=1}^m \\frac{\\Gamma(b-p_j+1)}{\\Gamma(a-p_j)} \\right) \\left( \\prod_{k=1}^n \\frac{\\Gamma(a-q_k)}{\\Gamma(b-q_k+1)} \\right)." }, { "math_id": 206, "text": "\\Gamma\\left(\\frac{s}{2}\\right)\\zeta(s)\\pi^{-\\frac{s}{2}} = \\Gamma\\left(\\frac{1-s}{2}\\right)\\zeta(1-s)\\pi^{-\\frac{1-s}{2}}." }, { "math_id": 207, "text": "\\zeta(s) \\; \\Gamma(s) = \\int_0^\\infty \\frac{t^s}{e^t-1} \\, \\frac{dt}{t}." }, { "math_id": 208, "text": "x!=\\lim_{n\\to\\infty}\\left(n+1+\\frac{x}{2}\\right)^{x-1}\\prod_{k=1}^{n}\\frac{k+1}{k+x}" }, { "math_id": 209, "text": "n! = \\prod_{k=1}^\\infty \\frac{\\left(1+\\frac{1}{k}\\right)^n}{1+\\frac{n}{k}}\\,," }, { "math_id": 210, "text": "n!=\\int_0^1 (-\\log s)^n\\, ds\\,," }, { "math_id": 211, "text": "\\Re (n) > -1" }, { "math_id": 212, "text": "\\Gamma(z) = \\lim_{m\\to\\infty}\\frac{m^z m!}{z(z+1)(z+2)\\cdots(z+m)}" }, { "math_id": 213, "text": "\\Gamma(z) = \\frac{e^{-\\gamma z}}{z} \\prod_{k=1}^\\infty \\left(1 + \\frac{z}{k}\\right)^{-1} e^\\frac{z}{k}," }, { "math_id": 214, "text": "\\frac{dx}{x}" } ]
https://en.wikipedia.org/wiki?curid=12316
12316009
Nemytskii operator
In mathematics, Nemytskii operators are a class of nonlinear operators on "L""p" spaces with good continuity and boundedness properties. They take their name from the mathematician Viktor Vladimirovich Nemytskii. General definition of the superposition operator. Let formula_0 be non-empty sets, and let formula_1 denote the sets of mappings from formula_2 with values in formula_3 and formula_4, respectively. The Nemytskii superposition operator formula_5 is the mapping induced by a function formula_6: for any function formula_7, its image is given by the rule formula_8 The function formula_9 is called the generator of the Nemytskii operator formula_10. Definition of Nemytskii operator. Let Ω be a domain (an open and connected set) in "n"-dimensional Euclidean space. A function "f" : Ω × R"m" → R is said to satisfy the Carathéodory conditions if it is measurable in "x" for every fixed "u" ∈ R"m" and continuous in "u" for almost every "x" ∈ Ω. Given a function "f" satisfying the Carathéodory conditions and a function "u" : Ω → R"m", define a new function "F"("u") : Ω → R by formula_11 The function "F" is called a Nemytskii operator. Theorem on Lipschitzian operators. Suppose that formula_12, formula_13 and formula_14, where the operator formula_10 is defined as formula_15 formula_16 for any function formula_17 and any formula_18. Under these conditions the operator formula_10 is Lipschitz continuous if and only if there exist functions formula_19 such that formula_20 Boundedness theorem. Let Ω be a domain, let 1 < "p" < +∞ and let "g" ∈ "L""q"(Ω; R), with formula_21 Suppose that "f" satisfies the Carathéodory conditions and that, for some constant "C" and all "x" and "u", formula_22 Then the Nemytskii operator "F" as defined above is a bounded and continuous map from "L""p"(Ω; R"m") into "L""q"(Ω; R).
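As an informal illustration of these definitions and of the boundedness theorem (this sketch, including the particular generator and all helper names, is ours and not part of the article), one can sample a generator and a test function on the one-dimensional domain Ω = (0, 1) and estimate the relevant L2 norms numerically:

```python
# Informal numerical illustration (our sketch; the generator f and all helper names
# are chosen here for the example) of a Nemytskii operator on Omega = (0, 1) with
# p = q = 2.  The generator satisfies the Carathéodory conditions and the growth
# bound |f(x, u)| <= C |u|^(p-1) + g(x) with C = 1 and g(x) = x**2, so the
# boundedness theorem guarantees that F maps L^2(0, 1) into L^2(0, 1).
import numpy as np

def f(x, u):
    return np.sin(u) + x ** 2          # |sin(u) + x^2| <= |u| + x^2

def nemytskii(generator, u):
    """Return F(u), i.e. the function x -> generator(x, u(x))."""
    return lambda x: generator(x, u(x))

def l2_norm(h, grid):
    """Crude Riemann-sum approximation of the L^2(0, 1) norm."""
    values = h(grid)
    return float(np.sqrt(np.mean(values ** 2)))

grid = np.linspace(0.0, 1.0, 10_001)[1:]   # avoid x = 0, where u below blows up
u = lambda x: x ** (-0.25)                 # unbounded near 0, but still in L^2(0, 1)
F_u = nemytskii(f, u)

print("||u||_2           ~", l2_norm(u, grid))
print("||F(u)||_2        ~", l2_norm(F_u, grid))   # finite, as the theorem predicts
# a crude upper bound via the pointwise growth estimate and the triangle inequality
print("||u||_2 + ||g||_2 ~", l2_norm(u, grid) + l2_norm(lambda x: x ** 2, grid))
```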
[ { "math_id": 0, "text": "\\mathbb{X},\\ \\mathbb{Y},\\ \\mathbb{Z} \\neq \\varnothing" }, { "math_id": 1, "text": "\\mathbb{Y}^ \\mathbb{X},\\ \\mathbb{Z}^\\mathbb{X}" }, { "math_id": 2, "text": "\\mathbb{X}" }, { "math_id": 3, "text": "\\mathbb{Y}" }, { "math_id": 4, "text": "\\mathbb{Z}" }, { "math_id": 5, "text": "H\\ \\colon \\mathbb{Y}^\\mathbb{X} \\to \\mathbb{Z}^\\mathbb{X}" }, { "math_id": 6, "text": "h\\ \\colon \\mathbb{X} \\times \\mathbb{Y} \\to \\mathbb{Z}" }, { "math_id": 7, "text": "\\varphi \\in \\mathbb{Y}^\\mathbb{X}" }, { "math_id": 8, "text": "(H\\varphi)(x) = h(x, \\varphi(x)) \\in \\mathbb{Z}, \\quad \\mbox{for all}\\ x\\in \\mathbb{X}." }, { "math_id": 9, "text": "h" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "F(u)(x) = f \\big( x, u(x) \\big)." }, { "math_id": 12, "text": "h: [a, b] \\times \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 13, "text": "X = \\text{Lip} [a, b]" }, { "math_id": 14, "text": "H: \\text{Lip} [a, b] \\to \\text{Lip} [a, b]" }, { "math_id": 15, "text": "\\left( Hf \\right) \\left(x\\right)" }, { "math_id": 16, "text": "= h(x, f(x))" }, { "math_id": 17, "text": "f : [a,b] \\to \\mathbb{R}" }, { "math_id": 18, "text": "x \\in [a,b]" }, { "math_id": 19, "text": "G, H \\in \\text{Lip} [a, b]" }, { "math_id": 20, "text": "h(x, y) = G(x)y + H(x), \\quad x \\in [a, b], \\quad y \\in \\mathbb{R}." }, { "math_id": 21, "text": "\\frac1{p} + \\frac1{q} = 1." }, { "math_id": 22, "text": "\\big| f(x, u) \\big| \\leq C | u |^{p - 1} + g(x)." } ]
https://en.wikipedia.org/wiki?curid=12316009
12316417
10-simplex
Convex regular 10-polytope In geometry, a 10-simplex is a self-dual regular 10-polytope. It has 11 vertices, 55 edges, 165 triangle faces, 330 tetrahedral cells, 462 5-cell 4-faces, 462 5-simplex 5-faces, 330 6-simplex 6-faces, 165 7-simplex 7-faces, 55 8-simplex 8-faces, and 11 9-simplex 9-faces. Its dihedral angle is cos−1(1/10), or approximately 84.26°. It can also be called a hendecaxennon, or hendeca-10-tope, as an 11-facetted polytope in 10-dimensions. The name "hendecaxennon" is derived from "hendeca" for 11 facets in Greek and -xenn (variation of ennea for nine), having 9-dimensional facets, and "-on". Coordinates. The Cartesian coordinates of the vertices of an origin-centered regular 10-simplex having edge length 2 are: formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8 formula_9 More simply, the vertices of the "10-simplex" can be positioned in 11-space as permutations of (0,0,0,0,0,0,0,0,0,0,1). This construction is based on facets of the 11-orthoplex. Related polytopes. The 2-skeleton of the 10-simplex is topologically related to the 11-cell abstract regular polychoron which has the same 11 vertices, 55 edges, but only 1/3 the faces (55).
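The permutation construction just described is easy to verify computationally. The following sketch (ours, purely illustrative) takes the 11 permutations of (0, …, 0, 1) — the standard basis vectors of 11-dimensional space, which give edge length √2 rather than 2 — checks that every pair of vertices is equidistant, and reproduces the element counts listed above as binomial coefficients:

```python
# Illustrative check (ours): the 11 vertices of a regular 10-simplex can be taken
# as the standard basis vectors of R^11 (the permutations of (0, ..., 0, 1)),
# which gives edge length sqrt(2).  All C(11, 2) = 55 vertex pairs are then
# equidistant, and the number of k-faces is C(11, k + 1).
import itertools
import math

vertices = [tuple(1 if i == j else 0 for i in range(11)) for j in range(11)]

# every pair of vertices is at the same distance sqrt(2): the simplex is regular
distances = {math.dist(a, b) for a, b in itertools.combinations(vertices, 2)}
assert len(distances) == 1 and math.isclose(distances.pop(), math.sqrt(2))

# element counts: 11 vertices, 55 edges, 165 triangles, ..., 11 facets (9-simplexes)
counts = [math.comb(11, k + 1) for k in range(10)]
assert counts == [11, 55, 165, 330, 462, 462, 330, 165, 55, 11]

# dihedral angle arccos(1/10), approximately 84.26 degrees, as stated above
assert math.isclose(math.degrees(math.acos(1 / 10)), 84.26, abs_tol=0.01)
```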
[ { "math_id": 0, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ \\sqrt{1/3},\\ \\pm1\\right)" }, { "math_id": 1, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ \\sqrt{1/6},\\ -2\\sqrt{1/3},\\ 0\\right)" }, { "math_id": 2, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ \\sqrt{1/10},\\ -\\sqrt{3/2},\\ 0,\\ 0\\right)" }, { "math_id": 3, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ \\sqrt{1/15},\\ -2\\sqrt{2/5},\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 4, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ \\sqrt{1/21},\\ -\\sqrt{5/3},\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 5, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ \\sqrt{1/28},\\ -\\sqrt{12/7},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 6, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ 1/6,\\ -\\sqrt{7/4},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 7, "text": "\\left(\\sqrt{1/55},\\ \\sqrt{1/45},\\ -4/3,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 8, "text": "\\left(\\sqrt{1/55},\\ -3\\sqrt{1/5},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" }, { "math_id": 9, "text": "\\left(-\\sqrt{20/11},\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0,\\ 0\\right)" } ]
https://en.wikipedia.org/wiki?curid=12316417