620712
Landau–Ramanujan constant
In mathematics and the field of number theory, the Landau–Ramanujan constant is the positive real number "b" that occurs in a theorem proved by Edmund Landau in 1908, stating that for large formula_0, the number of positive integers below formula_0 that are the sum of two square numbers behaves asymptotically as formula_1 This constant "b" was rediscovered in 1913 by Srinivasa Ramanujan, in the first letter he wrote to G.H. Hardy. Sums of two squares. By the sum of two squares theorem, the numbers that can be expressed as a sum of two squares of integers are the ones for which each prime number congruent to 3 mod 4 appears with an even exponent in their prime factorization. For instance, "45 = 9 + 36" is a sum of two squares; in its prime factorization, 3² × 5, the prime 3 appears with an even exponent, and the prime 5 is congruent to 1 mod 4, so its exponent can be odd. Landau's theorem states that if formula_2 is the number of positive integers less than formula_0 that are the sum of two squares, then formula_3 (sequence in the OEIS), where formula_4 is the Landau–Ramanujan constant. The Landau–Ramanujan constant can also be written as an infinite product: formula_5 History. This constant was stated by Landau in the limit form above; Ramanujan instead approximated formula_2 as an integral, with the same constant of proportionality, and with a slowly growing error term. References. <templatestyles src="Reflist/styles.css" />
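The infinite-product form above lends itself to a quick numerical evaluation. The following is a minimal sketch (not taken from the article's references): it truncates the product over primes congruent to 3 mod 4 at an arbitrary bound, and the choice of SymPy's prime generator and of the bound are assumptions of the sketch.

```python
# Sketch: approximate the Landau–Ramanujan constant from the product
# b = (1/sqrt(2)) * prod_{p ≡ 3 (mod 4)} (1 - p^(-2))^(-1/2),
# truncated at an assumed prime bound P.
from math import exp, log, sqrt
from sympy import primerange

def landau_ramanujan(P=10**6):
    log_prod = 0.0                      # accumulate logarithms for numerical stability
    for p in primerange(3, P):
        if p % 4 == 3:
            log_prod += -0.5 * log(1.0 - 1.0 / (p * p))
    return exp(log_prod) / sqrt(2.0)

print(landau_ramanujan())               # ~0.76422..., close to the value of b quoted above
```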
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "\\dfrac{bx}{\\sqrt{\\log(x)}}." }, { "math_id": 2, "text": "N(x)" }, { "math_id": 3, "text": "\\lim_{x\\rightarrow\\infty}\\ \\left(\\dfrac{N(x)}{\\dfrac{x}{\\sqrt{\\log(x)}}}\\right)=b\\approx 0.764223653589220662990698731250092328116790541" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "\\begin{align}b &= \\frac{1}{\\sqrt{2}}\\prod_{p\\equiv 3 \\pmod{4}} \\left(1 - \\frac{1}{p^2}\\right)^{-1/2} \\\\ &= \\frac{\\pi}{4} \\prod_{p\\equiv 1 \\pmod{4}} \\left(1 - \\frac{1}{p^2}\\right)^{1/2}. \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=620712
62074925
Borel subalgebra
In mathematics, specifically in representation theory, a Borel subalgebra of a Lie algebra formula_0 is a maximal solvable subalgebra. The notion is named after Armand Borel. If the Lie algebra formula_0 is the Lie algebra of a complex Lie group, then a Borel subalgebra is the Lie algebra of a Borel subgroup. Borel subalgebra associated to a flag. Let formula_1 be the Lie algebra of the endomorphisms of a finite-dimensional vector space "V" over the complex numbers. Then specifying a Borel subalgebra of formula_2 amounts to specifying a flag of "V"; given a flag formula_3, the subspace formula_4 is a Borel subalgebra, and conversely, each Borel subalgebra is of that form by Lie's theorem. Hence, the Borel subalgebras are classified by the flag variety of "V". Borel subalgebra relative to a base of a root system. Let formula_2 be a complex semisimple Lie algebra, formula_5 a Cartan subalgebra and "R" the root system associated to them. Choosing a base of "R" gives the notion of positive roots. Then formula_2 has the decomposition formula_6 where formula_7. The subalgebra formula_8 is then the Borel subalgebra relative to the above setup. (It is solvable since the derived algebra formula_9 is nilpotent. It is maximal solvable by a theorem of Borel–Morozov on the conjugacy of solvable subalgebras.) Given a formula_2-module "V", a primitive element of "V" is a (nonzero) vector that (1) is a weight vector for formula_5 and that (2) is annihilated by formula_10. It is the same thing as a formula_11-weight vector (Proof: if formula_12 and formula_13 with formula_14 and if formula_15 is a line, then formula_16.) References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
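As a concrete illustration of the flag description (a standard example, added here for exposition rather than quoted from the cited references): for the standard complete flag of formula_1, the Borel subalgebra is the algebra of upper-triangular matrices, and it splits as a Cartan subalgebra plus the positive root spaces exactly as in the root-system description.

```latex
% Worked example (standard; an illustration, not from the references):
% take V = C^n and the flag V_i = span(e_1, ..., e_{n-i}), so V_0 = V and V_n = 0.
% The condition x(V_i) ⊆ V_i for all i says exactly that x is upper triangular, and
% b splits as the diagonal (a Cartan subalgebra) plus the strictly upper-triangular part.
\[
  \mathfrak{b}
  = \{\, x \in \mathfrak{gl}_n(\mathbb{C}) \mid x_{ij} = 0 \ \text{for}\ i > j \,\}
  = \mathfrak{h} \oplus \mathfrak{n}^{+},
  \qquad
  \mathfrak{h} = \text{diagonal matrices}, \quad
  \mathfrak{n}^{+} = \text{strictly upper-triangular matrices}.
\]
```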
[ { "math_id": 0, "text": "\\mathfrak{g}" }, { "math_id": 1, "text": "\\mathfrak g = \\mathfrak{gl}(V)" }, { "math_id": 2, "text": "\\mathfrak g" }, { "math_id": 3, "text": "V = V_0\n\\supset V_1 \\supset \\cdots \\supset V_n = 0" }, { "math_id": 4, "text": "\\mathfrak b = \\{ x \\in \\mathfrak g \\mid x(V_i) \\subset V_i, 1 \\le i \\le n \\}" }, { "math_id": 5, "text": "\\mathfrak h" }, { "math_id": 6, "text": "\\mathfrak g = \\mathfrak n^- \\oplus \\mathfrak h \\oplus \\mathfrak n^+" }, { "math_id": 7, "text": "\\mathfrak n^{\\pm} = \\sum_{\\alpha > 0} \\mathfrak{g}_{\\pm \\alpha}" }, { "math_id": 8, "text": "\\mathfrak b = \\mathfrak h \\oplus \\mathfrak n^+" }, { "math_id": 9, "text": "[\\mathfrak b, \\mathfrak b]" }, { "math_id": 10, "text": "\\mathfrak{n}^+" }, { "math_id": 11, "text": "\\mathfrak b" }, { "math_id": 12, "text": "h \\in \\mathfrak h" }, { "math_id": 13, "text": "e \\in \\mathfrak{n}^+" }, { "math_id": 14, "text": "[h, e] = 2e" }, { "math_id": 15, "text": "\\mathfrak{b} \\cdot v" }, { "math_id": 16, "text": "0 = [h, e] \\cdot v = 2 e \\cdot v" } ]
https://en.wikipedia.org/wiki?curid=62074925
6207793
Geodesics as Hamiltonian flows
In mathematics, the geodesic equations are second-order non-linear differential equations, and are commonly presented in the form of Euler–Lagrange equations of motion. However, they can also be presented as a set of coupled first-order equations, in the form of Hamilton's equations. This latter formulation is developed in this article. Overview. It is frequently said that geodesics are "straight lines in curved space". By using the Hamilton–Jacobi approach to the geodesic equation, this statement can be given a very intuitive meaning: geodesics describe the motions of particles that are not experiencing any forces. In flat space, it is well known that a particle moving in a straight line will continue to move in a straight line if it experiences no external forces; this is Newton's first law. The Hamiltonian describing such motion is well known to be formula_0 with "p" being the momentum. It is the conservation of momentum that leads to the straight motion of a particle. On a curved surface, exactly the same ideas are at play, except that, in order to measure distances correctly, one must use the Riemannian metric. To measure momenta correctly, one must use the inverse of the metric. The motion of a free particle on a curved surface still has exactly the same form as above, i.e. consisting entirely of a kinetic term. The resulting motion is still, in a sense, a "straight line", which is why it is sometimes said that geodesics are "straight lines in curved space". This idea is developed in greater detail below. Geodesics as an application of the principle of least action. Given a (pseudo-)Riemannian manifold "M", a geodesic may be defined as the curve that results from the application of the principle of least action. A differential equation describing their shape may be derived, using variational principles, by minimizing (or finding the extremum of) the energy of a curve. Given a smooth curve formula_1 that maps an interval "I" of the real number line to the manifold "M", one writes the energy formula_2 where formula_3 is the tangent vector to the curve formula_4 at point formula_5. Here, formula_6 is the metric tensor on the manifold "M". Using the energy given above as the action, one may choose to solve either the Euler–Lagrange equations or the Hamilton–Jacobi equations. Both methods give the geodesic equation as the solution; however, the Hamilton–Jacobi equations provide greater insight into the structure of the manifold, as shown below. In terms of the local coordinates on "M", the (Euler–Lagrange) geodesic equation is formula_7 where the "x""a"("t") are the coordinates of the curve γ("t"), formula_8 are the Christoffel symbols, and repeated indices imply the use of the summation convention. Hamiltonian approach to the geodesic equations. Geodesics can be understood to be the Hamiltonian flows of a special Hamiltonian vector field defined on the cotangent space of the manifold. The Hamiltonian is constructed from the metric on the manifold, and is thus a quadratic form consisting entirely of the kinetic term. The geodesic equations are second-order differential equations; they can be re-expressed as first-order equations by introducing additional independent variables, as shown below. Note that a coordinate neighborhood "U" with coordinates "x""a" induces a "local trivialization" of formula_9 by the map which sends a point formula_10 of the form formula_11 to the point formula_12. 
Then introduce the Hamiltonian as formula_13 Here, "g""ab"("x") is the inverse of the metric tensor: "g""ab"("x")"g""bc"("x") = formula_14. The behavior of the metric tensor under coordinate transformations implies that "H" is invariant under a change of variable. The geodesic equations can then be written as formula_15 and formula_16 The flow determined by these equations is called the cogeodesic flow; a simple substitution of one into the other obtains the Euler–Lagrange equations, which give the geodesic flow on the tangent bundle "TM". The geodesic lines are the projections of integral curves of the geodesic flow onto the manifold "M". This is a Hamiltonian flow, and the Hamiltonian is constant along the geodesics: formula_17 Thus, the geodesic flow splits the cotangent bundle into level sets of constant energy formula_18 for each energy "E" ≥ 0, so that formula_19.
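The cogeodesic flow above is straightforward to integrate numerically once a metric is chosen. The sketch below is an illustration, not part of the article: it uses the round unit 2-sphere in (theta, phi) coordinates, where the inverse metric is diag(1, 1/sin²theta), and checks that the Hamiltonian stays constant along the flow; the initial conditions and the SciPy integrator are arbitrary choices.

```python
# Sketch: integrate Hamilton's equations dx/dt = dH/dp, dp/dt = -dH/dx
# for H(x, p) = (1/2) g^{ab}(x) p_a p_b on the unit 2-sphere (an assumed example metric).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    theta, phi, p_theta, p_phi = y
    s = np.sin(theta)
    return [p_theta,                          # dtheta/dt  = dH/dp_theta
            p_phi / s**2,                     # dphi/dt    = dH/dp_phi
            p_phi**2 * np.cos(theta) / s**3,  # dp_theta/dt = -dH/dtheta
            0.0]                              # dp_phi/dt  = 0 (phi is a cyclic coordinate)

def H(y):
    theta, _, p_theta, p_phi = y
    return 0.5 * (p_theta**2 + p_phi**2 / np.sin(theta)**2)

y0 = [np.pi / 3, 0.0, 0.0, 1.0]               # arbitrary starting point and covector
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)
print(H(sol.y[:, 0]), H(sol.y[:, -1]))        # the two values agree: H is conserved
```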
[ { "math_id": 0, "text": "H=p^2/2m" }, { "math_id": 1, "text": "\\gamma:I\\to M" }, { "math_id": 2, "text": "E(\\gamma)=\\frac{1}{2}\\int_I g(\\dot \\gamma(t),\\dot\\gamma(t))\\,dt," }, { "math_id": 3, "text": "\\dot\\gamma(t)" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": "t \\in I" }, { "math_id": 6, "text": "g(\\cdot,\\cdot)" }, { "math_id": 7, "text": "\\frac{d^2x^a}{dt^2} + \\Gamma^{a}_{bc}\\frac{dx^b}{dt}\\frac{dx^c}{dt} = 0" }, { "math_id": 8, "text": "\\Gamma^{a}_{bc}" }, { "math_id": 9, "text": "T^*M|_{U}\\simeq U \\times \\mathbb{R}^n" }, { "math_id": 10, "text": "\\eta \\in T_x^*M|_{U}" }, { "math_id": 11, "text": "\\eta = p_a dx^a" }, { "math_id": 12, "text": "(x,p_a) \\in U\\times\\mathbb{R}^n" }, { "math_id": 13, "text": "H(x,p)=\\frac{1}{2}g^{ab}(x)p_a p_b." }, { "math_id": 14, "text": "\\delta^a_c" }, { "math_id": 15, "text": "\\dot{x}^a = \\frac{\\partial H}{\\partial p_a} = g^{ab}(x) p_b" }, { "math_id": 16, "text": "\\dot{p}_a = - \\frac {\\partial H}{\\partial x^a} = \n-\\frac{1}{2} \\frac {\\partial g^{bc}(x)}{\\partial x^a} p_b p_c." }, { "math_id": 17, "text": "\\frac{dH}{dt} = \\frac {\\partial H}{\\partial x^a} \\dot{x}^a +\n\\frac{\\partial H}{\\partial p_a} \\dot{p}_a = \n- \\dot{p}_a \\dot{x}^a + \\dot{x}^a \\dot{p}_a = 0." }, { "math_id": 18, "text": "M_E = \\{ (x,p) \\in T^*M : H(x,p)=E \\}" }, { "math_id": 19, "text": "T^*M=\\bigcup_{E \\ge 0} M_E" } ]
https://en.wikipedia.org/wiki?curid=6207793
62078649
Variational autoencoder
Deep learning generative model to encode data representation <templatestyles src="Machine learning/styles.css"/> In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. In addition to being seen as an autoencoder neural network architecture, variational autoencoders can also be studied within the mathematical formulation of variational Bayesian methods, connecting a neural encoder network to its decoder through a probabilistic latent space (for example, as a multivariate Gaussian distribution) that corresponds to the parameters of a variational distribution. Thus, the encoder maps each point (such as an image) from a large complex dataset into a distribution within the latent space, rather than to a single point in that space. The decoder has the opposite function, which is to map from the latent space to the input space, again according to a distribution (although in practice, noise is rarely added during the decoding stage). By mapping a point to a distribution instead of a single point, the network can avoid overfitting the training data. Both networks are typically trained together with the use of the reparameterization trick, although the variance of the noise model can be learned separately. Although this type of model was initially designed for unsupervised learning, its effectiveness has been proven for semi-supervised learning and supervised learning. Overview of architecture and operation. A variational autoencoder is a generative model with a prior and a noise distribution. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually intractable, and in doing so requires the discovery of q-distributions, or variational posteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. This neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder. The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance; however, this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent. To optimize this model, one needs to know two terms: the "reconstruction error", and the Kullback–Leibler divergence (KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data. For example, a standard VAE task such as ImageNet is typically assumed to have Gaussian-distributed noise; however, tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. 
The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value. Formulation. From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data formula_0 by their chosen parameterized probability distribution formula_1. This distribution is usually chosen to be a Gaussian formula_2 which is parameterized by formula_3 and formula_4 respectively, and as a member of the exponential family it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize, however distributions where a prior is assumed over the latents formula_5 results in intractable integrals. Let us find formula_6 via marginalizing over formula_5. formula_7 where formula_8 represents the joint distribution under formula_9 of the observable data formula_10 and its latent representation or encoding formula_11. According to the chain rule, the equation can be rewritten as formula_12 In the vanilla variational autoencoder, formula_5 is usually taken to be a finite-dimensional vector of real numbers, and formula_13 to be a Gaussian distribution. Then formula_6 is a mixture of Gaussian distributions. It is now possible to define the set of the relationships between the input data and its latent representation as Unfortunately, the computation of formula_16 is expensive and in most cases intractable. To speed up the calculus to make it feasible, it is necessary to introduce a further function to approximate the posterior distribution as formula_17 with formula_18 defined as the set of real values that parametrize formula_19. This is sometimes called "amortized inference", since by "investing" in finding a good formula_20, one can later infer formula_5 from formula_0 quickly without doing any integrals. In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distribution formula_15 is computed by the "probabilistic decoder", and the approximated posterior distribution formula_21 is computed by the "probabilistic encoder". Parametrize the encoder as formula_22, and the decoder as formula_23. Evidence lower bound (ELBO). As in every deep learning problem, it is necessary to define a differentiable loss function in order to update the network weights through backpropagation. For variational autoencoders, the idea is to jointly optimize the generative model parameters formula_24 to reduce the reconstruction error between the input and the output, and formula_18 to make formula_25 as close as possible to formula_16. As reconstruction loss, mean squared error and cross entropy are often used. As distance loss between the two distributions the Kullback–Leibler divergence formula_26 is a good choice to squeeze formula_25 under formula_16. The distance loss just defined is expanded as formula_27 Now define the evidence lower bound (ELBO):formula_28Maximizing the ELBOformula_29is equivalent to simultaneously maximizing formula_30 and minimizing formula_31. That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posterior formula_32 from the exact posterior formula_33. The form given is not very convenient for maximization, but the following, equivalent form, is:formula_34where formula_35 is implemented as formula_36, since that is, up to an additive constant, what formula_37 yields. 
That is, we model the distribution of formula_0 conditional on formula_5 to be a Gaussian distribution centered on formula_38. The distributions of formula_39 and formula_14 are often also chosen to be Gaussians as formula_40 and formula_41, from which, by the formula for the KL divergence of Gaussians, we obtain: formula_42 Here formula_43 is the dimension of formula_11. For a more detailed derivation and more interpretations of ELBO and its maximization, see its main page. Reparameterization. To efficiently search for formula_29 the typical method is gradient ascent. It is straightforward to find formula_44 However, formula_45 does not allow one to put the formula_46 inside the expectation, since formula_47 appears in the probability distribution itself. The reparameterization trick (also known as stochastic backpropagation) bypasses this difficulty. The most important example is when formula_48 is normally distributed, as formula_49. This can be reparametrized by letting formula_50 be a "standard random number generator", and constructing formula_51 as formula_52. Here, formula_53 is obtained by the Cholesky decomposition: formula_54 Then we have formula_55 and so we obtain an unbiased estimator of the gradient, allowing stochastic gradient descent. Since we reparametrized formula_5, we need to find formula_21. Let formula_56 be the probability density function for formula_57, then formula_58 where formula_59 is the Jacobian matrix of formula_57 with respect to formula_5. Since formula_52, this is formula_60 Variations. Many variational autoencoder applications and extensions have been used to adapt the architecture to other domains and improve its performance. formula_61-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement for formula_61 values greater than one. This architecture can discover disentangled latent factors without supervision. The conditional VAE (CVAE) inserts label information in the latent space to force a deterministic constrained representation of the learned data. Some structures directly deal with the quality of the generated samples or implement more than one latent space to further improve the representation learning. Some architectures mix VAE and generative adversarial networks to obtain hybrid models. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
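The pieces above (amortized encoder, reparameterization trick, and the negative ELBO with Gaussian prior, posterior, and decoder) fit together as in the following sketch. It is a minimal illustration in PyTorch, not the reference implementation of the original paper; the layer sizes, the MSE reconstruction term, and the fixed unit decoder variance are assumptions made for brevity.

```python
# Minimal VAE sketch (assumed architecture, for illustration only).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=200, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)        # posterior mean E_phi(x)
        self.logvar = nn.Linear(h_dim, z_dim)    # log of posterior variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))  # decoder mean D_theta(z)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)               # standard normal noise
        z = mu + torch.exp(0.5 * logvar) * eps   # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    # Gaussian reconstruction term (up to constants) + KL(q_phi(z|x) || N(0, I))
    rec = 0.5 * ((x - x_hat) ** 2).sum(dim=1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return (rec + kl).mean()

model = VAE()
x = torch.rand(16, 784)                          # dummy batch standing in for real data
loss = negative_elbo(x, *model(x))
loss.backward()                                  # gradients flow through the sampled z
```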
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "p_{\\theta}(x) = p(x|\\theta)" }, { "math_id": 2, "text": "N(x|\\mu,\\sigma)" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\sigma" }, { "math_id": 5, "text": "z" }, { "math_id": 6, "text": "p_\\theta(x)" }, { "math_id": 7, "text": "p_\\theta(x) = \\int_{z}p_\\theta({x,z}) \\, dz, " }, { "math_id": 8, "text": "p_\\theta({x,z})" }, { "math_id": 9, "text": "p_\\theta" }, { "math_id": 10, "text": " x " }, { "math_id": 11, "text": " z " }, { "math_id": 12, "text": "p_\\theta(x) = \\int_{z}p_\\theta({x| z})p_\\theta(z) \\, dz" }, { "math_id": 13, "text": "p_\\theta({x|z})" }, { "math_id": 14, "text": "p_\\theta(z)" }, { "math_id": 15, "text": "p_\\theta(x|z)" }, { "math_id": 16, "text": "p_\\theta(z|x)" }, { "math_id": 17, "text": "q_\\phi({z| x}) \\approx p_\\theta({z| x})" }, { "math_id": 18, "text": "\\phi" }, { "math_id": 19, "text": "q" }, { "math_id": 20, "text": "q_\\phi" }, { "math_id": 21, "text": "q_\\phi(z|x)" }, { "math_id": 22, "text": "E_\\phi" }, { "math_id": 23, "text": "D_\\theta" }, { "math_id": 24, "text": "\\theta" }, { "math_id": 25, "text": "q_\\phi({z| x})" }, { "math_id": 26, "text": "D_{KL}(q_\\phi({z| x})\\parallel p_\\theta({z| x}))" }, { "math_id": 27, "text": "\\begin{align}\nD_{KL}(q_\\phi({z| x})\\parallel p_\\theta({z| x})) &= \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{q_\\phi(z|x)}{p_\\theta(z|x)}\\right]\\\\\n&= \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{q_\\phi({z| x})p_\\theta(x)}{p_\\theta(x, z)}\\right]\\\\\n&=\\ln p_\\theta(x) + \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{q_\\phi({z| x})}{p_\\theta(x, z)}\\right]\n\\end{align}" }, { "math_id": 28, "text": "L_{\\theta,\\phi}(x) := \n\\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{p_\\theta(x, z)}{q_\\phi({z| x})}\\right] \n= \\ln p_\\theta(x) - D_{KL}(q_\\phi({\\cdot| x})\\parallel p_\\theta({\\cdot | x})) " }, { "math_id": 29, "text": "\\theta^*,\\phi^* = \\underset{\\theta,\\phi}\\operatorname{arg max} \\, L_{\\theta,\\phi}(x) " }, { "math_id": 30, "text": "\\ln p_\\theta(x) " }, { "math_id": 31, "text": " D_{KL}(q_\\phi({z| x})\\parallel p_\\theta({z| x})) " }, { "math_id": 32, "text": "q_\\phi(\\cdot | x) " }, { "math_id": 33, "text": "p_\\theta(\\cdot | x) " }, { "math_id": 34, "text": "L_{\\theta,\\phi}(x) = \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln p_\\theta(x|z)\\right] - D_{KL}(q_\\phi({\\cdot| x})\\parallel p_\\theta(\\cdot)) " }, { "math_id": 35, "text": "\\ln p_\\theta(x|z)" }, { "math_id": 36, "text": "-\\frac{1}{2}\\| x - D_\\theta(z)\\|^2_2" }, { "math_id": 37, "text": "x \\sim \\mathcal N(D_\\theta(z), I)" }, { "math_id": 38, "text": "D_\\theta(z)" }, { "math_id": 39, "text": "q_\\phi(z |x)" }, { "math_id": 40, "text": "z|x \\sim \\mathcal N(E_\\phi(x), \\sigma_\\phi(x)^2I)" }, { "math_id": 41, "text": "z \\sim \\mathcal N(0, I)" }, { "math_id": 42, "text": "L_{\\theta,\\phi}(x) = -\\frac 12\\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[ \\|x - D_\\theta(z)\\|_2^2\\right] - \\frac 12 \\left( N\\sigma_\\phi(x)^2 + \\|E_\\phi(x)\\|_2^2 - 2N\\ln\\sigma_\\phi(x) \\right) + Const " }, { "math_id": 43, "text": " N " }, { "math_id": 44, "text": "\\nabla_\\theta \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{p_\\theta(x, z)}{q_\\phi({z| x})}\\right]\n= \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[ \\nabla_\\theta \\ln \\frac{p_\\theta(x, z)}{q_\\phi({z| x})}\\right] " }, { "math_id": 45, "text": "\\nabla_\\phi \\mathbb E_{z \\sim 
q_\\phi(\\cdot | x)} \\left[\\ln \\frac{p_\\theta(x, z)}{q_\\phi({z| x})}\\right] " }, { "math_id": 46, "text": "\\nabla_\\phi " }, { "math_id": 47, "text": "\\phi " }, { "math_id": 48, "text": "z \\sim q_\\phi(\\cdot | x) " }, { "math_id": 49, "text": "\\mathcal N(\\mu_\\phi(x), \\Sigma_\\phi(x)) " }, { "math_id": 50, "text": "\\boldsymbol{\\varepsilon} \\sim \\mathcal{N}(0, \\boldsymbol{I})" }, { "math_id": 51, "text": "z " }, { "math_id": 52, "text": "z = \\mu_\\phi(x) + L_\\phi(x)\\epsilon " }, { "math_id": 53, "text": "L_\\phi(x) " }, { "math_id": 54, "text": "\\Sigma_\\phi(x) = L_\\phi(x)L_\\phi(x)^T " }, { "math_id": 55, "text": "\\nabla_\\phi \\mathbb E_{z \\sim q_\\phi(\\cdot | x)} \\left[\\ln \\frac{p_\\theta(x, z)}{q_\\phi({z| x})}\\right] \n= \n\\mathbb {E}_{\\epsilon}\\left[ \\nabla_\\phi \\ln {\\frac {p_{\\theta }(x, \\mu_\\phi(x) + L_\\phi(x)\\epsilon)}{q_{\\phi }(\\mu_\\phi(x) + L_\\phi(x)\\epsilon | x)}}\\right] " }, { "math_id": 56, "text": "q_0" }, { "math_id": 57, "text": "\\epsilon" }, { "math_id": 58, "text": "\\ln q_\\phi(z | x) = \\ln q_0 (\\epsilon) - \\ln|\\det(\\partial_\\epsilon z)|" }, { "math_id": 59, "text": "\\partial_\\epsilon z" }, { "math_id": 60, "text": "\\ln q_\\phi(z | x) = -\\frac 12 \\|\\epsilon\\|^2 - \\ln|\\det L_\\phi(x)| - \\frac n2 \\ln(2\\pi)" }, { "math_id": 61, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=62078649
62082492
Clifford gates
Definition of quantum circuits In quantum computing and quantum information theory, the Clifford gates are the elements of the Clifford group, a set of mathematical transformations which normalize the "n"-qubit Pauli group, i.e., map tensor products of Pauli matrices to tensor products of Pauli matrices through conjugation. The notion was introduced by Daniel Gottesman and is named after the mathematician William Kingdon Clifford. Quantum circuits that consist of only Clifford gates can be efficiently simulated with a classical computer due to the Gottesman–Knill theorem. The Clifford group is generated by three gates: Hadamard, phase gate "S", and CNOT. This set of gates is minimal in the sense that discarding any one gate results in the inability to implement some Clifford operations; removing the Hadamard gate disallows powers of formula_0 in the unitary matrix representation, removing the phase gate "S" disallows formula_1 in the unitary matrix, and removing the CNOT gate reduces the set of implementable operations from formula_2 to formula_3. Since all Pauli matrices can be constructed from the phase and Hadamard gates, each Pauli gate is also trivially an element of the Clifford group. The formula_4 gate is equal, up to a global phase, to the product of the formula_5 and formula_6 gates. To show that a unitary formula_7 is a member of the Clifford group, it suffices to show that for all formula_8 that consist only of the tensor products of formula_5 and formula_6, we have formula_9. Common generating gates. Hadamard gate. The Hadamard gate formula_10 is a member of the Clifford group as formula_11 and formula_12. "S" gate. The phase gate formula_13 is a Clifford gate as formula_14 and formula_15. CNOT gate. The CNOT gate applies to two qubits. It is a (C)ontrolled NOT gate, where a NOT gate is performed on qubit 2 if and only if qubit 1 is in the 1 state. formula_16 Between formula_5 and formula_6 there are four options: Building a universal set of quantum gates. The Clifford gates do not form a universal set of quantum gates as some gates outside the Clifford group cannot be arbitrarily approximated with a finite set of operations. An example is the phase shift gate (historically known as the formula_17 gate): formula_18. The following shows that the formula_19 gate does not map the Pauli-formula_5 gate to another Pauli matrix: formula_20 However, the Clifford group, when augmented with the formula_19 gate, forms a universal quantum gate set for quantum computation. Moreover, exact, optimal circuit implementations of the single-qubit formula_6-angle rotations are known.
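These conjugation relations can be checked directly with a few lines of linear algebra. The sketch below is an illustration, not from the cited sources: it verifies that conjugation by H and S maps Pauli matrices to Pauli matrices, and prints T X T†, whose entries e^{±iπ/4} show it is not a Pauli matrix even up to the phases ±1, ±i.

```python
# Sketch: numerical check of the Clifford conditions discussed above.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def conj(U, P):
    """Return U P U^dagger."""
    return U @ P @ U.conj().T

print(np.allclose(conj(H, X), Z), np.allclose(conj(H, Z), X))  # True True
print(np.allclose(conj(S, X), Y), np.allclose(conj(S, Z), Z))  # True True
print(np.round(conj(T, X), 3))  # off-diagonal phases e^{±iπ/4}: not in the Pauli group
```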
[ { "math_id": 0, "text": "{1}/{\\sqrt{2}}" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "\\mathbf{C}_n" }, { "math_id": 3, "text": "\\mathbf{C}_1^n" }, { "math_id": 4, "text": "Y" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "Z" }, { "math_id": 7, "text": "U" }, { "math_id": 8, "text": "P \\in \\mathbf{P}_n" }, { "math_id": 9, "text": "UPU^\\dagger \\in \\mathbf{P}_n" }, { "math_id": 10, "text": " H = \\frac{1}{\\sqrt{2}} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix}" }, { "math_id": 11, "text": " HXH^\\dagger = Z" }, { "math_id": 12, "text": " HZH^\\dagger = X" }, { "math_id": 13, "text": " S = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\frac{\\pi}{2}} \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ 0 & i \\end{bmatrix} = \\sqrt{Z}" }, { "math_id": 14, "text": "SXS^\\dagger = Y" }, { "math_id": 15, "text": "SZS^\\dagger = Z" }, { "math_id": 16, "text": " \\mathrm{CNOT} = \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ \n0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0 \\end{bmatrix}. " }, { "math_id": 17, "text": "\\pi /8" }, { "math_id": 18, "text": " T = \\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i \\frac{\\pi}{4}} \\end{bmatrix} = \\sqrt{S} = \\sqrt[4]{Z}" }, { "math_id": 19, "text": "T" }, { "math_id": 20, "text": "TX{T^\\dagger } = \\left[ {\\begin{array}{*{20}{c}}\n 1&0 \\\\ \n 0&{{e^{i\\frac{\\pi }{4}}}} \n\\end{array}} \\right]\\left[ {\\begin{array}{*{20}{c}}\n 0&1 \\\\ \n 1&0 \n\\end{array}} \\right]\\left[ {\\begin{array}{*{20}{c}}\n 1&0 \\\\ \n 0&{{e^{ - i\\frac{\\pi }{4}}}} \n\\end{array}} \\right] = \\left[ {\\begin{array}{*{20}{c}}\n 0&{{e^{ - i\\frac{\\pi }{4}}}} \\\\ \n {{e^{i\\frac{\\pi }{4}}}}&0 \n\\end{array}} \\right]\\not \\in {{\\mathbf{P}}_1}" } ]
https://en.wikipedia.org/wiki?curid=62082492
62089600
CCIR 476
Character encoding used in radio data protocols CCIR 476 is a character encoding used in radio data protocols such as SITOR, AMTOR and Navtex. It is a recasting of the ITA2 character encoding, known as Baudot code, from a five-bit code to a seven-bit code. In each character, exactly four of the seven bits are mark bits, and the other three are space bits. This allows for the detection of single-bit errors. Technical details. The number of possible valid binary code values in CCIR 476 is the number of ways to choose 4 marks for 7 bit positions, and the number can be calculated using the binomial coefficient: formula_0 Thus CCIR 476 has 3 additional code points available over ITA2's 32 code points. The SITOR protocol uses the additional three code points (denoted as SIA, SIB and RPT below) for idle, phasing, and repeat requests. In addition, some of the ordinary characters are reused as control signals. Character set. In these tables, the hexadecimal code values are converted from a binary representation, with 1 being mark, 0 being space, and the most significant bit given first. The international version of ITA2 is used here; note also the added non-ITA2 codes SIA, SIB and RPT, used by SITOR. References. <templatestyles src="Reflist/styles.css" />
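The codeword count and the single-bit error detection property can both be illustrated with a short enumeration. This sketch is illustrative only; it does not reproduce the actual CCIR 476 character assignments, just the constant-weight (4-of-7) structure described above.

```python
# Sketch: enumerate all 7-bit words with exactly four mark (1) bits.
from itertools import combinations
from math import comb

codewords = [sum(1 << pos for pos in marks) for marks in combinations(range(7), 4)]

print(comb(7, 4), len(codewords))            # 35 35, matching the binomial count above
# Flipping any single bit changes the number of marks to 3 or 5,
# so every single-bit error violates the 4-of-7 constraint and is detectable.
assert all(bin(word).count("1") == 4 for word in codewords)
```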
[ { "math_id": 0, "text": "\\ \\textstyle \\binom{7}{3} = \\binom{7}{4} = 35\\ ." } ]
https://en.wikipedia.org/wiki?curid=62089600
62089880
Song of Songs 7
Seventh chapter of the Song of Songs Song of Songs 7 (abbreviated as Song 7) is the seventh chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains a poem in which the man describes the woman, his lover, and one or more songs in the woman's voice issued as invitations to the man. Text. The original text is written in Hebrew language. This chapter is divided into 13 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls: 4Q106 (4QCanta); 30 BCE-30 CE; extant verses 1–7). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The Modern English Version (MEV) identifies the speakers in this chapter as: Biblical scholar Athalya Brenner notes that verses 1 to 10 are "probably in a male voice", and 11 to 14 in a female voice. However, Andrew Harper argues that the opening verses (verses 1 to 6) contain the praises sung by "the ladies of the hareem". Male: Third descriptive poem for the female (7:1-9; [Masoretic 7:2-10]). A voice, likely of the man, calling to the woman ("the Shulammite" in ) to dance, then describing her body from toe to head in a poem or "waṣf" (verses 2–7), closing with a response indicating male desire (verses 8–9), which is followed perhaps by a "female retort" (verse 10) to round off this passage. This descriptive poem by the man still belongs to a long section concerning the desire and love in the country which continues until 8:4. The man's "waṣf" and the other ones (; ; ) theologically demonstrate the heart of the Song that values the body as not evil but good even worthy of praise, and respects the body with an appreciative focus (rather than lurid). Hess notes that this reflects 'the fundamental value of God's creation as good and the human body as a key part of that creation, whether at the beginning () or redeemed in the resurrection (, )'. "Your head crowns like Carmel," "and your flowing hair is like purple;" "a king is held captive in the tresses." Female: Springtime and love (7:10–13; [Masoretic 7:11–14]). In this section, one song (or several songs) in a female voice, seductively invites the man to go outdoors where the woman will give herself to him (cf. 4:9-14). The invitation contains a play on words based on the man's earlier expressions, such as "grape blossoms" in verse 12, which is related to 2:11–13, and "to see if the vines had blossomed, if pomegranates bloomed" in verse 12, which can be related to 5:11–12. "I am my beloved's, and his desire is toward me." Verse 10. 
Although similar to the line in and , here the mutual belonging to each other is not expressed, and instead, the woman refers to the previous expression of desire of the man to her, while confirming that she belongs to him ("I am my beloved's"). "The mandrakes give forth fragrance," "and at our doors are all choice fruits," "new as well as old," "which I have laid up for you, my beloved." Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62089880
62091676
Song of Songs 8
Eighth chapter of the Song of Songs Song of Songs 8 (abbreviated as Song 8) is the eighth (and the final) chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains dialogues between the woman and the daughters of Jerusalem, the woman and her brothers, then finally, the woman and the man, the "bride" and the "bridegroom". Text. The original text is written in Hebrew language. This chapter is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The Modern English Version (MEV) identifies the speakers in this chapter as: Kugler and Hartin treat verses 5 onwards as an appendix. The Jerusalem Bible treats verse 7b onwards (from "Were a man to offer all the wealth of his house to buy love ...") as "appendices". Female: Springtime and love (8:1–4). This female passage is the last part of a long section concerning the desire and love in the country which runs from chapter 6 until verse 4 here. It consists probably or possibly of more than a single song, describing the woman's wish that her lover to be her brother, so that they can be together in her 'mother's house' (verses 1–2; cf. ); they embrace (verse 3; cf. ) and another appeal to the daughters of Jerusalem (verse 4). "Oh, that you were like my brother," "Who nursed at my mother’s breasts!" "If I should find you outside," "I would kiss you;" "I would not be despised." Verse 1. For "like my brother", or "as my brother" in the King James Version, the International Standard Version notes that the Hebrew text lacks the preposition "like". Andrew Harper argues that the word 'as' "should probably be omitted, as the accidental repetition of the last letter of the preceding word". "I charge you, O daughters of Jerusalem," "do not stir up or awaken love" "until it pleases." Verse 4. The names of God are apparently substituted with similar sounding phrases depicting 'female gazelles' (, "tseḇā’ōṯ") for [God of] hosts ( "tseḇā’ōṯ"), and 'does of the field'/'wild does/female deer' (, "’ay-lōṯ ha-śā-ḏeh") for God Almighty (, "’êl shaddai"). Chorus: Search for the couple (8:5a). Verse 5 opens the last section or epilogue of the book, speaking about the power of love which continues to verse 14 (the end of the book). Verse 5. [Friends of the Woman] "Who is that coming up from the wilderness," "leaning upon her beloved?" [The Woman] "Under the apple tree I awakened you." "There your mother was in labor with you;" "there she who bore you was in labor." Female: The power of love (8:5b-7). There are two fragments of the female voice in this part (verse 5; cf. , ) and verses 6-7 containing her declaration of love which 'might have constituted a suitable end for the whole book'. 
"Set me as a seal upon your heart," "as a seal upon your arm;" "for love is strong as death," "passion fierce as the grave." "Its fires of desire are as ardent flames," "a most intense flame." Brothers: Their younger sister (8:8-9). These two verses form a part describing how the woman's maternal brothers decide to keep their sister's virginity, when necessary. However, they do that in disparaging way, which recalls their maligning attitude in chapter 1. Female: Her defense; Solomon's vineyard (8:10–12). As a response, the woman answers her brothers mockingly. When in – she "ineffectually complained" about her brothers' antagonism towards her, here she can stand up for herself and has found her peace. "My vineyard, my very own, is before me;" "you, O Solomon, may have the thousand," "and the keepers of the fruit two hundred." Male: Listening (8:13). No doubt that this part contains the words of the man addressing the bride that 'it is delightful to him to hear her voice'. "You who dwell in the gardens," "The companions listen for your voice—" "Let me hear it!" Verse 13. The man (or the bridegroom) calls upon his bride (the Shulammite) to let his companions, that is 'his friends who may have come to congratulate him on his bride's safe return', hear her voice. In the community of Sephardic and Oriental Jews, the congregation in traditional synagogues goes back and recites verse 13 after reciting verse 14 to avoid ending a reading in a negative note. Female: Departure (8:14). The very last verse: the woman's voice calls to her male lover to run, like a gazelle or deer, to “the distant nevernever land of the perfume hills”. With that, ‘the love's game can begin afresh, suspended in timelessness and moving cyclically’. "Make haste, my beloved," "and be like a gazelle or a young stag" "on the mountains of spices!" Verse 14. This verse is almost identical to and just like in the situation of the earlier verse, it implies another meeting and prolongs "indefinitely the moment of young and love". Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62091676
62095374
Detlef Müller (mathematician)
German mathematician Detlef Horst Müller (born 13 June 1954 in Dissen, Lower Saxony) is a German mathematician, specializing in analysis. Müller received his doctorate in 1981 from the University of Bielefeld with the thesis "Das Syntheseverhalten glatter Hyperflächen mit homogenen Krümmungsverhältnissen im formula_0" (The synthesis behavior of smooth hypersurfaces with homogeneous curvature ratios in formula_0) under the supervision of Horst Leptin (1927–2017). Müller habilitated in 1984 in Kiel. He spent the academic year 1990–1991 at the Institute for Advanced Study. He was a professor at the Université Louis Pasteur in Strasbourg from 1992 to 1994 and has been a professor at the University of Kiel since 1994. His research deals with harmonic analysis (especially related to Lie groups) with applications to partial differential equations. In 1998 Müller was an Invited Speaker at the International Congress of Mathematicians in Berlin. He became a Fellow of the American Mathematical Society in the class of 2018. He is a member of the editorial boards of the "Journal of Lie Theory" and the "Annali di Matematica Pura ed Applicata".
[ { "math_id": 0, "text": "\\mathbb R^n" } ]
https://en.wikipedia.org/wiki?curid=62095374
62099055
Formylmethanofuran dehydrogenase
In enzymology, a formylmethanofuran dehydrogenase (EC 1.2.99.5) is an enzyme that catalyzes the chemical reaction: formylmethanofuran + H2O + acceptor formula_0 CO2 + methanofuran + reduced acceptor. The 3 substrates of this enzyme are formylmethanofuran, H2O, and acceptor, whereas its 3 products are CO2, methanofuran, and reduced acceptor. This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of donor with other acceptors. The systematic name of this enzyme class is formylmethanofuran:acceptor oxidoreductase. This enzyme is also called formylmethanofuran:(acceptor) oxidoreductase. This enzyme participates in folate biosynthesis. It has two cofactors: molybdenum and pterin. Discovery and biological occurrence. Formylmethanofuran (formyl-MFR) dehydrogenase is found in methanogenic archaea which are capable of synthesizing methane using substrates such as carbon dioxide, formate, methanol, methylamines, and acetate. In 1967, a reliable technique was developed for the mass culture of methanogens on hydrogen and carbon dioxide. It became obvious that coenzymes are involved in the biochemistry of methanogens once kilogram-scale quantities of cells could be grown and used for biochemical studies. "Methanobacterium thermoautotrophicum"'s reduction of carbon dioxide (CO2) with hydrogen is the most studied system. "Methanobacterium thermoautotrophicum's" metabolism involves almost all of the reactions in methanogenesis. Molybdenum- and tungsten-containing formyl-MFR dehydrogenase was isolated from "M. thermoautotrophicum" when proteins were purified from soluble cofactors-depleted cell extracts. It was not known to have existed prior to the experiment. MFR was required to generate methane from CO2 in soluble cofactors-depleted cell extracts. Formyl-MFR dehydrogenase was also isolated from "Methanosarcina barkeri" and "Archaeoglobus fulgidus" cell extracts. Molybdenum-containing formyl-MFR dehydrogenase was isolated from "Methanothermobacter wolfeii". Structure. In 2016, the X-ray structure of formylmethanofuran dehydrogenase was determined. Formyl-MFR dehydrogenase contains two FwdABCDFG heterohexamers, protein subunit assemblies that associate as a symmetric dimer with C2 rotational symmetry. The formyl-MFR dehydrogenase also contains 23 and 46 iron-sulfur cubane clusters in the dimer and tetramer forms respectively. The subunit FwdA contains two zinc atoms analogous to dihydroorotase. It also contains N6-carboxylysine, zinc ligands, and an aspartate that is crucial to catalysis. Meanwhile, the subunit FwdF is composed of four similar T-shaped ferredoxin domains. The T-shaped iron-sulfur clusters in the FwdF subunit link up to form a path from the outside edge to the inside core. FwdBD has a redox-active tungsten. The tungsten in FwdBD is coordinated by four dithiolene thiolates. Six sulfurs in total, including the thiolate of Cys118 and an organic sulfide ligand, coordinate the tungsten of the tungstopterin at the active site of FwdB. The tungsten is coordinated in a distorted octahedral geometry. A binding site suitable for carbon dioxide (CO2) is occupied by solvent in the X-ray crystal structure, though not in vivo. The binding site lies between Cys118, His119, Arg228, and the sulfur-tungsten ligand. Methanogenesis catalysis. Formyl-MFR dehydrogenase catalyzes the methanogenesis reaction by reducing carbon dioxide (CO2) to form carboxy-MFR. 
The structural data obtained from the X-ray structure suggest that carbon dioxide (CO2) is reduced to formate (E0' = -430 mV) at FwdBD's tungstopterin active site, en route to carboxy-MFR, by a 4Fe-4S ferredoxin (E = ~ -500 mV) located 12.4 Å away. Then, it reduces the carboxy-MFR to MFR at its tungsten or molybdenum active site. Proposed mechanism. A 43 Å long hydrophilic tunnel supports the proposed two-step scenario of CO2 reduction and fixation. This hydrophilic tunnel is located between the FwdBD and FwdA active sites and is convenient for formic acid and formate transportation [pKa = 3.75]. The tunnel has a bottleneck appearance, consisting of a narrow passage and a wide solvent-filled cavity located at the front of each active site. Arg228 of FwdBD and Lys64 of FwdA control gate operation at the bottlenecks. The outer clusters of the two [4Fe-4S] cluster chains in the branched outer arm of the FwdF subunits funnel electrons to the tungsten center. Then, carbon dioxide is reduced to formate (while tungsten is oxidized: the tungsten oxidation state goes from +4 to +6) when carbon dioxide enters the catalytic compartment through FwdBD's hydrophobic tunnel. Formic acid or formate diffuses to FwdA's active site via a hydrophilic tunnel. Once it has diffused to the active site, it is condensed with MFR at the binuclear zinc center. Pumping formate into the tunnel is proposed to attain exergonic reduction of CO2 to formate with reduced ferredoxin. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=62099055
620991
Plummer model
The Plummer model or Plummer sphere is a density law that was first used by H. C. Plummer to fit observations of globular clusters. It is now often used as a toy model in N-body simulations of stellar systems. Description of the model. The Plummer 3-dimensional density profile is given by formula_0 where formula_1 is the total mass of the cluster, and "a" is the Plummer radius, a scale parameter that sets the size of the cluster core. The corresponding potential is formula_2 where "G" is Newton's gravitational constant. The velocity dispersion is formula_3 The isotropic distribution function reads formula_4 if formula_5, and formula_6 otherwise, where formula_7 is the specific energy. Properties. The mass enclosed within radius formula_8 is given by formula_9 Many other properties of the Plummer model are described in Herwig Dejonghe's comprehensive article. The core radius formula_10, where the surface density drops to half its central value, is at formula_11. The half-mass radius is formula_12 The virial radius is formula_13. The 2D surface density is: formula_14 and hence the 2D projected mass profile is: formula_15 In astronomy, it is convenient to define a 2D half-mass radius, the radius where the 2D projected mass profile is half of the total mass: formula_16. For the Plummer profile: formula_17. The escape velocity at any point is formula_18 For bound orbits, the radial turning points of an orbit characterized by specific energy formula_19 and specific angular momentum formula_20 are given by the positive roots of the cubic equation formula_21 where formula_22, so that formula_23. This equation has three real roots for formula_24: two positive and one negative, given that formula_25, where formula_26 is the specific angular momentum for a circular orbit for the same energy. Here formula_27 can be calculated from the single real root of the discriminant of the cubic equation, which is itself another cubic equation formula_28 where underlined parameters are dimensionless in Henon units defined as formula_29, formula_30, and formula_31. Applications. The Plummer model comes closest to representing the observed density profiles of star clusters, although the rapid falloff of the density at large radii (formula_32) is not a good description of these systems. The behavior of the density near the center does not match observations of elliptical galaxies, which typically exhibit a diverging central density. The ease with which the Plummer sphere can be realized as a Monte-Carlo model has made it a favorite choice of N-body experimenters, in spite of the model's lack of realism. References. <templatestyles src="Reflist/styles.css" />
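The enclosed-mass formula above makes a Monte-Carlo realization easy to sketch: drawing the enclosed mass fraction uniformly and inverting formula_9 for the radius gives Plummer-distributed radii. The snippet below is such a sketch (velocity sampling is omitted); the sample size and random seed are arbitrary choices.

```python
# Sketch: draw radii of a Plummer sphere by inverting M(<r)/M0 = r^3 / (r^2 + a^2)^(3/2).
import numpy as np

def plummer_radii(n, a=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                      # u plays the role of M(<r)/M0
    return a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)  # closed-form inverse of the mass profile

r = plummer_radii(100_000)
# Sanity check: the median radius should sit near the half-mass radius ~1.3 a quoted above.
print(np.median(r))
```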
[ { "math_id": 0, "text": "\\rho_P(r) = \\frac{3M_0}{4\\pi a^3} \\left(1 + \\frac{r^2}{a^2}\\right)^{-{5}/{2}}," }, { "math_id": 1, "text": "M_0" }, { "math_id": 2, "text": "\\Phi_P(r) = -\\frac{G M_0}{\\sqrt{r^2 + a^2}}," }, { "math_id": 3, "text": "\\sigma_P^2(r) = \\frac{G M_0}{6\\sqrt{r^2 + a^2}}." }, { "math_id": 4, "text": "f(\\vec{x}, \\vec{v}) = \\frac{24\\sqrt{2}}{7\\pi^3} \\frac{a^2}{G^5 M_0^4} (-E(\\vec{x}, \\vec{v}))^{7/2}," }, { "math_id": 5, "text": "E < 0" }, { "math_id": 6, "text": "f(\\vec{x}, \\vec{v}) = 0" }, { "math_id": 7, "text": "E(\\vec{x}, \\vec{v}) = \\frac{1}{2} v^2 + \\Phi_P(r)" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "M(<r) = 4\\pi\\int_0^r r'^2 \\rho_P(r') \\,dr' = M_0 \\frac{r^3}{(r^2 + a^2)^{3/2}}." }, { "math_id": 10, "text": "r_c" }, { "math_id": 11, "text": "r_c = a \\sqrt{\\sqrt{2} - 1} \\approx 0.64 a" }, { "math_id": 12, "text": "r_h = \\left(\\frac{1}{0.5^{2/3}} - 1\\right)^{-0.5} a \\approx 1.3 a." }, { "math_id": 13, "text": "r_V = \\frac{16}{3 \\pi} a \\approx 1.7 a" }, { "math_id": 14, "text": " \\Sigma(R) = \\int_{-\\infty}^{\\infty}\\rho(r(z))dz=2\\int_{0}^{\\infty}\\frac{3a^2M_0dz}{4\\pi(a^2+z^2+R^2)^{5/2}} = \\frac{M_0a^2}{\\pi(a^2+R^2)^2}," }, { "math_id": 15, "text": "M(R)=2\\pi\\int_{0}^{R}\\Sigma(R')\\, R'dR'=M_0\\frac{R^2}{a^2+R^2}." }, { "math_id": 16, "text": "M(R_{1/2}) = M_0/2" }, { "math_id": 17, "text": "R_{1/2} = a" }, { "math_id": 18, "text": "v_{\\rm esc}(r)=\\sqrt{-2\\Phi(r)}=\\sqrt{12}\\,\\sigma(r) ," }, { "math_id": 19, "text": "E = \\frac{1}{2} v^2 + \\Phi(r)" }, { "math_id": 20, "text": "L = |\\vec{r} \\times \\vec{v}|" }, { "math_id": 21, "text": "R^3 + \\frac{GM_0}{E} R^2 - \\left(\\frac{L^2}{2E} + a^2\\right) R - \\frac{GM_0a^2}{E} = 0," }, { "math_id": 22, "text": "R = \\sqrt{r^2 + a^2}" }, { "math_id": 23, "text": "r = \\sqrt{R^2 - a^2}" }, { "math_id": 24, "text": "R" }, { "math_id": 25, "text": "L < L_c(E)" }, { "math_id": 26, "text": "L_c(E)" }, { "math_id": 27, "text": "L_c" }, { "math_id": 28, "text": "\\underline{E}\\, \\underline{L}_c^3 + \\left(6 \\underline{E}^2 \\underline{a}^2 + \\frac{1}{2}\\right)\\underline{L}_c^2 + \\left(12 \\underline{E}^3 \\underline{a}^4 + 20 \\underline{E} \\underline{a}^2 \\right) \\underline{L}_c + \\left(8 \\underline{E}^4 \\underline{a}^6 - 16 \\underline{E}^2 \\underline{a}^4 + 8 \\underline{a}^2\\right) = 0," }, { "math_id": 29, "text": "\\underline{E} = E r_V / (G M_0)" }, { "math_id": 30, "text": "\\underline{L}_c = L_c / \\sqrt{G M r_V}" }, { "math_id": 31, "text": "\\underline{a} = a / r_V = 3 \\pi/16" }, { "math_id": 32, "text": "\\rho\\rightarrow r^{-5}" } ]
https://en.wikipedia.org/wiki?curid=620991
621043
Japan Meteorological Agency seismic intensity scale
Japanese earthquake measurements The Japan Meteorological Agency (JMA) Seismic Intensity Scale (known in Japan as the Shindo seismic scale) is a seismic intensity scale used in Japan to categorize the intensity of local ground shaking caused by earthquakes. The JMA intensity scale should not be confused or conflated with magnitude measurements like the moment magnitude (Mw) and the earlier Richter scales, which represent how much energy an earthquake releases. Much like the Mercalli scale, the JMA scheme quantifies how much ground-surface shaking takes place "at measurement sites distributed throughout an affected area". Intensities are expressed as numerical values called ; the higher the value, the more intense the shaking. Values are derived from peak ground acceleration and duration of the shaking, which are themselves influenced by factors such as distance to and depth of the hypocenter (focus), local soil conditions, and nature of the geology in between, as well as the event's magnitude; every quake thus entails numerous intensities. The data needed for calculating intensity are obtained from a network of 670 observation stations using "Model 95" strong ground motion accelerometers. The agency provides the public with real-time reports through the media and Internet giving event time, epicenter (location), magnitude, and depth followed by intensity readings at affected localities. History. The Tokyo Meteorological Observatory, which in 1887 became the Central Meteorological Observatory first defined a four-increment intensity scale in 1884 with the levels , , , and . In 1898 the scale was changed to a numerical scheme, assigning earthquakes levels 0–7. In 1908, descriptive parameters were defined for each level on the scale, and the intensities at particular locales accompanying an earthquake were assigned a level according to perceived effect on people at each observation site. This was widely used during the Meiji period and revised during the Shōwa period with the descriptions seeing an overhaul. Following the Great Hanshin Earthquake of 1995, the first quake to generate shaking of the scale's strongest intensity (7), intensities 5 and 6 were each redefined into two new levels, reconfiguring the scale into one of 10 increments: 0–4, 5-lower (5–), 5-upper (5+), 6-lower (6–), 6-upper (6+), and 7. This scale has been in use since 1996. Scale overview. The JMA scale is expressed in levels of seismic intensity from 0 to 7 in a manner similar to that of the Mercalli intensity scale, which is not commonly used in Japan. Real-time earthquake reports are calculated automatically from seismic-intensity-meter measurements of peak ground acceleration throughout an affected area, and the JMA reports the intensities for a given quake according to the ground acceleration at measurement points. Since there is no simple, linear correlation between ground acceleration and intensity (it also depends on the duration of shaking), the ground-acceleration values in the following table are approximations. Intensity 7. The Intensity 7 (, "Shindo 7") is the maximum intensity in the Japan Meteorological Agency seismic intensity scale, covering earthquakes with an instrumental intensity (計測震度) of 6.5 and up. At Intensity 7, it becomes impossible to move at will. The intensity was created following the 1948 Fukui earthquake. It was observed for the first time in the 1995 Great Hanshin earthquake. Seismic intensity measurement. Observation system. 
Since April 1997, Japan has been using automated devices known as "seismic intensity meters" to measure and report the strength of earthquakes based on the JMA scale. This replaced the old system that relied on human observation and damage assessment. The installation of these meters began in 1991 with the "Model 90 seismic intensity meter," which didn't have the capability to record waveforms. In 1994, an upgraded version, the "Model 93 seismic intensity meter," was introduced. This model could record digital waveforms on memory cards. Later, the "Model 95 seismic intensity meter" was introduced, which had several improvements including the ability to observe double the acceleration limit and a higher sampling rate. Today, all of JMA's seismic intensity meters are of this "Model 95" type. Specifications of the Model 95 Seismic Intensity Meter:
Observation components: NS (North-South), EW (East-West), UD (Up-Down) - three components (seismic intensity is a composite of the three components)
Measurement range: -2048 gal to 2048 gal
Sampling: 100 Hz rate, 24-bit
Recording standard: seismic intensity of 0.5 or higher (collected in one-minute intervals)
Recording medium: IC memory card
By the end of 2009, about 4,200 of these meters were in use for JMA's "seismic intensity information," and by August 2011, this number had grown to 4,313. This was a significant increase from the roughly 600 units in use when the switch to measured seismic intensity was made. This shows that Japan's network for observing seismic activity is one of the most comprehensive in the world. Of these meters, around 600 are managed by the JMA, about 780 by the National Research Institute for Earth Science and Disaster Resilience (NIED), and roughly 2,900 by local government bodies. The network was designed with the aim of having one seismometer in each municipality before the major municipal mergers of the Heisei era. Additional units were installed in remote islands and areas with low populations to ensure complete coverage. Besides the seismic intensity meters used for JMA's information, many other meters have been installed by local government bodies that are not used for JMA's information. Public institutions and public transportation organizations have also independently installed meters to ensure the safety of infrastructure like dams, rivers, and railways. Observation instrument installation. To ensure the accuracy of earthquake intensity measurements, there are specific guidelines for setting up seismic intensity meters. The JMA doesn't use data from meters that are set up in unsuitable locations for their earthquake intensity information. Firstly, these meters should be placed on a sturdy stand designed for them. Because the ground can shake more on embankments or cliffs, the meters should be set up outside on flat, stable ground with no steps nearby, and at least two-thirds of the stand should be buried in the ground. There are also rules about nearby structures. The meters should be far enough away from trees or fences that could fall over and hit the meter. If the meters are set up inside, they should be placed near the pillars on the ground floor, and they can be set up anywhere from the basement to the second floor. Meters aren't set up in buildings that have earthquake isolation or control construction. Seismic intensity meters should be securely attached to the stand or, if they're inside, to the floor. 
It's recommended to follow the setup instructions provided for each type of meter and, if possible, to secure them with anchor bolts. The JMA rates the setup location of seismic intensity meters used for earthquake intensity information on a scale from A to E. Grades A to C are acceptable, D is generally not used but may be used after careful consideration, and E is not acceptable. However, there have been cases where earthquake intensity information was used even though the meters were set up in unsuitable locations, and later the accuracy of the information was questioned and corrected. For instance, during the July 2008 Iwate earthquake, an earthquake intensity of 6+ (later changed to 6-) was recorded in Ono, Hirono Town, Iwate Prefecture. This intensity was much higher than in nearby municipalities, which led to an investigation. On October 29 of the same year, the JMA announced that the meter in Ono was in an unsuitable location for earthquake observation and removed it from the earthquake intensity data, correcting the maximum intensity from 6+ to 6-. Since the meter in Ono was originally rated as acceptable, it's been suggested that other meters could also be in deteriorating setup locations. Density of station placement and maximum seismic intensity. The number of seismic monitoring stations significantly grew in 1996, thanks to the JMA increasing the number of seismic observation points. This growth has made it easier to detect strong earthquakes near their origin point. For example, the 1984 Nagano earthquake, which caused a lot of damage but was only rated as a 4 in terms of seismic intensity, and the 1946 Nankai earthquake, a huge earthquake that was rated as a 5, would have been given lower ratings if there weren't any monitoring stations near their origin points before 1995. After the increase in monitoring stations, even if an earthquake is the same size as before, it's likely to be given a higher seismic intensity rating, and high intensity ratings like 6- are reported more often. The increase in seismic observation points has made it possible to detect earthquake intensities closer to their origin point, and the JMA is studying the differences between the highest earthquake intensities detected at all monitoring stations and the intensities measured at JMA offices, to understand how the increase in monitoring stations has changed the maximum seismic intensities. Here are a few examples: In earthquakes with smaller magnitudes, the range of seismic intensity 6- becomes narrower. Even so, if there are many observation points, some will fall within the range of seismic intensity 6-. However, if there are fewer observation points, there is a high possibility that the maximum seismic intensity will be lower because it will not be captured by the observation points. Before 1995, an earthquake with a maximum seismic intensity of 6 was certainly a "major earthquake" in terms of magnitude. However, since 1996, even very shallow minor earthquakes are more likely to report seismic intensities of 5 or 6, so it is not appropriate to treat "earthquakes with a maximum seismic intensity of 6" on par with those before 1995. It may seem as if there have been more earthquakes since the Great Hanshin-Awaji Earthquake, but this is not because there have been more earthquakes, but because there have been more reports of seismic intensity. Furthermore, seismic intensity observation points are not uniformly distributed by area. 
They are often installed in regions with high population density, especially in urban areas. This tendency is particularly strong for observation points set up by local public entities. In these high population density areas, there tends to be a higher amplification rate of seismic intensity in the surface soil layer. Seismic intensity calculation. The seismometers used by the JMA and others observe shaking through accelerometers. They first measure the three components of motion - vertical, north-south, and east-west - as time-domain signals of acceleration. The instrumental seismic intensity (decimal value) is then calculated through the following process: the three records are transformed into the frequency domain, where a band-pass filter is applied, namely the product of a period-effect filter formula_4, a high-cut filter formula_2 (where formula_3) and a low-cut filter formula_1, with formula_0 denoting frequency; the filtered spectra are transformed back into the time domain and combined by vector composition into a single waveform formula_6 as a function of time formula_7; the value formula_5 is determined as the largest acceleration whose total exceedance time over the record satisfies formula_8, where formula_9 is the Heaviside step function and formula_10 and formula_11 are the start and end times of the record; finally, the instrumental intensity formula_13 is computed from formula_12. Round the instrumental seismic intensity (if negative, it is 0, if 8 or more, it is 7) to determine the seismic intensity level from 0 to 7. In the case of seismic intensity 5 and 6, it is further divided into lower and upper depending on whether it is rounded up or down (refer to the Scale overview section). Comparison with other seismic scales. A 1971 study that collected and compared intensities according to the JMA and the Medvedev–Sponheuer–Karnik (MSK) scales showed that the JMA scale was more suited to smaller earthquakes whereas the MSK scale was more suited to larger earthquakes. The research also suggested that for small earthquakes up to JMA intensity 3, a correlation between the MSK and JMA values could be calculated with the formula MSK = JMA × 1.5 + 1.5, whereas for larger earthquakes the correlation was MSK = JMA × 1.5 + 0.75. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
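The calculation described in the Seismic intensity calculation section can be sketched in code. The following is a rough, simplified illustration only (not JMA's operational implementation): it assumes three acceleration records in gal stored as NumPy arrays with a fixed sampling interval well below 0.3 s, applies the filters given in this article's formulas in the frequency domain, and converts the 0.3-second exceedance level into an instrumental intensity.

```python
import numpy as np

def instrumental_intensity(acc_ns, acc_ew, acc_ud, dt):
    """Sketch of the JMA instrumental seismic intensity calculation:
    filter the three acceleration components (gal) in the frequency domain,
    find the level 'a' exceeded for a total of 0.3 s by the composed record,
    and map it to an intensity with I = 2*log10(a) + 0.94."""
    n = len(acc_ns)
    freqs = np.fft.rfftfreq(n, dt)
    f = np.where(freqs > 0, freqs, 1e-6)             # avoid division by zero at DC
    period_filter = np.sqrt(1.0 / f)                 # period-effect filter
    x = f / 10.0
    high_cut = 1.0 / np.sqrt(1 + 0.694*x**2 + 0.241*x**4 + 0.0557*x**6
                             + 0.009664*x**8 + 0.00134*x**10 + 0.000155*x**12)
    low_cut = np.sqrt(1.0 - np.exp(-(f / 0.5)**3))
    filt = period_filter * high_cut * low_cut

    filtered = [np.fft.irfft(np.fft.rfft(a) * filt, n)
                for a in (acc_ns, acc_ew, acc_ud)]
    composite = np.sqrt(sum(c**2 for c in filtered))  # vector composition A(t)

    # Largest acceleration exceeded for a cumulative 0.3 s of the record.
    samples = int(round(0.3 / dt))
    a = np.sort(composite)[-samples]
    return 2.0 * np.log10(a) + 0.94
```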
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "\\sqrt{1-\\exp \\left( -\\left(\\frac{f}{0.5}\\right)^3 \\right)}" }, { "math_id": 2, "text": "\\frac{1}{\\sqrt{1+0.694x^2+0.241x^4+0.0557x^6+0.009664x^8+0.00134x^{10}+0.000155x^{12}}}" }, { "math_id": 3, "text": "x=\\frac{f}{10}" }, { "math_id": 4, "text": "\\sqrt{\\frac{1}{f}}" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "A(t)" }, { "math_id": 7, "text": "t" }, { "math_id": 8, "text": "\\int_{T_1}^{T_2} H(|A(t)| - a) dt = 0.3" }, { "math_id": 9, "text": "H(x)" }, { "math_id": 10, "text": "T_1" }, { "math_id": 11, "text": "T_2" }, { "math_id": 12, "text": "I=2 \\log_{10}a+0.94" }, { "math_id": 13, "text": "I" } ]
https://en.wikipedia.org/wiki?curid=621043
6210801
Midnight Zoo
Midnight Zoo was an Australian late-night interactive game show broadcast in parts of Australia on the Seven Network. "Midnight Zoo" debuted on 31 July 2006 and was broadcast from Sydney. It was shown live throughout Victoria and in the capital cities of Sydney and Brisbane, and ran from 12:30 am to 2:00 am on weekday mornings. The final airing of the show was on 21 October 2006. The show was hosted by Steven O'Donnell, Angie Richards, and Charlotte Connell. Etymology. The somewhat unusual name of the show stems from the name of the show's creator, Jack Dita aka 'The Zoo Keeper'. The presence of 'Midnight' in the name also hints at the show's late timeslot. Cast. Midnight Zoo was hosted by Steven O'Donnell, Angie Richards and Charlotte Connell. Format. Midnight Zoo was similar in concept to Network Ten's 'Up Late Game Show', hosted by former Big Brother housemate, Simon Deering aka Hotdogs. The 90 minute runtime of Midnight Zoo was split into three 30 minute slots. Each slot was allocated to one host/location pair. The puzzles were mostly brain teasers and questions relating to popular culture and were regarded as relatively simple (for example, 'Name a movie starting with B'). Money was awarded via the gameboard, which dictated the amount of money won for a particular answer. A puzzle featured in a previous slot was carried over to the next slot, meaning that one puzzle could sometimes occupy the entire 90 minute runtime. To maintain interest in the show over the course of its duration, there were several segments featured during each slot. These were mostly to encourage callers during slow periods, which in turn encouraged human interaction with the show. These included: All segments came with the added incentive of being featured on live television as a caller. At the same time, a puzzle's jackpot amount steadily increased throughout the show. As an added bonus, all successful callers were offered a chance at the Bonus Board. The Bonus Board featured 12 numbers, of which the contestant had to choose 3. If the contestant chose all three correctly, their winnings were increased by a jackpot amount which varied from $2500 to $12000. Criticism and problems. On 11 August 2006, satirical comedy series "The Chaser's War on Everything" featured a segment on the influx of late night phone-in quiz shows. The segment mocked the standard of all late night quiz programs and their questions. For "Midnight Zoo", particular reference was made to the female hosts wearing bikinis. One Australian TV critic has even classed "Midnight Zoo" as the worst of the genre based on the use of bikinis. Channel Seven's morning program "Sunrise" showed a clip one morning of the show the night before where a picture of "Sunrise" host David Koch was used in a "name that face" type game. The "Sunrise" team criticised Midnight Zoo for spelling his name wrong and the show in general. Probability. The only probability present on Midnight Zoo that can be computed is the chance of success on the Bonus Board. This is equal to 1 in formula_0, or 1 in 220. The relative ease of the questions featured on Midnight Zoo suggests easy winnings for contestants. However, one must consider two important factors present in the game that restrict successful callers. Firstly, one must consider the chance of actually being chosen as a contestant. Secondly, one must consider the number of possible answers for the questions with respect to the number of available answers on the game board. Pricing and procedure. 
Midnight Zoo contestants are charged at a premium rate of 55c per call or text. The method of player selection was handled by registering interest via SMS or landline. When selected, players would receive a call from Midnight Zoo. Problems arising from this method include Midnight Zoo's use of a private line, eliminating a number of mobile users who ignore such calls. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
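The Bonus Board odds quoted in the Probability section follow from a single binomial coefficient; a quick check (purely illustrative):

```python
from math import comb

ways = comb(12, 3)      # ways to choose 3 numbers out of 12
print(ways, 1 / ways)   # 220, so the success chance is 1 in 220
```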
[ { "math_id": 0, "text": "12C3" } ]
https://en.wikipedia.org/wiki?curid=6210801
6211
Context-sensitive grammar
Type of a formal grammar A context-sensitive grammar (CSG) is a formal grammar in which the left-hand sides and right-hand sides of any production rules may be surrounded by a context of terminal and nonterminal symbols. Context-sensitive grammars are more general than context-free grammars, in the sense that there are languages that can be described by a CSG but not by a context-free grammar. Context-sensitive grammars are less general (in the same sense) than unrestricted grammars. Thus, CSGs are positioned between context-free and unrestricted grammars in the Chomsky hierarchy. A formal language that can be described by a context-sensitive grammar, or, equivalently, by a noncontracting grammar or a linear bounded automaton, is called a context-sensitive language. Some textbooks actually define CSGs as non-contracting, although this is not how Noam Chomsky defined them in 1959. This choice of definition makes no difference in terms of the languages generated (i.e. the two definitions are weakly equivalent), but it does make a difference in terms of what grammars are structurally considered context-sensitive; the latter issue was analyzed by Chomsky in 1963. Chomsky introduced context-sensitive grammars as a way to describe the syntax of natural language where it is often the case that a word may or may not be appropriate in a certain place depending on the context. Walter Savitch has criticized the terminology "context-sensitive" as misleading and proposed "non-erasing" as better explaining the distinction between a CSG and an unrestricted grammar. Although it is well known that certain features of languages (e.g. cross-serial dependency) are not context-free, it is an open question how much of CSGs' expressive power is needed to capture the context sensitivity found in natural languages. Subsequent research in this area has focused on the more computationally tractable mildly context-sensitive languages. The syntaxes of some visual programming languages can be described by context-sensitive graph grammars. Formal definition. Formal grammar. Let us notate a formal grammar as formula_0, with formula_1 a set of nonterminal symbols, formula_2 a set of terminal symbols, formula_3 a set of production rules, and formula_4 the start symbol. A string formula_5 "directly yields", or "directly derives to", a string formula_6, denoted as formula_7, if "v" can be obtained from "u" by an application of some production rule in "P", that is, if formula_8 and formula_9, where formula_10 is a production rule, and formula_11 is the unaffected left and right part of the string, respectively. More generally, "u" is said to "yield", or "derive to", "v", denoted as formula_12, if "v" can be obtained from "u" by repeated application of production rules, that is, if formula_13 for some "n" ≥ 0 and some strings formula_14. In other words, the relation formula_15 is the reflexive transitive closure of the relation formula_16. The language of the grammar "G" is the set of all terminal-symbol strings derivable from its start symbol, formally: formula_17. Derivations that do not end in a string composed of terminal symbols only are possible, but do not contribute to "L"("G"). Context-sensitive grammar. A formal grammar is context-sensitive if each rule in "P" is either of the form formula_18 where formula_19 is the empty string, or of the form α"A"β → αγβ with "A" ∈ "N", formula_20, and formula_21. 
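As a small illustration of the rule format just defined, the sketch below (a generic helper, not taken from any reference) tests whether a single production "u" → "v" can be written as α"A"β → αγβ with "A" a nonterminal and γ nonempty; the extra restriction that the start symbol not occur in the contexts or in γ (needed only when the rule "S" → ε is included) is ignored for simplicity.

```python
def is_context_sensitive_rule(lhs, rhs, nonterminals):
    """True if lhs -> rhs fits the scheme alpha A beta -> alpha gamma beta,
    where A is a single nonterminal and gamma is nonempty."""
    for p in range(len(lhs)):              # p = |alpha|
        s = len(lhs) - p - 1               # s = |beta|, so lhs = alpha + A + beta
        if len(rhs) - p - s < 1:           # gamma = rhs[p:len(rhs)-s] must be nonempty
            continue
        alpha_ok = lhs[:p] == rhs[:p]
        beta_ok = s == 0 or lhs[len(lhs)-s:] == rhs[len(rhs)-s:]
        if alpha_ok and beta_ok and lhs[p] in nonterminals:
            return True
    return False

N = {"S", "B", "C", "W", "Z"}
print(is_context_sensitive_rule("CB", "CZ", N))    # True:  alpha="C", A="B", gamma="Z"
print(is_context_sensitive_rule("CB", "BC", N))    # False: no single-symbol rewrite fits
print(is_context_sensitive_rule("S", "aSBC", N))   # True:  empty contexts, gamma="aSBC"
```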
The name "context-sensitive" is explained by the α and β that form the context of "A" and determine whether "A" can be replaced with γ or not. By contrast, in a context-free grammar, no context is present: the left hand side of every production rule is just a nonterminal. The string γ is not allowed to be empty. Without this restriction, the resulting grammars become equal in power to unrestricted grammars. (Weakly) equivalent definitions. A noncontracting grammar is a grammar in which for any production rule, of the form "u" → "v", the length of "u" is less than or equal to the length of "v". Every context-sensitive grammar is noncontracting, while every noncontracting grammar can be converted into an equivalent context-sensitive grammar; the two classes are weakly equivalent. Some authors use the term "context-sensitive grammar" to refer to noncontracting grammars in general. The left-context- and right-context-sensitive grammars are defined by restricting the rules to just the form α"A" → αγ and to just "A"β → γβ, respectively. The languages generated by these grammars are also the full class of context-sensitive languages. The equivalence was established by Penttonen normal form. Examples. "anbncn". The following context-sensitive grammar, with start symbol "S", generates the canonical non-context-free language { "anbncn" | "n" ≥ 1 } : Rules 1 and 2 allow for blowing-up "S" to "a""n""BC"("BC")"n"−1; rules 3 to 6 allow for successively exchanging each "CB" to "BC" (four rules are needed for that since a rule "CB" → "BC" wouldn't fit into the scheme α"A"β → αγβ); rules 7–10 allow replacing a non-terminal "B" or "C" with its corresponding terminal "b" or "c", respectively, provided it is in the right place. A generation chain for "aaabbbccc" is: "S" →2 aSBC →2 "aaSBCBC" →1 "aaaBCBCBC" →3 "aaaBCZCBC" →4 "aaaBWZCBC" →5 "aaaBWCCBC" →6 "aaaBBCCBC" →3 "aaaBBCCZC" →4 "aaaBBCWZC" →5 "aaaBBCWCC" →6 "aaaBBCBCC" →3 "aaaBBCZCC" →4 "aaaBBWZCC" →5 "aaaBBWCCC" →6 "aaaBBBCCC" →7 "aaabBBCCC" →8 "aaabbBCCC" →8 "aaabbbCCC" →9 "aaabbbcCC" →10 "aaabbbccC" →10 "aaabbbccc" "anbncndn", etc.. More complicated grammars can be used to parse { "anbncndn" | "n" ≥ 1 }, and other languages with even more letters. Here we show a simpler approach using non-contracting grammars: Start with a kernel of regular productions generating the sentential forms formula_22 and then include the non contracting productions formula_23, formula_24, formula_25, formula_26, formula_27, formula_28, formula_29, formula_30, formula_31, formula_32. "ambncmdn". A non contracting grammar (for which there is an equivalent CSG) for the language formula_33 is defined by formula_34, formula_35, formula_36, formula_37, formula_38, formula_39, formula_40, and formula_41. With these definitions, a derivation for formula_42 is: formula_43. "a"2"i". A noncontracting grammar for the language { "a"2"i" | "i" ≥ 1 } is constructed in Example 9.5 (p. 224) of (Hopcroft, Ullman, 1979): Kuroda normal form. Every context-sensitive grammar which does not generate the empty string can be transformed into a weakly equivalent one in Kuroda normal form. "Weakly equivalent" here means that the two grammars generate the same language. The normal form will not in general be context-sensitive, but will be a noncontracting grammar. The Kuroda normal form is an actual normal form for non-contracting grammars. Properties and uses. Equivalence to linear bounded automaton. 
A formal language can be described by a context-sensitive grammar if and only if it is accepted by some linear bounded automaton (LBA). In some textbooks this result is attributed solely to Landweber and Kuroda. Others call it the Myhill–Landweber–Kuroda theorem. (Myhill introduced the concept of deterministic LBA in 1960. Peter S. Landweber published in 1963 that the language accepted by a deterministic LBA is context sensitive. Kuroda introduced the notion of non-deterministic LBA and the equivalence between LBA and CSGs in 1964.) As of 2010[ [update]] it is still an open question whether every context-sensitive language can be accepted by a "deterministic" LBA. Closure properties. Context-sensitive languages are closed under complement. This 1988 result is known as the Immerman–Szelepcsényi theorem. Moreover, they are closed under union, intersection, concatenation, substitution, inverse homomorphism, and Kleene plus. Every recursively enumerable language "L" can be written as "h"("L") for some context-sensitive language "L" and some string homomorphism "h". Computational problems. The decision problem that asks whether a certain string "s" belongs to the language of a given context-sensitive grammar "G", is PSPACE-complete. Moreover, there are context-sensitive grammars whose languages are PSPACE-complete. In other words, there is a context-sensitive grammar "G" such that deciding whether a certain string "s" belongs to the language of "G" is PSPACE-complete (so "G" is fixed and only "s" is part of the input of the problem). The emptiness problem for context-sensitive grammars (given a context-sensitive grammar "G", is "L"("G")=∅ ?) is undecidable. As model of natural languages. Savitch has proven the following theoretical result, on which he bases his criticism of CSGs as basis for natural language: for any recursively enumerable set "R", there exists a context-sensitive language/grammar "G" which can be used as a sort of proxy to test membership in "R" in the following way: given a string "s", "s" is in "R" if and only if there exists a positive integer "n" for which "scn" is in G, where "c" is an arbitrary symbol not part of "R". It has been shown that nearly all natural languages may in general be characterized by context-sensitive grammars, but the whole class of CSGs seems to be much bigger than natural languages. Worse yet, since the aforementioned decision problem for CSGs is PSPACE-complete, that makes them totally unworkable for practical use, as a polynomial-time algorithm for a PSPACE-complete problem would imply P=NP. It was proven that some natural languages are not context-free, based on identifying so-called cross-serial dependencies and unbounded scrambling phenomena. However this does not necessarily imply that the class of CSGs is necessary to capture "context sensitivity" in the colloquial sense of these terms in natural languages. For example, linear context-free rewriting systems (LCFRSs) are strictly weaker than CSGs but can account for the phenomenon of cross-serial dependencies; one can write a LCFRS grammar for {"anbncndn" | "n" ≥ 1} for example. Ongoing research on computational linguistics has focused on formulating other classes of languages that are "mildly context-sensitive" whose decision problems are feasible, such as tree-adjoining grammars, combinatory categorial grammars, coupled context-free languages, and linear context-free rewriting systems. 
The languages generated by these formalisms properly lie between the context-free and context-sensitive languages. More recently, the class PTIME has been identified with range concatenation grammars, which are now considered to be the most expressive of the mild-context sensitive language classes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
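Because a noncontracting grammar never shortens a sentential form, the membership problem discussed under Computational problems can be decided by brute force: explore every sentential form no longer than the input string. The sketch below is only an illustration of decidability (the search is exponential, consistent with the PSPACE-completeness mentioned above) and uses the { "anbncn" | "n" ≥ 1 } grammar from the Examples section.

```python
from collections import deque

def derives(rules, start, target):
    """Exhaustive-search membership test for a noncontracting grammar:
    no reachable sentential form longer than the target needs to be explored."""
    limit = len(target)
    seen, queue = {start}, deque([start])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:                       # try every occurrence of lhs
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= limit and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return False

rules = [("S", "aSBC"), ("S", "aBC"), ("CB", "CZ"), ("CZ", "WZ"), ("WZ", "WC"),
         ("WC", "BC"), ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]
print(derives(rules, "S", "aaabbbccc"))   # True
print(derives(rules, "S", "aabbbcc"))     # False
```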
[ { "math_id": 0, "text": "G = (N, \\Sigma, P, S)" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "S \\in N" }, { "math_id": 5, "text": "u \\in (N \\cup \\Sigma)^*" }, { "math_id": 6, "text": "v \\in (N \\cup \\Sigma)^*" }, { "math_id": 7, "text": "u \\Rightarrow v" }, { "math_id": 8, "text": "u = \\gamma L \\delta" }, { "math_id": 9, "text": "v = \\gamma R \\delta" }, { "math_id": 10, "text": "(L \\to R) \\in P" }, { "math_id": 11, "text": "\\gamma, \\delta \\in (N \\cup \\Sigma)^*" }, { "math_id": 12, "text": "u \\Rightarrow^* v" }, { "math_id": 13, "text": "u = u_0 \\Rightarrow ... \\Rightarrow u_n = v" }, { "math_id": 14, "text": "u_1, ..., u_{n-1} \\in (N \\cup \\Sigma)^*" }, { "math_id": 15, "text": "\\Rightarrow^*" }, { "math_id": 16, "text": "\\Rightarrow" }, { "math_id": 17, "text": "L(G) = \\{ w \\in \\Sigma^* \\mid S \\Rightarrow^* w \\}" }, { "math_id": 18, "text": "S \\to \\varepsilon" }, { "math_id": 19, "text": "\\varepsilon" }, { "math_id": 20, "text": "\\alpha, \\beta\\in (N \\cup \\Sigma \\setminus\\{S\\})^*" }, { "math_id": 21, "text": "\\gamma\\in (N \\cup \\Sigma \\setminus\\{S\\})^+" }, { "math_id": 22, "text": "(ABCD)^{n}abcd" }, { "math_id": 23, "text": "p_{Da} : Da\\rightarrow aD" }, { "math_id": 24, "text": "p_{Db} : Db\\rightarrow bD" }, { "math_id": 25, "text": "p_{Dc} : Dc\\rightarrow cD" }, { "math_id": 26, "text": "p_{Dd} : Dd\\rightarrow dd" }, { "math_id": 27, "text": "p_{Ca} : Ca\\rightarrow aC" }, { "math_id": 28, "text": "p_{Cb} : Cb\\rightarrow bC" }, { "math_id": 29, "text": "p_{Cc} : Cc\\rightarrow cc" }, { "math_id": 30, "text": "p_{Ba} : Ba\\rightarrow aB" }, { "math_id": 31, "text": "p_{Bb} : Bb\\rightarrow bb" }, { "math_id": 32, "text": "p_{Aa} : Aa\\rightarrow aa" }, { "math_id": 33, "text": "L_{Cross} = \\{ a^mb^nc^{m}d^{n} \\mid m \\ge 1, n \\ge 1 \\}" }, { "math_id": 34, "text": "p_1 : R\\rightarrow aRC | aC" }, { "math_id": 35, "text": "p_3 : T\\rightarrow BTd | Bd" }, { "math_id": 36, "text": "p_5 : CB\\rightarrow BC" }, { "math_id": 37, "text": "p_0 : S \\rightarrow RT" }, { "math_id": 38, "text": "p_6 : aB\\rightarrow ab" }, { "math_id": 39, "text": "p_7 : bB\\rightarrow bb" }, { "math_id": 40, "text": "p_8 : Cd\\rightarrow cd" }, { "math_id": 41, "text": "p_9 : Cc\\rightarrow cc" }, { "math_id": 42, "text": "a^3b^2c^3d^2" }, { "math_id": 43, "text": "S\n\\Rightarrow_{p_0} RT\n\\Rightarrow_{p^{2}_{1}p_{2}} a^3C^3T\n\\Rightarrow_{p_{3}p_{4} } a^3C^3B^2d^2\n\\Rightarrow_{p^{6}_{5} } a^3B^2C^3d^2\n\\Rightarrow_{p_{6}p_{7} } a^3b^2C^3d^2\n\\Rightarrow_{p_{8}p^{2}_{9}} a^3b^2c^3d^2\n" }, { "math_id": 44, "text": "S\\rightarrow [ACaB]" }, { "math_id": 45, "text": "\\begin{cases}\n\\ [Ca]a\\rightarrow aa[Ca] \\\\ \n\\ [Ca][aB]\\rightarrow aa[CaB] \\\\ \n\\ [ACa]a\\rightarrow [Aa]a[Ca] \\\\ \n\\ [ACa][aB]\\rightarrow [Aa]a[CaB] \\\\ \n\\ [ACaB]\\rightarrow [Aa][aCB] \\\\ \n\\ [CaB]\\rightarrow a[aCB]\n\\end{cases}" }, { "math_id": 46, "text": "[aCB]\\rightarrow [aDB]" }, { "math_id": 47, "text": "[aCB]\\rightarrow [aE]" }, { "math_id": 48, "text": "\\begin{cases}\n\\ a[Da]\\rightarrow [Da]a \\\\\n\\ [aDB]\\rightarrow [DaB] \\\\ \n\\ [Aa][Da]\\rightarrow [ADa]a \\\\ \n\\ a[DaB]\\rightarrow [Da][aB] \\\\\n\\ [Aa][DaB]\\rightarrow [ADa][aB]\n\\end{cases}" }, { "math_id": 49, "text": "[ADa]\\rightarrow [ACa]" }, { "math_id": 50, "text": "\\begin{cases}\n\\ a[Ea]\\rightarrow [Ea]a \\\\\n\\ [aE]\\rightarrow [Ea] \\\\\n\\ [Aa][Ea]\\rightarrow [AEa]a\n\\end{cases}" }, { 
"math_id": 51, "text": "[AEa]\\rightarrow a" } ]
https://en.wikipedia.org/wiki?curid=6211
6211280
Departure function
Model of thermodynamic properties In thermodynamics, a departure function is defined for any thermodynamic property as the difference between the property as computed for an ideal gas and the property of the species as it exists in the real world, for a specified temperature "T" and pressure "P". Common departure functions include those for enthalpy, entropy, and internal energy. Departure functions are used to calculate real fluid extensive properties (i.e. properties which are computed as a difference between two states). A departure function gives the difference between the real state, at a finite volume or non-zero pressure and temperature, and the ideal state, usually at zero pressure or infinite volume and temperature. For example, to evaluate enthalpy change between two points "h"("v"1,"T"1) and "h"("v"2,"T"2) we first compute the enthalpy departure function between volume "v"1 and infinite volume at "T" = "T"1, then add to that the ideal gas enthalpy change due to the temperature change from "T"1 to "T"2, then subtract the departure function value between "v"2 and infinite volume. Departure functions are computed by integrating a function which depends on an equation of state and its derivative. General expressions. General expressions for the enthalpy "H", entropy "S" and Gibbs free energy "G" are given by formula_0 Departure functions for Peng–Robinson equation of state. The Peng–Robinson equation of state relates the three interdependent state properties pressure "P", temperature "T", and molar volume "V""m". From the state properties ("P", "Vm", "T"), one may compute the departure function for enthalpy per mole (denoted "h") and entropy per mole ("s"): formula_1 where formula_2 is defined in the Peng-Robinson equation of state, "Tr" is the reduced temperature, "Pr" is the reduced pressure, "Z" is the compressibility factor, and formula_3 formula_4 Typically, one knows two of the three state properties ("P", "Vm", "T"), and must compute the third directly from the equation of state under consideration. To calculate the third state property, it is necessary to know three constants for the species at hand: the critical temperature "Tc", critical pressure "Pc", and the acentric factor "ω". But once these constants are known, it is possible to evaluate all of the above expressions and hence determine the enthalpy and entropy departures.
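As a rough numerical illustration of the expressions above, the sketch below evaluates the Peng-Robinson enthalpy and entropy departures for a given ("T", "P") state. The α-function and the cubic equation for the compressibility factor "Z" are the standard Peng-Robinson forms, which this article does not write out, and the nitrogen-like critical constants in the example call are illustrative values only.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_departures(T, P, Tc, Pc, omega):
    """Enthalpy and entropy departures (real minus ideal gas) per mole,
    using the Peng-Robinson expressions quoted in this article."""
    Tr, Pr = T / Tc, P / Pc
    kappa = 0.37464 + 1.54226*omega - 0.26992*omega**2
    alpha = (1.0 + kappa*(1.0 - np.sqrt(Tr)))**2      # standard PR alpha(T)
    A = 0.45724 * alpha * Pr / Tr**2                  # standard PR 'A'
    B = 0.07780 * Pr / Tr
    # Compressibility factor: real root of the PR cubic (vapour root = largest).
    coeffs = [1.0, -(1.0 - B), A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
    Z = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-10)
    log_term = np.log((Z + 2.414*B) / (Z - 0.414*B))
    h_dep = R*Tc*(Tr*(Z - 1.0) - 2.078*(1.0 + kappa)*np.sqrt(alpha)*log_term)
    s_dep = R*(np.log(Z - B) - 2.078*kappa*((1.0 + kappa)/np.sqrt(Tr) - kappa)*log_term)
    return h_dep, s_dep

# Illustrative call with nitrogen-like constants (Tc in K, Pc in Pa).
print(pr_departures(T=300.0, P=5.0e6, Tc=126.2, Pc=3.40e6, omega=0.040))
```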
[ { "math_id": 0, "text": "\\begin{align}\n\\frac{H^\\mathrm{ig}-H}{RT} &= \\int_V^\\infty \\left[ T \\left(\\frac{\\partial Z}{\\partial T}\\right)_V \\right] \\frac{dV}{V} + 1 - Z \\\\[2ex]\n\\frac{S^\\mathrm{ig}-S}{R} &= \\int_V^\\infty \\left[ T \\left(\\frac{\\partial Z}{\\partial T}\\right)_V - 1 + Z\\right] \\frac{dV}{V} - \\ln Z \\\\[2ex]\n\\frac{G^\\mathrm{ig}-G}{RT} &= \\int_V^\\infty (1-Z) \\frac{dV}{V} + \\ln Z +1 - Z\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\nh_{T,P}-h_{T,P}^{\\mathrm{ideal}} &= RT_C\\left[T_r(Z-1)-2.078(1+\\kappa)\\sqrt{\\alpha}\\ln\\left(\\frac{Z+2.414B}{Z-0.414B}\\right)\\right] \\\\[1.5ex]\ns_{T,P}-s_{T,P}^{\\mathrm{ideal}} &= R\\left[\\ln(Z-B)-2.078\\kappa\\left(\\frac{1+\\kappa}{\\sqrt{T_r}}-\\kappa\\right)\\ln\\left(\\frac{Z+2.414B}{Z-0.414B}\\right)\\right]\n\\end{align}" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\kappa = 0.37464 + 1.54226\\;\\omega - 0.26992\\;\\omega^2" }, { "math_id": 4, "text": "B = 0.07780\\frac{P_r}{T_r}" } ]
https://en.wikipedia.org/wiki?curid=6211280
62112877
Dependent random choice
In mathematics, dependent random choice is a probabilistic technique that shows how to find a large set of vertices in a dense graph such that every small subset of vertices has many common neighbors. It is a useful tool to embed a graph into another graph with many edges. Thus it has its application in extremal graph theory, additive combinatorics and Ramsey theory. Statement of theorem. Let formula_0, formula_1 and suppose: formula_2Every graph on formula_3 vertices with at least formula_4 edges contains a subset formula_5 of vertices with formula_6 such that for all formula_7 with formula_8, formula_9 has at least formula_10 common neighbors. Proof. The basic idea is to choose the set of vertices randomly. However, instead of choosing each vertex uniformly at random, the procedure randomly chooses a list of formula_11 vertices first and then chooses common neighbors as the set of vertices. The hope is that in this way, the chosen set would be more likely to have more common neighbors. Formally, let formula_12 be a list of formula_11 vertices chosen uniformly at random from formula_13 with replacement (allowing repetition). Let formula_14 be the common neighborhood of formula_12. The expected value of formula_15 isformula_16For every formula_17-element subset formula_9 of formula_18, formula_14 contains formula_9 if and only if formula_12 is contained in the common neighborhood of formula_9, which occurs with probability formula_19 An formula_9 is "bad" if it has less than formula_10 common neighbors. Then for each fixed formula_17-element subset formula_9 of formula_18, it is contained in formula_14 with probability less than formula_20. Therefore by linearity of expectation,formula_21 To eliminate bad subsets, we exclude one element in each bad subset. The number of remaining elements is at least formula_22, whose expected value is at least formula_23 Consequently, there exists a formula_12 such that there are at least formula_24 elements in formula_14 remaining after getting rid of all bad formula_17-element subsets. The set formula_5 of the remaining formula_24 elements expresses the desired properties. Applications. Turán numbers of a bipartite graph. Dependent random choice can help find the Turán number. Using appropriate parameters, if formula_25 is a bipartite graph in which all vertices in formula_26 have degree at most formula_17, then the extremal number formula_27 where formula_28 only depends on formula_29. Formally, with formula_30, let formula_31 be a sufficiently large constant such that formula_32 If formula_33 then formula_34and so the assumption of dependent random choice holds. Hence, for each graph formula_35 with at least formula_36 edges, there exists a vertex subset formula_5 of size formula_37 satisfying that every formula_17-subset of formula_5 has at least formula_38 common neighbors. By embedding formula_29 into formula_35 by embedding formula_14 into formula_5 arbitrarily and then embedding the vertices in formula_26 one by one, then for each vertex formula_39 in formula_26, it has at most formula_17 neighbors in formula_14, which shows that their images in formula_5 have at least formula_38 common neighbors. Thus formula_39 can be embedded into one of the common neighbors while avoiding collisions. This can be generalized to degenerate graphs using a variation of dependent random choice. Embedding a 1-subdivision of a complete graph. 
DRC can be applied directly to show that if formula_35 is a graph on formula_3 vertices and formula_40 edges, then formula_35 contains a 1-subdivision of a complete graph with formula_41 vertices. This can be shown in a similar way to the above proof of the bound on Turán number of a bipartite graph. Indeed, if we set formula_42, we have (since formula_43)formula_44and so the DRC assumption holds. Since a 1-subdivision of the complete graph on formula_37 vertices is a bipartite graph with parts of size formula_37 and formula_45 where every vertex in the second part has degree two, the embedding argument in the proof of the bound on Turán number of a bipartite graph produces the desired result. Variation. A stronger version finds two subsets of vertices formula_46 in a dense graph formula_35 so that every small subset of vertices in formula_47 has a lot of common neighbors in formula_48. Formally, let formula_49 be some positive integers with formula_50, and let formula_1 be some real number. Suppose that the following constraints hold: formula_51 Then every graph formula_35 on formula_3 vertices with at least formula_52 edges contains two subsets formula_46 of vertices so that any formula_17 vertices in formula_47 have at least formula_10 common neighbors in formula_48. Extremal number of a degenerate bipartite graph. Using this stronger statement, one can upper bound the extremal number of formula_17-degenerate bipartite graphs: for each formula_17-degenerate bipartite graph formula_29 with at most formula_53 vertices, the extremal number formula_54 is at most formula_55 Ramsey number of a degenerate bipartite graph. This statement can be also applied to obtain an upper bound of the Ramsey number of a degenerate bipartite graphs. If formula_17 is a fixed integer, then for every bipartite formula_17-degenerate bipartite graph formula_35 on formula_3 vertices, the Ramsey number formula_56 is of the order formula_57 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
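The random selection procedure used in the Proof section can be simulated directly. The sketch below is illustrative only: the random graph, its edge density and the parameters "t", "r", "m" are arbitrary choices. It samples the list "T" with repetition, forms the common neighbourhood "A", and removes one vertex from each "bad" "r"-subset, so that every "r"-subset of the returned set has at least "m" common neighbours.

```python
import itertools, random

def dependent_random_choice(adj, t, r, m):
    """One run of the procedure from the proof, on an adjacency-set graph."""
    n = len(adj)
    T = [random.randrange(n) for _ in range(t)]               # t vertices, with repetition
    A = [v for v in range(n) if all(v in adj[u] for u in T)]  # common neighbourhood of T
    removed = set()
    for S in itertools.combinations(A, r):
        common = sum(1 for v in range(n) if all(v in adj[u] for u in S))
        if common < m:                                        # S is "bad"
            removed.add(S[0])                                 # delete one of its elements
    return [v for v in A if v not in removed]

random.seed(0)
n, p = 60, 0.6
adj = [set() for _ in range(n)]
for u, v in itertools.combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v); adj[v].add(u)

U = dependent_random_choice(adj, t=4, r=2, m=8)
print(len(U), "vertices kept; every 2 of them have at least 8 common neighbours")
```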
[ { "math_id": 0, "text": "u,n,r,m,t \\in \\mathbb{N}" }, { "math_id": 1, "text": "\\alpha>0" }, { "math_id": 2, "text": " n\\alpha^t - {n \\choose r} \\left(\\frac{m}{n}\\right)^t \\geq u." }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\frac{\\alpha n^2}{2}" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "|U|\\geq u" }, { "math_id": 7, "text": "S\\subset U" }, { "math_id": 8, "text": "|S| = r" }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": "m" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "V(G)" }, { "math_id": 14, "text": "A" }, { "math_id": 15, "text": "|A|" }, { "math_id": 16, "text": "\\begin{align}\n \\mathbb{E}|A| &= \\sum_{v\\in V}\\mathbb{P}(v\\in A)= \\sum_{v\\in V}\\mathbb{P}(T\\subseteq N(v))= \\sum_{v\\in V}\\left(\\frac{d(v)}{n}\\right)^t\\\\\n &\\geq n\\left(\\frac{1}{n}\\sum_{v\\in V}\\frac{d(v)}{n}\\right)^t \\quad\\text{(convexity)}\\\\\n &\\geq n\\alpha^t.\n\\end{align}" }, { "math_id": 17, "text": "r" }, { "math_id": 18, "text": "V" }, { "math_id": 19, "text": "\\left(\\frac{\\#\\text{common neighbors of }S}{n}\\right)^t." }, { "math_id": 20, "text": "(m/n)^t" }, { "math_id": 21, "text": "\\mathbb{E}[\\#\\text{bad }r\\text{-element subset of }A]<\\binom{n}{r}\\left(\\frac{m}{n}\\right)^t." }, { "math_id": 22, "text": "|A|-(\\#\\text{bad }r\\text{-element subset of }A)" }, { "math_id": 23, "text": " n\\alpha^t-\\binom{n}{r}\\left(\\frac{m}{n}\\right)^t\\geq u." }, { "math_id": 24, "text": "u" }, { "math_id": 25, "text": "H = A\\cup B" }, { "math_id": 26, "text": "B" }, { "math_id": 27, "text": "\\text{ex}(n, H)\\leq c n^{2-1/r}" }, { "math_id": 28, "text": "c = c(H)" }, { "math_id": 29, "text": "H" }, { "math_id": 30, "text": "a=|A|, b=|B|" }, { "math_id": 31, "text": "c" }, { "math_id": 32, "text": "(2c)^r - (a+b)^r \\geq a." }, { "math_id": 33, "text": "\\alpha = 2cn^{-1/r}, m = a+b, t = r, u = a" }, { "math_id": 34, "text": "n\\alpha^t - \\binom{n}{r}\\left(\\frac{m}{n}\\right)^t = (2c)^r - \\binom{n}{r}\\left(\\frac{a+b}{n}\\right)^r\\geq (2c)^r - (a+b)^r\\geq a=u," }, { "math_id": 35, "text": "G" }, { "math_id": 36, "text": "cn^{2-1/r}" }, { "math_id": 37, "text": "a" }, { "math_id": 38, "text": "a+b" }, { "math_id": 39, "text": "v" }, { "math_id": 40, "text": "\\epsilon n^2" }, { "math_id": 41, "text": "\\epsilon^{3/2}n^{1/2}" }, { "math_id": 42, "text": "\\alpha = 2\\epsilon, m = a^2, t=\\frac{\\log n}{2\\log 1/\\epsilon}, u=a" }, { "math_id": 43, "text": "2\\epsilon = \\alpha \\leq 1" }, { "math_id": 44, "text": "n\\alpha^t - \\binom{n}{r}\\left(\\frac{m}{n}\\right)^t \\geq (2\\epsilon)^tn - \\binom{n}{2}\\epsilon^{3t}\\geq n^{1/2}\\geq a=u," }, { "math_id": 45, "text": "b = \\binom{a}{2}" }, { "math_id": 46, "text": "U_1, U_2" }, { "math_id": 47, "text": "U_i" }, { "math_id": 48, "text": "U_{3-i}" }, { "math_id": 49, "text": "u,n,r,m,t,q" }, { "math_id": 50, "text": "q>r" }, { "math_id": 51, "text": "\n\\begin{align}\nn\\alpha^t-\\binom{n}{q}\\left(\\frac{m}{n}\\right)^t & \\geq u\\\\\n\\binom{n}{r}\\left(\\frac{m}{u}\\right)^{q-r} & <1.\n\\end{align}\n" }, { "math_id": 52, "text": "\\alpha n^2/2" }, { "math_id": 53, "text": "N^{1-1.8/s}" }, { "math_id": 54, "text": "\\text{ex}(N, H)" }, { "math_id": 55, "text": "N^{2-1/(s^3r)}." }, { "math_id": 56, "text": "r(G)" }, { "math_id": 57, "text": "n^{1+o(1)}." } ]
https://en.wikipedia.org/wiki?curid=62112877
62117133
Gradient vector flow
Computer vision framework Gradient vector flow (GVF), a computer vision framework introduced by Chenyang Xu and Jerry L. Prince, is the vector field that is produced by a process that smooths and diffuses an input vector field. It is usually used to create a vector field from images that points to object edges from a distance. It is widely used in image analysis and computer vision applications for object tracking, shape recognition, segmentation, and edge detection. In particular, it is commonly used in conjunction with active contour model. Background. Finding objects or homogeneous regions in images is a process known as image segmentation. In many applications, the locations of object edges can be estimated using local operators that yield a new image called an edge map. The edge map can then be used to guide a deformable model, sometimes called an active contour or a snake, so that it passes through the edge map in a smooth way, therefore defining the object itself. A common way to encourage a deformable model to move toward the edge map is to take the spatial gradient of the edge map, yielding a vector field. Since the edge map has its highest intensities directly on the edge and drops to zero away from the edge, these gradient vectors provide directions for the active contour to move. When the gradient vectors are zero, the active contour will not move, and this is the correct behavior when the contour rests on the peak of the edge map itself. However, because the edge itself is defined by local operators, these gradient vectors will also be zero far away from the edge and therefore the active contour will not move toward the edge when initialized far away from the edge. Gradient vector flow (GVF) is the process that spatially extends the edge map gradient vectors, yielding a new vector field that contains information about the location of object edges throughout the entire image domain. GVF is defined as a diffusion process operating on the components of the input vector field. It is designed to balance the fidelity of the original vector field, so it is not changed too much, with a regularization that is intended to produce a smooth field on its output. Although GVF was designed originally for the purpose of segmenting objects using active contours attracted to edges, it has been since adapted and used for many alternative purposes. Some newer purposes including defining a continuous medial axis representation, regularizing image anisotropic diffusion algorithms, finding the centers of ribbon-like objects, constructing graphs for optimal surface segmentations, creating a shape prior, and much more. Theory. The theory of GVF was originally described by Xu and Prince. Let formula_0 be an edge map defined on the image domain. For uniformity of results, it is important to restrict the edge map intensities to lie between 0 and 1, and by convention formula_0 takes on larger values (close to 1) on the object edges. The gradient vector flow (GVF) field is given by the vector field formula_1 that minimizes the energy functional In this equation, subscripts denote partial derivatives and the gradient of the edge map is given by the vector field formula_2. Figure 1 shows an edge map, the gradient of the (slightly blurred) edge map, and the GVF field generated by minimizing formula_3. Equation 1 is a variational formulation that has both a data term and a regularization term. The first term in the integrand is the data term. 
It encourages the solution formula_4 to closely agree with the gradients of the edge map since that will make formula_5 small. However, this only needs to happen when the edge map gradients are large since formula_5 is multiplied by the square of the length of these gradients. The second term in the integrand is a regularization term. It encourages the spatial variations in the components of the solution to be small by penalizing the sum of all the partial derivatives of formula_4. As is customary in these types of variational formulations, there is a regularization parameter formula_6 that must be specified by the user in order to trade off the influence of each of the two terms. If formula_7 is large, for example, then the resulting field will be very smooth and may not agree as well with the underlying edge gradients. Theoretical Solution. Finding formula_8 to minimize Equation 1 requires the use of calculus of variations since formula_8 is a function, not a variable. Accordingly, the Euler equations, which provide the necessary conditions for formula_4 to be a solution can be found by calculus of variations, yielding where formula_9 is the Laplacian operator. It is instructive to examine the form of the equations in (2). Each is a partial differential equation that the components formula_10 and formula_11 of formula_12 must satisfy. If the magnitude of the edge gradient is small, then the solution of each equation is guided entirely by Laplace's equation, for example formula_13, which will produce a smooth scalar field entirely dependent on its boundary conditions. The boundary conditions are effectively provided by the locations in the image where the magnitude of the edge gradient is large, where the solution is driven to agree more with the edge gradients. Computational Solutions. There are two fundamental ways to compute GVF. First, the energy function formula_14 itself (1) can be directly discretized and minimized, for example, by gradient descent. Second, the partial differential equations in (2) can be discretized and solved iteratively. The original GVF paper used an iterative approach, while later papers introduced considerably faster implementations such as an octree-based method, a multi-grid method, and an augmented Lagrangian method. In addition, very fast GPU implementations have been developed in Extensions and Advances. GVF is easily extended to higher dimensions. The energy function is readily written in a vector form as which can be solved by gradient descent or by finding and solving its Euler equation. Figure 2 shows an illustration of a three-dimensional GVF field on the edge map of a simple object (see ). The data and regularization terms in the integrand of the GVF functional can also be modified. A modification described in , called "generalized gradient vector flow" (GGVF) defines two scalar functions and reformulates the energy as While the choices formula_15 and formula_16 reduce GGVF to GVF, the alternative choices formula_17 and formula_18, for formula_19 a user-selected constant, can improve the tradeoff between the data term and its regularization in some applications. The GVF formulation has been further extended to vector-valued images in  where a weighted structure tensor of a vector-valued image is used. A learning based probabilistic weighted GVF extension was proposed in  to further improve the segmentation for images with severely cluttered textures or high levels of noise. 
The variational formulation of GVF has also been modified in "motion GVF" (MGVF) to incorporate object motion in an image sequence. Whereas the diffusion of GVF vectors from a conventional edge map acts in an isotropic manner, the formulation of MGVF incorporates the expected object motion between image frames. An alternative to GVF called vector field convolution (VFC) provides many of the advantages of GVF, has superior noise robustness, and can be computed very fast. The VFC field formula_20 is defined as the convolution of the edge map formula_21 with a vector field kernel formula_22 where The vector field kernel formula_23 has vectors that always point toward the origin but their magnitudes, determined in detail by the function formula_24, decrease to zero with increasing distance from the origin. The beauty of VFC is that it can be computed very rapidly using a fast Fourier transform (FFT), a multiplication, and an inverse FFT. The capture range can be large and is explicitly given by the radius formula_25 of the vector field kernel. A possible drawback of VFC is that weak edges might be overwhelmed by strong edges, but that problem can be alleviated by the use of a hybrid method that switches to conventional forces when the snake gets close to the boundary. Properties. GVF has characteristics that have made it useful in many diverse applications. It has already been noted that its primary original purpose was to extend a local edge field throughout the image domain, far away from the actual edge in many cases. This property has been described as an extension of the "capture range" of the external force of an active contour model. It is also capable of moving active contours into concave regions of an object's boundary. These two properties are illustrated in Figure 3. Previous forces that had been used as external forces (based on the edge map gradients and simply related variants) required pressure forces in order to move boundaries from large distances and into concave regions. Pressure forces, also called balloon forces, provide continuous force on the boundary in one direction (outward or inward), and tend to have the effect of pushing through weak boundaries. GVF can often replace pressure forces and yield better performance in such situations. Because the diffusion process is inherent in the GVF solution, vectors that point in opposite directions tend to compete as they meet at a central location, thereby defining a type of geometric feature that is related to the boundary configuration, but not directly evident from the edge map. For example, "perceptual edges" are gaps in the edge map which tend to be connected visually by human perception. GVF helps to connect them by diffusing opposing edge gradient vectors across the gap; and even though there is no actual edge map, active contour will converge to the perceptual edge because the GVF vectors drive them there (see ). This property carries over when there are so-called "weak edges" identified by regions of edge maps having lower values. GVF vectors also meet in opposition at central locations of objects thereby defining a type of medialness. This property has been exploited as an alternative definition of the skeleton of objects and also as a way to initialize deformable models within objects such that convergence to the boundary is more likely. Applications. The most fundamental application of GVF is as an external force in a deformable model. 
A typical application considers an image formula_26 with an object delineated by intensity from its background. Thus, a suitable edge map formula_27 could be defined by where formula_28 is a Gaussian blurring kernel with standard deviation formula_29 and formula_30 is convolution. This definition is applicable in any dimension and yields an edge map that falls in the range formula_31. Gaussian blurring is used primarily so that a meaningful gradient vector can always be computed, but formula_32 is generally kept fairly small so that true edge positions are not overly distorted. Given this edge map, the GVF vector field formula_33 can be computed by solving (2). The deformable model itself can be implemented in a variety of ways including parametric models such as the original snake or active surfaces and implicit models including geometric deformable models. In the case of parametric deformable models, the GVF vector field formula_12 can be used directly as the external forces in the model. If the deformable model is defined by the evolution of the (two-dimensional) active contour formula_34, then a simple parametric active contour evolution equation can be written as Here, the subscripts indicate partial derivatives and formula_35 and formula_36 are user-selected constants. In the case of geometric deformable models, then the GVF vector field formula_12 is first projected against the normal direction of the implicit wavefront, which defines an additional speed function. Accordingly, then the evolution of the signed distance function formula_37 defining a simple geometric deformable contour can be written as where formula_38 is the curvature of the contour and formula_36 is a user-selected constant. A more sophisticated deformable model formulation that combines the geodesic active contour flow with GVF forces was proposed in . This paper also shows how to apply the Additive Operator Splitting schema for rapid computation of this segmentation method. The uniqueness and existence of this combined model were proven in . A further modification of this model by using an external force term minimizing GVF divergence was proposed in  to achieve even better segmentation for images with complex geometric objects. GVF has been used to find both inner, central, and central cortical surfaces in the analysis of brain images, as shown in Figure 4. The process first finds the inner surface using a three-dimensional geometric deformable model with conventional forces. Then the central surface is found by exploiting the central tendency property of GVF. In particular, the cortical membership function of the human brain cortex, derived using a fuzzy classifier, is used to compute GVF as if itself were a thick edge map. The computed GVF vectors point towards the center of the cortex and can then be used as external forces to drive the inner surface to the central surface. Finally, another geometric deformable model with conventional forces is used to drive the central surface to a position on the outer surface of the cortex. Several notable recent applications of GVF include constructing graphs for optimal surface segmentation in spectral-domain optical coherence tomography volumes, a learning based probabilistic GVF active contour formulation to give more weights to objects of interest in ultrasound image segmentation, and an adaptive multi-feature GVF active contour for improved ultrasound image segmentation without hand-tuned parameters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
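A minimal numerical sketch of the iterative solution mentioned in the Computational Solutions section is given below. It is not the reference implementation from the original paper: it uses the usual explicit update for the GVF Euler equations (a diffusion term weighted by μ plus a data term pulling the field toward the edge-map gradient wherever that gradient is large), with periodic boundaries and a synthetic edge map chosen purely for illustration.

```python
import numpy as np

def gvf(f, mu=0.2, iterations=500, dt=0.25):
    """Iteratively compute the GVF field (u, v) of an edge map f in [0, 1]."""
    fy, fx = np.gradient(f)            # np.gradient returns d/d(row), d/d(col)
    g = fx**2 + fy**2                  # squared edge-gradient magnitude
    u, v = fx.copy(), fy.copy()        # initialise with the edge-map gradient

    def laplacian(w):                  # 5-point Laplacian, periodic boundaries
        return (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0*w)

    for _ in range(iterations):
        u += dt*(mu*laplacian(u) - g*(u - fx))   # diffusion + data attachment
        v += dt*(mu*laplacian(v) - g*(v - fy))
    return u, v

# Toy edge map: a bright ring marking the boundary of a disc.
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)
edge_map = np.exp(-((r - 20.0)**2) / 4.0)
u, v = gvf(edge_map)
```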
[ { "math_id": 0, "text": "\\textstyle f(x,y)" }, { "math_id": 1, "text": "\\textstyle \\mathbf{v}(x,y) = [u(x,y),v(x,y)]" }, { "math_id": 2, "text": "\\textstyle \\nabla f =(f_x, f_y)" }, { "math_id": 3, "text": "\\textstyle\\mathcal{E}" }, { "math_id": 4, "text": "\\textstyle\\mathbf{v}" }, { "math_id": 5, "text": "\\textstyle\\mathbf{v} - \\nabla f" }, { "math_id": 6, "text": "\\textstyle\\mu > 0" }, { "math_id": 7, "text": "\\textstyle\\mu" }, { "math_id": 8, "text": "\\textstyle\\mathbf{v}(x,y)" }, { "math_id": 9, "text": "\\textstyle\\nabla^2" }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "v" }, { "math_id": 12, "text": "\\mathbf{v}" }, { "math_id": 13, "text": "\\textstyle\\nabla^2 u = 0" }, { "math_id": 14, "text": "\\mathcal{E}" }, { "math_id": 15, "text": "\\textstyle g(\\nabla f|) = \\mu" }, { "math_id": 16, "text": "\\textstyle h(|\\nabla f|) = |\\nabla f|^2" }, { "math_id": 17, "text": "\\textstyle g(|\\nabla f|) = \\exp\\{-|\\nabla f|/K\\}" }, { "math_id": 18, "text": "\\textstyle h(\\nabla f|) = 1 - g(|\\nabla f|)" }, { "math_id": 19, "text": "K" }, { "math_id": 20, "text": "\\textstyle\\mathbf{v}_{\\mathrm{VFC}}" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "\\mathbf{k}" }, { "math_id": 23, "text": "\\textstyle\\mathbf{k}" }, { "math_id": 24, "text": "m" }, { "math_id": 25, "text": "R" }, { "math_id": 26, "text": "\\textstyle I(\\mathbf{x})" }, { "math_id": 27, "text": "\\textstyle f(\\mathbf{x})" }, { "math_id": 28, "text": "\\textstyle G_{\\sigma}" }, { "math_id": 29, "text": "\\textstyle\\sigma" }, { "math_id": 30, "text": "*" }, { "math_id": 31, "text": "[0,1]" }, { "math_id": 32, "text": "\\sigma" }, { "math_id": 33, "text": "\\textstyle\\mathbf{v}(\\mathbf{x})" }, { "math_id": 34, "text": "\\mathbf{X}(s,t)" }, { "math_id": 35, "text": "\\gamma" }, { "math_id": 36, "text": "\\alpha" }, { "math_id": 37, "text": "\\textstyle\\phi_t(\\mathbf{x})" }, { "math_id": 38, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=62117133
62119898
Value of structural health information
The value of structural health information is the expected utility gain for a built environment system from the information provided by structural health monitoring (SHM). The quantification of the value of structural health information is based on decision analysis adapted to built environment engineering. The value of structural health information can be significant for the risk and integrity management of built environment systems. Background. The value of structural health information is grounded in the framework of decision analysis and value of information analysis as introduced by Raiffa and Schlaifer and adapted to civil engineering by Benjamin and Cornell. Decision theory itself is based upon the expected utility hypothesis by Von Neumann and Morgenstern. The concepts for the value of structural health information in built environment engineering were first formulated by Pozzi and Der Kiureghian and by Faber and Thöns. Formulation. The value of structural health information is quantified with a normative decision analysis. The value of structural health monitoring formula_0 is calculated as the difference between the optimized expected utilities of performing and not performing structural health monitoring (SHM), formula_1 and formula_2, respectively: formula_3 The expected utilities are calculated with a decision scenario involving (1) interrelated built environment system state, utility and consequence models, (2) structural health information type, precision and cost models and (3) structural health action type and implementation models. The value of structural health information quantification facilitates an optimization of structural health information system parameters and information-dependent actions. Application. The value of structural health information provides a quantitative decision basis for (1) implementing SHM or not, (2) identifying the optimal SHM strategy and (3) planning optimal structural health actions, such as repair and replacement. The value of structural health information presupposes that the SHM information is relevant to the performance of the built environment system. A significant value of structural health information has been found for the risk and integrity management of engineering structures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
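A toy pre-posterior calculation can make the formulation above concrete. In the sketch below every number (prior damage probability, costs, indication accuracy) is invented purely for illustration; utilities are written as negative expected costs, formula_2 is the best decision without monitoring and formula_1 the expected utility when the repair decision is made after the imperfect SHM indication.

```python
# Illustrative value-of-information calculation (all numbers are assumptions).
p = 0.05           # prior probability that the structure is damaged
c_repair = 1.0     # cost of repairing
c_fail = 30.0      # expected consequence of leaving damage unrepaired
hit = 0.9          # P(SHM indicates damage | damaged)
false_alarm = 0.1  # P(SHM indicates damage | undamaged)
c_shm = 0.05       # cost of the monitoring system

# Without SHM: pick the better of "repair" and "do nothing" under the prior.
u0 = max(-c_repair, -p * c_fail)

# With SHM: update the damage probability for each indication, then decide.
p_ind = p*hit + (1 - p)*false_alarm
p_dmg_ind = p*hit / p_ind                # P(damaged | indication)
p_dmg_no = p*(1 - hit) / (1 - p_ind)     # P(damaged | no indication)
u1 = (p_ind * max(-c_repair, -p_dmg_ind * c_fail)
      + (1 - p_ind) * max(-c_repair, -p_dmg_no * c_fail)) - c_shm

print("Value of SHM information:", u1 - u0)  # positive here, so SHM pays off
```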
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "U_1" }, { "math_id": 2, "text": "U_0" }, { "math_id": 3, "text": "V=U_1-U_0" } ]
https://en.wikipedia.org/wiki?curid=62119898
6212
Context-sensitive language
In formal language theory, a context-sensitive language is a language that can be defined by a context-sensitive grammar (and equivalently by a noncontracting grammar). Context-sensitive is known as type-1 in the Chomsky hierarchy of formal languages. Computational properties. Computationally, a context-sensitive language is equivalent to a linear bounded nondeterministic Turing machine, also called a linear bounded automaton. That is a non-deterministic Turing machine with a tape of only formula_0 cells, where formula_1 is the size of the input and formula_2 is a constant associated with the machine. This means that every formal language that can be decided by such a machine is a context-sensitive language, and every context-sensitive language can be decided by such a machine. This set of languages is also known as NLINSPACE or NSPACE("O"("n")), because they can be accepted using linear space on a non-deterministic Turing machine. The class LINSPACE (or DSPACE("O"("n"))) is defined the same, except using a deterministic Turing machine. Clearly LINSPACE is a subset of NLINSPACE, but it is not known whether LINSPACE = NLINSPACE. Examples. One of the simplest context-sensitive but not context-free languages is formula_3: the language of all strings consisting of n occurrences of the symbol "a", then n "b"s, then n "c"s (abc, aabbcc, aaabbbccc, etc.). A superset of this language, called the Bach language, is defined as the set of all strings where "a", "b" and "c" (or any other set of three symbols) occurs equally often (aabccb, baabcaccb, etc.) and is also context-sensitive. L can be shown to be a context-sensitive language by constructing a linear bounded automaton which accepts L. The language can easily be shown to be neither regular nor context-free by applying the respective pumping lemmas for each of the language classes to L. Similarly: formula_4 is another context-sensitive language; the corresponding context-sensitive grammar can be easily projected starting with two context-free grammars generating sentential forms in the formats formula_5 and formula_6 and then supplementing them with a permutation production like formula_7, a new starting symbol and standard syntactic sugar. formula_8 is another context-sensitive language (the "3" in the name of this language is intended to mean a ternary alphabet); that is, the "product" operation defines a context-sensitive language (but the "sum" defines only a context-free language as the grammar formula_9 and formula_10 shows). Because of the commutative property of the product, the most intuitive grammar for formula_11 is ambiguous. This problem can be avoided considering a somehow more restrictive definition of the language, e.g. formula_12. This can be specialized to formula_13 and, from this, to formula_14, formula_15, etc. formula_16 is a context-sensitive language. The corresponding context-sensitive grammar can be obtained as a generalization of the context-sensitive grammars for formula_17, formula_18, etc. formula_19 is a context-sensitive language. formula_20 is a context-sensitive language (the "2" in the name of this language is intended to mean a binary alphabet). This was proved by Hartmanis using pumping lemmas for regular and context-free languages over a binary alphabet and, after that, sketching a linear bounded multitape automaton accepting formula_21. formula_22 is a context-sensitive language (the "1" in the name of this language is intended to mean a unary alphabet). This was credited by A. 
Salomaa to Matti Soittola by means of a linear bounded automaton over a unary alphabet (pages 213-214, exercise 6.8) and also to Marti Penttonen by means of a context-sensitive grammar also over a unary alphabet (See: Formal Languages by A. Salomaa, page 14, Example 2.5). An example of recursive language that is not context-sensitive is any recursive language whose decision is an EXPSPACE-hard problem, say, the set of pairs of equivalent regular expressions with exponentiation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
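The two example languages above lend themselves to a direct membership check. The following Python sketch is an illustration only: it tests strings against the language of all a^n b^n c^n and against the Bach language by simple counting, and is not a parser for general context-sensitive grammars; the function names are invented for this example.

def is_anbncn(s):
    # Direct membership test for { a^n b^n c^n : n >= 1 }.
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

def is_bach(s):
    # Membership test for the Bach language: a non-empty string over {a, b, c}
    # in which the three symbols occur equally often, in any order.
    return (len(s) > 0 and set(s) <= {"a", "b", "c"}
            and s.count("a") == s.count("b") == s.count("c"))

assert is_anbncn("aabbcc") and not is_anbncn("aabbc")
assert is_bach("baabcaccb") and not is_bach("abca")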
[ { "math_id": 0, "text": "kn" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "L = \\{ a^nb^nc^n : n \\ge 1 \\}" }, { "math_id": 4, "text": "L_\\textit{Cross} = \\{ a^mb^nc^{m}d^{n} : m \\ge 1, n \\ge 1 \\}" }, { "math_id": 5, "text": "a^mC^m" }, { "math_id": 6, "text": "B^nd^n" }, { "math_id": 7, "text": "CB\\rightarrow BC" }, { "math_id": 8, "text": "L_{MUL3} = \\{ a^mb^nc^{mn} : m \\ge 1, n \\ge 1 \\}" }, { "math_id": 9, "text": "S\\rightarrow aSc|R" }, { "math_id": 10, "text": "R\\rightarrow bRc|bc" }, { "math_id": 11, "text": "L_\\textit{MUL3}" }, { "math_id": 12, "text": "L_\\textit{ORDMUL3} = \\{ a^mb^nc^{mn} : 1 < m < n \\}" }, { "math_id": 13, "text": "L_\\textit{MUL1} = \\{ a^{mn} : m > 1, n > 1 \\}" }, { "math_id": 14, "text": "L_{m^2} = \\{ a^{m^2} : m > 1 \\}" }, { "math_id": 15, "text": "L_{m^3} = \\{ a^{m^3} : m > 1 \\}" }, { "math_id": 16, "text": "L_{REP} = \\{ w^{|w|} : w \\in \\Sigma^* \\}" }, { "math_id": 17, "text": "L_\\textit{Square} = \\{ w^2 : w \\in \\Sigma^* \\}" }, { "math_id": 18, "text": "L_\\textit{Cube} = \\{ w^3 : w \\in \\Sigma^* \\}" }, { "math_id": 19, "text": "L_\\textit{EXP} = \\{ a^{2^n} : n \\ge 1 \\}" }, { "math_id": 20, "text": "L_\\textit{PRIMES2} = \\{ w : |w| \\mbox { is prime } \\}" }, { "math_id": 21, "text": "L_{PRIMES2}" }, { "math_id": 22, "text": "L_\\textit{PRIMES1} = \\{ a^p : p \\mbox { is prime } \\}" } ]
https://en.wikipedia.org/wiki?curid=6212
621215
Cointerpretability
In mathematical logic, cointerpretability is a binary relation on formal theories: a formal theory "T" is cointerpretable in another such theory "S", when the language of "S" can be translated into the language of "T" in such a way that "S" proves every formula whose translation is a theorem of "T". The "translation" here is required to preserve the logical structure of formulas. This concept, in a sense dual to interpretability, was introduced by Japaridze, who also proved that, for theories of Peano arithmetic and any stronger theories with effective axiomatizations, cointerpretability is equivalent to formula_0-conservativity.
[ { "math_id": 0, "text": "\\Sigma_1" } ]
https://en.wikipedia.org/wiki?curid=621215
62122606
Spiral similarity
Spiral similarity is a plane transformation in mathematics composed of a rotation and a dilation. It is used widely in Euclidean geometry to facilitate the proofs of many theorems and other results in geometry, especially in mathematical competitions and olympiads. Though the origin of this idea is not known, it was documented in 1967 by Coxeter in his book "Geometry Revisited", and in 1969 - using the term "dilative rotation" - in his book "Introduction to Geometry". The following theorem is important for the Euclidean plane: Any two directly similar figures are related either by a translation or by a spiral similarity. "(Hint: Directly similar figures are similar and have the same orientation)" Definition. A spiral similarity formula_0 is composed of a rotation of the plane followed by a dilation about a center formula_1 with coordinates formula_2 in the plane. Expressing the rotation by a linear transformation formula_3 and the dilation as multiplying by a scale factor formula_4, a point formula_5 gets mapped to formula_6 On the complex plane, any spiral similarity can be expressed in the form formula_7, where formula_8 is a complex number. The magnitude formula_9 is the dilation factor of the spiral similarity, and the argument formula_10 is the angle of rotation. Properties. Two circles. Let T be a spiral similarity mapping circle k to k' with k formula_11 k' = {C, D} and fixed point C. Then for each point P formula_12 k the points P, T(P) = P' and D are collinear. "Remark:" This property is the basis for the construction of the center of a spiral similarity for two line segments. Proof: formula_13, as rotation and dilation preserve angles. formula_14, as if the radius formula_15 intersects the chord formula_16, then formula_17 doesn't meet formula_18, and if formula_15 doesn't intersect formula_16, then formula_17 intersects formula_18, so one of these angles is formula_19 and the other is formula_20. So P, P' and D are collinear. Center of a spiral similarity for two line segments. Through a dilation, a rotation, and a translation, any line segment can be mapped into any other through a series of plane transformations. We can find the center of the spiral similarity through the following construction: draw lines formula_21 and formula_22, and let formula_23 be their point of intersection; draw the circumcircles of triangles formula_24 and formula_25; the two circumcircles meet again at a second point formula_26, and this point formula_27 is the center of the spiral similarity taking formula_28 to formula_29 Proof: Note that formula_30 and formula_31 are cyclic quadrilaterals. Thus, formula_32. Similarly, formula_33. Therefore, by AA similarity, triangles formula_34 and formula_35 are similar. Thus, formula_36 so a rotation angle mapping formula_37 to formula_38 also maps formula_39 to formula_40. The dilation factor is then just the ratio of side lengths formula_41 to formula_28. Solution with complex numbers. If we express formula_42 and formula_40 as points on the complex plane with corresponding complex numbers formula_43 and formula_4, we can solve for the expression of the spiral similarity which takes formula_37 to formula_39 and formula_38 to formula_40. Note that formula_44 and formula_45, so formula_46. Since formula_47 and formula_48, we plug in to obtain formula_49, from which we obtain formula_50. Pairs of spiral similarities. For any points formula_42 and formula_40, the center of the spiral similarity taking formula_28 to formula_41 is also the center of a spiral similarity taking formula_21 to formula_22. This can be seen through the above construction. If we let formula_27 be the center of the spiral similarity taking formula_28 to formula_41, then formula_51. Therefore, formula_52. Also, formula_53 implies that formula_54. So, by SAS similarity, we see that formula_55.
Thus formula_27 is also the center of the spiral similarity which takes formula_21 to formula_22. Corollaries. Proof of Miquel's Quadrilateral Theorem. Spiral similarity can be used to prove Miquel's Quadrilateral Theorem: given four noncollinear points formula_56 and formula_40, the circumcircles of the four triangles formula_57 and formula_58 intersect at one point, where formula_23 is the intersection of formula_59 and formula_60 and formula_61 is the intersection of formula_62 and formula_63 (see diagram). Let formula_64 be the center of the spiral similarity which takes formula_62 to formula_65. By the above construction, the circumcircles of formula_24 and formula_66 intersect at formula_64 and formula_23. Since formula_64 is also the center of the spiral similarity taking formula_67 to formula_60, by similar reasoning the circumcircles of formula_68 and formula_58 meet at formula_61 and formula_64. Thus, all four circles intersect at formula_64. Example problem. Here is an example problem on the 2018 Japan MO Finals which can be solved using spiral similarity:Given a scalene triangle formula_69, let formula_40 and formula_70 be points on segments formula_62 and formula_71, respectively, so that formula_72. Let formula_73 be the circumcircle of triangle formula_74 and formula_23 the reflection of formula_37 across formula_60. Lines formula_75 and formula_76 meet formula_73 again at formula_27 and formula_77, respectively. Prove that formula_78 and formula_79 intersect on formula_73. Proof: We first prove the following claims: "Claim 1": Quadrilateral formula_80 is cyclic. "Proof:" Since formula_81 is isosceles, we note that formula_82 thus proving that quadrilateral formula_80 is cyclic, as desired. By symmetry, we can prove that quadrilateral formula_83 is cyclic. "Claim 2": formula_84 "Proof:" We have that formula_85 By similar reasoning, formula_86 so by AA similarity, formula_87 as desired. We now note that formula_37 is the spiral center that maps formula_88 to formula_60. Let formula_89 be the intersection of formula_78 and formula_79. By the spiral similarity construction above, the spiral center must be the intersection of the circumcircles of formula_90 and formula_91. However, this point is formula_37, so thus points formula_92 must be concyclic. Hence, formula_89 must lie on formula_73, as desired. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
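The complex-number solution above can be checked numerically. The following Python sketch is illustrative only: the sample points are arbitrary, and the formulas assume the map is a genuine spiral similarity rather than a translation (i.e. a + d differs from b + c, so the denominator in the expression for the center is non-zero).

# Spiral similarity T(x) = x0 + alpha*(x - x0) taking A to C and B to D,
# with A, B, C, D given as complex numbers a, b, c, d.
def spiral_similarity(a, b, c, d):
    alpha = (d - c) / (b - a)               # |alpha| is the dilation factor, arg(alpha) the rotation angle
    x0 = (a * d - b * c) / (a + d - b - c)  # center of the spiral similarity
    return x0, alpha

a, b, c, d = 0 + 0j, 2 + 0j, 1 + 1j, 1 + 3j   # arbitrary illustrative points
x0, alpha = spiral_similarity(a, b, c, d)
T = lambda x: x0 + alpha * (x - x0)
assert abs(T(a) - c) < 1e-12 and abs(T(b) - d) < 1e-12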
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "O" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "T(x)" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "S(p) = d(T(p-c))+c.\n" }, { "math_id": 7, "text": "T(x) = x_0+\\alpha(x-x_0)" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "|\\alpha|" }, { "math_id": 10, "text": "\\text{arg}(\\alpha)" }, { "math_id": 11, "text": "\\cap" }, { "math_id": 12, "text": "\\in" }, { "math_id": 13, "text": "\\angle CMP = \\angle CM'P'" }, { "math_id": 14, "text": "\\angle P'DC + \\angle CDP = 180^{\\circ}" }, { "math_id": 15, "text": "\\overline{MD}" }, { "math_id": 16, "text": "\\overline{CP}" }, { "math_id": 17, "text": "\\overline{M'D}" }, { "math_id": 18, "text": "\\overline{CP'}" }, { "math_id": 19, "text": "\\beta" }, { "math_id": 20, "text": "180^{\\circ} - \\beta" }, { "math_id": 21, "text": "\\overline{AC}" }, { "math_id": 22, "text": "\\overline{BD}" }, { "math_id": 23, "text": "P" }, { "math_id": 24, "text": "\\triangle PAB" }, { "math_id": 25, "text": "\\triangle PCD" }, { "math_id": 26, "text": "X \\neq P" }, { "math_id": 27, "text": "X" }, { "math_id": 28, "text": "\\overline{AB}" }, { "math_id": 29, "text": "\\overline{CD}." }, { "math_id": 30, "text": "ABPX" }, { "math_id": 31, "text": "XPCD" }, { "math_id": 32, "text": "\\angle XAB = 180^{\\circ} - \\angle BPX = \\angle XPD = \\angle XCD" }, { "math_id": 33, "text": "\\angle ABX = \\angle APX = 180^{\\circ} - \\angle XPC = \\angle XDC" }, { "math_id": 34, "text": "XAB" }, { "math_id": 35, "text": "XCD" }, { "math_id": 36, "text": "\\angle AXB = \\angle CXD," }, { "math_id": 37, "text": "A" }, { "math_id": 38, "text": "B" }, { "math_id": 39, "text": "C" }, { "math_id": 40, "text": "D" }, { "math_id": 41, "text": "\\overline{CD}" }, { "math_id": 42, "text": "A, B, C," }, { "math_id": 43, "text": "a, b, c," }, { "math_id": 44, "text": "T(a) = x_0+\\alpha(a-x_0)" }, { "math_id": 45, "text": "T(b) = x_0+\\alpha(b-x_0)" }, { "math_id": 46, "text": "\\frac{T(b)-T(a)}{b-a} = \\alpha" }, { "math_id": 47, "text": "T(a) = c" }, { "math_id": 48, "text": "T(b) = d" }, { "math_id": 49, "text": "\\alpha = \\frac{d-c}{b-a}" }, { "math_id": 50, "text": "x_0 = \\frac{ad-bc}{a+d-b-c}" }, { "math_id": 51, "text": "\\triangle XAB \\sim \\triangle XCD" }, { "math_id": 52, "text": "\\angle AXC = \\angle AXB + \\angle BXC = \\angle CXD + \\angle BXC = \\angle BXD" }, { "math_id": 53, "text": "\\frac{AX}{BX} = \\frac{CX}{DX}" }, { "math_id": 54, "text": "\\frac{AX}{CX} = \\frac{BX}{DX}" }, { "math_id": 55, "text": "\\triangle AXC \\sim \\triangle BXD" }, { "math_id": 56, "text": "A,B,C," }, { "math_id": 57, "text": "\\triangle PAB, \\triangle PDC, \\triangle QAD," }, { "math_id": 58, "text": "\\triangle QBC" }, { "math_id": 59, "text": "AD" }, { "math_id": 60, "text": "BC" }, { "math_id": 61, "text": "Q" }, { "math_id": 62, "text": "AB" }, { "math_id": 63, "text": "CD" }, { "math_id": 64, "text": "M" }, { "math_id": 65, "text": "DC" }, { "math_id": 66, "text": "\\triangle PDC" }, { "math_id": 67, "text": "DA" }, { "math_id": 68, "text": "\\triangle QAD" }, { "math_id": 69, "text": "ABC" }, { "math_id": 70, "text": "E" }, { "math_id": 71, "text": "AC" }, { "math_id": 72, "text": "CA = CD, BA = BE" }, { "math_id": 73, "text": "\\omega" }, { "math_id": 74, "text": "ADE" }, { "math_id": 75, "text": "PD" }, { "math_id": 76, "text": "PE" }, { "math_id": 77, "text": "Y" }, { "math_id": 78, "text": "BX" }, { "math_id": 79, "text": "CY" }, { 
"math_id": 80, "text": "PBEC" }, { "math_id": 81, "text": "\\triangle BAE" }, { "math_id": 82, "text": "\\angle BPC = \\angle BAC = 180^{\\circ} - \\angle BEC," }, { "math_id": 83, "text": "PBDC" }, { "math_id": 84, "text": "\\triangle AXY \\sim \\triangle ABC." }, { "math_id": 85, "text": "\\angle AXY = 180^{\\circ} - \\angle AEY = \\angle YEC = \\angle PEC = \\angle PBC = \\angle ABC." }, { "math_id": 86, "text": "\\angle AYX = \\angle ACB," }, { "math_id": 87, "text": "\\triangle AXY \\sim \\triangle ABC," }, { "math_id": 88, "text": "XY" }, { "math_id": 89, "text": "F" }, { "math_id": 90, "text": "\\triangle FXY" }, { "math_id": 91, "text": "\\triangle FBC" }, { "math_id": 92, "text": "A, F, X, Y" } ]
https://en.wikipedia.org/wiki?curid=62122606
621230
Tolerant sequence
In mathematical logic, a tolerant sequence is a sequence formula_0, ..., formula_1 of formal theories such that there are consistent extensions formula_2, ..., formula_3 of these theories with each formula_4 interpretable in formula_5. Tolerance naturally generalizes from sequences of theories to trees of theories. Weak interpretability can be shown to be a special, binary case of tolerance. This concept, together with its dual concept of cotolerance, was introduced by Japaridze in 1992, who also proved that, for Peano arithmetic and any stronger theories with effective axiomatizations, tolerance is equivalent to formula_6-consistency.
[ { "math_id": 0, "text": "T_1" }, { "math_id": 1, "text": "T_n" }, { "math_id": 2, "text": "S_1" }, { "math_id": 3, "text": "S_n" }, { "math_id": 4, "text": "S_{i+1}" }, { "math_id": 5, "text": "S_i" }, { "math_id": 6, "text": "\\Pi_1" } ]
https://en.wikipedia.org/wiki?curid=621230
6212759
Common pilot channel
CPICH stands for "Common Pilot Channel" in UMTS and some other CDMA communications systems. In WCDMA FDD cellular systems, CPICH is a downlink channel broadcast by Node Bs with constant power and of a known bit sequence. Its power is usually between 5% and 15% of the total Node B transmit power. Commonly, the CPICH power is 10% of the typical total transmit power of 43 dBm. The Primary Common Pilot Channel is used by the UEs to first complete identification of the Primary Scrambling Code used for scrambling Primary Common Control Physical Channel (P-CCPCH) transmissions from the Node B. Later, CPICH channels allow phase and power estimations to be made, as well as aiding discovery of other radio paths. There is one primary CPICH (P-CPICH) for each Cell, which is transmitted using spreading code 0 with a spreading factor of 256, notationally written as Cch,256,0. Optionally a Node B may broadcast one or more secondary common pilot channels (S-CPICH), which use arbitrarily chosen codes of spreading factor 256, written as Cch,256,n where formula_0. The CPICH contains 20 bits of data, which are either all zeros, or, in the case that Space–Time Transmit Diversity (STTD) is employed, a pattern of alternating 1's and 0's for transmissions on the Node B's second antenna. The first antenna of a base station always transmits all zeros for CPICH. A UE searching for a WCDMA Node B will first use the primary and secondary synchronization channels (P-SCH and S-SCH respectively) to determine the slot and frame timing of a candidate P-CCPCH, whether STTD is in use, as well as identifying which one of 64 code groups is being used by the cell. Crucially, this allows the UE to reduce the set of possible Primary Scrambling Codes being used for P-CPICH to only 8 from 512 choices. At this point the correct PSC can be determined through the use of a matched filter, configured with the fixed channelisation code Cch,256,0, looking for the known CPICH bit sequence, while trying each of the possible 8 PSCs in turn. The results of each run of the matched filter can be compared, the correct PSC being identified by the greatest correlation result. Once the scrambling code for a CPICH is known, the channel can be used for measurements of signal quality, usually with RSCP and Ec/No. Timing and phase estimations can also be made, providing a reference that helps to improve reliability when decoding other channels from the same Node B. Pilot signals are not a requirement of CDMA; however, they do make the UE's receiver simpler and improve the reliability of the system.
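As a small numerical illustration of the power figures quoted above (a sketch only; the 10% share and the 43 dBm total are the typical values mentioned in the text):

import math

total_dbm = 43.0                                  # typical total Node B transmit power
total_watts = 10 ** (total_dbm / 10) / 1000       # 43 dBm is roughly 20 W
cpich_watts = 0.10 * total_watts                  # CPICH at 10% of the total power
cpich_dbm = 10 * math.log10(cpich_watts * 1000)
print(round(total_watts, 1), round(cpich_watts, 1), round(cpich_dbm, 1))
# A 10% power share sits 10 dB below the total, i.e. about 33 dBm.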
[ { "math_id": 0, "text": "0<n<256" } ]
https://en.wikipedia.org/wiki?curid=6212759
621294
RANDU
Pseudorandom number generator RANDU is a linear congruential pseudorandom number generator (LCG) of the Park–Miller type, which was used primarily in the 1960s and 1970s. It is defined by the recurrence formula_0 with the initial seed number formula_1 as an odd number. It generates pseudorandom integers formula_2 which are uniformly distributed in the interval [1, 2^31 − 1], but in practical applications are often mapped into pseudorandom rationals formula_3 in the interval (0, 1), by the formula formula_4 IBM's RANDU is widely considered to be one of the most ill-conceived random number generators ever designed, and was described as "truly horrible" by Donald Knuth. It fails the spectral test badly for dimensions greater than 2, as shown below. The reason for choosing these particular values for the multiplier and modulus had been that with a 32-bit-integer word size, the arithmetic of mod 2^31 and formula_5 calculations could be done quickly, using bitwise operators in hardware, but the values were chosen for computational convenience, not statistical quality. Problems with multiplier and modulus. For any linear congruential generator with modulus "m" used to generate points in "n"-dimensional space, the points fall in no more than formula_6 parallel hyperplanes. This indicates that low-modulus LCGs are unsuited to high-dimensional Monte Carlo simulation. For "m" = 2^31 and "n" = 3, an LCG could have up to 2344 planes, the theoretical maximum. A much tighter upper bound is proved in the same Marsaglia paper to be the sum of the absolute values of all the coefficients of the hyperplanes in standard form. That is, if the hyperplanes are of the form "Ax"1 + "Bx"2 + "Cx"3 = some integer such as 0, 1, 2, etc., then the maximum number of planes is |"A"| + |"B"| + |"C"|. Now we examine the values of multiplier 65539 and modulus 2^31 chosen for RANDU. Consider the following calculation where every term should be taken mod 2^31. Start by writing the recursive relation as formula_7 which after expanding the quadratic factor becomes formula_8 (because 2^32 mod 2^31 = 0) and allows us to show the correlation between three points as formula_9 Summing the absolute values of the coefficients, we get no more than 16 planes in 3D, becoming only 15 planes on closer examination, as shown in the diagram above. Even by the standards of LCGs, this shows that RANDU is terrible: using RANDU for sampling a unit cube will only sample 15 parallel planes, not even close to the upper limit of formula_10 planes. As a result of the wide use of RANDU in the early 1970s, many results from that time are seen as suspicious. This misbehavior was already detected in 1963 on a 36-bit computer, and carefully reimplemented on the 32-bit IBM System/360. It was believed to have been widely purged by the early 1990s but there were still FORTRAN compilers using it as late as 1999. Sample output. The start of RANDU's output period for the initial seed formula_11 is 1, 65539, 393225, 1769499, 7077969, 26542323, … (sequence in the OEIS). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
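The recurrence and the three-term relation derived above are easy to reproduce. The following Python sketch is illustrative only; it regenerates the sample output listed above from the seed 1 and checks that consecutive triples satisfy the relation behind the 15-plane structure.

def randu(seed=1):
    # RANDU: V_{j+1} = 65539 * V_j mod 2^31, with an odd seed.
    v = seed
    while True:
        v = (65539 * v) % (2 ** 31)
        yield v

gen = randu(1)
xs = [1] + [next(gen) for _ in range(9)]
print(xs[:6])   # [1, 65539, 393225, 1769499, 7077969, 26542323]

# Each triple of consecutive outputs satisfies x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31),
# which is why points built from consecutive triples fall on only 15 planes in the unit cube.
for k in range(len(xs) - 2):
    assert (xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % (2 ** 31) == 0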
[ { "math_id": 0, "text": "V_{j+1} = 65539 \\cdot V_j \\bmod 2^{31}" }, { "math_id": 1, "text": "V_0" }, { "math_id": 2, "text": "V_j" }, { "math_id": 3, "text": "X_j" }, { "math_id": 4, "text": "X_j = \\frac{V_j}{2^{31}}." }, { "math_id": 5, "text": "65539 = 2^{16} + 3" }, { "math_id": 6, "text": "(n! \\times m)^{1/n}" }, { "math_id": 7, "text": "x_{k+2} = (2^{16} + 3) x_{k+1} = (2^{16} + 3)^2 x_k," }, { "math_id": 8, "text": "x_{k+2} = (2^{32} + 6 \\cdot 2^{16} + 9) x_k =[6 \\cdot (2^{16} + 3) - 9] x_{k}" }, { "math_id": 9, "text": "x_{k+2} = 6x_{k+1} - 9x_{k}." }, { "math_id": 10, "text": "\\Big\\lfloor\\big(2^{31} \\times 3!\\big)^{1/3}\\Big\\rfloor = 2344" }, { "math_id": 11, "text": "V_0 = 1" } ]
https://en.wikipedia.org/wiki?curid=621294
62138896
Halperin conjecture
Mathematical conjecture In rational homotopy theory, the Halperin conjecture concerns the Serre spectral sequence of certain fibrations. It is named after the Canadian mathematician Stephen Halperin. Statement. Suppose that formula_0 is a fibration of simply connected spaces such that formula_1 is rationally elliptic and formula_2 (i.e., formula_1 has non-zero Euler characteristic), then the Serre spectral sequence associated to the fibration collapses at the formula_3 page. Status. As of 2019, Halperin's conjecture is still open. Gregory Lupton has reformulated the conjecture in terms of formality relations. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " F \\to E \\to B " }, { "math_id": 1, "text": " F " }, { "math_id": 2, "text": " \\chi(F) \\neq 0 " }, { "math_id": 3, "text": " E_2 " } ]
https://en.wikipedia.org/wiki?curid=62138896
62141894
Truthful cake-cutting
Study of fair cake-cutting with true valuations Truthful cake-cutting is the study of algorithms for fair cake-cutting that are also truthful mechanisms, i.e., they incentivize the participants to reveal their true valuations to the various parts of the cake. The classic divide and choose procedure for cake-cutting is not truthful: if the cutter knows the chooser's preferences, they can get much more than 1/2 by acting strategically. For example, suppose the cutter values a piece by its size while the chooser values a piece by the amount of chocolate in it. So the cutter can cut the cake into two pieces with almost the same amount of chocolate, such that the smaller piece has slightly more chocolate. Then, the chooser will take the smaller piece and the cutter will win the larger piece, which may be worth much more than 1/2 (depending on how the chocolate is distributed). Randomized mechanisms. There is a trivial randomized truthful mechanism for fair cake-cutting: select a single agent uniformly at random, and give him/her the entire cake. This mechanism is trivially truthful because it asks no questions. Moreover, it is fair in expectation: the expected value of each partner is exactly 1/"n". However, the resulting allocation is not fair. The challenge is to develop truthful mechanisms that are fair ex-post and not just ex-ante. Several such mechanisms have been developed. Exact division mechanism. An "exact division" (aka "consensus division") is a partition of the cake into "n" pieces such that each agent values each piece at exactly 1/"n". The existence of such a division is a corollary of the Dubins–Spanier convexity theorem. Moreover, there exists such a division with at most formula_0 cuts; this is a corollary of the Stromquist–Woodall theorem and the necklace splitting theorem. In general, an exact division cannot be found by a finite algorithm. However, it can be found in some special cases, for example when all agents have piecewise-linear valuations. Suppose we have a non-truthful algorithm (or oracle) for finding an exact division. It can be used to construct a "randomized" mechanism that is truthful in expectation. The randomized mechanism is a direct-revelation mechanism - it starts by asking all agents to reveal their entire value-measures: Here, the expected value of each agent is always 1/"n" regardless of the reported value function. Hence, the mechanism is truthful – no agent can gain anything from lying. Moreover, a truthful partner is guaranteed a value of exactly 1/"n" with probability 1 (not only in expectation). Hence the partners have an incentive to reveal their true value functions. Super-proportional mechanism. A super-proportional division is a cake-division in which each agent receives strictly more than 1/"n" by their own value measures. Such a division is known to exist if and only if there are at least two agents that have different valuations to at least one piece of the cake. Any "deterministic" mechanism that always returns a proportional division, and always returns a super-proportional division when it exists, cannot be truthful. Mossel and Tamuz present a super-proportional "randomized" mechanism that is truthful in expectation: The distribution "D" in step 1 should be chosen such that, regardless of the agents' valuations, there is a positive probability that a super-proportional division be selected iff it exists. 
Then, in step 2 it is optimal for each agent to report the true value: reporting a lower value either has no effect or might cause the agent's value to drop from super-proportional to just proportional (in step 4); reporting a higher value either has no effect or might cause the agent's value to drop from proportional to less than 1/"n" (in step 3). Approximate exact division using queries. Suppose that, rather than directly revealing their valuations, the agents reveal their values indirectly by answering "mark" and "eval" queries (as in the Robertson-Webb model). Branzei and Miltersen show that the exact-division mechanism can be "discretized" and executed in the query model. This yields, for any formula_1, a "randomized" query-based protocol, that asks at most formula_2 queries, is truthful in expectation, and allocates each agent a piece of value between formula_3 and formula_4, by the valuations of all agents. On the other hand, they prove that, in any "deterministic" truthful query-based protocol, if all agents value all parts of the cake positively, there is at least one agent who gets the empty piece. This implies that, if there are only two agents, then at least one agent is a "dictator" and gets the entire cake. Obviously, any such mechanism cannot be envy-free. Randomized mechanism for piecewise-constant valuations. Suppose all agents have piecewise-constant valuations. This means that, for each agent, the cake is partitioned into finitely many subsets, and the agent's value density in each subset is constant. For this case, Aziz and Ye present a randomized algorithm that is more economically-efficient: Constrained Serial Dictatorship is truthful in expectation, robust proportional, and satisfies a property called "unanimity": if each agent's most preferred 1/"n" length of the cake is disjoint from other agents, then each agent gets their most preferred 1/"n" length of the cake. This is a weak form of efficiency that is not satisfied by the mechanisms based on exact division. When there are only two agents, it is also polynomial-time and robust envy-free. Deterministic mechanisms: piecewise-constant valuations. For "deterministic" mechanisms, the results are mostly negative, even when all agents have piecewise-constant valuations. Kurokawa, Lai and Procaccia prove that there is no deterministic, truthful and envy-free mechanism that requires a bounded number of Robertson-Webb queries. Aziz and Ye prove that there is no deterministic truthful mechanism that satisfies either one of the following properties: Menon and Larson introduce the notion of "ε-truthfulness", which means that no agent gains more than a fraction "ε" from misreporting, where "ε" is a positive constant independent of the agents' valuations. They prove that no deterministic mechanism satisfies either one of the following properties: They present a minor modification to the Even–Paz protocol and prove that it is "ε"-truthful with "ε" = 1 - 3/(2"n") when "n" is even, and "ε" = 1 - 3/(2"n") + 1/"n"2 when "n" is odd. Bei, Chen, Huzhang, Tao and Wu prove that there is no deterministic, truthful and envy-free mechanism, even in the direct-revelation model, that satisfies either one of the following additional properties: Note that these impossibility results hold with or without free disposal. 
On the positive side, in a replicate economy, where each agent is replicated "k" times, there are envy-free mechanisms in which truth-telling is a Nash equilibrium: Tao improves the previous impossibility result by Bei, Chen, Huzhang, Tao and Wu and shows that there is no deterministic, truthful and proportional mechanism, even in the direct-revelation model, and even when all of the followings hold: It is open whether this impossibility result extends to three or more agents. On the positive side, Tao presents two algorithms that attain a weaker notion called "proportional risk-averse truthfulness" (PRAT). It means that, in any profitable deviation for agent "i", there exist valuations of the other agents, for which "i" gets less than his proportional share. This property is stronger than "risk-averse truthfulness", which means that, in any profitable deviation for i, there exist valuations of the other agents, for which "i" gets less than his value in a truthful reporting. He presents an algorithm that is PRAT and envy-free, and an algorithm that is PRAT, proportional and connected. Piecewise-uniform valuations. Suppose all agents have "piecewise-uniform valuations". This means that, for each agent, there is a subset of the cake that is "desirable" for the agent, and the agent's value for each piece is just the amount of desirable cake that it contains. For example, suppose some parts of the cake are covered by a uniform layer of chocolate, while other parts are not. An agent who values each piece only by the amount of chocolate it contains has a piecewise-uniform valuation. This is a special case of piecewise-constant valuations. Several truthful algorithms have been developed for this special case. Chen, Lai, Parkes and Procaccia present a direct-revelation mechanism that is "deterministic", proportional, envy-free, Pareto-optimal, and polynomial-time. It works for any number of agents. Here is an illustration of the CLPP mechanism for two agents (where the cake is an interval). Now, if an agent says that he wants an interval that he actually does not want, then he may get more useless cake in step 3 and less useful cake in step 4. If he says that he does not want an interval that he actually wants, then he gets less useful cake in step 3 and more useful cake in step 4, however, the amount given in step 4 is shared with the other agent, so all in all, the lying agent is at a loss. The mechanism can be generalized to any number of agents. The CLPP mechanism relies on the free disposal assumption, i.e., the ability to discard pieces that are not desired by any agent."Note": Aziz and Ye presented two mechanisms that extend the CLPP mechanism to piecewise-constant valuations - Constrained Cake Eating Algorithm and Market Equilibrium Algorithm. However, both these extensions are no longer truthful when the valuations are not piecewise-uniform. Maya and Nisan show that the CLPP mechanism is unique in the following sense. Consider the special case of "two" agents with piecewise-uniform valuations, where the cake is [0,1], Alice wants only the subinterval [0,"a"] for some "a"&lt;1, and Bob desires only the subinterval [1-"b",1] for some "b"&lt;1. Consider only "non-wasteful" mechanisms - mechanisms that allocate each piece desired by at least one player to a player who wants it. Each such mechanism must give Alice a subset [0,"c"] for some "c"&lt;1 and Bob a subset [1-"d",1] for some "d"&lt;1. 
In this model: They also show that, even for 2 agents, any truthful mechanism achieves at most 0.93 of the optimal social welfare. Li, Zhang and Zhang show that the CLPP mechanism works well even when there are externalities (i.e., some agents derive some benefit from the value given to others), as long as the externalities are sufficiently small. On the other hand, if the externalities (either positive or negative) are large, no truthful non-wasteful and position independent mechanism exists. Alijani, Farhadi, Ghodsi, Seddighin and Tajik present several mechanisms for special cases of piecewise-uniform valuations: Bei, Huzhang and Suksompong present a mechanism for two agents with piecewise-uniform valuations, that has the same properties of CLPP (truthful, deterministic, proportional, envy-free, Pareto-optimal and runs in polynomial time), but guarantees that the "entire" cake is allocated: The BHS mechanism works both for cake-cutting and for chore division (where the agents' valuations are negative). Note that BHS does not satisfy some natural desirable properties: This is not a problem with the specific mechanism: it is provably impossible to have a truthful and envy-free mechanism that allocates the entire cake and guarantees any of these three properties, even for two agents with piecewise-uniform valuations. The BHS mechanism was extended to any number of agents, but only for a special case of piecewise-uniform valuations, in which each agent desires only a single interval of the form [0, "xi"]. Ianovsky proves that no truthful mechanism can attain a utilitarian-optimal cake-cutting, even when all agents have piecewise-uniform valuations. Moreover, no truthful mechanism can attain an allocation with utilitarian welfare at least as large as any other mechanism. However, there is a simple truthful mechanism (denoted Lex Order) that is "non-wasteful": give to agent 1 all pieces that he likes; then, give to agent 2 all pieces that he likes and were not yet given to agent 1; etc. A variant of this mechanism is the Length Game, in which the agents are renamed by the total length of their desired intervals, such that the agent with the shortest interval is called 1, the agent with the next-shortest interval is called 2, etc. This is not a truthful mechanism, however: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
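As an illustration of the exact-division-based randomized mechanism described earlier, the following Python sketch assumes an exact-division oracle has already produced n pieces that every agent values at exactly 1/n under the reported valuations; the data layout and function name are invented for this sketch, and only the final random assignment step is shown.

import random

def random_assignment(values):
    # values[i][j] = agent i's reported value for piece j; with an exact division
    # every entry equals 1/n, which is what makes truthful reporting safe.
    n = len(values)
    for row in values:
        assert all(abs(v - 1.0 / n) < 1e-9 for v in row)
    pieces = list(range(n))
    random.shuffle(pieces)    # assign the n pieces by a uniformly random permutation
    return {agent: piece for agent, piece in enumerate(pieces)}

# A truthful agent values every piece at exactly 1/n, so the random assignment is
# worth exactly 1/n to them with certainty; the expected value is 1/n for any report.
print(random_assignment([[0.25] * 4 for _ in range(4)]))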
[ { "math_id": 0, "text": "n(n-1)^2" }, { "math_id": 1, "text": "\\epsilon>0" }, { "math_id": 2, "text": "O(n^2/\\epsilon)" }, { "math_id": 3, "text": "1/n-\\epsilon" }, { "math_id": 4, "text": "1/n+\\epsilon" } ]
https://en.wikipedia.org/wiki?curid=62141894
62143395
Proverbs 30
Book of Proverbs, chapter 30 Proverbs 30 is the 30th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections: the heading in Proverbs 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter first records "the sayings of Agur", followed by a collection of epigrams and aphorisms. Text. Hebrew. The following table shows the Hebrew text of Proverbs 30 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain). Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. Michael Fox,an American biblical scholar, divides this chapter into sections: Words of Agur (30:1–9). This collection is ascribed to an unknown non-Israelite sage (cf. also ). Fox suggests that it could have been appended to Proverbs because of its valuable cautionary comments and the exaltation of the Torah. The closeness 'in word and spirit' to Psalm 73 is noted as Agur, like the psalmist, combines confession of ignorance with a profession of faith and exultation in the insight that comes from God alone, while urging people to turn directly to God as a safeguard against temptation. Aberdeen theologian Kenneth Aitken notes that Agur's sayings may not extend beyond verse 14, as the first 14 verses are separate from verses 15 onwards in the Septuagint, but also comments that "opinion is divided on whether they end before verse 14" (possible at verses 4, 6, or 9). The editors of the New American Bible, Revised Edition, suggest that the "original literary unit" probably consisted of verses 1 to 6. "The words of Agur the son of Jakeh, the oracle." "The man declares to Ithiel," "to Ithiel and Ukal:" "Surely I am more brutish than any man, and have not the understanding of a man." "Who has ascended up into heaven, or descended?" " Who has gathered the wind in his fists?" "Who has bound the waters in a garment?" "Who has established all the ends of the earth?" "What is His name, and what is the name of His son," "if you know?" Verse 4. Like those in Job 38–41, these rhetorical questions emphasize "the inscrutability of God's ways". Verses 5–6. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; The editors of the New American Bible, Revised Edition, suggest that the original Agur text probably ended with these verses, because the first six verses reflect a single contrast between human fragility (and ignorance) and divine power (and knowledge). Epigrams and aphorisms (30:10–33). This part contains various epigrams and three short aphorisms in the midst. Most of the epigrams (similar to ) take the form of lists. Epigrams i and vii contain unnumbered lists whose items are grouped by theme and anaphora (each line starts with the same word). Epigram v is a single-number list with four items. 
Epigrams ii, iii, iv, and vi are numerical proverbs, in the form "Three things … and four". The final item in the series is usually the climax and focal point. Verse 14. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There is a generation, whose teeth are as swords, and their jaw teeth as knives, to devour the poor from off the earth, and the needy from among men. Verse 15. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Verse 16. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Verse 31. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A greyhound; an he goat also; and a king, against whom there is no rising up. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62143395
62146673
Glauber dynamics
In statistical physics, Glauber dynamics is a way to simulate the Ising model (a model of magnetism) on a computer. The algorithm. In the Ising model, we have, say, N particles that can spin up (+1) or down (-1). Say the particles are on a 2D grid. We label each with an x and y coordinate. Glauber's algorithm becomes: (1) choose a spin formula_0 at random; (2) sum its four neighboring spins, formula_1; (3) compute the change in energy if that spin were flipped, formula_2; (4) flip the spin with probability formula_3, where T is the temperature; (5) repeat. In the Glauber algorithm, if the energy change in flipping a spin is zero, formula_4, then the spin would flip with probability formula_5. Comparison to Metropolis. In Glauber dynamics, however, every spin has an equal chance of being chosen at each time step, regardless of being chosen before. The Metropolis acceptance criterion includes the Boltzmann weight, formula_6, but it always flips a spin in favor of lowering the energy, such that the spin-flip probability is: formula_7. Although both of the acceptance probabilities approximate a step curve and they are almost indistinguishable at very low temperatures, they differ when temperature gets high. For an Ising model on a 2d lattice, the critical temperature is formula_8. In practice, the main difference between the Metropolis–Hastings algorithm and the Glauber algorithm is in choosing the spins and how to flip them (step 4). However, at thermal equilibrium, these two algorithms should give identical results. In general, at equilibrium, any MCMC algorithm should produce the same distribution, as long as the algorithm satisfies ergodicity and detailed balance. In both algorithms, for any change in energy, formula_9, meaning that transition between the states of the system is always possible despite being very unlikely at some temperatures. So, the condition for ergodicity is satisfied for both of the algorithms. Detailed balance, which is a requirement of reversibility, states that if you observe the system for a long enough time, the system goes from state formula_10 to formula_11 with the same frequency as going from formula_12 to formula_10. In equilibrium, the probability of observing the system at state A is given by the Boltzmann weight, formula_13. So, the amount of time the system spends in low energy states is larger than in high energy states and there is more chance that the system is observed in states where it spends more time. This means that when the transition from formula_10 to formula_11 is energetically unfavorable, the system happens to be at formula_10 more frequently, counterbalancing the lower intrinsic probability of transition. Therefore, both the Glauber and Metropolis–Hastings algorithms exhibit detailed balance. History. The algorithm is named after Roy J. Glauber.
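A minimal Python sketch of the single-spin-flip update described above, for a 2D Ising lattice with periodic boundaries; the lattice size, temperature and number of sweeps are arbitrary illustrative choices, not values from the text.

import random, math

def glauber_sweep(grid, T):
    # One sweep of Glauber dynamics on an L x L Ising lattice with periodic boundaries.
    L = len(grid)
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)       # pick a spin at random
        s = (grid[(x + 1) % L][y] + grid[(x - 1) % L][y]
             + grid[x][(y + 1) % L] + grid[x][(y - 1) % L])   # sum of the four neighbors
        dE = 2 * grid[x][y] * s                               # energy change if this spin flips
        if random.random() < 1.0 / (1.0 + math.exp(dE / T)):  # Glauber flip probability
            grid[x][y] *= -1

L, T = 16, 2.0
grid = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(200):
    glauber_sweep(grid, T)
print(sum(map(sum, grid)) / (L * L))   # magnetization per spin after the sweeps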
[ { "math_id": 0, "text": "\\sigma_{x,y}" }, { "math_id": 1, "text": "S = \\sigma_{x+1,y} + \\sigma_{x-1,y} + \\sigma_{x,y+1} + \\sigma_{x,y-1}" }, { "math_id": 2, "text": "\\Delta E = 2\\sigma_{x, y} S" }, { "math_id": 3, "text": "1/(1 + e^{\\Delta E/T})" }, { "math_id": 4, "text": "\\Delta E = 0" }, { "math_id": 5, "text": "p(0, T) = 0.5" }, { "math_id": 6, "text": "e^{-\\Delta E/T}" }, { "math_id": 7, "text": "p(\\Delta E) = \\left\\{\\begin{matrix}\n1, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\Delta E \\leqslant 0\\\\ \ne^{-\\Delta E/T}, \\ \\Delta E > 0\\\\ \n\\end{matrix}\\right." }, { "math_id": 8, "text": "T = 2.27" }, { "math_id": 9, "text": "p(\\Delta E) \\neq 0" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "B" }, { "math_id": 12, "text": "B " }, { "math_id": 13, "text": "e^{-E_A/T}" } ]
https://en.wikipedia.org/wiki?curid=62146673
62153058
Polytopological space
In general topology, a polytopological space consists of a set formula_0 together with a family formula_1 of topologies on formula_0 that is linearly ordered by the inclusion relation where formula_2 is an arbitrary index set. It is usually assumed that the topologies are in non-decreasing order. However some authors prefer the associated closure operators formula_4 to be in non-decreasing order where formula_5 if and only if formula_6 for all formula_7. This requires non-increasing topologies. Formal definitions. An formula_8-topological space formula_9 is a set formula_0 together with a monotone map formula_10 Topformula_11 where formula_12 is a partially ordered set and Topformula_11 is the set of all possible topologies on formula_13 ordered by inclusion. When the partial order formula_14 is a linear order then formula_9 is called a polytopological space. Taking formula_8 to be the ordinal number formula_15 an formula_3-topological space formula_16 can be thought of as a set formula_0 with topologies formula_17 on it. More generally a multitopological space formula_9 is a set formula_0 together with an arbitrary family formula_18 of topologies on it. History. Polytopological spaces were introduced in 2008 by the philosopher Thomas Icard for the purpose of defining a topological model of Japaridze's polymodal logic (GLP). They were later used to generalize variants of Kuratowski's closure-complement problem. For example Taras Banakh et al. proved that under operator composition the formula_3 closure operators and complement operator on an arbitrary formula_3-topological space can together generate at most formula_19 distinct operators where formula_20In 1965 the Finnish logician Jaakko Hintikka found this bound for the case formula_21 and claimed it “does not appear to obey any very simple law as a function of formula_3.” References. &lt;templatestyles src="Reflist/styles.css" /&gt;
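The bound 2·K(n) mentioned above is easy to tabulate. The following Python sketch just evaluates the stated formula; for a single topology (n = 1) it returns 2·K(1) = 14, the classical Kuratowski closure-complement number, and for n = 2 it gives the value attributed to Hintikka.

from math import comb

def K(n):
    # K(n) = sum over 0 <= i, j <= n of C(i+j, i) * C(i+j, j)
    return sum(comb(i + j, i) * comb(i + j, j)
               for i in range(n + 1) for j in range(n + 1))

for n in range(1, 5):
    print(n, K(n), 2 * K(n))
# n = 1: 2*K(1) = 14;  n = 2: 2*K(2) = 126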
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\{\\tau_i\\}_{i\\in I}" }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\{k_i\\}_{i\\in I}" }, { "math_id": 5, "text": "k_i\\leq k_j" }, { "math_id": 6, "text": "k_iA\\subseteq k_jA" }, { "math_id": 7, "text": "A\\subseteq X" }, { "math_id": 8, "text": "L" }, { "math_id": 9, "text": "(X,\\tau)" }, { "math_id": 10, "text": "\\tau:L\\to" }, { "math_id": 11, "text": "(X)" }, { "math_id": 12, "text": "(L,\\leq)" }, { "math_id": 13, "text": "X," }, { "math_id": 14, "text": "\\leq" }, { "math_id": 15, "text": "n=\\{0,1,\\dots,n-1\\}," }, { "math_id": 16, "text": "(X,\\tau_0,\\dots,\\tau_{n-1})" }, { "math_id": 17, "text": "\\tau_0\\subseteq\\dots\\subseteq\\tau_{n-1}" }, { "math_id": 18, "text": "\\tau" }, { "math_id": 19, "text": "2\\cdot K(n)" }, { "math_id": 20, "text": "K(n)=\\sum_{i,j=0}^n\\tbinom{i+j}{i} \\cdot \\tbinom{i+j}{j}." }, { "math_id": 21, "text": "n=2" } ]
https://en.wikipedia.org/wiki?curid=62153058
62157107
Glossary of Lie groups and Lie algebras
This is a glossary for the terminology applied in the mathematical theories of Lie groups and Lie algebras. For the topics in the representation theory of Lie groups and Lie algebras, see Glossary of representation theory. Because of the lack of other options, the glossary also includes some generalizations such as quantum group. &lt;templatestyles src="Hlist/styles.css"/&gt; * !$@* XYZ * See also* References Notations: formula_2 A. &lt;templatestyles src="Glossary/styles.css" /&gt; B. &lt;templatestyles src="Glossary/styles.css" /&gt; C. &lt;templatestyles src="Glossary/styles.css" /&gt; D. &lt;templatestyles src="Glossary/styles.css" /&gt; E. &lt;templatestyles src="Glossary/styles.css" /&gt; F. &lt;templatestyles src="Glossary/styles.css" /&gt; G. &lt;templatestyles src="Glossary/styles.css" /&gt; H. &lt;templatestyles src="Glossary/styles.css" /&gt; I. &lt;templatestyles src="Glossary/styles.css" /&gt; J. &lt;templatestyles src="Glossary/styles.css" /&gt; K. &lt;templatestyles src="Glossary/styles.css" /&gt; L. &lt;templatestyles src="Glossary/styles.css" /&gt; N. &lt;templatestyles src="Glossary/styles.css" /&gt; M. &lt;templatestyles src="Glossary/styles.css" /&gt; P. &lt;templatestyles src="Glossary/styles.css" /&gt; Q. &lt;templatestyles src="Glossary/styles.css" /&gt; R. &lt;templatestyles src="Glossary/styles.css" /&gt; S. &lt;templatestyles src="Glossary/styles.css" /&gt; T. &lt;templatestyles src="Glossary/styles.css" /&gt; U. &lt;templatestyles src="Glossary/styles.css" /&gt; V. &lt;templatestyles src="Glossary/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "( \\cdot, \\cdot )" }, { "math_id": 1, "text": "\\langle \\cdot, \\cdot \\rangle" }, { "math_id": 2, "text": "\\langle \\beta, \\alpha \\rangle = \\frac{(\\beta, \\alpha)}{(\\alpha, \\alpha)} \\, \\forall \\alpha, \\beta \\in E. " } ]
https://en.wikipedia.org/wiki?curid=62157107
62168097
Bivariant theory
In mathematics, a bivariant theory was introduced by Fulton and MacPherson, in order to put a ring structure on the Chow group of a singular variety, the resulting ring being called an operational Chow ring. On a technical level, a bivariant theory is a mix of a homology theory and a cohomology theory. In general, a homology theory is a covariant functor from the category of spaces to the category of abelian groups, while a cohomology theory is a contravariant functor from the category of (nice) spaces to the category of rings. A bivariant theory is a functor both covariant and contravariant; hence, the name “bivariant”. Definition. Unlike a homology theory or a cohomology theory, a bivariant class is defined for a map, not a space. Let formula_0 be a map. For such a map, we can consider the fiber square formula_1 (for example, a blow-up.) Intuitively, the consideration of all the fiber squares like the above can be thought of as an approximation of the map formula_2. Now, a bivariant class of formula_2 is a family of group homomorphisms indexed by the fiber squares: formula_3 satisfying certain compatibility conditions. Operational Chow ring. The basic question was whether there is a cycle map: formula_4 If "X" is smooth, such a map exists since formula_5 is the usual Chow ring of "X". It has been shown that rationally there is no such map with good properties even if "X" is a linear variety, roughly a variety admitting a cell decomposition; the same work notes that Voevodsky's motivic cohomology ring is "probably more useful" than the operational Chow ring for a singular scheme (§ 8 of loc. cit.). References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "f : X \\to Y" }, { "math_id": 1, "text": "\n\\begin{matrix}\nX' & \\to & Y' \\\\\n\\downarrow & & \\downarrow \\\\\nX & \\to & Y\n\\end{matrix}\n" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "A_k Y' \\to A_{k-p} X'" }, { "math_id": 4, "text": "A^*(X) \\to \\operatorname{H}^*(X, \\mathbb{Z})." }, { "math_id": 5, "text": "A^*(X)" } ]
https://en.wikipedia.org/wiki?curid=62168097
62169684
Rolf Rannacher
Rolf Rannacher (born 10 June 1948 in Leipzig) is a German mathematician and a professor of numerical analysis at Heidelberg University. Rannacher studied mathematics and physics at the Goethe University Frankfurt. There he received his doctorate in 1974 with dissertation "Diskrete Störungstheorie für das Punktsystem linearer Operatoren und Sesquilinearformen mit Anwendungen auf Operatoren vom Schrödinger Typ" (Discrete perturbation theory for the point system of linear operators and sesquilinear forms with applications to operators of the Schrödinger type). From 1974 to 1980 he was an assistant to Jens Frehse at the University of Bonn, where he habilitated in 1978 and after habilitation spent a year at the University of Michigan. He was from 1980 to 1983 a professor at the University of Erlangen–Nuremberg and from 1983 to 1988 a professor at Saarland University. Since 1988 he is a professor in Heidelberg. His research focuses on the numerical analysis of the finite element method (FEM) in partial differential equations (PDEs) based on functional analytic methods, for example, error estimation in the formula_0-norm for FEM approximation in elliptic boundary value problems. His research also deals with numerical fluid mechanics, including high-performance computer software development with his long-time collaborator John Haywood. In the 1990s Rannacher dealt with adaptive mesh refinement in solving optimal control problems, often in collaboration with Claes Johnson and Endre Süli. At Heidelberg's "Interdisziplinären Zentrum für Wissenschaftliches Rechnen" (acronym IWR, Interdisciplinary Center for Scientific Computing), Rannacher was, in the early 1990s, one of the pioneers in the development of parallel computer algorithms for transputers. He was an Invited Speaker at the International Congress of Mathematicians (ICM) in Berlin in 1998 and at the ICM in Beijing in 2002. In 2009 he was made an honorary doctor of the University of Erlangen-Nuremberg.
[ { "math_id": 0, "text": "{\\mathcal L}^\\infty" } ]
https://en.wikipedia.org/wiki?curid=62169684
6217045
Kelvin equation
Equation describing the change in vapour pressure due to a curved liquid–vapor interface The Kelvin equation describes the change in vapour pressure due to a curved liquid–vapor interface, such as the surface of a droplet. The vapor pressure at a convex curved surface is higher than that at a flat surface. The Kelvin equation is dependent upon thermodynamic principles and does not allude to special properties of materials. It is also used for determination of pore size distribution of a porous medium using adsorption porosimetry. The equation is named in honor of William Thomson, also known as Lord Kelvin. Formulation. The original form of the Kelvin equation, published in 1871, is: formula_0 where: This may be written in the following form, known as the Ostwald–Freundlich equation: formula_11 where formula_12 is the actual vapour pressure, formula_13 is the saturated vapour pressure when the surface is flat, formula_14 is the liquid/vapor surface tension, formula_15 is the molar volume of the liquid, formula_16 is the universal gas constant, formula_17 is the radius of the droplet, and formula_18 is temperature. Equilibrium vapor pressure depends on droplet size. As formula_17 increases, formula_12 decreases towards formula_21, and the droplets grow into bulk liquid. If the vapour is cooled, then formula_18 decreases, but so does formula_13. This means formula_22 increases as the liquid is cooled. formula_14 and formula_15 may be treated as approximately fixed, which means that the critical radius formula_17 must also decrease. The further a vapour is supercooled, the smaller the critical radius becomes. Ultimately it can become as small as a few molecules, and the liquid undergoes homogeneous nucleation and growth. The change in vapor pressure can be attributed to changes in the Laplace pressure. When the Laplace pressure rises in a droplet, the droplet tends to evaporate more easily. When applying the Kelvin equation, two cases must be distinguished: A drop of liquid in its own vapor will result in a convex liquid surface, and a bubble of vapor in a liquid will result in a concave liquid surface. History. The form of the Kelvin equation here is not the form in which it appeared in Lord Kelvin's article of 1871. The derivation of the form that appears in this article from Kelvin's original equation was presented by Robert von Helmholtz (son of German physicist Hermann von Helmholtz) in his dissertation of 1885. In 2020, researchers found that the equation was accurate down to the 1nm scale. Derivation using the Gibbs free energy. The formal definition of the Gibbs free energy for a parcel of volume formula_23, pressure formula_24 and temperature formula_18 is given by: formula_25 where formula_26 is the internal energy and formula_27 is the entropy. The differential form of the Gibbs free energy can be given as formula_28 where formula_29 is the chemical potential and formula_30 is the number of moles. Suppose we have a substance formula_31 which contains no impurities. Let's consider the formation of a single drop of formula_32 with radius formula_17 containing formula_33 molecules from its pure vapor. The change in the Gibbs free energy due to this process is formula_34 where formula_35 and formula_36 are the Gibbs energies of the drop and vapor respectively. Suppose we have formula_37 molecules in the vapor phase initially. 
After the formation of the drop, this number decreases to formula_38, where formula_39 Let formula_40 and formula_41 represent the Gibbs free energy of a molecule in the vapor and liquid phase respectively. The change in the Gibbs free energy is then: formula_42 where formula_43 is the Gibbs free energy associated with an interface with radius of curvature formula_17 and surface tension formula_44. The equation can be rearranged to give formula_45 Let formula_46 and formula_47 be the volume occupied by one molecule in the liquid phase and vapor phase respectively. If the drop is considered to be spherical, then formula_48 The number of molecules in the drop is then given by formula_49 The change in Gibbs energy is then formula_50 The differential form of the Gibbs free energy of one molecule at constant temperature and constant number of molecules can be given by: formula_51 If we assume that formula_52 then formula_53 The vapor phase is also assumed to behave like an ideal gas, so formula_54 where formula_55 is the Boltzmann constant. Thus, the change in the Gibbs free energy for one molecule is formula_56 where formula_57 is the saturated vapor pressure of formula_32 over a flat surface and formula_58 is the actual vapor pressure over the liquid. Solving the integral, we have formula_59 The change in the Gibbs free energy following the formation of the drop is then formula_60 The derivative of this equation with respect to formula_17 is formula_61 The maximum value occurs when the derivative equals zero. The radius corresponding to this value is: formula_62 Rearranging this equation gives the Ostwald–Freundlich form of the Kelvin equation: formula_63 Apparent paradox. An equation similar to that of Kelvin can be derived for the solubility of small particles or droplets in a liquid, by means of the connection between vapour pressure and solubility, thus the Kelvin equation also applies to solids, to slightly soluble liquids, and their solutions if the partial pressure formula_12 is replaced by the solubility of the solid (formula_64) (or a second liquid) at the given radius, formula_17, and formula_13 by the solubility at a plane surface (formula_65). Hence small particles (like small droplets) are more soluble than larger ones. The equation would then be given by: formula_66 These results led to the problem of how new phases can ever arise from old ones. For example, if a container filled with water vapour at slightly below the saturation pressure is suddenly cooled, perhaps by adiabatic expansion, as in a cloud chamber, the vapour may become supersaturated with respect to liquid water. It is then in a metastable state, and we may expect condensation to take place. A reasonable molecular model of condensation would seem to be that two or three molecules of water vapour come together to form a tiny droplet, and that this nucleus of condensation then grows by accretion, as additional vapour molecules happen to hit it. The Kelvin equation, however, indicates that a tiny droplet like this nucleus, being only a few ångströms in diameter, would have a vapour pressure many times that of the bulk liquid. As far as tiny nuclei are concerned, the vapour would not be supersaturated at all. Such nuclei should immediately re-evaporate, and the emergence of a new phase at the equilibrium pressure, or even moderately above it should be impossible. Hence, the over-saturation must be several times higher than the normal saturation value for spontaneous nucleation to occur. 
There are two ways of resolving this paradox. In the first place, we know the statistical basis of the second law of thermodynamics. In any system at equilibrium, there are always fluctuations around the equilibrium condition, and if the system contains few molecules, these fluctuations may be relatively large. There is always a chance that an appropriate fluctuation may lead to the formation of a nucleus of a new phase, even though the tiny nucleus could be called thermodynamically unstable. The chance of a fluctuation is "e"−Δ"S"/"k", where Δ"S" is the deviation of the entropy from the equilibrium value. It is unlikely, however, that new phases often arise by this fluctuation mechanism and the resultant spontaneous nucleation. Calculations show that the chance, "e"−Δ"S"/"k", is usually too small. It is more likely that tiny dust particles act as nuclei in supersaturated vapours or solutions. In the cloud chamber, it is the clusters of ions caused by a passing high-energy particle that acts as nucleation centers. Actually, vapours seem to be much less finicky than solutions about the sort of nuclei required. This is because a liquid will condense on almost any surface, but crystallization requires the presence of crystal faces of the proper kind. For a sessile drop residing on a solid surface, the Kelvin equation is modified near the contact line, due to intermolecular interactions between the liquid drop and the solid surface. This extended Kelvin equation is given by formula_67 where formula_68 is the disjoining pressure that accounts for the intermolecular interactions between the sessile drop and the solid and formula_69 is the Laplace pressure, accounting for the curvature-induced pressure inside the liquid drop. When the interactions are attractive in nature, the disjoining pressure, formula_68 is negative. Near the contact line, the disjoining pressure dominates over the Laplace pressure, implying that the solubility, formula_64 is less than formula_65. This implies that a new phase can spontaneously grow on a solid surface, even under saturation conditions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " p(r_1 , r_2) = P - \\frac {\\gamma\\, \\rho _{\\rm vapor} } {(\\rho_{\\rm liquid} - \\rho_{\\rm vapor})} \\left ( \\frac {1}{r_1} + \\frac {1}{r_2} \\right ), " }, { "math_id": 1, "text": " p(r) " }, { "math_id": 2, "text": " r " }, { "math_id": 3, "text": " P " }, { "math_id": 4, "text": " r = \\infty " }, { "math_id": 5, "text": " p_{eq} " }, { "math_id": 6, "text": " \\gamma " }, { "math_id": 7, "text": " \\rho _{\\rm vapor} " }, { "math_id": 8, "text": " \\rho _{\\rm liquid} " }, { "math_id": 9, "text": " r_1 " }, { "math_id": 10, "text": " r_2 " }, { "math_id": 11, "text": "\\ln \\frac{p}{p_{\\rm sat}} = \\frac{2 \\gamma V_\\text{m}}{rRT}," }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "p_{\\rm sat}" }, { "math_id": 14, "text": "\\gamma" }, { "math_id": 15, "text": "V_\\text{m}" }, { "math_id": 16, "text": "R" }, { "math_id": 17, "text": "r" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "p > p_{\\rm sat}" }, { "math_id": 20, "text": "p < p_{\\rm sat}" }, { "math_id": 21, "text": "p_{sat}" }, { "math_id": 22, "text": "p/p_{\\rm sat}" }, { "math_id": 23, "text": "V" }, { "math_id": 24, "text": "P" }, { "math_id": 25, "text": "G=U+pV-TS," }, { "math_id": 26, "text": "U" }, { "math_id": 27, "text": "S" }, { "math_id": 28, "text": "dG=-S dT + V dP + \\sum_{i=1}^ k \\mu_i dn_i," }, { "math_id": 29, "text": "\\mu" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "x " }, { "math_id": 32, "text": "x" }, { "math_id": 33, "text": "n_x" }, { "math_id": 34, "text": "\\Delta G = G_d - G_v," }, { "math_id": 35, "text": "G_d" }, { "math_id": 36, "text": "G_v" }, { "math_id": 37, "text": "N_i" }, { "math_id": 38, "text": "N_f" }, { "math_id": 39, "text": "N_f = N_i - n_x." }, { "math_id": 40, "text": "g_v" }, { "math_id": 41, "text": "g_l" }, { "math_id": 42, "text": "\\Delta G = N_f g_v + n_x g_l + 4 \\pi r^2 \\sigma - N_i g_v, " }, { "math_id": 43, "text": "4 \\pi r^2 \\sigma " }, { "math_id": 44, "text": "\\sigma" }, { "math_id": 45, "text": "\\Delta G= (N_i - n_x) g_v + n_x g_l + 4 \\pi r^2 \\sigma - N_i g_v = n_x (g_l - g_v ) + 4 \\pi r^2 \\sigma ." }, { "math_id": 46, "text": "v_l" }, { "math_id": 47, "text": "v_v" }, { "math_id": 48, "text": "n_x v_l = \\frac{4}{3} \\pi r^3." }, { "math_id": 49, "text": "n_x = \\frac{4 \\pi r^3 }{3 v_l}." }, { "math_id": 50, "text": "\\Delta G = \\frac{4 \\pi r^3 }{3 v_l} (g_l - g_v) + 4 \\pi r^2 \\sigma . " }, { "math_id": 51, "text": "dg = (v_l - v_v ) dP." }, { "math_id": 52, "text": "v_v \\gg v_l" }, { "math_id": 53, "text": "dg \\simeq - v_v dP." }, { "math_id": 54, "text": "v_v = \\frac{k T}{P}," }, { "math_id": 55, "text": "k" }, { "math_id": 56, "text": "\\Delta g = - k T \\int\\limits_{P_{sat}}^{P} \\frac {dP}{P}," }, { "math_id": 57, "text": "P_{sat} " }, { "math_id": 58, "text": "P " }, { "math_id": 59, "text": "\\Delta g = g_l - g_v = -k T \\ln \\Bigl( \\frac{P}{P_{sat}}\\Bigr)." }, { "math_id": 60, "text": "\\Delta G =- \\frac{ 4}{3} \\pi r^3 \\frac{k T}{v_l} \\ln \\Bigl(\\frac{P}{P_{sat}} \\Bigr) + 4 \\pi r^2 \\sigma . " }, { "math_id": 61, "text": "\\frac{\\partial\\bigl( \\Delta G\\bigr)}{\\partial r} = -4 \\pi r^2 \\frac{ k T}{v_l} \\ln \\Bigl(\\frac{P}{P_{sat}}\\Bigr) + 8 \\pi r \\sigma . " }, { "math_id": 62, "text": "r = \\frac{2 v_l \\sigma }{kT \\ln \\Bigl(\\frac{P}{P_{sat}} \\Bigr)}." }, { "math_id": 63, "text": "\\ln \\Bigl(\\frac{P}{P_{sat}} \\Bigr) = \\frac{2 v_l \\sigma}{rkT}." 
}, { "math_id": 64, "text": "c" }, { "math_id": 65, "text": "c_{\\rm sat}" }, { "math_id": 66, "text": "\\ln \\frac{c}{c_{\\rm sat}}= \\frac{2 \\gamma V_\\text{m}}{rRT}." }, { "math_id": 67, "text": "\\ln \\frac{c}{c_{\\rm sat}}= \\frac{V_\\text{m}}{RT} \\left(\\frac{2 \\gamma}{r} + \\Pi\\right)." }, { "math_id": 68, "text": "\\Pi" }, { "math_id": 69, "text": "\\left(2 \\gamma/r \\right)" } ]
https://en.wikipedia.org/wiki?curid=6217045
621732
Geometric topology
Branch of mathematics studying (smooth) functions of manifolds In mathematics, geometric topology is the study of manifolds and maps between them, particularly embeddings of one manifold into another. History. Geometric topology as an area distinct from algebraic topology may be said to have originated in the 1935 classification of lens spaces by Reidemeister torsion, which required distinguishing spaces that are homotopy equivalent but not homeomorphic. This was the origin of "simple" homotopy theory. The use of the term geometric topology to describe these seems to have originated rather recently. Differences between low-dimensional and high-dimensional topology. Manifolds differ radically in behavior in high and low dimension. High-dimensional topology refers to manifolds of dimension 5 and above, or in relative terms, embeddings in codimension 3 and above. Low-dimensional topology is concerned with questions in dimensions up to 4, or embeddings in codimension up to 2. Dimension 4 is special, in that in some respects (topologically), dimension 4 is high-dimensional, while in other respects (differentiably), dimension 4 is low-dimensional; this overlap yields phenomena exceptional to dimension 4, such as exotic differentiable structures on R4. Thus the topological classification of 4-manifolds is in principle tractable, and the key questions are: does a topological manifold admit a differentiable structure, and if so, how many? Notably, the smooth case of dimension 4 is the last open case of the generalized Poincaré conjecture; see Gluck twists. The distinction is because surgery theory works in dimension 5 and above (in fact, in many cases, it works topologically in dimension 4, though this is very involved to prove), and thus the behavior of manifolds in dimension 5 and above may be studied using the surgery theory program. In dimension 4 and below (topologically, in dimension 3 and below), surgery theory does not work. Indeed, one approach to discussing low-dimensional manifolds is to ask "what would surgery theory predict to be true, were it to work?" – and then understand low-dimensional phenomena as deviations from this. The precise reason for the difference at dimension 5 is because the Whitney embedding theorem, the key technical trick which underlies surgery theory, requires 2+1 dimensions. Roughly, the Whitney trick allows one to "unknot" knotted spheres – more precisely, remove self-intersections of immersions; it does this via a homotopy of a disk – the disk has 2 dimensions, and the homotopy adds 1 more – and thus in codimension greater than 2, this can be done without intersecting itself; hence embeddings in codimension greater than 2 can be understood by surgery. In surgery theory, the key step is in the middle dimension, and thus when the middle dimension has codimension more than 2 (loosely, 2½ is enough, hence total dimension 5 is enough), the Whitney trick works. The key consequence of this is Smale's "h"-cobordism theorem, which works in dimension 5 and above, and forms the basis for surgery theory. A modification of the Whitney trick can work in 4 dimensions, and is called Casson handles – because there are not enough dimensions, a Whitney disk introduces new kinks, which can be resolved by another Whitney disk, leading to a sequence ("tower") of disks. The limit of this tower yields a topological but not differentiable map, hence surgery works topologically but not differentiably in dimension 4. Important tools in geometric topology. Fundamental group. 
In all dimensions, the fundamental group of a manifold is a very important invariant, and determines much of the structure; in dimensions 1, 2 and 3, the possible fundamental groups are restricted, while in dimension 4 and above every finitely presented group is the fundamental group of a manifold (note that it is sufficient to show this for 4- and 5-dimensional manifolds, and then to take products with spheres to get higher ones). Orientability. A manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. Formulations applicable to general topological manifolds often employ methods of homology theory, whereas for differentiable manifolds more structure is present, allowing a formulation in terms of differential forms. An important generalization of the notion of orientability of a space is that of orientability of a family of spaces parameterized by some other space (a fiber bundle) for which an orientation must be selected in each of the spaces which varies continuously with respect to changes in the parameter values. Handle decompositions. A handle decomposition of an "m"-manifold "M" is a union formula_0 where each formula_1 is obtained from formula_2 by the attaching of formula_3-handles. A handle decomposition is to a manifold what a CW-decomposition is to a topological space—in many regards the purpose of a handle decomposition is to have a language analogous to CW-complexes, but adapted to the world of smooth manifolds. Thus an "i"-handle is the smooth analogue of an "i"-cell. Handle decompositions of manifolds arise naturally via Morse theory. The modification of handle structures is closely linked to Cerf theory. Local flatness. Local flatness is a property of a submanifold in a topological manifold of larger dimension. In the category of topological manifolds, locally flat submanifolds play a role similar to that of embedded submanifolds in the category of smooth manifolds. Suppose a "d" dimensional manifold "N" is embedded into an "n" dimensional manifold "M" (where "d" &lt; "n"). If formula_4 we say "N" is locally flat at "x" if there is a neighborhood formula_5 of "x" such that the topological pair formula_6 is homeomorphic to the pair formula_7, with a standard inclusion of formula_8 as a subspace of formula_9. That is, there exists a homeomorphism formula_10 such that the image of formula_11 coincides with formula_8. Schönflies theorems. The generalized Schoenflies theorem states that, if an ("n" − 1)-dimensional sphere "S" is embedded into the "n"-dimensional sphere "Sn" in a locally flat way (that is, the embedding extends to that of a thickened sphere), then the pair ("Sn", "S") is homeomorphic to the pair ("Sn", "S""n"−1), where "S""n"−1 is the equator of the "n"-sphere. Brown and Mazur received the Veblen Prize for their independent proofs of this theorem. Branches of geometric topology. Low-dimensional topology. Low-dimensional topology includes the study of surfaces (2-manifolds), 3-manifolds, and 4-manifolds; each has its own theory, with some connections between them.
Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of 8 possible geometries. 2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. Knot theory. Knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3 (since we're using topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see "knot (mathematics)". Higher-dimensional knots are "n"-dimensional spheres in "m"-dimensional Euclidean space. High-dimensional geometric topology. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. A characteristic class is a way of associating to each principal bundle on a topological space "X" a cohomology class of "X". The cohomology class measures the extent to which the bundle is "twisted" — particularly, whether it possesses sections or not. In other words, characteristic classes are global invariants which measure the deviation of a local product structure from a global product structure. They are one of the unifying geometric concepts in algebraic topology, differential geometry and algebraic geometry. Surgery theory is a collection of techniques used to produce one manifold from another in a 'controlled' way, introduced by Milnor (1961). Surgery refers to cutting out parts of the manifold and replacing it with a part of another manifold, matching up along the cut or boundary. This is closely related to, but not identical with, handlebody decompositions. It is a major tool in the study and classification of manifolds of dimension greater than 3. More technically, the idea is to start with a well-understood manifold "M" and perform surgery on it to produce a manifold "M "′ having some desired property, in such a way that the effects on the homology, homotopy groups, or other interesting invariants of the manifold are known. The classification of exotic spheres by Kervaire and Milnor (1963) led to the emergence of surgery theory as a major tool in high-dimensional topology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\emptyset = M_{-1} \\subset M_0 \\subset M_1 \\subset M_2 \\subset \\dots \\subset M_{m-1} \\subset M_m = M" }, { "math_id": 1, "text": "M_i" }, { "math_id": 2, "text": "M_{i-1}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "x \\in N," }, { "math_id": 5, "text": " U \\subset M" }, { "math_id": 6, "text": "(U, U\\cap N)" }, { "math_id": 7, "text": "(\\mathbb{R}^n,\\mathbb{R}^d)" }, { "math_id": 8, "text": "\\mathbb{R}^d" }, { "math_id": 9, "text": "\\mathbb{R}^n" }, { "math_id": 10, "text": "U\\to R^n" }, { "math_id": 11, "text": "U\\cap N" } ]
https://en.wikipedia.org/wiki?curid=621732
62176391
Ronald Fintushel
American mathematician Ronald Alan Fintushel (born 1945) is an American mathematician, specializing in low-dimensional geometric topology (specifically of 4-manifolds) and the mathematics of gauge theory. Education and career. Fintushel studied mathematics at Columbia University with a bachelor's degree in 1967 and at the University of Illinois at Urbana–Champaign with a master's degree in 1969. In 1975 he received his Ph.D. from the State University of New York at Binghamton with thesis "Orbit maps of local formula_0-actions on manifolds of dimension less than five " under the supervision of Louis McAuley. Fintushel was a professor at Tulane University and is a professor at Michigan State University. His research deals with geometric topology, in particular of 4-manifolds (including the computation of Donaldson and Seiberg-Witten invariants) with links to gauge theory, knot theory, and symplectic geometry. He works closely with Ronald J. Stern. In 1998 he was an Invited Speaker, with Ronald J. Stern, with talk "Construction of smooth 4-manifolds" at the International Congress of Mathematicians in Berlin. In 1997 Fintushel received the Distinguished Faculty Award from Michigan State University. In 2016 a conference was held in his honor at Tulane University. He was elected a Fellow of the American Mathematical Society. Fintushel is a member of the editorial boards of "Geometry &amp; Topology" and the "Michigan Mathematical Journal".
[ { "math_id": 0, "text": "S^1" }, { "math_id": 1, "text": "S^4" } ]
https://en.wikipedia.org/wiki?curid=62176391
621774
Low-dimensional topology
Branch of topology In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory. History. A number of advances starting in the 1960s had the effect of emphasising low dimensions in topology. The solution by Stephen Smale, in 1961, of the Poincaré conjecture in five or more dimensions made dimensions three and four seem the hardest; and indeed they required new methods, while the freedom of higher dimensions meant that questions could be reduced to computational methods available in surgery theory. Thurston's geometrization conjecture, formulated in the late 1970s, offered a framework that suggested geometry and topology were closely intertwined in low dimensions, and Thurston's proof of geometrization for Haken manifolds utilized a variety of tools from previously only weakly linked areas of mathematics. Vaughan Jones' discovery of the Jones polynomial in the early 1980s not only led knot theory in new directions but gave rise to still mysterious connections between low-dimensional topology and mathematical physics. In 2002, Grigori Perelman announced a proof of the three-dimensional Poincaré conjecture, using Richard S. Hamilton's Ricci flow, an idea belonging to the field of geometric analysis. Overall, this progress has led to better integration of the field into the rest of mathematics. Two dimensions. A surface is a two-dimensional, topological manifold. The most familiar examples are those that arise as the boundaries of solid objects in ordinary three-dimensional Euclidean space R3—for example, the surface of a ball. On the other hand, there are surfaces, such as the Klein bottle, that cannot be embedded in three-dimensional Euclidean space without introducing singularities or self-intersections. Classification of surfaces. The "classification theorem of closed surfaces" states that any connected closed surface is homeomorphic to some member of one of these three families: the sphere, the connected sum of "g" tori (with formula_0), or the connected sum of "k" real projective planes (with formula_1). The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number "g" of tori involved is called the "genus" of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of "g" tori is 2 − 2"g". The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of "k" of them is 2 − "k". Teichmüller space. In mathematics, the Teichmüller space "TX" of a (real) topological surface "X", is a space that parameterizes complex structures on "X" up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Each point in "TX" may be regarded as an isomorphism class of 'marked' Riemann surfaces where a 'marking' is an isotopy class of homeomorphisms from "X" to "X". The Teichmüller space is the universal covering orbifold of the (Riemann) moduli space. Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics.
The underlying topological space of Teichmüller space was studied by Fricke, and the Teichmüller metric on it was introduced by Oswald Teichmüller (1940). Uniformization theorem. In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of the three domains: the open unit disk, the complex plane, or the Riemann sphere. In particular it admits a Riemannian metric of constant curvature. This classifies Riemannian surfaces as elliptic (positively curved—rather, admitting a constant positively curved metric), parabolic (flat), and hyperbolic (negatively curved) according to their universal cover. The uniformization theorem is a generalization of the Riemann mapping theorem from proper simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces. Three dimensions. A topological space "X" is a 3-manifold if every point in "X" has a neighbourhood that is homeomorphic to Euclidean 3-space. The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with say, topological 3-manifolds, or smooth 3-manifolds. Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology. Knot and braid theory. Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3 (since we're using topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. Knot complements are frequently-studied 3-manifolds. The knot complement of a tame knot "K" is the three-dimensional space surrounding the knot. To make this precise, suppose that "K" is a knot in a three-manifold "M" (most often, "M" is the 3-sphere). Let "N" be a tubular neighborhood of "K"; so "N" is a solid torus. The knot complement is then the complement of "N", formula_2 A related topic is braid theory. Braid theory is an abstract geometric theory studying the everyday braid concept, and some generalizations. The idea is that braids can be organized into groups, in which the group operation is 'do the first braid on a set of strings, and then follow it with a second on the twisted strings'. Such groups may be described by explicit presentations, as was shown by Emil Artin (1947). For an elementary treatment along these lines, see the article on braid groups. 
Braid groups may also be given a deeper mathematical interpretation: as the fundamental group of certain configuration spaces. Hyperbolic 3-manifolds. A hyperbolic 3-manifold is a 3-manifold equipped with a complete Riemannian metric of constant sectional curvature -1. In other words, it is the quotient of three-dimensional hyperbolic space by a subgroup of hyperbolic isometries acting freely and properly discontinuously. See also Kleinian model. Its thick-thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and/or ends that are the product of a Euclidean surface and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact. In this case, the ends are of the form torus cross the closed half-ray and are called cusps. Knot complements are the most commonly studied cusped manifolds. Poincaré conjecture and geometrization. Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply-connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic). In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston (1982), and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture. Four dimensions. A 4-manifold is a 4-dimensional topological manifold. A smooth 4-manifold is a 4-manifold with a smooth structure. In dimension four, in marked contrast with lower dimensions, topological and smooth manifolds are quite different. There exist some topological 4-manifolds that admit no smooth structure and even if there exists a smooth structure it need not be unique (i.e. there are smooth 4-manifolds that are homeomorphic but not diffeomorphic). 4-manifolds are of importance in physics because, in General Relativity, spacetime is modeled as a pseudo-Riemannian 4-manifold. Exotic R4. An exotic R4 is a differentiable manifold that is homeomorphic but not diffeomorphic to the Euclidean space R4. The first examples were found in the early 1980s by Michael Freedman, by using the contrast between Freedman's theorems about topological 4-manifolds, and Simon Donaldson's theorems about smooth 4-manifolds. There is a continuum of non-diffeomorphic differentiable structures of R4, as was shown first by Clifford Taubes. Prior to this construction, non-diffeomorphic smooth structures on spheres—exotic spheres—were already known to exist, although the question of the existence of such structures for the particular case of the 4-sphere remained open (and still remains open to this day). For any positive integer "n" other than 4, there are no exotic smooth structures on R"n"; in other words, if "n" ≠ 4 then any smooth manifold homeomorphic to R"n" is diffeomorphic to R"n". Other special phenomena in four dimensions. There are several fundamental theorems about manifolds that can be proved by low-dimensional methods in dimensions at most 3, and by completely different high-dimensional methods in dimension at least 5, but which are false in four dimensions. 
Here are some examples: A few typical theorems that distinguish low-dimensional topology. There are several theorems that in effect state that many of the most basic tools used to study high-dimensional manifolds do not apply to low-dimensional manifolds, such as: Steenrod's theorem states that an orientable 3-manifold has a trivial tangent bundle. Stated another way, the only characteristic class of a 3-manifold is the obstruction to orientability. Any closed 3-manifold is the boundary of a 4-manifold. This theorem is due independently to several people: it follows from the Dehn–Lickorish theorem via a Heegaard splitting of the 3-manifold. It also follows from René Thom's computation of the cobordism ring of closed manifolds. The existence of exotic smooth structures on R4. This was originally observed by Michael Freedman, based on the work of Simon Donaldson and Andrew Casson. It has since been elaborated by Freedman, Robert Gompf, Clifford Taubes and Laurence Taylor to show there exists a continuum of non-diffeomorphic smooth structures on R4. Meanwhile, Rn is known to have exactly one smooth structure up to diffeomorphism provided "n" ≠ 4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "g \\geq 1" }, { "math_id": 1, "text": "k \\geq 1" }, { "math_id": 2, "text": "X_K = M - \\mbox{interior}(N)." } ]
https://en.wikipedia.org/wiki?curid=621774
62193916
Ezra 1
First chapter of the Book of Ezra Ezra 1 is the first chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally believe that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. Ezra 1 contains a narrative of the Edict of Cyrus and the initial return of exiles to Judah led by Sheshbazzar as well as the restoration of the sacred temple vessels. It also introduces the section comprising chapters 1 to 6 describing the history before the arrival of Ezra in the land of Judah in 468 BCE. The opening sentence of this chapter (and this book) is identical to the final sentence of 2 Chronicles. Cyrus Cylinder. The Cyrus Cylinder contains a statement related to the Cyrus's edict which gives the historical background to the Book of Ezra: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I returned the images of the gods, who had resided there [i.e., in Babylon], to their places and I let them dwell in eternal abodes. I gathered all their inhabitants and returned to them their dwellings. Cyrus's edict is significant to the return of the Jews, because it shows that they did not slip away from Babylon but were given official permission by the Persian king in the first year of his rule, and it is a specific fulfillment of the seventy years prophecy of Jeremiah (, ). Text. The text is written in Biblical Hebrew and divided into 11 verses. Textual witnesses. There is a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: Ἔσδρας Αʹ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: Ἔσδρας Βʹ). 1 Esdras 2:1–14 is an equivalent of Ezra 1:1–11 (Cyrus's edict). An early manuscript containing the text of this chapter in Biblical Hebrew is the Codex Leningradensis (1008 CE). Since the anti-Jewish riots in Aleppo in 1947, the whole book of Ezra–Nehemiah has been missing from the text of the Aleppo Codex. Biblical narrative. Ezra 1 starts by providing historical context of a real event: "the first year of Cyrus king of Persia", but immediately follows with the statement about Yahweh, who has the real control and even already speaks about this event before the birth of Cyrus (; ) and the fulfillment of his word through Jeremiah. Verse 1. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Now in the first year of Cyrus king of Persia, that the word of the Lord by the mouth of Jeremiah might be fulfilled, the Lord stirred up the spirit of Cyrus king of Persia, that he made a proclamation throughout all his kingdom, and put it also in writing, saying, Verse 2. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Thus says Cyrus king of Persia: The Lord God of heaven has given me all the kingdoms of the earth, and He has charged me to build Him a house at Jerusalem, which is in Judah. Verse 3. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Who is there among you of all his people? 
his God be with him, and let him go up to Jerusalem, which is in Judah, and build the house of the Lord God of Israel, (he is the God,) which is in Jerusalem. Verse 4. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;And whoever is left in any place where he dwells, let the men of his place help him with silver and gold, with goods and livestock, besides the freewill offerings for the house of God which is in Jerusalem. Verse 7. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Also Cyrus the king brought forth the vessels of the house of the Lord, which Nebuchadnezzar had brought forth out of Jerusalem, and had put them in the house of his gods; The Temple treasures that Nebuchadnezzar took away () are now to be returned to Jerusalem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62193916
62195985
Ezra 2
A chapter in the Book of Ezra Ezra 2 is the second chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapter 1 to 6 describes the history before the arrival of Ezra in the land of Judah in 468 BCE. This chapter contains a list, known as the "Golah List", of the people who returned from Babylon to Judah following Cyrus's edict "by genealogy, family and place of habitation". Text. The original text is written in Hebrew language. This chapter is divided into 70 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: Ἔσδρας Αʹ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: Ἔσδρας Βʹ). 1 Esdras 5:7–46 is an equivalent of Ezra 2 (List of former exiles who returned). The Community (2:1–63). The list here is not an account the people who were recently back from the journey, but those who have arrived and settled down after returning from Babylon, where they currently reside in Palestine among the other inhabitants of the land – non-Jews and also the Jews who never left the land, "whom the Babylonians has left behind as undesirable". The genealogies apparently "function as authenticators of who has a right to be classified as an "Israelite"", because "those who could not prove their genealogy were excluded" (verses 59–63). "Now these are the children of the province, who went up out of the captivity of those who had been carried away, whom Nebuchadnezzar the king of Babylon had carried away to Babylon, and who returned to Jerusalem and Judah, everyone to his city;" "Those who came with Zerubbabel were Jeshua, Nehemiah, Seraiah, Reelaiah, Mordecai, Bilshan, Mispar, Bigvai, Rehum, and Baanah." "The number of the men of the people of Israel:" "the sons of Ater of Hezekiah—ninety-eight;" "Also, of the sons of the priests: the sons of Habaiah, the sons of Hakkoz, and the sons of Barzillai (who had taken a wife from the daughters of Barzillai the Gileadite, and was called by their name)." The Totals (2:64–67). The number of the people here shows the depletion of the population; in time of Moses "the whole number of the people of Israel...from 20 years old and upward... was 603,550" () not counting the Levites, whereas in the time of David, "in Israel there were 800,000 valiant men who drew the sword, and the men of Judah were 500,000" (), but now the returned exiles, including the priests and Levites, only "amount to 42,360" (). The listing of servants and animals reflects "the status of the exiles, their resources and capabilities". Temple Gifts (2:68–69). 
Those arrived back in Jerusalem and Judah gave freewill offerings "toward the rebuilding of the house of God". Resettlement (2:70). The conclusion of the list is similar to the beginning (verse 1): "by affirming the resettlement of the exiles", as every person has now settled "in their own towns". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62195985
6220
Circle
Simple curve of Euclidean geometry A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Terminology. All of the specified regions may be considered as "open", that is, not containing their boundaries, or as "closed", including their respective boundaries. Etymology. The word "circle" derives from the Greek κίρκος/κύκλος ("kirkos/kuklos"), itself a metathesis of the Homeric Greek κρίκος ("krikos"), meaning "hoop" or "ring". The origins of the words "circus" and "circuit" are closely related. History. Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings. Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi. The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to (3.16049...) as an approximate value of π. Book 3 of Euclid's "Elements" deals with the properties of circles. Euclid's definition of a circle is: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A circle is a plane figure bounded by one curved line, and such that all straight lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre. In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles. In 1880 CE, Ferdinand von Lindemann proved that π is transcendental, proving that the millennia-old problem of squaring the circle cannot be performed with straightedge and compass. With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions. Symbolism and religious use. From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas. However, differences in worldview (beliefs and culture) had a great impact on artists' perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits. 
The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth. Magic circles are part of some traditions of Western esotericism. Analytic results. Circumference. The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. Thus the circumference "C" is related to the radius "r" and diameter "d" by: formula_2 Area enclosed. As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared: formula_3 Equivalently, denoting diameter by "d", formula_4 that is, approximately 79% of the circumscribing square (whose side is of length "d"). The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality. Equations. Cartesian coordinates. Equation of a circle. In an "x"–"y" Cartesian coordinate system, the circle with centre coordinates ("a", "b") and radius "r" is the set of all points ("x", "y") such that formula_5 This equation, known as the "equation of the circle", follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |"x" − "a"| and |"y" − "b"|. If the circle is centred at the origin (0, 0), then the equation simplifies to formula_6 Parametric form. The equation can be written in parametric form using the trigonometric functions sine and cosine as formula_7 where "t" is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from ("a", "b") to ("x", "y") makes with the positive "x" axis. An alternative parametrisation of the circle is formula_8 In this parameterisation, the ratio of "t" to "r" can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the "x" axis (see Tangent half-angle substitution). However, this parameterisation works only if "t" is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted. 3-point form. The equation of the circle determined by three points formula_9 not on a line is obtained by a conversion of the "3-point form of a circle equation": formula_10 Homogeneous form. In homogeneous coordinates, each conic section with the equation of a circle has the form formula_11 It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points "I"(1: "i": 0) and "J"(1: −"i": 0). These points are called the circular points at infinity. Polar coordinates. 
In polar coordinates, the equation of a circle is formula_12 where "a" is the radius of the circle, formula_13 are the polar coordinates of a generic point on the circle, and formula_14 are the polar coordinates of the centre of the circle (i.e., "r"0 is the distance from the origin to the centre of the circle, and "φ" is the anticlockwise angle from the positive "x" axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. "r"0 = 0, this reduces to "r" = "a". When "r"0 = "a", or when the origin lies on the circle, the equation becomes formula_15 In the general case, the equation can be solved for "r", giving formula_16 Without the ± sign, the equation would in some cases describe only half a circle. Complex plane. In the complex plane, a circle with a centre at "c" and radius "r" has the equation formula_17 In parametric form, this can be written as formula_18 The slightly generalised equation formula_19 for real "p", "q" and complex "g" is sometimes called a generalised circle. This becomes the above equation for a circle with formula_20, since formula_21. Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line. Tangent lines. The tangent line through a point "P" on the circle is perpendicular to the diameter passing through "P". If P = ("x"1, "y"1) and the circle has centre ("a", "b") and radius "r", then the tangent line is perpendicular to the line from ("a", "b") to ("x"1, "y"1), so it has the form ("x"1 − "a")"x" + ("y"1 − "b")"y" = "c". Evaluating at ("x"1, "y"1) determines the value of "c", and the result is that the equation of the tangent is formula_22 or formula_23 If "y"1 ≠ "b", then the slope of this line is formula_24 This can also be found using implicit differentiation. When the centre of the circle is at the origin, then the equation of the tangent line becomes formula_25 and its slope is formula_26 Properties. Theorems. The chord theorem states that if two chords, "CD" and "EB", intersect at "A", then "AC" × "AD" = "AB" × "AE". If two secants, "AE" and "AD", also cut the circle at "B" and "C" respectively, then "AC" × "AD" = "AB" × "AE" (corollary of the chord theorem). If a tangent from an external point "A" meets the circle at "F" and a secant from "A" meets the circle at "C" and "D" respectively, then "AF"2 = "AC" × "AD" (tangent–secant theorem). If the angle subtended by a chord at the centre is 90°, then "ℓ" = "r" √2, where "ℓ" is the length of the chord, and "r" is the radius of the circle. Inscribed angles. An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°). Sagitta. The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle. Given the length "y" of a chord and the length "x" of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines: formula_30 Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length "y" and with sagitta of length "x", since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2"r" − "x") in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2"r" − "x")"x" = ("y" / 2)2. Solving for "r", we find the required result.
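As a quick numerical check of the sagitta relation formula_30 derived above, the following Python sketch recovers the radius of a circle from a chord length and its sagitta; the function name and the test values are illustrative assumptions, not part of the article.

```python
import math

def radius_from_chord_and_sagitta(y, x):
    """Radius of the circle whose chord of length y has sagitta x."""
    return y * y / (8.0 * x) + x / 2.0

# Check against a unit circle, using the chord that subtends a 60 degree arc.
r = 1.0
half_angle = math.radians(30)
chord = 2 * r * math.sin(half_angle)      # chord length y
sagitta = r * (1 - math.cos(half_angle))  # sagitta x
print(radius_from_chord_and_sagitta(chord, sagitta))  # ~1.0
```

The printed value agrees with the known radius of the test circle.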
Compass and straightedge constructions. There are many compass-and-straightedge constructions resulting in circles. The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass. Circle of Apollonius. Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant "ratio" (other than 1) of distances to two fixed foci, "A" and "B". (The set of points where the distances are equal is the perpendicular bisector of segment "AB", a line.) That circle is sometimes said to be drawn "about" two points. The proof is in two parts. First, one must prove that, given two foci "A" and "B" and a ratio of distances, any point "P" satisfying the ratio of distances must fall on a particular circle. Let "C" be another point, also satisfying the ratio and lying on segment "AB". By the angle bisector theorem the line segment "PC" will bisect the interior angle "APB", since the segments are similar: formula_31 Analogously, a line segment "PD" through some point "D" on "AB" extended bisects the corresponding exterior angle "BPQ" where "Q" is on "AP" extended. Since the interior and exterior angles sum to 180 degrees, the angle "CPD" is exactly 90 degrees; that is, a right angle. The set of points "P" such that angle "CPD" is a right angle forms a circle, of which "CD" is a diameter. Second, see15 for a proof that every point on the indicated circle satisfies the given ratio. Cross-ratios. A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If "A", "B", and "C" are as above, then the circle of Apollonius for these three points is the collection of points "P" for which the absolute value of the cross-ratio is equal to one: formula_32 Stated another way, "P" is a point on the circle of Apollonius if and only if the cross-ratio ["A", "B"; "C", "P"] is on the unit circle in the complex plane. Generalised circles. If "C" is the midpoint of the segment "AB", then the collection of points "P" satisfying the Apollonius condition formula_33 is not a circle, but rather a line. Thus, if "A", "B", and "C" are given distinct points in the plane, then the locus of points "P" satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius. Inscription in or circumscription about other figures. In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle. About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices. A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon. A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon. A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle. 
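The incircle and circumcircle of a triangle mentioned above can be computed explicitly from the vertex coordinates. The following Python sketch uses the standard incentre and circumcentre formulas; the helper names and the sample 3–4–5 right triangle are assumptions made for the example, not anything prescribed by this article.

```python
import math

def incircle(A, B, C):
    """Incentre and inradius of triangle ABC (vertices as (x, y) pairs)."""
    a = math.dist(B, C)   # side lengths opposite each vertex
    b = math.dist(C, A)
    c = math.dist(A, B)
    p = a + b + c
    cx = (a * A[0] + b * B[0] + c * C[0]) / p
    cy = (a * A[1] + b * B[1] + c * C[1]) / p
    s = p / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return (cx, cy), area / s

def circumcircle(A, B, C):
    """Circumcentre and circumradius of triangle ABC."""
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), A)

A, B, C = (0, 0), (4, 0), (0, 3)
print(incircle(A, B, C))       # centre (1.0, 1.0), radius 1.0
print(circumcircle(A, B, C))   # centre (2.0, 1.5), radius 2.5
```

For the right triangle chosen, the circumcentre is the midpoint of the hypotenuse and the inradius is 1, as the printed output shows.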
Limiting case of other figures. The circle can be viewed as a limiting case of various other figures; for example, it is the special case of the superellipse formula_34 in which "n" = 2 and "a" = "b". Locus of constant sum. Consider a finite set of formula_35 points in the plane. The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points. A generalization for higher powers of distances is obtained if under formula_35 points the vertices of the regular polygon formula_36 are taken. The locus of points such that the sum of the formula_37-th power of distances formula_38 to the vertices of a given regular polygon with circumradius formula_39 is constant is a circle, if formula_40 whose centre is the centroid of the formula_36. In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added and so forth. Squaring the circle. Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge. In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts. Generalizations. In other "p"-norms. Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In "p"-norm, distance is determined by formula_41 In Euclidean geometry, "p" = 2, giving the familiar formula_42 In taxicab geometry, "p" = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length formula_43 using a Euclidean metric, where "r" is the circle's radius, its length in taxicab geometry is 2"r". Thus, a circle's circumference is 8"r", and the value of a geometric analog to formula_44 is 4 in this geometry. The formula for the unit circle in taxicab geometry is formula_45 in Cartesian coordinates and formula_46 in polar coordinates. A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre. A circle of radius "r" for the Chebyshev distance ("L"∞ metric) on a plane is also a square with side length 2"r" parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between "L"1 and "L"∞ metrics does not generalize to higher dimensions. Topological definition. The circle is the one-dimensional hypersphere (the 1-sphere). In topology, a circle is not limited to the geometric concept, but to all of its homeomorphisms. Two topological circles are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy). Specially named circles. &lt;templatestyles src="Col-begin/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "r=0" }, { "math_id": 2, "text": "C = 2\\pi r = \\pi d." }, { "math_id": 3, "text": "\\mathrm{Area} = \\pi r^2." }, { "math_id": 4, "text": "\\mathrm{Area} = \\frac{\\pi d^2}{4} \\approx 0.7854 d^2," }, { "math_id": 5, "text": "(x - a)^2 + (y - b)^2 = r^2." }, { "math_id": 6, "text": "x^2 + y^2 = r^2." }, { "math_id": 7, "text": "\\begin{align}\nx &= a + r\\,\\cos t, \\\\\ny &= b + r\\,\\sin t,\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\nx &= a + r \\frac{1 - t^2}{1 + t^2}, \\\\\ny &= b + r \\frac{2t}{1 + t^2}.\n\\end{align}" }, { "math_id": 9, "text": "(x_1, y_1), (x_2, y_2), (x_3, y_3)" }, { "math_id": 10, "text": "\n\\frac{({\\color{green}x} - x_1)({\\color{green}x} - x_2) + ({\\color{red}y} - y_1)({\\color{red}y} - y_2)}\n {({\\color{red}y} - y_1)({\\color{green}x} - x_2) - ({\\color{red}y} - y_2)({\\color{green}x} - x_1)} =\n\\frac{(x_3 - x_1)(x_3 - x_2) + (y_3 - y_1)(y_3 - y_2)}\n {(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}." }, { "math_id": 11, "text": "x^2 + y^2 - 2axz - 2byz + cz^2 = 0." }, { "math_id": 12, "text": "r^2 - 2 r r_0 \\cos(\\theta - \\phi) + r_0^2 = a^2," }, { "math_id": 13, "text": "(r, \\theta)" }, { "math_id": 14, "text": "(r_0, \\phi)" }, { "math_id": 15, "text": "r = 2 a\\cos(\\theta - \\phi)." }, { "math_id": 16, "text": "r = r_0 \\cos(\\theta - \\phi) \\pm \\sqrt{a^2 - r_0^2 \\sin^2(\\theta - \\phi)}." }, { "math_id": 17, "text": "|z - c| = r." }, { "math_id": 18, "text": "z = re^{it} + c." }, { "math_id": 19, "text": "pz\\overline{z} + gz + \\overline{gz} = q" }, { "math_id": 20, "text": "p = 1,\\ g = -\\overline{c},\\ q = r^2 - |c|^2" }, { "math_id": 21, "text": "|z - c|^2 = z\\overline{z} - \\overline{c}z - c\\overline{z} + c\\overline{c}" }, { "math_id": 22, "text": "(x_1 - a)x + (y_1 - b)y = (x_1 - a)x_1 + (y_1 - b)y_1," }, { "math_id": 23, "text": "(x_1 - a)(x - a) + (y_1 - b)(y - b) = r^2." }, { "math_id": 24, "text": "\\frac{dy}{dx} = -\\frac{x_1 - a}{y_1 - b}." }, { "math_id": 25, "text": "x_1 x + y_1 y = r^2," }, { "math_id": 26, "text": "\\frac{dy}{dx} = -\\frac{x_1}{y_1}." }, { "math_id": 27, "text": "\\overset{\\frown}{DE}" }, { "math_id": 28, "text": "\\overset{\\frown}{BC}" }, { "math_id": 29, "text": "2\\angle{CAB} = \\angle{DOE} - \\angle{BOC}" }, { "math_id": 30, "text": "r = \\frac{y^2}{8x} + \\frac{x}{2}." }, { "math_id": 31, "text": "\\frac{AP}{BP} = \\frac{AC}{BC}." }, { "math_id": 32, "text": "\\bigl|[A, B; C, P]\\bigr| = 1." }, { "math_id": 33, "text": "\\frac{|AP|}{|BP|} = \\frac{|AC|}{|BC|}" }, { "math_id": 34, "text": "\\left|\\frac{x}{a}\\right|^n\\! + \\left|\\frac{y}{b}\\right|^n\\! = 1" }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "P_n" }, { "math_id": 37, "text": "2m" }, { "math_id": 38, "text": "d_i" }, { "math_id": 39, "text": "R" }, { "math_id": 40, "text": "\\sum_{i=1}^n d_i^{2m} > nR^{2m} , \\quad \\text{ where } ~ m = 1, 2, \\dots, n-1;" }, { "math_id": 41, "text": " \\left\\| x \\right\\| _p = \\left( \\left|x_1\\right|^p + \\left|x_2\\right|^p + \\dotsb + \\left|x_n\\right|^p \\right) ^{1/p} ." }, { "math_id": 42, "text": " \\left\\| x \\right\\| _2 = \\sqrt{ \\left|x_1\\right|^2 + \\left|x_2\\right|^2 + \\dotsb + \\left|x_n\\right|^2 } ." }, { "math_id": 43, "text": "\\sqrt{2} r" }, { "math_id": 44, "text": "\\pi " }, { "math_id": 45, "text": "|x| + |y| = 1" }, { "math_id": 46, "text": "r = \\frac{1}{\\left| \\sin \\theta\\right| + \\left|\\cos\\theta\\right|}" } ]
https://en.wikipedia.org/wiki?curid=6220
62201615
Generalized variance
The generalized variance is a scalar value which generalizes variance for multivariate random variables. It was introduced by Samuel S. Wilks. The generalized variance is defined as the determinant of the covariance matrix, formula_0. It can be shown to be related to the multidimensional scatter of points around their mean. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
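As a minimal illustration (not part of the original article; the data below are synthetic and the sample size is arbitrary), the generalized variance of a multivariate sample can be computed directly from this definition with NumPy:

```python
import numpy as np

# Generalized variance = determinant of the covariance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # 500 observations of a 3-dimensional variable

cov = np.cov(X, rowvar=False)          # 3 x 3 sample covariance matrix
generalized_variance = np.linalg.det(cov)

print(cov)
print(generalized_variance)
```

For fixed marginal variances, weaker correlation between the components gives a larger determinant, and greater overall spread does too, in line with the scatter interpretation above.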
[ { "math_id": 0, "text": "\\det(\\Sigma)" } ]
https://en.wikipedia.org/wiki?curid=62201615
62203368
Ezra 3
A chapter in the Book of Ezra Ezra 3 is the third chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapter 1 to 6 describes the history before the arrival of Ezra in the land of Judah in 468 BCE. This chapter focuses on the people's worship and culminates in the project to rebuild the temple's foundations. Text. The original text is written in Hebrew language. This chapter is divided into 13 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: Ἔσδρας Αʹ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: Ἔσδρας Βʹ). 1 Esdras 5:47-65 is an equivalent of Ezra 3 (Feast of Tabernacles). The Altar (3:1–6). Before reestablishing legitimate worship at the temple, which still needed to be rebuilt, the people repaired the altar and performed the sacrifices according to the Torah. "And when the seventh month was come, and the children of Israel were in the cities, the people gathered themselves together as one man to Jerusalem." Verse 1. "The seventh month", Tishrei, follows the liturgical calendar of Israel (cf. ; ; ; –; , which begins in the first month when the Passover is celebrated. Three central feasts are celebrated in the seventh month, making it the “preeminent month” in the calendar. The seventh month of the first year of the return of the exiles corresponds to September/October 537 BC. "Then Jeshua the son of Jozadak and his brethren the priests, and Zerubbabel the son of Shealtiel and his brethren, arose and built the altar of the God of Israel, to offer burnt offerings on it, as it is written in the Law of Moses the man of God." "Though fear had come upon them because of the people of those countries, they set the altar on its bases; and they offered burnt offerings on it to the Lord, both the morning and evening burnt offerings." Verse 3. The morning and evening burnt offerings are those prescribed in and . The Temple (3:7–13). After reintroduced worship at the former site of altar (in Solomon's Temple), the building of a new temple is initiated. Both the building of the altar and the foundation of the temple showed similarities to the one of the first temple, such as the importation of cedars from Lebanon () and the start of the project in the second month (which could be the appropriate time in early spring; cf. ). When the foundation of temple was laid, the people responded in different ways: the older ones who had seen the first temple wept loudly, while the younger ones gave a great shout of praise to God. 
"They gave money to the masons and carpenters, and food, drink, and oil to the people of Sidon and to the people of Tyre so that they would bring cedar trees from Lebanon to the sea, at Joppa, according to the grant they had from Cyrus king of Persia." Verse 7. The laborers and materials for the temple came from Sidon and Tyre in Lebanon, closely repeating those of the Solomon's temple (; ). "And when the builders laid the foundation of the temple of the Lord, they set the priests in their apparel with trumpets, and the Levites the sons of Asaph with cymbals, to praise the Lord, after the ordinance of David king of Israel." "And they sang responsively, praising and giving thanks to the Lord:" "For He is good," "For His mercy endures forever toward Israel." "Then all the people shouted with a great shout, when they praised the Lord, because the foundation of the house of the Lord was laid." Verse 11. The same song was sung at the dedication of the first temple (Solomon's temple) over four centuries earlier (). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62203368
62205214
Ezra 4
A chapter in the Book of Ezra Ezra 4 is the fourth chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapter 1 to 6 describes the history before the arrival of Ezra in the land of Judah in 468 BCE. This chapter records the opposition of the non-Jews to the re-building of the temple and their correspondence with the kings of Persia which brought a stop to the project until the reign of Darius the Great. Text. This chapter is divided into 24 verses. The original language of 4:1–7 is Hebrew language, whereas of Ezra 4:8–24 is Aramaic. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew/Aramaic are of the Masoretic Text, which includes Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls, that is, 4Q117 (4QEzra; 50 BCE) with extant verses 2–6 (2–5 // 1 Esdras 5:66–70), 9–11. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 5:66–73 is an equivalent of Ezra 4:1–5 (Work hindered until second year of Darius's reign), whereas 1 Esdras 2:15–26 is an equivalent of Ezra 4:7–24 (Artaxerxes' reign). The oldest Latin manuscript of 4 Esdra is the Codex Sangermanensis that lacks 7:[36]–[105] and is parent of the vast majority of extant manuscripts. Other Latin manuscripts are: An offer of help (4:1–5). The non-Jewish inhabitants of the land of Judah offered to help with the building, but regarding it as a 'proposal of compromise', the leaders of Judah rejected the offer. Due to the rejection, the surrounding inhabitants mounted opposition to the building project. "1 Now when the adversaries of Judah and Benjamin heard that the descendants of the captivity built the temple unto the Lord God of Israel, 2 they came to Zerubbabel, and to the chiefs of the fathers' households, and said to them, "Let us build with you, for, like you, we seek your God and have been sacrificing to Him since the days of Esarhaddon king of Assyria, who brought us here."" Verses 1–2. The enemies of the exiles try to destroy that community by assimilation, pointing out important similarities among their peoples (verse 2), wanting the exiles to be entirely like them, but the enemies don't have allegiance to Yahweh and assimilation for the exiles would have meant destruction of the covenant with God. The reference to the Assyrian king recalls the story in that after the fall of Samaria in 721 BC, the genuine Israel inhabitants of the northern kingdom were deported elsewhere and the Assyrians planted people from other places (bringing their own gods; cf. ) to the region of Samaria, initiated by Sargon (722–705 BC), but from this verse apparently extended to the reign of Esarhaddon (681–669 BC). 
"But Zerubbabel and Jeshua and the rest of the heads of the fathers' houses of Israel said to them, "You may do nothing with us to build a house for our God; but we alone will build to the Lord God of Israel, as King Cyrus the king of Persia has commanded us."" Verse 3. The rejection of Zerubbabel was based on "spiritual insight". "4 Then the people of the land demoralized the people of Judah and terrified them while building, 5 and hired counselors against them to frustrate their purpose, all the days of Cyrus king of Persia, even until the reign of Darius king of Persia." Historical divergence (4:6–23). The story of Zerubbabel was interrupted by the list of some accounts of hostilities which happened in a long period of time to illustrate the continuous opposition by non-Jews of the area to the attempts of the Jews to establish a community under the law of God. "And in the reign of Ahasuerus, in the beginning of his reign, they wrote an accusation against the inhabitants of Judah and Jerusalem." "In the days of Artaxerxes also, Bishlam, Mithredath, Tabel, and the rest of their companions wrote to Artaxerxes king of Persia; and the letter was written in Aramaic script, and translated into the Aramaic language." "9 then Rehum the chancellor, Shimshai the scribe, and the rest of their companions, the Dinaites, and the Apharsathchites, the Tarpelites, the Apharsites, the Archevites, the Babylonians, the Shushanchites, the Dehaites, the Elamites, 10 and the rest of the nations whom the great and noble Osnappar brought over, and set in the city of Samaria, and in the rest of the country beyond the River, and so forth, wrote." The story resumed (4:24). With the repetition of the essence in verse 5, the story of Zerubbabel and Jeshua resumes, continued to the next chapter. "Then ceased the work of the house of God which is at Jerusalem. So it ceased unto the second year of the reign of Darius king of Persia." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62205214
622053
S-matrix
Scattering amplitude In physics, the "S"-matrix or scattering matrix relates the initial state and the final state of a physical system undergoing a scattering process. It is used in quantum mechanics, scattering theory and quantum field theory (QFT). More formally, in the context of QFT, the "S"-matrix is defined as the unitary matrix connecting sets of asymptotically free particle states (the "in-states" and the "out-states") in the Hilbert space of physical states. A multi-particle state is said to be "free" (or non-interacting) if it transforms under Lorentz transformations as a tensor product, or "direct product" in physics parlance, of "one-particle states" as prescribed by equation (1) below. "Asymptotically free" then means that the state has this appearance in either the distant past or the distant future. While the "S"-matrix may be defined for any background (spacetime) that is asymptotically solvable and has no event horizons, it has a simple form in the case of the Minkowski space. In this special case, the Hilbert space is a space of irreducible unitary representations of the inhomogeneous Lorentz group (the Poincaré group); the "S"-matrix is the evolution operator between formula_0 (the distant past), and formula_1 (the distant future). It is defined only in the limit of zero energy density (or infinite particle separation distance). It can be shown that if a quantum field theory in Minkowski space has a mass gap, the state in the asymptotic past and in the asymptotic future are both described by Fock spaces. History. The initial elements of "S"-matrix theory are found in Paul Dirac's 1927 paper "Über die Quantenmechanik der Stoßvorgänge". The "S"-matrix was first properly introduced by John Archibald Wheeler in the 1937 paper "On the Mathematical Description of Light Nuclei by the Method of Resonating Group Structure". In this paper Wheeler introduced a "scattering matrix" – a unitary matrix of coefficients connecting "the asymptotic behaviour of an arbitrary particular solution [of the integral equations] with that of solutions of a standard form", but did not develop it fully. In the 1940s, Werner Heisenberg independently developed and substantiated the idea of the "S"-matrix. Because of the problematic divergences present in quantum field theory at that time, Heisenberg was motivated to isolate the "essential features of the theory" that would not be affected by future changes as the theory developed. In doing so, he was led to introduce a unitary "characteristic" "S"-matrix. Today, however, exact "S"-matrix results are important for conformal field theory, integrable systems, and several further areas of quantum field theory and string theory. "S"-matrices are not substitutes for a field-theoretic treatment, but rather, complement the end results of such. Motivation. In high-energy particle physics one is interested in computing the probability for different outcomes in scattering experiments. These experiments can be broken down into three stages: The process by which the incoming particles are transformed (through their interaction) into the outgoing particles is called scattering. For particle physics, a physical theory of these processes must be able to compute the probability for different outgoing particles when different incoming particles collide with different energies. The "S"-matrix in quantum field theory achieves exactly this. It is assumed that the small-energy-density approximation is valid in these cases. Use. 
The "S"-matrix is closely related to the transition probability amplitude in quantum mechanics and to cross sections of various interactions; the elements (individual numerical entries) in the "S"-matrix are known as scattering amplitudes. Poles of the "S"-matrix in the complex-energy plane are identified with bound states, virtual states or resonances. Branch cuts of the "S"-matrix in the complex-energy plane are associated to the opening of a scattering channel. In the Hamiltonian approach to quantum field theory, the "S"-matrix may be calculated as a time-ordered exponential of the integrated Hamiltonian in the interaction picture; it may also be expressed using Feynman's path integrals. In both cases, the perturbative calculation of the "S"-matrix leads to Feynman diagrams. In scattering theory, the "S"-matrix is an operator mapping free particle "in-states" to free particle "out-states" (scattering channels) in the Heisenberg picture. This is very useful because often we cannot describe the interaction (at least, not the most interesting ones) exactly. In one-dimensional quantum mechanics. A simple prototype in which the "S"-matrix is 2-dimensional is considered first, for the purposes of illustration. In it, particles with sharp energy "E" scatter from a localized potential "V" according to the rules of 1-dimensional quantum mechanics. Already this simple model displays some features of more general cases, but is easier to handle. Each energy "E" yields a matrix "S" = "S"("E") that depends on "V". Thus, the total "S"-matrix could, figuratively speaking, be visualized, in a suitable basis, as a "continuous matrix" with every element zero except for 2 × 2-blocks along the diagonal for a given "V". Definition. Consider a localized one dimensional potential barrier "V"("x"), subjected to a beam of quantum particles with energy "E". These particles are incident on the potential barrier from left to right. The solutions of Schrödinger's equation outside the potential barrier are plane waves given by formula_2 for the region to the left of the potential barrier, and formula_3 for the region to the right to the potential barrier, where formula_4 is the wave vector. The time dependence is not needed in our overview and is hence omitted. The term with coefficient "A" represents the incoming wave, whereas term with coefficient "C" represents the outgoing wave. "B" stands for the reflecting wave. Since we set the incoming wave moving in the positive direction (coming from the left), "D" is zero and can be omitted. The "scattering amplitude", i.e., the transition overlap of the outgoing waves with the incoming waves is a linear relation defining the "S"-matrix, formula_5 The above relation can be written as formula_6 where formula_7 The elements of "S" completely characterize the scattering properties of the potential barrier "V"("x"). Unitary property. The unitary property of the "S"-matrix is directly related to the conservation of the probability current in quantum mechanics. The probability current density J of the wave function "ψ"("x") is defined as formula_8 The probability current density formula_9 of formula_10 to the left of the barrier is formula_11 while the probability current density formula_12 of formula_13 to the right of the barrier is formula_14 For conservation of the probability current, "J"L = "J"R. This implies the "S"-matrix is a unitary matrix. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof formula_15 Time-reversal symmetry. 
If the potential "V"("x") is real, then the system possesses time-reversal symmetry. Under this condition, if "ψ"("x") is a solution of Schrödinger's equation, then "ψ"*("x") is also a solution. The time-reversed solution is given by formula_16 for the region to the left to the potential barrier, and formula_17 for the region to the right to the potential barrier, where the terms with coefficient "B"*, "C"* represent incoming wave, and terms with coefficient "A"*, "D"* represent outgoing wave. They are again related by the "S"-matrix, formula_18 that is, formula_19 Now, the relations formula_20 together yield a condition formula_21 This condition, in conjunction with the unitarity relation, implies that the "S"-matrix is symmetric, as a result of time reversal symmetry, formula_22 By combining the symmetry and the unitarity, the S-matrix can be expressed in the form: formula_23 with formula_24 and formula_25. So the S-matrix is determined by three real parameters. Transfer matrix. The "transfer matrix" formula_26 relates the plane waves formula_27 and formula_28 on the "right" side of scattering potential to the plane waves formula_29 and formula_30 on the "left" side: formula_31 and its components can be derived from the components of the S-matrix via: formula_32 and formula_33, whereby time-reversal symmetry is assumed. In the case of time-reversal symmetry, the transfer matrix formula_34 can be expressed by three real parameters: formula_35 with formula_24 and formula_25 (in case "r" = 1 there would be no connection between the left and the right side) Finite square well. The one-dimensional, non-relativistic problem with time-reversal symmetry of a particle with mass m that approaches a (static) finite square "well", has the potential function "V" with formula_36 The scattering can be solved by decomposing the wave packet of the free particle into plane waves formula_37 with wave numbers formula_38 for a plane wave coming (faraway) from the left side or likewise formula_39 (faraway) from the right side. The S-matrix for the plane wave with wave number k has the solution: formula_40 and formula_41 ; hence formula_42 and therefore formula_43 and formula_44 in this case. Whereby formula_45 is the (increased) wave number of the plane wave inside the square well, as the energy eigenvalue formula_46 associated with the plane wave has to stay constant: formula_47 The transmission is formula_48 In the case of formula_49 then formula_50 and therefore formula_51 and formula_52 i.e. a plane wave with wave number k passes the well without reflection if formula_53 for a formula_54 Finite square barrier. The square "barrier" is similar to the square well with the difference that formula_55 for formula_56. There are three different cases depending on the energy eigenvalue formula_57 of the plane waves (with wave numbers k resp. −"k") far away from the barrier: Transmission coefficient and reflection coefficient. The transmission coefficient from the left of the potential barrier is, when "D" = 0, formula_58 The reflection coefficient from the left of the potential barrier is, when "D" = 0, formula_59 Similarly, the transmission coefficient from the right of the potential barrier is, when "A" = 0, formula_60 The reflection coefficient from the right of the potential barrier is, when "A" = 0, formula_61 The relations between the transmission and reflection coefficients are formula_62 and formula_63 This identity is a consequence of the unitarity property of the "S"-matrix. 
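As a numerical check of the square-well formulas above (an illustrative sketch, not part of the original article; ħ = m = 1 and the particular values of V0, a and k are assumed), the following Python snippet builds the 2 × 2 S-matrix from the quoted expressions for S11 and S12, verifies that it is unitary, and confirms that the transmission reaches 1 when sin(2la) = 0:

```python
import numpy as np

hbar = m = 1.0            # units assumed for the example
V0, a = 2.0, 1.0          # well depth and half-width (assumed values)
k = 1.3                   # incoming wave number
l = np.sqrt(k**2 + 2.0 * m * V0 / hbar**2)   # wave number inside the well

# S-matrix elements of the finite square well, as quoted above.
S12 = np.exp(-2j * k * a) / (np.cos(2 * l * a)
                             - 1j * np.sin(2 * l * a) * (l**2 + k**2) / (2 * k * l))
S11 = S12 * 1j * np.sin(2 * l * a) * (l**2 - k**2) / (2 * k * l)
S = np.array([[S11, S12],
              [S12, S11]])

print(np.allclose(S.conj().T @ S, np.eye(2)))    # True: the S-matrix is unitary
print(abs(S12)**2, abs(S11)**2 + abs(S12)**2)    # transmission T < 1 here; T + R = 1

# Resonance: pick k so that 2*l*a = n*pi, giving full transmission |S12|^2 = 1.
n = 3
k_res = np.sqrt((n * np.pi / (2 * a))**2 - 2.0 * m * V0 / hbar**2)
l_res = np.sqrt(k_res**2 + 2.0 * m * V0 / hbar**2)
S12_res = np.exp(-2j * k_res * a) / (np.cos(2 * l_res * a)
                                     - 1j * np.sin(2 * l_res * a) * (l_res**2 + k_res**2) / (2 * k_res * l_res))
print(abs(S12_res)**2)                           # ~1.0: no reflection at resonance
```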
With time-reversal symmetry, the S-matrix is symmetric and hence formula_64 and formula_65. Optical theorem in one dimension. In the case of free particles "V"("x") = 0, the "S"-matrix is formula_66 Whenever "V"("x") is different from zero, however, there is a departure of the "S"-matrix from the above form, to formula_67 This departure is parameterized by two complex functions of energy, "r" and "t". From unitarity there also follows a relationship between these two functions, formula_68 The analogue of this identity in three dimensions is known as the optical theorem. Definition in quantum field theory. Interaction picture. A straightforward way to define the "S"-matrix begins with considering the interaction picture. Let the Hamiltonian "H" be split into the free part "H"0 and the interaction "V", "H" = "H"0 + "V". In this picture, the operators behave as free field operators and the state vectors have dynamics according to the interaction "V". Let formula_69 denote a state that has evolved from a free initial state formula_70 The "S"-matrix element is then defined as the projection of this state on the final state formula_71 Thus formula_72 where "S" is the S-operator. The great advantage of this definition is that the time-evolution operator U evolving a state in the interaction picture is formally known, formula_73 where T denotes the time-ordered product. Expressed in this operator, formula_74 from which formula_75 Expanding using the knowledge about "U" gives a Dyson series, formula_76 or, if V comes as a Hamiltonian density, formula_77 Being a special type of time-evolution operator, S is unitary. For any initial state and any final state one finds formula_78 This approach is somewhat naïve in that potential problems are swept under the carpet. This is intentional. The approach works in practice and some of the technical issues are addressed in the other sections. In and out states. Here a slightly more rigorous approach is taken in order to address potential problems that were disregarded in the interaction picture approach of above. The final outcome is, of course, the same as when taking the quicker route. For this, the notions of in and out states are needed. These will be developed in two ways, from vacua, and from free particle states. Needless to say, the two approaches are equivalent, but they illuminate matters from different angles. From vacua. If "a"†("k") is a creation operator, its hermitian adjoint is an annihilation operator and destroys the vacuum, formula_79 In Dirac notation, define formula_80 as a vacuum quantum state, i.e. a state without real particles. The asterisk signifies that not all vacua are necessarily equal, and certainly not equal to the Hilbert space zero state 0. All vacuum states are assumed Poincaré invariant, invariance under translations, rotations and boosts, formally, formula_81 where "P""μ" is the generator of translation in space and time, and "M""μν" is the generator of Lorentz transformations. Thus the description of the vacuum is independent of the frame of reference. Associated to the in and out states to be defined are the in and out field operators (aka fields) Φi and Φo. Attention is here focused to the simplest case, that of a scalar theory in order to exemplify with the least possible cluttering of the notation. The in and out fields satisfy formula_82 the free Klein–Gordon equation. 
These fields are postulated to have the same equal time commutation relations (ETCR) as the free fields, formula_83 where "π""i","j" is the field canonically conjugate to Φ"i","j". Associated to the in and out fields are two sets of creation and annihilation operators, "a"†i("k") and "a"†f ("k"), acting in the "same" Hilbert space, on two "distinct" complete sets (Fock spaces; initial space i, final space f). These operators satisfy the usual commutation rules, formula_84 The action of the creation operators on their respective vacua and states with a finite number of particles in the in and out states is given by formula_85 where issues of normalization have been ignored. See the next section for a detailed account on how a general state is normalized. The initial and final spaces are defined by formula_86 formula_87 The asymptotic states are assumed to have well defined Poincaré transformation properties, i.e. they are assumed to transform as a direct product of one-particle states. This is a characteristic of a non-interacting field. From this follows that the asymptotic states are all eigenstates of the momentum operator "Pμ", formula_88 In particular, they are eigenstates of the full Hamiltonian, formula_89 The vacuum is usually postulated to be stable and unique, formula_90 The interaction is assumed adiabatically turned on and off. Heisenberg picture. The Heisenberg picture is employed henceforth. In this picture, the states are time-independent. A Heisenberg state vector thus represents the complete spacetime history of a system of particles. The labeling of the in and out states refers to the asymptotic appearance. A state Ψ"α", in is characterized by that as "t" → −∞ the particle content is that represented collectively by α. Likewise, a state Ψ"β", out will have the particle content represented by β for "t" → +∞. Using the assumption that the in and out states, as well as the interacting states, inhabit the same Hilbert space and assuming completeness of the normalized in and out states (postulate of asymptotic completeness), the initial states can be expanded in a basis of final states (or vice versa). The explicit expression is given later after more notation and terminology has been introduced. The expansion coefficients are precisely the "S"-matrix elements to be defined below. While the state vectors are constant in time in the Heisenberg picture, the physical states they represent are "not". If a system is found to be in a state Ψ at time "t" = 0, then it will be found in the state "U"("τ")Ψ = "e"−"iHτ"Ψ at time "t" = "τ". This is not (necessarily) the same Heisenberg state vector, but it is an "equivalent" state vector, meaning that it will, upon measurement, be found to be one of the final states from the expansion with nonzero coefficient. Letting τ vary one sees that the observed Ψ (not measured) is indeed the Schrödinger picture state vector. By repeating the measurement sufficiently many times and averaging, one may say that the "same" state vector is indeed found at time "t" = τ as at time "t" = 0. This reflects the expansion above of an in state into out states. From free particle states. For this viewpoint, one should consider how the archetypical scattering experiment is performed. The initial particles are prepared in well defined states where they are so far apart that they don't interact. They are somehow made to interact, and the final particles are registered when they are so far apart that they have ceased to interact. 
The idea is to look for states in the Heisenberg picture that in the distant past had the appearance of free particle states. These will be the in states. Likewise, an out state will be a state that in the distant future has the appearance of a free particle state. The notation from the general reference for this section will be used. A general non-interacting multi-particle state is given by formula_91 where "p" labels the particles' momenta, "σ" their spin "z"-components (or helicities) and "n" their species. These states are normalized as formula_92 Permutations work as follows: if "s" ∈ "S""k" is a permutation of "k" objects (for a state) such that formula_93 then a nonzero term results. The sign is plus unless s involves an odd number of fermion transpositions, in which case it is minus. The notation is usually abbreviated letting one Greek letter stand for the whole collection describing the state. In abbreviated form the normalization becomes formula_94 When integrating over free-particle states one writes in this notation formula_95 where the sum includes only terms such that no two terms are equal modulo a permutation of the particle type indices. The sets of states sought for are supposed to be "complete". This is expressed as formula_96 which could be paraphrased as formula_97 where for each fixed α, the right hand side is a projection operator onto the state α. Under an inhomogeneous Lorentz transformation (Λ, "a"), the field transforms according to the rule where "W"(Λ, "p") is the Wigner rotation and "D"("j") is the representation of SO(3). By putting Λ = 1, "a" = ("τ", 0, 0, 0), for which "U" is exp("iHτ"), in (1), it immediately follows that formula_98 so the in and out states sought after are eigenstates of the full Hamiltonian that are necessarily non-interacting due to the absence of mixed particle energy terms. The discussion in the section above suggests that the in states Ψ+ and the out states Ψ− should be such that formula_99 for large positive and negative τ has the appearance of the corresponding packet, represented by "g", of free-particle states, "g" assumed smooth and suitably localized in momentum. Wave packets are necessary, else the time evolution will yield only a phase factor indicating free particles, which cannot be the case. The right hand side follows from the fact that the in and out states are eigenstates of the Hamiltonian per above. To formalize this requirement, assume that the full Hamiltonian H can be divided into two terms, a free-particle Hamiltonian "H"0 and an interaction V, "H" = "H"0 + "V", such that the eigenstates Φ"γ" of "H"0 have the same appearance as the in- and out-states with respect to normalization and Lorentz transformation properties, formula_100 formula_101 The in and out states are defined as eigenstates of the full Hamiltonian, formula_102 satisfying formula_103 for "τ" → −∞ or "τ" → +∞ respectively. Define formula_104 then formula_105 This last expression will work only using wave packets. From these definitions it follows that the in and out states are normalized in the same way as the free-particle states, formula_106 and the three sets are unitarily equivalent. Now rewrite the eigenvalue equation, formula_107 where the ±"iε" terms have been added to make the operator on the LHS invertible. Since the in and out states reduce to the free-particle states for "V" → 0, put formula_108 on the RHS to obtain formula_109 Then use the completeness of the free-particle states, formula_110 to finally obtain formula_111 Here "H"0 has been replaced by its eigenvalue on the free-particle states. This is the Lippmann–Schwinger equation.
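To make this concrete in the simplest setting (an illustrative sketch, not part of the original article), the elementary one-particle analogue of the Lippmann–Schwinger equation can be solved numerically for the square well of the earlier one-dimensional example. With ħ = m = 1 the outgoing free Green's function is G0(x, x′) = −(i/k) exp(ik|x − x′|); discretizing the integral equation on a grid and solving the resulting linear system reproduces the closed-form transmission probability (the well depth, width, wave number and grid size below are assumed values):

```python
import numpy as np

# 1D single-particle analogue of the Lippmann-Schwinger equation, hbar = m = 1:
#   psi(x) = exp(ikx) + \int G0(x, x') V(x') psi(x') dx',
# with the outgoing free Green's function G0(x, x') = -(i/k) exp(ik|x - x'|).
V0, a, k = 2.0, 1.0, 1.3
x, dx = np.linspace(-a, a, 1501, retstep=True)   # V vanishes outside [-a, a]
V = -V0 * np.ones_like(x)                        # attractive square well

G0 = -(1j / k) * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))
phi = np.exp(1j * k * x)                         # incoming plane wave

# Discretized integral equation: (I - G0 V dx) psi = phi.
psi = np.linalg.solve(np.eye(x.size) - G0 * V[None, :] * dx, phi)

# Transmission amplitude, from psi(x) -> t exp(ikx) for x > a.
t = 1.0 - (1j / k) * np.sum(np.exp(-1j * k * x) * V * psi) * dx

l = np.sqrt(k**2 + 2.0 * V0)
T_exact = 1.0 / (1.0 + (np.sin(2 * l * a) * (l**2 - k**2) / (2 * k * l))**2)
print(abs(t)**2, T_exact)    # the two transmission probabilities agree closely
```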
In states expressed as out states. The initial states can be expanded in a basis of final states (or vice versa). Using the completeness relation, formula_112 formula_113 where |"C""m"|2 is the probability that the interaction transforms formula_114 into formula_115 By the ordinary rules of quantum mechanics, formula_116 and one may write formula_117 The expansion coefficients are precisely the "S"-matrix elements to be defined below. The "S"-matrix. The "S"-matrix is now defined by formula_118 Here α and β are shorthands that represent the particle content but suppress the individual labels. Associated to the "S"-matrix there is the S-operator S defined by formula_119 where the Φ"γ" are free particle states. This definition conforms with the direct approach used in the interaction picture. Also, due to unitary equivalence, formula_120 As a physical requirement, S must be a unitary operator. This is a statement of conservation of probability in quantum field theory. But formula_121 By completeness then, formula_122 so "S" is the unitary transformation from in-states to out states. Lorentz invariance is another crucial requirement on the "S"-matrix. The S-operator represents the quantum canonical transformation of the initial "in" states to the final "out" states. Moreover, S leaves the vacuum state invariant and transforms "in"-space fields to "out"-space fields, formula_123 formula_124 In terms of creation and annihilation operators, this becomes formula_125 hence formula_126 A similar expression holds when S operates to the left on an out state. This means that the "S"-matrix can be expressed as formula_127 If S describes an interaction correctly, these properties must also be true: Evolution operator "U". Define a time-dependent creation and annihilation operator as follows, formula_128 so, for the fields, formula_129 where formula_130 We allow for a phase difference, given by formula_131 because for S, formula_132 Substituting the explicit expression for U, one has formula_133 where formula_134 is the interaction part of the Hamiltonian and formula_135 is the time ordering. By inspection, it can be seen that this formula is not explicitly covariant. Dyson series. The most widely used expression for the "S"-matrix is the Dyson series. This expresses the "S"-matrix operator as the series: formula_136 where formula_137 denotes the time-ordered product of the operators and formula_138 is the interaction Hamiltonian density. The not-"S"-matrix. Since the transformation of particles from black hole to Hawking radiation could not be described with an "S"-matrix, Stephen Hawking proposed a "not-"S"-matrix", for which he used the dollar sign ($), and which therefore was also called "dollar matrix". Remarks. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
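As a toy illustration of the time-ordered exponential and of truncating the Dyson series at first order (not part of the original article; the two-level system, the Gaussian switching of the interaction and all parameter values are invented for the example, and SciPy's matrix exponential is used), the S-operator for a transient interaction can be approximated by an ordered product of short-time factors:

```python
import numpy as np
from scipy.linalg import expm

# Invented two-level "system": H0 = diag(0, w), transient interaction V(t) = lam * f(t) * sigma_x.
w, lam, tau = 1.0, 0.2, 2.0
H0 = np.diag([0.0, w])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def V_I(t):
    """Interaction-picture interaction V_I(t) = exp(iH0 t) V(t) exp(-iH0 t)."""
    U0 = expm(1j * H0 * t)
    return lam * np.exp(-t**2 / (2 * tau**2)) * (U0 @ sx @ U0.conj().T)

# Time-ordered exponential, approximated by a product of short-time factors
# (later times act on the left).
ts, dt = np.linspace(-30.0, 30.0, 6001, retstep=True)
S = np.eye(2, dtype=complex)
for t in ts:
    S = expm(-1j * V_I(t) * dt) @ S

print(np.allclose(S.conj().T @ S, np.eye(2)))    # True: S is unitary

# Leading Dyson term: S ~ 1 - i * integral of V_I(t) dt, valid for weak coupling.
S1 = np.eye(2, dtype=complex) - 1j * sum(V_I(t) for t in ts) * dt
print(abs(S[1, 0]), abs(S1[1, 0]))               # transition amplitudes, close for small lam
```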
[ { "math_id": 0, "text": "t= - \\infty " }, { "math_id": 1, "text": "t= + \\infty " }, { "math_id": 2, "text": "\\psi_{\\rm L}(x)= A e^{ikx} + B e^{-ikx}" }, { "math_id": 3, "text": "\\psi_{\\rm R}(x)= C e^{ikx} + D e^{-ikx}" }, { "math_id": 4, "text": "k=\\sqrt{2m E/\\hbar^{2}}" }, { "math_id": 5, "text": "\\begin{pmatrix} B \\\\ C \\end{pmatrix} = \\begin{pmatrix} S_{11} & S_{12} \\\\ S_{21} & S_{22} \\end{pmatrix} \\begin{pmatrix} A \\\\ D \\end{pmatrix}." }, { "math_id": 6, "text": "\\Psi_{\\rm out}=S \\Psi_{\\rm in}" }, { "math_id": 7, "text": "\\Psi_{\\rm out}=\\begin{pmatrix}B \\\\ C \\end{pmatrix}, \\quad \\Psi_{\\rm in}=\\begin{pmatrix}A \\\\ D \\end{pmatrix}, \\qquad S=\\begin{pmatrix} S_{11} & S_{12} \\\\ S_{21} & S_{22} \\end{pmatrix}." }, { "math_id": 8, "text": " J = \\frac{\\hbar}{2mi}\\left(\\psi^* \\frac{\\partial \\psi }{\\partial x}- \\psi \\frac{\\partial \\psi^* }{\\partial x} \\right) ." }, { "math_id": 9, "text": "J_{\\rm L}(x)" }, { "math_id": 10, "text": "\\psi_{\\rm L}(x)" }, { "math_id": 11, "text": "J_{\\rm L}(x)=\\frac{\\hbar k}{m}\\left(|A|^2-|B|^2\\right)," }, { "math_id": 12, "text": "J_{\\rm R}(x)" }, { "math_id": 13, "text": "\\psi_{\\rm R}(x)" }, { "math_id": 14, "text": "J_{\\rm R}(x)=\\frac{\\hbar k}{m}\\left(|C|^2-|D|^2\\right)." }, { "math_id": 15, "text": "\n\\begin{align}\n & J_{\\rm L} = J_{\\rm R} \\\\\n & |A|^2-|B|^2=|C|^2-|D|^2 \\\\\n & |B|^2+|C|^2=|A|^2+|D|^2 \\\\\n & \\Psi_\\text{out}^\\dagger \\Psi_\\text{out}=\\Psi_\\text{in}^\\dagger \\Psi_\\text{in} \\\\\n & \\Psi_\\text{in}^\\dagger S^\\dagger S \\Psi_\\text{in}=\\Psi_\\text{in}^\\dagger \\Psi_\\text{in} \\\\\n & S^\\dagger S = I\\\\\n\\end{align}" }, { "math_id": 16, "text": "\\psi^*_{\\rm L}(x)= A^* e^{-ikx} + B^* e^{ikx}" }, { "math_id": 17, "text": "\\psi^*_{\\rm R}(x)= C^* e^{-ikx} + D^* e^{ikx}" }, { "math_id": 18, "text": "\\begin{pmatrix}A^* \\\\ D^* \\end{pmatrix} = \\begin{pmatrix} S_{11} & S_{12} \\\\ S_{21} & S_{22} \\end{pmatrix}\\begin{pmatrix} B^* \\\\ C^* \\end{pmatrix}\\," }, { "math_id": 19, "text": "\\Psi^*_{\\rm in}=S \\Psi^*_{\\rm out}." }, { "math_id": 20, "text": "\\Psi^*_{\\rm in} = S \\Psi^*_{\\rm out}, \\quad \\Psi_{\\rm out}=S \\Psi_{\\rm in}" }, { "math_id": 21, "text": "S^*S=I" }, { "math_id": 22, "text": "S^T=S." 
}, { "math_id": 23, "text": "\\begin{pmatrix} S_{11} & S_{12} \\\\ S_{21} & S_{22} \\end{pmatrix} = \\begin{pmatrix} e^{i\\varphi} e^{i\\delta} \\cdot r & e^{i\\varphi} \\sqrt{1-r^2} \\\\ e^{i\\varphi}\\sqrt{1-r^2} & -e^{i\\varphi} e^{-i\\delta} \\cdot r \\end{pmatrix} = e^{i\\varphi} \\begin{pmatrix} e^{i\\delta} \\cdot r & \\sqrt{1-r^2} \\\\ \\sqrt{1-r^2} & -e^{-i\\delta} \\cdot r \\end{pmatrix}" }, { "math_id": 24, "text": "\\delta,\\varphi \\in [0;2\\pi]" }, { "math_id": 25, "text": "r\\in [0;1]" }, { "math_id": 26, "text": "M" }, { "math_id": 27, "text": "C e^{ikx}" }, { "math_id": 28, "text": "D e^{-ikx}" }, { "math_id": 29, "text": "A e^{ikx}" }, { "math_id": 30, "text": "B e^{-ikx}" }, { "math_id": 31, "text": "\\begin{pmatrix}C \\\\ D \\end{pmatrix} = \\begin{pmatrix} M_{11} & M_{12} \\\\ M_{21} & M_{22} \\end{pmatrix}\\begin{pmatrix} A \\\\ B \\end{pmatrix}" }, { "math_id": 32, "text": "M_{11}=1/S_{12}^*= 1/S_{21} ^* {,}\\ M_{22}= M_{11}^*" }, { "math_id": 33, "text": "M_{12}=-S_{11}^*/S_{12}^* = S_{22}/S_{12} {,}\\ M_{21} = M_{12}^*" }, { "math_id": 34, "text": "\\mathbf{M}" }, { "math_id": 35, "text": "M = \\frac{1}{\\sqrt{1-r^2}} \\begin{pmatrix} e^{i\\varphi} & -r\\cdot e^{-i\\delta} \\\\ -r\\cdot e^{i\\delta} & e^{-i\\varphi} \\end{pmatrix}" }, { "math_id": 36, "text": "V(x) = \\begin{cases}\n -V_0 & \\text{for}~~ |x| \\le a ~~ (V_0 > 0) \\quad\\text{and}\\\\[1ex]\n 0 & \\text{for}~~ |x|>a\n\\end{cases}" }, { "math_id": 37, "text": "A_k\\exp(ikx)" }, { "math_id": 38, "text": "k>0" }, { "math_id": 39, "text": "D_k\\exp(-ikx)" }, { "math_id": 40, "text": "S_{12}=S_{21}=\\frac{\\exp(-2ika)}{\\cos(2la)-i\\sin(2la)\\frac{l^2+k^2}{2kl}}" }, { "math_id": 41, "text": "S_{11}=S_{12}\\cdot i\\sin(2la)\\frac{l^2-k^2}{2kl}" }, { "math_id": 42, "text": "e^{i\\delta}=\\pm i" }, { "math_id": 43, "text": "-e^{-i\\delta}=e^{i\\delta}" }, { "math_id": 44, "text": "S_{22}=S_{11}" }, { "math_id": 45, "text": "l = \\sqrt{k^2+\\frac{2mV_0}{\\hbar^2}}" }, { "math_id": 46, "text": "E_k" }, { "math_id": 47, "text": "E_k = \\frac{\\hbar^2 k^2}{2m}=\\frac{\\hbar^2 l^2}{2m}-V_0" }, { "math_id": 48, "text": "T_k = |S_{21}|^2=|S_{12}|^2=\\frac{1}{(\\cos(2la))^2+(\\sin(2la))^2\\frac{(l^2+k^2)^2}{4k^2l^2}}=\\frac{1}{1+(\\sin(2la))^2\\frac{(l^2-k^2)^2}{4 k^2 l^2}}" }, { "math_id": 49, "text": "\\sin(2la)=0" }, { "math_id": 50, "text": "\\cos(2la)=\\pm 1" }, { "math_id": 51, "text": "S_{11} = S_{22} = 0" }, { "math_id": 52, "text": "|S_{21}| = |S_{12}| = 1" }, { "math_id": 53, "text": "k^2+\\frac{2mV_0}{\\hbar^2}=\\frac{n^2 \\pi^2}{4a^2}" }, { "math_id": 54, "text": "n\\in\\mathbb{N}" }, { "math_id": 55, "text": "V(x)=+V_0 > 0" }, { "math_id": 56, "text": "|x|\\le a" }, { "math_id": 57, "text": "E_k=\\frac{\\hbar^2 k^2}{2m}" }, { "math_id": 58, "text": "T_{\\rm L}=\\frac{|C|^2}{|A|^2} = |S_{21}|^2. " }, { "math_id": 59, "text": "R_{\\rm L}=\\frac{|B|^2}{|A|^2}=|S_{11}|^2." }, { "math_id": 60, "text": "T_{\\rm R}=\\frac{|B|^2}{|D|^2}=|S_{12}|^2." }, { "math_id": 61, "text": "R_{\\rm R}=\\frac{|C|^2}{|D|^2}=|S_{22}|^2." }, { "math_id": 62, "text": "T_{\\rm L}+R_{\\rm L}=1" }, { "math_id": 63, "text": "T_{\\rm R}+R_{\\rm R}=1." }, { "math_id": 64, "text": "T_{\\rm L}=|S_{21}|^2=|S_{12}|^2=T_{\\rm R}" }, { "math_id": 65, "text": "R_{\\rm L} = R_{\\rm R}" }, { "math_id": 66, "text": " S=\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}." }, { "math_id": 67, "text": " S = \\begin{pmatrix} 2ir & 1+2it \\\\ 1+2it &2ir^* \\frac{1+2it}{1-2it^*} \\end{pmatrix}." 
}, { "math_id": 68, "text": "|r|^2+|t|^2 = \\operatorname{Im}(t)." }, { "math_id": 69, "text": "\\left|\\Psi(t)\\right\\rangle" }, { "math_id": 70, "text": "\\left|\\Phi_{\\rm i}\\right\\rangle." }, { "math_id": 71, "text": "\\left\\langle\\Phi_{\\rm f}\\right|." }, { "math_id": 72, "text": "S_{\\rm fi} \\equiv \\lim_{t \\rightarrow +\\infty} \\left\\langle\\Phi_{\\rm f}|\\Psi(t)\\right\\rangle \\equiv \\left\\langle\\Phi_{\\rm f}\\right|S\\left|\\Phi_{\\rm i}\\right\\rangle," }, { "math_id": 73, "text": "U(t, t_0) = Te^{-i\\int_{t_0}^t d\\tau V(\\tau)}," }, { "math_id": 74, "text": "S_{\\rm fi} = \\lim_{t_2 \\rightarrow +\\infty}\\lim_{t_1 \\rightarrow -\\infty}\\left\\langle\\Phi_{\\rm f}\\right|U(t_2, t_1)\\left|\\Phi_{\\rm i}\\right\\rangle," }, { "math_id": 75, "text": "S = U(\\infty, -\\infty)." }, { "math_id": 76, "text": "S = \\sum_{n=0}^\\infty \\frac{(-i)^n}{n!}\\int_{-\\infty}^\\infty dt_1\\cdots \\int_{-\\infty}^\\infty dt_n T\\left[V(t_1)\\cdots V(t_n)\\right]," }, { "math_id": 77, "text": "S = \\sum_{n=0}^\\infty \\frac{(-i)^n}{n!}\\int_{-\\infty}^\\infty dx_1^4\\cdots \\int_{-\\infty}^\\infty dx_n^4 T\\left[\\mathcal{H}(t_1)\\cdots \\mathcal{H}(t_n)\\right]." }, { "math_id": 78, "text": "S_{\\rm fi} = \\left\\langle\\Phi_{\\rm f}|S|\\Phi_{\\rm i}\\right\\rangle = \\left\\langle\\Phi_{\\rm f} \\left|\\sum_{n=0}^\\infty \\frac{(-i)^n}{n!}\\int_{-\\infty}^\\infty dx_1^4\\cdots \\int_{-\\infty}^\\infty dx_n^4 T\\left[\\mathcal{H}(t_1)\\cdots \\mathcal{H}(t_n)\\right]\\right| \\Phi_{\\rm i}\\right\\rangle ." }, { "math_id": 79, "text": "a(k)\\left |*, 0\\right\\rangle = 0." }, { "math_id": 80, "text": "|*, 0\\rangle" }, { "math_id": 81, "text": "P^\\mu |*, 0\\rangle = 0, \\quad M^{\\mu\\nu} |*, 0\\rangle = 0" }, { "math_id": 82, "text": "(\\Box^2 + m^2)\\phi_{\\rm i,o}(x) = 0," }, { "math_id": 83, "text": "\\begin{align}\n{[\\phi_{\\rm i,o}(x), \\pi_{\\rm i,o}(y)]}_{x_0 = y_0} &= i\\delta(\\mathbf{x} - \\mathbf{y}),\\\\\n{[\\phi_{\\rm i,o}(x), \\phi_{\\rm i,o}(y)]}_{x_0 = y_0} &= {[\\pi_{\\rm i,o}(x), \\pi_{\\rm i,o}(y)]}_{x_0 = y_0} = 0,\n\\end{align}" }, { "math_id": 84, "text": "\\begin{align}\n{[a_{\\rm i,o}(\\mathbf{p}), a^\\dagger_{\\rm i,o}(\\mathbf{p}')]} &= i\\delta(\\mathbf{p} - \\mathbf{p'}),\\\\\n{[a_{\\rm i,o}(\\mathbf{p}), a_{\\rm i,o}(\\mathbf{p'})]} &= {[a^\\dagger_{\\rm i,o}(\\mathbf{p}), a^\\dagger_{\\rm i,o}(\\mathbf{p'})]} = 0.\n\\end{align}" }, { "math_id": 85, "text": "\\begin{align}\n\\left| \\mathrm{i}, k_1\\ldots k_n \\right\\rangle &= a_i^\\dagger (k_1)\\cdots a_{\\rm i}^\\dagger (k_n)\\left| i, 0\\right\\rangle,\\\\\n\\left| \\mathrm{f}, p_1\\ldots p_n \\right\\rangle &= a_{\\rm f}^\\dagger (p_1)\\cdots a_f^\\dagger (p_n)\\left| f, 0\\right\\rangle,\n\\end{align}" }, { "math_id": 86, "text": "\\mathcal H_{\\rm i} = \\operatorname{span}\\{ \\left| \\mathrm{i}, k_1\\ldots k_n \\right\\rangle = a_{\\rm i}^\\dagger (k_1)\\cdots a_{\\rm i}^\\dagger (k_n)\\left| \\mathrm{i}, 0\\right\\rangle\\}," }, { "math_id": 87, "text": "\\mathcal H_{\\rm f} = \\operatorname{span}\\{ \\left| \\mathrm{f}, p_1\\ldots p_n \\right\\rangle = a_{\\rm f}^\\dagger (p_1)\\cdots a_{\\rm f}^\\dagger (p_n)\\left| \\mathrm{f}, 0\\right\\rangle\\}." }, { "math_id": 88, "text": "P^\\mu\\left| \\mathrm{i}, k_1\\ldots k_m \\right\\rangle = k_1^\\mu + \\cdots + k_m^\\mu\\left| \\mathrm{i}, k_1\\ldots k_m \\right\\rangle, \\quad P^\\mu\\left| \\mathrm{f}, p_1\\ldots p_n \\right\\rangle = p_1^\\mu + \\cdots + p_n^\\mu\\left| \\mathrm{f}, p_1\\ldots p_n \\right\\rangle." 
}, { "math_id": 89, "text": "H = P^0." }, { "math_id": 90, "text": "|\\mathrm{i}, 0\\rangle = |\\mathrm{f}, 0\\rangle = |*,0\\rangle \\equiv |0\\rangle." }, { "math_id": 91, "text": "\\Psi_{p_1\\sigma_1 n_1;p_2\\sigma_2 n_2;\\cdots}," }, { "math_id": 92, "text": "\\left(\\Psi_{p_1'\\sigma_1' n_1';p_2'\\sigma_2' n_2';\\cdots}, \\Psi_{p_1\\sigma_1 n_1;p_2\\sigma_2 n_2;\\cdots}\\right)\n=\\delta^3(\\mathbf{p}_1' - \\mathbf{p}_1)\\delta_{\\sigma_1'\\sigma_1}\\delta_{n_1'n_1}\n\\delta^3(\\mathbf{p}_2' - \\mathbf{p}_2)\\delta_{\\sigma_2'\\sigma_2}\\delta_{n_2'n_2}\\cdots \\quad \\pm \\text{ permutations}." }, { "math_id": 93, "text": "n_{s(i)}' = n_{i}, \\quad 1 \\le i \\le k," }, { "math_id": 94, "text": "\\left(\\Psi_{\\alpha'}, \\Psi_\\alpha\\right) = \\delta(\\alpha' - \\alpha)." }, { "math_id": 95, "text": " d\\alpha\\cdots \\equiv \\sum_{n_1\\sigma_1n_2\\sigma_2\\cdots} \\int d^3p_1 d^3p_2 \\cdots," }, { "math_id": 96, "text": "\\Psi = \\int d\\alpha \\ \\Psi_\\alpha\\left(\\Psi_\\alpha, \\Psi\\right)," }, { "math_id": 97, "text": "\\int d\\alpha \\ \\left|\\Psi_\\alpha\\right\\rangle\\left\\langle\\Psi_\\alpha\\right| = 1," }, { "math_id": 98, "text": "H\\Psi = E_\\alpha\\Psi, \\quad E_\\alpha = p_1^0 + p_2^0 + \\cdots ," }, { "math_id": 99, "text": "e^{-iH\\tau}\\int d\\alpha\\ g(\\alpha)\\Psi_\\alpha^\\pm = \\int d\\alpha\\ e^{-iE_\\alpha \\tau}g(\\alpha)\\Psi_\\alpha^\\pm" }, { "math_id": 100, "text": "H_0\\Phi_\\alpha = E_\\alpha\\Phi_\\alpha," }, { "math_id": 101, "text": "(\\Phi_\\alpha', \\Phi_\\alpha) = \\delta(\\alpha' - \\alpha)." }, { "math_id": 102, "text": "H\\Psi_\\alpha^\\pm = E_\\alpha\\Psi_\\alpha^\\pm," }, { "math_id": 103, "text": "e^{-iH\\tau} \\int d\\alpha \\ g(\\alpha) \\Psi_\\alpha^\\pm \\rightarrow e^{-iH_0\\tau}\\int d\\alpha \\ g(\\alpha) \\Phi_\\alpha." }, { "math_id": 104, "text": "\\Omega(\\tau) \\equiv e^{+iH\\tau}e^{-iH_0\\tau}," }, { "math_id": 105, "text": "\\Psi_\\alpha^\\pm = \\Omega(\\mp \\infty)\\Phi_\\alpha." }, { "math_id": 106, "text": "(\\Psi_\\beta^+, \\Psi_\\alpha^+) = (\\Phi_\\beta, \\Phi_\\alpha) = (\\Psi_\\beta^-, \\Psi_\\alpha^-) = \\delta(\\beta - \\alpha)," }, { "math_id": 107, "text": "(E_\\alpha - H_0 \\pm i\\epsilon)\\Psi_\\alpha^\\pm = \\pm i\\epsilon\\Psi_\\alpha^\\pm + V\\Psi_\\alpha^\\pm," }, { "math_id": 108, "text": "i\\epsilon\\Psi_\\alpha^\\pm = i\\epsilon\\Phi_\\alpha" }, { "math_id": 109, "text": "\\Psi_\\alpha^\\pm = \\Phi_\\alpha + (E_\\alpha - H_0 \\pm i\\epsilon)^{-1}V\\Psi_\\alpha^\\pm." }, { "math_id": 110, "text": "V\\Psi_\\alpha^\\pm = \\int d\\beta \\ (\\Phi_\\beta, V\\Psi_\\alpha^\\pm)\\Phi_\\beta \\equiv \\int d\\beta \\ T_{\\beta\\alpha}^\\pm\\Phi_\\beta," }, { "math_id": 111, "text": "\\Psi_\\alpha^\\pm = \\Phi_\\alpha + \\int d\\beta \\ \\frac{T_{\\beta\\alpha}^\\pm\\Phi_\\beta}{E_\\alpha - E_\\beta \\pm i\\epsilon}." 
}, { "math_id": 112, "text": "\\Psi_\\alpha^- = \\int d\\beta (\\Psi_\\beta^+,\\Psi_\\alpha^-)\\Psi_\\beta^+ = \n\\int d\\beta |\\Psi_\\beta^+\\rangle\\langle\\Psi_\\beta^+|\\Psi_\\alpha^-\\rangle = \n\\sum_{n_1\\sigma_1n_2\\sigma_2\\cdots} \\int d^3p_1d^3p_2\\cdots(\\Psi_\\beta^+,\\Psi_\\alpha^-)\\Psi_\\beta^+ ," }, { "math_id": 113, "text": "\\Psi_\\alpha^- = \\left| \\mathrm{i}, k_1\\ldots k_n \\right\\rangle = C_0 \\left| \\mathrm{f}, 0\\right\\rangle\\ + \\sum_{m=1}^\\infty \\int{d^4p_1\\ldots d^4p_mC_m(p_1\\ldots p_m)\\left| \\mathrm{f}, p_1\\ldots p_m \\right\\rangle} ~," }, { "math_id": 114, "text": "\\left| \\mathrm{i}, k_1\\ldots k_n \\right\\rangle = \\Psi_\\alpha^-" }, { "math_id": 115, "text": "\\left| \\mathrm{f}, p_1\\ldots p_m \\right\\rangle = \\Psi_\\beta^+ ." }, { "math_id": 116, "text": "C_m(p_1\\ldots p_m) = \\left\\langle \\mathrm{f}, p_1\\ldots p_m \\right|\\mathrm{i}, k_1\\ldots k_n \\rangle = (\\Psi_\\beta^+,\\Psi_\\alpha^-)" }, { "math_id": 117, "text": "\\left| \\mathrm{i}, k_1\\ldots k_n \\right\\rangle = C_0 \\left| \\mathrm{f}, 0\\right\\rangle\\ + \\sum_{m=1}^\\infty \\int{d^4p_1\\ldots d^4p_m\n\\left| \\mathrm{f}, p_1\\ldots p_m \\right\\rangle}\\left\\langle \\mathrm{f}, p_1\\ldots p_m \\right|\\mathrm{i}, k_1\\ldots k_n \\rangle ~." }, { "math_id": 118, "text": "S_{\\beta\\alpha} = \\langle\\Psi_\\beta^-|\\Psi_\\alpha^+\\rangle = \\langle \\mathrm{f},\\beta| \\mathrm{i},\\alpha\\rangle, \\qquad |\\mathrm{f}, \\beta\\rangle \\in \\mathcal H_{\\rm f}, \\quad |\\mathrm{i}, \\alpha\\rangle \\in \\mathcal H_{\\rm i}." }, { "math_id": 119, "text": "\\langle\\Phi_\\beta|S|\\Phi_\\alpha\\rangle \\equiv S_{\\beta\\alpha}," }, { "math_id": 120, "text": "\\langle\\Psi_\\beta^+|S|\\Psi_\\alpha^+\\rangle = S_{\\beta\\alpha} = \\langle\\Psi_\\beta^-|S|\\Psi_\\alpha^-\\rangle." }, { "math_id": 121, "text": "\\langle\\Psi_\\beta^-|S|\\Psi_\\alpha^-\\rangle = S_{\\beta\\alpha} = \\langle\\Psi_\\beta^-|\\Psi_\\alpha^+\\rangle." }, { "math_id": 122, "text": "S|\\Psi_\\alpha^-\\rangle = |\\Psi_\\alpha^+\\rangle," }, { "math_id": 123, "text": "S\\left|0\\right\\rangle = \\left|0\\right\\rangle" }, { "math_id": 124, "text": "\\phi_\\mathrm{f}=S\\phi_\\mathrm{i} S^{-1} ~." }, { "math_id": 125, "text": "a_{\\rm f}(p)=Sa_{\\rm i}(p)S^{-1}, a_{\\rm f}^\\dagger(p)=Sa_{\\rm i}^\\dagger(p)S^{-1}," }, { "math_id": 126, "text": "\\begin{align}\nS|\\mathrm{i}, k_1, k_2, \\ldots, k_n\\rangle\n&= Sa_{\\rm i}^\\dagger(k_1)a_{\\rm i}^\\dagger(k_2) \\cdots a_{\\rm i}^\\dagger(k_n)|0\\rangle = \nSa_{\\rm i}^\\dagger(k_1)S^{-1}Sa_{\\rm i}^\\dagger(k_2)S^{-1} \\cdots Sa_{\\rm i}^\\dagger(k_n)S^{-1}S|0\\rangle \\\\[1ex]\n&=a_{\\rm o}^\\dagger(k_1)a_{\\rm o}^\\dagger(k_2) \\cdots a_{\\rm o}^\\dagger(k_n)S|0\\rangle\n=a_{\\rm o}^\\dagger(k_1)a_{\\rm o}^\\dagger(k_2) \\cdots a_{\\rm o}^\\dagger(k_n)|0\\rangle\n=|\\mathrm{o}, k_1, k_2, \\ldots, k_n\\rangle.\n\\end{align}" }, { "math_id": 127, "text": "S_{\\beta\\alpha} = \\langle \\mathrm{o}, \\beta|\\mathrm{i}, \\alpha \\rangle = \\langle \\mathrm{i}, \\beta|S|\\mathrm{i}, \\alpha \\rangle = \\langle \\mathrm{o}, \\beta|S|\\mathrm{o}, \\alpha \\rangle." 
}, { "math_id": 128, "text": "\\begin{align}\na^{\\dagger}{\\left(k,t\\right)} &= U^{-1}(t) \\, a^{\\dagger}_{\\rm i}{\\left(k\\right)} \\, U{\\left( t \\right)} \\\\[1ex]\n a{\\left(k,t\\right)} &= U^{-1}(t) \\, a_{\\rm i}{\\left(k\\right)} \\, U{\\left( t \\right)} \\, ,\n\\end{align}" }, { "math_id": 129, "text": "\\phi_{\\rm f}=U^{-1}(\\infty)\\phi_{\\rm i} U(\\infty)=S^{-1}\\phi_{\\rm i} S~," }, { "math_id": 130, "text": "S= e^{i\\alpha}\\, U(\\infty)." }, { "math_id": 131, "text": "e^{i\\alpha}=\\left\\langle 0|U(\\infty)|0\\right\\rangle^{-1} ~," }, { "math_id": 132, "text": "S\\left|0\\right\\rangle = \\left|0\\right\\rangle \\Longrightarrow \\left\\langle 0|S|0\\right\\rangle = \\left\\langle 0|0\\right\\rangle =1 ~." }, { "math_id": 133, "text": "S=\\frac{1}{\\left\\langle 0|U(\\infty)|0\\right\\rangle}\\mathcal T e^{-i\\int{d\\tau H_{\\rm{int}}(\\tau)}}~," }, { "math_id": 134, "text": " H_{\\rm{int}}" }, { "math_id": 135, "text": " \\mathcal T " }, { "math_id": 136, "text": "S = \\sum_{n=0}^\\infty \\frac{(-i)^n}{n!} \\int \\cdots \\int d^4x_1 d^4x_2 \\ldots d^4x_n T [ \\mathcal{H}_{\\rm{int}}(x_1) \\mathcal{H}_{\\rm{int}}(x_2) \\cdots \\mathcal{H}_{\\rm{int}}(x_n)] " }, { "math_id": 137, "text": "T[\\cdots]" }, { "math_id": 138, "text": "\\; \\mathcal{H}_{\\rm{int}}(x)" } ]
https://en.wikipedia.org/wiki?curid=622053
62207342
Multilevel regression with poststratification
Statistical regression technique Multilevel regression with poststratification (MRP) is a statistical technique used for correcting model estimates for known differences between a sample population (the population of the data one has), and a target population (a population one wishes to estimate for). The poststratification refers to the process of adjusting the estimates, essentially a weighted average of estimates from all possible combinations of attributes (for example age and sex). Each combination is sometimes called a "cell". The multilevel regression is the use of a multilevel model to smooth noisy estimates in the cells with too little data by using overall or nearby averages. One application is estimating preferences in sub-regions (e.g., states, individual constituencies) based on individual-level survey data gathered at other levels of aggregation (e.g., national surveys). Mathematical formulation. Following the MRP model description, assume formula_0 represents a single outcome measurement and the population mean value of formula_0, formula_1, is the target parameter of interest. In the underlying population, each individual, formula_2, belongs to one of formula_3 poststratification cells characterized by a unique set of covariates. The multilevel regression with poststratification model involves the following pair of steps: "MRP step 1 (multilevel regression)": The multilevel regression model specifies a linear predictor for the mean formula_1, or the logit transform of the mean in the case of a binary outcome, in poststratification cell formula_4, formula_5 where formula_6 is the "outcome measurement" for respondent formula_2 in cell formula_4, formula_7 is the "fixed intercept", formula_8 is the unique "covariate vector" for cell formula_4, formula_9 is a vector of regression coefficients ("fixed effects"), formula_10 is the varying coefficient ("random effect"), formula_11 maps the formula_4 cell index to the corresponding category index formula_12 of variable formula_13. All varying coefficients are exchangeable batches with independent normal "prior distributions" formula_14. "MRP step 2: poststratification": The poststratification (PS) estimate for the population parameter of interest is formula_15 where formula_16 is the estimated outcome of interest for poststratification cell formula_4 and "N""j" is the size of the formula_4-th poststratification cell in the population. Estimates at any subpopulation level formula_17 are derived similarly: formula_18 where formula_19 is the subset of all poststratification cells that comprise formula_17. The technique and its advantages. The technique essentially involves using data from, for example, censuses relating to various types of people corresponding to different characteristics (e.g., age, race), in a first step to estimate the relationship between those types and individual preferences (i.e., multi-level regression of the dataset). This relationship is then used in a second step to estimate the sub-regional preference based on the number of people having each type/characteristic in that sub-region (a process known as "poststratification"). In this way the need to perform surveys at sub-regional level, which can be expensive and impractical in an area (e.g., a country) with many sub-regions (e.g. counties, ridings, or states), is avoided. It also avoids issues with survey consistency when comparing different surveys performed in different areas.
Additionally, it allows preferences within a specific locality to be estimated from a survey taken across a wider area that includes relatively few people from the locality in question, or where the sample may be highly unrepresentative. History. The technique was originally developed by Gelman and T. Little in 1997, building upon ideas of Fay and Herriot and R. Little. It was subsequently expanded on by Park, Gelman, and Bafumi in 2004 and 2006. It was proposed for use in estimating US-state-level voter preference by Lax and Philips in 2009. Warshaw and Rodden subsequently proposed it for use in estimating district-level public opinion in 2012. Later, Wang et al. used survey data of Xbox users to predict the outcome of the 2012 US presidential election. The Xbox gamers were 65% 18- to 29-year-olds and 93% male, while the electorate as a whole was 19% 18- to 29-year-olds and 47% male. Even though the original data was highly biased, after multilevel regression with poststratification the authors were able to obtain estimates that agreed with those coming from polls using large amounts of random and representative data. Since then it has also been proposed for use in the field of epidemiology. YouGov used the technique to successfully predict the overall outcome of the 2017 UK general election, correctly predicting the result in 93% of constituencies. In the 2019 and 2024 elections, other pollsters, including Survation and Ipsos, used MRP. Limitations and extensions. MRP can be extended to estimating the change of opinion over time, and when used to predict elections it works best relatively close to the polling date, after nominations have closed. Both the "multilevel regression" and "poststratification" ideas of MRP can be generalized. Multilevel regression can be replaced by nonparametric regression or regularized prediction, and poststratification can be generalized to allow for non-census variables, i.e. poststratification totals that are estimated rather than being known. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
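As an end-to-end sketch of the two steps above (not part of the original article; all variable names, cell definitions, counts and the shrinkage constant are invented, and the multilevel regression of step 1 is replaced by a simple partial-pooling estimate of the cell means so that the example stays dependency-light):

```python
import numpy as np
import pandas as pd

# Synthetic, deliberately biased survey: cells are age-group x sex (all values invented).
rng = np.random.default_rng(1)
n = 400
survey = pd.DataFrame({
    "age": rng.choice(["18-29", "30-49", "50+"], size=n, p=[0.6, 0.3, 0.1]),
    "sex": rng.choice(["m", "f"], size=n, p=[0.8, 0.2]),
})
p_true = {"18-29": 0.7, "30-49": 0.5, "50+": 0.3}        # invented cell-level preferences
survey["y"] = rng.binomial(1, survey["age"].map(p_true).to_numpy())

# Step 1 (stand-in for the multilevel regression): partially pooled cell means,
# shrinking small cells toward the overall mean.
overall = survey["y"].mean()
cells = survey.groupby(["age", "sex"])["y"].agg(["mean", "size"]).reset_index()
m = 20.0                                                 # assumed strength of pooling
cells["mu_hat"] = (cells["size"] * cells["mean"] + m * overall) / (cells["size"] + m)

# Step 2 (poststratification): weight cell estimates by known population cell sizes N_j.
population = pd.DataFrame({
    "age": ["18-29", "30-49", "50+"] * 2,
    "sex": ["m"] * 3 + ["f"] * 3,
    "N":   [120, 260, 230, 115, 270, 255],               # e.g. census counts (invented)
})
merged = population.merge(cells[["age", "sex", "mu_hat"]], on=["age", "sex"], how="left")
merged["mu_hat"] = merged["mu_hat"].fillna(overall)      # empty cells fall back to the overall mean

print("raw survey mean:", survey["y"].mean())
print("MRP-style estimate:", (merged["N"] * merged["mu_hat"]).sum() / merged["N"].sum())
```

The raw survey mean over-weights the heavily sampled cells, while the poststratified estimate re-weights the smoothed cell estimates by the population cell sizes, as in the formula above.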
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "\\mu_Y" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j = 1,2, \\cdots, J" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "g{\\left({\\mathrm\\mu}_j\\right)}=g{\\left(E{\\left[Y_{j{\\lbrack i\\rbrack}}\\right]}\\right)}={\\mathrm\\beta}_0+\\boldsymbol X_j^T\\mathbf\\beta+\\sum_{k=1}^Ka_{l{\\lbrack j\\rbrack}}^k, " }, { "math_id": 6, "text": "Y_{j\\lbrack i\\rbrack}" }, { "math_id": 7, "text": "\\beta_0" }, { "math_id": 8, "text": "\\boldsymbol X_j" }, { "math_id": 9, "text": "{\\mathrm\\beta}" }, { "math_id": 10, "text": "a_{l{\\lbrack j\\rbrack}}^k" }, { "math_id": 11, "text": "l{\\lbrack j\\rbrack}" }, { "math_id": 12, "text": "l" }, { "math_id": 13, "text": "k\\in\\{1,2, \\cdots, K\\}" }, { "math_id": 14, "text": "a_l^k\\sim \\mathrm N\\left(0,\\mathrm\\sigma_k^2\\right),\\ l\\in\\{1,\\dots,L_k\\}" }, { "math_id": 15, "text": "\\hat{\\mu}^{PS} = \\frac{\\sum_{j=1}^{J} N_j \\hat{\\mu}_j}{\\sum_{i=1}^J N_j}" }, { "math_id": 16, "text": "\\hat{\\mu}_j" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "\\hat{\\mu}^{PS}_s = \\frac{\\sum_{j=1}^{J_s} N_j \\hat{\\mu}_j}{\\sum_{i=1}^{J_s} N_j}" }, { "math_id": 19, "text": "J_s" } ]
https://en.wikipedia.org/wiki?curid=62207342
62208717
Halanay inequality
Theorem in mathematics Halanay inequality is a comparison theorem for differential equations with delay. This inequality and its generalizations have been applied to analyze the stability of delayed differential equations, and in particular, the stability of industrial processes with dead-time and delayed neural networks. Statement. Let formula_0 be a real number and formula_1 be a non-negative number. If formula_2 satisfies formula_3 where formula_4 and formula_5 are constants with formula_6, then formula_7 where formula_8 and formula_9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
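As an illustration (not part of the cited literature), the following Python sketch simulates the boundary case of the delay inequality with the forward Euler method and compares it against an exponential bound. The parameter values alpha, beta, tau are arbitrary choices with alpha > beta > 0, and the decay rate eta is taken as the positive root of eta = alpha - beta*exp(eta*tau), a common choice in the literature rather than something stated in the text above:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: simulate the boundary case of the Halanay inequality,
#   v'(t) = -alpha*v(t) + beta*max_{s in [t-tau, t]} v(s),
# with forward Euler, and compare v against the bound k*exp(-eta*(t - t0)).
# k is the maximum of the (constant) initial history.

alpha, beta, tau = 2.0, 1.0, 0.5            # arbitrary, with alpha > beta > 0
eta = brentq(lambda e: e - alpha + beta * np.exp(e * tau), 0.0, alpha)

dt, T = 1e-3, 8.0
n_hist = int(round(tau / dt))               # number of grid points in one delay
t = np.arange(-tau, T + dt, dt)
v = np.empty_like(t)
v[: n_hist + 1] = 1.0                       # constant history on [t0 - tau, t0]
k = v[: n_hist + 1].max()

for i in range(n_hist, len(t) - 1):         # explicit Euler step
    delayed_max = v[i - n_hist : i + 1].max()
    v[i + 1] = v[i] + dt * (-alpha * v[i] + beta * delayed_max)

bound = k * np.exp(-eta * np.clip(t, 0.0, None))
ratio = (v[n_hist:] / bound[n_hist:]).max()
print(f"eta = {eta:.4f},  max of v(t)/bound for t >= t0: {ratio:.6f}")
```

A maximum ratio at or below one is consistent with the exponential decay estimate stated in the theorem.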
[ { "math_id": 0, "text": "t_{0}" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "v: [t_{0}-\\tau, \\infty) \\rightarrow \\mathbb{R}^{+}" }, { "math_id": 3, "text": "\\frac{d}{dt} v(t) \\leq-\\alpha v(t)+\\beta\\left[\\sup _{s \\in[t-\\tau, t]} v(s)\\right], t \\geq t_{0} " }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\alpha>\\beta>0" }, { "math_id": 7, "text": "v(t) \\leq k e^{-\\eta\\left(t-t_{0}\\right)}, t \\geq t_{0}" }, { "math_id": 8, "text": "k>0" }, { "math_id": 9, "text": "\\eta>0" } ]
https://en.wikipedia.org/wiki?curid=62208717
62209935
Ezra 5
A chapter in the Book of Ezra Ezra 5 is the fifth chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapter 1 to 6 describes the history before the arrival of Ezra to the land of Judah in 468 BCE. This chapter records the contribution of the prophets Haggai and Zechariah to the temple building project and the investigation by Persian officials. Text. This chapter is divided into 17 verses. The original text of this chapter is written in Aramaic. Textual witnesses. Some early manuscripts containing the text of this chapter in Aramaic are of the Masoretic Text, which includes Codex Leningradensis (1008). A fragment containing a part of this chapter in Hebrew was found among the Dead Sea Scrolls, that is, 4Q117 (4QEzra; 50 BCE) with the extant verse 17 (= 1 Esdras 6:20). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 6:1–22 is an equivalent of Ezra 5 (The second year of Darius's reign). Renewed effort (5:1–2). Through the prophets Haggai and Zechariah, God sent the message of inspiration so the people began the repair of temple again "Then the prophets, Haggai the prophet, and Zechariah the son of Iddo, prophesied unto the Jews that were in Judah and Jerusalem in the name of the God of Israel, even unto them." Verse 1. The prophecies of Haggai and Zechariah are recorded in the Hebrew Bible in the Book of Haggai and Book of Zechariah respectively. Haggai's prophecy period completely covers the time mentioned here (; 520 BC), whereas Zechariah's only partly. "Then rose up Zerubbabel the son of Shealtiel, and Jeshua the son of Jozadak, and began to build the house of God which is at Jerusalem: and with them were the prophets of God helping them." The investigation (5:3–17). Based on the complaint of the non-Jews, the governor of the area began an investigation into the building project, interviewing the Jewish leaders and sending an inquiry to Darius, the king of Persia. ’’At the same time Tattenai, the governor beyond the River came to them, with Shetharbozenai, and their companions, and asked them, "Who gave you a decree to build this house, and to finish this wall?"" "The copy of the letter that Tattenai, the governor beyond the River, and Shetharbozenai, and his companions the Apharsachites, who were beyond the River, sent to Darius the king follows." "Be it known to the king that we went into the province of Judah, to the house of the great God, which is built with great stones, and timber is laid in the walls. This work goes on with diligence and prospers in their hands." 
"And thus they returned us an answer, saying: “We are the servants of the God of heaven and earth, and we are rebuilding the temple that was built many years ago, which a great king of Israel built and completed." Verse 11. The "great king of Israel" was Solomon. The conventional dates of Solomon's reign are about 970 to 931 BCE. The Jewish historian Josephus says that "the temple was burnt four hundred and seventy years, six months, and ten days after it was built". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62209935
62215099
Ezra 6
A chapter in the Book of Ezra Ezra 6 is the sixth chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapter 1 to 6 describes the history before the arrival of Ezra in the land of Judah in 468 BCE. This chapter records the response of the Persian court to the report from Tattenai in the previous chapter: a search is made for the original decree by Cyrus the Great and this is confirmed with a new decree from Darius the Great allowing the temple to be built. This chapter closes this first part of the book in a "glorious conclusion with the completion of the new temple and the celebration of Passover" by the people, as their worship life is restored according to the Law of Moses. Text. This chapter is divided into 22 verses. The original text of this chapter from 6:1 through 6:18 is in Aramaic, from 6:19 through is in Hebrew language. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew/Aramaic are of the Masoretic Text, which includes Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls, that is, 4Q117 (4QEzra; 50 BCE) with extant verses 1–5 (= 1 Esdras 6:21–25). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 6:23–7:9 is an equivalent of Ezra 6:1–18 (The temple is finished), whereas 1 Esdras 7:10–15 is equivalent to Ezra 6:19–22 (Celebration of the Passover). The Persian response to the Temple (6:1–12). The Persian court searched the royal archive to investigate the historical claim of the Jews for rebuilding the temple, first in Babylon, according to Tattenai's suggestion (Ezra 5:17) but they found a scroll containing Cyrus's edict in Ecbatana (modern Hamadan in northern Iran, former capital of the Median Empire.) Darius the king of Persia issued a decree supporting the temple building project. "Then Darius the king made a decree, and search was made in the house of the rolls, where the treasures were laid up in Babylon." "And there was found at Achmetha, in the palace that is in the province of the Medes, a roll, and therein was a record thus written:" "In the first year of King Cyrus, King Cyrus issued a decree concerning the house of God at Jerusalem: “Let the house be rebuilt, the place where they offered sacrifices; and let the foundations of it be firmly laid, its height sixty cubits and its width sixty cubits," Verse 3. 
The Aramaic memorandum of the decree (parallel to –) provides evidence that Cyrus's edict is real and it may span to a number of different documents according to their functions, such as the edict in verses 2b–5, which could be the treasury record to certify that the vessels from the temple in Jerusalem have been returned to the Jews, as it contains extra information compared to the version of the decree in chapter 1. The measurement of the temple (verse 3) and the directions about the manner of the building (verse 4) may be designed "to set limits to royal expenditure of the project". "Moreover I issue a decree as to what you shall do for the elders of these Jews, for the building of this house of God: Let the cost be paid at the king’s expense from taxes on the region beyond the River; this is to be given immediately to these men, so that they are not hindered." Verse 8. The reply letter of Darius to Tattenai opens with the cited words of Cyrus, but immediately follows with his own decree, confirming entirely the measures of his predecessor and reapply them to the new situation. "Then Tattenai, governor of the region beyond the River, Shethar-Boznai, and their companions diligently did according to what King Darius had sent." Completion and dedication of the Temple (6:13–18). Following the command of God and the decrees issued by Cyrus, Darius and Artaxerxes, kings of Persia, the Jews worked diligently, so the Temple was finally completed and the people could celebrate the dedication of it. "So the elders of the Jews built, and they prospered through the prophesying of Haggai the prophet and Zechariah the son of Iddo. And they built and finished it, according to the commandment of the God of Israel, and according to the command of Cyrus, Darius, and Artaxerxes king of Persia." Verse 14. The prophecies of Haggai and Zechariah are recorded in the Hebrew Bible under the name of the Book of Haggai and Book of Zechariah, respectively. Haggai's prophecy period completely covers the time mentioned here (; 520 BC), whereas Zechariah's only partly. "And this house was finished on the third day of the month Adar, which was in the sixth year of the reign of Darius the king." Verse 15. The date corresponds to February 21, 515 B.C. Haggai () writes that the building project was recommenced on the 24th day of the month Elul (the 6th month; September) in the second year of Darius (September 21, 520 BC), so it took nearly 4.5 years to finish, although the foundations had been laid some twenty years earlier (April 536 BC; cf. ). Therefore, it was completed around 70 years after its destruction in 587–586 BC, close to Jeremiah's prediction. "And offered at the dedication of this house of God an hundred bullocks, two hundred rams, four hundred lambs; and for a sin offering for all Israel, twelve he goats, according to the number of the tribes of Israel." "And they set the priests in their divisions, and the Levites in their courses, for the service of God, which is at Jerusalem; as it is written in the book of Moses." Verse 18. This verse refers to ‘the organization of the priests and Levites described in ’, which distributes the service of the Temple by periods, of a week each, among the courses and divisions of priests and Levites (cf. ; ). Celebrating Passover (6:19–22). In keeping with the law of Moses, Passover is celebrated following the dedication of the temple, and this marks the ‘renewal of religious life’ of the people. 
"And the children of the captivity kept the passover upon the fourteenth day of the first month." Verse 19. The Hebrew language resumes in verse 19 and continues through . The Passover on the 14th of the first month (Nisan) was commanded in , but since then only few celebrations are recorded in Hebrew Bible, as follows: The celebration of the Passover on each of these occasions marks “a new or a restored order of worship, and the solemn rededication by the people of their Covenant relation with God”. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62215099
62223323
Square root of 6
Positive real number which when multiplied by itself gives 6 The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as: formula_1 and in exponent form as: formula_2 It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about . It takes two more digits (2.4495) to reduce the error by about half. The approximation 218/89 (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than , or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. Rational approximations. The square root of 6 can be expressed as the continued fraction formula_3 (sequence in the OEIS) The successive partial evaluations of the continued fraction, which are called its "convergents", approach formula_0: formula_4 Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …(sequence in the OEIS), and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …(sequence in the OEIS). Each convergent is a best rational approximation of formula_0; in other words, it is closer to formula_0 than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent: formula_5 The convergents, expressed as "x"/"y", satisfy alternately the Pell equations formula_6 When formula_0 is approximated with the Babylonian method, starting with "x"0 = 2 and using "x""n"+1 = ("x""n" + 6/"x""n")/2, the "n"th approximant "x""n" is equal to the 2^"n"-th convergent of the continued fraction: formula_7 The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial formula_8. The Newton's method update, formula_9, is equal to formula_10 when formula_11. The method therefore converges quadratically. Geometry. In plane geometry, the square root of 6 can be constructed via a sequence of dynamic rectangles, as illustrated here. In solid geometry, the square root of 6 appears as the longest distance between corners (vertices) of the double cube, as illustrated above. The square roots of all lower natural numbers appear as the distances between other vertex pairs in the double cube (including the vertices of the included two cubes). The edge length of a cube with total surface area of 1 is formula_12 or the reciprocal square root of 6. The edge lengths of a regular tetrahedron (t), a regular octahedron (o), and a cube (c) of equal total surface areas satisfy formula_13. The edge length of a regular octahedron is the square root of 6 times the radius of an inscribed sphere (that is, the distance from the center of the solid to the center of each face). The square root of 6 appears in various other geometry contexts, such as the side length formula_14 for the square enclosing an equilateral triangle of side 2 (see figure). Trigonometry. 
The square root of 6, with the square root of 2 added or subtracted, appears in several exact trigonometric values for angles at multiples of 15 degrees (formula_15 radians). In culture. Villard de Honnecourt's 13th century construction of a Gothic "fifth-point arch" with circular arcs of radius 5 has a height of twice the square root of 6, as illustrated here. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
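The convergents and Babylonian iterates described above are easy to reproduce with exact rational arithmetic. The following Python sketch (illustrative only) generates the convergents of the continued fraction [2; 2, 4, 2, 4, ...], checks the alternating Pell-type identities, and runs the Babylonian iteration:

```python
from fractions import Fraction

# Sketch: convergents of sqrt(6) = [2; 2, 4, 2, 4, ...] as exact fractions,
# the alternating identities x^2 - 6y^2 = -2 or +1, and the Babylonian
# iteration x_{n+1} = (x_n + 6/x_n)/2 starting from x_0 = 2.

def convergents(n_terms):
    h_prev, h = 1, 2                        # numerators  h_{-1}, h_0
    k_prev, k = 0, 1                        # denominators k_{-1}, k_0
    out = [Fraction(h, k)]
    for i in range(1, n_terms):
        a = 2 if i % 2 == 1 else 4          # partial quotients 2, 4, 2, 4, ...
        h_prev, h = h, a * h + h_prev       # standard continued-fraction recurrence
        k_prev, k = k, a * k + k_prev
        out.append(Fraction(h, k))
    return out

for c in convergents(9):
    pell = c.numerator**2 - 6 * c.denominator**2
    print(f"{c} = {float(c):.12f}   x^2 - 6y^2 = {pell}")

x = Fraction(2)
for n in range(4):
    x = (x + 6 / x) / 2                     # Babylonian / Newton step
    print(f"x_{n+1} = {x} = {float(x):.15f}")
```

The printed fractions reproduce the numerators and denominators listed above, and the Babylonian iterates x_0, x_1, x_2, x_3 reappear as the 1st, 2nd, 4th and 8th convergents, illustrating the index doubling produced by Newton's method.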
[ { "math_id": 0, "text": "\\sqrt{6}" }, { "math_id": 1, "text": "\\sqrt{6}\\, , " }, { "math_id": 2, "text": "6^\\frac{1}{2}." }, { "math_id": 3, "text": " [2; 2, 4, 2, 4, 2,\\ldots] = 2 + \\cfrac 1 {2 + \\cfrac 1 {4 + \\cfrac 1 {2 + \\cfrac 1 {4 + \\dots}}}}. " }, { "math_id": 4, "text": "\\frac{2}{1}, \\frac{5}{2}, \\frac{22}{9}, \\frac{49}{20}, \\frac{218}{89}, \\frac{485}{198}, \\frac{2158}{881}, \\frac{4801}{1960}, \\dots" }, { "math_id": 5, "text": "\\frac{2}{1} = 2.0,\\quad \\frac{5}{2} = 2.5,\\quad \\frac{22}{9} = 2.4444\\dots,\\quad \\frac{49}{20} = 2.45,\\quad \\frac{218}{89} = 2.44943...,\\quad \\frac{485}{198} = 2.449494..., \\quad \\ldots" }, { "math_id": 6, "text": "x^2 - 6y^2 = -2\\quad \\mathrm{and} \\quad x^2 - 6y^2 = 1" }, { "math_id": 7, "text": " x_0 = 2, \\quad x_1 = \\frac{5}{2} = 2.5, \\quad x_2 = \\frac{49}{20} =2.45, \\quad x_3 = \\frac{4801}{1960} = 2.449489796..., \\quad x_4 = \\frac{46099201}{18819920} = 2.449489742783179..., \\quad \\dots" }, { "math_id": 8, "text": "x^2-6" }, { "math_id": 9, "text": "x_{n+1} = x_n - f(x_n)/f'(x_n)," }, { "math_id": 10, "text": "(x_n + 6/x_n)/2" }, { "math_id": 11, "text": "f(x) = x^2 - 6" }, { "math_id": 12, "text": "\\frac{\\sqrt{6}}{6}" }, { "math_id": 13, "text": "\\frac{t\\cdot o}{c^2} = \\sqrt{6}" }, { "math_id": 14, "text": "\\frac{\\sqrt{6}+\\sqrt{2}}{2}" }, { "math_id": 15, "text": "\\pi/12" } ]
https://en.wikipedia.org/wiki?curid=62223323
62223541
Sachdev–Ye–Kitaev model
Solvable physics model In condensed matter physics and black hole physics, the Sachdev–Ye–Kitaev (SYK) model is an exactly solvable model initially proposed by Subir Sachdev and Jinwu Ye, and later modified by Alexei Kitaev to the present commonly used form. The model is believed to bring insights into the understanding of strongly correlated materials and it also has a close relation with the discrete model of AdS/CFT. Many condensed matter systems, such as a quantum dot coupled to topological superconducting wires, a graphene flake with an irregular boundary, and a kagome optical lattice with impurities, have been proposed to be modeled by it. Some variants of the model are amenable to digital quantum simulation, with pioneering experiments implemented in nuclear magnetic resonance. Model. Let formula_0 be an integer and formula_1 an even integer such that formula_2, and consider a set of Majorana fermions formula_3, which are fermion operators satisfying the conditions formula_4 and formula_5. Let formula_6 be random variables whose expectations satisfy formula_7 and formula_8. Then the SYK model is defined as formula_9. Note that sometimes an extra normalization factor is included. The most famous model is when formula_10: formula_11, where the factor formula_12 is included to coincide with the most popular form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
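For small formula_0 the Hamiltonian can be built explicitly as a matrix. The following Python sketch is illustrative only and not taken from the cited references: it represents the Majorana operators through a Jordan–Wigner construction on formula_0/2 qubits, draws the couplings as independent standard normal variables as in the definition above, and checks the anticommutation relations and the Hermiticity of the resulting formula_10 Hamiltonian. Conventions that rescale the coupling variance with formula_0 are deliberately not included here:

```python
import itertools
import numpy as np

# Sketch: explicit m = 4 SYK Hamiltonian for a small, even number n of
# Majorana fermions, using a Jordan-Wigner representation on n/2 qubits.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def majoranas(n):
    """n (even) Majorana operators on n/2 qubits via Jordan-Wigner strings."""
    nq = n // 2
    psi = []
    for k in range(nq):
        jw = [Z] * k                         # Jordan-Wigner string of Z's
        psi.append(kron_all(jw + [X] + [I2] * (nq - k - 1)))
        psi.append(kron_all(jw + [Y] + [I2] * (nq - k - 1)))
    return psi

n = 8
psi = majoranas(n)
dim = 2 ** (n // 2)

# Check {psi_i, psi_j} = 2*delta_ij * Identity.
ok = all(np.allclose(psi[i] @ psi[j] + psi[j] @ psi[i],
                     2 * (i == j) * np.eye(dim))
         for i in range(n) for j in range(n))
print("anticommutation relations hold:", ok)

# H = i^{m/2} * sum_{i1<i2<i3<i4} J_{i1..i4} psi_i1 psi_i2 psi_i3 psi_i4, m = 4.
rng = np.random.default_rng(1)
H = np.zeros((dim, dim), dtype=complex)
for (a, b, c, d) in itertools.combinations(range(n), 4):
    J = rng.normal(0.0, 1.0)                 # E[J] = 0, E[J^2] = 1
    H += (1j ** 2) * J * psi[a] @ psi[b] @ psi[c] @ psi[d]

print("H is Hermitian:", np.allclose(H, H.conj().T))
print("a few eigenvalues:", np.round(np.linalg.eigvalsh(H)[:4], 3))
```

Because the matrix dimension grows as 2^(n/2), exact constructions of this kind are limited to a few dozen Majorana fermions.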
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "2\\leq m\\leq n" }, { "math_id": 3, "text": "\\psi_1,\\dotsc,\\psi_n" }, { "math_id": 4, "text": "\\psi_i^{\\dagger}=\\psi_i" }, { "math_id": 5, "text": "\\{\\psi_i,\\psi_j\\}=2\\delta_{ij}" }, { "math_id": 6, "text": "J_{i_1 i_2 \\cdots i_m}" }, { "math_id": 7, "text": "\\mathbf{E}(J_{i_1i_2\\cdots i_m})=0" }, { "math_id": 8, "text": "\\mathbf{E}(J_{i_1i_2\\cdots i_m}^2)=1" }, { "math_id": 9, "text": "H_{\\rm SYK}=i^{m/2}\\sum_{1 \\leq i_1 < \\cdots < i_m \\leq n}J_{i_1i_2\\cdots i_m}\\psi_{i_1}\\psi_{i_2}\\cdots\\psi_{i_m}" }, { "math_id": 10, "text": "m=4" }, { "math_id": 11, "text": "H_{\\rm SYK}=-\\frac{1}{4!}\\sum_{i_1, \\dotsc, i_4 = 1}^n J_{i_1i_2i_3 i_4}\\psi_{i_1}\\psi_{i_2}\\psi_{i_3}\\psi_{i_4}" }, { "math_id": 12, "text": "1/4!" } ]
https://en.wikipedia.org/wiki?curid=62223541
62241786
Ezra 7
A chapter in the Book of Ezra Ezra 7 is the seventh chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra–Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapters 7 to 10 mainly describes the activities of Ezra the scribe and the priest. This chapter focuses on the commission of Ezra by Artaxerxes I of Persia, and the start of his journey from Babylon to Jerusalem. Text. This chapter is divided into 28 verses. The original text of verses 1–11 is in Hebrew language, verses 12–26 are in Aramaic, and verses 27–28 are in Hebrew again. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew/Aramaic are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 8:1–27 is an equivalent of Ezra 7 (In Artaxerxes' reign). Ezra the man and the mission (7:1–10). This part introduces Ezra, a priest and devout teacher of the Mosaic Law, the leader of another group of Jews leaving Babylonia for Jerusalem during the reign of Artaxerxes the king of Persia, thereby skipping almost sixty years of history about the remaining years of Darius and the entire reign of Xerxes. Ezra's priestly heritage (verses 1–5, cf. ) connects him to the great priests in history (ultimately to Phinehas, Eleazar, and Aaron the high priests) to validate his authority, before presenting his devotion and integrity (verse 6). Verses 7–10 contains the summary of Ezra's journey. "1 Now after these things, in the reign of Artaxerxes king of Persia, Ezra the son of Seraiah, the son of Azariah, the son of Hilkiah, 2 the son of Shallum, the son of Zadok, the son of Ahitub, 3 the son of Amariah, the son of Azariah, the son of Meraioth, 4 the son of Zerahiah, the son of Uzzi, the son of Bukki, 5 the son of Abishua, the son of Phinehas, the son of Eleazar, the son of Aaron the high priest— 6 this Ezra went up from Babylon. He was a scribe skilled in the Law of Moses, given by the Lord God of Israel. Because the hand of the Lord his God was upon him, the king granted him all his requests." "And there went up some of the children of Israel, and of the priests, and the Levites, and the singers, and the porters, and the Nethinims, unto Jerusalem, in the seventh year of Artaxerxes the king." "And he came to Jerusalem in the fifth month, which was in the seventh year of the king." "For upon the first day of the first month began he to go up from Babylon, and on the first day of the fifth month came he to Jerusalem, according to the good hand of his God upon him." Verse 9. 
Ezra had determined to depart ("go up") on the first day of the first month (Nisan; Assyrian: "Nisanu"; part of March and April), but the rendezvous with his group apparently took place on the 9th day of the same month, and the journey actually commenced on the 12th day (cf. , ), lasted throughout 18 days of Nisan, and the three months Iyyar, Sivan, and Tammuz; in all about 108 days. The straightline distance from Babylon to Jerusalem is over 500 miles, but following traditional route, Ezra's caravan should make a long detour by Carchemish to avoid the desert area, so the total journey could hardly have been less than 900 miles (cf. ). The King's Commission (7:11–26). This part, written in Aramaic, records how Artaxerxes, the king of Persia, provided Ezra with 'a letter of commission, authorization, and support as well as limitations' for his journey and mission to Jerusalem. "Artaxerxes, king of kings," "To Ezra the priest, a scribe of the Law of the God of heaven:" "Perfect peace, and so forth." Ezra's Praise (7:27–28). The last two verses (in Hebrew) are Ezra's own memoirs where he praised God's provision, care, and goodness, that became his source of courage for the journey ahead. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62241786
62244482
Ezra 8
A chapter in the Book of Ezra Ezra 8 is the eighth chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapters 7 to 10 mainly describes of activities of Ezra the scribe and the priest. This chapter follows Ezra's journey to Jerusalem and includes a genealogy of those returning with him (parallel to chapter 2). Text. This chapter is divided into 36 verses. The original text of this chapter is in Hebrew language. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 8:28-67 is an equivalent of Ezra 8 (List of latter exiles who returned). The Caravan (8:1–14). Large groups of Jews had returned to Jerusalem in past years, but many faithful men and their families still lived in Babylonian territories, some of whom at this time packed their belongings and assembled with Ezra to return to Judea. The list in this part is a parallel to the famous ""Golah" List" ("List of the Exiles") of Ezra 2 and Nehemiah 7, but notable here is the predominance of priestly associations before any Davidic identification. "Now these are the chiefs of the households of the fathers and the genealogical register of those who went up with me from Babylon, in the reign of King Artaxerxes:" Verse 1. Emboldened by God's involvement (chapter 7), Ezra recruited family heads and those registered with them to accompany him to Jerusalem (as noted in Ezra 2, 'Jewish society was organized around men and their extended families'). "of the sons of Phinehas, Gershom;" "of the sons of Ithamar, Daniel;" "of the sons of David, Hattush," Verse 2. The list begins with the priests, reflecting 'Ezra's own station as a priest', formed by two patriarchal families: the descendants of Phinehas (Gershom) and Ithamar (Daniel), as the two descendants of Aaron the high priest. After listing the priestly line, Ezra registers the political line of Israel, which is the descendants of David (royal line), indicating that 'the memory of Davidic ancestry continued in the postexilic community'. One family accompanying Ezra, Hattush, is a descendant of David (so called "Davidide"), and he would be the fourth generation after Zerubbabel (cf. : "19 …the sons of Zerubbabel… Hananiah… 21 And the sons of Hananiah… the sons of Shechaniah. 22 And the sons of Shechaniah… Shemaiah: and the sons of Shemaiah… Hattush…"). The record of "Hattush" 'makes any other date than 458 [BC] difficult'. Final preparations (8:15–30). Before departing from Babylonia. 
Ezra enlisted Levites to join his caravan, as well as 'called for a general fast to petition God's protection, and entrusted the money and valuable articles to consecrated priests'. "Now I gathered them by the river that flows to Ahava, and we camped there three days. And I looked among the people and the priests, and found none of the sons of Levi there." Verse 15. The presence of the Levites ("sons of Levi") was significant to Ezra because, under Law of Torah, the Levites were 'responsible for the transport of temple articles'. "For I was ashamed to require of the king a band of soldiers and horsemen to help us against the enemy in the way: because we had spoken unto the king, saying, The hand of our God is upon all them for good that seek him; but his power and his wrath is against all them that forsake him." Verse 22. In contrast to Nehemiah, who accepted an armed guard, Ezra chose to rely on God's protection (cf. ; ). The journey (8:31–32). Completing all the preparations, Ezra and his caravan 'embarked on the journey' from Babylonia to Jerusalem. "Then we departed from the river of Ahava on the twelfth day of the first month, to go unto Jerusalem: and the hand of our God was upon us, and he delivered us from the hand of the enemy, and of such as lay in wait by the way." "So we came to Jerusalem, and stayed there three days." Verse 32. According to , Ezra and his caravan arrived on the first day of the fifth month. Taking care of business (8:33–36). This part records that Ezra meticulously transferred the articles and finances, performed the required rituals of sacrifices, and delivered the edict of the Persian king. "Also the children of those that had been carried away, which were come out of the captivity, offered burnt offerings unto the God of Israel, twelve bullocks for all Israel, ninety and six rams, seventy and seven lambs, twelve he goats for a sin offering: all this was a burnt offering unto the Lord." Verse 35. After Ezra's group safely arrived in Jerusalem (verses 31–32), they offered sacrifice (verse 35), not because king Artaxerxes ordered them to do (), nor as an "isolated act of thanksgiving", but because "they were reconstituted as the people of God and therefore "must" worship" God. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62244482
62244991
Ezra 9
Chapter in the biblical Book of Ezra Ezra 9 is the ninth chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapters 7 to 10 mainly describes the activities of Ezra the scribe and the priest. This chapter and the next deal with the problem of intermarriage, starting with the introduction of the crisis, then Ezra's public mourning and prayer of shame. J. Gordon McConville suggests that this chapter is central to the Book of Ezra because it draws a sharp contrast between what the people of God ought to be and what they actually are. Text. This chapter is divided into 15 verses. The original text of this chapter is in Hebrew language. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Sinaiticus (S; BHK: formula_0S; 4th century; only Ezra 9:9 to end), Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 8:68-90 is an equivalent of Ezra 9 (Repentance from mixed marriages). The report (9:1–2). Some Jewish leaders in Jerusalem reported to Ezra about 'the misconduct of various leaders and members of the community'. "For they have taken some of their daughters as wives for themselves and their sons, so that the holy seed is mixed with the peoples of those lands. Indeed, the hand of the leaders and rulers has been foremost in this trespass." The response (9:3–5). Hearing the report, Ezra responded with a "public act of contrition" in his function as "the official representative of the community". "And when I heard this thing, I rent my garment and my mantle, and plucked off the hair of my head and of my beard, and sat down astonied." Verse 3. The action also denoted 'horror' on receiving shocking intelligence or hearing shocking words, such as: In the New Testament is also recorded: The prayer (9:6–15). Being a leader of the community, Ezra offered a "public prayer of confession" which is "sincere, personal, emotional and forthright". The Jerusalem Bible describes the prayer of Ezra as "also a sermon". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62244991
62245387
Ezra 10
A chapter in the Book of Ezra Ezra 10 is the tenth and final chapter of the Book of Ezra in the Old Testament of the Christian Bible, or the tenth chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The section comprising chapters 7 to 10 mainly describes the activities of Ezra the scribe and the priest. This chapter and the previous one deal with the problem of intermarriage, especially the solution of it, ending with a list of those who sent away their "foreign" wives and children; a somber note which finds relief in the Book of Nehemiah, as the continuation of the Book of Ezra. Text. This chapter is divided into 44 verses. The original text of this chapter is in Hebrew language. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 8:91-9:36 is an equivalent of Ezra 10 (Putting away of foreign wives and children). The consensus (10:1–6). Ezra's public humiliation and prayer attracted a group of people who joined him in 'demonstrations of sorrow over the sins of Israel', and as a result, they made a consensus of the resolution. "Now while Ezra was praying, and while he was confessing, weeping, and bowing down before the house of God, a very large assembly of men, women, and children gathered to him from Israel; for the people wept very bitterly." Verse 1. The Hebrew shows that the people were assembling during Ezra's prayer. The Jerusalem Bible describes the prayer of Ezra as "also a sermon". "And Shechaniah the son of Jehiel, one of the sons of Elam, spoke up and said to Ezra, “We have trespassed against our God, and have taken pagan wives from the peoples of the land; yet now there is hope in Israel in spite of this." Verse 2. The people acknowledged that they been unfaithful to God, in breach of the law. The laws to which Ezra must have referred would have been those found in , and . These passages contain prohibitions, very similar in character, directed against intermarriage with the nations that dwelt in Canaan, on the ground that such marriages would inevitably lead to idolatry and to the abominations connected with idolatrous worship. "Then Ezra arose, and made the leaders of the priests, the Levites, and all Israel swear an oath that they would do according to this word. So they swore an oath." Verse 5. Although Ezra has been given Persian authority, his choice of action to make the leaders, priests, Levites, and all Israel "swear an oath" to abide by a covenantal agreement reflects "internal politics", in contrast to Nehemiah, who prefers 'to command and order'. The assembly (10:7–15). 
The whole community was assembled "in the street of the house of God" to "confront the intermarriage issue and to decide on the divorce proposal". The commission (10:16–17). Following the majority opinion, Ezra appointed a commission by selecting 'men who were family heads' to form the official investigation of the intermarriage cases. The guilty (10:18–44). After the results of the commission's investigation were announced, an official list was created to record 'those found guilty of marrying pagan women'. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62245387
62247177
Dietmar Salamon
German mathematician (born 1953) Dietmar Arno Salamon (born 7 March 1953 in Bremen) is a German mathematician. Education and career. Salamon studied mathematics at the Leibniz University Hannover. In 1982 he earned his doctorate at the University of Bremen with dissertation "On control and observation of neutral systems". He subsequently spent two years as a postdoctoral fellow at the Mathematical Research Center at the University of Wisconsin–Madison, followed by one year at the Mathematical Research Institute at ETH Zurich. In 1986 he became a lecturer at the University of Warwick, where he was appointed full professor in 1994. The summer semester 1988 he spent as a visiting professor at the University of Bremen and the winter semester 1991 at the University of Wisconsin-Madison. From 1998 to 2018 he was a full professor of mathematics at ETH Zurich, retiring as professor emeritus in 2018. Salamon's field of research is symplectic topology and related fields such as symplectic geometry. Symplectic topology is a relatively new field of mathematics that developed into an important branch of mathematics in the 1990s. Some important new techniques are Gromov's pseudoholomorphic curves, Floer homology, and Seiberg-Witten invariants on four-dimensional manifolds. In 1994 he was an Invited Speaker with talk "Lagrangian intersections, 3-manifolds with boundary and the Atiyah-Floer conjecture" at the International Congress of Mathematicians (ICM) in Zurich. In 2012 he was elected a Fellow of the American Mathematical Society. In 2017 he received, with Dusa McDuff, the AMS Leroy P. Steele Prize for Mathematical Exposition for the book "J-holomorphic curves and symplectic topology", which they co-authored. He has been a member of Academia Europaea since 2011.
[ { "math_id": 0, "text": "c_1\\mid_{\\pi_2M}=0" } ]
https://en.wikipedia.org/wiki?curid=62247177
62247933
Lemon (geometry)
Geometric shape In geometry, a lemon is a geometric shape that is constructed as the surface of revolution of a circular arc of angle less than half of a full circle rotated about an axis passing through the endpoints of the lens (or arc). The surface of revolution of the complementary arc of the same circle, through the same axis, is called an apple. The apple and lemon together make up a spindle torus (or "self-crossing torus" or "self-intersecting torus"). The lemon forms the boundary of a convex set, while its surrounding apple is non-convex. The ball in North American football has a shape resembling a geometric lemon. However, although used with a related meaning in geometry, the term "football" is more commonly used to refer to a surface of revolution whose Gaussian curvature is positive and constant, formed from a more complicated curve than a circular arc. Alternatively, a football may refer to a more abstract orbifold, a surface modeled locally on a sphere except at two points. Area and volume. The lemon is generated by rotating an arc of radius formula_0 and half-angle formula_1 less than formula_2 about its chord. Note that formula_3 denotes latitude, as used in geophysics. The surface area is given by formula_4 The volume is given by formula_5 These integrals can be evaluated analytically, giving formula_6 formula_7 The apple is generated by rotating an arc of half-angle formula_1 greater than formula_2 about its chord. The above equations are valid for both the lemon and apple. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
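The closed-form expressions can be checked directly against the defining integrals. The following Python sketch (illustrative only; the radius and half-angle values are arbitrary) evaluates both numerically:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: compare the closed-form area and volume of the "lemon" with a
# direct numerical evaluation of the integrals given above.

R, phi_m = 1.0, np.pi / 3                  # example values, phi_m < pi/2

# Integrands as written in the text (phi is the latitude-like variable).
area_int, _ = quad(lambda p: np.cos(p) - np.cos(phi_m), -phi_m, phi_m)
vol_int,  _ = quad(lambda p: (np.cos(p) - np.cos(phi_m)) ** 2 * np.cos(p),
                   -phi_m, phi_m)
A_num = 2 * np.pi * R**2 * area_int
V_num = np.pi * R**3 * vol_int

# Closed forms given above.
A = 4 * np.pi * R**2 * (np.sin(phi_m) - phi_m * np.cos(phi_m))
V = (4 / 3) * np.pi * R**3 * (np.sin(phi_m)**3
      - 0.75 * np.cos(phi_m) * (2 * phi_m - np.sin(2 * phi_m)))

print(f"area:   numerical {A_num:.10f}  closed form {A:.10f}")
print(f"volume: numerical {V_num:.10f}  closed form {V:.10f}")
```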
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "\\phi_m" }, { "math_id": 2, "text": "\\pi/2" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "A=2\\pi R^2\\int_{-\\phi_m}^{\\phi_m}(\\cos\\phi-\\cos\\phi_m)d\\phi" }, { "math_id": 5, "text": "V=\\pi R^3\\int_{-\\phi_m}^{\\phi_m}(\\cos\\phi-\\cos\\phi_m)^2\\cos\\phi d\\phi" }, { "math_id": 6, "text": "A=4\\pi R^2(\\sin\\phi_m-\\phi_m\\cos\\phi_m)" }, { "math_id": 7, "text": "V=\\tfrac{4}{3}\\pi R^3\\left[\\sin^{3}\\phi_m-\\tfrac{3}{4}\\cos\\phi_m(2\\phi_m-\\sin2\\phi_m)\\right]" } ]
https://en.wikipedia.org/wiki?curid=62247933
6226139
Transferable belief model
Model for reasoning with uncertain beliefs and evidence The transferable belief model (TBM) is an elaboration on the Dempster–Shafer theory (DST), which is a mathematical model used to evaluate the probability that a given proposition is true from other propositions that are assigned probabilities. It was developed by Philippe Smets who proposed his approach as a response to Zadeh’s example against Dempster's rule of combination. In contrast to the original DST the TBM propagates the open-world assumption that relaxes the assumption that all possible outcomes are known. Under the open world assumption Dempster's rule of combination is adapted such that there is no normalization. The underlying idea is that the probability mass pertaining to the empty set is taken to indicate an unexpected outcome, e.g. the belief in a hypothesis outside the frame of discernment. This adaptation violates the probabilistic character of the original DST and also Bayesian inference. Therefore, the authors substituted notation such as "probability masses" and "probability update" with terms such as "degrees of belief" and "transfer" giving rise to the name of the method: The "transferable belief model". Zadeh’s example in TBM context. Lotfi Zadeh describes an information fusion problem. A patient has an illness that can be caused by three different factors "A", "B" or "C". Doctor 1 says that the patient's illness is very likely to be caused by A (very likely, meaning probability "p" = 0.95), but "B" is also possible but not likely ("p" = 0.05). Doctor 2 says that the cause is very likely "C" ("p" = 0.95), but "B" is also possible but not likely ("p" = 0.05). How is one to make one's own opinion from this? Bayesian updating the first opinion with the second (or the other way round) implies certainty that the cause is "B". Dempster's rule of combination lead to the same result. This can be seen as paradoxical, since although the two doctors point at different causes, "A" and "C", they both agree that "B" is not likely. (For this reason the standard Bayesian approach is to adopt Cromwell's rule and avoid the use of 0 or 1 as probabilities.) Formal definition. The TBM describes beliefs at two levels: Credal level. According to the DST, a probability mass function formula_0 is defined such that: formula_1 with formula_2 where the power set formula_3 contains all possible subsets of the frame of discernment formula_4. In contrast to the DST the mass formula_5 allocated to the empty set formula_6 is not required to be zero, and hence generally formula_7 holds true. The underlying idea is that the frame of discernment is not necessarily exhaustive, and thus belief allocated to a proposition formula_8, is in fact allocated to formula_9 where formula_10 is the set of unknown outcomes. Consequently, the combination rule underlying the TBM corresponds to Dempster's rule of combination, except the normalization that grants formula_11. Hence, in the TBM any two independent functions formula_12 and formula_13 are combined to a single function formula_14 by: formula_15 where formula_16 In the TBM the "degree of belief" in a hypothesis formula_17 is defined by a function: formula_18 with formula_19 formula_20 Pignistic level. When a decision must be made the "credal beliefs" are transferred to pignistic probabilities by: formula_21 where formula_22 denote the atoms (also denoted as singletons) and formula_23 the number of atoms formula_24 that appear in formula_25. 
Hence, probability masses formula_26 are equally distributed among the atoms of A. This strategy corresponds to the principle of insufficient reason (also denoted as principle of maximum entropy) according to which an "unknown" distribution most probably corresponds to a uniform distribution. In the TBM "pignistic probability functions" are described by functions formula_27. Such a function satisfies the probability axioms: formula_28 with formula_29 formula_30 Philip Smets introduced them as "pignistic" to stress the fact that those probability functions are based on incomplete data, whose only purpose is a forced decision, e.g. to place a bet. This is in contrast to the "credal beliefs" described above, whose purpose is representing the actual "belief". Open world example. When tossing a coin one usually assumes that Head or Tail will occur, so that formula_31. The open-world assumption is that the coin can be stolen in mid-air, disappear, break apart or otherwise fall sideways so that neither Head nor Tail occurs, so that the power set of {Head,Tail} is considered and there is a decomposition of the overall probability (i.e. 1) of the following form: formula_32 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
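A small Python sketch (illustrative, not from the cited sources) makes the Zadeh example above concrete: it applies the unnormalized conjunctive combination rule to the two doctors' mass functions and then computes the pignistic probabilities. The division by one minus the mass of the empty set in the pignistic step is the usual way of making the pignistic values sum to one once mass has been transferred to the empty set:

```python
# Sketch of the TBM conjunctive (unnormalized Dempster) rule and the
# pignistic transform, applied to Zadeh's two-doctor example.
# Focal sets are frozensets over the frame {A, B, C}; masses are floats.

def combine(m1, m2):
    """Unnormalized conjunctive combination: mass may land on the empty set."""
    out = {}
    for s1, w1 in m1.items():
        for s2, w2 in m2.items():
            inter = s1 & s2
            out[inter] = out.get(inter, 0.0) + w1 * w2
    return out

def pignistic(m, frame):
    """Spread each non-empty focal set's mass evenly over its atoms.

    Assumes not all mass sits on the empty set; the factor 1 - m(empty)
    renormalizes the result so that it sums to one."""
    bet = {x: 0.0 for x in frame}
    norm = 1.0 - m.get(frozenset(), 0.0)
    for s, w in m.items():
        if s:
            for x in s:
                bet[x] += w / (len(s) * norm)
    return bet

frame = {"A", "B", "C"}
doctor1 = {frozenset({"A"}): 0.95, frozenset({"B"}): 0.05}
doctor2 = {frozenset({"C"}): 0.95, frozenset({"B"}): 0.05}

m12 = combine(doctor1, doctor2)
for s, w in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(set(s) if s else "{} (empty set)", round(w, 4))
print("pignistic probabilities:", {k: round(v, 4) for k, v in pignistic(m12, frame).items()})
```

The combination places mass 0.9975 on the empty set and 0.0025 on {B}. In the TBM reading, the large mass on the empty set signals that the true cause may lie outside the frame {A, B, C}, while a forced decision at the pignistic level still points to B.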
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "m: 2^X \\rightarrow [0,1] \\,\\!" }, { "math_id": 2, "text": "\\sum_{A \\in 2^X} m(A) = 1 \\,\\!" }, { "math_id": 3, "text": " 2^X" }, { "math_id": 4, "text": " X" }, { "math_id": 5, "text": " m" }, { "math_id": 6, "text": " \\emptyset" }, { "math_id": 7, "text": " 0 \\leq m(\\emptyset) \\leq 1.0" }, { "math_id": 8, "text": " A \\in 2^X" }, { "math_id": 9, "text": " A \\in 2^X\\cup{e}" }, { "math_id": 10, "text": " {e}" }, { "math_id": 11, "text": " m(\\emptyset)=0" }, { "math_id": 12, "text": " m_1" }, { "math_id": 13, "text": " m_2" }, { "math_id": 14, "text": " m_{1,2}" }, { "math_id": 15, "text": "m_{1,2}(A) = (m_1 \\otimes m_2) (A) = \\sum_{B \\cap C = A} m_1(B) m_2(C) \\, \\!" }, { "math_id": 16, "text": "A,B,C \\in 2^X \\ne \\emptyset. \\, \\!" }, { "math_id": 17, "text": " H \\in 2^X \\ne \\emptyset" }, { "math_id": 18, "text": "\\operatorname{bel}: 2^X \\rightarrow [0,1] \\, \\!" }, { "math_id": 19, "text": "\\operatorname{bel}(H)= \\sum_{\\emptyset \\ne A \\subseteq H} m(A)" }, { "math_id": 20, "text": "\\operatorname{bel}(\\emptyset)=0. \\, \\!" }, { "math_id": 21, "text": "P_\\text{Bet}(x)= \\sum_{x \\in A \\subseteq X} \\frac {m(A)} {|A|} \\, \\! " }, { "math_id": 22, "text": " x \\in X " }, { "math_id": 23, "text": " |A|" }, { "math_id": 24, "text": " x" }, { "math_id": 25, "text": " A" }, { "math_id": 26, "text": " m(A)" }, { "math_id": 27, "text": "P_\\text{Bet}" }, { "math_id": 28, "text": "P_\\text{Bet}: X \\rightarrow [0,1] \\,\\!" }, { "math_id": 29, "text": "\\sum_{x \\in X} P_\\text{Bet}(x) = 1 \\,\\!" }, { "math_id": 30, "text": "P_\\text{Bet}(\\emptyset)=0 \\,\\!" }, { "math_id": 31, "text": " \\Pr(\\text{Head}) + \\Pr(\\text{Tail}) = 1" }, { "math_id": 32, "text": " \\Pr(\\emptyset) + \\Pr(\\text{Head}) + \\Pr(\\text{Tail}) + \\Pr(\\text{Head,Tail}) = 1." } ]
https://en.wikipedia.org/wiki?curid=6226139
6226425
Sommerfeld radiation condition
In applied mathematics and theoretical physics, the Sommerfeld radiation condition is a concept from the theory of differential equations and scattering theory used for choosing a particular solution to the Helmholtz equation. It was introduced by Arnold Sommerfeld in 1912 and is closely related to the limiting absorption principle (1905) and the limiting amplitude principle (1948). The boundary condition established by the principle essentially chooses a solution of the wave equation which only radiates outwards from known sources; instead of allowing arbitrary inbound waves propagating in from infinity, it excludes them. The argument most closely tied to the condition holds true only in three spatial dimensions: in two dimensions it breaks down because wave motion does not spread its power as one over radius squared, while in spatial dimensions four and above the power of the wave motion falls off much faster with distance. Formulation. Arnold Sommerfeld defined the condition of radiation for a scalar field satisfying the Helmholtz equation as "the sources must be sources, not sinks of energy. The energy which is radiated from the sources must scatter to infinity; no energy may be radiated from infinity into ... the field." Mathematically, consider the inhomogeneous Helmholtz equation formula_0 where formula_1 is the dimension of the space, formula_2 is a given function with compact support representing a bounded source of energy, and formula_3 is a constant, called the "wavenumber". A solution formula_4 to this equation is called "radiating" if it satisfies the Sommerfeld radiation condition formula_5 uniformly in all directions formula_6 (above, formula_7 is the imaginary unit and formula_8 is the Euclidean norm). Here, it is assumed that the time-harmonic field is formula_9 If the time-harmonic field is instead formula_10 one should replace formula_11 with formula_12 in the Sommerfeld radiation condition. The Sommerfeld radiation condition is used to solve the Helmholtz equation uniquely. For example, consider the problem of radiation due to a point source formula_13 in three dimensions, so the function formula_2 in the Helmholtz equation is formula_14 where formula_15 is the Dirac delta function. This problem has an infinite number of solutions, for example, any function of the form formula_16 where formula_17 is a constant, and formula_18 Of all these solutions, only formula_19 satisfies the Sommerfeld radiation condition and corresponds to a field radiating from formula_20 The other solutions are unphysical. For example, formula_21 can be interpreted as energy coming from infinity and sinking at formula_20 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
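The point-source example can be checked numerically. The following Python sketch (illustrative only; the wavenumber value is arbitrary) evaluates the radiation term for the outgoing and incoming spherical waves at increasing radii:

```python
import numpy as np

# Sketch: in three dimensions (n = 3) the Sommerfeld condition requires
# r * (d/dr - i*k) u(r) -> 0 as r -> infinity. Check this numerically for the
# outgoing solution u_+ = exp(+ikr)/(4*pi*r) and the incoming one
# u_- = exp(-ikr)/(4*pi*r), using a central finite difference for d/dr.

k = 2.0                                      # arbitrary wavenumber
u_plus  = lambda r: np.exp( 1j * k * r) / (4 * np.pi * r)
u_minus = lambda r: np.exp(-1j * k * r) / (4 * np.pi * r)

def radiation_term(u, r, h=1e-6):
    """|x|^{(n-1)/2} * (d/d|x| - i*k) u  with n = 3."""
    du = (u(r + h) - u(r - h)) / (2 * h)
    return r * (du - 1j * k * u(r))

for r in [1e1, 1e2, 1e3, 1e4]:
    print(f"r = {r:>8.0f}   outgoing: {abs(radiation_term(u_plus, r)):.2e}"
          f"   incoming: {abs(radiation_term(u_minus, r)):.2e}")
# The outgoing term decays like 1/r, while the incoming term approaches a
# nonzero constant (about k/(2*pi)), so only u_+ is "radiating".
```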
[ { "math_id": 0, "text": "\n(\\nabla^2 + k^2) u = -f \\text{ in } \\mathbb R^n \n" }, { "math_id": 1, "text": "n=2, 3" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "k>0" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "\\lim_{|x| \\to \\infty} |x|^{\\frac{n-1}{2}} \\left( \\frac{\\partial}{\\partial |x|} - ik \\right) u(x) = 0" }, { "math_id": 6, "text": "\\hat{x} = \\frac{x}{|x|}" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "|\\cdot|" }, { "math_id": 9, "text": "e^{-i\\omega t}u." }, { "math_id": 10, "text": "e^{i\\omega t}u," }, { "math_id": 11, "text": "-i" }, { "math_id": 12, "text": "+i" }, { "math_id": 13, "text": "x_0" }, { "math_id": 14, "text": "f(x)=\\delta(x-x_0)," }, { "math_id": 15, "text": "\\delta" }, { "math_id": 16, "text": "u = cu_+ + (1-c) u_- \\," }, { "math_id": 17, "text": "c" }, { "math_id": 18, "text": "u_{\\pm}(x) = \\frac{e^{\\pm ik|x-x_0|}}{4\\pi |x-x_0|}." }, { "math_id": 19, "text": "u_+" }, { "math_id": 20, "text": "x_0." }, { "math_id": 21, "text": "u_{-}" } ]
https://en.wikipedia.org/wiki?curid=6226425
62264747
North Atlantic Aerosols and Marine Ecosystems Study
The North Atlantic Aerosols and Marine Ecosystems Study (NAAMES) was a five-year scientific research program that investigated aspects of phytoplankton dynamics in ocean ecosystems, and how such dynamics influence atmospheric aerosols, clouds, and climate. The study focused on the sub-arctic region of the North Atlantic Ocean, which is the site of one of Earth's largest recurring phytoplankton blooms. The long history of research in this location, as well as relative ease of accessibility, made the North Atlantic an ideal location to test prevailing scientific hypotheses in an effort to better understand the role of phytoplankton aerosol emissions on Earth's energy budget. NAAMES was led by scientists from Oregon State University and the National Aeronautics and Space Administration (NASA). They conducted four field campaigns from 2015-2018 that were designed to target specific phases of the annual phytoplankton cycle: minimum, climax, intermediary decreasing biomass, and increasing intermediary biomass. The campaigns were designed to observe each unique phase, in order to resolve the scientific debates on the timing of bloom formations and the patterns driving annual bloom re-creation. The NAAMES project also investigated the quantity, size, and composition of aerosols generated by primary production in order to understand how bloom cycles affect cloud formations and climate. Scientists employed multiple complementary research methods, including intensive field sampling via research ships, airborne aerosol sampling via airplane, and remote sensing via satellites. The findings from NAAMES, while still forthcoming, have shed light on aerosols and cloud condensation nuclei, phytoplankton annual cycles, phytoplankton physiology, and mesoscale biology.  Several methodological advances have also been published, including new remote sensing algorithms and advances in satellite remote sensing. Background. Competing hypotheses of plankton blooms. NAAMES sought to better understand the impact of bioaerosol emissions on cloud dynamics and climate. It also aimed to test two competing hypotheses on plankton blooms: Critical Depth Hypothesis - a resource-based view. The critical depth hypothesis is a resource-based view of the North Atlantic annual phytoplankton blooms. It is the traditional explanation for the cause of spring blooms and has been documented as a foundational concept in oceanography textbooks for over 50 years.  It focuses on the environmental conditions necessary to initiate a bloom such as high nutrients, shallower mixing, increased light, and warmer temperatures. The central argument for the critical depth hypothesis is that blooms are a consequence of increased phytoplankton growth rates resulting from shoaling of the mixed layer above the critical depth. The critical depth is a surface mixing depth where phytoplankton biomass growth equals phytoplankton biomass losses. In this hypothesis, losses are both constant and independent of growth. The decline in biomass may be due to grazing, sinking, dilution, vertical mixing, infection, or parasitism.  When the surface mixed layer becomes shallower than the critical depth, initiation of the seasonal bloom occurs due to phytoplankton growth exceeding loss. There is a correlation of phytoplankton growth with springtime increases of light, temperature, and shallower stratification depths. 
Climate warming may increase stratification or decrease mixed layer depth during the winter, which would enhance the vernal bloom or increase phytoplankton biomass if this hypothesis governed spring phytoplankton bloom dynamics. A primary criticism of this resource-based view is that spring blooms occur in the absence of stratification or shoaling of the mixed layer. Dilution-recoupling Hypothesis - an ecosystem-based view. The dilution-recoupling hypothesis is an ecosystem-based view of the North Atlantic annual phytoplankton bloom. This hypothesis focuses on the physical processes that alter the balance between growth and grazing.  The spring bloom is considered to be one feature of an annual cycle, and other features during the cycle “set the stage” for this bloom to occur. This ecosystem-based view is based upon a dilution experiment where the addition of seawater dilutes predators but does not change the growth of phytoplankton. Thus, growth rates increase with dilution. Although the dilution effect is transient, predator-prey interactions can be maintained if the rate of the addition of water equals the rate of growth. The deepening of the surface mixed layer dilutes the predator-prey interactions and decouples growth and grazing. When the mixed layer stops deepening, the increase in growth rate becomes apparent, but now growth and grazing become coupled again. The shoaling of the mixed layer concentrates predators, thereby increasing grazing pressure. However, the increase in light availability counters grazing pressure, which allows growth rates to remain high. In late spring, when the mixed layer is even more shallow, nutrient depletion or overgrazing ends the bloom—losses exceed growth at this point in the cycle.   Climate warming would increase stratification and suppress winter mixing that occurs with the deepening of the mixed layer. The suppression of winter mixing would decrease phytoplankton biomass under this hypothesis. Physical oceanographic processes. Mixed Layer Depth Debate. Meso-scale Eddies Mesoscale eddies play a significant role in modulating the Mixed Layer Depth (MLD). Fluctuations created by mesoscale eddies modulate nutrients in the base of the mixed layer. These modulations, along with light availability, drive the abundance of phytoplankton in the region. The availability of phytoplankton significantly affects the marine food web and ocean health. The fast-moving currents in the Gulf Stream meander and pinch-off to create eddies. These eddies retain the physical properties of their parent water mass (e.g. temperature, density, salinity, and other ocean dynamic properties) when they separate. As the eddies migrate, their physical properties change as they mix with the surrounding water. In the Gulf Stream, migrating eddies are known as anticyclonic or cyclonic eddies based on the direction in which they spin (clockwise vs. counter-clockwise). The two eddies differ in motion, physical properties, and, consequently, their effects on biology and chemistry of the ocean. The Coriolis force combined with high velocity currents drive eddy motion. This motion creates a 'bulge,' i.e., high sea surface height (SSH) in the center of the Anticyclonic eddies. In contrast, cyclonic eddies exhibit a low SSH in the center. The SSH in both anticyclonic and cyclonic decreases and increases, respectively, as the distance from the center increases. Upwelling and downwelling processes in the eddies create a cold and warm core. 
Downwelling in the anticyclonic eddy prevents colder water from entering the surface, thus creating a warm-core in the center. Whereas in the cyclonic eddy, the upwelling entrains deep cold water and forms a cold-core. Previous studies show the deepening effects of MLD under anticyclonic eddies and shoaling of MLD in cyclonic eddies. These phenomena may be due to increased heat loss to the atmosphere in anticyclonic eddies. This loss of heat causes the sinking of dense water, referred to as convective mixing, and the deepening of the MLD. In contrast, in cyclonic eddies the water temperature at the core is less cold than the Anticyclonic eddy. This therefore does not lead to deepening of the MLD. Studies conducted in the region via a network of Argo floats and model simulations created through satellite data have shown cases of the opposite phenomena. The deepening and shoaling of MLD via eddies is ubiquitous and varies seasonally. Such anomalies are most significant in the winter. Thus, the role of meso-scale eddies in MLD is complex, and a function of simultaneous processes where enhanced wind shear induced currents contribute to a shallowing of the MLD in anticyclonic eddies. Relevant Atmospheric Processes. Marine Boundary Layer. The marine boundary layer (MBL) is the part of the atmosphere in direct contact with the ocean surface. The MBL is influenced by the exchange of heat, moisture, gases, particulates, and momentum, primarily via turbulence. The MBL is characterized by the formation of convective cells (or vertical flow of air) above the ocean surface, which perturbs the direction of the mean surface wind and generates texture, roughness, and waves on the sea's surface. Two types of boundary layers exist. One is a stable, convective layer found between the lower 100m of the atmosphere extending up to approximately 3 km in height, and is referred to as the convective boundary layer (CBL). The other boundary layer forms as a result of a surface atmospheric inversion. This generally occurs closer to the surface in the absence of turbulence and vertical mixing, and is determined through the interpretation of vertical humidity and temperature profiles. The MBL is often a localized and temporally dynamic phenomenon, and therefore its height into the air column can vary considerably from one region to another, or even across the span of a few days. The North Atlantic is a region where diverse and well-formed MBL clouds are commonly formed, and where MBL layer height can be between 2.0-and 0.1 km in height Regional Atmospheric Processes. The westerlies are prevailing winds in the middle latitudes (between 35 and 65 degrees latitude), which blow in regions north or southward of the high-pressure sub-tropical regions of the world. Consequently, aerosols sampled over the North Atlantic Ocean will be influenced by air masses originating in North America, and therefore be characterized by both the natural terrestrial and anthropogenic inputs. Relevant to NAAMES are the emissions from industry and urban environments in eastern North America, which emit substantial quantities of sulfates, black carbon, and aromatic compounds. Such substances can be transported hundreds of kilometers over the sea. This contribution of continental influences may create a false positive signal in the biological fluorescence signals being measured and could affect cloud microphysical properties in the open North Atlantic Ocean. 
Furthermore, aerosols such as black carbon mixed with carbon dioxide and other greenhouse gases are emitted through the incomplete combustion of fossil fuels from ship engines. These unburned hydrocarbons are present in the marine boundary layer of the North Atlantic and most other remote oceanic regions. As these particles age or are chemically transformed as a function of time in the air, they may alter microphysical and chemical properties as they react with other airborne particles. Role of aerosols. Aerosols. Aerosols are very small, solid particles or liquid droplets suspended in the atmosphere or inside another gas and are formed through natural processes or by human actions. Natural aerosols include volcanic ash, biological particles, and mineral dust, as well as black carbon from the natural combustion of biomass, such as wildfires. Anthropogenic aerosols are those that have been emitted from human actions, such as fossil fuel burning or industrial emissions. Aerosols are classified as either primary or secondary depending on whether they have been directly emitted into the atmosphere (primary) or whether they have reacted and changed in composition (secondary) after being emitted from their source. Aerosols emitted from the marine environment are one of the largest components of primary natural aerosols. Marine primary aerosols interact with anthropogenic pollution, and through these reactions produce other secondary aerosols. One of the most significant yet uncertain components of predictive climate change models is the impact of aerosols on the climate system. Aerosols affect Earth's radiation balance directly and indirectly. The direct effect occurs when aerosol particles scatter, absorb, or exhibit a combination of these two optical properties when interacting with incoming solar and infrared radiation in the atmosphere. Aerosols that typically scatter light include sulfates, nitrates, and some organic particles, while those that tend to exhibit a net absorption include mineral dust and black carbon (or soot). The second mechanism by which aerosols alter the planet's temperature is called the indirect effect, which occurs when a cloud's microphysical properties are altered, causing either an increase in the reflection of incoming solar radiation or an inhibited ability of clouds to develop precipitation. The first indirect effect is an increase in the number of water droplets, which leads to an increase in clouds that reflect more solar radiation and therefore cool the planet's surface. The second indirect effect (also called the cloud lifetime effect) is the increase in droplet numbers, which simultaneously causes a decrease in droplet size and therefore less potential for precipitation. That is, smaller droplets mean clouds live longer and retain higher liquid water content, which is associated with lower precipitation rates and higher cloud albedo. This highlights the importance of aerosol size as one of the primary determinants of aerosol quantity in the atmosphere, of how aerosols are removed from the atmosphere, and of the implications of these processes for climate. Fine particles are generally those below 2 micrometers (μm) in diameter. Within this category, the range of particles that accumulate in the atmosphere (due to low volatility or condensation growth of nuclei) is from 0.1-1 μm, and these are usually removed from the air through wet deposition. Wet deposition can be precipitation, snow or hail. 
On the other hand, coarse particles, such as old sea-spray and plant-derived particles, are removed from the atmosphere via dry deposition. This process is sometimes also called sedimentation. However, different types of biogenic organic aerosols exhibit different microphysical properties, and therefore their removal mechanisms from the air will depend on humidity. Without a better understanding of aerosol sizes and composition in the North Atlantic Ocean, climate models have limited ability to predict the magnitude of the cooling effect of aerosols in global climate. Sea-spray Aerosols. Although the amount and composition of aerosol particles in the marine atmosphere originate both from continental and oceanic sources and can be transported great distances, freshly emitted sea-spray aerosols (SSA) constitute one of the major sources of primary aerosols, especially from moderate and strong winds. The estimated global emission of pure sea-salt aerosols are on the order of 2,000-10,000 Tg per year. The mechanism by which this occurs starts with the generation of air bubbles in breaking waves, which then rise to the atmosphere and burst into hundreds of ultra-fine droplets ranging from 0.1-1.0 μm in diameter. Sea-spray aerosols are mostly composed of inorganic salts, such as sodium and chloride. However, these bubbles sometimes carry organic material found in seawater, forming secondary organic compounds (SOAs) such as dimethyl sulfide (DMS). This compound plays a key role in the NAAMES project. An important biogeochemical consequence of SSA are their role as cloud condensation nuclei. These are particles that provide the surfaces necessary for water vapor to condensate below supersaturation conditions. The freezing of organic matter in these aerosols promotes the formation of clouds in warmer and drier environments than where they would otherwise form, especially at high latitudes such as the North Atlantic Ocean. Organic matter in these aerosols help nucleation of water droplets at these regions, yet plenty of unknowns remain, such as what fraction contain ice-freezing organic materials, and from what biological sources. Nevertheless, the role of phytoplankton blooms as a source of enhanced ice nucleating particles has been confirmed in laboratory experiments, implying the important role of these aerosols in cloud radiative forcing. Primary marine aerosols created through bubble-bursting emission have been measured in the North Atlantic during spring 2008 by the International Chemistry Experiment in the Arctic Lower Troposphere (ICEALOT). This research cruise measured clean, or background, areas and found them to be mostly composed of primary marine aerosols containing hydroxyl (58% ±13) and alkene (21% ±9) functional groups, indicating the importance of chemical compounds in the air with biological origin. Nonetheless, the small temporal scale of these measurements, plus the inability to determine the exact source of these particles, justifies the scientific need for a better understanding of aerosols over this region. Bioaerosols. Bioaerosols are particles composed of living and non-living components released from terrestrial and marine ecosystems into the atmosphere. These can be forest, grasslands, agricultural crops, or even marine primary producers, such as phytoplankton. Primary biological aerosol particles (PBAPs) contain a range of biologic materials, including bacteria, archaea, algae, and fungi, and have been estimated to comprise as much as 25% of global total aerosol mass. 
Dispersal of these PBAPs occur via direct emission into the atmosphere through fungi spores, pollen, viruses, and biological fragments. Ambient concentrations and sizes of these particles vary by location and seasonality, but of relevance to NAAMES are the transient sizes of fungi spores (0.05 to 0.15 μm in diameter) and larger sizes (0.1 to 4 μm) for bacteria. Marine organic aerosols (OA) have been estimated through their correlation to chlorophyll pigments to vary in magnitude between 2-100 Tg per year. However, recent studies of OA are correlated with DMS production and to a lesser extent chlorophyll, suggesting that organic material in sea salt aerosols are connected to biological activity in the sea's surface. The mechanisms contributing to marine organic aerosols thus remain unclear, and were a main focus of NAAMES. There is some evidence that marine bioaerosols containing cyanobacteria and microalgae may be harmful to human health. Phytoplankton can absorb and accumulate a variety of toxic substances, such as methylmercury, polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons. Cyanobacteria are known to produce toxins that can be aerosolized, which when inhaled by humans can affect the nervous and liver systems.  For example, Caller et al. (2009) suggested that bioaerosls from cyanobacteria blooms could play a role in high incidences of amyotrophic lateral sclerosis (ALS).   In addition, a group of toxic compounds called microcystins are produced by some cyanobacteria in the genera "Microcystis, Synechococcus", and "Anabaena".  These microcystins have been found in aerosols by a number of investigators, and such aerosols have been implicated as causing isolated cases of pneumonia, gastroenteritis, and non-alcoholic fatty liver disease. Dinoflagellates are also thought to be involved in bioaerosol toxicity, with the genus "Ostreopsis" causing symptoms such as dyspnea, fever, rhinorrhea, and cough. Importantly, marine toxic aerosols have been found as far as 4 km inland, but investigators recommend additional studies that trace the fate of bioaerosols further inland. The fungi phylum of Ascomycota has been understood as the major contributor (72% in relative proportion to other phyla) to marine bioaerosols, at least in the Southern Ocean. Of these, Agaricomycetes constitutes the majority (95%) of fungi classes inside this phylum. Within this group, the "Penicillium" genus is most frequently detected in marine fungi aerosols. Fungi bioaerosols can also serve as ice nuclei, and therefore also impact the radiative budget in remote ocean regions, such as the North Atlantic Ocean. In addition to sea-spray aerosols (see section above), biogenic aerosols produced by phytoplankton are also important source of small (typically 0.2 μm) cloud condensation nuclei (CCN) particles suspended in the atmosphere. The Intergovernmental Panel on Climate Change (IPCC), forecasted an increase in global surface ocean temperatures by +1.3 to +2.8 degrees Celsius over the next century, which will cause spatial and seasonal shifts in North Atlantic phytoplankton blooms.  Changes in community dynamics will greatly affect the bioaerosols available for cloud condensation nuclei. Therefore, cloud formation in the North Atlantic is sensitive to bioaerosol availability, particle size, and chemical composition. Marine Bioaerosols and Global Radiation Balance. Marine aerosols contribute significantly to global aerosols. 
Traditionally, biogeochemical cycling and climate modeling have focused on sea-salt aerosols, with less attention on biogenically-derived aerosol particles such as sulfates and related chemical species emitted from phytoplankton. For example, in the eastern North Atlantic during the spring 2002 bloom, high phytoplankton activity was marked more by organic carbon (both soluble and insoluble species) than by sea-salts. The organic fraction from phytoplankton contributed as much as 63% of the aerosol mass in the atmosphere, while during winter periods of low biological activity it only accounted for 15% of the aerosol mass. Those data provided early empirical evidence of this emission phenomena, while also showing that organic matter from ocean biota can enhance cloud droplet concentrations by as much as 100%. Data to test the CLAW Hypothesis. There is growing evidence describing how oceanic phytoplankton affect cloud albedo and climate through the biogeochemical cycle of sulfur, as originally proposed in the late 1980s. The CLAW hypothesis conceptualizes and tries to quantify the mechanisms by which phytoplankton can alter global cloud cover and provide planetary-scale radiation balance or homeostasis regulation. As solar irradiance drives primary production in the upper layers of the ocean, aerosols are released into the planetary boundary layer. A percentage of these aerosols are assimilated into clouds, which then can generate a negative feedback loop by reflecting solar radiation. The ecosystem-based hypothesis of phytoplankton bloom cycles (explored by NAAMES) suggests that a warming ocean would lead to a decrease in phytoplankton productivity. Decreased phytoplankton would cause a decrease in aerosol availability, which may lead to fewer clouds. This would result in a positive feedback loop, where warmer oceans lead to fewer clouds, which allows for more warming. One of the key components of the CLAW hypothesis is the emission of dimethylsulfoniopropionate (DMSP) by phytoplankton. Another chemical compound, dimethyl sulfide (DMS), has been identified as a major volatile sulfur compound in most oceans. DMS concentrations in the world's seawater have been estimated to be, on average, on the order of 102.4 nanograms per liter (ng/L). Regional values of the North Atlantic are roughly 66.8 ng/L. These regional values vary seasonally and are influenced by the effects of continental aerosols. Nonetheless, DMS is one of the dominant sources of biogenic volatile sulfur compounds in the marine atmosphere. Since its conceptualization, several research studies have found empirical and circumstantial evidence supporting the CLAW hypothesis in mid-latitudes of the Atlantic Ocean. The NAAMES campaign sought to provide an empirical understanding of the effects of marine bioaerosols on cloud formation and global radiation balance by quantifying the mechanisms underlying the CLAW hypothesis. Emissions from the sea surface micro-layer. Dissolved organic compounds containing remnants of polysaccharides, proteins, lipids, and other biological components are released by phytoplankton and bacteria. They are concentrated into nano-sized gels on the surface of the oceans. Specifically, such compounds are concentrated in the sea surface micro-layer (SML), the uppermost film of water in the ocean. The SML is considered a "skin" within the top 1 millimeter of water where the exchange of matter and energy occurs between the sea and atmosphere. 
The biological, chemical, and physical processes occurring here may be some of the most important anywhere on Earth, and this thin layer experiences the first exposure to climatic changes such as heat, trace gases, winds, precipitation, and also wastes such as nanomaterials and plastics. The SML also has important roles in air-sea gas exchange and the production of primary organic aerosols. A study using water samples and ambient conditions from the North Atlantic Ocean found that a polysaccharide-containing exopolymer and a protein are easily aerosolized in surface ocean waters, and scientists were able to quantify the amount and size resolution of the primary sea to air transport of biogenic material. These materials are small enough (0.2μm) to be largely emitted from phytoplankton and other microorganisms. However, predicting aerosol quantity, size distribution, and composition through water samples are currently problematic. Investigators suggest that future measurements focus on comparing fluorescence detection techniques that are able to detect proteins in aerosols. NAAMES filled this research gap by providing a fluorescent-based instrument (See section on Atmospheric Instruments below), both in the air column and near the sea's surface. NAAMES Objectives. To accomplish this objective, a combination of ship-based, airborne, and remote sensing measurements was used.  NAAMES conducted multiple campaigns that occurred during the various phases of the cycle in order to capture the important transitory features of the annual bloom for a comprehensive view. This objective seeks to reconcile the competing resource-based and ecosystem-based hypotheses.  NAAMES goal was to provide the mechanistic field studies necessary to understand a more holistic view of the annual bloom cycle. The effects of aerosols on clouds is an understudied topic despite the major implications it could have for predicting future climate change. This objective addressed this gap by using combined measurement methods to understand the contribution of various aerosols to cloud formation produced during each major phase of the annual phytoplankton cycle. Methodology. Field Campaigns. Four field campaigns were conducted to target the four specific changes during the annual plankton cycle. The four NAAMES field campaigns synchronized data collections from the ship, air, and satellites, and were strategically timed to capture the four unique phases of plankton blooms in the North Atlantic: winter transition, accumulation phase, climax transition, and depletion phase. Sampling. Research cruises on the R/V Atlantis. Ship-based instruments measured gases, particles, and volatile organic compounds above the ocean surface. Water samples were also collected to describe the plankton community composition, rates of productivity and respiration, and physiologic stress.   All four campaigns followed a similar ship and flight plan. The R/V Atlantis departed from Woods Hole, Massachusetts, to embark on 26-day cruises covering 4700 nautical miles. The ship first sailed to 40formula_0W. It then moved due north from 40formula_0N to 55formula_0N latitude along the 40formula_0W longitude parallel. This intensive south-north transect involved multiple stationary measurements. The ship then returned to port in Woods Hole. Underway sampling (i.e., while the ship was moving) occurred along the entire cruise using the ship’s flow-through seawater analysis system. 
Then, once it reached the beginning of the triangular transect area, the ship stopped twice a day at dawn and noon for stationary measurements to collect water samples for incubation (e.g. respiration), and perform water-column sampling and optical measurements. Scientists also used autonomous ARGO floats at three locations during each cruise. These autonomous floating instruments measured parameters such as chlorophyll (a measure of phytoplankton abundance), light intensity, temperature, water density, and suspended particulates. A total of 12 autonomous instruments were deployed during the four cruises. Airborne sampling. Airplane-based measurements were designed to run at precisely the same time as the research vessel cruises so that scientists could link ocean-level processes with those in the lower atmosphere. Satellite data were also synthesized to create a more complete understanding of plankton and aerosol dynamics, and their potential impact on climate and ecosystems. Airborne sampling involved a C-130 equipped with sensitive scientific instruments.  The flight crew based at St. John’s, Canada, conducted 10-hour flights in a “Z-pattern” above the study area. Flights took place at both high-altitudes and low-altitudes to measure aerosol heights and aerosol/ecosystem spatial features.  High-altitude flights collected data on above-cloud aerosols and atmospheric measurements of background aerosols in the troposphere. Once above the ship, the airplane underwent spiral descents to low-altitude to acquire data on the vertical structure of aerosols. These low-altitude flights sampled aerosols within the marine boundary layer. Cloud sampling measured in-cloud droplet number, density, and size measurements. Satellite Observations. Satellite measurements were used in near real-time to help guide ship movement and flight planning. Measurements included sea surface height, sea surface temperature, ocean color, winds, and clouds. Satellite data also provided mean surface chlorophyll concentrations via NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), as a proxy for primary productivity. Autonomous ARGO Floats. Autonomous "in-situ" instruments called Argo floats were deployed to collect physical properties and bio-optical measurements. Argo floats are a battery-powered instrument that uses hydraulics to control its buoyancy to descend-and-ascend in the water. The Argo floats collect both the biological and physical properties of the ocean. The data collected from the floats are transmitted remotely via the ARGOS satellite. Atmospheric Instruments. Instruments used to characterize processes in the atmosphere can be divided into those that measure gas composition, and those that measure the composition of optical properties. Generally, aerosol sampling instruments are categorized by their ability to measure optical, physical, or chemical properties. Physical properties include parameters such as the particle diameter and shape. Two commonly measured optical parameters are absorption and scattering of light by aerosol particles. The absorption and scattering coefficients depend on aerosol quantity. Total light scattering by aerosol particles can be measured with a nephelometer. In contrast, aerosol light absorption can be measured using several types of instruments, such as the Particle Soot/Absorption Photometer (PSAP) and the Continuous Light Absorption Photometer (CLAP). 
In both of these instruments, particles are collected on a filter and light transmission through the filter is monitored continuously.  This method is based on the integrating plate technique, in which the change in optical transmission of a filter caused by particle deposition is related to the light absorption coefficient of the deposited particles using Beer-Lambert's Law. One of the instruments used to characterize the amount and composition of bioaerosols was the Wideband Integrated Bioaerosol Sensors (WIBS). This instrument uses ultraviolet light-induced fluorescence (UV-LIF) to detect the fluorescence signals from common amino acids like tryptophan and nicotinamide adenine dinucleotide (NADH). A lamp flashing the gas xenon is able to detect particle’s size and shape using high precision ultraviolet wavebands (280 nm and 370 nm). Scientific Findings. Results. Some results stemming from NAAMES research include scientific articles on aerosols and cloud condensation nuclei, phytoplankton annual cycles, phytoplankton physiology, and mesoscale biology. There have also been publications on improved methodologies including new remote sensing algorithms and advances in satellite remote sensing. Phytoplankton annual cycles. Seasonal changes in phytoplankton biomass are controlled by predator-prey interactions and changes in mixed layer conditions such as temperature, light, and nutrients. Understanding the relative importance of these various factors at different stages of the seasonal cycle allows for better predictions of future ocean changes. One publication from NAAMES found the winter mixed layer depth to be positively correlated with spring chlorophyll concentrations in the Labrador Sea. Losses through sinking during the winter were compensated by net growth of phytoplankton, and this net wintertime growth was most likely a function of reduced grazing due to dilution. Phytoplankton physiology. Understanding taxonomic differences in photoacclimation and general phytoplankton community photoacclimation strategies is important for constructing models that rely on light as a major factor controlling bloom dynamics.  Furthermore, a better understanding of phytoplankton light-driven physiology can assist with better readings of satellite data on chlorophyll concentrations and sea surface temperature. A NAAMES study determined the photoacclimation responses of multiple taxonomic groups during a 4-day storm event that caused deep mixing and re-stratification in the subarctic Atlantic ocean. There were significant differences in photoacclimation and biomass accumulation at various depths of light intensity during the storm event. Mesoscale biology. One of the most recent results of the NAAMES campaign includes a better understanding of how biology helps draw atmospheric carbon dioxide down into the water column. Specifically, the impact of zooplankton vertical migration on carbon export to the deep sea via the Biological Pump was parametrized and modeled for the first time. Aerosols and cloud condensation nuclei. A clear seasonal difference in the quantity of biogenic sulfate aerosols was discovered in the North Atlantic as a result of the NAAMES campaign. These aerosols were traced to two different biogenic origins, both of them marine due to the lack of continental air mass influences during the study period. The biogenic origin was the production of dimethyl sulfide (DMS) by phytoplankton, which then act as cloud condensation nuclei (CCN) and affect cloud formation. 
This study classified the sulfates as "New Sulfate", formed by nucleation in the atmosphere; and "Added Sulfate", which were existing aerosols in the atmosphere where sulfate was incorporated. During the November 2015 cruise (Campaign 1), primary sea salt was the main mechanism (55%) for CCN budget. However, during the spring bloom in May–June 2016 (Campaign 2) Added Sulfate accounted for 32% of CCN while sea-salt accounted for 4%. These empirical measurements by seasonality will help improve the accuracy of climate models that simulate warming or cooling effects of marine bioaerosols. Improved measurement methodologies. NAAMES scientists developed several novel measurement techniques during the project. For example, sorting flow cytometry combined with bioluminescent detection of ATP and NADH provides relatively precise determination of phytoplankton net primary productivity, growth rate, and biomass. Both laboratory and field tests validated this approach, which does not require traditional carbon-14 isotope incubation techniques. Other NAAMES investigators employed new techniques to measure particle size distribution, which is an important metric of biogeochemistry and ecosystem dynamics. By coupling a submersible laser diffraction particle sizer with a continuously flowing seawater system, scientists were able to accurately measure particle size distribution just as well as more established (but more time- and effort-intensive) methods such as Coulter counter and flow-cytobot. In addition to new oceanographic techniques, the NAAMES team also developed a novel method of collecting cloud water. An aircraft-mounted probe used inertial separation to collect cloud droplets from the atmosphere. Their axial cyclone technique was reported to collect cloud water at a rate of 4.5 ml per minute, which was stored and later analyzed in the lab. New remote sensing algorithms. Advances in remote sensing algorithms were also developed during the NAAMES expeditions. Zhang et al. provided atmospheric corrections for the hyperspectral geostationary coastal and air pollution events airborne simulator (GCAS) instrument using both vicarious and cloud shadow approaches. Other scientists tested new approaches to measuring cloud droplet size, and found that using a research scanning polarimeter correlated well with direct cloud droplet probe measurements and high-spectral resolution LIDAR. Their findings suggest that polarimetric droplet size retrieval may be an accurate and useful tool to measure global cloud droplet size. Advances in satellite LIDAR ocean remote sensing. The NAAMES team made advances in the use of LIDAR in oceanography. For example, Behrenfeld et al. (2017) showed that space-based LIDAR could capture annual cycles of phytoplankton dynamics in regions poleward of 45formula_0 latitude. Using these new techniques, they found that Antarctic phytoplankton biomass mainly changes due to ice cover, while in the arctic the changes in phytoplankton are driven mainly by ecological processes. In another paper, the team described new advances in satellite LIDAR techniques, and argued that a new era of space-based LIDAR has the potential to revolutionize oceanographic remote sensing. Future Implications. NAAMES provided groundbreaking data on aerosols and their relationship to numerous ecosystems and oceanographic parameters. Their discoveries and methodologic innovations can be employed by modelers to determine how future oceanic ecosystem changes could affect climate. NAAMES Data. 
Finalized versions of field data can be viewed through NASA’s Distributed Active Archive Centers (DAACs).  Data for each cruise campaign were stored as separate projects and each campaign’s information was publicly released within 1 year of measurement collection. Ship-based information can be viewed through the SeaWiFS Bio-optical Archive and Storage System (SeaBASS) while airborne information can be viewed through the Atmospheric Science Data Center (ASDC).  NAAMES anticipates many additional publications to be released in the coming years from ongoing research and processing of data. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
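The filter-based absorption photometers described under Atmospheric Instruments above (the PSAP and CLAP) relate a drop in filter transmission to an aerosol light-absorption coefficient through the Beer-Lambert relation of the integrating plate technique. The minimal Python sketch below illustrates only that relation; the filter spot area, sample flow, and transmission values are invented for the example, and the instrument-specific scattering and loading corrections applied in practice are omitted.

# Illustrative only: sigma_abs = (A / V) * ln(T_start / T_end), where A is the
# filter spot area and V is the volume of air drawn through the filter during
# the averaging interval. All numbers below are assumed for the example.
import math

def absorption_coefficient(spot_area_m2, sampled_volume_m3, t_start, t_end):
    """Aerosol light-absorption coefficient (1/m) from one transmission interval."""
    return (spot_area_m2 / sampled_volume_m3) * math.log(t_start / t_end)

if __name__ == "__main__":
    spot_area = 2.0e-5              # m^2, assumed filter spot area (~5 mm diameter)
    flow = 1.0 / 60000.0            # m^3/s, assumed 1 L/min sample flow
    interval = 300.0                # s, averaging interval
    t_start, t_end = 0.9500, 0.9493 # relative transmission before/after the interval
    sigma = absorption_coefficient(spot_area, flow * interval, t_start, t_end)
    print(f"absorption coefficient ~ {sigma * 1e6:.1f} Mm^-1")  # roughly 3 per megametre here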
[ { "math_id": 0, "text": "\\circ" } ]
https://en.wikipedia.org/wiki?curid=62264747
6226587
Necklace (combinatorics)
In combinatorics, a "k"-ary necklace of length "n" is an equivalence class of "n"-character strings over an alphabet of size "k", taking all rotations as equivalent. It represents a structure with "n" circularly connected beads which have "k" available colors. A "k"-ary bracelet, also referred to as a turnover (or free) necklace, is a necklace such that strings may also be equivalent under reflection. That is, given two strings, if each is the reverse of the other, they belong to the same equivalence class. For this reason, a necklace might also be called a fixed necklace to distinguish it from a turnover necklace. Formally, one may represent a necklace as an orbit of the cyclic group acting on "n"-character strings over an alphabet of size "k", and a bracelet as an orbit of the dihedral group. One can count these orbits, and thus necklaces and bracelets, using Pólya's enumeration theorem. Equivalence classes. Number of necklaces. There are formula_0 different "k"-ary necklaces of length "n", where formula_1 is Euler's totient function. When the beads are restricted to particular color multiset formula_2, where formula_3 is the number of beads of color formula_4, there are formula_5 different necklaces made of all the beads of formula_6. Here formula_7 and formula_8 is the multinomial coefficient. These two formulas follow directly from Pólya's enumeration theorem applied to the action of the cyclic group formula_9 acting on the set of all functions formula_10. If all "k" colors must be used, the count is formula_11 where formula_12 are the Stirling number of the second kind. formula_13 (sequence in the OEIS) and formula_14 (sequence in the OEIS) are related via the Binomial coefficients: formula_15 and formula_16 Number of bracelets. The number of different "k"-ary bracelets of length "n" (sequence in the OEIS) is formula_17 where "N""k"("n") is the number of "k"-ary necklaces of length "n". This follows from Pólya's method applied to the action of the dihedral group formula_18. Case of distinct beads. For a given set of "n" beads, all distinct, the number of distinct necklaces made from these beads, counting rotated necklaces as the same, is = ("n" − 1)!. This is because the beads can be linearly ordered in "n"! ways, and the "n" circular shifts of such an ordering all give the same necklace. Similarly, the number of distinct bracelets, counting rotated and reflected bracelets as the same, is , for "n" ≥ 3. If the beads are not all distinct, having repeated colors, then there are fewer necklaces (and bracelets). The above necklace-counting polynomials give the number necklaces made from all possible multisets of beads. Polya's pattern inventory polynomial refines the counting polynomial, using variable for each bead color, so that the coefficient of each monomial counts the number of necklaces on a given multiset of beads. Aperiodic necklaces. An aperiodic necklace of length "n" is a rotation equivalence class having size "n", i.e., no two distinct rotations of a necklace from such class are equal. According to Moreau's necklace-counting function, there are formula_19 different "k"-ary aperiodic necklaces of length "n", where "μ" is the Möbius function. The two necklace-counting functions are related by: formula_20 where the sum is over all divisors of "n", which is equivalent by Möbius inversion to formula_21 Each aperiodic necklace contains a single Lyndon word so that Lyndon words form representatives of aperiodic necklaces. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N_k(n)=\\frac{1}{n}\\sum_{d\\mid n}\\varphi(d)k^{n/d} = \\frac{1}{n} \\sum_{i=1}^n k^{\\, {\\rm gcd}(i, n)} " }, { "math_id": 1, "text": " \\varphi " }, { "math_id": 2, "text": "\\mathcal{B} = \\{1^{n_1},\\ldots,k^{n_k} \\}" }, { "math_id": 3, "text": "n_i" }, { "math_id": 4, "text": "i \\in \\{1,\\ldots,k\\}" }, { "math_id": 5, "text": "N(\\mathcal{B}) = \\frac{1}{|\\mathcal{B}|} \\sum_{d | {\\rm gcd}(n_1,\\ldots,n_k)} {|\\mathcal{B}|/d \\choose n_{1}/d, \\ldots, n_{k}/d} \\phi(d)" }, { "math_id": 6, "text": " \\mathcal{B} " }, { "math_id": 7, "text": " |\\mathcal{B}| := \\sum\\limits_{i=1}^k n_i" }, { "math_id": 8, "text": "{m \\choose m_1, \\ldots, m_k}\n := \\frac{m!}{m_1! \\cdots m_k!} " }, { "math_id": 9, "text": "C_n" }, { "math_id": 10, "text": "f : \\{1,\\ldots,n\\} \\to\\{1,\\ldots,k\\}" }, { "math_id": 11, "text": "L_k(n)=\\frac{k!}{n}\\sum_{d\\mid n}\\varphi(d)S\\left(\\frac{n}{d}, k\\right)\\;," }, { "math_id": 12, "text": "S(n, k)" }, { "math_id": 13, "text": "N_k(n)" }, { "math_id": 14, "text": "L_k(n)" }, { "math_id": 15, "text": "N_k(n)=\\sum_{j=1}^k\\binom{k}{j} L_j(n)" }, { "math_id": 16, "text": "L_k(n)=\\sum_{j=1}^k(-1)^{k-j}\\binom{k}{j}N_j(n)" }, { "math_id": 17, "text": "B_k(n) = \\begin{cases}\n\\tfrac12 N_k(n) + \\tfrac14 (k+1)k^{n/2} & \\text{if }n\\text{ is even} \\\\[10px]\n\\tfrac12 N_k(n) + \\tfrac12 k^{(n+1)/2} & \\text{if }n\\text{ is odd}\n\\end{cases}\\quad," }, { "math_id": 18, "text": "D_n" }, { "math_id": 19, "text": "M_k(n)=\\frac{1}{n}\\sum_{d\\mid n}\\mu(d)k^{n/d}" }, { "math_id": 20, "text": "N_k(n) = \\sum_{d|n} M_k(d)," }, { "math_id": 21, "text": "M_k(n) = \\sum_{d|n} N_k(d)\\,\\mu\\bigl(\\tfrac{n}{d}\\bigr)." } ]
https://en.wikipedia.org/wiki?curid=6226587
62272076
Mikhail Kapranov
Russian mathematician (born 1962) Mikhail Kapranov (Михаил Михайлович Капранов; born 1962) is a Russian mathematician specializing in algebraic geometry, representation theory, mathematical physics, and category theory. He is currently a professor at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo. Kapranov graduated from Lomonosov University in 1982 and received his doctorate in 1988 under the supervision of Yuri Manin at the Steklov Institute in Moscow. Afterwards he worked at the Steklov Institute and from 1990 to 1991 at Cornell University. At Northwestern University he was an assistant professor from 1991 to 1993, an associate professor from 1993 to 1995, and a full professor from 1995 to 1999. He was a professor at the University of Toronto from 1999 to 2003 and a professor at Yale University from 2003 to 2014. In 1993 he was a Sloan Research Fellow. From fall 2018 to spring 2019 he was a visiting professor at the Institute for Advanced Study. From 1989 to 1990 he collaborated with Vladimir Voevodsky on formula_0-groupoids, following the proposal made by Alexander Grothendieck in "Esquisse d'un Programme". In 1990 Voevodsky and Kapranov published “formula_0-Groupoids as a Model for a Homotopy Category”, in which they claimed to provide a rigorous mathematical formulation and a logically valid proof of Grothendieck's idea connecting two classes of mathematical objects: formula_0-groupoids and homotopy types. In October 1998, Carlos Simpson published on arXiv the article “Homotopy Types of Strict 3-groupoids”, which argued that the main result of the “formula_0-groupoids” paper, published by Kapranov and Voevodsky in 1990, is false. It was not until 2013 that Voevodsky convinced himself that Carlos Simpson's article was correct. Kapranov was also involved in the beginning of Voevodsky's program for the development of motivic cohomology. With Israel Gelfand and Andrei Zelevinsky, Kapranov investigated generalized Euler integrals, formula_1-hypergeometric functions, formula_1-discriminants, and hyperdeterminants, and authored "Discriminants, Resultants, and Multidimensional Determinants" in 1994. According to Gelfand, Kapranov, and Zelevinsky: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... in an 1848 note on the resultant, Cayley ... laid out the foundations of homological algebra. The place of discriminants in the general theory of hypergeometric functions is similar to the place of quasi-classical approximation in quantum mechanics. ... The relation between differential operators and their highest symbols is the mathematical counterpart of the relation between quantum and classical mechanics; so we can say that hypergeometric functions provide a "quantization" of discriminants. In 1995 Kapranov provided a framework for a Langlands program for higher-dimensional schemes, and, with Victor Ginzburg and Éric Vasserot, extended the "Geometric Langlands Conjecture" from algebraic curves to algebraic surfaces. In 1998 Kapranov was an Invited Speaker at the International Congress of Mathematicians in Berlin, with the talk "Operads and Algebraic Geometry".
[ { "math_id": 0, "text": "\\infty" }, { "math_id": 1, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=62272076
62279536
Electrostatic septum
An electrostatic septum is a dipolar electric field device used in particle accelerators to inject or extract a particle beam into or from a synchrotron. In an electrostatic septum, basically an electric field septum, two separate areas can be identified, one with an electric field and a field free region. The two areas are separated by a physical wall that is called the septum. An important feature of septa is to have a homogeneous field in the gap and no field in the region of the circulating beam. The basic principle. Electrostatic septa provide an electric field in the direction of extraction, by applying a voltage between the septum foil and an electrode. The septum foil is very thin to have the least interaction with the beam when it is slowly extracted. Slowly means over millions of turns of the particles in the synchrotron. The orbiting beam generally passes through the hollow support of the septum foil, which ensures a field free region, as not to affect the circulating beam. The field free region is achieved by using the hollow support of the septum and the septum foil itself as a Faraday cage. The extracted beam passes just on the other side of the septum, where the electric field changes the direction of the beam to be extracted. The septum separates the gap field between the electrode and the foil from the field free region for the circulating beam. Electrostatic septa are always sitting in a vacuum tank to allow high electric fields, since the vacuum works as an insulator between the septum and high voltage electrode. To allow precise matching of the septum position with the circulation beam trajectory, the septum is often fitted with a displacement system, which allows parallel and angular displacement with respect to the circulating beam. Great difficulty lies in the choice of materials and the manufacturing techniques of the different components. In the figure a typical cross section of an electrostatic septum is shown. The septum foil and its support are marked in blue, while the electrode is marked in red. In the lower part of the figure the electric field E is shown as it could be measured on the axis indicated as a dotted line in the cross section. The field free region is inside the support of the septum foil. The electric field E in the gap between the septum foil and the electrode is homogeneous on the axis and is equal to: formula_0 Where V is the voltage applied to the electrode and d is the distance between the septum foil and the electrode. Typical technical specifications. Typical device specifications are listed below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
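To give a sense of the magnitudes involved, the short Python sketch below evaluates the magnitude of the gap field from the applied voltage and the small-angle deflection it gives the extracted beam. The voltage, gap width, septum length, and beam momentum are assumed example values rather than the specifications of any particular device, and the kick estimate theta ≈ E·L/(beta·(pc/q)) is a simplified single-pass approximation of the actual beam dynamics.

# Illustrative numbers only; all inputs are assumptions for the example.
V_gap = 200e3        # V, assumed voltage between electrode and septum foil
d_gap = 0.020        # m, assumed electrode-to-foil distance
L_septum = 3.0       # m, assumed effective septum length
pc_over_q = 400e9    # V, a 400 GeV/c proton (pc/q expressed in volts)
beta = 1.0           # ~1 for an ultra-relativistic beam

E_field = V_gap / d_gap                            # field magnitude in the gap (zero on the orbit side)
theta = E_field * L_septum / (beta * pc_over_q)    # rad, deflection of the extracted beam

print(f"gap field:        {E_field / 1e6:.1f} MV/m")        # 10.0 MV/m with these inputs
print(f"deflection angle: {theta * 1e6:.1f} microradians")  # 75.0 microradians with these inputs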
[ { "math_id": 0, "text": " E = - \\frac{V}{d}" } ]
https://en.wikipedia.org/wiki?curid=62279536
622844
Homogeneous function
Function with a multiplicative scaling behaviour In mathematics, a homogeneous function is a function of several variables such that the following holds: If each of the function's arguments is multiplied by the same scalar, then the function's value is multiplied by some power of this scalar; the power is called the degree of homogeneity, or simply the "degree". That is, if k is an integer, a function f of n variables is homogeneous of degree k if formula_0 for every formula_1 and formula_2 For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k. The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function formula_3 between two F-vector spaces is "homogeneous" of degree formula_4 if for all nonzero formula_5 and formula_6 This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that formula_7 implies formula_8 for every nonzero scalar s. In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for formula_9 and allowing any real number k as a degree of homogeneity. Every homogeneous real function is "positively homogeneous". The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes. Definitions. The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of 19th century, the concept has been naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article. There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers. The second one supposes to work over the field of real numbers, or, more generally, over an ordered field. This definition restricts to positive values the scaling factor that occurs in the definition, and is therefore called "positive homogeneity", the qualificative "positive" being often omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous. The restriction of the scaling factor to real positive values allows also considering homogeneous functions whose degree of homogeneity is any real number. General homogeneity. Let V and W be two vector spaces over a field F. 
A linear cone in V is a subset C of V such that formula_10 for all formula_11 and all nonzero formula_12 A "homogeneous function" f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies formula_13 for some integer k, every formula_14 and every nonzero formula_12 The integer k is called the "degree of homogeneity", or simply the "degree" of f. A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its "cone of definition" is the linear cone of the points where the value of denominator is not zero. Homogeneous functions play a fundamental role in projective geometry since any homogeneous function f from V to W defines a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomial of the same degree) play an essential role in the Proj construction of projective schemes. Positive homogeneity. When working over the real numbers, or more generally over an ordered field, it is commonly convenient to consider "positive homogeneity", the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by ""s" &gt; 0" in the definitions of a linear cone and a homogeneous function. This change allow considering (positively) homogeneous functions with any real number as their degrees, since exponentiation with a positive real base is well defined. Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since formula_15 if formula_16 This remains true in the complex case, since the field of the complex numbers formula_17 and every complex vector space can be considered as real vector spaces. Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the "fundamental theorem on homogeneous functions". Examples. Simple example. The function formula_19 is homogeneous of degree 2: formula_20 Absolute value and norms. The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since formula_21 if formula_22 and formula_23 if formula_24 The absolute value of a complex number is a positively homogeneous function of degree formula_25 over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers. More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or semi-norm is defined on a vector space over the complex numbers, this vector space has to be considered as vector space over the real number for applying the definition of a positively homogeneous function. Linear functions. 
Any linear map formula_3 between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: formula_26 for all formula_27 and formula_6 Similarly, any multilinear function formula_28 is homogeneous of degree formula_29 by the definition of multilinearity: formula_30 for all formula_27 and formula_31 Homogeneous polynomials. Monomials in formula_32 variables define homogeneous functions formula_33 For example, formula_34 is homogeneous of degree 10 since formula_35 The degree is the sum of the exponents on the variables; in this example, formula_36 A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, formula_37 is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions. Given a homogeneous polynomial of degree formula_4 with real coefficients that takes only positive values, one gets a positively homogeneous function of degree formula_38 by raising it to the power formula_39 So for example, the following function is positively homogeneous of degree 1 but not homogeneous: formula_40 Min/max. For every set of weights formula_41 the following functions are positively homogeneous of degree 1, but not homogeneous: Rational functions. Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if formula_18 is homogeneous of degree formula_44 and formula_45 is homogeneous of degree formula_29 then formula_46 is homogeneous of degree formula_47 away from the zeros of formula_48 Non-examples. The homogeneous real functions of a single variable have the form formula_49 for some constant c. So, the affine function formula_50 the natural logarithm formula_51 and the exponential function formula_52 are not homogeneous. Euler's theorem. Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solution of a specific partial differential equation. More precisely: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Euler's homogeneous function theorem —  If f is a (partial) function of n real variables that is positively homogeneous of degree k, and continuously differentiable in some open subset of formula_53 then it satisfies in this open set the partial differential equation formula_54 Conversely, every maximal continuously differentiable solution of this partial differentiable equation is a positively homogeneous function of degree k, defined on a positive cone (here, "maximal" means that the solution cannot be prolongated to a function with a larger domain). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof For having simpler formulas, we set formula_55 The first part results by using the chain rule for differentiating both sides of the equation formula_56 with respect to formula_57 and taking the limit of the result when s tends to 1. The converse is proved by integrating a simple differential equation. Let formula_58 be in the interior of the domain of f. For s sufficiently close to 1, the function formula_59 is well defined. The partial differential equation implies that formula_60 The solutions of this linear differential equation have the form formula_61 Therefore, formula_62 if s is sufficiently close to 1. 
If this solution of the partial differential equation would not be defined for all positive s, then the functional equation would allow to prolongate the solution, and the partial differential equation implies that this prolongation is unique. So, the domain of a maximal solution of the partial differential equation is a linear cone, and the solution is positively homogeneous of degree k. formula_63 As a consequence, if formula_64 is continuously differentiable and homogeneous of degree formula_65 its first-order partial derivatives formula_66 are homogeneous of degree formula_67 This results from Euler's theorem by differentiating the partial differential equation with respect to one variable. In the case of a function of a single real variable (formula_68), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form formula_69 for formula_70 and formula_71 for formula_72 The constants formula_73 and formula_74 are not necessarily the same, as it is the case for the absolute value. Application to differential equations. The substitution formula_75 converts the ordinary differential equation formula_76 where formula_77 and formula_78 are homogeneous functions of the same degree, into the separable differential equation formula_79 Generalizations. Homogeneity under a monoid action. The definitions given above are all specialized cases of the following more general notion of homogeneity in which formula_80 can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid. Let formula_81 be a monoid with identity element formula_82 let formula_80 and formula_83 be sets, and suppose that on both formula_80 and formula_83 there are defined monoid actions of formula_84 Let formula_4 be a non-negative integer and let formula_85 be a map. Then formula_18 is said to be homogeneous of degree formula_4 over formula_81 if for every formula_86 and formula_87 formula_88 If in addition there is a function formula_89 denoted by formula_90 called an absolute value then formula_18 is said to be absolutely homogeneous of degree formula_4 over formula_81 if for every formula_86 and formula_87 formula_91 A function is homogeneous over formula_81 (resp. absolutely homogeneous over formula_81) if it is homogeneous of degree formula_25 over formula_81 (resp. absolutely homogeneous of degree formula_25 over formula_81). More generally, it is possible for the symbols formula_92 to be defined for formula_93 with formula_4 being something other than an integer (for example, if formula_81 is the real numbers and formula_4 is a non-zero real number then formula_92 is defined even though formula_4 is not an integer). If this is the case then formula_18 will be called homogeneous of degree formula_4 over formula_81 if the same equality holds: formula_94 The notion of being absolutely homogeneous of degree formula_4 over formula_81 is generalized similarly. Distributions (generalized functions). A continuous function formula_18 on formula_95 is homogeneous of degree formula_4 if and only if formula_96 for all compactly supported test functions formula_97; and nonzero real formula_98 Equivalently, making a change of variable formula_99 formula_18 is homogeneous of degree formula_4 if and only if formula_100 for all formula_101 and all test functions formula_102 The last display makes it possible to define homogeneity of distributions. 
A distribution formula_103 is homogeneous of degree formula_4 if formula_104 for all nonzero real formula_101 and all test functions formula_102 Here the angle brackets denote the pairing between distributions and test functions, and formula_105 is the mapping of scalar division by the real number formula_98 Glossary of name variants. Let formula_85 be a map between two vector spaces over a field formula_106 (usually the real numbers formula_107 or complex numbers formula_108). If formula_103 is a set of scalars, such as formula_109 formula_110 or formula_111 for example, then formula_18 is said to be if formula_112 for every formula_86 and scalar formula_113 For instance, every additive map between vector spaces is formula_114 although it might not be formula_115 The following commonly encountered special cases and variations of this definition have their own terminology: All of the above definitions can be generalized by replacing the condition formula_116 with formula_131 in which case that definition is prefixed with the word "absolute" or "absolutely." For example, If formula_4 is a fixed real number then the above definitions can be further generalized by replacing the condition formula_116 with formula_132 (and similarly, by replacing formula_133 with formula_134 for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree formula_4" (where in particular, all of the above definitions are "of degree formula_25"). For instance, A nonzero continuous function that is homogeneous of degree formula_4 on formula_135 extends continuously to formula_95 if and only if formula_136 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Proofs &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
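As an illustration of Euler's homogeneous function theorem stated above, the following short Python sketch numerically checks the identity k·f(x) = Σ_i x_i ∂f/∂x_i for the degree-2 example f(x, y) = x² + y² used earlier in this article. It is only a numerical sanity check with finite differences; the step size and the sample point are arbitrary choices made for the example and are not part of the article's sources.

# Numerical check of Euler's identity k*f(x) = sum_i x_i * df/dx_i
# for the homogeneous function f(x, y) = x**2 + y**2 of degree k = 2.
def f(x, y):
    return x**2 + y**2

def partial(g, point, i, h=1e-6):
    # Central finite-difference approximation of the i-th partial derivative.
    p_plus = list(point); p_minus = list(point)
    p_plus[i] += h; p_minus[i] -= h
    return (g(*p_plus) - g(*p_minus)) / (2 * h)

x, y, k = 1.3, -0.7, 2
lhs = k * f(x, y)
rhs = x * partial(f, (x, y), 0) + y * partial(f, (x, y), 1)
print(lhs, rhs)   # the two values agree up to finite-difference error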
[ { "math_id": 0, "text": "f(sx_1,\\ldots, sx_n)=s^k f(x_1,\\ldots, x_n)" }, { "math_id": 1, "text": "x_1, \\ldots, x_n," }, { "math_id": 2, "text": "s\\ne 0." }, { "math_id": 3, "text": "f : V \\to W" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "s \\in F" }, { "math_id": 6, "text": "v \\in V." }, { "math_id": 7, "text": "\\mathbf{v}\\in C" }, { "math_id": 8, "text": "s \\mathbf{v}\\in C" }, { "math_id": 9, "text": "s > 0," }, { "math_id": 10, "text": "sx\\in C" }, { "math_id": 11, "text": "x\\in C" }, { "math_id": 12, "text": "s\\in F." }, { "math_id": 13, "text": "f(sx) = s^kf(x)" }, { "math_id": 14, "text": "x\\in C," }, { "math_id": 15, "text": "|-x|=|x|\\neq -|x|" }, { "math_id": 16, "text": "x\\neq 0." }, { "math_id": 17, "text": "\\C" }, { "math_id": 18, "text": "f" }, { "math_id": 19, "text": "f(x, y) = x^2 + y^2" }, { "math_id": 20, "text": "f(tx, ty) = (tx)^2 + (ty)^2 = t^2 \\left(x^2 + y^2\\right) = t^2 f(x, y)." }, { "math_id": 21, "text": "|sx|=s|x|" }, { "math_id": 22, "text": "s>0," }, { "math_id": 23, "text": "|sx|=-s|x|" }, { "math_id": 24, "text": "s<0." }, { "math_id": 25, "text": "1" }, { "math_id": 26, "text": "f(\\alpha \\mathbf{v}) = \\alpha f(\\mathbf{v})" }, { "math_id": 27, "text": "\\alpha \\in {F}" }, { "math_id": 28, "text": "f : V_1 \\times V_2 \\times \\cdots V_n \\to W" }, { "math_id": 29, "text": "n," }, { "math_id": 30, "text": "f\\left(\\alpha \\mathbf{v}_1, \\ldots, \\alpha \\mathbf{v}_n\\right) = \\alpha^n f(\\mathbf{v}_1, \\ldots, \\mathbf{v}_n)" }, { "math_id": 31, "text": "v_1 \\in V_1, v_2 \\in V_2, \\ldots, v_n \\in V_n." }, { "math_id": 32, "text": "n" }, { "math_id": 33, "text": "f : \\mathbb{F}^n \\to \\mathbb{F}." }, { "math_id": 34, "text": "f(x, y, z) = x^5 y^2 z^3 \\," }, { "math_id": 35, "text": "f(\\alpha x, \\alpha y, \\alpha z) = (\\alpha x)^5(\\alpha y)^2(\\alpha z)^3 = \\alpha^{10} x^5 y^2 z^3 = \\alpha^{10} f(x, y, z). \\," }, { "math_id": 36, "text": "10 = 5 + 2 + 3." }, { "math_id": 37, "text": "x^5 + 2x^3 y^2 + 9xy^4" }, { "math_id": 38, "text": "k/d" }, { "math_id": 39, "text": "1 / d." }, { "math_id": 40, "text": "\\left(x^2 + y^2 + z^2\\right)^\\frac{1}{2}." }, { "math_id": 41, "text": "w_1,\\dots,w_n," }, { "math_id": 42, "text": "\\min\\left(\\frac{x_1}{w_1}, \\dots, \\frac{x_n}{w_n}\\right)" }, { "math_id": 43, "text": "\\max\\left(\\frac{x_1}{w_1}, \\dots, \\frac{x_n}{w_n}\\right)" }, { "math_id": 44, "text": "m" }, { "math_id": 45, "text": "g" }, { "math_id": 46, "text": "f / g" }, { "math_id": 47, "text": "m - n" }, { "math_id": 48, "text": "g." }, { "math_id": 49, "text": "x\\mapsto cx^k" }, { "math_id": 50, "text": "x\\mapsto x+5," }, { "math_id": 51, "text": "x\\mapsto \\ln(x)," }, { "math_id": 52, "text": "x\\mapsto e^x" }, { "math_id": 53, "text": "\\R^n," }, { "math_id": 54, "text": "k\\,f(x_1, \\ldots,x_n)=\\sum_{i=1}^n x_i\\frac{\\partial f}{\\partial x_i}(x_1, \\ldots,x_n)." }, { "math_id": 55, "text": "\\mathbf x=(x_1, \\ldots, x_n)." }, { "math_id": 56, "text": "f(s\\mathbf x ) = s^k f(\\mathbf x)" }, { "math_id": 57, "text": "s," }, { "math_id": 58, "text": "\\mathbf{x}" }, { "math_id": 59, "text": " g(s) = f(s \\mathbf{x})" }, { "math_id": 60, "text": "\nsg'(s)= k f(s \\mathbf{x})=k g(s).\n" }, { "math_id": 61, "text": "g(s)=g(1)s^k." 
}, { "math_id": 62, "text": " f(s \\mathbf{x}) = g(s) = s^k g(1) = s^k f(\\mathbf{x})," }, { "math_id": 63, "text": "\\square" }, { "math_id": 64, "text": "f : \\R^n \\to \\R" }, { "math_id": 65, "text": "k," }, { "math_id": 66, "text": "\\partial f/\\partial x_i" }, { "math_id": 67, "text": "k - 1." }, { "math_id": 68, "text": "n = 1" }, { "math_id": 69, "text": "f(x)=c_+ x^k" }, { "math_id": 70, "text": "x>0" }, { "math_id": 71, "text": "f(x)=c_- x^k" }, { "math_id": 72, "text": "x<0." }, { "math_id": 73, "text": "c_+" }, { "math_id": 74, "text": "c_-" }, { "math_id": 75, "text": "v = y / x" }, { "math_id": 76, "text": "I(x, y)\\frac{\\mathrm{d}y}{\\mathrm{d}x} + J(x,y) = 0," }, { "math_id": 77, "text": "I" }, { "math_id": 78, "text": "J" }, { "math_id": 79, "text": "x \\frac{\\mathrm{d}v}{\\mathrm{d}x} = - \\frac{J(1,v)}{I(1,v)} - v." }, { "math_id": 80, "text": "X" }, { "math_id": 81, "text": "M" }, { "math_id": 82, "text": "1 \\in M," }, { "math_id": 83, "text": "Y" }, { "math_id": 84, "text": "M." }, { "math_id": 85, "text": "f : X \\to Y" }, { "math_id": 86, "text": "x \\in X" }, { "math_id": 87, "text": "m \\in M," }, { "math_id": 88, "text": "f(mx) = m^k f(x)." }, { "math_id": 89, "text": "M \\to M," }, { "math_id": 90, "text": "m \\mapsto |m|," }, { "math_id": 91, "text": "f(mx) = |m|^k f(x)." }, { "math_id": 92, "text": "m^k" }, { "math_id": 93, "text": "m \\in M" }, { "math_id": 94, "text": "f(mx) = m^k f(x) \\quad \\text{ for every } x \\in X \\text{ and } m \\in M." }, { "math_id": 95, "text": "\\R^n" }, { "math_id": 96, "text": "\\int_{\\R^n} f(tx) \\varphi(x)\\, dx = t^k \\int_{\\R^n} f(x)\\varphi(x)\\, dx" }, { "math_id": 97, "text": "\\varphi" }, { "math_id": 98, "text": "t." }, { "math_id": 99, "text": "y = tx," }, { "math_id": 100, "text": "t^{-n}\\int_{\\R^n} f(y)\\varphi\\left(\\frac{y}{t}\\right)\\, dy = t^k \\int_{\\R^n} f(y)\\varphi(y)\\, dy" }, { "math_id": 101, "text": "t" }, { "math_id": 102, "text": "\\varphi." }, { "math_id": 103, "text": "S" }, { "math_id": 104, "text": "t^{-n} \\langle S, \\varphi \\circ \\mu_t \\rangle = t^k \\langle S, \\varphi \\rangle" }, { "math_id": 105, "text": "\\mu_t : \\R^n \\to \\R^n" }, { "math_id": 106, "text": "\\mathbb{F}" }, { "math_id": 107, "text": "\\R" }, { "math_id": 108, "text": "\\Complex" }, { "math_id": 109, "text": "\\Z," }, { "math_id": 110, "text": "[0, \\infty)," }, { "math_id": 111, "text": "\\Reals" }, { "math_id": 112, "text": "f(s x) = s f(x)" }, { "math_id": 113, "text": "s \\in S." }, { "math_id": 114, "text": "S := \\Q" }, { "math_id": 115, "text": "S := \\R." }, { "math_id": 116, "text": "f(rx) = r f(x)" }, { "math_id": 117, "text": "r > 0." }, { "math_id": 118, "text": "r \\geq 0." }, { "math_id": 119, "text": "[-\\infty, \\infty] = \\Reals \\cup \\{\\pm \\infty\\}," }, { "math_id": 120, "text": "0 \\cdot f(x)" }, { "math_id": 121, "text": "f(x) = \\pm \\infty" }, { "math_id": 122, "text": "r." }, { "math_id": 123, "text": "f(sx) = s f(x)" }, { "math_id": 124, "text": "s \\in \\mathbb{F}." }, { "math_id": 125, "text": "X." }, { "math_id": 126, "text": "f(sx) = \\overline{s} f(x)" }, { "math_id": 127, "text": "\\mathbb{F} = \\Complex" }, { "math_id": 128, "text": "\\overline{s}" }, { "math_id": 129, "text": "s" }, { "math_id": 130, "text": "\\mathbb{F}." 
}, { "math_id": 131, "text": "f(rx) = |r| f(x)," }, { "math_id": 132, "text": "f(rx) = r^k f(x)" }, { "math_id": 133, "text": "f(rx) = |r| f(x)" }, { "math_id": 134, "text": "f(rx) = |r|^k f(x)" }, { "math_id": 135, "text": "\\R^n \\backslash \\lbrace 0 \\rbrace" }, { "math_id": 136, "text": "k > 0." } ]
https://en.wikipedia.org/wiki?curid=622844
62285602
Multi-agent reinforcement learning
Sub-field of reinforcement learning &lt;templatestyles src="Machine learning/styles.css"/&gt; Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards, and does actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complex group dynamics. Multi-agent reinforcement learning is closely related to game theory and especially repeated games, as well as multi-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that gets the biggest number of points for one agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation, reciprocity, equity, social influence, language and discrimination. Definition. Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP). For example, In settings with perfect information, such as the games of chess and Go, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications like self-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observable stochastic game in the general case, and the decentralized POMDP in the cooperative case. Cooperation vs. competition. When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior: Pure competition settings. When two agents are playing a zero-sum game, they are in pure competition with each other. Many traditional games such as chess and Go fall under this category, as do two-player variants of modern games like StarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There's no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent. The Deep Blue and AlphaGo projects demonstrate how to optimize the performance of agents in pure competition settings. One complexity that is not stripped away in pure competition settings is autocurricula. As the agents' policy is improved using self-play, multiple layers of learning may occur. Pure cooperation settings. MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreational cooperative games such as Overcooked, as well as real-world scenarios in robotics. In pure cooperation settings all the agents get identical rewards, which means that social dilemmas do not occur. In pure cooperation settings, oftentimes there are an arbitrary number of coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language and also alluded to in more general multi-agent collaborative tasks. Mixed-sum settings. Most real-world scenarios involving multiple agents have elements of both cooperation and competition. 
For example, when multiple self-driving cars are planning their respective paths, each of them has interests that are diverging but not exclusive: Each car is minimizing the amount of time it's taking to reach its destination, but all cars have the shared interest of avoiding a traffic collision. Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them. Mixed-sum settings can be explored using classic matrix games such as prisoner's dilemma, more complex sequential social dilemmas, and recreational games such as Among Us, Diplomacy and StarCraft II. Mixed-sum settings can give rise to communication and social dilemmas. Social dilemmas. As in game theory, much of the research in MARL revolves around social dilemmas, such as prisoner's dilemma, chicken and stag hunt. While game theory research might focus on Nash equilibria and what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies using a trial-and-error process. The reinforcement learning algorithms that are used to train the agents are maximizing the agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research. Various techniques have been explored in order to induce cooperation in agents: Modifying the environment rules, adding intrinsic rewards, and more. Sequential social dilemmas. Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took. In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear cut as in matrix games. The concept of a sequential social dilemma (SSD) was introduced in 2017 as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them. Autocurricula. An autocurriculum (plural: autocurricula) is a reinforcement learning concept that's salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings, where each group of agents is racing to counter the current strategy of the opposing group. The Hide and Seek game is an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders. Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting a glitch in the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. 
This results in a stack of behaviors, each dependent on its predecessor. Autocurricula in reinforcement learning experiments are compared to the stages of the evolution of life on Earth and the development of human culture. A major stage in evolution happened 2-3 billion years ago, when photosynthesizing life forms started to produce massive amounts of oxygen, changing the balance of gases in the atmosphere. In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to land mammals and human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through the Industrial Revolution in the 18th century without the resources and insights gained by the agricultural revolution at around 10,000 BC. Applications. Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry: AI alignment. Multi-agent reinforcement learning has been used in research into AI alignment. The relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent. Research efforts in the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts. Limitations. There are some inherent difficulties about multi-agent deep reinforcement learning. The environment is not stationary anymore, thus the Markov property is violated: transitions and rewards do not only depend on the current state of an agent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
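To make the stochastic-game definition and the matrix-game social dilemmas described above concrete, the following minimal Python sketch runs two independent learners in a repeated prisoner's dilemma. It is an illustrative toy only: the payoff values, learning rate, exploration rate and the choice of independent, stateless Q-learning are assumptions made for the example, not prescriptions from the sources cited in this article.

import random

# Prisoner's dilemma payoffs: PAYOFF[(a0, a1)] = (reward to agent 0, reward to agent 1).
# Actions: 0 = cooperate, 1 = defect.  (Illustrative values.)
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

q = [[0.0, 0.0], [0.0, 0.0]]       # q[i][a]: agent i's value estimate for action a
alpha, epsilon = 0.1, 0.1          # learning rate and exploration rate (assumed)

def choose(i):
    if random.random() < epsilon:
        return random.randrange(2)
    return max((0, 1), key=lambda a: q[i][a])

for step in range(10_000):
    a0, a1 = choose(0), choose(1)
    r0, r1 = PAYOFF[(a0, a1)]
    # Each agent maximizes its own reward, independently of the other;
    # the update moves the estimate toward the immediate payoff received.
    q[0][a0] += alpha * (r0 - q[0][a0])
    q[1][a1] += alpha * (r1 - q[1][a1])

print(q)   # both agents typically end up valuing defection more highly

Because each learner optimizes only its own payoff, the pair usually settles on mutual defection even though mutual cooperation pays both agents more — the conflict between individual and group reward discussed in the social-dilemmas section above.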
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\mathcal A_i" }, { "math_id": 2, "text": "i \\in I = \\{1, ..., N\\}" }, { "math_id": 3, "text": "P_\\overrightarrow{a}(s,s')=\\Pr(s_{t+1}=s'\\mid s_t=s, \\overrightarrow{a}_t=\\overrightarrow{a})" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "s'" }, { "math_id": 7, "text": "\\overrightarrow{a}" }, { "math_id": 8, "text": "\\overrightarrow{R}_\\overrightarrow{a}(s,s')" } ]
https://en.wikipedia.org/wiki?curid=62285602
62286468
GraphBLAS
API for graph data and graph operations GraphBLAS () is an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. GraphBLAS is built upon the notion that a sparse matrix can be used to represent graphs as either an adjacency matrix or an incidence matrix. The GraphBLAS specification describes how graph operations (e.g. traversing and transforming graphs) can be efficiently implemented via linear algebraic methods (e.g. matrix multiplication) over different semirings. The development of GraphBLAS and its various implementations is an ongoing community effort, including representatives from industry, academia, and government research labs. Background. Graph algorithms have long taken advantage of the idea that a graph can be represented as a matrix, and graph operations can be performed as linear transformations and other linear algebraic operations on sparse matrices. For example, matrix-vector multiplication can be used to perform a step in a breadth-first search. The GraphBLAS specification (and the various libraries that implement it) provides data structures and functions to compute these linear algebraic operations. In particular, GraphBLAS specifies "sparse" matrix objects which map well to graphs where vertices are likely connected to relatively few neighbors (i.e. the degree of a vertex is significantly smaller than the total number of vertices in the graph). The specification also allows for the use of different semirings to accomplish operations in a variety of mathematical contexts. Originally motivated by the need for standardization in graph analytics, similar to its namesake BLAS, the GraphBLAS standard has also begun to interest people outside the graph community, including researchers in machine learning, and bioinformatics. GraphBLAS implementations have also been used in high-performance graph database applications such as RedisGraph. Specification. The GraphBLAS specification has been in development since 2013, and has reached version 2.1.0 as of December 2023. While formally a specification for the C programming language, a variety of programming languages have been used to develop implementations in the spirit of GraphBLAS, including C++, Java, and Nvidia CUDA. Compliant implementations and language bindings. There are currently two fully-compliant reference implementations of the GraphBLAS specification. Bindings assuming a compliant specification exist for the Python, MATLAB, and Julia programming languages. Linear algebraic foundations. The mathematical foundations of GraphBLAS are based in linear algebra and the duality between matrices and graphs. Each graph operation in GraphBLAS operates on a semiring, which is made up of the following elements: Note that the zero element (i.e. the element that represents the absence of an edge in the graph) can also be reinterpreted."VII. 0-Element: No Graph Edge" For example, the following algebras can be implemented in GraphBLAS: All the examples above satisfy the following two conditions in their respective domains: For instance, a user can specify the min-plus algebra over the domain of double-precision floating point numbers with codice_0. Functionality. While the GraphBLAS specification generally allows significant flexibility in implementation, some functionality and implementation details are explicitly described: The GraphBLAS specification also prescribes that library implementations be thread-safe.2.5.2 Multi-threaded execution Example code. 
The following is a GraphBLAS 2.1-compliant example of a breadth-first search in the C programming language.294

/*
 * Given a boolean n x n adjacency matrix A and a source vertex s, performs a BFS traversal
 * of the graph and sets v[i] to the level in which vertex i is visited (v[s] == 1).
 * If i is not reachable from s, then v[i] = 0 does not have a stored element.
 * Vector v should be uninitialized on input.
 */
GrB_Info BFS(GrB_Vector *v, GrB_Matrix A, GrB_Index s)
{
  GrB_Index n;
  GrB_Matrix_nrows(&n, A);                        // n = # of rows of A

  GrB_Vector_new(v, GrB_INT32, n);                // Vector<int32_t> v(n)

  GrB_Vector q;                                   // vertices visited in each level
  GrB_Vector_new(&q, GrB_BOOL, n);                // Vector<bool> q(n)
  GrB_Vector_setElement(q, (bool)true, s);        // q[s] = true, false everywhere else

  /*
   * BFS traversal and label the vertices.
   */
  int32_t level = 0;                              // level = depth in BFS traversal
  GrB_Index nvals;
  do {
    ++level;                                      // next level (start with 1)
    GrB_apply(*v, GrB_NULL, GrB_PLUS_INT32, GrB_SECOND_INT32, q, level, GrB_NULL);  // v[q] = level
    GrB_vxm(q, *v, GrB_NULL, GrB_LOR_LAND_SEMIRING_BOOL,
            q, A, GrB_DESC_RC);                   // q[!v] = q ||.&& A; finds all the
                                                  // unvisited successors from current q
    GrB_Vector_nvals(&nvals, q);
  } while (nvals);                                // if there is no successor in q, we are done.

  GrB_free(&q);                                   // q vector no longer needed

  return GrB_SUCCESS;
}

References. &lt;templatestyles src="Reflist/styles.css" /&gt;
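The role that different semirings play in the "Linear algebraic foundations" section above can also be illustrated outside the C API. The following Python sketch performs repeated min-plus matrix-vector products over a small weighted adjacency matrix — the operation that, iterated, yields single-source shortest-path distances in the manner of the Bellman–Ford algorithm. It is an illustrative sketch only: it does not use a GraphBLAS library, and the matrix values are made up for the example.

INF = float("inf")

# Weighted adjacency matrix: A[i][j] is the edge weight from vertex j to vertex i
# (INF = no edge). In the min-plus semiring, INF plays the role of the zero element.
A = [[0,   INF, INF, INF],
     [2,   0,   INF, INF],
     [5,   1,   0,   INF],
     [INF, INF, 3,   0  ]]

def min_plus_mxv(A, v):
    # "Addition" is min and "multiplication" is +.
    return [min(A[i][j] + v[j] for j in range(len(v))) for i in range(len(A))]

# Distances from source vertex 0: start with 0 at the source, INF elsewhere,
# and iterate the min-plus product.
d = [0, INF, INF, INF]
for _ in range(len(A) - 1):
    d = min_plus_mxv(A, d)
print(d)   # [0, 2, 3, 6] for the toy matrix above

Replacing the (min, +) pair with the (logical OR, logical AND) pair over booleans recovers the reachability-style traversal used in the C breadth-first-search example above, which is the sense in which GraphBLAS treats the semiring as an interchangeable part of each graph operation.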
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\oplus" }, { "math_id": 2, "text": "\\otimes" }, { "math_id": 3, "text": "a \\oplus 0 = a" }, { "math_id": 4, "text": "a \\otimes 0 = 0" }, { "math_id": 5, "text": "A\\langle M \\rangle = B" }, { "math_id": 6, "text": "B" }, { "math_id": 7, "text": "M" } ]
https://en.wikipedia.org/wiki?curid=62286468
6229
Colossus computer
Early British cryptanalysis computer Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program. Colossus was designed by General Post Office (GPO) research telephone engineer Tommy Flowers based on plans developed by mathematician Max Newman at the Government Code and Cypher School (GC&amp;CS) at Bletchley Park. Alan Turing's use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. (Turing's machine that helped decode Enigma was the electromechanical Bombe, not Colossus.) The prototype, Colossus Mark 1, was shown to be working in December 1943 and was in use at Bletchley Park by early 1944. An improved Colossus Mark 2 that used shift registers to quintuple the processing speed, first worked on 1 June 1944, just in time for the Normandy landings on D-Day. Ten Colossi were in use by the end of the war and an eleventh was being commissioned. Bletchley Park's use of these machines allowed the Allies to obtain a vast amount of high-level military intelligence from intercepted radiotelegraphy messages between the German High Command ("OKW") and their army commands throughout occupied Europe. The existence of the Colossus machines was kept secret until the mid-1970s. All but two machines were dismantled into such small parts that their use could not be inferred. The two retained machines were eventually dismantled in the 1960s. In January 2024, new photos were released by GCHQ that showed re-engineered Colossus in a very different environment from the Bletchley Park buildings, presumably at GCHQ Cheltenham. A functioning reconstruction of a Mark 2 Colossus was completed in 2008 by Tony Sale and a team of volunteers; it is on display in The National Museum of Computing at Bletchley Park. Purpose and origins. The Colossus computers were used to help decipher intercepted radio teleprinter messages that had been encrypted using an unknown device. Intelligence information revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to call encrypted German teleprinter traffic "Fish", and the unknown machine and its intercepted messages "Tunny" (tunafish). Before the Germans increased the security of their operating procedures, British cryptanalysts diagnosed how the unseen machine functioned and built an imitation of it called "British Tunny". It was deduced that the machine had twelve wheels and used a Vernam ciphering technique on message characters in the standard 5-bit ITA2 telegraph code. It did this by combining the plaintext characters with a stream of key characters using the XOR Boolean function to produce the ciphertext. In August 1941, a blunder by German operators led to the transmission of two versions of the same message with identical machine settings. These were intercepted and worked on at Bletchley Park. First, John Tiltman, a very talented GC&amp;CS cryptanalyst, derived a keystream of almost 4000 characters. Then Bill Tutte, a newly arrived member of the Research Section, used this keystream to work out the logical structure of the Lorenz machine. 
He deduced that the twelve wheels consisted of two groups of five, which he named the χ ("chi") and ψ ("psi") wheels, the remaining two he called μ ("mu") or "motor" wheels. The "chi" wheels stepped regularly with each letter that was encrypted, while the "psi" wheels stepped irregularly, under the control of the motor wheels. With a sufficiently random keystream, a Vernam cipher removes the natural language property of a plaintext message of having an uneven frequency distribution of the different characters, to produce a uniform distribution in the ciphertext. The Tunny machine did this well. However, the cryptanalysts worked out that by examining the frequency distribution of the character-to-character changes in the ciphertext, instead of the plain characters, there was a departure from uniformity which provided a way into the system. This was achieved by "differencing" in which each bit or character was XOR-ed with its successor. After Germany surrendered, allied forces captured a Tunny machine and discovered that it was the electromechanical Lorenz SZ ("Schlüsselzusatzgerät", cipher attachment) in-line cipher machine. In order to decrypt the transmitted messages, two tasks had to be performed. The first was "wheel breaking", which was the discovery of the cam patterns for all the wheels. These patterns were set up on the Lorenz machine and then used for a fixed period of time for a succession of different messages. Each transmission, which often contained more than one message, was enciphered with a different start position of the wheels. Alan Turing invented a method of wheel-breaking that became known as Turingery. Turing's technique was further developed into "Rectangling", for which Colossus could produce tables for manual analysis. Colossi 2, 4, 6, 7 and 9 had a "gadget" to aid this process. The second task was "wheel setting", which worked out the start positions of the wheels for a particular message and could only be attempted once the cam patterns were known. It was this task for which Colossus was initially designed. To discover the start position of the "chi" wheels for a message, Colossus compared two character streams, counting statistics from the evaluation of programmable Boolean functions. The two streams were the ciphertext, which was read at high speed from a paper tape, and the keystream, which was generated internally, in a simulation of the unknown German machine. After a succession of different Colossus runs to discover the likely "chi"-wheel settings, they were checked by examining the frequency distribution of the characters in the processed ciphertext. Colossus produced these frequency counts. Decryption processes. By using differencing and knowing that the "psi" wheels did not advance with each character, Tutte worked out that trying just two differenced bits (impulses) of the "chi"-stream against the differenced ciphertext would produce a statistic that was non-random. This became known as Tutte's "1+2 break in". It involved calculating the following Boolean function: formula_2 and counting the number of times it yielded "false" (zero). If this number exceeded a pre-defined threshold value known as the "set total", it was printed out. The cryptanalyst would examine the printout to determine which of the putative start positions was most likely to be the correct one for the "chi"-1 and "chi"-2 wheels. This technique would then be applied to other pairs of, or single, impulses to determine the likely start position of all five "chi" wheels. 
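The counting at the heart of Tutte's "1+2 break in" can be sketched in modern terms. In the illustrative Python fragment below, the ciphertext and a candidate "chi"-stream are lists of 5-bit characters (impulse 1 is taken as the least significant bit, an arbitrary choice for the sketch), the streams are differenced by XOR with their successors, and the run scores how often the combined bits of impulses 1 and 2 come out as zero. This is only a modern restatement of the statistic described above, not a reconstruction of Colossus's actual circuitry or of the wheel-pattern generation.

def delta(stream):
    # "Differencing": XOR each character with its successor.
    return [stream[i] ^ stream[i + 1] for i in range(len(stream) - 1)]

def bit(c, impulse):
    # Extract one impulse (bit) of a 5-bit teleprinter character.
    return (c >> (impulse - 1)) & 1

def one_plus_two_score(ciphertext, chi_stream):
    dz, dchi = delta(ciphertext), delta(chi_stream)
    score = 0
    for z, chi in zip(dz, dchi):
        combined = bit(z, 1) ^ bit(z, 2) ^ bit(chi, 1) ^ bit(chi, 2)
        if combined == 0:          # count the "false" (zero) results
            score += 1
    return score                   # compared against the "set total" threshold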
From this, the de-"chi" (D) of a ciphertext could be obtained, from which the "psi" component could be removed by manual methods. If the frequency distribution of characters in the de-"chi" version of the ciphertext was within certain bounds, "wheel setting" of the "chi" wheels was considered to have been achieved, and the message settings and de-"chi" were passed to the "Testery". This was the section at Bletchley Park led by Major Ralph Tester where the bulk of the decrypting work was done by manual and linguistic methods. Colossus could also derive the start position of the "psi" and motor wheels. The feasibility of utilizing this additional capability regularly was made possible in the last few months of the war when there were plenty of Colossi available and the number of Tunny messages had declined. Design and construction. Colossus was developed for the "Newmanry", the section headed by the mathematician Max Newman that was responsible for machine methods against the twelve-rotor Lorenz SZ40/42 on-line teleprinter cipher machine (code-named Tunny, for tunafish). The Colossus design arose out of a parallel project that produced a less-ambitious counting machine dubbed "Heath Robinson". Although the Heath Robinson machine proved the concept of machine analysis for this part of the process, it had serious limitations. The electro-mechanical parts were relatively slow and it was difficult to synchronise two looped paper tapes, one containing the enciphered message, and the other representing part of the keystream of the Lorenz machine. Also the tapes tended to stretch and break when being read at up to 2000 characters per second. Tommy Flowers MBE was a senior electrical engineer and Head of the Switching Group at the Post Office Research Station at Dollis Hill. Prior to his work on Colossus, he had been involved with GC&amp;CS at Bletchley Park from February 1941 in an attempt to improve the Bombes that were used in the cryptanalysis of the German Enigma cipher machine. He was recommended to Max Newman by Alan Turing, who had been impressed by his work on the Bombes. The main components of the Heath Robinson machine were as follows. Flowers had been brought in to design the Heath Robinson's combining unit. He was not impressed by the system of a key tape that had to be kept synchronised with the message tape and, on his own initiative, he designed an electronic machine which eliminated the need for the key tape by having an electronic analogue of the Lorenz (Tunny) machine. He presented this design to Max Newman in February 1943, but the idea that the one to two thousand thermionic valves (vacuum tubes and thyratrons) proposed, could work together reliably, was greeted with great scepticism, so more Robinsons were ordered from Dollis Hill. Flowers, however, knew from his pre-war work that most thermionic valve failures occurred as a result of the thermal stresses at power-up, so not powering a machine down reduced failure rates to very low levels. Additionally, if the heaters were started at a low voltage then slowly brought up to full voltage, thermal stress was reduced. The valves themselves could be soldered-in to avoid problems with plug-in bases, which could be unreliable. Flowers persisted with the idea and obtained support from the Director of the Research Station, W Gordon Radley. 
Flowers and his team of some fifty people in the switching group spent eleven months from early February 1943 designing and building a machine that dispensed with the second tape of the Heath Robinson, by generating the wheel patterns electronically. Flowers used some of his own money for the project. This prototype, Mark 1 Colossus, contained 1,600 thermionic valves (tubes). It performed satisfactorily at Dollis Hill on 8 December 1943 and was dismantled and shipped to Bletchley Park, where it was delivered on 18 January and re-assembled by Harry Fensom and Don Horwood. It was operational in January and it successfully attacked its first message on 5 February 1944. It was a large structure and was dubbed 'Colossus'. A memo held in the National Archives written by Max Newman on 18 January 1944 records that "Colossus arrives today". During the development of the prototype, an improved design had been developed – the Mark 2 Colossus. Four of these were ordered in March 1944 and by the end of April the number on order had been increased to twelve. Dollis Hill was put under pressure to have the first of these working by 1 June. Allen Coombs took over leadership of the production Mark 2 Colossi, the first of which – containing 2,400 valves – became operational at 08:00 on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Subsequently, Colossi were delivered at the rate of about one a month. By the time of V-E Day there were ten Colossi working at Bletchley Park and a start had been made on assembling an eleventh. Seven of the Colossi were used for 'wheel setting' and three for 'wheel breaking'. The main units of the Mark 2 design were as follows. Most of the design of the electronics was the work of Tommy Flowers, assisted by William Chandler, Sidney Broadhurst and Allen Coombs; with Erie Speight and Arnold Lynch developing the photoelectric reading mechanism. Coombs remembered Flowers, having produced a rough draft of his design, tearing it into pieces that he handed out to his colleagues for them to do the detailed design and get their team to manufacture it. The Mark 2 Colossi were both five times faster and were simpler to operate than the prototype. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal storage for the data. The design overcame the problem of synchronizing the electronics with the speed of the message tape by generating a clock signal from reading its sprocket holes. The speed of operation was thus limited by the mechanics of reading the tape. During development, the tape reader was tested up to 9700 characters per second (53 mph) before the tape disintegrated. So 5000 characters/second () was settled on as the speed for regular use. Flowers designed a 6-character shift register, which was used both for computing the delta function (ΔZ) and for testing five different possible starting points of Tunny's wheels in the five processors. This five-way parallelism enabled five simultaneous tests and counts to be performed giving an effective processing speed of 25,000 characters per second. The computation used algorithms devised by W. T. Tutte and colleagues to decrypt a Tunny message. Operation. 
The Newmanry was staffed by cryptanalysts, operators from the Women's Royal Naval Service (WRNS) – known as "Wrens" – and engineers who were permanently on hand for maintenance and repair. By the end of the war the staffing was 272 Wrens and 27 men. The first job in operating Colossus for a new message was to prepare the paper tape loop. This was performed by the Wrens who stuck the two ends together using Bostik glue, ensuring that there was a 150-character length of blank tape between the end and the start of the message. Using a special hand punch they inserted a start hole between the third and fourth channels &lt;templatestyles src="Fraction/styles.css" /&gt;2+1⁄2 sprocket holes from the end of the blank section, and a stop hole between the fourth and fifth channels &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄2 sprocket holes from the end of the characters of the message. These were read by specially positioned photocells and indicated when the message was about to start and when it ended. The operator would then thread the paper tape through the gate and around the pulleys of the bedstead and adjust the tension. The two-tape bedstead design had been carried on from Heath Robinson so that one tape could be loaded whilst the previous one was being run. A switch on the Selection Panel specified the "near" or the "far" tape. After performing various resetting and zeroizing tasks, the Wren operators would, under instruction from the cryptanalyst, operate the "set total" decade switches and the K2 panel switches to set the desired algorithm. They would then start the bedstead tape motor and lamp and, when the tape was up to speed, operate the master start switch. Programming. Howard Campaigne, a mathematician and cryptanalyst from the US Navy's OP-20-G, wrote the following in a foreword to Flowers' 1983 paper "The Design of Colossus".&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;My view of Colossus was that of cryptanalyst-programmer. I told the machine to make certain calculations and counts, and after studying the results, told it to do another job. It did not remember the previous result, nor could it have acted upon it if it did. Colossus and I alternated in an interaction that sometimes achieved an analysis of an unusual German cipher system, called "Geheimschreiber" by the Germans, and "Fish" by the cryptanalysts. Colossus was not a stored-program computer. The input data for the five parallel processors was read from the looped message paper tape and the electronic pattern generators for the "chi", "psi" and motor wheels. The programs for the processors were set and held on the switches and jack panel connections. Each processor could evaluate a Boolean function and count and display the number of times it yielded the specified value of "false" (0) or "true" (1) for each pass of the message tape. Input to the processors came from two sources, the shift registers from tape reading and the thyratron rings that emulated the wheels of the Tunny machine. The characters on the paper tape were called Z and the characters from the Tunny emulator were referred to by the Greek letters that Bill Tutte had given them when working out the logical structure of the machine. On the selection panel, switches specified either Z or ΔZ, either formula_0 or Δformula_0 and either formula_1 or Δformula_1 for the data to be passed to the jack field and 'K2 switch panel'. 
These signals from the wheel simulators could be specified as stepping on with each new pass of the message tape or not. The K2 switch panel had a group of switches on the left-hand side to specify the algorithm. The switches on the right-hand side selected the counter to which the result was fed. The plugboard allowed less specialized conditions to be imposed. Overall the K2 switch panel switches and the plugboard allowed about five billion different combinations of the selected variables. As an example: a set of runs for a message tape might initially involve two "chi" wheels, as in Tutte's 1+2 algorithm. Such a two-wheel run was called a long run, taking on average eight minutes unless the parallelism was utilised to cut the time by a factor of five. The subsequent runs might only involve setting one "chi" wheel, giving a short run taking about two minutes. Initially, after the initial long run, the choice of the next algorithm to be tried was specified by the cryptanalyst. Experience showed, however, that decision trees for this iterative process could be produced for use by the Wren operators in a proportion of cases. Influence and fate. Although the Colossus was the first of the electronic digital machines with programmability, albeit limited by modern standards, it was not a general-purpose machine, being designed for a range of cryptanalytic tasks, most involving counting the results of evaluating Boolean algorithms. A Colossus computer was thus not a fully Turing complete machine. However, University of San Francisco professor Benjamin Wells has shown that if all ten Colossus machines made were rearranged in a specific cluster, then the entire set of computers could have simulated a universal Turing machine, and thus be Turing complete. Colossus and the reasons for its construction were highly secret and remained so for 30 years after the War. Consequently, it was not included in the history of computing hardware for many years, and Flowers and his associates were deprived of the recognition they were due. All but two of the Colossi were dismantled after the war and parts returned to the Post Office. Some parts, sanitised as to their original purpose, were taken to Max Newman's Royal Society Computing Machine Laboratory at Manchester University. Two Colossi, along with two Tunny machines, were retained and moved to GCHQ's new headquarters at Eastcote in April 1946, and then to Cheltenham between 1952 and 1954. One of the Colossi, known as "Colossus Blue", was dismantled in 1959; the other in the 1960s. Tommy Flowers was ordered to destroy all documentation. He duly burnt them in a furnace and later said of that order: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;That was a terrible mistake. I was instructed to destroy all the records, which I did. I took all the drawings and the plans and all the information about Colossus on paper and put it in the boiler fire. And saw it burn.The Colossi were adapted for other purposes, with varying degrees of success; in their later years they were used for training. Jack Good related how he was the first to use Colossus after the war, persuading the US National Security Agency that it could be used to perform a function for which they were planning to build a special-purpose machine. Colossus was also used to perform character counts on one-time pad tape to test for non-randomness. 
A small number of people who were associated with Colossus—and knew that large-scale, reliable, high-speed electronic digital computing devices were feasible—played significant roles in early computer work in the UK and probably in the US. However, being so secret, it had little direct influence on the development of later computers; it was EDVAC that was the seminal computer architecture of the time. In 1972, Herman Goldstine, who was unaware of Colossus and its legacy to the projects of people such as Alan Turing (ACE), Max Newman (Manchester computers) and Harry Huskey (Bendix G-15), wrote that, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Britain had such vitality that it could immediately after the war embark on so many well-conceived and well-executed projects in the computer field. Professor Brian Randell, who unearthed information about Colossus in the 1970s, commented on this, saying that: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is my opinion that the COLOSSUS project was an important source of this vitality, one that has been largely unappreciated, as has the significance of its places in the chronology of the invention of the digital computer. Randell's efforts started to bear fruit in the mid-1970s. The secrecy about Bletchley Park had been broken when Group Captain Winterbotham published his book "The Ultra Secret" in 1974. Randell was researching the history of computer science in Britain for a conference on the history of computing held at the Los Alamos Scientific Laboratory, New Mexico on 10–15 June 1976, and got permission to present a paper on wartime development of the COLOSSI at the Post Office Research Station, Dollis Hill (in October 1975 the British Government had released a series of captioned photographs from the Public Record Office). The interest in the "revelations" in his paper resulted in a special evening meeting when Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article "The First Electronic Computer" in several journals. In October 2000, a 500-page technical report on the Tunny cipher and its cryptanalysis—entitled "General Report on Tunny"—was released by GCHQ to the national Public Record Office, and it contains a fascinating paean to Colossus by the cryptographers who worked with it: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is regretted that it is not possible to give an adequate idea of the fascination of a Colossus at work; its sheer bulk and apparent complexity; the fantastic speed of thin paper tape round the glittering pulleys; the childish pleasure of not-not, span, print main header and other gadgets; the wizardry of purely mechanical decoding letter by letter (one novice thought she was being hoaxed); the uncanny action of the typewriter in printing the correct scores without and beyond human aid; the stepping of the display; periods of eager expectation culminating in the sudden appearance of the longed-for score; and the strange rhythms characterizing every type of run: the stately break-in, the erratic short run, the regularity of wheel-breaking, the stolid rectangle interrupted by the wild leaps of the carriage-return, the frantic chatter of a motor run, even the ludicrous frenzy of hosts of bogus scores. Reconstruction. 
A team led by Tony Sale built a fully functional reconstruction of a Colossus Mark 2 between 1993 and 2008. In spite of the blueprints and hardware being destroyed, a surprising amount of material had survived, mainly in engineers' notebooks, but a considerable amount of it in the U.S. The optical tape reader might have posed the biggest problem, but Dr. Arnold Lynch, its original designer was able to redesign it to his own original specification. The reconstruction is on display, in the historically correct place for Colossus No. 9, at The National Museum of Computing, in H Block Bletchley Park in Milton Keynes, Buckinghamshire. In November 2007, to celebrate the project completion and to mark the start of a fundraising initiative for The National Museum of Computing, a Cipher Challenge pitted the rebuilt Colossus against radio amateurs worldwide in being first to receive and decode three messages enciphered using the Lorenz SZ42 and transmitted from radio station DL0HNF in the "Heinz Nixdorf MuseumsForum" computer museum. The challenge was easily won by radio amateur Joachim Schüth, who had carefully prepared for the event and developed his own signal processing and code-breaking code using Ada. The Colossus team were hampered by their wish to use World War II radio equipment, delaying them by a day because of poor reception conditions. Nevertheless, the victor's 1.4 GHz laptop, running his own code, took less than a minute to find the settings for all 12 wheels. The German codebreaker said: "My laptop digested ciphertext at a speed of 1.2 million characters per second—240 times faster than Colossus. If you scale the CPU frequency by that factor, you get an equivalent clock of 5.8 MHz for Colossus. That is a remarkable speed for a computer built in 1944." The Cipher Challenge verified the successful completion of the rebuilding project. "On the strength of today's performance Colossus is as good as it was six decades ago", commented Tony Sale. "We are delighted to have produced a fitting tribute to the people who worked at Bletchley Park and whose brainpower devised these fantastic machines which broke these ciphers and shortened the war by many months." Other meanings. There was a fictional computer named "Colossus" in the 1970 film "" which was based on the 1966 novel "Colossus" by D. F. Jones. This was a coincidence as it pre-dates the public release of information about Colossus, or even its name. Neal Stephenson's novel "Cryptonomicon" (1999) also contains a fictional treatment of the historical role played by Turing and Bletchley Park. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; A guided tour of the history and geography of the Park, written by one of the founder members of the Bletchley Park Trust
[ { "math_id": 0, "text": "\\chi" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "\\Delta Z_1 \\oplus \\Delta Z_2 \\oplus \\Delta\\chi_1 \\oplus \\Delta\\chi_2 = \\bullet" } ]
https://en.wikipedia.org/wiki?curid=6229
62290105
Serre's inequality on height
In algebra, specifically in the theory of commutative rings, Serre's inequality on height states: given a (Noetherian) regular ring "A" and a pair of prime ideals formula_0 in it, for each prime ideal formula_1 that is a minimal prime ideal over the sum formula_2, the following inequality on heights holds: formula_3 Without the regularity assumption, the inequality can fail; see scheme-theoretic intersection#Proper intersection. Sketch of Proof. Serre gives the following proof of the inequality, based on the validity of Serre's multiplicity conjectures for formal power series rings over a complete discrete valuation ring. By replacing formula_4 with its localization at formula_1, we may assume that formula_5 is a local ring. Then the inequality is equivalent to the following inequality: for finite formula_4-modules formula_6 such that formula_7 has finite length, formula_8 where formula_9 denotes the dimension of the support of formula_10, and similarly for formula_11. To show the above inequality, we may assume formula_4 is complete. Then, by Cohen's structure theorem, we can write formula_12 where formula_13 is a formal power series ring over a complete discrete valuation ring and formula_14 is a nonzero element of formula_13. Now, an argument with the Tor spectral sequence shows that formula_15. Then one of Serre's conjectures says formula_16, which in turn gives the asserted inequality. formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
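A standard example, included here only as an illustration (it is the example usually given for improper intersections), shows why the regularity assumption cannot be dropped. Take the three-dimensional non-regular ring
A = k[x, y, z, w]/(xw - yz), \qquad \mathfrak{p} = (x, y), \qquad \mathfrak{q} = (z, w),
the coordinate ring of the affine cone over a quadric surface, together with the two prime ideals shown. Each of A/\mathfrak{p} and A/\mathfrak{q} is a polynomial ring in two variables, so, since A is a three-dimensional catenary domain, both primes have height 1; but \mathfrak{p} + \mathfrak{q} = (x, y, z, w) is the maximal ideal at the vertex of the cone and has height 3. Thus the minimal prime over the sum has height 3 > 1 + 1, and the inequality fails for this non-regular ring.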
[ { "math_id": 0, "text": "\\mathfrak{p}, \\mathfrak{q}" }, { "math_id": 1, "text": "\\mathfrak r" }, { "math_id": 2, "text": "\\mathfrak p + \\mathfrak q" }, { "math_id": 3, "text": "\\operatorname{ht}(\\mathfrak r) \\le \\operatorname{ht}(\\mathfrak p) + \\operatorname{ht}(\\mathfrak q)." }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "(A, \\mathfrak r)" }, { "math_id": 6, "text": "M, N" }, { "math_id": 7, "text": "M \\otimes_A N" }, { "math_id": 8, "text": "\\dim_A M + \\dim_A N \\le \\dim A" }, { "math_id": 9, "text": "\\dim_A M = \\dim(A/\\operatorname{Ann}_A(M))" }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "\\dim_A N" }, { "math_id": 12, "text": "A = A_1/a_1 A_1" }, { "math_id": 13, "text": "A_1" }, { "math_id": 14, "text": "a_1" }, { "math_id": 15, "text": "\\chi^{A_1}(M, N) = 0" }, { "math_id": 16, "text": "\\dim_{A_1} M + \\dim_{A_1} N < \\dim A_1" }, { "math_id": 17, "text": "\\square" } ]
https://en.wikipedia.org/wiki?curid=62290105
62293001
Inverse depth parametrization
Computational method for constructing 3D models In computer vision, the inverse depth parametrization is a parametrization used in methods for 3D reconstruction from multiple images, such as simultaneous localization and mapping (SLAM). Given a point formula_0 in 3D space observed by a monocular pinhole camera from multiple views, the inverse depth parametrization of the point's position is a 6D vector that encodes the optical centre formula_1 of the camera when it first observed the point, and the position of the point along the ray passing through formula_0 and formula_1. Inverse depth parametrization generally improves numerical stability and makes it possible to represent points with zero parallax. Moreover, the error associated with the observation of the point's position can be modelled with a Gaussian distribution when expressed in inverse depth. This is an important property required to apply methods, such as Kalman filters, that assume normality of the measurement error distribution. The major drawback is the larger memory consumption, since the dimensionality of the point's representation is doubled. Definition. Given a 3D point formula_2 with world coordinates in a reference frame formula_3, observed from different views, the inverse depth parametrization formula_4 of formula_0 is given by: formula_5 where the first five components encode the camera pose at the first observation of the point, with formula_6 the optical centre, formula_7 the azimuth, formula_8 the elevation angle, and formula_9 the inverse depth of formula_10 at the first observation.
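The relation between the 6D parametrization and the underlying 3D point can be sketched as follows. This Python fragment recovers the point by travelling a distance 1/ρ along the observation ray from the first camera centre. The particular trigonometric form of the unit direction vector is one common convention in monocular SLAM formulations and is an assumption of this sketch, not part of the definition above; other sign and axis conventions are possible.

import numpy as np

def point_from_inverse_depth(y):
    # y = (x0, y0, z0, theta, phi, rho): first camera centre, ray angles, inverse depth.
    x0, y0, z0, theta, phi, rho = y
    # Unit direction of the observation ray from azimuth/elevation (assumed convention).
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    # The point lies at distance 1/rho from the camera centre along that ray.
    return np.array([x0, y0, z0]) + (1.0 / rho) * m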
[ { "math_id": 0, "text": "\\mathbf{p}" }, { "math_id": 1, "text": "\\mathbf{c}_0" }, { "math_id": 2, "text": "\\mathbf{p} = (x, y, z)" }, { "math_id": 3, "text": "(e_1, e_2, e_3)" }, { "math_id": 4, "text": "\\mathbf{y}" }, { "math_id": 5, "text": " \\mathbf{y} = (x_0, y_0, z_0, \\theta, \\phi, \\rho) " }, { "math_id": 6, "text": "\\mathbf{c_0} = (x_0, y_0, z_0)" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "\\theta" }, { "math_id": 9, "text": "\\rho = \\frac{1}{\\left\\Vert \\mathbf{p} - \\mathbf{c}_0\\right\\Vert}" }, { "math_id": 10, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=62293001
62295363
Knowledge distillation
Machine learning method to transfer knowledge from a large model to a smaller one In machine learning, knowledge distillation or model distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized. It can be just as computationally expensive to evaluate a model even if it utilizes little of its knowledge capacity. Knowledge distillation transfers knowledge from a large model to a smaller model without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware (such as a mobile device). Knowledge distillation has been successfully used in several applications of machine learning such as object detection, acoustic models, and natural language processing. Recently, it has also been introduced to graph neural networks applicable to non-grid data. Concept of distillation. Transferring the knowledge from a large to a small model requires somehow teaching the latter without loss of validity. If both models are trained on the same data, the small model may have insufficient capacity to learn a concise knowledge representation given the same computational resources and same data as the large model. However, some information about a concise knowledge representation is encoded in the pseudolikelihoods assigned to its output: when a model correctly predicts a class, it assigns a large value to the output variable corresponding to that class, and smaller values to the other output variables. The distribution of values among the outputs for a record provides information on how the large model represents knowledge. Therefore, the goal of economical deployment of a valid model can be achieved by training only the large model on the data, exploiting its better ability to learn concise knowledge representations, and then distilling such knowledge into the smaller model, which would not be able to learn it on its own, by training it to learn the soft output of the large model. A related methodology was "model compression" or "pruning", where a trained network is reduced in size, via methods such as Biased Weight Decay and Optimal Brain Damage. The idea of using the output of one neural network to train another neural network was studied as the teacher-student network configuration. In 1992, several papers studied the statistical mechanics of the teacher-student network configuration, where both networks are committee machines or both are parity machines. Another early example of network distillation was also published in 1992, in the field of recurrent neural networks (RNNs). The problem was sequence prediction. It was solved by two RNNs. One of them (the "automatizer") predicted the sequence, and the other (the "chunker") predicted the errors of the automatizer. Simultaneously, the automatizer predicted the internal states of the chunker. Once the automatizer managed to predict the chunker's internal states well, it would start fixing the errors, and soon the chunker was made obsolete, leaving just one RNN in the end. A related methodology to compress the knowledge of multiple models into a single neural network was called "model compression" in 2006. Compression was achieved by training a smaller model on large amounts of pseudo-data labelled by a higher-performing ensemble, optimising to match the logit of the compressed model to the logit of the ensemble. 
Knowledge distillation is a generalisation of such an approach, introduced by Geoffrey Hinton et al. in 2015, in a preprint that formulated the concept and showed some results achieved in the task of image classification. Knowledge distillation is also related to the concept of "behavioral cloning" discussed by Faraz Torabi et al. Formulation. Given a large model as a function of the vector variable formula_0, trained for a specific classification task, typically the final layer of the network is a softmax in the form formula_1 where formula_2 is a parameter called "temperature", which for a standard softmax is normally set to 1. The softmax operator converts the logit values formula_3 to pseudo-probabilities, and higher values of temperature have the effect of generating a softer distribution of pseudo-probabilities among the output classes. Knowledge distillation consists of training a smaller network, called the "distilled model", on a dataset called the transfer set (different from the dataset used to train the large model) using cross-entropy as the loss function between the output of the distilled model formula_4 and the output formula_5 produced by the large model on the same record (or the average of the individual outputs, if the large model is an ensemble), using a high value of softmax temperature formula_2 for both models formula_6 In this context, a high temperature increases the entropy of the output, and therefore provides more information for the distilled model to learn compared to hard targets, at the same time reducing the variance of the gradient between different records and therefore allowing higher learning rates. If ground truth is available for the transfer set, the process can be strengthened by adding to the loss the cross-entropy between the output of the distilled model (computed with formula_7) and the known label formula_8 formula_9 where the component of the loss with respect to the large model is weighted by a factor of formula_10 since, as the temperature increases, the gradient of the loss with respect to the model weights scales by a factor of formula_11. Relationship with model compression. Under the assumption that the logits have zero mean, it is possible to show that model compression is a special case of knowledge distillation. The gradient of the knowledge distillation loss formula_12 with respect to the logit of the distilled model formula_13 is given by formula_14 where formula_15 are the logits of the large model. For large values of formula_2 this can be approximated as formula_16 and under the zero-mean hypothesis formula_17 it becomes formula_18, which is the derivative of formula_19, i.e. the loss is equivalent to matching the logits of the two models, as done in model compression.
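The loss above can be written out in a few lines of NumPy. The sketch below is only an illustration of the formulation, not a reference implementation; the function and argument names are invented for the example, and the temperature value is an arbitrary choice.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)        # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels=None, t=4.0):
    """Soft-target cross-entropy (scaled by t**2) plus an optional hard-label term."""
    soft_targets = softmax(teacher_logits, t)    # teacher output at high temperature
    log_student_soft = np.log(softmax(student_logits, t) + 1e-12)
    loss = -(t ** 2) * np.sum(soft_targets * log_student_soft, axis=-1)
    if labels is not None:                       # hard-label cross-entropy at t = 1
        log_student = np.log(softmax(student_logits, 1.0) + 1e-12)
        loss = loss - log_student[np.arange(len(labels)), labels]
    return loss.mean()

# Example with random logits for a batch of 8 records and 10 classes
rng = np.random.default_rng(0)
student = rng.normal(size=(8, 10))
teacher = rng.normal(size=(8, 10))
print(distillation_loss(student, teacher, labels=rng.integers(0, 10, size=8)))
```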
[ { "math_id": 0, "text": "\\mathbf{x}" }, { "math_id": 1, "text": "\ny_i(\\mathbf{x}|t) = \\frac{e^{\\frac{z_i(\\mathbf{x})}{t}}}{\\sum_j e^{\\frac{z_j(\\mathbf{x})}{t}}}\n" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "z_i(\\mathbf{x})" }, { "math_id": 4, "text": "\\mathbf{y}(\\mathbf{x}|t)" }, { "math_id": 5, "text": "\\hat{\\mathbf{y}}(\\mathbf{x}|t)" }, { "math_id": 6, "text": "\nE(\\mathbf{x}|t) = -\\sum_i \\hat{y}_i(\\mathbf{x}|t) \\log y_i(\\mathbf{x}|t) .\n" }, { "math_id": 7, "text": "t = 1" }, { "math_id": 8, "text": "\\bar{y}" }, { "math_id": 9, "text": "\nE(\\mathbf{x}|t) = -t^2 \\sum_i \\hat{y}_i(\\mathbf{x}|t) \\log y_i(\\mathbf{x}|t) - \\sum_i \\bar{y}_i \\log \\hat{y}_i(\\mathbf{x}|1)\n" }, { "math_id": 10, "text": "t^2" }, { "math_id": 11, "text": "\\frac{1}{t^2}" }, { "math_id": 12, "text": "E" }, { "math_id": 13, "text": "z_i" }, { "math_id": 14, "text": "\n\\begin{align}\n \\frac{\\partial}{\\partial z_i} E\n &= -\\frac{\\partial}{\\partial z_i} \\sum_j \\hat{y}_j \\log y_j \\\\\n &= -\\frac{\\partial}{\\partial z_i} \\hat{y}_i \\log y_i + \\left( -\\frac{\\partial}{\\partial z_i} \\sum_{k\\neq i} \\hat{y}_k \\log y_k \\right)\\\\\n &= -\\hat{y}_i \\frac{1}{y_i} \\frac{\\partial}{\\partial z_i} y_i + \\sum_{k\\neq i} \\left( -\\hat{y}_k \\cdot \\frac{1}{y_k} \\cdot e^{\\frac{z_k}{t}} \\cdot \\left( -\\frac{1}{\\left(\\sum_j e^{\\frac{z_j}{t}} \\right)^2 }\\right) \\cdot e^{\\frac{z_i}{t}} \\cdot \\frac{1}{t} \\right)\\\\\n &= -\\hat{y}_i \\frac{1}{y_i} \\frac{\\partial}{\\partial z_i} \\frac{e^{\\frac{z_i}{t}}}{\\sum_j e^{\\frac{z_j}{t}}} + \\sum_{k\\neq i} \\left( \\hat{y}_k \\cdot \\frac{1}{y_k} \\cdot y_k \\cdot y_i \\cdot \\frac{1}{t} \\right)\\\\\n &= -\\hat{y}_i \\frac{1}{y_i}\n \\left(\n \\frac{\\frac{1}{t} e^{\\frac{z_i}{t}} \\sum_j e^{\\frac{z_j}{t}} - \\frac{1}{t} \\left( e^{\\frac{z_i}{t}} \\right)^2}\n {\\left( \\sum_j e^{\\frac{z_j}{t}} \\right)^2}\n \\right) + \\frac{y_i\\sum_{k\\neq i}\\hat{y}_k}{t}\\\\\n &= -\\hat{y}_i \\frac{1}{y_i} \\left( \\frac{y_i}{t} - \\frac{y_i^2}{t} \\right) + \\frac{y_i(1-\\hat{y}_i)}{t}\\\\\n &= \\frac{1}{t} \\left( y_i - \\hat{y}_i \\right) \\\\\n &= \\frac{1}{t} \\left( \\frac{e^{\\frac{z_i}{t}}}{\\sum_j e^{\\frac{z_j}{t}}} - \\frac{e^{\\frac{\\hat{z}_i}{t}}}{\\sum_j e^{\\frac{\\hat{z}_j}{t}}} \\right) \\\\\n\\end{align}\n" }, { "math_id": 15, "text": "\\hat{z}_i" }, { "math_id": 16, "text": "\n\\frac{1}{t}\n\\left(\n \\frac{1 + \\frac{z_i}{t}}{N + \\sum_j \\frac{z_j}{t}} -\n \\frac{1 + \\frac{\\hat{z}_i}{t}}{N + \\sum_j \\frac{\\hat{z}_j}{t}}\n\\right)\n" }, { "math_id": 17, "text": "\\sum_j z_j = \\sum_j \\hat{z}_j = 0" }, { "math_id": 18, "text": " \\frac{z_i - \\hat{z}_i}{NT^2} " }, { "math_id": 19, "text": "\\frac{1}{2} \\left( z_i - \\hat{z}_i \\right)^2" } ]
https://en.wikipedia.org/wiki?curid=62295363
622966
Cousin problems
Make a meromorphic function from local data in multiple variables In mathematics, the Cousin problems are two questions in several complex variables, concerning the existence of meromorphic functions that are specified in terms of local data. They were introduced in special cases by Pierre Cousin in 1895. They are now posed, and solved, for any complex manifold "M", in terms of conditions on "M". For both problems, an open cover of "M" by sets "Ui" is given, along with a meromorphic function "fi" on each "Ui". First Cousin problem. The first Cousin problem or additive Cousin problem assumes that each difference formula_0 is a holomorphic function, where it is defined. It asks for a meromorphic function "f" on "M" such that formula_1 is "holomorphic" on "Ui"; in other words, that "f" shares the singular behaviour of the given local function. The given condition on the formula_0 is evidently "necessary" for this; so the problem amounts to asking if it is sufficient. The case of one variable is the Mittag-Leffler theorem on prescribing poles, when "M" is an open subset of the complex plane. Riemann surface theory shows that some restriction on "M" will be required. The problem can always be solved on a Stein manifold. The first Cousin problem may be understood in terms of sheaf cohomology as follows. Let K be the sheaf of meromorphic functions and O the sheaf of holomorphic functions on "M". A global section formula_2 of K passes to a global section formula_3 of the quotient sheaf K/O. The converse question is the first Cousin problem: given a global section of K/O, is there a global section of K from which it arises? The problem is thus to characterize the image of the map formula_4 By the long exact cohomology sequence, formula_5 is exact, and so the first Cousin problem is always solvable provided that the first cohomology group "H"1("M",O) vanishes. In particular, by Cartan's theorem B, the Cousin problem is always solvable if "M" is a Stein manifold. Second Cousin problem. The second Cousin problem or multiplicative Cousin problem assumes that each ratio formula_6 is a non-vanishing holomorphic function, where it is defined. It asks for a meromorphic function "f" on "M" such that formula_7 is holomorphic and non-vanishing. The second Cousin problem is a multi-dimensional generalization of the Weierstrass theorem on the existence of a holomorphic function of one variable with prescribed zeros. The attack on this problem by means of taking logarithms, to reduce it to the additive problem, meets an obstruction in the form of the first Chern class (see also exponential sheaf sequence). In terms of sheaf theory, let formula_8 be the sheaf of holomorphic functions that vanish nowhere, and formula_9 the sheaf of meromorphic functions that are not identically zero. These are both then sheaves of abelian groups, and the quotient sheaf formula_10 is well-defined. The multiplicative Cousin problem then seeks to identify the image of quotient map formula_11 formula_12 The long exact sheaf cohomology sequence associated to the quotient is formula_13 so the second Cousin problem is solvable in all cases provided that formula_14 The quotient sheaf formula_10 is the sheaf of germs of Cartier divisors on "M". The question of whether every global section is generated by a meromorphic function is thus equivalent to determining whether every line bundle on "M" is trivial. 
The cohomology group formula_15 for the multiplicative structure on formula_8 can be compared with the cohomology group formula_16 with its additive structure by taking a logarithm. That is, there is an exact sequence of sheaves formula_17 where the leftmost sheaf is the locally constant sheaf with fiber formula_18. The obstruction to defining a logarithm at the level of "H"1 is in formula_19, from the long exact cohomology sequence formula_20 When "M" is a Stein manifold, the middle arrow is an isomorphism because formula_21 for formula_22 so that a necessary and sufficient condition in that case for the second Cousin problem to be always solvable is that formula_23 References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "f_i-f_j" }, { "math_id": 1, "text": "f-f_i" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\phi(f)" }, { "math_id": 4, "text": "H^0(M,\\mathbf{K}) \\, \\xrightarrow{\\phi} \\, H^0(M,\\mathbf{K}/\\mathbf{O})." }, { "math_id": 5, "text": "H^0(M,\\mathbf{K}) \\,\\xrightarrow{\\phi}\\, H^0(M,\\mathbf{K}/\\mathbf{O})\\to H^1(M,\\mathbf{O})" }, { "math_id": 6, "text": "f_i/f_j" }, { "math_id": 7, "text": "f/f_i" }, { "math_id": 8, "text": "\\mathbf{O}^*" }, { "math_id": 9, "text": "\\mathbf{K}^*" }, { "math_id": 10, "text": "\\mathbf{K}^*/\\mathbf{O}^*" }, { "math_id": 11, "text": "\\phi" }, { "math_id": 12, "text": "H^0(M,\\mathbf{K}^*)\\xrightarrow{\\phi} H^0(M,\\mathbf{K}^*/\\mathbf{O}^*)." }, { "math_id": 13, "text": "H^0(M,\\mathbf{K}^*)\\xrightarrow{\\phi} H^0(M,\\mathbf{K}^*/\\mathbf{O}^*)\\to H^1(M,\\mathbf{O}^*)" }, { "math_id": 14, "text": "H^1(M,\\mathbf{O}^*)=0." }, { "math_id": 15, "text": "H^1(M,\\mathbf{O}^*)," }, { "math_id": 16, "text": "H^1(M,\\mathbf{O})" }, { "math_id": 17, "text": "0\\to 2\\pi i\\Z\\to \\mathbf{O} \\xrightarrow{\\exp} \\mathbf{O}^* \\to 0" }, { "math_id": 18, "text": "2\\pi i\\Z" }, { "math_id": 19, "text": "H^2(M,\\Z)" }, { "math_id": 20, "text": "H^1(M,\\mathbf{O})\\to H^1(M,\\mathbf{O}^*)\\to 2\\pi i H^2(M,\\Z) \\to H^2(M, \\mathbf{O})." }, { "math_id": 21, "text": "H^q(M,\\mathbf{O}) = 0" }, { "math_id": 22, "text": "q > 0" }, { "math_id": 23, "text": "H^2(M,\\Z)=0." } ]
https://en.wikipedia.org/wiki?curid=622966
62299760
Order polytope
In mathematics, the order polytope of a finite partially ordered set is a convex polytope defined from the set. The points of the order polytope are the monotonic functions from the given set to the unit interval, its vertices correspond to the upper sets of the partial order, and its dimension is the number of elements in the partial order. The order polytope is a distributive polytope, meaning that coordinatewise minima and maxima of pairs of its points remain within the polytope. The order polytope of a partial order should be distinguished from the "linear ordering polytope", a polytope defined from a number formula_0 as the convex hull of indicator vectors of the sets of edges of formula_0-vertex transitive tournaments. Definition and example. A partially ordered set is a pair formula_1 where formula_2 is an arbitrary set and formula_3 is a binary relation on pairs of elements of formula_2 that is reflexive (for all formula_4, formula_5), antisymmetric (for all formula_6 with formula_7 at most one of formula_8 and formula_9 can be true), and transitive (for all formula_10, if formula_8 and formula_11 then formula_12). A partially ordered set formula_1 is said to be finite when formula_2 is a finite set. In this case, the collection of all functions formula_13 that map formula_2 to the real numbers forms a finite-dimensional vector space, with pointwise addition of functions as the vector sum operation. The dimension of the space is just the number of elements of formula_2. The order polytope is defined to be the subset of this space consisting of functions formula_13 with the following two properties: the function must map every element of formula_2 into the unit interval, so that formula_14, and it must be monotonic, so that formula_15 whenever formula_18 in the partial order. For example, for a partially ordered set consisting of two elements formula_16 and formula_17, with formula_18 in the partial order, the functions formula_13 from these points to real numbers can be identified with points formula_19 in the Cartesian plane. For this example, the order polytope consists of all points in the formula_20-plane with formula_21. This is an isosceles right triangle with vertices at (0,0), (0,1), and (1,1). Vertices and facets. The vertices of the order polytope consist of monotonic functions from formula_2 to formula_22. That is, the order polytope is an integral polytope; it has no vertices with fractional coordinates. These functions are exactly the indicator functions of upper sets of the partial order. Therefore, the number of vertices equals the number of upper sets. The facets of the order polytope are of three types: inequalities formula_23 for each minimal element formula_16 of the partial order, inequalities formula_24 for each maximal element formula_17 of the partial order, and inequalities formula_15 for each pair formula_25 such that formula_17 covers formula_16 (that is, formula_18 and no third element formula_26 lies strictly between them). The facets can be considered in a more symmetric way by introducing special elements formula_27 below all elements in the partial order and formula_28 above all elements, mapped by formula_13 to 0 and 1 respectively, and keeping only inequalities of the third type for the resulting augmented partially ordered set. More generally, with the same augmentation by formula_27 and formula_28, the faces of all dimensions of the order polytope correspond 1-to-1 with quotients of the partial order. Each face is congruent to the order polytope of the corresponding quotient partial order. Volume and Ehrhart polynomial. The order polytope of a linear order is a special type of simplex called an order simplex or orthoscheme. Each point of the unit cube whose coordinates are all distinct lies in a unique one of these orthoschemes, the order simplex for the linear order of its coordinates. 
Because these order simplices are all congruent to each other and (for orders on formula_0 elements) there are formula_29 different linear orders, the volume of each order simplex is formula_30. More generally, an order polytope can be partitioned into order simplices in a canonical way, with one simplex for each linear extension of the corresponding partially ordered set. Therefore, the volume of any order polytope is formula_30 multiplied by the number of linear extensions of the corresponding partially ordered set. This connection between the number of linear extensions and volume can be used to approximate the number of linear extensions of any partial order efficiently (despite the fact that computing this number exactly is #P-complete) by applying a randomized polynomial-time approximation scheme for polytope volume. The Ehrhart polynomial of the order polytope is a polynomial whose values at integer values formula_16 give the number of integer points in a copy of the polytope scaled by a factor of formula_16. For the order polytope, the Ehrhart polynomial equals (after a minor change of variables) the order polynomial of the corresponding partially ordered set. This polynomial encodes several pieces of information about the polytope including its volume (the leading coefficient of the polynomial) and its number of vertices (the sum of its coefficients). Continuous lattice. By Birkhoff's representation theorem for finite distributive lattices, the upper sets of any partially ordered set form a finite distributive lattice, and every finite distributive lattice can be represented in this way. The upper sets correspond to the vertices of the order polytope, so the mapping from upper sets to vertices provides a geometric representation of any finite distributive lattice. Under this representation, the edges of the polytope connect comparable elements of the lattice. If two functions formula_31 and formula_32 both belong to the order polytope of a partially ordered set formula_1, then the function formula_33 that maps formula_16 to formula_34, and the function formula_35 that maps formula_16 to formula_36 both also belong to the order polytope. The two operations formula_37 and formula_38 give the order polytope the structure of a continuous distributive lattice, within which the finite distributive lattice of Birkhoff's theorem is embedded. That is, every order polytope is a distributive polytope. The distributive polytopes with all vertex coordinates equal to 0 or 1 are exactly the order polytopes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
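A brute-force computation makes the vertex and volume statements easy to check on small examples. The following sketch is illustrative code with invented function names: it represents a finite poset by a list of its order relations, enumerates the 0/1 monotone functions that form the vertices, and recovers the volume as the number of linear extensions divided by n factorial.

```python
from itertools import permutations, product
from math import factorial

def vertices(elements, relations):
    """Vertices of the order polytope: 0/1 monotone functions (indicators of upper sets)."""
    result = []
    for bits in product((0, 1), repeat=len(elements)):
        f = dict(zip(elements, bits))
        if all(f[x] <= f[y] for x, y in relations):
            result.append(bits)
    return result

def volume(elements, relations):
    """Volume = (number of linear extensions) / n!."""
    extensions = sum(
        all(perm.index(x) < perm.index(y) for x, y in relations)
        for perm in permutations(elements)
    )
    return extensions / factorial(len(elements))

# The two-element chain x <= y from the example above
print(vertices(['x', 'y'], [('x', 'y')]))  # [(0, 0), (0, 1), (1, 1)]
print(volume(['x', 'y'], [('x', 'y')]))    # 0.5, the area of the right triangle
```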
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "(S,\\le)" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "\\le" }, { "math_id": 4, "text": "x\\in S" }, { "math_id": 5, "text": "x\\le x" }, { "math_id": 6, "text": "x,y\\in S" }, { "math_id": 7, "text": "x\\ne y" }, { "math_id": 8, "text": "x\\le y" }, { "math_id": 9, "text": "y\\le x" }, { "math_id": 10, "text": "x,y,z\\in S" }, { "math_id": 11, "text": "y\\le z" }, { "math_id": 12, "text": "x\\le z" }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "0\\le f(x)\\le 1" }, { "math_id": 15, "text": "f(x)\\le f(y)" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "y" }, { "math_id": 18, "text": "x \\le y" }, { "math_id": 19, "text": "(f(x),f(y))" }, { "math_id": 20, "text": "(x,y)" }, { "math_id": 21, "text": "0\\le x\\le y \\le 1" }, { "math_id": 22, "text": "\\{0,1\\}" }, { "math_id": 23, "text": "0\\le f(x)" }, { "math_id": 24, "text": "f(y)\\le 1" }, { "math_id": 25, "text": "x,y" }, { "math_id": 26, "text": "z" }, { "math_id": 27, "text": "\\bot" }, { "math_id": 28, "text": "\\top" }, { "math_id": 29, "text": "n!" }, { "math_id": 30, "text": "1/n!" }, { "math_id": 31, "text": "p" }, { "math_id": 32, "text": "q" }, { "math_id": 33, "text": "p\\wedge q" }, { "math_id": 34, "text": "\\min(p(x),q(x))" }, { "math_id": 35, "text": "p\\vee q" }, { "math_id": 36, "text": "\\max(p(x),q(x))" }, { "math_id": 37, "text": "\\wedge" }, { "math_id": 38, "text": "\\vee" } ]
https://en.wikipedia.org/wiki?curid=62299760
62302975
Soft Growing Robotics
Soft Growing Robotics is a subset of soft robotics concerned with designing and building robots that use robot body expansion to move and interact with the environment. Soft growing robots are built from compliant materials and attempt to mimic how vines, plant shoots, and other organisms reach new locations through growth. While other forms of robots use locomotion to achieve their objectives, soft growing robots elongate their body through addition of new material, or expansion of material. This gives them the ability to travel through constricted areas and form a wide range of useful 3-D formations. Currently there are two main soft growing robot designs: additive manufacturing and tip extension. Some goals of soft growing robotics development are the creation of robots that can explore constricted areas and improve surgical procedures. Additive manufacturing design. One way of extending the robot body is through additive manufacturing. Additive manufacturing generally refers to 3-D printing, or the fabrication of three dimensional objects through the conjoining of many layers of material. Additive manufacturing design of a soft growing robot utilizes a modified 3-D printer at the tip of the robot to deposit thermoplastics (material that is rigid when cooled and flexible when heated) to extend the robot in the desired orientation. Design characteristics. The body of the robot consists of several components, described below. The additive manufacturing process involves polylactic acid filament (a thermoplastic) being pulled through the tubular body of the robot by a motor in the tip. At the tip, the filament passes through a heating element, making it pliable. The filament is then turned perpendicular to the direction of robot growth and deposited onto the outer edge of a rotating disk facing the base of the robot. As the disk (known as the deposition head) rotates, new filament is deposited in spiraling layers. This filament solidifies in front of the previous layer of filament, pushing the tip of the robot forward. The interactions between the temperature of the heating element, the rotation of the deposition head, and the speed the filament is fed through the heating element are precisely controlled to ensure the robot grows in the desired manner. Movement control. The speed of the robot is controlled by changing the temperature of the heating element, the speed at which filament is fed through the heating element, and the speed the deposition head is spun. Speed can be defined as the function: formula_0 Where formula_1 is the thickness of the deposited layer of filament, and formula_2 is the angle of the helix in which the filament material is deposited. Controlling the direction of growth (and thus the direction of robot "movement") can be done in two ways. Capabilities. One of the major advantages of soft growing robots is that minimal friction exists between the outside environment and the robot. This is because only the robot tip moves relative to the environment. Multiple robots using additive manufacturing for growth were designed for burrowing into the soil, as less friction with the environment reduces the energy required to move through the environment. Tip extension design. A second form of soft growing robot design is tip extension. This design is characterized by a tube of material (common materials include nylon fabric, low density polyethylene, and silicone coated nylon) pressurized with air or water that is folded into itself. 
By letting out the folded material, the robot extends from the tip as the pressurized tube pushes out the inner folded material. Design characteristics. In contrast with additive manufacturing, where new material is deposited behind the tip of the robot to push the tip forward, tip extension utilizes the internal pressure within the robot body to push out new material at the tip of the robot. Often, the tubing inside the robot body is stored on a reel to make it easier to control the release of tubing and thus robot growth. Multiple methods of turning a tip extension robot have been developed. Robots utilizing the tip extension design are retractable. Current designs use a wire attached to the tip of the robot that is used to pull the tip of the robot back into the robot body. Mathematical analysis. The theoretical force the tip grows under can be modelled as: formula_3 Where formula_4 represents the force the tip grows under, formula_5 represents internal pressure, and formula_6 represents the cross-sectional area of the robot tip. However, the experimental force the tip expands under has been found to be less than this, largely due to axial tension in the robot body. A model that approximates formula_4 more accurately is: formula_7 Here, formula_8 is an experimentally determined constant and formula_9 is the yield pressure when no growth occurs. formula_10, formula_11, and formula_12 are force terms dependent on the velocity, length, and curvature of the robot, respectively. Additionally, multiple mathematical models for various forms of turning, twisting, and retracting have been developed. Methods of robot operation. Soft growing robots can be controlled in various ways depending on how well the objective and growth path are defined. Without a clearly defined goal or robot growth path, teleoperation is used. When a clearly defined goal exists (such as a light source), computer vision can be used to find a path to the goal and grow a robot along that path. If the desired path of robot growth is known before the robot is deployed, pre-planned turning positions can be used to control the robot. Applications. Possible applications of soft growing robots focus on their low friction/interaction with the environment, their simple method of growth, and their ability to grow through cramped environments. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
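The growth-speed relation for the additive-manufacturing design and the driving-force model for the tip-extension design can be turned into a small calculator. The code below is a sketch only: the function and parameter names are invented for illustration, and the constants (k, the yield pressure, and the velocity-, length-, and curvature-dependent force terms) must be determined experimentally for a specific robot; the numerical values shown are not measured data.

```python
import math

def growth_speed(layer_thickness, helix_angle_rad):
    """Tip speed from the deposited layer thickness and helix angle (first formula above)."""
    return layer_thickness / math.sqrt(1.0 / math.tan(helix_angle_rad) ** 2 + 1.0)

def driving_force(pressure, tip_area, k, yield_pressure,
                  f_velocity, f_length, f_curvature_terms):
    """Approximate net driving force at the tip of a tip-extension robot (second formula above)."""
    return (pressure * tip_area * k
            - (yield_pressure * tip_area + f_velocity)
            - (f_length + sum(f_curvature_terms)))

# Purely illustrative numbers (SI units)
print(growth_speed(layer_thickness=0.2e-3, helix_angle_rad=math.radians(80)))
print(driving_force(pressure=50e3, tip_area=1e-3, k=0.8, yield_pressure=5e3,
                    f_velocity=2.0, f_length=1.0, f_curvature_terms=[0.5, 0.3]))
```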
[ { "math_id": 0, "text": "S=\\frac{L_d}{\\sqrt{\\frac{1}{(\\tan\\alpha)^2}+1}}" }, { "math_id": 1, "text": "L_d" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "F_{driving}=PA" }, { "math_id": 4, "text": "F_{driving}" }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "F_{driving}=PAk-[YA+F_v]-[F_l+\\Sigma_iF_{Ci}]\n" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "Y\n" }, { "math_id": 10, "text": "F_v\n" }, { "math_id": 11, "text": "F_l\n" }, { "math_id": 12, "text": "F_{Ci}\n" } ]
https://en.wikipedia.org/wiki?curid=62302975
62303294
Geometrically (algebraic geometry)
In algebraic geometry, especially in scheme theory, a property is said to hold geometrically over a field if it also holds over the algebraic closure of the field. In other words, a property holds geometrically if it holds after a base change to a geometric point. For example, a smooth variety is a variety that is geometrically regular. Geometrically irreducible and geometrically reduced. Given a scheme "X" that is of finite type over a field "k", the following are equivalent: the base change formula_0 is irreducible, where formula_1 denotes an algebraic closure of "k"; the base change formula_2 is irreducible, where formula_3 denotes a separable closure of "k"; and the base change formula_4 is irreducible for every field extension "F" of "k". The same statement also holds if "irreducible" is replaced with "reduced" and the separable closure is replaced by the perfect closure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
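A standard example (not from the article itself) illustrates the distinction. Over the field of real numbers, the scheme Spec R[x, y]/(x² + y²) is irreducible, because x² + y² is irreducible over R, but it is not geometrically irreducible: over the complex numbers x² + y² = (x + iy)(x − iy), and the base change breaks into two lines. Similarly, over the imperfect field F_p(t), the scheme Spec F_p(t)[x]/(x^p − t) is reduced (it is the spectrum of a field) but not geometrically reduced, since over an algebraic closure x^p − t = (x − t^{1/p})^p and the base change acquires nilpotents.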
[ { "math_id": 0, "text": "X \\times_k \\overline{k} := X \\times_{\\operatorname{Spec} k} {\\operatorname{Spec} \\overline{k}}" }, { "math_id": 1, "text": "\\overline{k}" }, { "math_id": 2, "text": "X \\times_k k_s" }, { "math_id": 3, "text": "k_s" }, { "math_id": 4, "text": "X \\times_k F" } ]
https://en.wikipedia.org/wiki?curid=62303294
62305152
Inverse consistency
In image registration, inverse consistency measures the consistency of mappings between images produced by a registration algorithm. The inverse consistency error, introduced by Christensen and Johnson in 2001, quantifies the distance between the composition of the mappings from each image to the other, produced by the registration procedure, and the identity function, and is used as a regularisation constraint in the loss function of many registration algorithms to enforce consistent mappings. Inverse consistency is necessary for good image registration but it is not sufficient, since a mapping can be perfectly consistent but not register the images at all. Definition. Image registration is the process of establishing a common coordinate system between two images, and given two images formula_0 registering a source image formula_1 to a target image formula_2 consists of determining a transformation formula_3 that maps points from the target space to the source space. An ideal registration algorithm should not be sensitive to which image in the pair is used as source or target, and the registration operator should be antisymmetric, such that the mappings formula_4 produced when registering formula_1 to formula_2 and formula_2 to formula_1 respectively should be the inverse of each other, i.e. formula_5 and formula_6 or, equivalently, formula_7 and formula_8, where formula_9 denotes the function composition operator. Real algorithms are not perfect, and when swapping the role of source and target image in a registration problem the transformations obtained are not the inverse of each other. Inverse consistency can be enforced by adding to the loss function of the registration a symmetric regularisation term that penalises inconsistent transformations formula_10 Inverse consistency can be used as a quality metric to evaluate image registration results. The inverse consistency error (formula_11) measures the distance between the composition of the two transforms and the identity function, and it can be formulated in terms of either the average (formula_12) or the maximum (formula_13) over a region of interest formula_14 of the image: formula_15 While inverse consistency is a necessary property of good registration algorithms, inverse consistency error alone is not a sufficient metric to evaluate the quality of image registration results, since a perfectly consistent mapping, with no other constraint, may not even come close to correctly registering a pair of images.
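For mappings given as functions on point coordinates, the inverse consistency error can be estimated by sampling. The snippet below is an illustrative sketch (the names and the toy affine example are invented, not taken from the source): it composes the two mappings and reports the average and maximum deviation from the identity over the sampled points.

```python
import numpy as np

def inverse_consistency_error(f1, f2, points):
    """Average and maximum inverse consistency error over an array of sample points.

    f1 maps target to source coordinates and f2 maps source to target
    coordinates; both take and return arrays of shape (N, d).
    """
    composed = f2(f1(points))                        # ideally equal to the identity
    errors = np.linalg.norm(composed - points, axis=1)
    return errors.mean(), errors.max()

# Toy 2D example: two affine maps that are only approximately inverse to each other
f1 = lambda p: 1.05 * p + np.array([0.1, -0.2])
f2 = lambda p: (p - np.array([0.1, -0.2])) / 1.06    # slightly inconsistent inverse
points = np.random.default_rng(0).uniform(0.0, 10.0, size=(1000, 2))
print(inverse_consistency_error(f1, f2, points))
```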
[ { "math_id": 0, "text": "\n\\begin{align}\n I_1: \\Omega_1 \\to \\mathbb{R} \\\\\n I_2: \\Omega_2 \\to \\mathbb{R}\n\\end{align}\n" }, { "math_id": 1, "text": "I_1" }, { "math_id": 2, "text": "I_2" }, { "math_id": 3, "text": "f_1: \\Omega_2 \\to \\Omega_1" }, { "math_id": 4, "text": "\n\\begin{align}\n f_1: \\Omega_2 \\to \\Omega_1 \\\\\n f_2: \\Omega_1 \\to \\Omega_2\n\\end{align}\n" }, { "math_id": 5, "text": "f_2 = f_1^{-1}" }, { "math_id": 6, "text": "f_1 = f_2^{-1}" }, { "math_id": 7, "text": "f_2 \\circ f_1 = \\operatorname{id}_{\\Omega_2}" }, { "math_id": 8, "text": "f_1 \\circ f_2 = \\operatorname{id}_{\\Omega_1}" }, { "math_id": 9, "text": "\\circ" }, { "math_id": 10, "text": "\n\\int_{\\Omega_2} \\left\\Vert f_2(f_1(x)) - x \\right\\Vert^2 \\mathrm{d}x +\n\\int_{\\Omega_1} \\left\\Vert f_1(f_2(x)) - x \\right\\Vert^2 \\mathrm{d}x .\n" }, { "math_id": 11, "text": "ICE" }, { "math_id": 12, "text": "ICE_a" }, { "math_id": 13, "text": "ICE_m" }, { "math_id": 14, "text": "\\Omega" }, { "math_id": 15, "text": "\n\\begin{align}\n ICE_a &= \\frac{1}{\\int_{\\Omega} \\mathrm{d}x} \\int_{\\Omega} \\left\\Vert f_2(f_1(x)) - x \\right\\Vert \\mathrm{d}x \\\\\n ICE_m &= \\max_{x \\in \\Omega} \\left\\Vert f_2(f_1(x)) - x \\right\\Vert .\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=62305152
6230931
Expander mixing lemma
The expander mixing lemma intuitively states that the edges of certain formula_0-regular graphs are evenly distributed throughout the graph. In particular, the number of edges between two vertex subsets formula_1 and formula_2 is always close to the expected number of edges between them in a random formula_0-regular graph, namely formula_3. "d"-Regular Expander Graphs. Define an formula_4-graph to be a formula_0-regular graph formula_5 on formula_6 vertices such that all of the eigenvalues of its adjacency matrix formula_7 except one have absolute value at most formula_8 The formula_0-regularity of the graph guarantees that its largest absolute value of an eigenvalue is formula_9 In fact, the all-1's vector formula_10 is an eigenvector of formula_7 with eigenvalue formula_0, and the eigenvalues of the adjacency matrix will never exceed the maximum degree of formula_5 in absolute value. If we fix formula_0 and formula_11 then formula_4-graphs form a family of expander graphs with a constant spectral gap. Statement. Let formula_12 be an formula_4-graph. For any two subsets formula_13, let formula_14 be the number of edges between "S" and "T" (counting edges contained in the intersection of "S" and "T" twice). Then formula_15 Tighter Bound. We can in fact show that formula_16 using similar techniques. Biregular Graphs. For biregular graphs, we have the following variation, where we take formula_17 to be the second largest eigenvalue. Let formula_18 be a bipartite graph such that every vertex in formula_19 is adjacent to formula_20 vertices of formula_21 and every vertex in formula_21 is adjacent to formula_22 vertices of formula_19. Let formula_23 with formula_24 and formula_25. Let formula_26. Then formula_27 Note that formula_28 is the largest eigenvalue of formula_5. Proofs. Proof of First Statement. Let formula_7 be the adjacency matrix of formula_5 and let formula_29 be the eigenvalues of formula_7 (these eigenvalues are real because formula_7 is symmetric). We know that formula_30 with corresponding eigenvector formula_31, the normalization of the all-1's vector. Define formula_32 and note that formula_33. Because formula_7 is symmetric, we can pick eigenvectors formula_34 of formula_7 corresponding to eigenvalues formula_35 so that formula_36 forms an orthonormal basis of formula_37. Let formula_38 be the formula_39 matrix of all 1's. Note that formula_40 is an eigenvector of formula_38 with eigenvalue formula_6 and each other formula_41, being perpendicular to formula_42, is an eigenvector of formula_38 with eigenvalue 0. For a vertex subset formula_43, let formula_44 be the column vector with formula_45 coordinate equal to 1 if formula_46 and 0 otherwise. Then, formula_47. Let formula_48. Because formula_7 and formula_38 share eigenvectors, the eigenvalues of formula_49 are formula_50. By the Cauchy-Schwarz inequality, we have that formula_51. Furthermore, because formula_49 is self-adjoint, we can write formula_52. This implies that formula_53 and formula_54. Proof Sketch of Tighter Bound. To show the tighter bound above, we instead consider the vectors formula_55 and formula_56, which are both perpendicular to formula_40. We can expand formula_57 because the other two terms of the expansion are zero. The first term is equal to formula_58, so we find that formula_59 We can bound the right hand side by formula_60 using the same methods as in the earlier proof. Applications. The expander mixing lemma can be used to upper bound the size of an independent set within a graph. 
In particular, the size of an independent set in an formula_4-graph is at most formula_61 This is proved by letting formula_62 in the statement above and using the fact that formula_63 An additional consequence is that, if formula_5 is an formula_4-graph, then its chromatic number formula_64 is at least formula_65 This is because, in a valid graph coloring, the set of vertices of a given color is an independent set. By the above fact, each independent set has size at most formula_66 so at least formula_67 such sets are needed to cover all of the vertices. A second application of the expander mixing lemma is to provide an upper bound on the maximum possible size of an independent set within a polarity graph. Given a finite projective plane formula_68 with a polarity formula_69 the polarity graph is a graph where the vertices are the points of formula_68, and vertices formula_70 and formula_71 are connected if and only if formula_72 In particular, if formula_68 has order formula_73 then the expander mixing lemma can show that an independent set in the polarity graph can have size at most formula_74 a bound proved by Hobart and Williford. Converse. Bilu and Linial showed that a converse holds as well: if a formula_0-regular graph formula_12 satisfies that for any two subsets formula_13 with formula_75 we have formula_76 then its second-largest (in absolute value) eigenvalue is bounded by formula_77. Generalization to hypergraphs. Friedman and Wigderson proved the following generalization of the mixing lemma to hypergraphs. Let formula_78 be a formula_79-uniform hypergraph, i.e. a hypergraph in which every "edge" is a tuple of formula_79 vertices. For any choice of subsets formula_80 of vertices, formula_81 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
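The statement is easy to verify numerically on a small example. The sketch below (illustrative code with invented names) builds the Petersen graph, a 3-regular graph on 10 vertices whose non-trivial eigenvalues have absolute value at most 2, and checks the inequality for one choice of vertex subsets.

```python
import numpy as np

def mixing_bound(adj, S, T):
    """Return |e(S,T) - d|S||T|/n| and the bound lambda*sqrt(|S||T|) for a d-regular graph."""
    n = adj.shape[0]
    d = int(adj[0].sum())
    eigenvalues = np.linalg.eigvalsh(adj)
    lam = max(abs(e) for e in eigenvalues if not np.isclose(e, d))  # drop the trivial eigenvalue d
    e_st = sum(adj[i, j] for i in S for j in T)   # ordered pairs, so edges inside S∩T count twice
    return abs(e_st - d * len(S) * len(T) / n), lam * np.sqrt(len(S) * len(T))

# Petersen graph: outer 5-cycle, inner pentagram, and spokes
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, 5 + i) for i in range(5)])
A = np.zeros((10, 10), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

lhs, rhs = mixing_bound(A, S=[0, 1, 2], T=[5, 6, 7])
assert lhs <= rhs
print(lhs, rhs)   # approximately 0.3 and 6.0 with these subsets
```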
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "\\frac dn|S||T|" }, { "math_id": 4, "text": "(n, d, \\lambda)" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "A_G" }, { "math_id": 8, "text": "\\lambda." }, { "math_id": 9, "text": "d." }, { "math_id": 10, "text": "\\mathbf1" }, { "math_id": 11, "text": "\\lambda" }, { "math_id": 12, "text": "G = (V, E)" }, { "math_id": 13, "text": "S, T \\subseteq V" }, { "math_id": 14, "text": "e(S, T) = |\\{(x,y) \\in S \\times T : xy \\in E(G)\\}|" }, { "math_id": 15, "text": "\\left|e(S, T) - \\frac{d |S| |T|}{n}\\right| \\leq \\lambda \\sqrt{|S| |T| }\\,." }, { "math_id": 16, "text": "\\left|e(S, T) - \\frac{d |S| |T|}{n}\\right| \\leq \\lambda \\sqrt{|S| |T| (1 - |S|/n)(1 - |T|/n)}\\," }, { "math_id": 17, "text": "\\lambda " }, { "math_id": 18, "text": "G = (L, R, E)" }, { "math_id": 19, "text": "L" }, { "math_id": 20, "text": "d_L" }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": "d_R" }, { "math_id": 23, "text": "S \\subseteq L, T \\subseteq R" }, { "math_id": 24, "text": "|S| = \\alpha|L|" }, { "math_id": 25, "text": "|T| = \\beta |R|" }, { "math_id": 26, "text": "e(G) = |E(G)|" }, { "math_id": 27, "text": "\\left|\\frac{e(S, T)}{e(G)} - \\alpha \\beta\\right| \\leq \\frac{\\lambda}{\\sqrt{d_Ld_R}} \\sqrt{\\alpha \\beta (1 - \\alpha) (1 - \\beta)} \\leq \\frac{\\lambda}{\\sqrt{d_Ld_R}} \\sqrt{\\alpha \\beta}\\,." }, { "math_id": 28, "text": "\\sqrt{d_L d_R}" }, { "math_id": 29, "text": "\\lambda_1\\geq\\cdots\\geq\\lambda_n" }, { "math_id": 30, "text": "\\lambda_1=d" }, { "math_id": 31, "text": "v_1=\\frac 1{\\sqrt n}\\mathbf{1}" }, { "math_id": 32, "text": "\\lambda=\\sqrt{\\max\\{\\lambda_2^2,\\dots,\\lambda_n^2\\}}" }, { "math_id": 33, "text": "\\max\\{\\lambda_2^2,\\dots,\\lambda_n^2\\}\\leq\\lambda^2\\leq\\lambda_1^2=d^2" }, { "math_id": 34, "text": "v_2,\\ldots,v_n" }, { "math_id": 35, "text": "\\lambda_2,\\ldots,\\lambda_n" }, { "math_id": 36, "text": "\\{v_1,\\ldots,v_n\\}" }, { "math_id": 37, "text": "\\mathbf R^n" }, { "math_id": 38, "text": "J" }, { "math_id": 39, "text": "n\\times n" }, { "math_id": 40, "text": "v_1" }, { "math_id": 41, "text": "v_i" }, { "math_id": 42, "text": "v_1=\\mathbf{1}" }, { "math_id": 43, "text": "U \\subseteq V" }, { "math_id": 44, "text": "1_U" }, { "math_id": 45, "text": "v^\\text{th}" }, { "math_id": 46, "text": "v\\in U" }, { "math_id": 47, "text": "\\left|e(S,T)-\\frac dn|S||T|\\right|=\\left|1_S^\\operatorname{T}\\left(A_G-\\frac dnJ\\right)1_T\\right|" }, { "math_id": 48, "text": "M=A_G-\\frac dnJ" }, { "math_id": 49, "text": "M" }, { "math_id": 50, "text": "0,\\lambda_2,\\ldots,\\lambda_n" }, { "math_id": 51, "text": "|1_S^\\operatorname{T}M1_T|=|1_S\\cdot M1_T|\\leq\\|1_S\\|\\|M1_T\\|" }, { "math_id": 52, "text": "\\|M1_T\\|^2=\\langle M1_T,M1_T\\rangle=\\langle 1_T,M^21_T\\rangle=\\left\\langle 1_T,\\sum_{i=1}^nM^2\\langle 1_T,v_i\\rangle v_i\\right\\rangle=\\sum_{i=2}^n\\lambda_i^2\\langle 1_T,v_i\\rangle^2\\leq\\lambda^2\\|1_T\\|^2" }, { "math_id": 53, "text": "\\|M1_T\\|\\leq\\lambda\\|1_T\\|" }, { "math_id": 54, "text": "\\left|e(S,T)-\\frac dn|S||T|\\right|\\leq\\lambda\\|1_S\\|\\|1_T\\|=\\lambda\\sqrt{|S||T|}" }, { "math_id": 55, "text": "1_S-\\frac{|S|}n\\mathbf 1" }, { "math_id": 56, "text": "1_T-\\frac{|T|}n\\mathbf 1" }, { "math_id": 57, "text": "1_S^\\operatorname{T}A_G1_T=\\left(\\frac{|S|}n\\mathbf 1\\right)^\\operatorname{T}A_G\\left(\\frac{|T|}n\\mathbf 
1\\right)+\\left(1_S-\\frac{|S|}n\\mathbf 1\\right)^\\operatorname{T}A_G\\left(1_T-\\frac{|T|}n\\mathbf 1\\right)" }, { "math_id": 58, "text": "\\frac{|S||T|}{n^2}\\mathbf 1^\\operatorname{T}A_G\\mathbf 1=\\frac dn|S||T|" }, { "math_id": 59, "text": "\\left|e(S,T)-\\frac dn|S||T|\\right|\n\\leq\\left|\\left(1_S-\\frac{|S|}n\\mathbf 1\\right)^\\operatorname{T}A_G\\left(1_T-\\frac{|T|}n\\mathbf 1\\right)\\right|" }, { "math_id": 60, "text": "\\lambda\\left\\|1_S-\\frac{|S|}{|n|}\\mathbf 1\\right\\|\\left\\|1_T-\\frac{|T|}{|n|}\\mathbf 1\\right\\|\n=\\lambda\\sqrt{|S||T|\\left(1-\\frac{|S|}n\\right)\\left(1-\\frac{|T|}n\\right)}" }, { "math_id": 61, "text": "\\lambda n/d." }, { "math_id": 62, "text": "T=S" }, { "math_id": 63, "text": "e(S,S)=0." }, { "math_id": 64, "text": "\\chi(G)" }, { "math_id": 65, "text": "d/\\lambda." }, { "math_id": 66, "text": "\\lambda n/d," }, { "math_id": 67, "text": "d/\\lambda" }, { "math_id": 68, "text": "\\pi" }, { "math_id": 69, "text": "\\perp," }, { "math_id": 70, "text": "x" }, { "math_id": 71, "text": "y" }, { "math_id": 72, "text": "x\\in y^{\\perp}." }, { "math_id": 73, "text": "q," }, { "math_id": 74, "text": "q^{3/2} - q + 2q^{1/2} - 1," }, { "math_id": 75, "text": "S \\cap T = \\empty " }, { "math_id": 76, "text": "\\left|e(S, T) - \\frac{d |S| |T|}{n}\\right| \\leq \\lambda \\sqrt{|S| |T|}," }, { "math_id": 77, "text": "O(\\lambda (1+\\log(d/\\lambda)))" }, { "math_id": 78, "text": "H" }, { "math_id": 79, "text": "k" }, { "math_id": 80, "text": "V_1, ..., V_k" }, { "math_id": 81, "text": "\\left| |e(V_1,...,V_k)| - \\frac{k!|E(H)|}{n^k}|V_1|...|V_k| \\right| \\le \\lambda_2(H)\\sqrt{|V_1|...|V_k|}." } ]
https://en.wikipedia.org/wiki?curid=6230931
6231032
Acoustic guitar
Fretted string instrument An acoustic guitar is a musical instrument in the string family. When a string is plucked, its vibration is transmitted from the bridge, resonating throughout the top of the guitar. It is also transmitted to the side and back of the instrument, resonating through the air in the body, and producing sound from the sound hole. While the original, general term for this stringed instrument is "guitar", the retronym 'acoustic guitar' – often used to indicate the steel stringed model – distinguishes it from an electric guitar, which relies on electronic amplification. Typically, a guitar's body is a sound box, of which the top side serves as a sound board that enhances the vibration sounds of the strings. In standard tuning the guitar's six strings are tuned (low to high) E2 A2 D3 G3 B3 E4. Guitar strings may be plucked individually with a pick (plectrum) or fingertip, or strummed to play chords. Plucking a string causes it to vibrate at a fundamental pitch determined by the string's length, mass, and tension. (Overtones are also present, closely related to harmonics of the fundamental pitch.) The string causes the soundboard and the air enclosed by the sound box to vibrate. As these have their own resonances, they amplify some overtones more strongly than others, affecting the timbre of the resulting sound. History. The guitar likely originated in Spain in the early 16th century, deriving from the guitarra latina. Gitterns (small, plucked guitars), were the first small, guitar-like instruments created during the Spanish Middle Ages with a round back, like that of the lute. Modern guitar-shaped instruments were not seen until the Renaissance era, when the body and size began to take a guitar-like shape. The earliest string instruments related to the guitar and its structure were broadly known as vihuelas within Spanish musical culture. Vihuelas were string instruments that were commonly seen in the 16th century during the Renaissance. Later, Spanish writers distinguished these instruments into two categories of vihuelas. The vihuela de arco was an instrument that mimicked the violin, and the vihuela de Penola was played with a plectrum or by hand. When it was played by hand it was known as the vihuela de mano. Vihuela de mano shared extreme similarities with the Renaissance guitar as it used hand movement at the sound hole or sound chamber of the instrument to create music. By 1790 only six-course vihuela guitars (six unison-tuned pairs of strings) were being created and had become the main type and model of guitar used in Spain. Most of the older 5-course guitars were still in use but were also being modified to a six-coursed acoustical guitar. Fernando Ferandiere's book "Arte de tocar la Guitarra Española por Música" (Madrid, 1799) describes the standard Spanish guitar from his time as an instrument with seventeen frets and six courses with the first two 'gut' strings tuned in unison called the "terceras" and the tuning named to 'G' of the two strings. The acoustic guitar at this time began to take the shape familiar in the modern acoustic guitar. The coursed pairs of strings eventually became less common in favor of single strings. Around 1850, the form and structure of the modern guitar was established by Spanish guitar maker Antonio Torres Jurado who increased the size of the guitar body, altered its proportions, and made use of fan bracing, which first appeared in guitars made by Francisco Sanguino in the late 18th century. 
The bracing pattern, which refers to the internal pattern of wood reinforcements used to secure the guitar's top and back to prevent the instrument from collapsing under tension, is an important factor in how the guitar sounds. Torres' design greatly improved the volume, tone, and projection of the instrument, and it has remained essentially unchanged since. Acoustic properties. The acoustic guitar's soundboard, or top, also has a strong effect on the loudness of the guitar. Woods that are good at transmitting sound, like spruce, are commonly used for the soundboard. No amplification occurs in this process, because musicians add no external energy to increase the loudness of the sound (as would be the case with an electronic amplifier). All the energy is provided by the plucking of the string. Without a soundboard, however, the string would just "cut" through the air without moving it much. The soundboard increases the surface of the vibrating area in a process called mechanical impedance matching. The soundboard can move the air much more easily than the string alone, because it is large and flat. This increases the entire system's energy transfer efficiency, and a much louder sound is emitted. In addition, the acoustic guitar has a hollow body, and an additional coupling and resonance effect increases the efficiency of energy transmission in lower frequencies. The air in a guitar's cavity resonates with the vibrational modes of the string and soundboard. At low frequencies, which depend on the size of the box, the chamber acts like a Helmholtz resonator, increasing or decreasing the volume of the sound again depending on whether the air in the box moves in phase or out of phase with the strings. When in phase, the sound increases by about 3 decibels. In opposing phase, it decreases about 3 decibels. As a Helmholtz resonator, the air at the opening is vibrating in or out of phase with the air in the box and in or out of phase with the strings. These resonance interactions attenuate or amplify the sound at different frequencies, boosting or damping various harmonic tones. Ultimately, the cavity air vibrations couple to the outside air through the sound hole, though some variants of the acoustic guitar omit this hole, or have formula_0 holes, like a violin family instrument (a trait found in some electric guitars such as the ES-335 and ES-175 models from Gibson). This coupling is most efficient because here the impedance matching is perfect: it is air pushing air. A guitar has several sound coupling modes: string to soundboard, soundboard to cavity air, and both soundboard and cavity air to outside air. The back of the guitar also vibrates to some degree, driven by air in the cavity and mechanical coupling to the rest of the guitar. The guitar, as an acoustic system, colors the sound by the way it generates and emphasizes harmonics, and how it couples this energy to the surrounding air (which ultimately is what we perceive as loudness). Improved coupling, however, comes at the cost of decay time, since the string's energy is more efficiently transmitted. Solid body electric guitars (with no soundboard at all) produce very low volume, but tend to have long sustain. All these complex air coupling interactions, and the resonant properties of the panels themselves, are a key reason that different guitars have different tonal qualities. The sound is a complex mixture of harmonics that give the guitar its distinctive sound.
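The Helmholtz-resonator behaviour described above can be made concrete with the textbook resonance formula. The function and the numbers below are purely illustrative (they are not measurements of any particular instrument) and treat the body as a rigid cavity with a circular sound hole in a thin plate, which only roughly approximates a real guitar.

```python
import math

def helmholtz_frequency(cavity_volume_m3, hole_radius_m, speed_of_sound=343.0):
    """Rough main air resonance of a guitar body modelled as a Helmholtz resonator."""
    area = math.pi * hole_radius_m ** 2
    effective_neck = 1.7 * hole_radius_m     # approximate end correction for a hole in a plate
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(area / (cavity_volume_m3 * effective_neck))

# Illustrative dreadnought-like figures: ~15 litre body, ~8.6 cm diameter sound hole
print(round(helmholtz_frequency(cavity_volume_m3=0.015, hole_radius_m=0.043)))  # about 126 Hz
```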
Amplification. Classical gut-string guitars lacked adequate projection, and were unable to displace banjos until innovations were introduced that helped to increase their volume. Two important innovations were introduced by United States firm C.F. Martin: steel strings and the increasing of the guitar top area; the popularity of Martin's larger "dreadnought" body size among acoustic performers is related to the greater sound volume produced. These innovations allowed guitars to compete with and often displace the banjos that had previously dominated jazz bands. The steel strings increased tension on the neck; for stability, Martin reinforced the neck with a steel truss rod, which became standard in later steel-string guitars. An acoustic guitar can be amplified by using various types of pickups or microphones. However, amplification of acoustic guitars had many problems with audio feedback. In the 1960s, Ovation's parabolic bowls dramatically reduced feedback, allowing greater amplification of acoustic guitars. In the 1970s, Ovation developed thinner sound-boards with carbon-based composites laminating a thin layer of birch, in its Adamas model, which has been viewed as one of the most radical designs in the history of acoustic guitars. The Adamas model dissipated the sound-hole of the traditional soundboard among 22 small sound-holes in the upper chamber of the guitar, yielding greater volume and further reducing feedback during amplification. Another method for reducing feedback is to fit a rubber or plastic disc into the sound hole. The most common types of pickups used for acoustic guitar amplification are piezo and magnetic pickups. Piezo pickups are generally mounted under the bridge saddle of the acoustic guitar and can be plugged into a mixer or amplifier. A Piezo pickup made by Baldwin was incorporated in the body of Ovation guitars, rather than attached by drilling through the body; the combination of the Piezo pickup and parabolic ("roundback") body helped Ovation succeed in the market during the 1970s. Magnetic pickups on acoustic guitars are generally mounted in the sound hole, and are similar to those in electric guitars. An acoustic guitar with pickups for electrical amplification is called an acoustic-electric guitar. In the 2000s, manufacturers introduced new types of pickups to try to amplify the full sound of these instruments. This includes body sensors, and systems that include an internal microphone along with body sensors or under-the-saddle pickups. Types. Historical and modern acoustic guitars are extremely varied in their design and construction. Some of the most important varieties are the classical guitar (Spanish Guitar/Nylon-stringed), steel-string acoustic guitar and Colombian tiple. Body shape. Common body shapes for modern acoustic guitars, from smallest to largest: Range – The smallest common body shape, sometimes called a "mini jumbo", is three-quarters the size of a jumbo-shaped guitar. A range shape typically has a rounded back to improve projection for the smaller body. The smaller body and scale length make the range guitar an option for players who struggle with larger body guitars. Parlor – Parlor guitars have small compact bodies and have been described as "punchy" sounding with a delicate tone. They normally have 12 open frets. The smaller body makes the parlor a more comfortable option for players who find large body guitars uncomfortable. Grand Concert – This mid-sized body shape is not as deep as other full-size guitars, but has a full waist.
Because of the smaller body, grand concert guitars have a more controlled overtone and are often used for their sound projection when recording. Auditorium – Similar in dimensions to the dreadnought body shape, but with a much more pronounced waist. This general body shape is also sometimes referred to as an "Orchestra" style guitar depending on the manufacturer. The shifting of the waist provides different tones to stand out. The auditorium body shape is a newer body when compared to the other shapes such as dreadnought. Dreadnought – This is the classic guitar body shape. The style was designed by Martin Guitars to produce a deeper sound than "classic"-style guitars, with very resonant bass. The body is large and the waist of the guitar is not as pronounced as the auditorium and grand concert bodies. There are many Dreadnought variants produced, one of the most notable being the Gibson J-45. Jumbo – The largest standard guitar body shape found on acoustic guitars. Jumbo is bigger than an Auditorium but similarly proportioned, and is generally designed to provide a deep tone similar to a dreadnought's. It was designed by Gibson to compete with the dreadnought, but with maximum resonant space for greater volume and sustain. The foremost example of the style is the Gibson J-200, but like the dreadnought, most guitar manufacturers have at least one jumbo model. Playing techniques. The acoustic guitar is played in a variety of different genres and musical styles, with each featuring different playing techniques. Some of the most commonly used techniques are: Strumming. Strumming involves a rhythmic upward and downward motion of the picking hand (right if playing a right-handed guitar; left if playing a left-handed guitar) across the strings, while the opposite ("fretting") hand is in chord formation. This can be done with or without a guitar pick, depending on if the guitarist wants a crisp or more dull and blended sound, respectively. There are many common strumming patterns, which are played based on the specific time signature of a given song. Simple on-beat strumming is typically the first and least complex technique that guitarists learn. Guitarists can also alternate patterns or emphasize strums on specific beats to add rhythm, character, and unique style to a song. An example of a song featuring the strum technique is "Free Fallin'" by Tom Petty, where you hear full open chord strums. Fingerstyle. Fingerstyle, also known as fingerpicking, involves a patterned plucking of the strings with the picking hand. This technique focuses on playing specific notes in a melodic pattern, rather than full chord strums. Guitarists use their thumb, index, middle, and ring fingers, which are notated as "p" (as in pulgar), "i" (as in indice), "m" (as in medio), and "a" (as in annular), respectively, based on the Spanish language. This "PIMA" acronym in sheet music or tabs tells guitarists which picking hand finger to pluck a string with in a given picking pattern. When strings are plucked downward, this technique produces a clear and articulate sound that adds movement and melody to a song. A variation of fingerstyle is "percussive fingerstyle," where guitarists combine traditional fingerstyle with rhythmic taps or hits on the body of the guitar to imitate a percussion sound. An example of a song featuring the fingerstyle technique is "Landslide" by Fleetwood Mac, where you hear plucked moving notes rather than full strums. Slide. 
Slide guitar is a common technique that can be played on acoustic, steel acoustic, and/or electric guitars. It is primarily used in the blues, rock, and country genres. When playing with this technique, guitarists wear a small metal, glass, or plastic tube on one of their fretting hand fingers and slide it across the fretboard rather than pressing firmly on singular frets. The picking hand either strums or plucks as normal. This produces a smooth and blended transition between notes and chords, called glissando. An example of a song featuring the slide technique is "For Emma, Forever Ago" by Bon Iver, in which a seamless sliding melody over the song can be heard. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=6231032
623154
Assessment of kidney function
Ways of assessing the function of the kidneys Assessment of kidney function occurs in different ways, using the presence of symptoms and signs, as well as measurements using urine tests, blood tests, and medical imaging. Functions of a healthy kidney include maintaining a person's fluid balance, maintaining an acid-base balance; regulating electrolytes including sodium, potassium, and other electrolytes; clearing toxins; regulating blood pressure; and regulating hormones, such as erythropoietin; and activation of vitamin D. Description. The functions of the kidney include maintenance of acid-base balance; regulation of fluid balance; regulation of sodium, potassium, and other electrolytes; clearance of toxins; absorption of glucose, amino acids, and other small molecules; regulation of blood pressure; production of various hormones, such as erythropoietin; and activation of vitamin D. The Glomerular filtration rate (GFR) is regarded as the best overall measure of the kidney's ability to carry out these numerous functions. An estimate of the GFR is used clinically to determine the degree of kidney impairment and to track the progression of the disease. The GFR, however, does not reveal the source of the kidney disease. This is accomplished by urinalysis, measurement of urine protein excretion, kidney imaging, and, if necessary, kidney biopsy. Much of renal physiology is studied at the level of the nephron – the smallest functional unit of the kidney. Each nephron begins with a filtration component that filters the blood entering the kidney. This filtrate then flows along the length of the nephron, which is a tubular structure lined by a single layer of specialized cells and surrounded by capillaries. The major functions of these lining cells are the reabsorption of water and small molecules from the filtrate into the blood, and the secretion of wastes from the blood into the urine. Proper function of the kidney requires that it receives and adequately filters blood. This is performed at the microscopic level by many hundreds of thousands of filtration units called renal corpuscles, each of which is composed of a glomerulus and a Bowman's capsule. A global assessment of renal function is often ascertained by estimating the rate of filtration, called the glomerular filtration rate (GFR). Clinical assessment. Clinical assessment can be used to assess the function of the kidneys. This is because a person with abnormally functioning kidneys may have symptoms that develop. For example, a person with chronic kidney disease may develop oedema due to failure of the kidneys to regulate water balance. They may develop evidence of chronic kidney disease, that can be used to assess its severity, for example high blood pressure, osteoporosis or anaemia. If the kidneys are unable to excrete urea, a person may develop a widespread itch or confusion. Urine tests. Part of the assessment of kidney function includes the measurement of urine and its contents. Abnormal kidney function may cause too much or too little urine to be produced. The ability of the kidneys to filter protein is often measured, as urine albumin or urine protein levels, measured either at a single instance or, because of variation throughout the day, as 24-hour urine tests. Blood tests. Blood tests are also used to assess kidney function. 
These include tests that are intended to directly measure the function of the kidneys, as well as tests that assess the function of the kidneys by looking for evidence of problems associated with abnormal function. One of the measures of kidney function is the glomerular filtration rate (GFR). Other tests that can assess the function of the kidneys include assessment of electrolyte levels such as potassium and phosphate, assessment of acid-base status by the measurement of bicarbonate levels from a vein, and assessment of the full blood count for anaemia. Glomerular filtration rate. The glomerular filtration rate (GFR) describes the volume of fluid filtered from the renal (kidney) glomerular capillaries into the Bowman's capsule per unit time. Creatinine clearance (CCr) is the volume of blood plasma that is cleared of creatinine per unit time and is a useful measure for approximating the GFR. Creatinine clearance exceeds GFR due to creatinine secretion, which can be blocked by cimetidine. Both GFR and CCr may be accurately calculated by comparative measurements of substances in the blood and urine, or estimated by formulas using just a blood test result (eGFR and eCCr) The results of these tests are used to assess the excretory function of the kidneys. Staging of chronic kidney disease is based on categories of GFR as well as albuminuria and cause of kidney disease. Central to the physiologic maintenance of GFR is the differential basal tone of the afferent and efferent arterioles (see diagram). In other words, the filtration rate is dependent on the difference between the higher blood pressure created by vasoconstriction of the input or afferent arteriole versus the lower blood pressure created by lesser vasoconstriction of the output or efferent arteriole. GFR is equal to the renal clearance ratio when any solute is freely filtered and is neither reabsorbed nor secreted by the kidneys. The rate therefore measured is the quantity of the substance in the urine that originated from a calculable volume of blood. Relating this principle to the below equation – for the substance used, the product of urine concentration and urine flow equals the mass of substance excreted during the time that urine has been collected. This mass equals the mass filtered at the glomerulus as nothing is added or removed in the nephron. Dividing this mass by the plasma concentration gives the volume of plasma which the mass must have originally come from, and thus the volume of plasma fluid that has entered Bowman's capsule within the aforementioned period of time. The GFR is typically recorded in units of "volume per time", e.g., milliliters per minute (mL/min). Compare to filtration fraction. formula_0 There are several different techniques used to calculate or estimate the glomerular filtration rate (GFR or eGFR). The above formula only applies for GFR calculation when it is equal to the Clearance Rate. The normal range of GFR, adjusted for body surface area, is 100–130 average 125 (mL/min)/(1.73 m2) in men and 90–120 (mL/min)/(1.73 m2) in women younger than the age of 40. In children, GFR measured by inulin clearance is 110 (mL/min)/(1.73 m2) until 2 years of age in both sexes, and then it progressively decreases. After age 40, GFR decreases progressively with age, by 0.4–1.2 mL/min per year. 
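The clearance relation just given translates directly into a calculation. A minimal sketch (the variable names and sample values below are illustrative assumptions, not data from the article) estimates creatinine clearance from a timed urine collection:

```python
def clearance_ml_per_min(urine_conc, urine_flow_ml_per_min, plasma_conc):
    """Renal clearance = (urine concentration x urine flow) / plasma concentration.

    The two concentrations must be in the same unit (e.g. mg/dL); the result
    then carries the unit of the urine flow (mL/min).
    """
    return urine_conc * urine_flow_ml_per_min / plasma_conc

# Hypothetical creatinine measurements from a 24-hour urine collection
urine_creatinine = 125.0          # mg/dL
plasma_creatinine = 1.0           # mg/dL
urine_volume_ml = 1440.0          # total volume collected over 24 h
urine_flow = urine_volume_ml / (24 * 60)   # 1.0 mL/min

ccr = clearance_ml_per_min(urine_creatinine, urine_flow, plasma_creatinine)
print(f"creatinine clearance ~ {ccr:.0f} mL/min")   # ~125 mL/min, within the normal range quoted above
```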
Estimated GFR (eGFR) is now recommended by clinical practice guidelines and regulatory agencies for routine evaluation of GFR whereas measured GFR (mGFR) is recommended as a confirmatory test when more accurate assessment is required. Medical imaging. The kidney function can also be assessed with medical imaging. Some forms of imaging, such as kidney ultrasound or CT scans, may assess kidney function by indicating chronic disease that can impact function, by showing a small or shrivelled kidney.. Other tests, such as nuclear medicine tests, directly assess the function of the kidney by measuring the perfusion and excretion of radioactive substances through the kidneys. Kidney function in disease. A decreased renal function can be caused by many types of kidney disease. Upon presentation of decreased renal function, it is recommended to perform a history and physical examination, as well as performing a renal ultrasound and a urinalysis. The most relevant items in the history are medications, edema, nocturia, gross hematuria, family history of kidney disease, diabetes and polyuria. The most important items in a physical examination are signs of vasculitis, lupus erythematosus, diabetes, endocarditis and hypertension. A urinalysis is helpful even when not showing any pathology, as this finding suggests an extrarenal etiology. Proteinuria and/or urinary sediment usually indicates the presence of glomerular disease. Hematuria may be caused by glomerular disease or by a disease along the urinary tract. The most relevant assessments in a renal ultrasound are renal sizes, echogenicity and any signs of hydronephrosis. Renal enlargement usually indicates diabetic nephropathy, focal segmental glomerular sclerosis or myeloma. Renal atrophy suggests longstanding chronic renal disease. Chronic kidney disease stages. Risk factors for kidney disease include diabetes, high blood pressure, family history, older age, ethnic group and smoking. For most patients, a GFR over 60 (mL/min)/(1.73 m2) is adequate. But significant decline of the GFR from a previous test result can be an early indicator of kidney disease requiring medical intervention. The sooner kidney dysfunction is diagnosed and treated the greater odds of preserving remaining nephrons, and preventing the need for dialysis. The severity of chronic kidney disease (CKD) is described by six stages; the most severe three are defined by the MDRD-eGFR value, and first three also depend on whether there is other evidence of kidney disease (e.g., proteinuria): 0) Normal kidney function – GFR above 90 (mL/min)/(1.73 m2) and no proteinuria 1) CKD1 – GFR above 90 (mL/min)/(1.73 m2) with evidence of kidney damage 2) CKD2 (mild) – GFR of 60 to 89 (mL/min)/(1.73 m2) with evidence of kidney damage 3) CKD3 (moderate) – GFR of 30 to 59 (mL/min)/(1.73 m2) 4) CKD4 (severe) – GFR of 15 to 29 (mL/min)/(1.73 m2) 5) CKD5 kidney failure – GFR less than 15 (mL/min)/(1.73 m2) Some people add CKD5D for those stage 5 patients requiring dialysis; many patients in CKD5 are not yet on dialysis. Note: others add a "T" to patients who have had a transplant regardless of stage. Not all clinicians agree with the above classification, suggesting that it may mislabel patients with mildly reduced kidney function, especially the elderly, as having a disease. 
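The staging thresholds listed above amount to a simple lookup on the eGFR value, with evidence of kidney damage only mattering for the first stages. A minimal sketch of that rule (the function and its labels are illustrative, not a clinical tool):

```python
def ckd_stage(egfr, kidney_damage=False):
    """Map an eGFR in (mL/min)/(1.73 m2) to the stage labels listed above.

    `kidney_damage` stands for other evidence of kidney disease (e.g. proteinuria);
    as in the text, it only changes the answer when the GFR is 60 or above.
    """
    if egfr < 15:
        return "CKD5 (kidney failure)"
    if egfr < 30:
        return "CKD4 (severe)"
    if egfr < 60:
        return "CKD3 (moderate)"
    if egfr < 90:
        return "CKD2 (mild)" if kidney_damage else "GFR 60-89 without evidence of damage (no CKD stage)"
    return "CKD1" if kidney_damage else "normal kidney function"

print(ckd_stage(72, kidney_damage=True))   # CKD2 (mild)
print(ckd_stage(95))                       # normal kidney function
```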
A conference was held in 2009 regarding these controversies by Kidney Disease: Improving Global Outcomes (KDIGO) on CKD: Definition, Classification and Prognosis, gathering data on CKD prognosis to refine the definition and staging of CKD. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "GFR = \\frac { \\mbox{Urine Concentration} \\times \\mbox{Urine Flow} }{ \\mbox{Plasma Concentration} }" } ]
https://en.wikipedia.org/wiki?curid=623154
62319195
Fuhrmann triangle
Special triangle based on arbitrary triangle The Fuhrmann triangle, named after Wilhelm Fuhrmann (1833–1904), is a special triangle based on a given arbitrary triangle. For a given triangle formula_2 and its circumcircle, the midpoints of the arcs over the triangle sides are denoted by formula_0. Those midpoints are reflected across the associated triangle sides, yielding the points formula_3, which form the "Fuhrmann triangle". The circumcircle of the Fuhrmann triangle is the Fuhrmann circle. Furthermore, the Fuhrmann triangle is similar to the triangle formed by the mid-arc points, that is formula_1. For the area of the Fuhrmann triangle the following formula holds: formula_4 where formula_5 denotes the circumcenter of the given triangle formula_2 and formula_6 its radius, and formula_7 denotes the incenter and formula_8 its radius. Due to Euler's theorem one also has formula_9. The following equations hold for the sides of the Fuhrmann triangle: formula_10 formula_11 formula_12 where formula_13 denote the sides of the given triangle formula_2 and formula_14 the sides of the Fuhrmann triangle (see drawing).
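The construction and the area formula above can be checked numerically. The sketch below (plain Python with complex-number coordinates; the test triangle is an arbitrary choice) builds the Fuhrmann triangle of a given triangle and compares its area with the two expressions given above:

```python
def circumcenter(z1, z2, z3):
    num = abs(z1)**2*(z2 - z3) + abs(z2)**2*(z3 - z1) + abs(z3)**2*(z1 - z2)
    den = z1.conjugate()*(z2 - z3) + z2.conjugate()*(z3 - z1) + z3.conjugate()*(z1 - z2)
    return num / den

def reflect(p, u, v):
    """Reflect the point p across the line through u and v (complex coordinates)."""
    d = (v - u) / abs(v - u)
    return u + d * ((p - u) / d).conjugate()

def area(z1, z2, z3):
    return abs(((z2 - z1).conjugate() * (z3 - z1)).imag) / 2

def side(p, u, v):
    """Sign telling on which side of the line through u and v the point p lies."""
    return ((v - u).conjugate() * (p - u)).imag

A, B, C = complex(0, 0), complex(5, 0), complex(1.5, 3.5)   # arbitrary test triangle
a, b, c = abs(B - C), abs(C - A), abs(A - B)

O = circumcenter(A, B, C)
R = abs(A - O)
s = (a + b + c) / 2
r = area(A, B, C) / s                       # inradius
I = (a*A + b*B + c*C) / (a + b + c)         # incenter

def arc_midpoint(P, U, V):
    """Midpoint of the circumcircle arc UV not containing the vertex P."""
    n = 1j * (V - U) / abs(V - U)           # unit normal to the side UV
    for cand in (O + R*n, O - R*n):         # the two arc midpoints over UV
        if side(cand, U, V) * side(P, U, V) < 0:
            return cand

Ma, Mb, Mc = arc_midpoint(A, B, C), arc_midpoint(B, C, A), arc_midpoint(C, A, B)
Fa, Fb, Fc = reflect(Ma, B, C), reflect(Mb, C, A), reflect(Mc, A, B)   # Fuhrmann triangle

print(area(Fa, Fb, Fc))                         # area of the Fuhrmann triangle
print((a + b + c) * abs(O - I)**2 / (4 * R))    # (a+b+c)|OI|^2 / (4R)
print((a + b + c) * (R - 2 * r) / 4)            # (a+b+c)(R-2r)/4 -- all three agree
```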
[ { "math_id": 0, "text": "M_a, M_b, M_c " }, { "math_id": 1, "text": "\\triangle M^\\prime_c M^\\prime_b M^\\prime_a \\sim \\triangle M_a M_b M_c " }, { "math_id": 2, "text": "\\triangle ABC" }, { "math_id": 3, "text": "M^\\prime_a, M^\\prime_b, M^\\prime_c " }, { "math_id": 4, "text": "|\\triangle M^\\prime_c M^\\prime_b M^\\prime_a| = \\frac{(a+b+c)|OI|^2}{4R}=\\frac{(a+b+c)(R-2r)}{4}" }, { "math_id": 5, "text": "O" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "|OI|^2=R(R-2r)" }, { "math_id": 10, "text": "a^\\prime=\\sqrt{\\frac{(-a+b+c)(a+b+c)}{bc}}|OI|" }, { "math_id": 11, "text": "b^\\prime=\\sqrt{\\frac{(a-b+c)(a+b+c)}{ac}}|OI|" }, { "math_id": 12, "text": "c^\\prime=\\sqrt{\\frac{(a+b-c)(a+b+c)}{ab}}|OI|" }, { "math_id": 13, "text": "a, b, c" }, { "math_id": 14, "text": "a^\\prime, b^\\prime, c^\\prime" } ]
https://en.wikipedia.org/wiki?curid=62319195
62325228
Milnor–Wood inequality
In mathematics, more specifically in differential geometry and geometric topology, the Milnor–Wood inequality is an obstruction to endow circle bundles over surfaces with a flat structure. It is named after John Milnor and John W. Wood. Flat bundles. For linear bundles, flatness is defined as the vanishing of the curvature form of an associated connection. An arbitrary smooth (or topological) "d"-dimensional fiber bundle is flat if it can be endowed with a foliation of codimension d that is transverse to the fibers. The inequality. The Milnor–Wood inequality is named after two separate results that were proven by John Milnor and John W. Wood. Both of them deal with orientable circle bundles over a closed oriented surface formula_0 of positive genus "g". Theorem (Milnor, 1958) Let formula_1 be a flat oriented linear circle bundle. Then the Euler number of the bundle satisfies formula_2. Theorem (Wood, 1971) Let formula_3 be a flat oriented topological circle bundle. Then the Euler number of the bundle satisfies formula_4. Wood's theorem implies Milnor's older result, as the homomorphism formula_5 classifying the linear flat circle bundle gives rise to a topological circle bundle via the 2-fold covering map formula_6, doubling the Euler number. Either of these two statements can be meant by referring to the Milnor–Wood inequality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\Sigma_g " }, { "math_id": 1, "text": "\\pi\\colon E \\to \\Sigma_g " }, { "math_id": 2, "text": " |e(\\pi)| \\leq g -1" }, { "math_id": 3, "text": " \\pi\\colon E \\to \\Sigma_g " }, { "math_id": 4, "text": " |e(\\pi)| \\leq 2g -2" }, { "math_id": 5, "text": "\\pi_1(\\Sigma)\\to SL(2,\\R)" }, { "math_id": 6, "text": "SL(2,\\R)\\to PSL(2,\\R) \\subset \\operatorname{Homeo}^+(S^1)" } ]
https://en.wikipedia.org/wiki?curid=62325228
6232594
Share capital
Portion of a company's equity A corporation's share capital, commonly referred to as capital stock in the United States, is the portion of a corporation's equity that has been derived by the issue of shares in the corporation to a shareholder, usually for cash. "Share capital" may also denote the number and types of shares that compose a corporation's share structure. Definition. In accounting, the share capital of a corporation is the nominal value of issued shares (that is, the sum of their par values, sometimes indicated on share certificates). If the allocation price of shares is greater than the par value, as in a rights issue, the shares are said to be sold at a premium (variously called share premium, additional paid-in capital or paid-in capital in excess of par). This equation shows the constituents that make up a company's real share capital: formula_0 This is differentiated from share capital in the accounting sense, as it presents nominal share capital and does not take the premium value of shares into account, which instead is reported as additional paid-in capital. Legal capital. Legal capital is a concept used in European corporate and foundation law, United Kingdom company law, and various other corporate law jurisdictions to refer to the sum of assets contributed to a company by shareholders when they are issued shares. The law often requires that this capital is maintained, and that dividends are not paid when a company is not showing a profit above the level of historically recorded legal capital. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
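As a concrete illustration of the relation above (the figures are hypothetical), the nominal and premium components of an issue can be computed separately, matching the accounting split described above:

```python
def issue_proceeds(shares_issued, par_value, premium_per_share):
    """Split the proceeds of a share issue into nominal capital and share premium."""
    nominal = shares_issued * par_value
    premium = shares_issued * premium_per_share
    return nominal, premium

# Hypothetical issue: 10,000 shares with a par value of 1.00, sold at 2.50 each
nominal, premium = issue_proceeds(10_000, 1.00, 1.50)
print(nominal)            # 10000.0 -> reported as (nominal) share capital
print(premium)            # 15000.0 -> reported as share premium / additional paid-in capital
print(nominal + premium)  # 25000.0 -> total consideration received
```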
[ { "math_id": 0, "text": "\\text{Share capital} = \\sum \\text{Number of shares issued} \\times (\\text{Par value} + \\text{Share premium})" } ]
https://en.wikipedia.org/wiki?curid=6232594
6233
Connected space
Topological space that is connected In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that are used to distinguish topological spaces. A subset of a topological space formula_0 is a &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;connected set if it is a connected space when viewed as a subspace of formula_0. Some related but stronger conditions are path connected, simply connected, and formula_1-connected. Another related notion is "locally connected", which neither implies nor follows from connectedness. Formal definition. A topological space formula_0 is said to be &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;disconnected if it is the union of two disjoint non-empty open sets. Otherwise, formula_0 is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. For a topological space formula_0 the following conditions are equivalent: Historically this modern formulation of the notion of connectedness (in terms of no partition of formula_0 into two separated sets) first appeared (independently) with N.J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century. See for details. Connected components. Given some point formula_3 in a topological space formula_4 the union of any collection of connected subsets such that each contained formula_3 will once again be a connected subset. The connected component of a point formula_3 in formula_0 is the union of all connected subsets of formula_0 that contain formula_5 it is the unique largest (with respect to formula_6) connected subset of formula_0 that contains formula_7 The maximal connected subsets (ordered by inclusion formula_6) of a non-empty topological space are called the connected components of the space. The components of any topological space formula_0 form a partition of formula_0: they are disjoint, non-empty and their union is the whole space. Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers formula_8 are in different components. Take an irrational number formula_9 and then set formula_10 and formula_11 Then formula_12 is a separation of formula_13 and formula_14. Thus each component is a one-point set. Let formula_15 be the connected component of formula_3 in a topological space formula_4 and formula_16 be the intersection of all clopen sets containing formula_3 (called quasi-component of formula_7) Then formula_17 where the equality holds if formula_0 is compact Hausdorff or locally connected. Disconnected spaces. A space in which all components are one-point sets is called &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;totally disconnected. 
Related to this property, a space formula_0 is called &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;totally separated if, for any two distinct elements formula_3 and formula_18 of formula_0, there exist disjoint open sets formula_19 containing formula_3 and formula_20 containing formula_18 such that formula_0 is the union of formula_19 and formula_20. Clearly, any totally separated space is totally disconnected, but the converse does not hold. For example take two copies of the rational numbers formula_21, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff. Examples. An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space. Path connectedness. A &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point formula_3 to a point formula_18 in a topological space formula_0 is a continuous function formula_40 from the unit interval formula_41 to formula_0 with formula_42 and formula_43. A &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;path-component of formula_0 is an equivalence class of formula_0 under the equivalence relation which makes formula_3 equivalent to formula_18 if there is a path from formula_3 to formula_18. The space formula_0 is said to be path-connected (or pathwise connected or formula_44-connected) if there is exactly one path-component. For non-empty spaces, this is equivalent to the statement that there is a path joining any two points in formula_0. Again, many authors exclude the empty space. Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line formula_45 and the topologist's sine curve. Subsets of the real line formula_31 are connected if and only if they are path-connected; these subsets are the intervals and rays of formula_31. Also, open subsets of formula_29 or formula_46 are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. Arc connectedness. A space formula_0 is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding formula_47. An arc-component of formula_0 is a maximal arc-connected subset of formula_0; or equivalently an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable. Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a formula_48-Hausdorff space, which is a space where each image of a path is closed. 
An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of formula_49 can be connected by a path but not by an arc. Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let formula_0 be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces: Local connectedness. A topological space is said to be locally connected at a point formula_3 if every neighbourhood of formula_3 contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space formula_0 is locally connected if and only if every component of every open set of formula_0 is open. Similarly, a topological space is said to be &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;locally path-connected if it has a base of path-connected sets. An open subset of a locally path-connected space is connected if and only if it is path-connected. This generalizes the earlier statement about formula_29 and formula_46, each of which is locally path-connected. More generally, any topological manifold is locally path-connected. Locally connected does not imply connected, nor does locally path-connected imply path connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in formula_31, such as formula_51. A classical example of a connected space that is not locally connected is the so called topologist's sine curve, defined as formula_52, with the Euclidean topology induced by inclusion in formula_53. Set operations. The intersection of connected sets is not necessarily connected. The union of connected sets is not necessarily connected, as can be seen by considering formula_54. Each ellipse is a connected set, but the union is not connected, since it can be partitioned to two disjoint open sets formula_19 and formula_20. This means that, if the union formula_0 is disconnected, then the collection formula_55 can be partitioned to two sub-collections, such that the unions of the sub-collections are disjoint and open in formula_0 (see picture). This implies that in several cases, a union of connected sets is necessarily connected. In particular: The set difference of connected sets is not necessarily connected. However, if formula_63 and their difference formula_64 is disconnected (and thus can be written as a union of two open sets formula_65 and formula_66), then the union of formula_67 with each such component is connected (i.e. formula_68 is connected for all formula_69). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof By contradiction, suppose formula_70 is not connected. So it can be written as the union of two disjoint open sets, e.g. formula_71. Because formula_67 is connected, it must be entirely contained in one of these components, say formula_72, and thus formula_73 is contained in formula_65. Now we know that: formula_74 The two sets in the last union are disjoint and open in formula_0, so there is a separation of formula_0, contradicting the fact that formula_0 is connected. Graphs. Graphs have path connected subsets, namely those subsets for which every pair of points has a path of edges joining them. But it is not always possible to find a topology on the set of points which induces the same connected sets. 
The 5-cycle graph (and any formula_1-cycle with formula_77 odd) is one such example. As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets . Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs. However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space. Stronger forms of connectedness. There are stronger forms of connectedness for topological spaces, for instance: In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\{ 0, 1 \\}" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "X," }, { "math_id": 5, "text": "x;" }, { "math_id": 6, "text": "\\subseteq" }, { "math_id": 7, "text": "x." }, { "math_id": 8, "text": "q_1<q_2" }, { "math_id": 9, "text": "q_1 < r < q_2," }, { "math_id": 10, "text": "A = \\{q \\in \\Q : q < r\\}" }, { "math_id": 11, "text": "B = \\{q \\in \\Q : q > r\\}." }, { "math_id": 12, "text": "(A,B)" }, { "math_id": 13, "text": "\\Q," }, { "math_id": 14, "text": "q_1 \\in A, q_2 \\in B" }, { "math_id": 15, "text": "\\Gamma_x" }, { "math_id": 16, "text": "\\Gamma_x'" }, { "math_id": 17, "text": "\\Gamma_x \\subset \\Gamma'_x" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "U" }, { "math_id": 20, "text": "V" }, { "math_id": 21, "text": "\\Q" }, { "math_id": 22, "text": "[0, 2)" }, { "math_id": 23, "text": "[0, 1)" }, { "math_id": 24, "text": "[1, 2)," }, { "math_id": 25, "text": "[0, 2)." }, { "math_id": 26, "text": "(1, 2]" }, { "math_id": 27, "text": "[0, 1) \\cup (1, 2]." }, { "math_id": 28, "text": "(0, 1) \\cup \\{ 3 \\}" }, { "math_id": 29, "text": "\\R^n" }, { "math_id": 30, "text": "(0, 0)," }, { "math_id": 31, "text": "\\R" }, { "math_id": 32, "text": "\\mathbb{R}" }, { "math_id": 33, "text": "n \\geq 2," }, { "math_id": 34, "text": "n\\geq 3" }, { "math_id": 35, "text": "\\Complex" }, { "math_id": 36, "text": "\\operatorname{GL}(n, \\R)" }, { "math_id": 37, "text": "\\operatorname{GL}(n, \\Complex)" }, { "math_id": 38, "text": "R" }, { "math_id": 39, "text": "\\ne 0, 1" }, { "math_id": 40, "text": "f" }, { "math_id": 41, "text": "[0,1]" }, { "math_id": 42, "text": "f(0)=x" }, { "math_id": 43, "text": "f(1)=y" }, { "math_id": 44, "text": "\\mathbf{0}" }, { "math_id": 45, "text": "L^*" }, { "math_id": 46, "text": "\\C^n" }, { "math_id": 47, "text": "f : [0, 1] \\to X" }, { "math_id": 48, "text": "\\Delta" }, { "math_id": 49, "text": "0" }, { "math_id": 50, "text": "X \\times \\mathbb{R}" }, { "math_id": 51, "text": "(0,1) \\cup (2,3)" }, { "math_id": 52, "text": "T = \\{(0,0)\\} \\cup \\left\\{ \\left(x, \\sin\\left(\\tfrac{1}{x}\\right)\\right) : x \\in (0, 1] \\right\\}" }, { "math_id": 53, "text": "\\R^2" }, { "math_id": 54, "text": "X=(0,1) \\cup (1,2)" }, { "math_id": 55, "text": "\\{X_i\\}" }, { "math_id": 56, "text": " \\bigcap X_i \\neq \\emptyset" }, { "math_id": 57, "text": "\\forall i,j: X_i \\cap X_j \\neq \\emptyset" }, { "math_id": 58, "text": "\\forall i: X_i \\cap X_{i+1} \\neq \\emptyset" }, { "math_id": 59, "text": "X / \\{X_i\\}" }, { "math_id": 60, "text": "U \\cup V" }, { "math_id": 61, "text": "q(U) \\cup q(V)" }, { "math_id": 62, "text": "q(U), q(V)" }, { "math_id": 63, "text": "X \\supseteq Y" }, { "math_id": 64, "text": "X \\setminus Y" }, { "math_id": 65, "text": "X_1" }, { "math_id": 66, "text": "X_2" }, { "math_id": 67, "text": "Y" }, { "math_id": 68, "text": "Y \\cup X_{i}" }, { "math_id": 69, "text": "i" }, { "math_id": 70, "text": "Y \\cup X_{1}" }, { "math_id": 71, "text": "Y \\cup X_{1}=Z_{1} \\cup Z_{2}" }, { "math_id": 72, "text": "Z_1" }, { "math_id": 73, "text": "Z_2" }, { "math_id": 74, "text": "X=\\left(Y \\cup X_{1}\\right) \\cup X_{2}=\\left(Z_{1} \\cup Z_{2}\\right) \\cup X_{2}=\\left(Z_{1} \\cup X_{2}\\right) \\cup\\left(Z_{2} \\cap X_{1}\\right)" }, { "math_id": 75, "text": "f:X\\rightarrow Y" }, { "math_id": 76, "text": "f(X)" }, { "math_id": 77, "text": "n>3" } ]
https://en.wikipedia.org/wiki?curid=6233
62331378
Markov constant
Property of an irrational number In number theory, specifically in Diophantine approximation theory, the Markov constant formula_1 of an irrational number formula_2 is the factor for which Dirichlet's approximation theorem can be improved for formula_2. History and motivation. Certain numbers can be approximated well by certain rationals; specifically, the convergents of the continued fraction are the best approximations by rational numbers having denominators less than a certain bound. For example, the approximation formula_3 is the best rational approximation among rational numbers with denominator up to 56. Also, some numbers can be approximated more readily than others. Dirichlet proved in 1840 that the least readily approximable numbers are the rational numbers, in the sense that for every irrational number there exist infinitely many rational numbers approximating it to a certain degree of accuracy, whereas only finitely many such rational approximations exist for any rational number. Specifically, he proved that for any number formula_2 there are infinitely many pairs of relatively prime numbers formula_4 such that formula_5 if and only if formula_2 is irrational. 51 years later, Hurwitz further improved Dirichlet's approximation theorem by a factor of √5, improving the right-hand side from formula_6 to formula_7 for irrational numbers: formula_8 The above result is best possible since the golden ratio formula_0 is irrational, but if we replace √5 by any larger number in the above expression then we will only be able to find finitely many rational numbers that satisfy the inequality for formula_9. Furthermore, he showed that among the irrational numbers, the least readily approximable numbers are those of the form formula_10 where formula_0 is the golden ratio, formula_11 and formula_12. (These numbers are said to be "equivalent" to formula_0.) If we omit these numbers, just as we omitted the rational numbers in Dirichlet's theorem, then we "can" increase the number √5 to 2√2. Again this new bound is best possible in the new setting, but this time the number √2, and numbers equivalent to it, limits the bound. If we don't allow those numbers then we "can" again increase the number on the right hand side of the inequality from 2√2 to √221/5, for which the numbers equivalent to formula_13 limit the bound. The numbers generated show how well these numbers can be approximated; this can be seen as a property of the real numbers. However, instead of considering Hurwitz's theorem (and the extensions mentioned above) as a property of the real numbers except certain special numbers, we can consider it as a property of each excluded number. Thus, the theorem can be interpreted as "numbers equivalent to formula_0, √2 or formula_13 are among the least readily approximable irrational numbers." This leads us to consider how accurately each number can be approximated by rationals; specifically, by how much the factor in Dirichlet's approximation theorem can be increased from 1 for "that specific" number. Definition. Mathematically, the Markov constant of an irrational number formula_2 is defined as formula_14. If the set does not have an upper bound, we define formula_15. Alternatively, it can be defined as formula_16 where formula_17 is defined as the closest integer to formula_18. Properties and results. Hurwitz's theorem implies that formula_19 for all formula_20. If formula_21 is the continued fraction expansion of formula_2, then formula_22. From the above, if formula_23 then formula_24. 
This implies that formula_15 if and only if formula_25 is not bounded. In particular, formula_26 if formula_2 is a quadratic irrationality. In fact, the lower bound for formula_1 can be strengthened to formula_27, the tightest possible. The values of formula_2 for which formula_28 are families of quadratic irrationalities having the same period (but at different offsets), and the values of formula_1 for these formula_2 are limited to Lagrange numbers. There are uncountably many numbers for which formula_29, no two of which have the same ending; for instance, for each number formula_30 where formula_31, formula_29. If formula_32 where formula_33 then formula_34. In particular, if formula_35 then formula_36. The set formula_37 forms the Lagrange spectrum. It contains the interval formula_38 where F is Freiman's constant. Hence, if formula_39 then there exists an irrational formula_2 whose Markov constant is formula_40. Numbers having a Markov constant less than 3. Burger et al. (2002) provide a formula for the quadratic irrationality formula_41 whose Markov constant is the nth Lagrange number: formula_42 where formula_43 is the nth Markov number, and u is the smallest positive integer such that formula_44. Nicholls (1978) provides a geometric proof of this (based on circles tangent to each other), giving a method by which these numbers can be systematically found. Examples. A plot of "y"("k") = 1/("k"^2 |√10/2 − "f"("k")/"k"|) against log("k") (the natural log of "k"), where "f"("x") is the nearest integer to "x", demonstrates that √10/2 has Markov constant √10, as stated in the example below: the dots at the top, at "k" = 2, 12, 74 and 456 (x-axis values 0.7, 2.5, 4.3 and 6.1), are the points for which the limit superior is approached. Markov constant of two numbers. Since formula_45, formula_46 Also, formula_47, because the continued fraction representation of "e" is unbounded. Numbers αn having Markov constant less than 3. Consider formula_48; then formula_49. By trial and error it can be found that formula_50. Then formula_51 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
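The alternative definition above, as a limit superior of the quantity 1/("k"^2 |α − "f"("k")/"k"|), lends itself to a direct numerical experiment. A minimal sketch (floating-point only, so only the first few peaks are reliable) reproduces the behaviour described for √10/2:

```python
from math import sqrt

def quality(alpha, k):
    """1/(k^2 |alpha - f(k)/k|), where f(k) is the nearest integer to alpha*k.
    The Markov constant of alpha is the limit superior of this quantity over k."""
    err = abs(alpha - round(alpha * k) / k)
    return float("inf") if err == 0 else 1.0 / (k * k * err)

alpha = sqrt(10) / 2
best = sorted(((quality(alpha, k), k) for k in range(1, 501)), reverse=True)[:4]
for value, k in best:
    print(k, round(value, 4))
# The four largest values occur at k = 2, 12, 74 and 456; the later ones approach
# sqrt(10) = 3.1623..., in line with M(sqrt(10)/2) = sqrt(10) as computed above.
```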
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "M(\\alpha)" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\pi\\approx\\frac{22}{7}" }, { "math_id": 4, "text": "(p,q)" }, { "math_id": 5, "text": "\\left|\\alpha - \\frac{p}{q}\\right| < \\frac{1}{q^2}" }, { "math_id": 6, "text": "1/q^2" }, { "math_id": 7, "text": "1/\\sqrt{5}q^2" }, { "math_id": 8, "text": "\\left|\\alpha - \\frac{p}{q}\\right| < \\frac{1}{\\sqrt{5}q^2}." }, { "math_id": 9, "text": "\\alpha=\\phi" }, { "math_id": 10, "text": "\\frac{a\\phi+b}{c\\phi+d}" }, { "math_id": 11, "text": "a,b,c,d\\in\\Z" }, { "math_id": 12, "text": "ad-bc=\\pm1" }, { "math_id": 13, "text": "\\frac{1+\\sqrt{221}}{10}" }, { "math_id": 14, "text": "M(\\alpha)=\\sup \\left\\{\\lambda\\in\\R|\\left\\vert \\alpha - \\frac{p}{q} \\right\\vert<\\frac{1}{\\lambda q^2} \\text{ has infinitely many solutions for }p,q\\in\\N \\right\\}" }, { "math_id": 15, "text": "M(\\alpha)=\\infty" }, { "math_id": 16, "text": "\\limsup_{k\\to\\infty}\\frac{1}{k^2\\left\\vert \\alpha-\\frac{f(k)}{k} \\right\\vert}" }, { "math_id": 17, "text": "f(k)" }, { "math_id": 18, "text": "\\alpha k" }, { "math_id": 19, "text": "M(\\alpha)\\ge\\sqrt{5}" }, { "math_id": 20, "text": "\\alpha\\in\\R-\\Q" }, { "math_id": 21, "text": "\\alpha = [a_0; a_1, a_2, ...]" }, { "math_id": 22, "text": "M(\\alpha)=\\limsup_{k\\to\\infty}{([a_{k+1}; a_{k+2}, a_{k+3}, ...] + [0; a_{k}, a_{k-1}, ...,a_2,a_1])}" }, { "math_id": 23, "text": "p=\\limsup_{k\\to\\infty}{a_k}" }, { "math_id": 24, "text": "p<M(\\alpha)<p+2" }, { "math_id": 25, "text": "(a_k)" }, { "math_id": 26, "text": "M(\\alpha)<\\infty" }, { "math_id": 27, "text": "M(\\alpha)\\ge\\sqrt{p^2+4}" }, { "math_id": 28, "text": "M(\\alpha)<3" }, { "math_id": 29, "text": "M(\\alpha)=3" }, { "math_id": 30, "text": "\\alpha = [\\underbrace{1;1,...,1}_{r_1},2,2,\\underbrace{1,1,...,1}_{r_2},2,2,\\underbrace{1,1,...,1}_{r_3},2,2,...]" }, { "math_id": 31, "text": "r_1<r_2<r_3<\\cdots" }, { "math_id": 32, "text": "\\beta=\\frac{p\\alpha+q}{r\\alpha+s}" }, { "math_id": 33, "text": "p,q,r,s\\in\\Z" }, { "math_id": 34, "text": "M(\\beta)\\ge\\frac{M(\\alpha)}{\\left\\vert ps-rq \\right\\vert}" }, { "math_id": 35, "text": "\\left\\vert ps-rq \\right\\vert=1 " }, { "math_id": 36, "text": "M(\\beta)=M(\\alpha) " }, { "math_id": 37, "text": "L=\\{M(\\alpha)|\\alpha\\in\\R-\\Q\\}" }, { "math_id": 38, "text": "[F,\\infty]" }, { "math_id": 39, "text": "m>F\\approx4.52783" }, { "math_id": 40, "text": "m" }, { "math_id": 41, "text": "\\alpha_n\n" }, { "math_id": 42, "text": "\\alpha_n=\\frac{2u-3m_n+\\sqrt{9m_n^2-4}}{2m_n}" }, { "math_id": 43, "text": "m_n" }, { "math_id": 44, "text": "m_n\\mid u^2+1" }, { "math_id": 45, "text": "\\frac{\\sqrt{10}}{2}=[1;\\overline{1,1,2}]" }, { "math_id": 46, "text": "\\begin{align} M\\left ( \\frac{\\sqrt{10}}{2} \\right ) \n& = \\max([1;\\overline{2,1,1}]+[0;\\overline{1,2,1}],[1;\\overline{1,2,1}]+[0;\\overline{2,1,1}],[2;\\overline{1,1,2}]+[0;\\overline{1,1,2}]) \\\\ \n& = \\max\\left ( \\frac{2\\sqrt{10}}{3},\\frac{2\\sqrt{10}}{3},\\sqrt{10} \\right ) \\\\ \n& = \\sqrt{10}. 
\\end{align} " }, { "math_id": 47, "text": "\n e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, \\ldots, 1, 2n, 1, \\ldots], M(e)=\\infty\n" }, { "math_id": 48, "text": "n=6" }, { "math_id": 49, "text": "m_n=34" }, { "math_id": 50, "text": "u=13" }, { "math_id": 51, "text": "\\begin{align} \\alpha_6 & = \\frac{2u-3m_6+\\sqrt{9m_6^2-4}}{2m_6} \\\\[6pt]\n& = \\frac{-76+\\sqrt{10400}}{68} \\\\[6pt]\n&= \\frac{-19+5\\sqrt{26}}{17} \\\\[6pt]\n&=[0;\\overline{2,1,1,1,1,1,1,2}]. \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=62331378
6233828
Cumulative prospect theory
Cumulative prospect theory (CPT) is a model for descriptive decisions under risk and uncertainty which was introduced by Amos Tversky and Daniel Kahneman in 1992 (Tversky, Kahneman, 1992). It is a further development and variant of prospect theory. The difference between this version and the original version of prospect theory is that weighting is applied to the cumulative probability distribution function, as in rank-dependent expected utility theory but not applied to the probabilities of individual outcomes. In 2002, Daniel Kahneman received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel for his contributions to behavioral economics, in particular the development of Cumulative Prospect Theory (CPT). Outline of the model. The main observation of CPT (and its predecessor prospect theory) is that people tend to think of possible outcomes usually relative to a certain reference point (often the status quo) rather than to the final status, a phenomenon which is called framing effect. Moreover, they have different risk attitudes towards gains (i.e. outcomes above the reference point) and losses (i.e. outcomes below the reference point) and care generally more about potential losses than potential gains (loss aversion). Finally, people tend to overweight extreme events, but underweight "average" events. The last point is in contrast to Prospect Theory which assumes that people overweight unlikely events, independently of their relative outcomes. CPT incorporates these observations in a modification of expected utility theory by replacing final wealth with payoffs relative to the reference point, replacing the utility function with a value function that depends on relative payoff, and replacing cumulative probabilities with weighted cumulative probabilities. In the general case, this leads to the following formula for subjective utility of a risky outcome described by probability measure formula_0: formula_1 where formula_2 is the value function (typical form shown in Figure 1), formula_3 is the weighting function (as sketched in Figure 2) and formula_4, i.e. the integral of the probability measure over all values up to formula_5, is the cumulative probability. This generalizes the original formulation by Tversky and Kahneman from finitely many distinct outcomes to infinite (i.e., continuous) outcomes. Differences from prospect theory. The main modification to prospect theory is that, as in rank-dependent expected utility theory, cumulative probabilities are transformed, rather than the probabilities themselves. This leads to the aforementioned overweighting of extreme events which occur with small probability, rather than to an overweighting of all small probability events. The modification helps to avoid a violation of first order stochastic dominance and makes the generalization to arbitrary outcome distributions easier. CPT is, therefore, an improvement over Prospect Theory on theoretical grounds. Applications. Cumulative prospect theory has been applied to a diverse range of situations which appear inconsistent with standard economic rationality, in particular the equity premium puzzle, the asset allocation puzzle, the status quo bias, various gambling and betting puzzles, intertemporal consumption and the endowment effect. Parameters for cumulative prospect theory have been estimated for a large number of countries, demonstrating the broad validity of the theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
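For a lottery with finitely many outcomes, the integral above reduces to a sum in which each outcome receives a decision weight formed from differences of weighted cumulative probabilities. The sketch below assumes the power value function and probability-weighting function commonly associated with Tversky and Kahneman's 1992 paper, with their median parameter estimates (α = β = 0.88, λ = 2.25, γ = 0.61 for gains and 0.69 for losses); these functional forms and numbers are assumptions of the illustration, not statements from this article:

```python
ALPHA, BETA, LAMBDA_ = 0.88, 0.88, 2.25      # value-function parameters (assumed)
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69          # weighting-function parameters (assumed)

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x**ALPHA if x >= 0 else -LAMBDA_ * (-x)**BETA

def weight(p, gamma):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(lottery):
    """CPT value of a finite lottery given as [(outcome, probability), ...].
    Decision weights are built cumulatively from the extreme outcomes inwards."""
    gains = sorted((o, p) for o, p in lottery if o >= 0)
    losses = sorted((o, p) for o, p in lottery if o < 0)
    total, tail = 0.0, 0.0
    for o, p in reversed(gains):             # best gain first
        total += (weight(tail + p, GAMMA_GAIN) - weight(tail, GAMMA_GAIN)) * value(o)
        tail += p
    tail = 0.0
    for o, p in losses:                      # worst loss first
        total += (weight(tail + p, GAMMA_LOSS) - weight(tail, GAMMA_LOSS)) * value(o)
        tail += p
    return total

# A 50/50 gamble between winning and losing 100: loss aversion makes its CPT value negative.
print(cpt_value([(100, 0.5), (-100, 0.5)]))   # roughly -34.6
```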
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "U(p):=\\int_{-\\infty}^0 v(x)\\frac{d}{dx}(w(F(x)))\\,dx+\\int_0^{+\\infty} v(x)\\frac{d}{dx}(-w(1-F(x)))\\,dx," }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "F(x):=\\int_{-\\infty}^x\\,dp" }, { "math_id": 5, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=6233828
62338906
Categorical trace
In category theory, a branch of mathematics, the categorical trace is a generalization of the trace of a matrix. Definition. The trace is defined in the context of a symmetric monoidal category "C", i.e., a category equipped with a suitable notion of a product formula_0. (The notation reflects that the product is, in many cases, a kind of a tensor product.) An object "X" in such a category "C" is called dualizable if there is another object formula_1 playing the role of a dual object of "X". In this situation, the trace of a morphism formula_2 is defined as the composition of the following morphisms: formula_3 where 1 is the monoidal unit and the extremal morphisms are the coevaluation and evaluation, which are part of the definition of dualizable objects. The same definition applies, to great effect, also when "C" is a symmetric monoidal ∞-category. Examples. If "C" is the category of finite-dimensional vector spaces over a field "k", with the usual tensor product, then every object is dualizable and the trace of an endomorphism formula_2 is the map formula_4 which is the multiplication by the trace of the endomorphism "f" in the usual sense of linear algebra. If "C" is instead a category of graded vector spaces (or chain complexes) with the Koszul sign convention, the trace of the identity of a dualizable object "V" computes the Euler characteristic: formula_5 Further applications. Categorical trace methods have been used to prove an algebro-geometric version of the Atiyah–Bott fixed point formula, an extension of the Lefschetz fixed point formula. Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
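A small numerical illustration of the two examples above (a sketch only; the grading convention and the test data are assumptions of the illustration): in vector spaces the categorical trace reproduces the ordinary matrix trace, and in graded vector spaces with the Koszul sign rule the trace of the identity is the Euler characteristic.

```python
import numpy as np

# Vector spaces over k: unravelling the composite 1 -> V (x) V* -> V (x) V* -> V* (x) V -> 1
# for an endomorphism f of V = R^2 recovers the ordinary matrix trace.
f = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coev = np.eye(2)                                        # coevaluation element sum_i e_i (x) e_i*
after_f = f @ coev                                      # apply f (x) id
cat_trace = np.einsum("ij,ji->", after_f, np.eye(2))    # symmetry twist followed by evaluation
print(cat_trace, np.trace(f))                           # both 5.0

# Graded vector spaces with the Koszul sign rule: the trace of the identity of a graded
# space with dimensions (2, 3, 1) in degrees 0, 1, 2 is the Euler characteristic.
dims = [2, 3, 1]
print(sum((-1)**i * d for i, d in enumerate(dims)))     # 2 - 3 + 1 = 0
```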
[ { "math_id": 0, "text": "\\otimes" }, { "math_id": 1, "text": "X^\\vee" }, { "math_id": 2, "text": "f: X \\to X" }, { "math_id": 3, "text": "\\mathrm{tr}(f) : 1 \\ \\stackrel{coev}{\\longrightarrow}\\ X \\otimes X^\\vee \\ \\stackrel{f \\otimes \\operatorname{id}}{\\longrightarrow}\\ X \\otimes X^\\vee \\ \\stackrel{twist}{\\longrightarrow}\\ X^\\vee \\otimes X \\ \\stackrel{eval}{\\longrightarrow}\\ 1" }, { "math_id": 4, "text": "k \\to k" }, { "math_id": 5, "text": "\\mathrm{tr}(\\operatorname{id}_V) = \\sum_i (-1)^i \\operatorname {rank} V_i." } ]
https://en.wikipedia.org/wiki?curid=62338906
62350115
Hiroshi Enatsu
Japanese theoretical physicist (1922–2019) Hiroshi Enatsu (12 September 1922 – 4 August 2019) was a Japanese theoretical physicist who contributed to a relativistic Hamiltonian formalism in quantum field theory. Academic works. Enatsu found that the commutation relation formula_0 in a relativistic Hamiltonian formalism is equivalent to that in the conventional non-relativistic Hamiltonian formalism of quantum field theory, where formula_1 is the commutator, formula_2 denotes the space-time coordinates, formula_3 is the proper time, formula_4 is the Hermitian adjoint of formula_5, and formula_6 is the Dirac delta function, with the aid of the relation formula_7. Here, the step function formula_8 satisfies formula_9 for formula_10, and formula_11 for formula_12. Biography. Early stage. Enatsu was born on 12 September 1922 in Miyakonojō, the son of Eizo and Fumi (Kuroiwa) Enatsu. Miyakonojō is a town within the territory of the former Satsuma Domain, and it was rather natural for Enatsu to receive his education in Kagoshima, where he completed his secondary schooling and junior college. Encounter with Hideki Yukawa. In Enatsu's last year of junior college, Hideki Yukawa gave a lecture on meson theory in Kagoshima. After listening to the lecture, Enatsu became interested in Yukawa and meson theory, and he decided to study under Yukawa. He studied meson theory under Yukawa as an undergraduate. He received a Bachelor of Science from Kyoto Imperial University in 1944. He received a Doctor of Science from Kyoto Imperial University in 1953 under Yukawa. Enatsu was an assistant under Yukawa at Kyoto University from 1946 to 1957. Enatsu was a research assistant at Columbia University in New York City from 1952 to 1953. Encounter with Niels Bohr. Enatsu was a visiting member of the Institute for Theoretical Physics in Copenhagen from 1955 to 1956. During his stay in Copenhagen, he was able to put questions to Bohr almost every week, a special privilege. Professor at Ritsumeikan University. In 1957, Enatsu became an assistant professor, and later a professor, at Ritsumeikan University in Kyoto. From 1971 to 1972, he was also the dean of the faculty of science and engineering at Ritsumeikan University. In 1988, he retired from his professorship at Ritsumeikan University and became a professor emeritus. In 1997, he received the 3rd class of the Order of the Sacred Treasure. Enatsu died on 4 August 2019 in Kyoto. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[\\psi (x, \\tau), \\psi ^*(x', \\tau)] = \\delta ( x - x' )" }, { "math_id": 1, "text": "[\\psi, \\phi] = \\psi \\phi - \\phi \\psi" }, { "math_id": 2, "text": " x " }, { "math_id": 3, "text": "\\tau" }, { "math_id": 4, "text": "\\psi ^*" }, { "math_id": 5, "text": "\\psi" }, { "math_id": 6, "text": "\\delta (x)" }, { "math_id": 7, "text": "\\epsilon ( x - x' ) \\epsilon ( \\tau - \\tau' )=1" }, { "math_id": 8, "text": "\\epsilon ( x ) " }, { "math_id": 9, "text": "\\epsilon ( x )=1" }, { "math_id": 10, "text": " 0 < x " }, { "math_id": 11, "text": "\\epsilon ( x )=-1" }, { "math_id": 12, "text": "x < 0" } ]
https://en.wikipedia.org/wiki?curid=62350115
6235137
Discriminant validity
In psychology, discriminant validity tests whether concepts or measurements that are not supposed to be related are actually unrelated. Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity. They stressed the importance of using both discriminant and convergent validation techniques when assessing new tests. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts. In showing that two scales do not correlate, it is necessary to correct for attenuation in the correlation due to measurement error. It is possible to calculate the extent to which the two scales overlap by using the following formula, where formula_0 is the correlation between x and y, formula_1 is the reliability of x, and formula_2 is the reliability of y: formula_3 Although there is no standard value for discriminant validity, a result less than 0.85 suggests that discriminant validity likely exists between the two scales. A result greater than 0.85, however, suggests that the two constructs overlap greatly and they are likely measuring the same thing, and therefore, discriminant validity between them cannot be claimed. Consider researchers developing a new scale designed to measure narcissism. They may want to show discriminant validity with a scale measuring self-esteem. Narcissism and self-esteem are theoretically different concepts, and therefore it is important that the researchers show that their new scale measures narcissism and not simply self-esteem. First, the average inter-item correlations within and between the two scales can be calculated: Narcissism — Narcissism: 0.47 Narcissism — Self-esteem: 0.30 Self-esteem — Self-esteem: 0.52 The correction for attenuation formula can then be applied: formula_4 Since 0.607 is less than 0.85, it can be concluded that discriminant validity exists between the scale measuring narcissism and the scale measuring self-esteem. The two scales measure theoretically different constructs. Recommended approaches to test for discriminant validity on the construct level are AVE-SE comparisons (Fornell &amp; Larcker, 1981; note that the measurement error-adjusted inter-construct correlations derived from the CFA model should be used rather than raw correlations derived from the data.) and the assessment of the HTMT ratio (Henseler et al., 2014). Simulation tests reveal that the former performs poorly for variance-based structural equation models (SEM), e.g. PLS, but well for covariance-based SEM, e.g. Amos, and the latter performs well for both types of SEM. Voorhees et al. (2015) recommend combining both methods for covariance-based SEM with an HTMT cutoff of 0.85. A recommended approach to test for discriminant validity on the item level is exploratory factor analysis (EFA). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
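The example calculation above can be reproduced directly; a minimal sketch:

```python
from math import sqrt

def corrected_overlap(r_xy, r_xx, r_yy):
    """Correction for attenuation: r_xy / sqrt(r_xx * r_yy)."""
    return r_xy / sqrt(r_xx * r_yy)

# Average inter-item correlations from the narcissism / self-esteem example above
overlap = corrected_overlap(r_xy=0.30, r_xx=0.47, r_yy=0.52)
print(round(overlap, 3))   # 0.607 -- below the 0.85 heuristic, so discriminant validity can be claimed
```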
[ { "math_id": 0, "text": "r_{xy}" }, { "math_id": 1, "text": "r_{xx}" }, { "math_id": 2, "text": "r_{yy}" }, { "math_id": 3, "text": "\\cfrac{r_{xy}}{\\sqrt{r_{xx} \\cdot r_{yy}}}" }, { "math_id": 4, "text": "\\cfrac{0.30}{\\sqrt{0.47 * 0.52}} = 0.607" } ]
https://en.wikipedia.org/wiki?curid=6235137
62369482
Jean Lannes (mathematician)
French mathematician Jean E. Lannes (born 21 September 1947 in Pauligne) is a French mathematician, specializing in algebraic topology and homotopy theory. Lannes completed his secondary studies at the Lycée Louis-le-Grand in Paris and graduated in 1966 from the École Normale Supérieure. He received his doctorate in 1975 from the University of Paris-Saclay (Paris 12). Afterwards he was a professor there and at the Paris Diderot University (Paris 7). In 2009 he became a professor at the École polytechnique and "Directeur des recherches" at the Centre de mathématiques Laurent-Schwartz (CMLS); he is now professor emeritus. He was a visiting scholar at several academic institutions, including the Institute for Advanced Study (1979/80) and the Massachusetts Institute of Technology (MIT). Lannes is known for his research on the homotopy theory of classifying spaces of groups. He proved in the mid-1980s the generalized Sullivan conjecture (which was also proven independently by Gunnar Carlsson and Haynes Miller). The mod "p" cohomology of the classifying spaces of certain finite groups (elementary Abelian "p"-groups, for which the generalized Sullivan conjecture was formulated) played an important role in the proof. The connection between the cohomology theory of these finite groups and the classifying spaces of groups is illuminated by the work of Lannes. He introduced the formula_0-functor on the category of unstable algebra over the Steenrod algebra. Lannes thus led an important development of algebraic topology in the 1980s. He has collaborated extensively with Lionel Schwartz, Hans-Werner Henn, and Saîd Zarati. Lannes has also done research on the knot invariants of Vassiliev. He was an invited speaker at the International Congress of Mathematicians (ICM) in Zurich in 1994. His doctoral candidates include Fabien Morel. In 2007 there was a conference in Djerba in honor of Lannes's 60th birthday.
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=62369482
62371614
William Chapple (surveyor)
William Chapple (1718–1781) was an English surveyor and mathematician. His mathematical discoveries were mostly in plane geometry and include the relation now known as Euler's theorem for the distance between the incentre and circumcentre of a triangle, the triangular case of what is now called Poncelet's closure theorem, and the first known proof of the existence of the orthocentre of a triangle. He was also one of the earliest mathematicians to calculate the values of annuities. Life. Chapple was born in Witheridge on 25 January 1719 [O.S. 14 January 1718], the son of a poor farmer and parish clerk. He was a devoted bibliophile, and gained much of his knowledge of mathematics from Ward's "The Young Mathematician's Guide: Being a Plain and Easie Introduction to the Mathematicks, in Five Parts". He became an assistant to the parish priest, and a regular contributor to "The Ladies' Diary", especially concerning mathematical problems. He also later contributed work on West Country English to "The Gentleman's Magazine". His correspondence led him to become, in 1738, the clerk for a surveyor in Exeter. He married the surveyor's niece, supervised the construction of a new hospital in Exeter, and became secretary of the hospital. He also worked as the estate steward for William Courtenay, 1st Viscount Courtenay. In 1772 he began work on an update to Tristram Risdon's "Survey of the County of Devon", and spent much of the rest of his life working on it; it was published in part throughout his life, and in complete form posthumously in 1785. He died in early September 1781. A tablet in his memory could be found in the west end of the nave of the Church of St Mary Major, Exeter, prior to that church's demolition in 1971. Chapple Road in Witheridge is named after him. Contributions to mathematics. Andrea del Centina writes that: "To illustrate the work of Chapple, whose arguments are often confused and whose logic is very poor, even for the standard of his time, is not easy especially when trying to keep as faithful as possible to his thought." Nevertheless, Chapple made several significant discoveries in mathematics. Plane geometry. Euler's theorem in geometry gives a formula for the distance formula_0 between the incentre and circumcentre of a triangle, as a function of the inradius formula_1 and circumradius formula_2: formula_3 An immediate consequence is the related inequality formula_4. Although these results are named for Leonhard Euler, who published them in 1765, they were found earlier by Chapple, in a 1746 essay in "The Gentleman's Magazine". In the same work he stated that, when two circles are the incircle and circumcircle of a triangle, then there is an infinite family of triangles for which they are the incircle and circumcircle. This is the triangular case of Poncelet's closure theorem, which applies more generally to polygons of any number of sides and to conics other than circles. It is the first known mathematical publication on pairs of inscribed and circumscribed circles of polygons, and significantly predates Poncelet's own 1822 work in this area. In 1749, Chapple published the first known proof of the existence of the orthocentre of a triangle, the point where the three perpendiculars from the vertices to the sides meet. The orthocentre itself was known previously, but Chapple writes that its existence was "often taken for granted, but no where demonstrated". Finance. Chapple learned of the problem of valuation of annuities through his correspondence with John Rowe and Thomas Simpson, and carried out this valuation for Courtenay. In this, he became one of the first mathematicians to work on this problem, along with Simpson, Abraham de Moivre, James Dodson, and William Jones. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
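As a quick numerical check of the Chapple–Euler relation d = sqrt(R(R - 2r)) (formula_3 above) and the inequality R ≥ 2r (formula_4), the following Python sketch (an illustration added here, not part of the original article; the triangle coordinates are arbitrary example values) computes the inradius, circumradius, incentre and circumcentre of a triangle directly from its vertices and compares the incentre–circumcentre distance with the formula.
```python
import math

def circumcenter(A, B, C):
    # Standard determinant formula for the circumcentre of a triangle.
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def incenter(A, B, C):
    # The incentre is the average of the vertices weighted by the opposite side lengths.
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    p = a + b + c
    I = ((a * A[0] + b * B[0] + c * C[0]) / p,
         (a * A[1] + b * B[1] + c * C[1]) / p)
    return I, a, b, c

A, B, C = (0.0, 0.0), (7.0, 0.0), (3.0, 5.0)   # arbitrary example triangle
I, a, b, c = incenter(A, B, C)
O = circumcenter(A, B, C)

s = (a + b + c) / 2                                 # semiperimeter
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
r = area / s                                        # inradius
R = a * b * c / (4 * area)                          # circumradius
d = math.dist(I, O)                                 # incentre-circumcentre distance

print(f"d                = {d:.6f}")
print(f"sqrt(R(R - 2r))  = {math.sqrt(R * (R - 2 * r)):.6f}")
print(f"R >= 2r holds:     {R:.4f} >= {2 * r:.4f}")
```
For any non-degenerate triangle the two printed distances agree to rounding error, and the inequality holds with equality only for equilateral triangles.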
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "d=\\sqrt{R(R-2r)}." }, { "math_id": 4, "text": "R\\ge 2r" } ]
https://en.wikipedia.org/wiki?curid=62371614
62377789
Oculometer
Device measuring eye movement An oculometer is a device that tracks eye movement. The oculometer computes eye movement by tracking corneal reflection relative to the center of the pupil. An oculometer, which can provide continuous measurements in real time, can be a research tool to understand gaze as well as cognitive function. Further, it can be applied for hands-free control. It has applications in flight training, cognitive assessment, disease diagnosis, and treatment. The oculometer relies on the principle that when a collimated light beam is incident on the eye, the direction in which the eye moves is proportional to the position of the reflection of that light beam from the cornea with respect to the center of the pupil. Eye movements can be accurately measured over a linear range of more than 20formula_0 with a resolution of 0.1formula_0. History. Eye movement and tracking have been studied for centuries, with the very first eye tracking being simple observation of the eyes, by either oneself or another. The first improvement on this occurred in 1738, when an observer would feel the outside of closed eyelids to track eye movement. Next, in 1879, a kymograph was used to listen to eye muscle movements. Though rudimentary, these early techniques show a persistent need throughout history to track eye movements. The first true eye tracking device was invented by Huey in 1898. To work, this device was required to contact the cornea, which limited its comfort, usability, and generalizability. It was not until the 20th century that a robust, non-contact, modern eye-tracker came to fruition. This device, called the photocornograph, worked by photographing eye movement based on reflection from the cornea. This device only recorded horizontal movements, until the work of Judd and colleagues in 1905 added both temporal and vertical recording. Due to the many applications of an eye tracking device to aviators and pilots, NASA and the United States Air Force carried out extensive studies on this technology, propelling the field forward. Much of this took place during the 1970s and 1980s. However, even with this extensive research, oculometers remained bulky and technically difficult. Research-grade oculometers finally received a user-friendly redesign, and commercial devices have recently become available. These low-profile devices can be worn non-intrusively on a pair of eyeglasses. Advantages. Since the principles governing the workings of the oculometer rely on a relatively simple concept (electro-optical sensing of the eye), the oculometer will be functional whenever the user can see. Additionally, the position of the reflection of the collimated beam from the cornea can be approximated to be on the plane of the pupil. This implies minimal parallax error between the corneal reflection and the center of the pupil, thus making the oculometer insensitive to changes in the head position during measurements. These properties of the oculometer ensure minimal interference with the routine activities of the user during measurements. It also negates the need for extensive equipment like bite plates or rigid skull clamping for measurements. General principles. Eye movement can be quantified by reflection off the cornea. However, in this case a movement of the head would also cause a movement to be recorded. 
This can be overcome either by rigidly fixing the head to prevent any movements, which is intrusive and uncomfortable for the user and not broadly applicable in human research studies, or by mounting the entire apparatus on the head, which is likewise bulky and uncomfortable. A better solution is to measure two parameters, such as corneal reflection and pupil movement (based on pupil center). Optical design. The optical design of the oculometer allows normal vision, directs light from a fixed internal source onto the eye of the user, and forms the image of the pupil on a detector. The basic lens design includes a fixed eye piece and an adjustable objective lens followed by two beam splitters. The device also includes a polarization system to polarize the light from the source (typically a glow modulator tube) in the H direction. In order to attenuate the light from the source through reflections in the eyepiece, a linear polarizer in the V direction is placed in the optical path. A quarter wave plate is placed between the eye and the eye piece and rotates the plane of polarization by 90 degrees, thus ensuring that the V-polarizer does not attenuate the true corneal reflections. The light source and detector are aligned coaxially. When the eye moves, the reflection off the cornea is displaced from the pupil center. This displacement is given by formula_1 where D is the displacement, formula_2 is the distance from the center of the cornea, and formula_3 is the angle of inclination of the eye's optical axis to the oculometer. Near infrared (NIR) light (approximately 750 nm to 2,500 nm wavelengths) is used for a few reasons. First, NIR light is less detectable to the human eye than visible light, so the NIR light beam is less intrusive or noticeable to the user. Second, with this configuration the pupil is backlit, resulting in a bright disc, effectively differentiating the pupil from the rest of the eye and face. Typically, the oculometer consists of an eyepiece through which the user sees. An alternate design exists where the oculometer is head-mounted. This arrangement does not include the traditional eye-piece and the user sees through a transparent, curved visor placed in front of his eyes. Electronic design. The traditional oculometer operates in two modes: acquisition and tracking modes. When the user first sees through the eye piece, a rough raster scan captures the black pupil and bright reflections from the cornea. Then, the device automatically switches to tracking mode, where time-division-multiplex scans acquire continuous measurements of eye direction. Eye direction from the time-division-multiplex scans is computed by the superposition of the scan positions of the corneal reflection and the pupil. In case of device malfunction or loss in continuity due to the user blinking their eyes, the device switches back to the acquisition mode until tracking is restored. In recent designs, the acquisition mode has been automated to ensure that the pupil/iris boundary is instantly captured once the user sees through the eye piece. The automation also leads to an automatic switch to tracking mode after initial acquisition is obtained or after the user blinks. Applications. Piloting aircraft. There are numerous uses for the oculometer in the field of aviation. One is understanding whether cognitive abilities are sufficient for flight clearance. 
Further, flight programs can use the oculometer to inform cockpit design in terms of instrumentation panels, by studying the gaze of pilots as they fly. Finally, aviator training has benefitted from the oculometer as well. Understanding how a particular pilot scans through his field of view while flying allows for personalized feedback from flight coaches. It can provide instructors with more information by which to evaluate and further instruct learning pilots. For this reason, NASA and the US Armed Forces have utilized oculometers in their training programs, creating the Oculometer Training Tape Technique in the late 1900s. NASA. A NASA research project regarding the oculometer was to realize the ability for a person to control a machine using their eyes, which first necessitates eye movement measurements. NASA engineered a telescopic oculometer in which a user looks through an eyepiece, and as long as the user can see through the eyepiece, eye movements will be measured. One particular application of NASA's oculometer endeavor is eye control of an Astronaut Maneuvering Unit (AMU). When an astronaut is in space and would like to move, the AMU facilitates this. However, controlling such a unit is no trivial task. Manual/hand controls are difficult as there are many axes and therefore many muscle outputs needed to coordinate 3D movement. However, eye control would be easier to implement with an oculometer. Cognitive assessment. Aviation requires robust, sharp cognitive function, and the eyes are part of the central nervous system as extensions of the brain, linking cognitive function with healthy eye function. Therefore, oculometers can function as cognitive assessment tools. Diagnosis of Parkinson's disease. Abnormal eye movement is an established biomarker for numerous motor diseases, including Parkinson's disease. Each motor disease is expected to produce a different signature pattern of eye movement abnormalities. Using those eye movement patterns both as a diagnostic tool and for monitoring disease progression has therefore been of scientific interest. Oculometers are therefore used in this area for tracking eye movement. The use of oculometers for diagnosis of motor diseases is promising, though it has not yet been validated in the clinic. For Parkinson's disease specifically, the signature pattern of eye movement abnormalities appears in horizontal saccades (rapid, conjugate eye movements that shift the center of the visual field). Patients with Parkinson's disease display a marked inability to perform antisaccadic tasks (eye movements in the opposite direction from the onset trigger). Measurement of antisaccades therefore enables scientists to detect early stages of Parkinson's disease. These studies are still in the research phase. Smart eyeglasses. For this application, the electronic design of the traditional oculometer has been modified to replace complex real-time video processing such that the oculometer could fit on lightweight eyeglasses and have relatively long battery life. Smart eyeglasses are used to correct for vision errors due to age-related conditions while restoring normal vision. Smart eyeglasses utilize tunable eyepieces rather than the fixed lenses used in conventional glasses. These glasses work by projecting light onto the user's eyeball from a few different directions using infrared LEDs and receiving the refracted light at discrete infrared proximity sensors placed at a few different locations. 
The use of multiple detectors not only enables oculometers to be used as lightweight wearables but also ensures that signals detected by the sensors are not dependent on external illumination. This property allows the device to be functional in dark conditions. The major disadvantage of using discrete sensors rather than continuous video processing is a significant decline in accuracy, since both the frequency and the number of measurements are reduced. Other applications. Other potential applications of oculometers that are still under development include in air traffic control, for operators to designate aircraft through eye movement; in laser communication in dynamic situations, where operators can transmit signals by looking at the signal; in television systems, to monitor the eye direction as it views the television display such that sensory requirements of the eye can be met with lower bandwidths; and in psychological tests, to analyze the patterns of images that patients tend to avoid. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
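The displacement relation formula_1 given in the optical design section can be illustrated with a short numerical sketch. The following Python code is an illustration added here, not part of the source; in particular the value used for formula_2 is an arbitrary assumed constant, not a measured property of any real instrument.
```python
import math

KAPPA_MM = 3.9   # assumed example value for the constant kappa, in millimetres

def displacement_from_angle(theta_deg, kappa=KAPPA_MM):
    """Corneal-reflection displacement D (mm) for an eye rotation theta (degrees)."""
    return kappa * math.sin(math.radians(theta_deg))

def angle_from_displacement(d_mm, kappa=KAPPA_MM):
    """Invert D = kappa * sin(theta) to recover the eye rotation angle in degrees."""
    return math.degrees(math.asin(d_mm / kappa))

# Angles spanning the roughly 20-degree linear range and 0.1-degree resolution
# quoted in the article.
for theta in (0.1, 5.0, 10.0, 20.0):
    d = displacement_from_angle(theta)
    print(f"theta = {theta:5.1f} deg -> D = {d:.4f} mm -> recovered {angle_from_displacement(d):.2f} deg")
```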
[ { "math_id": 0, "text": "^\\circ" }, { "math_id": 1, "text": " D=\\kappa*sin \\theta" }, { "math_id": 2, "text": "\\kappa" }, { "math_id": 3, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=62377789
6238120
Phi value analysis
Phi value analysis, formula_0 analysis, or formula_1-value analysis is an experimental protein engineering technique for studying the structure of the folding transition state of small protein domains that fold in a two-state manner. The structure of the folding transition state is hard to find using methods such as protein NMR or X-ray crystallography because folding transition states are mobile and partly unstructured by definition. In formula_0-value analysis, the folding kinetics and conformational folding stability of the wild-type protein are compared with those of point mutants to find "phi values". These measure the mutant residue's energetic contribution to the folding transition state, which reveals the degree of native structure around the mutated residue in the transition state, by accounting for the relative free energies of the unfolded state, the folded state, and the transition state for the wild-type and mutant proteins. The protein's residues are mutated one by one to identify residue clusters that are well-ordered in the folded transition state. These residues' interactions can be checked by "double-mutant-cycle formula_0 analysis", in which the single-site mutants' effects are compared to the double mutants'. Most mutations are conservative and replace the original residue with a smaller one ("cavity-creating mutations") like alanine, though tyrosine-to-phenylalanine, isoleucine-to-valine and threonine-to-serine mutants can be used too. Chymotrypsin inhibitor, SH3 domains, WW domain, individual domains of proteins L and G, ubiquitin, and barnase have all been studied by formula_0 analysis. Mathematical approach. Phi is defined thus: formula_2 where formula_3 is the difference in energy between the wild-type protein's transition state and denatured state, formula_4 is the same energy difference but for the mutant protein, and the formula_5 terms are the corresponding differences in energy between the native and denatured states. The phi value is interpreted as how much the mutation destabilizes the transition state versus the folded state. Though formula_0 is nominally expected to range from zero to one, negative values can appear. A value of zero suggests the "mutation" doesn't affect the structure of the folding pathway's rate-limiting transition state, and a value of one suggests the mutation destabilizes the transition state as much as the folded state; values near zero suggest the "area around the mutation" is relatively unfolded or unstructured in the transition state, and values near one suggest the transition state's local structure near the mutation site is similar to the native state's. Conservative substitutions on the protein's surface often give phi values near one. When formula_0 is well between zero and one, it is less informative, since it cannot by itself distinguish between a transition state that is partially structured at the mutation site and a mixture of molecules, some with that site structured and some without. Example: barnase. Alan Fersht pioneered phi value analysis in his study of the small bacterial protein barnase. Using molecular dynamics simulations, he found that the transition state between folding and unfolding looks like the native state and is the same no matter the reaction direction. Phi varied with the mutation location as some regions gave values near zero and others near one. The distribution of formula_0 values throughout the protein's sequence agreed with the simulated transition state except for one helix, which folded semi-independently and made native-like contacts with the rest of the protein only once the transition state had formed fully. 
Such variation in the extent of folding within one protein makes formula_0 values hard to interpret on their own; the inferred transition state structure must instead be compared with folding-unfolding simulations, which are computationally expensive. Variants. Other 'kinetic perturbation' techniques for studying the folding transition state have appeared recently. Best known is the psi (formula_6) value, which is found by engineering two metal-binding amino acid residues like histidine into a protein and then recording the folding kinetics as a function of metal ion concentration, though Fersht thought this approach difficult. A 'cross-linking' variant of the formula_1-value was used to study segment association in a folding transition state as covalent crosslinks like disulfide bonds were introduced. formula_1-T value analysis has been used as an extension of formula_1-value analysis to measure the response of mutants as a function of temperature to separate enthalpic and entropic contributions to the transition state free energy. Limitations. The error in equilibrium stability and aqueous (un)folding rate measurements may be large when values of formula_1 obtained in denaturant solutions must be extrapolated to nearly pure aqueous solutions, or when the stability difference between the native and mutant protein is 'low', less than about 7 kJ/mol. This may cause formula_1 to fall beyond the zero-to-one range. Calculated formula_1 values depend strongly on how many data points are available. A study of 78 mutants of WW domain with up to four mutations per residue has quantified what types of mutations avoid interference from native state flexibility, solvation, and other effects, and statistical analysis shows that reliable information about transition state perturbation can be obtained from large mutant screens. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
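To make the definition above concrete, the following Python sketch (an illustration added here, not from the source; every numerical value is invented) computes a phi value from the folding rate constants and equilibrium stabilities of a wild-type protein and a point mutant, using the common operational relation in which the transition-state term is obtained from the ratio of folding rates, RT ln(k_wt/k_mut).
```python
import math

R_GAS = 8.314e-3   # gas constant in kJ/(mol*K)
T = 298.0          # temperature in K

def phi_value(kf_wt, kf_mut, dG_wt, dG_mut):
    """Phi from folding rate constants (s^-1) and folding stabilities (kJ/mol)."""
    ddG_ts = R_GAS * T * math.log(kf_wt / kf_mut)   # transition state vs denatured state
    ddG_eq = dG_wt - dG_mut                         # native state vs denatured state
    return ddG_ts / ddG_eq

# Invented example: a mutation that slows folding 20-fold and destabilises
# the native state by 8 kJ/mol.
kf_wt, kf_mut = 120.0, 6.0     # folding rate constants, s^-1
dG_wt, dG_mut = 25.0, 17.0     # folding stabilities, kJ/mol

print(f"phi = {phi_value(kf_wt, kf_mut, dG_wt, dG_mut):.2f}")
```
With these numbers phi comes out close to one, i.e. the (hypothetical) mutated site would be interpreted as having a native-like environment in the transition state.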
[ { "math_id": 0, "text": "\n\\phi\n" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\n\\phi = \\frac{(\\Delta G^{TS \\rightarrow D}_{W} - \\Delta G^{TS \\rightarrow D}_{M})}{(\\Delta G^{N \\rightarrow D}_{W} - \\Delta G^{N \\rightarrow D}_{M})} = \\frac{\\Delta\\Delta G^{TS \\rightarrow D}}{\\Delta\\Delta G^{N \\rightarrow D}}\n" }, { "math_id": 3, "text": "\\Delta G^{TS \\rightarrow D}_{W}" }, { "math_id": 4, "text": "\\Delta G^{TS \\rightarrow D}_{M}" }, { "math_id": 5, "text": "\\Delta G^{N \\rightarrow D}" }, { "math_id": 6, "text": "\\psi" } ]
https://en.wikipedia.org/wiki?curid=6238120
62382
Catalan's conjecture
The only nontrivial solution of x^a - y^b = 1 in positive integers is 3^2 - 2^3 Catalan's conjecture (or Mihăilescu's theorem) is a theorem in number theory that was conjectured by the mathematician Eugène Charles Catalan in 1844 and proven in 2002 by Preda Mihăilescu at Paderborn University. The integers 2^3 and 3^2 are two perfect powers (that is, powers of exponent higher than one) of natural numbers whose values (8 and 9, respectively) are consecutive. The theorem states that this is the "only" case of two consecutive perfect powers. That is to say, the only solution in natural numbers of x^a - y^b = 1 with all of x, y, a, b greater than 1 is x = 3, a = 2, y = 2, b = 3. History. The history of the problem dates back at least to Gersonides, who proved a special case of the conjecture in 1343 where ("x", "y") was restricted to be (2, 3) or (3, 2). The first significant progress after Catalan made his conjecture came in 1850 when Victor-Amédée Lebesgue dealt with the case "b" = 2. In 1976, Robert Tijdeman applied Baker's method in transcendence theory to establish a bound on "a", "b" and used existing results bounding "x", "y" in terms of "a", "b" to give an effective upper bound for "x", "y", "a", "b". Michel Langevin computed a value of formula_0 for the bound, resolving Catalan's conjecture for all but a finite number of cases. Catalan's conjecture was proven by Preda Mihăilescu in April 2002. The proof was published in the "Journal für die reine und angewandte Mathematik", 2004. It makes extensive use of the theory of cyclotomic fields and Galois modules. An exposition of the proof was given by Yuri Bilu in the Séminaire Bourbaki. In 2005, Mihăilescu published a simplified proof. Pillai's conjecture. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Does each positive integer occur only finitely many times as a difference of perfect powers? Pillai's conjecture concerns a general difference of perfect powers (sequence in the OEIS): it is an open problem initially proposed by S. S. Pillai, who conjectured that the gaps in the sequence of perfect powers tend to infinity. This is equivalent to saying that each positive integer occurs only finitely many times as a difference of perfect powers: more generally, in 1931 Pillai conjectured that for fixed positive integers "A", "B", "C" the equation formula_1 has only finitely many solutions ("x", "y", "m", "n") with ("m", "n") ≠ (2, 2). Pillai proved that for fixed "A", "B", "x", "y", and for any λ less than 1, we have formula_2 uniformly in "m" and "n". The general conjecture would follow from the ABC conjecture. Pillai's conjecture means that for every natural number "n", there are only finitely many pairs of perfect powers with difference "n". The list below shows, for "n" ≤ 64, all solutions for perfect powers less than 10^18, such that the exponent of both powers is greater than 1. The number of such solutions for each n is listed at OEIS: . See also OEIS:  for the smallest solution (&gt; 0). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
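The statements above are easy to probe numerically for small numbers. The following Python sketch (an illustration added here, not part of the article; the search bound is arbitrary) enumerates perfect powers up to a limit, confirms that 8 and 9 are the only consecutive pair found, and counts how many times each small "n" occurs as a difference of two perfect powers, the quantity that Pillai's conjecture asserts is finite for every "n".
```python
from collections import Counter

# Enumerate perfect powers x**k (x >= 2, k >= 2) up to an arbitrary bound.
LIMIT = 10**7
powers = set()
base = 2
while base * base <= LIMIT:
    value = base * base
    while value <= LIMIT:
        powers.add(value)
        value *= base
    base += 1
sorted_powers = sorted(powers)

# Catalan / Mihailescu: consecutive perfect powers (difference exactly 1).
consecutive = [(p, p + 1) for p in sorted_powers if p + 1 in powers]
print("consecutive perfect powers up to", LIMIT, ":", consecutive)

# Pillai: representations of each small n as a difference of two perfect powers.
diff_counts = Counter()
for i, p in enumerate(sorted_powers):
    for q in sorted_powers[i + 1:]:
        if q - p > 64:
            break          # powers are sorted, so later differences only grow
        diff_counts[q - p] += 1

for n in range(1, 11):
    print(f"n = {n:2d}: {diff_counts[n]} representation(s) with both powers below {LIMIT}")
```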
[ { "math_id": 0, "text": "\\exp \\exp \\exp \\exp 730 \\approx 10^{10^{10^{10^{317}}}}" }, { "math_id": 1, "text": "Ax^n - By^m = C" }, { "math_id": 2, "text": "|Ax^n - By^m| \\gg x^{\\lambda n}" } ]
https://en.wikipedia.org/wiki?curid=62382
6238212
Rectified 24-cell
In geometry, the rectified 24-cell or rectified icositetrachoron is a uniform 4-dimensional polytope (or uniform 4-polytope), which is bounded by 48 cells: 24 cubes and 24 cuboctahedra. It can be obtained by rectification of the 24-cell, reducing its octahedral cells to cubes and cuboctahedra. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as tC24. It can also be considered a cantellated 16-cell with the lower symmetries B4 = [3,3,4]. B4 would lead to a bicoloring of the cuboctahedral cells into sets of 8 and 16. It is also called a runcicantellated demitesseract in a D4 symmetry, giving three colors of cells, 8 of each. Construction. The rectified 24-cell can be derived from the 24-cell by the process of rectification: the 24-cell is truncated at the midpoints of its edges. The vertices become cubes, while the octahedra become cuboctahedra. Cartesian coordinates. A rectified 24-cell having an edge length of √2 has vertices given by all permutations and sign permutations of the following Cartesian coordinates: (0,1,1,2) [4!/2!×2^3 = 96 vertices] The dual configuration with edge length 2 has all coordinate and sign permutations of: (0,2,2,2) [4×2^3 = 32 vertices] (1,1,1,3) [4×2^4 = 64 vertices] Symmetry constructions. There are three different symmetry constructions of this polytope. The lowest formula_0 construction can be doubled into formula_1 by adding a mirror that maps the bifurcating nodes onto each other. formula_0 can be mapped up to formula_2 symmetry by adding two mirrors that map all three end nodes together. The vertex figure is a triangular prism, containing two cubes and three cuboctahedra. The three symmetries can be seen with three colors of cuboctahedra in the lowest formula_0 construction, two colors (in a 1:2 ratio) in formula_1, and all identical cuboctahedra in formula_2. Related polytopes. The convex hull of the rectified 24-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 192 cells (48 cubes and 144 square antiprisms) with 192 vertices. Its vertex figure is a triangular bifrustum. Related uniform polytopes. The "rectified 24-cell" can also be derived as a "cantellated 16-cell". Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
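The coordinate description above can be verified directly. The following Python sketch (an illustration added here, not part of the article) generates all permutations and sign changes of (0,1,1,2), confirms that exactly 96 distinct vertices result, and checks that the smallest vertex-to-vertex distance equals the stated edge length √2.
```python
from itertools import permutations, product
import math

def signed_permutations(coords):
    """All distinct vectors obtained by permuting coords and flipping signs."""
    result = set()
    for perm in set(permutations(coords)):
        for signs in product((1, -1), repeat=len(perm)):
            result.add(tuple(s * c for s, c in zip(signs, perm)))
    return result

vertices = list(signed_permutations((0, 1, 1, 2)))
print("number of vertices:", len(vertices))        # expected: 96

# The edge length is the smallest distance between two distinct vertices.
min_sq = min(sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j]))
             for i in range(len(vertices)) for j in range(i + 1, len(vertices)))
print("edge length:", math.sqrt(min_sq))           # expected: sqrt(2) = 1.414...
```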
[ { "math_id": 0, "text": "{D}_4" }, { "math_id": 1, "text": "{C}_4" }, { "math_id": 2, "text": "{F}_4" } ]
https://en.wikipedia.org/wiki?curid=6238212
62382505
Van Vleck paramagnetism
Magnetic property In condensed matter and atomic physics, Van Vleck paramagnetism refers to a positive and temperature-independent contribution to the magnetic susceptibility of a material, derived from second order corrections to the Zeeman interaction. The quantum mechanical theory was developed by John Hasbrouck Van Vleck between the 1920s and the 1930s to explain the magnetic response of gaseous nitric oxide (NO) and of rare-earth salts. Alongside other magnetic effects like Paul Langevin's formulas for paramagnetism (Curie's law) and diamagnetism, Van Vleck discovered an additional paramagnetic contribution of the same order as Langevin's diamagnetism. The Van Vleck contribution is usually important for systems with one electron short of a half-filled shell, and it vanishes for elements with closed shells. Description. The magnetization of a material under a small external magnetic field formula_0 is approximately described by formula_1 where formula_2 is the magnetic susceptibility. When a magnetic field is applied to a paramagnetic material, its magnetization is parallel to the magnetic field and formula_3. For a diamagnetic material, the magnetization opposes the field, and formula_4. Experimental measurements show that most non-magnetic materials have a susceptibility that behaves in the following way: formula_5, where formula_6 is the absolute temperature; formula_7 are constants, and formula_8, while formula_9 can be positive, negative or null. Van Vleck paramagnetism often refers to systems where formula_10 and formula_11. Derivation. The Hamiltonian for an electron in a static homogeneous magnetic field formula_0 in an atom is usually composed of three terms formula_12 where formula_13 is the vacuum permeability, formula_14 is the Bohr magneton, formula_15 is the g-factor, formula_16 is the elementary charge, formula_17 is the electron mass, formula_18 is the orbital angular momentum operator, formula_19 is the spin, and formula_20 is the component of the position operator orthogonal to the magnetic field. The Hamiltonian has three terms: the first one, formula_21, is the unperturbed Hamiltonian without the magnetic field; the second one is proportional to formula_0; and the third one is proportional to formula_22. In order to obtain the ground state of the system, one can treat formula_21 exactly, and treat the magnetic field dependent terms using perturbation theory. Note that for strong magnetic fields, the Paschen-Back effect dominates. First order perturbation theory. First order perturbation theory on the second term of the Hamiltonian (proportional to formula_23) for electrons bound to an atom gives an energy correction given by formula_24 where formula_25 is the ground state, formula_26 is the Landé g-factor of the ground state and formula_27 is the total angular momentum operator (see Wigner–Eckart theorem). This correction leads to what is known as Langevin paramagnetism (the quantum theory is sometimes called Brillouin paramagnetism), which leads to a positive magnetic susceptibility. For sufficiently large temperatures, this contribution is described by Curie's law: formula_28, a susceptibility that is inversely proportional to the temperature formula_6, where formula_29 is the material-dependent Curie constant. If the ground state has no total angular momentum, there is no Curie contribution and other terms dominate.
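The article quotes Curie's law only in the form formula_28. As a numerical illustration (added here, not part of the source), the following Python sketch evaluates the standard textbook expression for the Curie susceptibility of non-interacting moments, chi = mu0 n g_J^2 muB^2 J(J+1) / (3 k_B T); the ion density and the values of g_J and J are arbitrary example inputs.
```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability, T*m/A
MU_B = 9.2740100783e-24        # Bohr magneton, J/T
K_B = 1.380649e-23             # Boltzmann constant, J/K

def curie_susceptibility(n, g_J, J, T):
    """Dimensionless SI volume susceptibility chi = C/T for independent moments."""
    mu_eff_sq = (g_J * MU_B) ** 2 * J * (J + 1)
    return MU0 * n * mu_eff_sq / (3 * K_B * T)

# Example inputs (illustrative only): a solid-like ion density and a J = 7/2,
# g_J = 2 ion.
n, g_J, J = 3.0e28, 2.0, 3.5

for T in (10, 100, 300):
    print(f"T = {T:4d} K  ->  chi = {curie_susceptibility(n, g_J, J, T):.3e}")
```
The printed values fall off as 1/T, which is the Curie behaviour; a ground state with no angular momentum would instead give zero in this first-order term.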
First-order perturbation theory on the third term of the Hamiltonian (proportional to formula_22) leads to a negative response (a magnetization that opposes the magnetic field), usually known as Larmor or Langevin diamagnetism: formula_30 where formula_31 is another constant proportional to formula_32, the number of atoms per unit volume, and formula_33 is the mean squared radius of the atom. Note that the Larmor susceptibility does not depend on the temperature. Second order: Van Vleck susceptibility. While the Curie and Larmor susceptibilities were well understood from experimental measurements, J.H. Van Vleck noticed that the calculation above was incomplete. If formula_23 is taken as the perturbation parameter, the calculation must include all orders of perturbation up to the same power of formula_23. As Larmor diamagnetism comes from the first-order perturbation of the formula_22 term, one must also calculate the second-order perturbation of the formula_34 term: formula_35 where the sum goes over all excited degenerate states formula_36, formula_37 are the energies of the excited states and the ground state, respectively, and the sum excludes the state formula_38, where formula_39. Historically, J.H. Van Vleck called this term the "high frequency matrix elements". In this way, the Van Vleck susceptibility comes from the second order energy correction, and can be written as formula_40 where formula_41 is the number density, and formula_42 and formula_43 are the projections of the spin and orbital angular momentum in the direction of the magnetic field, respectively. In this way, formula_44; as the signs of the Larmor and Van Vleck susceptibilities are opposite, the sign of formula_9 depends on the specific properties of the material. General formula and Van Vleck criteria. For a more general system (molecules, complex systems), the paramagnetic susceptibility for an ensemble of independent magnetic moments can be written as formula_45 where formula_46 and formula_47, and where formula_48 is the Landé g-factor of state "i". Van Vleck summarized the results of this formula in four cases, depending on the temperature. While molecular oxygen O2 and nitric oxide NO are similar paramagnetic gases, O2 follows Curie's law, as in case (a), while NO deviates slightly from it. In 1927, Van Vleck considered NO to be in case (d) and obtained a more precise prediction of its susceptibility using the formula above. Systems of interest. The standard examples of Van Vleck paramagnetism are europium(III) oxide (Eu2O3) salts, where there are six 4f electrons in the trivalent europium ions. The ground state of Eu3+ has a total azimuthal quantum number formula_49, so the Curie contribution (formula_50) vanishes; the first excited state, with formula_51, is very close to the ground state (about 330 K above it) and contributes through second order corrections, as shown by Van Vleck. A similar effect is observed in samarium salts (Sm3+ ions). In the actinides, Van Vleck paramagnetism is also important in Bk5+ and Cm4+, which have a localized 5f6 configuration. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
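To show how a temperature-independent Van Vleck term emerges from the general formula above, the following Python sketch (an illustration added here, not from the source; every energy, matrix element and density is invented) evaluates formula_45 for a toy ion with a non-magnetic ground state and a single magnetic excited level. At temperatures small compared with the level splitting, the result is dominated by the second-order term and is essentially constant in temperature.
```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability, T*m/A
MU_B = 9.2740100783e-24        # Bohr magneton, J/T
K_B = 1.380649e-23             # Boltzmann constant, J/K

# Toy two-level ion (all numbers invented):
DELTA = 400 * K_B              # splitting between ground and excited state (~400 K)
M_SQ = 2.0                     # |<g|Lz + g*Sz|e>|^2 in units of hbar^2 (assumed)
ENERGY = {0: 0.0, 1: DELTA}
W1 = {0: 0.0, 1: 1.0}          # diagonal (first-order) matrix elements W_i^(1)
W2 = {0: -M_SQ / DELTA,        # W_i^(2); the sign follows from E_i - E_k in the sum
      1: +M_SQ / DELTA}
N = 2.0e28                     # number density of ions, m^-3 (assumed)

def chi(T):
    """General expression for independent moments, restricted to two levels."""
    p = {i: math.exp(-ENERGY[i] / (K_B * T)) for i in (0, 1)}   # Boltzmann weights
    Z = p[0] + p[1]
    avg = sum(p[i] * (W1[i] ** 2 / (K_B * T) - 2 * W2[i]) for i in (0, 1)) / Z
    return MU0 * MU_B ** 2 * N * avg

for T in (2, 20, 100, 300, 1000):
    print(f"T = {T:5d} K  ->  chi = {chi(T):.3e}")
```
At low temperature only the ground state is populated and the susceptibility reduces to a positive, temperature-independent Van Vleck value; at temperatures comparable to the splitting, the thermally populated excited level adds a Curie-like contribution.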
[ { "math_id": 0, "text": "\\mathbf{H}" }, { "math_id": 1, "text": "\\mathbf{M}=\\chi\\mathbf{H}" }, { "math_id": 2, "text": "\\chi" }, { "math_id": 3, "text": "\\chi>0" }, { "math_id": 4, "text": "\\chi<0" }, { "math_id": 5, "text": "\\chi(T)\\approx \\frac{C_0}{T}+\\chi_0" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": "C_0,\\chi_0" }, { "math_id": 8, "text": "C_0\\ge0" }, { "math_id": 9, "text": "\\chi_0" }, { "math_id": 10, "text": "C_0\\approx 0" }, { "math_id": 11, "text": "\\chi_0>0" }, { "math_id": 12, "text": "\\mathcal{H}=\\mathcal{H}_0+\\mu_0\\frac{\\mu_{\\rm B}}{\\hbar}(\\mathbf{L}+g\\mathbf{S})\\cdot\\mathbf{H}+\\mu_0^2\\frac{e^2}{8m_{\\rm e}}r^2_{\\perp} H^2" }, { "math_id": 13, "text": "\\mu_0" }, { "math_id": 14, "text": "\\mu_{\\rm B}" }, { "math_id": 15, "text": "g" }, { "math_id": 16, "text": "e" }, { "math_id": 17, "text": "m_{\\rm e}" }, { "math_id": 18, "text": "\\mathbf{L}" }, { "math_id": 19, "text": "\\mathbf{S}" }, { "math_id": 20, "text": "r_\\perp" }, { "math_id": 21, "text": "\\mathcal{H}_0" }, { "math_id": 22, "text": "H^2" }, { "math_id": 23, "text": "H" }, { "math_id": 24, "text": "\\Delta E^{(1)}=\\mu_0\\frac{\\mu_{\\rm B}}{\\hbar}\\langle \\mathrm g| (\\mathbf{L}+g\\mathbf{S})\\cdot \\mathbf H|\\mathrm{g}\\rangle =g_J\\mu_0\\frac{\\mu_{\\rm B}}{\\hbar} \\langle \\mathrm g| \\mathbf{J}\\cdot \\mathbf H|\\mathrm{g}\\rangle" }, { "math_id": 25, "text": "|\\mathrm g\\rangle" }, { "math_id": 26, "text": "g_J" }, { "math_id": 27, "text": "\\mathbf{J}=\\mathbf{L}+\\mathbf{S}" }, { "math_id": 28, "text": "\\chi_{\\rm Curie}\\approx \\frac{C_1}{T}" }, { "math_id": 29, "text": "C_0\\approx C_1" }, { "math_id": 30, "text": "\\chi_{\\rm Larmor}=-C_2\\langle r^2\\rangle" }, { "math_id": 31, "text": "C_2" }, { "math_id": 32, "text": "n" }, { "math_id": 33, "text": "\\langle r^2 \\rangle" }, { "math_id": 34, "text": "B" }, { "math_id": 35, "text": "\\Delta E^{\\rm (2)}=\\left(\\frac{\\mu_0 \\mu_{\\rm B}}{\\hbar}\\right)^2\\sum_i\\frac{|\\langle \\mathrm{g}|(\\mathbf{L}+g\\mathbf{S})\\cdot \\mathbf H|\\mathrm{e}_i\\rangle|^2}{E^{(0)}_\\mathrm{g}-E^{(0)}_{\\mathrm{e},i}}" }, { "math_id": 36, "text": "|\\mathrm{e}_i\\rangle" }, { "math_id": 37, "text": "E^{(0)}_{\\mathrm{e},i},E^{(0)}_\\mathrm{g}" }, { "math_id": 38, "text": "i=0" }, { "math_id": 39, "text": "|\\mathrm{e}_{0}\\rangle=|\\mathrm{g}\\rangle " }, { "math_id": 40, "text": "\\chi_{\\rm VV}=2n\\mu_0\\left(\\frac{\\mu_{\\rm B}}{\\hbar}\\right)^2\\sum_{i(i\\neq 0)}\\frac{g_j^2|\\langle \\mathrm{g}|L_z+gS_z|\\mathrm{e}_i\\rangle|^2}{E_{\\mathrm{e},i}-E_{\\rm g}}," }, { "math_id": 41, "text": "n " }, { "math_id": 42, "text": "S_z " }, { "math_id": 43, "text": "L_z " }, { "math_id": 44, "text": "\\chi_0\\approx\\chi_{\\rm VV}+\\chi_{\\rm Larmor}" }, { "math_id": 45, "text": "\\chi_{\\rm para}=\\mu_0\\mu_{\\rm B}^2\\frac{n}{\\sum_{i} p_i} \\sum_{i} p_i\\left[\\frac{\\left(W^{(1)}_i\\right)^2}{k T} - 2 W^{(2)}_i\\right]\\;;\\;p_i=\\exp\\left(-\\frac{E_i^{(0)}}{k T}\\right)," }, { "math_id": 46, "text": "W_i^{(1)}=g_J^{(i)}\\langle \\mathrm{e}_i|J_z| \\mathrm{e}_i\\rangle/\\hbar" }, { "math_id": 47, "text": "W_i^{\\rm (2)}=\\frac{1}{\\hbar^2}\\sum_{k(k\\neq i)}\\frac{|\\langle \\mathrm{e}_i|L_z+gS_z|\\mathrm{e}_k\\rangle|^2}{\\delta E_{i,k}}\\;;\\;\\delta E_{i,k}=E^{(0)}_{\\mathrm{e},i}-E^{(0)}_{\\mathrm{e},k}" }, { "math_id": 48, "text": "g_J^{(i)} " }, { "math_id": 49, "text": "j=0" }, { "math_id": 50, "text": "C_0/T" }, { "math_id": 51, "text": "j=1" } ]
https://en.wikipedia.org/wiki?curid=62382505
62383372
Scanning quantum dot microscopy
Scanning quantum dot microscopy (SQDM) is a scanning probe microscopy (SPM) technique that is used to image nanoscale electric potential distributions on surfaces. The method quantifies surface potential variations via their influence on the potential of a quantum dot (QD) attached to the apex of the scanned probe. SQDM allows, for example, the quantification of surface dipoles originating from individual adatoms, molecules, or nanostructures. This gives insights into surface and interface mechanisms such as reconstruction or relaxation, mechanical distortion, charge transfer and chemical interaction. Measuring electric potential distributions is also relevant for characterizing organic and inorganic semiconductor devices, which feature electric dipole layers at the relevant interfaces. The probe-to-surface distance in SQDM ranges from 2 nm to 10 nm and therefore allows imaging on non-planar surfaces or, e.g., of biomolecules with a distinct 3D structure. Related imaging techniques are Kelvin Probe Force Microscopy (KPFM) and Electrostatic Force Microscopy (EFM). Working principle. In SQDM, the relation between the potential at the QD and the surface potential (the quantity of interest) is described by a boundary value problem of electrostatics. The boundary formula_0 is given by the surfaces of the sample and probe, assumed to be connected at infinity. Then, the potential formula_1 of a point-like QD at formula_2 can be expressed using the Green's function formalism as a sum over volume and surface integrals, where formula_3 denotes the volume enclosed by formula_0 and formula_4 is the surface normal. formula_5 In this expression, formula_6 depends on the charge density formula_7 inside formula_3 and on the potential formula_8 on formula_0 weighted by the Green's function formula_9 where formula_10 satisfies the Laplace equation. By specifying formula_10 and thus defining the boundary conditions, these equations can be used to obtain the relation between formula_6 and the surface potential formula_11 for more specific measurement situations. The combination of a conductive probe and a conductive surface, a situation characterized by Dirichlet boundary conditions, has been described in detail. Conceptually, the relation between formula_12 and formula_13 links data in the imaging plane, obtained by reading out the QD potential, to data on the object surface, the surface potential. If the sample surface is approximated as locally flat and the relation between formula_12 and formula_13 is therefore translationally invariant, the recovery of the object surface information from the imaging plane information is a deconvolution with a point spread function defined by the boundary value problem. In the specific case of a conductive boundary, the mutual screening of surface potentials by tip and surface leads to an exponential drop-off of the point spread function. This causes the exceptionally high lateral resolution of SQDM at large tip-surface separations compared to, for example, KPFM. Practical implementation. Two methods have been reported to obtain the imaging plane information, i.e., the variations in the QD potential formula_12 as the probe is scanned over the surface. In the compensation technique, formula_6 is held at a constant value formula_14. The influence of the laterally varying surface potentials on formula_6 is actively compensated by continuously adjusting the global sample potential via an external bias voltage formula_15. 
formula_14 is chosen such that it matches a discrete transition of the QD charge state, and the corresponding change in probe-sample force is used in non-contact atomic force microscopy to verify a correct compensation. In an alternative method, the vertical component of the electric field at the QD position is mapped by measuring the energy shift of a specific optical transition of the QD, which occurs due to the Stark effect. This method requires an optical setup in addition to the SPM setup. The object plane image formula_13 can be interpreted as a variation of the work function, the surface potential, or the surface dipole density. The equivalence of these quantities is given by the Helmholtz equation. Within the surface dipole density interpretation, surface dipoles of individual nanostructures can be obtained by integration over a sufficiently large surface area. Topographic information from SQDM. In the compensation technique, the influence of the global sample potential formula_15 on formula_6 depends on the shape of the sample surface in a way that is defined by the corresponding boundary value problem. On a non-planar surface, changes in formula_6 can therefore not be uniquely assigned to either a change in surface potential or a change in surface topography formula_16 if only a single charge state transition is tracked. For example, a protrusion in the surface affects the QD potential since the gating by formula_15 works more efficiently if the QD is placed above the protrusion. If two transitions are used in the compensation technique, the contributions of surface topography formula_16 and potential formula_17 can be disentangled and both quantities can be obtained unambiguously. The topographic information obtained via the compensation technique is an effective "dielectric topography" of metallic nature, which is defined by the geometric topography and the dielectric properties of the sample surface or of a nanostructure.
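The deconvolution picture described above can be illustrated with a one-dimensional toy model. In the following Python sketch (purely illustrative and added here; the exponentially decaying point spread function, its decay length and the test profile are assumptions, not the actual SQDM transfer function), an "object plane" surface-potential profile is blurred by convolution with a normalized exponential kernel to produce "imaging plane" data, which are then recovered by regularized Fourier-space deconvolution.
```python
import numpy as np

# 1D toy model of the object-plane / imaging-plane relation.
n = 512
x = np.linspace(-50.0, 50.0, n)       # lateral coordinate, nm
dx = x[1] - x[0]

# Assumed object-plane signal: two localized surface-potential features.
v_surface = np.exp(-((x - 8) / 3) ** 2) - 0.6 * np.exp(-((x + 12) / 4) ** 2)

# Assumed point spread function with an exponential drop-off (decay length 5 nm).
psf = np.exp(-np.abs(x) / 5.0)
psf /= psf.sum() * dx                 # normalize to unit area

# Imaging-plane signal = convolution of the object signal with the PSF.
H = np.fft.fft(np.fft.ifftshift(psf)) * dx
v_qd = np.real(np.fft.ifft(np.fft.fft(v_surface) * H))

# Recover the object-plane signal by Fourier-space deconvolution with a small
# regularization constant to avoid dividing by near-zero spectral values.
eps = 1e-6
v_rec = np.real(np.fft.ifft(np.fft.fft(v_qd) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("max reconstruction error:", float(np.max(np.abs(v_rec - v_surface))))
```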
[ { "math_id": 0, "text": "\\mathcal{S}" }, { "math_id": 1, "text": "\\Phi_\\text{QD} = \\Phi(\\mathbf{r})" }, { "math_id": 2, "text": "\\mathbf{r}" }, { "math_id": 3, "text": "\\mathcal{V}" }, { "math_id": 4, "text": "\\mathbf{n}'" }, { "math_id": 5, "text": "\\Phi_\\text{QD} = \\Phi(\\mathbf{r})=\\iiint\\limits_\\mathcal{V} G(\\mathbf{r}, \\mathbf{r}') \\frac{\\rho(\\mathbf{r}')}{e}d^3\\mathbf{r}'+ \\frac{\\epsilon_0}{e}\\oint\\limits_\\mathcal{S} \\bigg[G(\\mathbf{r}, \\mathbf{r}')\\frac{\\partial\\Phi(\\mathbf{r}')}{\\partial \\mathbf{n}'}-\\Phi(\\mathbf{r}')\\frac{\\partial G(\\mathbf{r}, \\mathbf{r}') }{\\partial \\mathbf{n}'}\\bigg]d^2\\mathbf{r}'." }, { "math_id": 6, "text": "\\Phi_\\text{QD}" }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "\\Phi" }, { "math_id": 9, "text": "G(\\mathbf{r},\\mathbf{r}')=\\frac{e}{4\\pi\\epsilon_0|\\mathbf{r}-\\mathbf{r}'|} + F(\\mathbf{r},\\mathbf{r}')," }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "\\Phi_\\text{s}(\\mathbf{r}'), \\quad \\mathbf{r}' \\in \\mathcal{S}" }, { "math_id": 12, "text": "\\Phi_\\text{QD}(\\mathbf{r})" }, { "math_id": 13, "text": "\\Phi_\\text{s}(\\mathbf{r}')" }, { "math_id": 14, "text": "\\Phi_\\text{QD}^0" }, { "math_id": 15, "text": "V_\\text{b}" }, { "math_id": 16, "text": "t_\\text{d}" }, { "math_id": 17, "text": "\\Phi_\\text{s}" } ]
https://en.wikipedia.org/wiki?curid=62383372
623870
Raman scattering
Inelastic scattering of photons by matter In physics, Raman scattering or the Raman effect () is the inelastic scattering of photons by matter, meaning that there is both an exchange of energy and a change in the light's direction. Typically this effect involves vibrational energy being gained by a molecule as incident photons from a visible laser are shifted to lower energy. This is called "normal Stokes-Raman scattering". Light has a certain probability of being scattered by a material. When photons are scattered, most of them are elastically scattered (Rayleigh scattering), such that the scattered photons have the same energy (frequency, wavelength, and therefore color) as the incident photons, but different direction. Rayleigh scattering usually has an intensity in the range 0.1% to 0.01% relative to that of a radiation source. An even smaller fraction of the scattered photons (about 1 in a million) can be scattered "inelastically", with the scattered photons having an energy different (usually lower) from those of the incident photons—these are Raman scattered photons. Because of conservation of energy, the material either gains or loses energy in the process. The effect is exploited by chemists and physicists to gain information about materials for a variety of purposes by performing various forms of Raman spectroscopy. Many other variants of Raman spectroscopy allow rotational energy to be examined, if gas samples are used, and electronic energy levels may be examined if an X-ray source is used, in addition to other possibilities. More complex techniques involving pulsed lasers, multiple laser beams and so on are known. The Raman effect is named after Indian scientist C. V. Raman, who discovered it in 1928 with assistance from his student K. S. Krishnan. Raman was awarded the 1930 Nobel Prize in Physics for his discovery of Raman scattering. The effect had been predicted theoretically by Adolf Smekal in 1923. History. The elastic light scattering phenomenon called Rayleigh scattering, in which light retains its energy, was described in the 19th century. The intensity of Rayleigh scattering is about 10^-3 to 10^-4 compared to the intensity of the exciting source. In 1908, another form of elastic scattering, called Mie scattering, was discovered. The inelastic scattering of light was predicted by Adolf Smekal in 1923 and in older German-language literature it has been referred to as the Smekal-Raman-Effekt. In 1922, Indian physicist C. V. Raman published his work on the "Molecular Diffraction of Light", the first of a series of investigations with his collaborators that ultimately led to his discovery (on 16 February 1928) of the radiation effect that bears his name. The Raman effect was first reported by Raman and his coworker K. S. Krishnan, and independently by Grigory Landsberg and Leonid Mandelstam, in Moscow on 21 February 1928 (5 days after Raman and Krishnan). In the former Soviet Union, Raman's contribution was always disputed; thus in Russian scientific literature the effect is usually referred to as "combination scattering" or "combinatory scattering". Raman received the Nobel Prize in 1930 for his work on the scattering of light. In 1998 the Raman effect was designated a National Historic Chemical Landmark by the American Chemical Society in recognition of its significance as a tool for analyzing the composition of liquids, gases, and solids. Instrumentation. Modern Raman spectroscopy nearly always involves the use of lasers as an exciting light source. 
Because lasers were not available until more than three decades after the discovery of the effect, Raman and Krishnan used a mercury lamp and photographic plates to record spectra. Early spectra took hours or even days to acquire due to weak light sources, poor sensitivity of the detectors and the weak Raman scattering cross-sections of most materials. The most common modern detectors are charge-coupled devices (CCDs). Photodiode arrays and photomultiplier tubes were common prior to the adoption of CCDs. Theory. The following focuses on the theory of normal (non-resonant, spontaneous, vibrational) Raman scattering of light by discrete molecules. X-ray Raman spectroscopy is conceptually similar but involves excitation of electronic, rather than vibrational, energy levels. Molecular vibrations. Raman scattering generally gives information about vibrations within a molecule. In the case of gases, information about rotational energy can also be gleaned. For solids, phonon modes may also be observed. The basics of infrared absorption regarding molecular vibrations apply to Raman scattering although the selection rules are different. Degrees of freedom. For any given molecule, there are a total of 3N degrees of freedom, where N is the number of atoms. This number arises from the ability of each atom in a molecule to move in three dimensions. When dealing with molecules, it is more common to consider the movement of the molecule as a whole. Consequently, the 3N degrees of freedom are partitioned into molecular translational, rotational, and vibrational motion. Three of the degrees of freedom correspond to translational motion of the molecule as a whole (along each of the three spatial dimensions). Similarly, three degrees of freedom correspond to rotations of the molecule about the formula_0, formula_1, and formula_2-axes. Linear molecules only have two rotations because rotations along the bond axis do not change the positions of the atoms in the molecule. The remaining degrees of freedom correspond to molecular vibrational modes. These modes include stretching and bending motions of the chemical bonds of the molecule. For a linear molecule, the number of vibrational modes is 3N-5, whereas for a non-linear molecule the number of vibrational modes is 3N-6. Vibrational energy. Molecular vibrational energy is known to be quantized and can be modeled using the quantum harmonic oscillator (QHO) approximation or a Dunham expansion when anharmonicity is important. The vibrational energy levels according to the QHO are formula_3, where "n" is a quantum number. Since the selection rules for Raman and infrared absorption generally dictate that only fundamental vibrations are observed, infrared excitation or Stokes Raman excitation results in an energy change of formula_4. Vibrational energies lie in the range of approximately 5 to 3500 cm^-1. The fraction of molecules occupying a given vibrational mode at a given temperature follows a Boltzmann distribution. A molecule can be excited to a higher vibrational mode through the direct absorption of a photon of the appropriate energy, which falls in the terahertz or infrared range. This forms the basis of infrared spectroscopy. Alternatively, the same vibrational excitation can be produced by an inelastic scattering process. 
This is called Stokes Raman scattering, by analogy with the Stokes shift in fluorescence discovered by George Stokes in 1852, with light emission at longer wavelength (now known to correspond to lower energy) than the absorbed incident light. Conceptually similar effects can be caused by neutrons or electrons rather than light. An increase in photon energy which leaves the molecule in a lower vibrational energy state is called anti-Stokes scattering. Raman scattering. Raman scattering is conceptualized as involving a virtual electronic energy level which corresponds to the energy of the exciting laser photons. Absorption of a photon excites the molecule to the imaginary state and re-emission leads to Raman or Rayleigh scattering. In all three cases the final state has the same electronic energy as the initial state but is higher in vibrational energy in the case of Stokes Raman scattering, lower in the case of anti-Stokes Raman scattering, or the same in the case of Rayleigh scattering. Normally this is thought of in terms of wavenumbers, where formula_5 is the wavenumber of the laser and formula_6 is the wavenumber of the vibrational transition. Thus Stokes scattering gives a wavenumber of formula_7 while formula_8 is given for anti-Stokes. When the exciting laser energy corresponds to an actual electronic excitation of the molecule then the resonance Raman effect occurs. A model based on classical physics is able to account for Raman scattering and predicts an increase in the intensity which scales with the fourth power of the light frequency. Light scattering by a molecule is associated with oscillations of an induced electric dipole. The oscillating electric field component of electromagnetic radiation may bring about an induced dipole in a molecule which follows the alternating electric field which is modulated by the molecular vibrations. Oscillations at the external field frequency are therefore observed along with beat frequencies resulting from the external field and normal mode vibrations. The spectrum of the scattered photons is termed the Raman spectrum. It shows the intensity of the scattered light as a function of its frequency difference "Δν" relative to the incident photons, more commonly called a Raman shift. The locations of corresponding Stokes and anti-Stokes peaks form a symmetric pattern around the Rayleigh ("Δν=0") line. The frequency shifts are symmetric because they correspond to the energy difference between the same upper and lower resonant states. The intensities of the pairs of features will typically differ, though. They depend on the populations of the initial states of the material, which in turn depend on the temperature. In thermodynamic equilibrium, the lower state will be more populated than the upper state. Therefore, the rate of transitions from the more populated lower state to the upper state (Stokes transitions) will be higher than in the opposite direction (anti-Stokes transitions). Correspondingly, Stokes scattering peaks are stronger than anti-Stokes scattering peaks. Their ratio depends on the temperature, and can therefore be exploited to measure it: formula_9 Selection rules. In contrast to IR spectroscopy, where there is a requirement for a change in dipole moment for vibrational excitation to take place, Raman scattering requires a change in polarizability. A Raman transition from one state to another is allowed only if the molecular polarizability of those states is different. 
For a vibration, this means that the derivative of the polarizability with respect to the normal coordinate associated to the vibration is non-zero: formula_10. In general, a normal mode is Raman active if it transforms with the same symmetry as the quadratic forms formula_11, which can be verified from the character table of the molecule's point group. As with IR spectroscopy, only fundamental excitations (formula_12) are allowed according to the QHO. There are, however, many cases where overtones are observed. The rule of mutual exclusion, which states that vibrational modes cannot be both IR and Raman active, applies to certain molecules. The specific selection rules state that the allowed rotational transitions are formula_13, where formula_14 is the rotational state. This generally is only relevant to molecules in the gas phase where the Raman linewidths are small enough for rotational transitions to be resolved. A selection rule relevant only to ordered solid materials states that only phonons with zero phase angle can be observed by IR and Raman, except when phonon confinement is manifest. Symmetry and polarization. Monitoring the polarization of the scattered photons is useful for understanding the connections between molecular symmetry and Raman activity, which may assist in assigning peaks in Raman spectra. Light polarized in a single direction only gives access to some Raman–active modes, but rotating the polarization gives access to other modes. Each mode is separated according to its symmetry. The symmetry of a vibrational mode is deduced from the depolarization ratio ρ, which is the ratio of the Raman scattering with polarization orthogonal to the incident laser and the Raman scattering with the same polarization as the incident laser: formula_15 Here formula_16 is the intensity of Raman scattering when the analyzer is rotated 90 degrees with respect to the incident light's polarization axis, and formula_17 the intensity of Raman scattering when the analyzer is aligned with the polarization of the incident laser. When polarized light interacts with a molecule, it distorts the molecule, which induces an equal and opposite effect in the plane-wave, causing it to be rotated by the difference between the orientation of the molecule and the angle of polarization of the light wave. If formula_18, then the vibrations at that frequency are "depolarized", meaning they are not totally symmetric. Stimulated Raman scattering and Raman amplification. The Raman-scattering process as described above takes place spontaneously; i.e., in random time intervals, one of the many incoming photons is scattered by the material. This process is thus called "spontaneous Raman scattering". On the other hand, "stimulated Raman scattering" can take place when some Stokes photons have previously been generated by spontaneous Raman scattering (and somehow forced to remain in the material), or when deliberately injecting Stokes photons ("signal light") together with the original light ("pump light"). In that case, the total Raman-scattering rate is increased beyond that of spontaneous Raman scattering: pump photons are converted more rapidly into additional Stokes photons. The more Stokes photons that are already present, the faster more of them are added. Effectively, this "amplifies" the Stokes light in the presence of the pump light, which is exploited in Raman amplifiers and Raman lasers. Stimulated Raman scattering is a nonlinear optical effect. Requirement for space-coherence. 
Suppose that the distance between two points A and B of an exciting beam is "x". Generally, as the exciting frequency is not equal to the scattered Raman frequency, the corresponding relative wavelengths λ and λ' are not equal. Thus, a phase-shift Θ = 2π"x"(1/λ − 1/λ') appears. For Θ = "π", the scattered amplitudes are opposite, so that the Raman scattered beam remains weak. Several tricks may be used to get a larger amplitude. In labs, femtosecond laser pulses must be used because the ISRS (impulsive stimulated Raman scattering) becomes very weak if the pulses are too long. Thus ISRS cannot be observed using nanosecond pulses, which make ordinary time-incoherent light. Inverse Raman effect. The inverse Raman effect is a form of Raman scattering first noted by W. J. Jones and Boris P. Stoicheff. In some circumstances, Stokes scattering can exceed anti-Stokes scattering; in these cases the continuum (on leaving the material) is observed to have an absorption line (a dip in intensity) at ν"L"+ν"M". This phenomenon is referred to as the "inverse Raman effect"; the application of the phenomenon is referred to as "inverse Raman spectroscopy", and a record of the continuum is referred to as an "inverse Raman spectrum". In the original description of the inverse Raman effect, the authors discuss both absorption from a continuum of higher frequencies and absorption from a continuum of lower frequencies. They note that absorption from a continuum of lower frequencies will not be observed if the Raman frequency of the material is vibrational in origin and if the material is in thermal equilibrium. Supercontinuum generation. For high-intensity continuous wave (CW) lasers, stimulated Raman scattering can be used to produce a broad bandwidth supercontinuum. This process can also be seen as a special case of four-wave mixing, in which the frequencies of the two incident photons are equal and the emitted spectra are found in two bands separated from the incident light by the phonon energies. The initial Raman spectrum is built up with spontaneous emission and is amplified later on. At high pumping levels in long fibers, higher-order Raman spectra can be generated by using the Raman spectrum as a new starting point, thereby building a chain of new spectra with decreasing amplitude. The disadvantage of intrinsic noise due to the initial spontaneous process can be overcome by seeding a spectrum at the beginning, or even using a feedback loop as in a resonator to stabilize the process. Since this technology easily fits into the fast-evolving fiber laser field and there is demand for transversally coherent high-intensity light sources (e.g., broadband telecommunication, imaging applications), Raman amplification and spectrum generation might be widely used in the near future. Applications. Raman spectroscopy employs the Raman effect for the analysis of substances. The spectrum of the Raman-scattered light depends on the molecular constituents present and their state, allowing the spectrum to be used for material identification and analysis. Raman spectroscopy is used to analyze a wide range of materials, including gases, liquids, and solids. Highly complex materials such as biological organisms and human tissue can also be analyzed by Raman spectroscopy. For solid materials, Raman scattering is used as a tool to detect high-frequency phonon and magnon excitations. Raman lidar is used in atmospheric physics to measure the atmospheric extinction coefficient and the water vapour vertical distribution. 
Stimulated Raman transitions are also widely used for manipulating a trapped ion's energy levels, and thus basis qubit states. Raman spectroscopy can be used to determine the force constant and bond length for molecules that do not have an infrared absorption spectrum. Raman amplification is used in optical amplifiers. The Raman effect is also involved in producing the appearance of the blue sky (see Rayleigh Scattering: 'Rayleigh scattering of molecular nitrogen and oxygen in the atmosphere includes elastic scattering as well as the inelastic contribution from rotational Raman scattering in air'). Raman spectroscopy has been used to chemically image small molecules, such as nucleic acids, in biological systems using a vibrational tag. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "E_n = h \\left( n + {1 \\over 2 } \\right)\\nu=h\\left( n + {1 \\over 2 } \\right) {1\\over {2 \\pi}} \\sqrt{k \\over m} \\!" }, { "math_id": 4, "text": "E=h \\nu={h\\over {2 \\pi}} \\sqrt{k \\over m}" }, { "math_id": 5, "text": "\\tilde{\\nu}_0" }, { "math_id": 6, "text": "\\tilde{\\nu}_M" }, { "math_id": 7, "text": "\\tilde{\\nu}_0 - \\tilde{\\nu}_M" }, { "math_id": 8, "text": "\\tilde{\\nu}_0 + \\tilde{\\nu}_M" }, { "math_id": 9, "text": "\\frac{I_\\text{Stokes}}{I_\\text{anti-Stokes}} = \\frac{(\\tilde{\\nu}_0 - \\tilde{\\nu}_M)^4}{(\\tilde{\\nu}_0 + \\tilde{\\nu}_M)^4}\\exp \n\\left(\\frac{hc \\,\\tilde{\\nu}_M}{k_BT}\\right)" }, { "math_id": 10, "text": "\\frac{\\partial \\alpha}{\\partial Q} \\ne 0" }, { "math_id": 11, "text": "(x^2, y^2, z^2, xy, xz, yz)" }, { "math_id": 12, "text": "\\Delta\\nu=\\pm1" }, { "math_id": 13, "text": "\\Delta J=\\pm2" }, { "math_id": 14, "text": "J" }, { "math_id": 15, "text": "\\rho = \\frac{I_r}{I_u}" }, { "math_id": 16, "text": "I_r" }, { "math_id": 17, "text": "I_u" }, { "math_id": 18, "text": "\\rho \\geq \\frac{3}{4}" } ]
https://en.wikipedia.org/wiki?curid=623870
62389
Langmuir probe
Device used to measure plasma A Langmuir probe is a device used to determine the electron temperature, electron density, and electric potential of a plasma. It works by inserting one or more electrodes into a plasma, with a constant or time-varying electric potential between the various electrodes or between them and the surrounding vessel. The measured currents and potentials in this system allow the determination of the physical properties of the plasma. "I-V" characteristic of the Debye sheath. The beginning of Langmuir probe theory is the "I–V" characteristic of the Debye sheath, that is, the current density flowing to a surface in a plasma as a function of the voltage drop across the sheath. The analysis presented here indicates how the electron temperature, electron density, and plasma potential can be derived from the "I–V" characteristic. In some situations a more detailed analysis can yield information on the ion density (formula_0), the ion temperature formula_1, or the electron energy distribution function (EEDF) or formula_2. Ion saturation current density. Consider first a surface biased to a large negative voltage. If the voltage is large enough, essentially all electrons (and any negative ions) will be repelled. The ion velocity will satisfy the Bohm sheath criterion, which is, strictly speaking, an inequality, but which is usually marginally fulfilled. The Bohm criterion in its marginal form says that the ion velocity at the sheath edge is simply the sound speed given by formula_3. The ion temperature term is often neglected, which is justified if the ions are cold. "Z" is the (average) charge state of the ions, and formula_4 is the adiabatic coefficient for the ions. The proper choice of formula_4 is a matter of some contention. Most analyses use formula_5, corresponding to isothermal ions, but some kinetic theory suggests that formula_6. For formula_7 and formula_8, using the larger value results in the conclusion that the density is formula_9 times smaller. Uncertainties of this magnitude arise several places in the analysis of Langmuir probe data and are very difficult to resolve. The charge density of the ions depends on the charge state "Z", but quasineutrality allows one to write it simply in terms of the electron density as formula_10, where formula_11 is the charge of an electron and formula_12 is the number density of electrons. Using these results we have the current density to the surface due to the ions. The current density at large negative voltages is due solely to the ions and, except for possible sheath expansion effects, does not depend on the bias voltage, so it is referred to as the ion saturation current density and is given by formula_13 where formula_14 is as defined above. The plasma parameters, in particular, the density, are those at the sheath edge. Exponential electron current. As the voltage of the Debye sheath is reduced, the more energetic electrons are able to overcome the potential barrier of the electrostatic sheath. We can model the electrons at the sheath edge with a Maxwell–Boltzmann distribution, i.e., formula_15, except that the high energy tail moving away from the surface is missing, because only the lower energy electrons moving toward the surface are reflected. The higher energy electrons overcome the sheath potential and are absorbed. The mean velocity of the electrons which are able to overcome the voltage of the sheath is formula_16, where the cut-off velocity for the upper integral is formula_17. 
formula_18 is the voltage across the Debye sheath, that is, the potential at the sheath edge minus the potential of the surface. For a large voltage compared to the electron temperature, the result is formula_19. With this expression, we can write the electron contribution to the current to the probe in terms of the ion saturation current as formula_20, valid as long as the electron current is not more than two or three times the ion current. Floating potential. The total current, of course, is the sum of the ion and electron currents: formula_21. We are using the convention that current "from" the surface into the plasma is positive. An interesting and practical question is the potential of a surface to which no net current flows. It is easily seen from the above equation that formula_22. If we introduce the ion reduced mass formula_23, we can write formula_24 Since the floating potential is the experimentally accessible quantity, the current (below electron saturation) is usually written as formula_25. Electron saturation current. When the electrode potential is equal to or greater than the plasma potential, then there is no longer a sheath to reflect electrons, and the electron current saturates. Using the Boltzmann expression for the mean electron velocity given above with formula_26 and setting the ion current to zero, the electron saturation current density would be formula_27 Although this is the expression usually given in theoretical discussions of Langmuir probes, the derivation is not rigorous and the experimental basis is weak. The theory of double layers typically employs an expression analogous to the Bohm criterion, but with the roles of electrons and ions reversed, namely formula_28 where the numerical value was found by taking "T""i"="T""e" and γ"i"=γ"e". In practice, it is often difficult and usually considered uninformative to measure the electron saturation current experimentally. When it is measured, it is found to be highly variable and generally much lower (a factor of three or more) than the value given above. Often a clear saturation is not seen at all. Understanding electron saturation is one of the most important outstanding problems of Langmuir probe theory. Effects of the bulk plasma. The Debye sheath theory explains the basic behavior of Langmuir probes, but is not complete. Merely inserting an object like a probe into a plasma changes the density, temperature, and potential at the sheath edge and perhaps everywhere. Changing the voltage on the probe will also, in general, change various plasma parameters. Such effects are less well understood than sheath physics, but they can at least in some cases be roughly accounted for. Pre-sheath. The Bohm criterion requires the ions to enter the Debye sheath at the sound speed. The potential drop that accelerates them to this speed is called the pre-sheath. It has a spatial scale that depends on the physics of the ion source but which is large compared to the Debye length and often of the order of the plasma dimensions. The magnitude of the potential drop is equal to (at least) formula_29 The acceleration of the ions also entails a decrease in the density, usually by a factor of about 2 depending on the details.
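To attach some numbers to the expressions in the preceding sections, the sketch below evaluates the Bohm speed, the ion saturation current density and the floating-potential drop for a singly ionized hydrogen plasma with cold ions. It is purely illustrative: the electron temperature and density are assumed example values, not measurements, and the density should be understood as the sheath-edge value.

```python
import math

# Illustrative sketch with made-up plasma parameters (not from the article):
# Bohm speed, ion saturation current density and floating-potential drop
# for a singly ionized hydrogen plasma with cold ions (T_i << T_e, Z = 1).
e = 1.602e-19      # elementary charge, C
me = 9.109e-31     # electron mass, kg
mi = 1.673e-27     # proton mass, kg

Te_eV = 5.0        # assumed electron temperature, eV
ne = 1e18          # assumed (sheath-edge) electron density, m^-3

Te_J = Te_eV * e                      # k_B T_e in joules
cs = math.sqrt(Te_J / mi)             # Bohm (sound) speed with cold ions, m/s
j_i_sat = e * ne * cs                 # ion saturation current density, A/m^2
dV_float = (Te_eV / 2) * math.log(mi / (2 * math.pi * me))  # sheath drop at floating, V

print(f"c_s       ~ {cs:.2e} m/s")          # about 2.2e4 m/s
print(f"j_i,sat   ~ {j_i_sat:.2e} A/m^2")   # about 3.5e3 A/m^2
print(f"V_fl drop ~ {dV_float:.1f} V")      # about 2.8 * T_e[eV] ~ 14 V below plasma potential
```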
Resistivity. Collisions between ions and electrons will also affect the "I-V" characteristic of a Langmuir probe. When an electrode is biased to any voltage other than the floating potential, the current it draws must pass through the plasma, which has a finite resistivity. The resistivity and current path can be calculated with relative ease in an unmagnetized plasma. In a magnetized plasma, the problem is much more difficult. In either case, the effect is to add a voltage drop proportional to the current drawn, which shears the characteristic. The deviation from an exponential function is usually not possible to observe directly, so that the flattening of the characteristic is usually misinterpreted as a larger plasma temperature. Looking at it from the other side, any measured "I-V" characteristic can be interpreted as a hot plasma, where most of the voltage is dropped in the Debye sheath, or as a cold plasma, where most of the voltage is dropped in the bulk plasma. Without quantitative modeling of the bulk resistivity, Langmuir probes can only give an upper limit on the electron temperature. Sheath expansion. It is not enough to know the current "density" as a function of bias voltage since it is the "absolute" current which is measured. In an unmagnetized plasma, the current-collecting area is usually taken to be the exposed surface area of the electrode. In a magnetized plasma, the projected area is taken, that is, the area of the electrode as viewed along the magnetic field. If the electrode is not shadowed by a wall or other nearby object, then the area must be doubled to account for current coming along the field from both sides. If the electrode dimensions are not small in comparison to the Debye length, then the size of the electrode is effectively increased in all directions by the sheath thickness. In a magnetized plasma, the electrode is sometimes assumed to be increased in a similar way by the ion Larmor radius. The finite Larmor radius allows some ions to reach the electrode that would have otherwise gone past it. The details of the effect have not been calculated in a fully self-consistent way. If we refer to the probe area including these effects as formula_30 (which may be a function of the bias voltage), make the usual simplifying assumptions (in particular, that the electron density at the sheath edge is formula_31), and ignore the effects just described, then the "I-V" characteristic becomes formula_32, where formula_33. Magnetized plasmas. The theory of Langmuir probes is much more complex when the plasma is magnetized. The simplest extension of the unmagnetized case is simply to use the projected area rather than the surface area of the electrode. For a long cylinder far from other surfaces, this reduces the effective area by a factor of π/2 = 1.57. As mentioned before, it might be necessary to increase the radius by about the thermal ion Larmor radius, but not above the effective area for the unmagnetized case. The use of the projected area seems to be closely tied with the existence of a magnetic sheath. Its scale is the ion Larmor radius at the sound speed, which is normally between the scales of the Debye sheath and the pre-sheath. The Bohm criterion for ions entering the magnetic sheath applies to the motion along the field, while at the entrance to the Debye sheath it applies to the motion normal to the surface. This results in a reduction of the density by the sine of the angle between the field and the surface. The associated increase in the Debye length must be taken into account when considering ion non-saturation due to sheath effects. Especially interesting and difficult to understand is the role of cross-field currents. Naively, one would expect the current to be parallel to the magnetic field along a flux tube.
In many geometries, this flux tube will end at a surface in a distant part of the device, and this spot should itself exhibit an "I-V" characteristic. The net result would be the measurement of a double-probe characteristic; in other words, electron saturation current equal to the ion saturation current. When this picture is considered in detail, it is seen that the flux tube must charge up and the surrounding plasma must spin around it. The current into or out of the flux tube must be associated with a force that slows down this spinning. Candidate forces are viscosity, friction with neutrals, and inertial forces associated with plasma flows, either steady or fluctuating. It is not known which force is strongest in practice, and in fact it is generally difficult to find any force that is powerful enough to explain the characteristics actually measured. It is also likely that the magnetic field plays a decisive role in determining the level of electron saturation, but no quantitative theory is as yet available. Electrode configurations. Once one has a theory of the "I-V" characteristic of an electrode, one can proceed to measure it and then fit the data with the theoretical curve to extract the plasma parameters. The straightforward way to do this is to sweep the voltage on a single electrode, but, for a number of reasons, configurations using multiple electrodes or exploring only a part of the characteristic are used in practice. Single probe. The most straightforward way to measure the "I-V" characteristic of a plasma is with a single probe, consisting of one electrode biased with a voltage ramp relative to the vessel. The advantages are simplicity of the electrode and redundancy of information, i.e. one can check whether the "I-V" characteristic has the expected form. Potentially additional information can be extracted from details of the characteristic. The disadvantages are more complex biasing and measurement electronics and a poor time resolution. If fluctuations are present (as they always are) and the sweep is slower than the fluctuation frequency (as it usually is), then the "I-V" is the "average" current as a function of voltage, which may result in systematic errors if it is analyzed as though it were an instantaneous "I-V". The ideal situation is to sweep the voltage at a frequency above the fluctuation frequency but still below the ion cyclotron frequency. This, however, requires sophisticated electronics and a great deal of care. Double probe. An electrode can be biased relative to a second electrode, rather than to the ground. The theory is similar to that of a single probe, except that the current is limited to the ion saturation current for both positive and negative voltages. In particular, if formula_34 is the voltage applied between two identical electrodes, the current is given by; formula_35, which can be rewritten using formula_36 as a hyperbolic tangent: formula_37. One advantage of the double probe is that neither electrode is ever very far above floating, so the theoretical uncertainties at large electron currents are avoided. If it is desired to sample more of the exponential electron portion of the characteristic, an asymmetric double probe may be used, with one electrode larger than the other. If the ratio of the collection areas is larger than the square root of the ion to electron mass ratio, then this arrangement is equivalent to the single tip probe. 
If the ratio of collection areas is not that big, then the characteristic will be in-between the symmetric double tip configuration and the single-tip configuration. If formula_38 is the area of the larger tip then: formula_39 Another advantage is that there is no reference to the vessel, so it is to some extent immune to the disturbances in a radio frequency plasma. On the other hand, it shares the limitations of a single probe concerning complicated electronics and poor time resolution. In addition, the second electrode not only complicates the system, but it makes it susceptible to disturbance by gradients in the plasma. Triple probe. An elegant electrode configuration is the triple probe, consisting of two electrodes biased with a fixed voltage and a third which is floating. The bias voltage is chosen to be a few times the electron temperature so that the negative electrode draws the ion saturation current, which, like the floating potential, is directly measured. A common rule of thumb for this voltage bias is 3/e times the expected electron temperature. Because the biased tip configuration is floating, the positive probe can draw at most an electron current only equal in magnitude and opposite in polarity to the ion saturation current drawn by the negative probe, given by : formula_40 and as before the floating tip draws effectively no current: formula_41. Assuming that: 1.) The electron energy distribution in the plasma is Maxwellian, 2.) The mean free path of the electrons is greater than the ion sheath about the tips and larger than the probe radius, and 3.) the probe sheath sizes are much smaller than the probe separation, then the current to any probe can be considered composed of two parts – the high energy tail of the Maxwellian electron distribution, and the ion saturation current: formula_42 where the current "Ie" is thermal current. Specifically, formula_43, where "S" is surface area, "Je" is electron current density, and "ne" is electron density. Assuming that the ion and electron saturation current is the same for each probe, then the formulas for current to each of the probe tips take the form formula_44 formula_45 formula_46. It is then simple to show formula_47 but the relations from above specifying that "I+=-I−" and "Ifl"=0 give formula_48, a transcendental equation in terms of applied and measured voltages and the unknown "Te" that in the limit "qeVBias = qe(V+-V−) » k Te", becomes formula_49. That is, the voltage difference between the positive and floating electrodes is proportional to the electron temperature. (This was especially important in the sixties and seventies before sophisticated data processing became widely available.) More sophisticated analysis of triple probe data can take into account such factors as incomplete saturation, non-saturation, unequal areas. Triple probes have the advantage of simple biasing electronics (no sweeping required), simple data analysis, excellent time resolution, and insensitivity to potential fluctuations (whether imposed by an rf source or inherent fluctuations). Like double probes, they are sensitive to gradients in plasma parameters. Special arrangements. Arrangements with four (tetra probe) or five (penta probe) have sometimes been used, but the advantage over triple probes has never been entirely convincing. The spacing between probes must be larger than the Debye length of the plasma to prevent an overlapping Debye sheath. 
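Returning to the triple-probe relation derived above, in the limit of a bias much larger than the electron temperature the voltage difference between the positive and floating tips directly gives the electron temperature. The snippet below is a minimal illustrative sketch; the two voltage readings are hypothetical numbers, not data from any instrument.

```python
import math

# Minimal sketch of the large-bias triple-probe result derived above:
# (V_plus - V_float) = (k_B T_e / q_e) * ln 2, so T_e in eV is the measured
# voltage difference divided by ln 2.
def triple_probe_Te_eV(V_plus: float, V_float: float) -> float:
    """Electron temperature in eV from the positive-tip and floating-tip potentials (volts)."""
    return (V_plus - V_float) / math.log(2)

# Hypothetical reading: positive tip at +9.0 V, floating tip at +6.2 V
print(f"T_e ~ {triple_probe_Te_eV(9.0, 6.2):.1f} eV")   # about 4.0 eV
```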
A pin-plate probe consists of a small electrode directly in front of a large electrode, the idea being that the voltage sweep of the large probe can perturb the plasma potential at the sheath edge and thereby aggravate the difficulty of interpreting the "I-V" characteristic. The floating potential of the small electrode can be used to correct for changes in potential at the sheath edge of the large probe. Experimental results from this arrangement look promising, but experimental complexity and residual difficulties in the interpretation have prevented this configuration from becoming standard. Various geometries have been proposed for use as ion temperature probes, for example, two cylindrical tips that rotate past each other in a magnetized plasma. Since shadowing effects depend on the ion Larmor radius, the results can be interpreted in terms of ion temperature. The ion temperature is an important quantity that is very difficult to measure. Unfortunately, it is also very difficult to analyze such probes in a fully self-consistent way. Emissive probes use an electrode heated either electrically or by the exposure to the plasma. When the electrode is biased more positive than the plasma potential, the emitted electrons are pulled back to the surface so the "I"-"V" characteristic is hardly changed. As soon as the electrode is biased negative with respect to the plasma potential, the emitted electrons are repelled and contribute a large negative current. The onset of this current or, more sensitively, the onset of a discrepancy between the characteristics of an unheated and a heated electrode, is a sensitive indicator of the plasma potential. To measure fluctuations in plasma parameters, arrays of electrodes are used, usually one – but occasionally two-dimensional. A typical array has a spacing of 1 mm and a total of 16 or 32 electrodes. A simpler arrangement to measure fluctuations is a negatively biased electrode flanked by two floating electrodes. The ion-saturation current is taken as a surrogate for the density and the floating potential as a surrogate for the plasma potential. This allows a rough measurement of the turbulent particle flux formula_50 Cylindrical Langmuir probe in electron flow. Most often, the Langmuir probe is a small sized electrode inserted into a plasma which is connected to an external circuit that measures the properties of the plasma with respect to ground. The ground is typically an electrode with a large surface area and is usually in contact with the same plasma (very often the metallic wall of the chamber). This allows the probe to measure the I-V characteristic of the plasma. The probe measures the characteristic current formula_51 of the plasma when the probe is biased with a potential formula_52. Relations between the probe I-V characteristic and parameters of isotropic plasma were found by Irving Langmuir and they can be derived most elementary for the planar probe of a large surface area formula_53 (ignoring the edge effects problem). Let us choose the point formula_54 in plasma at the distance formula_55 from the probe surface where electric field of the probe is negligible while each electron of plasma passing this point could reach the probe surface without collisions with plasma components: formula_56, formula_57 is the Debye length and formula_58 is the electron free path calculated for its total cross section with plasma components. 
In the vicinity of the point formula_54 we can imagine a small element of the surface area formula_59 parallel to the probe surface. The elementary current formula_60 of plasma electrons passing throughout formula_59 in a direction of the probe surface can be written in the form where formula_61 is a scalar of the electron thermal velocity vector formula_62, formula_63 is the element of the solid angle with its relative value formula_64, formula_65 is the angle between perpendicular to the probe surface recalled from the point formula_54 and the radius-vector of the electron thermal velocity formula_62 forming a spherical layer of thickness formula_66 in velocity space, and formula_67 is the electron distribution function normalized to unity Taking into account uniform conditions along the probe surface (boundaries are excluded), formula_68, we can take double integral with respect to the angle formula_69, and with respect to the velocity formula_70, from the expression (1), after substitution Eq. (2) in it, to calculate a total electron current on the probe whereformula_52 is the probe potential with respect to the potential of plasma formula_71, formula_72 is the lowest electron velocity value at which the electron still could reach the probe surface charged to the potential formula_52, formula_73 is the upper limit of the angle formula_65 at which the electron having initial velocity formula_61 can still reach the probe surface with a zero-value of its velocity at this surface. That means the value formula_73 is defined by the condition Deriving the value formula_73 from Eq. (5) and substituting it in Eq. (4), we can obtain the probe I-V characteristic (neglecting the ion current) in the range of the probe potential formula_74 in the form Differentiating Eq. (6) twice with respect to the potential formula_52, one can find the expression describing the second derivative of the probe I-V characteristic (obtained firstly by M. J. Druyvestein defining the electron distribution function over velocity formula_75 in the evident form. M. J. Druyvestein has shown in particular that Eqs. (6) and (7) are valid for description of operation of the probe of any arbitrary convex geometrical shape. Substituting the Maxwellian distribution function: where formula_76 is the most probable velocity, in Eq. (6) we obtain the expression From which the very useful in practice relation follows allowing one to derive the electron energy formula_77 (for Maxwellian distribution function only!) by a slope of the probe I-V characteristic in a semilogarithmic scale. Thus in plasmas with isotropic electron distributions, the electron current formula_78 on a surface formula_79 of the cylindrical Langmuir probe at plasma potential formula_71 is defined by the average electron thermal velocity formula_80 and can be written down as equation (see Eqs. (6), (9) at formula_71) where formula_81 is the electron concentration, formula_82 is the probe radius, and formula_83 is its length. It is obvious that if plasma electrons form an electron wind (flow) across the cylindrical probe axis with a velocity formula_84, the expression holds true. In plasmas produced by gas-discharge arc sources as well as inductively coupled sources, the electron wind can develop the Mach number formula_85 . Here the parameter formula_86 is introduced along with the Mach number for simplification of mathematical expressions. 
Note that formula_87, where formula_88 is the most probable velocity for the Maxwellian distribution function, so that formula_89. Thus the general case where formula_90 is of theoretical and practical interest. Corresponding physical and mathematical considerations presented in Refs. [9,10] have shown that, for a Maxwellian distribution function of the electrons in a reference system moving with the velocity formula_91 across the axis of the cylindrical probe set at plasma potential formula_71, the electron current on the probe can be written down in a form in which formula_92 and formula_93 are Bessel functions of imaginary arguments; Eq. (13) reduces to Eq. (11) at formula_94 and to Eq. (12) at formula_95. The second derivative of the probe I-V characteristic formula_96 with respect to the probe potential formula_52 can be presented in this case in the form (see Fig. 3), where the electron energy formula_97 is expressed in eV. All parameters of the electron population: formula_81, formula_98, formula_99 and formula_88 in plasma can be derived from the experimental probe I-V characteristic second derivative formula_96 by its least-squares best fitting with the theoretical curve expressed by Eq. (14). For details, and for the problem of the general case of non-Maxwellian electron distribution functions, see the references. Practical considerations. For laboratory and technical plasmas, the electrodes are most commonly tungsten or tantalum wires several thousandths of an inch thick, because they have a high melting point but can be made small enough not to perturb the plasma. Although the melting point is somewhat lower, molybdenum is sometimes used because it is easier to machine and solder than tungsten. For fusion plasmas, graphite electrodes with dimensions from 1 to 10 mm are usually used because they can withstand the highest power loads (also sublimating at high temperatures rather than melting), and result in reduced bremsstrahlung radiation (with respect to metals) due to the low atomic number of carbon. The electrode surface exposed to the plasma must be defined, e.g. by insulating all but the tip of a wire electrode. If there can be significant deposition of conducting materials (metals or graphite), then the insulator should be separated from the electrode to prevent short-circuiting. In a magnetized plasma, it appears to be best to choose a probe size a few times larger than the ion Larmor radius. A point of contention is whether it is better to use proud probes, where the angle between the magnetic field and the surface is at least 15°, or flush-mounted probes, which are embedded in the plasma-facing components and generally have an angle of 1 to 5°. Many plasma physicists feel more comfortable with proud probes, which have a longer tradition and possibly are less perturbed by electron saturation effects, although this is disputed. Flush-mounted probes, on the other hand, being part of the wall, are less perturbative. Knowledge of the field angle is necessary with proud probes to determine the fluxes to the wall, whereas it is necessary with flush-mounted probes to determine the density. In very hot and dense plasmas, as found in fusion research, it is often necessary to limit the thermal load to the probe by limiting the exposure time. A reciprocating probe is mounted on an arm that is moved into and back out of the plasma, usually in about one second, by means of either a pneumatic drive or an electromagnetic drive using the ambient magnetic field.
Pop-up probes are similar, but the electrodes rest behind a shield and are only moved the few millimeters necessary to bring them into the plasma near the wall. A Langmuir probe can be purchased off the shelf for on the order of 15,000 U.S. dollars, or they can be built by an experienced researcher or technician. When working at frequencies under 100 MHz, it is advisable to use blocking filters, and take necessary grounding precautions. In low temperature plasmas, in which the probe does not get hot, surface contamination may become an issue. This effect can cause hysteresis in the I-V curve and may limit the current collected by the probe. A heating mechanism or a glow discharge plasma may be used to clean the probe and prevent misleading results. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n_i" }, { "math_id": 1, "text": "T_i" }, { "math_id": 2, "text": "f_e(v)" }, { "math_id": 3, "text": " c_s = \\sqrt{k_B(ZT_e+\\gamma_iT_i)/m_i}" }, { "math_id": 4, "text": "\\gamma_i" }, { "math_id": 5, "text": "\\gamma_i=1" }, { "math_id": 6, "text": "\\gamma_i=3" }, { "math_id": 7, "text": "Z=1" }, { "math_id": 8, "text": "T_i=T_e" }, { "math_id": 9, "text": "\\sqrt{2}" }, { "math_id": 10, "text": "q_e n_e" }, { "math_id": 11, "text": "q_e" }, { "math_id": 12, "text": "n_e" }, { "math_id": 13, "text": "j_i^{max} = q_{e}n_ec_s" }, { "math_id": 14, "text": "c_s" }, { "math_id": 15, "text": "f(v_x)\\,dv_x \\propto e^{-\\frac{1}{2}m_ev_x^2/k_BT_e}" }, { "math_id": 16, "text": "\n\\langle v_e \\rangle = \\frac\n{\\int_{v_{e0}}^\\infty f(v_x)\\,v_x\\,dv_x}\n{\\int_{-\\infty}^\\infty f(v_x)\\,dv_x}\n" }, { "math_id": 17, "text": "v_{e0} = \\sqrt{2q_{e}\\Delta V/m_e}" }, { "math_id": 18, "text": "\\Delta V" }, { "math_id": 19, "text": "\n\\langle v_e \\rangle = \n\\sqrt{\\frac{k_BT_e}{2\\pi m_e}}\\,\ne^{-q_{e}\\Delta V/k_BT_e}\n" }, { "math_id": 20, "text": "\nj_e = \nj_i^{max}\\sqrt{m_i/2\\pi m_e}\\,\ne^{-q_{e}\\Delta V/k_BT_e}\n" }, { "math_id": 21, "text": "\nj = j_i^{max} \n\\left( -1 + \\sqrt{m_i/2\\pi m_e}\\,e^{-q_{e}\\Delta V/k_BT_e} \\right)\n" }, { "math_id": 22, "text": "\\Delta V = (k_BT_e/q_e)\\,(1/2)\\ln(m_i/2\\pi m_e)" }, { "math_id": 23, "text": "\\mu_i=m_i/m_e" }, { "math_id": 24, "text": "\n\\Delta V = (k_BT_e/q_e)\\, ( 2.8 + 0.5\\ln \\mu_i )\n" }, { "math_id": 25, "text": " \nj = j_i^{max} \n\\left( -1 + \\,e^{q_{e}(V_{0}-\\Delta V)/k_BT_e} \\right) \n" }, { "math_id": 26, "text": "v_{e0} = 0" }, { "math_id": 27, "text": "\nj_e^{max} \n= j_i^{max}\\sqrt{m_i/\\pi m_e} \n= j_i^{max} \\left( 24.2 \\, \\sqrt{\\mu_i} \\right)\n" }, { "math_id": 28, "text": "\nj_e^{max} \n= q_en_e \\sqrt{k_B(\\gamma_eT_e+T_i)/m_e}\n= j_i^{max}\\sqrt{m_i/m_e} \n= j_i^{max} \\left( 42.8 \\, \\sqrt{\\mu_i} \\right)\n" }, { "math_id": 29, "text": "\n\\Phi_{pre} = \\frac{\\frac{1}{2}m_ic_s^2}{Ze} = k_B(T_e+Z\\gamma_iT_i)/(2Ze)\n" }, { "math_id": 30, "text": "A_{eff}" }, { "math_id": 31, "text": "n_{e,sh}=0.5\\,n_e" }, { "math_id": 32, "text": " I = I_i^{max}(-1+e^{q_e(V_{pr}-V_{fl})/(k_BT_e)} )" }, { "math_id": 33, "text": " I_i^{max} = q_en_e\\sqrt{k_BT_e/m_i}\\,A_{eff} " }, { "math_id": 34, "text": "V_{bias}" }, { "math_id": 35, "text": "\nI \n= I_i^{max} \\left( -1 + \\,e^{q_e(V_2-V_{fl})/k_BT_e} \\right)\n= -I_i^{max} \\left( -1 + \\,e^{q_e(V_1-V_{fl})/k_BT_e} \\right)\n" }, { "math_id": 36, "text": "V_{bias}=V_2-V_1" }, { "math_id": 37, "text": "\nI = I_i^{max} \\tanh\\left( \\frac{1}{2}\\,\\frac{q_eV_{bias}}{k_BT_e} \\right)\n" }, { "math_id": 38, "text": "A_1" }, { "math_id": 39, "text": "\nI = A_1 J_i^{max} \\left[ \\coth\\left(\\frac{q_eV_{bias}}{2k_BT_e}\\right) + \\frac{\\left(\\frac{A_1}{A_2}-1\\right)\\,e^{-q_eV_{bias}/2k_BT_e}}{2\\sinh\\left(\\frac{q_eV_{bias}}{2k_BT_e}\\right)} \\right]^{-1}\n" }, { "math_id": 40, "text": "\n-I_{+}=I_{-}=I_i^{max}\n" }, { "math_id": 41, "text": "\nI_{fl}=0\n" }, { "math_id": 42, "text": "\nI_{probe} = -I_{e} e^{-q_e V_{probe}/(k T_{e} )} + I_i^{max}\n" }, { "math_id": 43, "text": "\nI_{e} = S J_{e} = S n_{e} q_e \\sqrt{kT_{e}/2 \\pi m_{e}}\n" }, { "math_id": 44, "text": "\nI_{+} = -I_{e} e^{-q_e V_{+}/(k T_{e} )} + I_i^{max}\n" }, { "math_id": 45, "text": "\nI_{-} = -I_{e} e^{-q_e V_{-}/(k T_{e} )} + I_i^{max}\n" }, { "math_id": 46, "text": "\nI_{fl} = -I_{e} e^{-q_e V_{fl}/(k T_{e} )} + I_i^{max}\n" }, { "math_id": 47, "text": 
"\n\n\\left(I_{+} - I_{fl})/(I_{+} - I_{-}\\right) = \\left(1-e^{-q_e(V_{fl}-V_{+})/(k T_{e})}\\right)/ \\left(1-e^{-q_e(V_{-}-V_{+})/(k T_{e})}\\right)\n\n" }, { "math_id": 48, "text": "\n\n1/2 = \\left(1-e^{-q_e(V_{fl}-V_{+})/(k T_{e})}\\right)/ \\left(1-e^{-q_e(V_{-}-V_{+})/(k T_{e})}\\right)\n" }, { "math_id": 49, "text": " \n(V_{+}-V_{fl}) = (k_BT_e/q_e)\\ln 2 \n" }, { "math_id": 50, "text": "\n\\Phi_{turb} \n= \\langle \\tilde{n}_e \\tilde{v}_{E\\times B} \\rangle\n\\propto \\langle \n\\tilde{I}_i^{max} ( \\tilde{V}_{fl,2} - \\tilde{V}_{fl,1} ) \n\\rangle\n" }, { "math_id": 51, "text": "i(V)" }, { "math_id": 52, "text": "V" }, { "math_id": 53, "text": "S_z" }, { "math_id": 54, "text": "O" }, { "math_id": 55, "text": "h" }, { "math_id": 56, "text": "\\lambda_D \\ll\\lambda_{Te}" }, { "math_id": 57, "text": "\\lambda_D" }, { "math_id": 58, "text": "\\lambda_{Te}" }, { "math_id": 59, "text": "\\Delta S" }, { "math_id": 60, "text": "di" }, { "math_id": 61, "text": "v" }, { "math_id": 62, "text": "\\vec{v}" }, { "math_id": 63, "text": "2\\pi\\sin \\vartheta d\\vartheta" }, { "math_id": 64, "text": "2\\pi\\sin \\vartheta d\\vartheta / 4\\pi" }, { "math_id": 65, "text": "\\vartheta" }, { "math_id": 66, "text": "dv" }, { "math_id": 67, "text": "f(v)" }, { "math_id": 68, "text": "\\Delta S \\rightarrow S_z" }, { "math_id": 69, "text": " \\vartheta " }, { "math_id": 70, "text": " v " }, { "math_id": 71, "text": "V = 0" }, { "math_id": 72, "text": "\\sqrt{2q_eV/m}" }, { "math_id": 73, "text": "\\zeta" }, { "math_id": 74, "text": "-\\infty <V\\leq 0 " }, { "math_id": 75, "text": "f\\left ( \\sqrt{2q_eV/m}\\right ) " }, { "math_id": 76, "text": "v_p = \\langle v\\rangle \\sqrt{\\pi}/2" }, { "math_id": 77, "text": "\\mathcal{E}_p = k_B T" }, { "math_id": 78, "text": "i_{th} (0)" }, { "math_id": 79, "text": "S_z = 2\\pi r_z l_z " }, { "math_id": 80, "text": "\\langle v \\rangle " }, { "math_id": 81, "text": "n" }, { "math_id": 82, "text": "r_z" }, { "math_id": 83, "text": "l_z" }, { "math_id": 84, "text": "v_d\\gg \\langle v\\rangle" }, { "math_id": 85, "text": "M^{(0)} = v_d /\\langle v\\rangle = (\\sqrt{\\pi}/2)\\alpha \\gtrsim 1 " }, { "math_id": 86, "text": "\\alpha" }, { "math_id": 87, "text": "(\\sqrt{\\pi}/2)\\langle v\\rangle = v_p" }, { "math_id": 88, "text": "v_p" }, { "math_id": 89, "text": "\\alpha = v_d/v_p" }, { "math_id": 90, "text": "\\alpha \\gtrsim 1" }, { "math_id": 91, "text": "v_d" }, { "math_id": 92, "text": "I_0" }, { "math_id": 93, "text": "I_1" }, { "math_id": 94, "text": "\\alpha \\rightarrow 0" }, { "math_id": 95, "text": "\\alpha \\rightarrow \\infty" }, { "math_id": 96, "text": "i^{\\prime \\prime}(V)" }, { "math_id": 97, "text": "\\mathcal {E}_p/e" }, { "math_id": 98, "text": "\\alpha " }, { "math_id": 99, "text": "\\langle v\\rangle " } ]
https://en.wikipedia.org/wiki?curid=62389
6239
Contraction mapping
Function reducing distance between all points In mathematics, a contraction mapping, or contraction or contractor, on a metric space ("M", "d") is a function "f" from "M" to itself, with the property that there is some real number formula_0 such that for all "x" and "y" in "M", formula_1 The smallest such value of "k" is called the Lipschitz constant of "f". Contractive maps are sometimes called Lipschitzian maps. If the above condition is instead satisfied for "k" ≤ 1, then the mapping is said to be a non-expansive map. More generally, the idea of a contractive mapping can be defined for maps between metric spaces. Thus, if ("M", "d") and ("N", "d"') are two metric spaces, then formula_2 is a contractive mapping if there is a constant formula_0 such that formula_3 for all "x" and "y" in "M". Every contraction mapping is Lipschitz continuous and hence uniformly continuous (for a Lipschitz continuous function, the constant "k" is no longer necessarily less than 1). A contraction mapping has at most one fixed point. Moreover, the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and that for any "x" in "M" the iterated function sequence "x", "f" ("x"), "f" ("f" ("x")), "f" ("f" ("f" ("x"))), ... converges to the fixed point. This concept is very useful for iterated function systems, where contraction mappings are often used. Banach's fixed-point theorem is also applied in proving the existence of solutions of ordinary differential equations, and is used in one proof of the inverse function theorem. Contraction mappings play an important role in dynamic programming problems. Firmly non-expansive mapping. A non-expansive mapping with formula_4 can be generalized to a firmly non-expansive mapping in a Hilbert space formula_5 if the following holds for all "x" and "y" in formula_5: formula_6 where formula_7. This is a special case of formula_8 averaged nonexpansive operators with formula_9. A firmly non-expansive mapping is always non-expansive, via the Cauchy–Schwarz inequality. The class of firmly non-expansive maps is closed under convex combinations, but not compositions. This class includes proximal mappings of proper, convex, lower-semicontinuous functions, hence it also includes orthogonal projections onto non-empty closed convex sets. The class of firmly nonexpansive operators is equal to the set of resolvents of maximally monotone operators. Surprisingly, while iterating non-expansive maps is not guaranteed to find a fixed point (e.g. multiplication by -1), firm non-expansiveness is sufficient to guarantee global convergence to a fixed point, provided a fixed point exists. More precisely, if formula_10, then for any initial point formula_11, iterating formula_12 yields convergence to a fixed point formula_13. This convergence might be weak in an infinite-dimensional setting. Subcontraction map. A subcontraction map or subcontractor is a map "f" on a metric space ("M", "d") such that formula_14 formula_15 If the image of a subcontractor "f" is compact, then "f" has a fixed point. Locally convex spaces. In a locally convex space ("E", "P") with topology given by a set "P" of seminorms, one can define for any "p" ∈ "P" a "p"-contraction as a map "f" such that there is some "k""p" < 1 such that "p"("f"("x") − "f"("y")) ≤ "kp p"("x" − "y"). If "f" is a "p"-contraction for all "p" ∈ "P" and ("E", "P") is sequentially complete, then "f" has a fixed point, given as the limit of any sequence "x""n"+1 = "f"("x""n"), and if ("E", "P") is Hausdorff, then the fixed point is unique.
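As a small numerical illustration of the Banach fixed-point iteration described above, the sketch below (with an arbitrarily chosen map, starting point and tolerance) iterates the contraction f(x) = x/2 + 1 on the real line; its Lipschitz constant is 1/2, so the iterates converge to the unique fixed point x = 2 from any starting point.

```python
# Minimal sketch of the Banach fixed-point iteration from the theorem above.
# f(x) = 0.5*x + 1 is a contraction on the reals with Lipschitz constant k = 1/2,
# so the iterates x, f(x), f(f(x)), ... converge to its unique fixed point x* = 2.
def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: 0.5 * x + 1
print(fixed_point(f, x0=100.0))   # -> approximately 2.0, regardless of x0
```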
If "f" is a "p"-contraction for all "p" ∈ "P" and ("E", "P") is sequentially complete, then "f" has a fixed point, given as limit of any sequence "x""n"+1 = "f"("x""n"), and if ("E", "P") is Hausdorff, then the fixed point is unique. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "0 \\leq k < 1" }, { "math_id": 1, "text": "d(f(x),f(y)) \\leq k\\,d(x,y)." }, { "math_id": 2, "text": "f:M \\rightarrow N" }, { "math_id": 3, "text": "d'(f(x),f(y)) \\leq k\\,d(x,y)" }, { "math_id": 4, "text": "k=1" }, { "math_id": 5, "text": "\\mathcal{H}" }, { "math_id": 6, "text": "\\|f(x)-f(y) \\|^2 \\leq \\, \\langle x-y, f(x) - f(y) \\rangle." }, { "math_id": 7, "text": "d(x,y) = \\|x-y\\|" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\alpha = 1/2" }, { "math_id": 10, "text": "\\text{Fix}f := \\{x \\in \\mathcal{H} \\ | \\ f(x) = x\\} \\neq \\varnothing" }, { "math_id": 11, "text": "x_0 \\in \\mathcal{H}" }, { "math_id": 12, "text": " (\\forall n \\in \\mathbb{N})\\quad x_{n+1} = f(x_n) " }, { "math_id": 13, "text": " x_n \\to z \\in \\text{Fix} f" }, { "math_id": 14, "text": " d(f(x), f(y)) \\leq d(x,y);" }, { "math_id": 15, "text": " d(f(f(x)),f(x)) < d(f(x),x) \\quad \\text{unless} \\quad x = f(x)." } ]
https://en.wikipedia.org/wiki?curid=6239
62395027
Nehemiah 2
Chapter from Nehemiah in the Old Testament Nehemiah 2 is the second chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 12th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. From the time he heard about Jerusalem during the month of Kislev (November/December), Nehemiah waited until the month of Nisan (March/April) to petition Artaxerxes I of Persia to be allowed to go and help the rebuilding of Jerusalem. His petition is granted by the king, and although with less authority than Ezra over the officials of "Beyond-the-River", Nehemiah was given an official position with an escort of officers and cavalry. Text. This chapter is divided into 20 verses. The original text of this chapter is in the Hebrew language. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Wise as serpents (2:1–8). The scene of this part is the banqueting hall of King Artaxerxes, where Nehemiah carries out his duties as a cup-bearer. H. E. Ryle suggests that Nehemiah is the king's "favourite cup-bearer". Nehemiah is sad, and the king asks why. McConville argues that the display of a long face before the king shows three significant aspects of Nehemiah: courage, godliness and wisdom, since doing so carried a dire risk to his life (cf. Esther before Ahasuerus). "And it came to pass in the month Nisan, in the twentieth year of Artaxerxes the king, that wine was before him: and I took up the wine, and gave it unto the king." "Now I had not been beforetime sad in his presence." "So the king said to me, "Why is your face troubled though you do not seem sick? This is nothing but a troubled heart."" "Then I became very much afraid and said to the king, "May the king live forever! Why should not my face be troubled when the city, the place of my fathers' tombs, lies waste, and its gates have been destroyed by fire?"" Reconnaissance and Opposition (2:9–20). This part describes Nehemiah's journey to Jerusalem, and his first actions when he arrived there, especially his preliminary reconnaissance of the walls at night, and the revelation of his plan to rebuild the walls of Jerusalem. The resentment from local people (verses 10–12) recalls Ezra 1–6. "Then I came to the governors beyond the river, and gave them the king's letters. Now the king had sent captains of the army and horsemen with me." Verse 9. The military escort given by the king to Nehemiah consisted of officers ("captains"; "sārê"), army ("ḥayil"), and cavalry ("horsemen"; "pārāšîm"). The evidence of Persian soldiers stationed in Judah is shown in the cist-type tombs which otherwise can only be found in Persian archaeological sites. "When Sanballat the Horonite, and Tobiah the servant, the Ammonite, heard of it, it grieved them exceedingly that there was come a man to seek the welfare of the children of Israel."
"So I came to Jerusalem, and was there three days." "But when Sanballat the Horonite, Tobiah the Ammonite subordinate, and Geshem the Arabian heard it, they laughed us to scorn, and despised us, and said, "What is this thing that you are doing? Are you rebelling against the king?"" Verse 19. The three enemies geographically surrounded Nehemiah: Sanballat the Horonite to the north, Tobiah the Ammonite to the east, and Geshem ("Kedarites") to the south. Good versus evil. According to J. Gordon McConville, a conflict between good ("tob") and evil ("ra’") underlies the action of this chapter which is not immediately obvious in the English translation: Nehemiah's face is "sad" (verses 1–3) is actually described using the word "evil", which is also used for the word "trouble" of Jerusalem (verse 17, or in 1:3), whereas the expression "it pleased the king" (verses 6 &amp; 7) is literally "it was "good" to the king", as also in "the "good" hand of God is upon Nehemiah" (verses 8, 18), or "the good work" (verse 18), which is simply "the good" (or "the good thing”"). Verse 10 shows most pointed contrast, where "it displeased them" is literally "it was evil to them", whereas "welfare" of the Jews is "their "good"". In this context, the king's decision and the rebuilding of the walls are "good", whereas the broken walls, Nehemiah's grief, or the conspiration of Sanballat, Tobiah and Geshem, are "evil". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=62395027
62396576
Hypergraph removal lemma
In graph theory, the hypergraph removal lemma states that when a hypergraph contains few copies of a given sub-hypergraph, then all of the copies can be eliminated by removing a small number of hyperedges. It is a generalization of the graph removal lemma. The special case in which the graph is a tetrahedron is known as the tetrahedron removal lemma. It was first proved by Nagle, Rödl, Schacht and Skokan and, independently, by Gowers. The hypergraph removal lemma can be used to prove results such as Szemerédi's theorem and the multi-dimensional Szemerédi theorem. Statement. The hypergraph removal lemma states that for any formula_0, there exists formula_1 such that for any formula_2-uniform hypergraph formula_3 with formula_4 vertices the following is true: if formula_5 is any formula_6-vertex formula_2-uniform hypergraph with at most formula_7 subgraphs isomorphic to formula_3, then it is possible to eliminate all copies of formula_3 from formula_5 by removing at most formula_8 hyperedges from formula_5. An equivalent formulation is that, for any hypergraph formula_5 with formula_9 copies of formula_3, we can eliminate all copies of formula_3 from formula_5 by removing formula_10 hyperedges. Proof idea of the hypergraph removal lemma. The high-level idea of the proof is similar to that of the graph removal lemma. We prove a hypergraph version of Szemerédi's regularity lemma (partition hypergraphs into pseudorandom blocks) and a counting lemma (estimate the number of hypergraphs in an appropriate pseudorandom block). The key difficulty in the proof is to define the correct notion of hypergraph regularity. There were multiple attempts to define "partition" and "pseudorandom (regular) blocks" in a hypergraph, but none of them was able to give a strong counting lemma. The first correct definition of Szemerédi's regularity lemma for general hypergraphs was given by Rödl et al. In Szemerédi's regularity lemma, the partitions are performed on vertices (1-hyperedges) to regulate edges (2-hyperedges). However, for formula_11, if we simply regulate formula_12-hyperedges using only 1-hyperedges, we will lose the information of all formula_13-hyperedges in the middle, where formula_14, and fail to find a counting lemma. The correct version has to partition formula_15-hyperedges in order to regulate formula_12-hyperedges. To gain more control of the formula_15-hyperedges, we can go a level deeper and partition on formula_16-hyperedges to regulate them, etc. In the end, we will reach a complex structure of regulating hyperedges. Proof idea for 3-uniform hypergraphs. For example, we demonstrate an informal 3-hypergraph version of Szemerédi's regularity lemma, first given by Frankl and Rödl. Consider a partition of edges formula_17 such that for most triples formula_18 there are a lot of triangles on top of formula_19 We say that formula_20 is "pseudorandom" in the sense that for all subgraphs formula_21 with not too few triangles on top of formula_22 we have formula_23 where formula_24 denotes the proportion of formula_25-uniform hyperedges of formula_26 among all triangles on top of formula_27. We then subsequently define a regular partition as a partition in which the triples of parts that are not regular constitute at most an formula_28 fraction of all triples of parts in the partition.
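The density formula_24 appearing in this pseudorandomness condition is easy to compute directly on small examples. The sketch below is purely illustrative (the tiny vertex parts, bipartite graphs and hyperedge set are made up): it counts the triangles sitting on top of three bipartite graphs and reports the fraction of them that carry a 3-uniform hyperedge, printing 0.5 for the data shown.

```python
from itertools import product

# Illustrative sketch of the density d(X, Y, Z) used above: among all triangles
# sitting on top of three bipartite graphs (one between each pair of vertex
# parts), the fraction that carry a 3-uniform hyperedge of the hypergraph.
def triple_density(E_xy, E_yz, E_xz, hyperedges, X, Y, Z):
    triangles = [(x, y, z) for x, y, z in product(X, Y, Z)
                 if (x, y) in E_xy and (y, z) in E_yz and (x, z) in E_xz]
    if not triangles:
        return 0.0
    covered = sum(1 for t in triangles if t in hyperedges)
    return covered / len(triangles)

X, Y, Z = {0, 1}, {'a', 'b'}, {'u', 'v'}
E_xy = {(0, 'a'), (0, 'b'), (1, 'a')}
E_yz = {('a', 'u'), ('b', 'v'), ('a', 'v')}
E_xz = {(0, 'u'), (0, 'v'), (1, 'u')}
H3   = {(0, 'a', 'u'), (0, 'b', 'v')}       # made-up 3-uniform hyperedges
print(triple_density(E_xy, E_yz, E_xz, H3, X, Y, Z))   # 4 triangles, 2 covered -> 0.5
```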
In addition to this, we need to further regularize formula_29 via a partition of the vertex set. As a result, we have the total data of hypergraph regularity as follows: a partition of formula_30 into graphs formula_29 such that formula_26 sits pseudorandomly on top of them, together with a partition of formula_31 that regularizes the graphs formula_29. After proving the hypergraph regularity lemma, we can prove a hypergraph counting lemma. The rest of the proof proceeds similarly to that of the graph removal lemma. Proof of Szemerédi's theorem. Let formula_32 be the size of the largest subset of formula_33 that does not contain a length formula_12 arithmetic progression. Szemerédi's theorem states that formula_34 for any constant formula_12. The high-level idea of the proof is that we construct a hypergraph from a subset without any length formula_12 arithmetic progression, then use the hypergraph removal lemma to show that this hypergraph cannot have too many hyperedges, which in turn shows that the original subset cannot be too big. Let formula_35 be a subset that does not contain any length formula_12 arithmetic progression. Let formula_36 be a large enough integer. We can think of formula_37 as a subset of formula_38. Clearly, if formula_37 doesn't have a length formula_12 arithmetic progression in formula_39, it also doesn't have a length formula_12 arithmetic progression in formula_38. We will construct a formula_12-partite formula_15-uniform hypergraph formula_5 from formula_37 with parts formula_40, all of which are formula_41 element vertex sets indexed by formula_38. For each formula_42, we add a hyperedge among vertices formula_43 if and only if formula_44 Let formula_3 be the complete formula_12-partite formula_15-uniform hypergraph. If formula_5 contains an isomorphic copy of formula_3 with vertices formula_45, then formula_46 for any formula_47. However, note that formula_48 is a length formula_12 arithmetic progression with common difference formula_49. Since formula_37 has no length formula_12 arithmetic progression, it must be the case that formula_50, so formula_51. Thus, for each hyperedge formula_43, we can find a unique copy of formula_3 that this edge lies in by finding formula_52. The number of copies of formula_3 in formula_5 equals formula_53. Therefore, by the hypergraph removal lemma, we can remove formula_54 edges to eliminate all copies of formula_3 in formula_5. Since every hyperedge of formula_5 is in a unique copy of formula_3, to eliminate all copies of formula_3 in formula_5, we need to remove at least formula_55 edges. Thus, formula_56. The number of hyperedges in formula_5 is formula_57, which concludes that formula_58. This method usually does not give a good quantitative bound, since the hidden constants in the hypergraph removal lemma involve the inverse Ackermann function. For a better quantitative bound, Leng, Sah, and Sawhney proved that formula_59 for some constant formula_60 depending on formula_12. It is the best bound for formula_61 so far. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\varepsilon, r, m > 0" }, { "math_id": 1, "text": "\\delta = \\delta(\\varepsilon, r, m) > 0" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "H" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "\\delta n^{v(H)}" }, { "math_id": 8, "text": "\\varepsilon n^r" }, { "math_id": 9, "text": "o(n^{v(H)})" }, { "math_id": 10, "text": "o(n^r)" }, { "math_id": 11, "text": "k>2" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "j" }, { "math_id": 14, "text": "1<j<k" }, { "math_id": 15, "text": "(k-1)" }, { "math_id": 16, "text": "(k-2)" }, { "math_id": 17, "text": "E(K_n) = G^{(2)}_1\\cup\\dots\\cup G^{(2)}_l" }, { "math_id": 18, "text": "(i,j,k)," }, { "math_id": 19, "text": "\\left(G^{(2)}_i,G^{(2)}_j,G^{(2)}_k\\right)." }, { "math_id": 20, "text": "\\left(G^{(2)}_i,G^{(2)}_j,G^{(2)}_k\\right)" }, { "math_id": 21, "text": "A^{(2)}_i\\subset G^{(2)}_i" }, { "math_id": 22, "text": "\\left(A^{(2)}_i,A^{(2)}_j,A^{(2)}_k\\right)," }, { "math_id": 23, "text": "\n \\left|d\\left(G^{(2)}_i,G^{(2)}_j,G^{(2)}_k\\right) - d\\left(A^{(2)}_i,A^{(2)}_j,A^{(2)}_k\\right)\\right|\\le\\varepsilon,\n" }, { "math_id": 24, "text": "d(X, Y, Z)" }, { "math_id": 25, "text": "3" }, { "math_id": 26, "text": "G^{(3)}" }, { "math_id": 27, "text": "(X, Y, Z)" }, { "math_id": 28, "text": "\\varepsilon" }, { "math_id": 29, "text": "G^{(2)}_1, \\dots, G^{(2)}_l" }, { "math_id": 30, "text": "E(K_n)" }, { "math_id": 31, "text": "V(G)" }, { "math_id": 32, "text": "r_k(N)" }, { "math_id": 33, "text": "\\{1, \\ldots, N\\}" }, { "math_id": 34, "text": "r_k(N) = o(N)" }, { "math_id": 35, "text": "A \\subset \\{1, \\ldots, N\\}" }, { "math_id": 36, "text": "M = k^2N + 1" }, { "math_id": 37, "text": "A" }, { "math_id": 38, "text": "\\mathbb{Z} / M\\mathbb{Z}" }, { "math_id": 39, "text": "\\mathbb{Z}" }, { "math_id": 40, "text": "V_1, V_2, \\ldots, V_k" }, { "math_id": 41, "text": "M" }, { "math_id": 42, "text": "1 \\le i \\le k" }, { "math_id": 43, "text": "(v_j \\in V_j)_{j \\in [k] \\setminus \\{i\\}}" }, { "math_id": 44, "text": "\\sum_{j \\ne i} (j-i) v_j \\in A." }, { "math_id": 45, "text": "v_1, \\ldots, v_k" }, { "math_id": 46, "text": "\\alpha_i = \\sum_{j \\ne i} (j-i) v_j \\in A" }, { "math_id": 47, "text": "1 \\le i \\le j" }, { "math_id": 48, "text": "\\alpha_i" }, { "math_id": 49, "text": "\\alpha_{i+1}-\\alpha_i = -\\sum_{j} v_j" }, { "math_id": 50, "text": "\\alpha_1 = \\cdots = \\alpha_k" }, { "math_id": 51, "text": "\\sum_{j} v_j = 0" }, { "math_id": 52, "text": "v_i = -\\sum_{j \\ne i} v_j" }, { "math_id": 53, "text": "\\frac{1}{k} e(G) = O(N^{k-1}) = o(N^k)" }, { "math_id": 54, "text": "o(N^{k-1})" }, { "math_id": 55, "text": "e(G) / k" }, { "math_id": 56, "text": "e(G) = o(N^{k-1})" }, { "math_id": 57, "text": "kM^{k-2}|A| = o(N^{k-1})" }, { "math_id": 58, "text": "|A| = o(N)" }, { "math_id": 59, "text": "|A| \\le \\frac{N}{\\exp(-(\\log \\log N)^{c_k})}" }, { "math_id": 60, "text": "c_k" }, { "math_id": 61, "text": "k \\ge 5" }, { "math_id": 62, "text": "S" }, { "math_id": 63, "text": "\\mathbb{Z}^r" }, { "math_id": 64, "text": "\\delta>0" }, { "math_id": 65, "text": "[n]^r" }, { "math_id": 66, "text": "\\delta n^r" }, { "math_id": 67, "text": "a\\cdot S + d" }, { "math_id": 68, "text": "S=\\{(0,0),(0,1),(1,0)\\}" } ]
https://en.wikipedia.org/wiki?curid=62396576
624033
Reynolds-averaged Navier–Stokes equations
Turbulence modeling approach The Reynolds-averaged Navier–Stokes equations (RANS equations) are time-averaged equations of motion for fluid flow. The idea behind the equations is Reynolds decomposition, whereby an instantaneous quantity is decomposed into its time-averaged and fluctuating quantities, an idea first proposed by Osborne Reynolds. The RANS equations are primarily used to describe turbulent flows. These equations can be used with approximations based on knowledge of the properties of flow turbulence to give approximate time-averaged solutions to the Navier–Stokes equations. For a stationary flow of an incompressible Newtonian fluid, these equations can be written in Einstein notation in Cartesian coordinates as: formula_0 The left hand side of this equation represents the change in mean momentum of a fluid element owing to the unsteadiness in the mean flow and the convection by the mean flow. This change is balanced by the mean body force, the isotropic stress owing to the mean pressure field, the viscous stresses, and the apparent stress formula_1 owing to the fluctuating velocity field, generally referred to as the Reynolds stress. This nonlinear Reynolds stress term requires additional modeling to close the RANS equations so that they can be solved, and has led to the creation of many different turbulence models. The time-average operator formula_2 is a Reynolds operator. Derivation of RANS equations. The basic tool required for the derivation of the RANS equations from the instantaneous Navier–Stokes equations is the Reynolds decomposition. Reynolds decomposition refers to separation of the flow variable (like velocity formula_3) into the mean (time-averaged) component (formula_4) and the fluctuating component (formula_5). Because the mean operator is a Reynolds operator, it has a set of properties. One of these properties is that the mean of the fluctuating quantity is equal to zero, formula_6. Thus, formula_7 where formula_8 is the position vector. Some authors prefer using formula_9 instead of formula_10 for the mean term (since an overbar is sometimes used to represent a vector). In this case, the fluctuating term formula_11 is represented instead by formula_3. This is possible because the two terms do not appear simultaneously in the same equation. To avoid confusion, the notation formula_12, formula_13, and formula_14 will be used to represent the instantaneous, mean, and fluctuating terms, respectively. The properties of Reynolds operators are useful in the derivation of the RANS equations. Using these properties, the Navier–Stokes equations of motion, expressed in tensor notation, are (for an incompressible Newtonian fluid): formula_15 formula_16 where formula_17 is a vector representing external forces. Next, each instantaneous quantity can be split into time-averaged and fluctuating components, and the resulting equation time-averaged, to yield: formula_18 formula_19 The momentum equation can also be written as formula_20 Further manipulation yields formula_21 where formula_22 is the mean rate-of-strain tensor. Finally, since integration in time removes the time dependence of the resultant terms, the time derivative must be eliminated, leaving: formula_23 Equations of Reynolds stress. The time evolution equation of the Reynolds stress is given by: formula_24 This equation is very complicated. If the trace of formula_25 is taken, the turbulence kinetic energy is obtained. The last term formula_26 is the turbulent dissipation rate. All RANS models are based on the above equation.
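As a concrete illustration of the Reynolds decomposition and the Reynolds stress discussed above, the sketch below applies them to sampled velocity data. It is purely illustrative: the two synthetic, correlated velocity series stand in for measured u(t) and v(t), and the density is an assumed value rather than a property of any particular flow.

```python
import numpy as np

# Minimal sketch of Reynolds decomposition applied to sampled velocity data.
# In the time-averaged equations, -rho * mean(u'v') is the Reynolds shear stress
# that appears as the extra apparent-stress term.
rng = np.random.default_rng(0)
n = 100_000
rho = 1.2                                   # assumed fluid density, kg/m^3
u = 10.0 + 0.8 * rng.standard_normal(n)     # synthetic streamwise velocity samples, m/s
v = 0.5 * rng.standard_normal(n) + 0.3 * (u - 10.0)   # correlated wall-normal component, m/s

u_mean, v_mean = u.mean(), v.mean()          # time-averaged components
u_f, v_f = u - u_mean, v - v_mean            # fluctuating components (their means are ~0)

reynolds_shear = -rho * np.mean(u_f * v_f)   # -rho * <u'v'>, in Pa
print(f"mean u = {u_mean:.2f} m/s, -rho<u'v'> = {reynolds_shear:.3f} Pa")
```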
<templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rho\\bar{u}_j \\frac{\\partial \\bar{u}_i }{\\partial x_j}\n= \\rho \\bar{f}_i\n+ \\frac{\\partial}{\\partial x_j}\n\\left[ - \\bar{p}\\delta_{ij}\n+ \\mu \\left( \\frac{\\partial \\bar{u}_i}{\\partial x_j} + \\frac{\\partial \\bar{u}_j}{\\partial x_i} \\right)\n- \\rho \\overline{u_i^\\prime u_j^\\prime} \\right ].\n" }, { "math_id": 1, "text": " \\left( - \\rho \\overline{u_i^\\prime u_j^\\prime} \\right)" }, { "math_id": 2, "text": "\\overline{.}" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "\\overline{u}" }, { "math_id": 5, "text": "u^{\\prime}" }, { "math_id": 6, "text": "(\\bar{u'} = 0)" }, { "math_id": 7, "text": " u(\\boldsymbol{x},t) = \\bar{u}(\\boldsymbol{x}) + u'(\\boldsymbol{x},t) ," }, { "math_id": 8, "text": " \\boldsymbol{x} = (x,y,z) " }, { "math_id": 9, "text": "U" }, { "math_id": 10, "text": " \\bar{u} " }, { "math_id": 11, "text": "u^\\prime" }, { "math_id": 12, "text": " u" }, { "math_id": 13, "text": "\\bar{u}" }, { "math_id": 14, "text": "u' " }, { "math_id": 15, "text": " \\frac{\\partial u_i}{\\partial x_i} = 0 " }, { "math_id": 16, "text": " \\frac{\\partial u_i}{\\partial t} + u_j \\frac{\\partial u_i}{\\partial x_j}\n= f_i\n- \\frac{1}{\\rho} \\frac{\\partial p}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 u_i}{\\partial x_j \\partial x_j}\n" }, { "math_id": 17, "text": "f_i" }, { "math_id": 18, "text": " \\frac{\\partial \\bar{u}_i}{\\partial x_i} = 0" }, { "math_id": 19, "text": " \\frac{\\partial \\bar{u}_i}{\\partial t}\n+ \\bar{u}_j\\frac{\\partial \\bar{u}_i }{\\partial x_j}\n+ \\overline{u_j^\\prime \\frac{\\partial u_i^\\prime }{\\partial x_j}}\n= \\bar{f}_i\n- \\frac{1}{\\rho}\\frac{\\partial \\bar{p}}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 \\bar{u}_i}{\\partial x_j \\partial x_j}. 
" }, { "math_id": 20, "text": " \\frac{\\partial \\bar{u}_i}{\\partial t}\n+ \\bar{u}_j\\frac{\\partial \\bar{u}_i }{\\partial x_j}\n= \\bar{f}_i\n- \\frac{1}{\\rho}\\frac{\\partial \\bar{p}}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 \\bar{u}_i}{\\partial x_j \\partial x_j}\n- \\frac{\\partial \\overline{u_i^\\prime u_j^\\prime }}{\\partial x_j}.\n" }, { "math_id": 21, "text": "\\rho \\frac{\\partial \\bar{u}_i}{\\partial t}\n+ \\rho \\bar{u}_j \\frac{\\partial \\bar{u}_i }{\\partial x_j}\n= \\rho \\bar{f}_i\n+ \\frac{\\partial}{\\partial x_j}\n\\left[ - \\bar{p}\\delta_{ij}\n+ 2\\mu \\bar{S}_{ij}\n- \\rho \\overline{u_i^\\prime u_j^\\prime} \\right]\n" }, { "math_id": 22, "text": "\n\\bar{S}_{ij} = \\frac{1}{2}\\left( \\frac{\\partial \\bar{u}_i}{\\partial x_j} + \\frac{\\partial \\bar{u}_j}{\\partial x_i} \\right)\n" }, { "math_id": 23, "text": "\\rho \\bar{u}_j\\frac{\\partial \\bar{u}_i }{\\partial x_j}\n= \\rho \\bar{f_i}\n+ \\frac{\\partial}{\\partial x_j}\n\\left[ - \\bar{p}\\delta_{ij}\n+ 2\\mu \\bar{S}_{ij}\n- \\rho \\overline{u_i^\\prime u_j^\\prime} \\right].\n" }, { "math_id": 24, "text": "\n\\frac{\\partial \\overline{u_i^\\prime u_j^\\prime}}{\\partial t}\n + \\bar{u}_k \\frac{\\partial \\overline{u_i^\\prime u_j^\\prime}}{\\partial x_k} =\n -\\overline{u_i^\\prime u_k^\\prime}\\frac{\\partial \\bar{u}_j}{\\partial x_k}\n -\\overline{u_j^\\prime u_k^\\prime}\\frac{\\partial \\bar{u}_i}{\\partial x_k}\n +\\overline{ \\frac{p^\\prime}{\\rho}\\left( \\frac{\\partial u_i^\\prime}{\\partial x_j}\n +\\frac{\\partial u_j^\\prime}{\\partial x_i} \\right) }\n - \\frac{\\partial }{\\partial x_k} \\left( \\overline{u_i^\\prime u_j^\\prime u_k^\\prime}\n + \\frac{\\overline{p^\\prime u_i^\\prime } }{\\rho} \\delta_{jk}\n + \\frac{\\overline{p^\\prime u_j^\\prime } }{\\rho} \\delta_{ik}\n - \\nu \\frac{\\partial \\overline{u_i^\\prime u_j^\\prime}}{\\partial x_k} \\right)\n -2 \\nu \\overline{\\frac{\\partial u_i^\\prime}{\\partial x_k} \\frac{\\partial u_j^\\prime}{\\partial x_k}}\n" }, { "math_id": 25, "text": "\\overline{u_i^\\prime u_j^\\prime}" }, { "math_id": 26, "text": "\\nu \\overline{\\frac{\\partial u_i^\\prime}{\\partial x_k} \\frac{\\partial u_j^\\prime}{\\partial x_k}}" } ]
https://en.wikipedia.org/wiki?curid=624033
62405866
Cereceda's conjecture
<templatestyles src="Unsolved/styles.css" /> Unsolved problem in mathematics: Can every two formula_0-colorings of a formula_1-degenerate graph be transformed into each other by quadratically many steps that change the color of one vertex at a time? In the mathematics of graph coloring, Cereceda's conjecture is an unsolved problem on the distance between pairs of colorings of sparse graphs. It states that, for two different colorings of a graph of degeneracy d, both using at most "d" + 2 colors, it should be possible to reconfigure one coloring into the other by changing the color of one vertex at a time, using a number of steps that is quadratic in the size of the graph. The conjecture is named after Luis Cereceda, who formulated it in his 2007 doctoral dissertation. Background. The degeneracy of an undirected graph G is the smallest number d such that every non-empty subgraph of G has at least one vertex of degree at most d. If one repeatedly removes a minimum-degree vertex from G until no vertices are left, then the largest of the degrees of the vertices at the time of their removal will be exactly d, and this method of repeated removal can be used to compute the degeneracy of any graph in linear time. Greedy coloring the vertices in the reverse of this removal ordering will automatically produce a coloring with at most "d" + 1 colors, and for some graphs (such as complete graphs and odd-length cycle graphs) this number of colors is optimal. For colorings with "d" + 1 colors, it may not be possible to move from one coloring to another by changing the color of one vertex at a time. In particular, it is never possible to move between 2-colorings of a forest (the graphs of degeneracy 1) or between ("d" + 1)-colorings of a complete graph in this way; their colorings are said to be frozen. Cycle graphs of length other than four also have disconnected families of ("d" + 1)-colorings. However, with one additional color, using colorings with "d" + 2 colors, all pairs of colorings can be connected to each other by sequences of moves of this type. It follows from this that an appropriately designed random walk on the space of ("d" + 2)-colorings, using moves of this type, is mixing. This means that the random walk will eventually converge to the discrete uniform distribution on these colorings as its steady state, in which all colorings have equal probability of being chosen. More precisely, the random walk proceeds by repeatedly choosing a uniformly random vertex and choosing uniformly at random among all the available colors for that vertex, including the color it already had; this process is called the Glauber dynamics. Statement. The fact that the Glauber dynamics converges to the uniform distribution on ("d" + 2)-colorings naturally raises the question of how quickly it converges. That is, what is the mixing time? A lower bound on the mixing time is the diameter of the space of colorings, the maximum (over pairs of colorings) of the number of steps needed to change one coloring of the pair into the other. If the diameter is exponentially large in the number n of vertices in the graph, then the Glauber dynamics on colorings is certainly not rapidly mixing. On the other hand, when the diameter is bounded by a polynomial function of n, this suggests that the mixing time might also be polynomial. 
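As an illustration of the Glauber dynamics described above, the following Python sketch performs one move: it picks a uniformly random vertex and recolors it uniformly at random among the colors not used by its neighbors, a set that always contains its current color. The adjacency-list representation and function name are assumptions made for the example.

import random

def glauber_step(graph, coloring, num_colors):
    # graph: dict mapping each vertex to a list of its neighbors.
    # coloring: dict mapping each vertex to a color in range(num_colors);
    # assumed to be a proper coloring, which the move preserves.
    v = random.choice(list(graph))
    forbidden = {coloring[w] for w in graph[v]}
    available = [c for c in range(num_colors) if c not in forbidden]
    coloring[v] = random.choice(available)

# Example: a 5-cycle (degeneracy 2) with d + 2 = 4 colors.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
for _ in range(1000):
    glauber_step(cycle, coloring, 4)

Because the current color of the chosen vertex is never forbidden, the list of available colors is never empty and the walk can stay in place, matching the description of the dynamics above.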
In his 2007 doctoral dissertation, Cereceda investigated this problem, and found that (even for connected components of the space of colorings) the diameter can be exponential for ("d" + 1)-colorings of "d"-degenerate graphs. On the other hand, he proved that the diameter of the color space is at most quadratic (or, in big O notation, "O"("n"2)) for colorings that use at least 2"d" + 1 colors. He wrote that "it remains to determine" whether the diameter is polynomial for numbers of colors between these two extremes, or whether it is "perhaps even quadratic". Although Cereceda asked this question for a range of colors and did not phrase it as a conjecture, by 2018 a form of this question became known as Cereceda's conjecture. This unproven hypothesis is the most optimistic possibility among the questions posed by Cereceda: that for graphs with degeneracy at most d, and for ("d" + 2)-colorings of these graphs, the diameter of the space of colorings is "O"("n"2). If true, this would be best possible, as the space of 3-colorings of a path graph has quadratic diameter. Partial and related results. Although Cereceda's conjecture itself remains open even for degeneracy "d" = 2, it is known that for any fixed value of d the diameter of the space of ("d" + 2)-colorings is polynomial (with a different polynomial for different values of d). More precisely, the diameter is "O"("n""d" + 1). When the number of colors is at least (3"d" + 3)/2, the diameter is quadratic. A related question concerns the possibility that, for numbers of colors greater than "d" + 2, the diameter of the space of colorings might decrease from quadratic to linear. It has been suggested that this might be true whenever the number of colors is at least "d" + 3. The Glauber dynamics is not the only way to change colorings of graphs into each other. Alternatives include the Kempe dynamics, in which one repeatedly finds and swaps the colors of Kempe chains, and the "heat bath" dynamics, in which one chooses pairs of adjacent vertices and a valid recoloring of that pair. Both of these kinds of moves include the Glauber one-vertex moves as a special case, as changing the color of one vertex is the same as swapping the colors on a Kempe chain that only includes that one vertex. These moves may have stronger mixing properties and a smaller diameter of the space of colorings. For instance, both the Kempe dynamics and the heat bath dynamics mix rapidly on 3-colorings of cycle graphs, whereas the space of 3-colorings is not even connected under the Glauber dynamics when the length of the cycle is not four. References. <templatestyles src="Reflist/styles.css" />
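As an illustration of the Kempe chain moves mentioned above, the following Python sketch finds the Kempe chain through a given vertex (the connected component, containing that vertex, of the subgraph induced by the vertex's color and one other color) and exchanges the two colors along it. Swapping a whole chain keeps the coloring proper, which is why these moves include the single-vertex Glauber moves as a special case. The adjacency-list representation and function name are assumptions made for the example.

from collections import deque

def kempe_chain_swap(graph, coloring, v, other_color):
    # Exchange coloring[v] and other_color on the Kempe chain through v.
    a, b = coloring[v], other_color
    if a == b:
        return
    chain, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in chain and coloring[w] in (a, b):
                chain.add(w)
                queue.append(w)
    for u in chain:
        coloring[u] = b if coloring[u] == a else a

# Example: a 5-cycle colored 0, 1, 0, 1, 2; swap colors 2 and 0 on the chain through vertex 4.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
kempe_chain_swap(cycle, coloring, 4, 0)
# The chain is {4, 0}, so vertex 4 becomes color 0 and vertex 0 becomes color 2.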
[ { "math_id": 0, "text": "(d+2)" }, { "math_id": 1, "text": "d" } ]
https://en.wikipedia.org/wiki?curid=62405866