id | title | text | formulas | url
---|---|---|---|---|
1243550 | Rigid rotor | Model of rotating physical systems
In rotordynamics, the rigid rotor is a mechanical model of rotating systems. An arbitrary rigid rotor is a 3-dimensional rigid object, such as a top. To orient such an object in space requires three angles, known as Euler angles. A special rigid rotor is the "linear rotor", which requires only two angles to describe its orientation; an example is a diatomic molecule. More general molecules are 3-dimensional, such as water (asymmetric rotor), ammonia (symmetric rotor), or methane (spherical rotor).
Linear rotor.
The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. However, for many actual diatomics this model is too restrictive since distances are usually not completely fixed. Corrections on the rigid model can be made to compensate for small variations in the distance. Even in such a case the rigid rotor model is a useful point of departure (zeroth-order model).
Classical linear rigid rotor.
The classical linear rotor consists of two point masses formula_0 and formula_1 (with reduced mass formula_2) at a distance formula_3 from each other. The rotor is rigid if formula_3 is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates, which form a coordinate system of R3. In the physics convention the coordinates are the co-latitude (zenith) angle formula_4, the longitudinal (azimuth) angle formula_5 and the distance formula_3. The angles specify the orientation of the rotor in space. The kinetic energy formula_6 of the linear rigid rotor is given by
formula_7
where formula_8 and formula_9 are scale (or Lamé) factors.
Scale factors are of importance for quantum mechanical applications since they enter the Laplacian expressed in curvilinear coordinates. In the case at hand (constant formula_3)
formula_10
The classical Hamiltonian function of the linear rigid rotor is
formula_11
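The two classical expressions above are related by the conjugate momenta obtained by differentiating the kinetic energy with respect to the angle rates. The following short Python sketch checks numerically that the kinetic energy of formula_7 and the Hamiltonian of formula_11 agree; the numerical values and variable names are arbitrary illustrations, not taken from the article.

```python
import numpy as np

mu, R = 1.7, 1.2                            # reduced mass and bond length (arbitrary units)
theta, theta_dot, phi_dot = 0.9, 0.4, -1.3  # orientation angle and angle rates

# kinetic energy, 2T = mu R^2 [theta_dot^2 + (phi_dot sin(theta))^2]   (formula_7)
T = 0.5 * mu * R**2 * (theta_dot**2 + (phi_dot * np.sin(theta))**2)

# conjugate momenta obtained from T by differentiation with respect to the angle rates
p_theta = mu * R**2 * theta_dot
p_phi   = mu * R**2 * np.sin(theta)**2 * phi_dot

# Hamiltonian, H = [p_theta^2 + p_phi^2 / sin^2(theta)] / (2 mu R^2)   (formula_11)
H = (p_theta**2 + p_phi**2 / np.sin(theta)**2) / (2 * mu * R**2)

print(np.isclose(T, H))                     # True: both forms give the same energy
```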
Quantum mechanical linear rigid rotor.
The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, formula_12. In the center of mass reference frame, the moment of inertia is equal to:
formula_13
where formula_14 is the reduced mass of the molecule and formula_3 is the distance between the two atoms.
According to quantum mechanics, the energy levels of a system can be determined by solving the Schrödinger equation:
formula_15
where formula_16 is the wave function and formula_17 is the energy (Hamiltonian) operator. For the rigid rotor in a field-free space, the energy operator corresponds to the kinetic energy of the system:
formula_18
where formula_19 is the reduced Planck constant and formula_20 is the Laplacian. The Laplacian is given above in terms of spherical polar coordinates. The energy operator written in terms of these coordinates is:
formula_21
This operator appears also in the Schrödinger equation of the hydrogen atom after the radial part is separated off. The eigenvalue equation becomes
formula_22
The symbol formula_23 represents a set of functions known as the spherical harmonics. Note that the energy does not depend on formula_24. The energy
formula_25
is formula_26-fold degenerate: the functions with fixed formula_27 and formula_28 have the same energy.
Introducing the "rotational constant" formula_29, we write,
formula_30
In the units of reciprocal length the rotational constant is,
formula_31
with "c" the speed of light. If cgs units are used for formula_32, formula_33, and formula_34, formula_35 is expressed in cm−1, or wave numbers, which is a unit that is often used for rotational-vibrational spectroscopy. The rotational constant formula_36 depends on the distance formula_3. Often one writes formula_37 where formula_38 is the equilibrium value of formula_3 (the value for which the interaction energy of the atoms in the rotor has a minimum).
A typical rotational absorption spectrum consists of a series of peaks that correspond to transitions between levels with different values of the angular momentum quantum number (formula_27) such that formula_39, due to the selection rules (see below). Consequently, rotational peaks appear at energies with differences corresponding to an integer multiple of formula_40.
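As an illustration of the formulas above, the sketch below computes the rotational constant and the first few absorption-line positions for a diatomic molecule. The molecular parameters are assumed, roughly those of carbon monoxide (atomic masses 12.000 u and 15.995 u, equilibrium distance about 1.128 Å); they are not taken from this article.

```python
import numpy as np

# physical constants (SI)
hbar = 1.054_571_8e-34        # reduced Planck constant, J s
c    = 2.997_924_58e8         # speed of light, m/s
u    = 1.660_539_07e-27       # atomic mass unit, kg

# assumed molecular parameters, roughly those of 12C16O
m1, m2 = 12.000 * u, 15.995 * u
Re = 1.128e-10                # equilibrium bond length, m

mu = m1 * m2 / (m1 + m2)      # reduced mass (formula_2)
I  = mu * Re**2               # moment of inertia (formula_13)

Bbar = hbar / (4 * np.pi * c * I)   # rotational constant in m^-1 (formula_31)
Bbar_cm = Bbar / 100.0              # in cm^-1 (about 1.9 cm^-1 for CO)

# absorption lines l -> l+1 lie at 2*Bbar*(l+1), i.e. they are spaced by 2*Bbar (formula_40)
lines = [2 * Bbar_cm * (l + 1) for l in range(5)]
print(f"Bbar = {Bbar_cm:.3f} cm^-1, first lines at", [round(x, 2) for x in lines], "cm^-1")
```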
Selection rules.
Rotational transitions of a molecule occur when the molecule absorbs a photon [a particle of a quantized electromagnetic (em) field]. Depending on the energy of the photon (i.e., the wavelength of the em field) this transition may be seen as a sideband of a vibrational and/or
electronic transition. Pure rotational transitions, in which the vibronic (= vibrational plus electronic) wave function does not change, occur in the microwave region of the electromagnetic spectrum.
Typically, rotational transitions can only be observed when the angular momentum quantum number changes by formula_41 formula_42. This selection rule arises from a first-order perturbation theory approximation of the time-dependent Schrödinger equation. According to this treatment, rotational transitions can only be observed when one or more components of the dipole operator have a non-vanishing transition moment. If formula_43 is the direction of the electric field component of the incoming electromagnetic wave, the transition moment is,
formula_44
A transition occurs if this integral is non-zero. By separating the rotational part of the molecular wavefunction from the vibronic part, one can show that this means that the molecule must have a permanent dipole moment. After integration over the vibronic coordinates the following rotational part of the transition moment remains,
formula_45
Here formula_46 is the "z" component of the permanent dipole moment. The moment formula_14 is the vibronically averaged component of the dipole operator. Only the component of the permanent dipole along the axis of a heteronuclear molecule is non-vanishing.
By the use of the orthogonality of the spherical harmonics formula_47 it is possible to determine which values of formula_48, formula_49, formula_50, and formula_51 will result in nonzero values for the dipole transition moment integral. This constraint results in the observed selection rules for the rigid rotor:
formula_52
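These selection rules can be verified directly by evaluating the rotational transition-moment integral for a few pairs of states. The sketch below does this symbolically with SymPy, starting from the state ("l", "m") = (1, 0); only the final states with Δ"l" = ±1 and Δ"m" = 0 give a nonzero result. The function name and the choice of initial state are mine, used only for illustration.

```python
from sympy import Ynm, conjugate, cos, sin, pi, integrate, symbols, simplify

theta, phi = symbols("theta phi", real=True)

def moment(l, m, lp, mp):
    """<Y_l'^m' | cos(theta) | Y_l^m> over the unit sphere (angular part only)."""
    y_in  = Ynm(l, m, theta, phi).expand(func=True)
    y_out = Ynm(lp, mp, theta, phi).expand(func=True)
    integrand = conjugate(y_out) * cos(theta) * y_in * sin(theta)  # sin(theta): surface element
    return simplify(integrate(integrand, (phi, 0, 2 * pi), (theta, 0, pi)))

for lp in range(3):
    for mp in range(-lp, lp + 1):
        print(f"l'={lp}, m'={mp}:", moment(1, 0, lp, mp))
```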
Non-rigid linear rotor.
The rigid rotor is commonly used to describe the rotational energy of diatomic molecules but it is not a completely accurate description of such molecules. This is because molecular bonds (and therefore the interatomic distance formula_3) are not completely fixed; the bond between the atoms stretches out as the molecule rotates faster (higher values of the rotational quantum number formula_48). This effect can be accounted for by introducing a correction factor known as the centrifugal distortion constant formula_53 (bars on top of various quantities indicate that these quantities are expressed in cm−1):
formula_54
where
formula_55
Here formula_56 is the harmonic (fundamental) vibrational frequency of the bond (in cm−1), which is related to the reduced mass formula_14 and the force constant (bond strength) "k" of the bond by
formula_57
The non-rigid rotor is an acceptably accurate model for diatomic molecules but is still somewhat imperfect. This is because, although the model does account for bond stretching due to rotation, it ignores any bond stretching due to vibrational energy in the bond (anharmonicity in the potential).
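The size of the centrifugal-distortion correction can be seen from a small numerical comparison of rigid and non-rigid line positions using formula_54. The constants below (B̄ = 1.92 cm−1, D̄ = 6.1×10−6 cm−1, roughly CO-like) are illustrative assumptions, not values given in the article.

```python
Bbar, Dbar = 1.92, 6.1e-6     # assumed rotational and distortion constants, cm^-1

def E_rigid(l):
    return Bbar * l * (l + 1)

def E_nonrigid(l):
    # Ebar_l = Bbar l(l+1) - Dbar l^2 (l+1)^2   (formula_54)
    return Bbar * l * (l + 1) - Dbar * (l * (l + 1)) ** 2

for l in range(0, 30, 5):
    rigid  = E_rigid(l + 1) - E_rigid(l)          # l -> l+1 line position
    nonrig = E_nonrigid(l + 1) - E_nonrigid(l)
    print(f"l={l:2d}: rigid {rigid:7.3f} cm^-1, non-rigid {nonrig:7.3f} cm^-1")
```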
Arbitrarily shaped rigid rotor.
An arbitrarily shaped rigid rotor is a rigid body of arbitrary shape with its center of mass fixed (or in uniform rectilinear motion) in field-free space R3, so that its energy consists only of rotational kinetic energy (and possibly constant translational energy that can be ignored). A rigid body can be (partially) characterized by the three eigenvalues of its moment of inertia tensor, which are real nonnegative values known as "principal moments of inertia".
In microwave spectroscopy—the spectroscopy based on rotational transitions—one usually classifies molecules (seen as rigid rotors) as follows:
Spherical rotors: all three principal moments of inertia are equal.
Symmetric rotors (symmetric tops): two principal moments of inertia are equal; the rotor is prolate if the unique moment is the smallest and oblate if it is the largest.
Linear rotors: the moment of inertia about the molecular axis is negligibly small and the other two moments are equal.
Asymmetric rotors: all three principal moments of inertia are different.
This classification depends on the relative magnitudes of the principal moments of inertia.
Coordinates of the rigid rotor.
Different branches of physics and engineering use different coordinates for the description of the kinematics of a rigid rotor. In molecular physics Euler angles are used almost exclusively. In quantum mechanical applications it is advantageous to use Euler angles in a convention that is a simple extension of the physical convention of spherical polar coordinates.
The first step is the attachment of a right-handed orthonormal frame (3-dimensional system of orthogonal axes) to the rotor (a body-fixed frame). This frame can be attached arbitrarily to the body, but often one uses the principal axes frame—the normalized eigenvectors of the inertia tensor, which always can be chosen orthonormal, since the tensor is symmetric. When the rotor possesses a symmetry-axis, it usually coincides with one of the principal axes. It is convenient to choose
as body-fixed "z"-axis the highest-order symmetry axis.
One starts by aligning the body-fixed frame with a space-fixed frame (laboratory axes), so that the body-fixed "x", "y", and "z" axes coincide with the space-fixed "X", "Y", and "Z" axes. Secondly, the body and its frame are rotated actively over a positive angle formula_58 around the "z"-axis (by the right-hand rule), which moves the formula_59- to the formula_60-axis. Thirdly, one rotates the body and its frame over a positive angle formula_61 around the formula_60-axis. After these two rotations the "z"-axis of the body-fixed frame has the longitudinal angle formula_62 (commonly designated by formula_5) and the colatitude angle formula_61 (commonly designated by formula_63), both with respect to the space-fixed frame. If the rotor were cylindrically symmetric around its "z"-axis, like the linear rigid rotor, its orientation in space would be unambiguously specified at this point.
If the body lacks cylindrical (axial) symmetry, a last rotation around its "z"-axis (which has polar coordinates formula_61 and formula_58) is necessary to specify its orientation completely. Traditionally the last rotation angle is called formula_64.
The convention for Euler angles described here is known as the formula_65 convention; it can be shown (in the same manner as in this article) that it is equivalent to the formula_66 convention in which the order of rotations is reversed.
The total matrix of the three consecutive rotations is the product
formula_67
Let formula_68 be the coordinate vector of an arbitrary point formula_69 in the body with respect to the body-fixed frame. The elements of formula_68 are the 'body-fixed coordinates' of formula_69. Initially formula_68 is also the space-fixed coordinate vector of formula_69. Upon rotation of the body, the body-fixed coordinates of formula_69 do not change, but the space-fixed coordinate vector of formula_69 becomes,
formula_70
In particular, if formula_69 is initially on the space-fixed "Z"-axis, it has the space-fixed coordinates
formula_71
which shows the correspondence with the spherical polar coordinates (in the physical convention).
Knowledge of the Euler angles as functions of time "t" and of the initial coordinates formula_68 determines the kinematics of the rigid rotor.
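The rotation matrix of formula_67 and the resulting spherical-polar coordinates of formula_71 are easy to check numerically. The sketch below builds R(α, β, γ) as the product of the three elementary rotations and verifies that a point initially on the space-fixed "Z"-axis acquires the coordinates ("r" cos α sin β, "r" sin α sin β, "r" cos β). The angle values are arbitrary.

```python
import numpy as np

def R(alpha, beta, gamma):
    """z-y-z Euler rotation matrix R = Rz(alpha) Ry(beta) Rz(gamma)   (formula_67)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta),  np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz_a = np.array([[ca, -sa, 0], [sa,  ca, 0], [0, 0, 1]])
    Ry_b = np.array([[cb,  0, sb], [0,   1,  0], [-sb, 0, cb]])
    Rz_g = np.array([[cg, -sg, 0], [sg,  cg, 0], [0, 0, 1]])
    return Rz_a @ Ry_b @ Rz_g

alpha, beta, gamma, r = 0.7, 1.1, 0.4, 2.0
p = R(alpha, beta, gamma) @ np.array([0.0, 0.0, r])
expected = r * np.array([np.cos(alpha) * np.sin(beta),
                         np.sin(alpha) * np.sin(beta),
                         np.cos(beta)])
print(np.allclose(p, expected))   # True: the rotated point has spherical polar
                                  # coordinates (r, beta, alpha), as in formula_71
```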
Classical kinetic energy.
It will be assumed from here on that the body-fixed frame is a principal axes frame; it diagonalizes the instantaneous inertia tensor formula_72 (expressed with respect to the space-fixed frame), i.e.,
formula_73
where the Euler angles are time-dependent and in fact determine the time dependence of formula_74 by the inverse of this equation. This notation implies
that at formula_75 the Euler angles are zero, so that at formula_75 the body-fixed frame coincides with the space-fixed frame.
The classical kinetic energy "T" of the rigid rotor can be expressed in different ways:
as a function of the angular velocity,
in Lagrange form (as a function of the time derivatives of the Euler angles),
as a function of the angular momentum,
in Hamilton form (as a function of the generalized momenta conjugate to the Euler angles).
Since each of these forms has its use and can be found in textbooks we will present all of them.
Angular velocity form.
As a function of angular velocity "T" reads,
formula_76
with
formula_77
The vector formula_78 on the left hand side contains the components of the angular velocity of the rotor expressed with respect to the body-fixed frame. The angular velocity satisfies equations of motion known as Euler's equations (with zero applied torque, since by assumption the rotor is in field-free space). It can be shown that formula_79 is "not" the time derivative of any vector, in contrast to the usual definition of velocity.
The dots over the time-dependent Euler angles on the right hand side indicate time derivatives. Note that a different rotation matrix would result from a different choice of Euler angle convention.
Lagrange form.
Backsubstitution of the expression of formula_79 into "T" gives
the kinetic energy in Lagrange form (as a function of the time derivatives of the Euler angles). In matrix-vector notation,
formula_80
where formula_81 is the metric tensor expressed in Euler angles—a non-orthogonal system of curvilinear coordinates—
formula_82
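A quick numerical check that the Lagrange form reproduces the angular-velocity form: the sketch below evaluates the angular velocity from the matrix of formula_77, the kinetic energy from formula_76, and compares it with the quadratic form of formula_80 built from the metric tensor of formula_82. Moments of inertia, angles and angle rates are arbitrary illustrative values.

```python
import numpy as np

I1, I2, I3 = 1.3, 2.1, 3.4         # principal moments of inertia (arbitrary)
a, b, g = 0.5, 1.2, 0.8            # Euler angles alpha, beta, gamma
qdot = np.array([0.3, -0.7, 1.1])  # their time derivatives

# body-fixed angular velocity (formula_77)
M = np.array([[-np.sin(b) * np.cos(g), np.sin(g), 0.0],
              [ np.sin(b) * np.sin(g), np.cos(g), 0.0],
              [ np.cos(b),             0.0,       1.0]])
wx, wy, wz = M @ qdot
T_omega = 0.5 * (I1 * wx**2 + I2 * wy**2 + I3 * wz**2)       # formula_76

# metric tensor in Euler angles (formula_82)
gmat = np.array([
    [I1*np.sin(b)**2*np.cos(g)**2 + I2*np.sin(b)**2*np.sin(g)**2 + I3*np.cos(b)**2,
     (I2 - I1)*np.sin(b)*np.sin(g)*np.cos(g),
     I3*np.cos(b)],
    [(I2 - I1)*np.sin(b)*np.sin(g)*np.cos(g),
     I1*np.sin(g)**2 + I2*np.cos(g)**2,
     0.0],
    [I3*np.cos(b), 0.0, I3]])
T_lagrange = 0.5 * qdot @ gmat @ qdot                         # formula_80

print(np.isclose(T_omega, T_lagrange))                        # True
```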
Angular momentum form.
Often the kinetic energy is written as a function of the angular momentum formula_83 of the rigid rotor. With respect to the body-fixed frame it has the components formula_84, and can be shown to be related to the angular velocity,
formula_85
This angular momentum is a conserved (time-independent) quantity if viewed from a stationary space-fixed frame. Since the body-fixed frame moves (depends on time) the components formula_84 are "not" time independent. If we were to represent formula_83 with respect to the stationary space-fixed frame, we would
find time independent expressions for its components.
The kinetic energy is expressed in terms of the angular momentum by
formula_86
Hamilton form.
The Hamilton form of the kinetic energy is written in terms of generalized momenta
formula_87
where it is used that formula_81 is symmetric. In Hamilton form the kinetic energy is,
formula_88
with the inverse metric tensor given by
formula_89
This inverse tensor is needed to obtain the Laplace-Beltrami operator, which (multiplied by formula_90) gives the quantum mechanical energy operator of the rigid rotor.
The classical Hamiltonian given above can be rewritten to the following expression, which is needed in the phase integral arising in the classical statistical mechanics of rigid rotors,
formula_91
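The explicit Hamilton form above can likewise be checked numerically: build the metric tensor (here obtained as Mᵀ diag("I"₁, "I"₂, "I"₃) M from the angular-velocity matrix of formula_77, which reproduces formula_82), form the generalized momenta of formula_87, and compare the quadratic form of formula_88 with the expression of formula_91. All numerical values are arbitrary.

```python
import numpy as np

I1, I2, I3 = 1.3, 2.1, 3.4
a, b, g = 0.5, 1.2, 0.8
qdot = np.array([0.3, -0.7, 1.1])           # (alpha_dot, beta_dot, gamma_dot)

# metric tensor g = M^T diag(I) M, with M the angular-velocity matrix (formula_77)
M = np.array([[-np.sin(b) * np.cos(g), np.sin(g), 0.0],
              [ np.sin(b) * np.sin(g), np.cos(g), 0.0],
              [ np.cos(b),             0.0,       1.0]])
gmat = M.T @ np.diag([I1, I2, I3]) @ M

# generalized momenta (formula_87) and kinetic energy as p^T g^{-1} p / 2 (formula_88)
p = gmat @ qdot
p_a, p_b, p_g = p
T_matrix = 0.5 * p @ np.linalg.solve(gmat, p)

# explicit Hamilton form (formula_91)
u = p_a - p_g * np.cos(b)
T_explicit = ((u * np.cos(g) - p_b * np.sin(b) * np.sin(g))**2 / (2 * I1 * np.sin(b)**2)
              + (u * np.sin(g) + p_b * np.sin(b) * np.cos(g))**2 / (2 * I2 * np.sin(b)**2)
              + p_g**2 / (2 * I3))

print(np.isclose(T_matrix, T_explicit))     # True
```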
Quantum mechanical rigid rotor.
As usual quantization is performed by the replacement of the generalized momenta by operators that give first derivatives with respect to their canonically conjugate variables (positions). Thus,
formula_92
and similarly for formula_93 and formula_94. It is remarkable that this rule replaces the fairly complicated function formula_95 of all three Euler angles, time derivatives of Euler angles, and inertia moments (characterizing the rigid rotor) by a simple differential operator that does not depend on time or inertia moments and differentiates with respect to one Euler angle only.
The quantization rule is sufficient to obtain the operators that correspond with the classical angular momenta. There are two kinds: space-fixed and body-fixed
angular momentum operators. Both are vector operators, i.e., both have three components that transform as vector components among themselves upon rotation of the space-fixed and the body-fixed frame, respectively. The explicit form of the rigid rotor angular momentum operators is given here (but beware, they must be multiplied with formula_19). The body-fixed angular momentum operators are written as formula_96. They satisfy "anomalous commutation relations".
The quantization rule is "not" sufficient to obtain the kinetic energy operator from the classical Hamiltonian. Since classically formula_93 commutes with formula_97 and formula_98 and the inverses of these functions, the position of these trigonometric functions in the classical Hamiltonian is arbitrary. After
quantization the commutation no longer holds and the order of operators and functions in the Hamiltonian (energy operator) becomes a point of concern. Podolsky proposed in 1928 that the Laplace-Beltrami operator (times formula_99) has the appropriate form for the quantum mechanical kinetic energy operator. This operator has the general form (summation convention: sum over repeated indices—in this case over the three Euler angles formula_100):
formula_101
where formula_102 is the determinant of the g-tensor:
formula_103
Given the inverse of the metric tensor above, the explicit form of the kinetic energy operator in terms of Euler angles follows by simple substitution. (Note: The corresponding eigenvalue equation gives the Schrödinger equation for the rigid rotor in the form that it was solved for the first time by Kronig and Rabi (for the special case of the symmetric rotor). This is one of the few cases where the Schrödinger equation can be solved analytically. All these cases were solved within a year of the formulation of the Schrödinger equation.)
Nowadays it is common to proceed as follows. It can be shown that formula_104 can be expressed in body-fixed angular momentum operators (in this proof one must carefully commute differential operators with trigonometric functions). The result has the same appearance as the classical formula expressed in body-fixed coordinates,
formula_105
The action of the formula_96 on the Wigner D-matrix is simple. In particular
formula_106
so that the Schrödinger equation for the spherical rotor (formula_107) is solved with the formula_108 degenerate energy equal to formula_109.
The symmetric top (= symmetric rotor) is characterized by formula_110. It is a "prolate" (cigar shaped) top if formula_111. In the latter case we write the Hamiltonian as
formula_112
and use that
formula_113
Hence
formula_114
The eigenvalue formula_115 is formula_116-fold degenerate, since all eigenfunctions with formula_117 have the same eigenvalue. The energies with |k| > 0 are formula_118-fold degenerate. This exact solution of the Schrödinger equation of the symmetric top was first found in 1927.
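The level structure described here is easy to tabulate. The sketch below lists the lowest symmetric-top levels from formula_114 together with their degeneracies, for an assumed prolate case with "I"₃ < "I"₁ = "I"₂ (the numbers are illustrative only, in units where ħ = 1).

```python
I1, I3 = 2.0, 1.0      # assumed prolate symmetric top: I3 < I1 = I2

for j in range(3):
    for k in range(j + 1):
        # E_jk / hbar^2 = j(j+1)/(2 I1) + k^2 (1/(2 I3) - 1/(2 I1))   (formula_114)
        E = j * (j + 1) / (2 * I1) + k**2 * (1 / (2 * I3) - 1 / (2 * I1))
        degeneracy = (2 * j + 1) if k == 0 else 2 * (2 * j + 1)
        print(f"j={j}, |k|={k}: E = {E:.3f}, degeneracy = {degeneracy}")
```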
The asymmetric top problem (formula_119) is not soluble analytically.
Direct experimental observation of molecular rotations.
For a long time, molecular rotations could not be directly observed experimentally. Only measurement techniques with atomic resolution made it possible to detect the rotation of a single molecule. At low temperatures, the rotations of molecules (or part thereof) can be frozen. This could be directly visualized by scanning tunneling microscopy; the stabilization could be explained at higher temperatures by the rotational entropy. The direct observation of rotational excitation at the single-molecule level was achieved recently using inelastic electron tunneling spectroscopy with the scanning tunneling microscope. The rotational excitation of molecular hydrogen and its isotopes was detected.
| [
{
"math_id": 0,
"text": "m_1"
},
{
"math_id": 1,
"text": "m_2"
},
{
"math_id": 2,
"text": "\\mu = \\frac{m_1 m_2}{m_1 + m_2}"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "\\theta \\,"
},
{
"math_id": 5,
"text": "\\varphi\\,"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "\n2T = \\mu R^2 \\left[\\dot{\\theta}^2 + (\\dot\\varphi\\,\\sin\\theta)^2\\right] =\n\\mu R^2 \\begin{pmatrix}\\dot{\\theta} & \\dot{\\varphi}\\end{pmatrix}\n\\begin{pmatrix}\n1 & 0 \\\\\n0 & \\sin^2\\theta \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\\dot{\\theta} \\\\ \\dot{\\varphi}\\end{pmatrix}\n= \n\\mu \\begin{pmatrix}\\dot{\\theta} & \\dot{\\varphi}\\end{pmatrix}\n\\begin{pmatrix}\nh_\\theta^2 & 0 \\\\\n0 & h_\\varphi^2 \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\\dot{\\theta} \\\\ \\dot{\\varphi}\\end{pmatrix},\n"
},
{
"math_id": 8,
"text": "h_\\theta = R\\, "
},
{
"math_id": 9,
"text": "h_\\varphi= R\\sin\\theta\\,"
},
{
"math_id": 10,
"text": "\n\\nabla^2 = \\frac{1}{h_\\theta h_\\varphi}\\left[ \n \\frac{\\partial}{\\partial \\theta} \\frac{h_\\varphi}{h_\\theta} \\frac{\\partial}{\\partial \\theta}\n + \\frac{\\partial}{\\partial \\varphi} \\frac{h_\\theta}{h_\\varphi} \\frac{\\partial}{\\partial \\varphi}\n\\right] =\n\\frac{1}{R^2}\\left[\n \\frac{1}{\\sin\\theta}\\frac{\\partial}{\\partial\\theta}\n \\sin\\theta\\frac{\\partial}{\\partial\\theta}\n + \\frac{1}{\\sin^2\\theta}\\frac{\\partial^2}{\\partial\\varphi^2}\n\\right].\n"
},
{
"math_id": 11,
"text": "\nH = \\frac{1}{2\\mu R^2}\\left[p^2_{\\theta} + \\frac{p^2_{\\varphi}}{\\sin^2\\theta}\\right].\n"
},
{
"math_id": 12,
"text": "I "
},
{
"math_id": 13,
"text": " I = \\mu R^2"
},
{
"math_id": 14,
"text": "\\mu"
},
{
"math_id": 15,
"text": "\\hat H \\Psi = E \\Psi "
},
{
"math_id": 16,
"text": "\\Psi"
},
{
"math_id": 17,
"text": "\\hat H"
},
{
"math_id": 18,
"text": "\\hat H = - \\frac{\\hbar^2}{2\\mu} \\nabla^2"
},
{
"math_id": 19,
"text": "\\hbar"
},
{
"math_id": 20,
"text": "\\nabla^2"
},
{
"math_id": 21,
"text": "\\hat H =- \\frac{\\hbar^2}{2I} \\left [ {1 \\over \\sin \\theta} {\\partial \\over \\partial \\theta} \\left ( \\sin \\theta {\\partial \\over \\partial \\theta} \\right) + {1 \\over {\\sin^2 \\theta}} {\\partial^2 \\over \\partial \\varphi^2} \\right]"
},
{
"math_id": 22,
"text": "\n \\hat H Y_\\ell^m (\\theta, \\varphi) = \\frac{\\hbar^2}{2I} \\ell(\\ell+1) Y_\\ell^m (\\theta, \\varphi). \n"
},
{
"math_id": 23,
"text": "Y_\\ell^m (\\theta, \\varphi)"
},
{
"math_id": 24,
"text": "m \\,"
},
{
"math_id": 25,
"text": " E_\\ell = {\\hbar^2 \\over 2I} \\ell \\left (\\ell+1\\right)"
},
{
"math_id": 26,
"text": "2\\ell+1"
},
{
"math_id": 27,
"text": "\\ell"
},
{
"math_id": 28,
"text": "m=-\\ell,-\\ell+1,\\dots,\\ell"
},
{
"math_id": 29,
"text": "B"
},
{
"math_id": 30,
"text": " E_\\ell = B\\; \\ell \\left (\\ell+1\\right)\\quad\n\\textrm{with}\\quad B \\equiv \\frac{\\hbar^2}{2I}.\n"
},
{
"math_id": 31,
"text": " \\bar B \\equiv \\frac{B}{hc} = \\frac{h}{8\\pi^2cI} = \\frac{\\hbar}{4\\pi c \\mu R_e^2}, "
},
{
"math_id": 32,
"text": "h"
},
{
"math_id": 33,
"text": "c"
},
{
"math_id": 34,
"text": "I"
},
{
"math_id": 35,
"text": "\\bar B"
},
{
"math_id": 36,
"text": "\\bar B(R)"
},
{
"math_id": 37,
"text": " B_e = \\bar B(R_e) "
},
{
"math_id": 38,
"text": "R_e"
},
{
"math_id": 39,
"text": "\\Delta l = +1"
},
{
"math_id": 40,
"text": "2\\bar B"
},
{
"math_id": 41,
"text": "1"
},
{
"math_id": 42,
"text": "(\\Delta l = \\pm 1)"
},
{
"math_id": 43,
"text": "z"
},
{
"math_id": 44,
"text": "\n\\langle \\psi_2 | \\mu_z | \\psi_1\\rangle =\n\\left ( \\mu_z \\right )_{21} = \\int \\psi_2^*\\mu_z\\psi_1\\, \\mathrm{d}\\tau .\n"
},
{
"math_id": 45,
"text": " \n\\left ( \\mu_z \\right )_{l,m;l',m'} = \\mu \\int_0^{2\\pi} \\mathrm{d}\\phi \\int_0^\\pi Y_{l'}^{m'} \\left ( \\theta , \\phi \\right )^* \\cos \\theta\\,Y_l^m\\, \\left ( \\theta , \\phi \\right )\\; \\mathrm{d}\\cos\\theta .\n"
},
{
"math_id": 46,
"text": "\\mu \\cos\\theta \\, "
},
{
"math_id": 47,
"text": "Y_l^m\\, \\left ( \\theta , \\phi \\right )"
},
{
"math_id": 48,
"text": "l"
},
{
"math_id": 49,
"text": "m"
},
{
"math_id": 50,
"text": "l'"
},
{
"math_id": 51,
"text": "m'"
},
{
"math_id": 52,
"text": " \n\\Delta m = 0 \\quad\\hbox{and}\\quad \\Delta l = \\pm 1 \n"
},
{
"math_id": 53,
"text": "\\bar{D}"
},
{
"math_id": 54,
"text": " \\bar E_l = {E_l \\over hc} = \\bar {B}l \\left (l+1\\right ) - \\bar {D}l^2 \\left (l+1\\right )^2"
},
{
"math_id": 55,
"text": " \\bar D = {4 \\bar {B}^3 \\over \\bar{\\boldsymbol\\omega}^2}"
},
{
"math_id": 56,
"text": "\\bar{\\boldsymbol\\omega}"
},
{
"math_id": 57,
"text": " \\bar{\\boldsymbol\\omega} = {1\\over 2\\pi c} \\sqrt{k \\over \\mu }"
},
{
"math_id": 58,
"text": "\\alpha\\,"
},
{
"math_id": 59,
"text": "y"
},
{
"math_id": 60,
"text": "y'"
},
{
"math_id": 61,
"text": "\\beta\\,"
},
{
"math_id": 62,
"text": "\\alpha \\,"
},
{
"math_id": 63,
"text": "\\theta\\,"
},
{
"math_id": 64,
"text": "\\gamma\\,"
},
{
"math_id": 65,
"text": "z''-y'-z"
},
{
"math_id": 66,
"text": "z-y-z"
},
{
"math_id": 67,
"text": "\n\\mathbf{R}(\\alpha,\\beta,\\gamma)=\n\\begin{pmatrix}\n\\cos\\alpha & -\\sin\\alpha & 0 \\\\\n\\sin\\alpha & \\cos\\alpha & 0 \\\\\n 0 & 0 & 1\n\\end{pmatrix}\n\\begin{pmatrix}\n\\cos\\beta & 0 & \\sin\\beta \\\\\n 0 & 1 & 0 \\\\\n-\\sin\\beta & 0 & \\cos\\beta \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n\\cos\\gamma & -\\sin\\gamma & 0 \\\\\n\\sin\\gamma & \\cos\\gamma & 0 \\\\\n 0 & 0 & 1\n\\end{pmatrix}\n"
},
{
"math_id": 68,
"text": "\\mathbf{r}(0)"
},
{
"math_id": 69,
"text": "\\mathcal{P}"
},
{
"math_id": 70,
"text": " \n\\mathbf{r}(\\alpha,\\beta,\\gamma)= \\mathbf{R}(\\alpha,\\beta,\\gamma)\\mathbf{r}(0).\n"
},
{
"math_id": 71,
"text": " \n\\mathbf{R}(\\alpha,\\beta,\\gamma)\n\\begin{pmatrix}\n0 \\\\\n0 \\\\\nr \\\\\n\\end{pmatrix}=\n\\begin{pmatrix}\nr \\cos\\alpha\\sin\\beta \\\\\nr \\sin\\alpha \\sin\\beta \\\\\nr \\cos\\beta \\\\\n\\end{pmatrix},\n"
},
{
"math_id": 72,
"text": " \\mathbf{I}(t)"
},
{
"math_id": 73,
"text": " \n\\mathbf{R}(\\alpha,\\beta,\\gamma)^{-1}\\; \\mathbf{I}(t)\\; \\mathbf{R}(\\alpha,\\beta,\\gamma)\n= \\mathbf{I}(0)\\quad\\hbox{with}\\quad\n\\mathbf{I}(0) =\n\\begin{pmatrix}\nI_1 & 0 & 0 \\\\ 0 & I_2 & 0 \\\\ 0 & 0 & I_3 \\\\\n\\end{pmatrix},\n"
},
{
"math_id": 74,
"text": "\\mathbf{I}(t)"
},
{
"math_id": 75,
"text": "t=0"
},
{
"math_id": 76,
"text": "\n T = \\frac{1}{2} \\left[ I_1 \\omega_x^2 + I_2 \\omega_y^2+ I_3 \\omega_z^2 \\right]\n"
},
{
"math_id": 77,
"text": "\n\\begin{pmatrix}\n\\omega_x \\\\\n\\omega_y \\\\\n\\omega_z \\\\\n\\end{pmatrix}\n= \n\\begin{pmatrix}\n-\\sin\\beta\\cos\\gamma & \\sin\\gamma & 0 \\\\\n \\sin\\beta\\sin\\gamma & \\cos\\gamma & 0 \\\\\n \\cos\\beta & 0 & 1 \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n\\dot{\\alpha} \\\\\n\\dot{\\beta} \\\\\n\\dot{\\gamma} \\\\\n\\end{pmatrix}.\n"
},
{
"math_id": 78,
"text": "\\boldsymbol{\\omega} = (\\omega_x, \\omega_y, \\omega_z) "
},
{
"math_id": 79,
"text": "\\boldsymbol{\\omega}"
},
{
"math_id": 80,
"text": "\n2 T =\n\\begin{pmatrix} \n\\dot{\\alpha} & \\dot{\\beta} & \\dot{\\gamma}\n\\end{pmatrix}\n\\; \\mathbf{g} \\;\n\\begin{pmatrix} \n\\dot{\\alpha} \\\\ \\dot{\\beta} \\\\ \\dot{\\gamma}\\\\\n\\end{pmatrix},\n"
},
{
"math_id": 81,
"text": "\\mathbf{g}"
},
{
"math_id": 82,
"text": "\n\\mathbf{g}=\n\\begin{pmatrix}\nI_1 \\sin^2\\beta \\cos^2\\gamma+I_2\\sin^2\\beta\\sin^2\\gamma+I_3\\cos^2\\beta &\n(I_2-I_1) \\sin\\beta\\sin\\gamma\\cos\\gamma &\nI_3\\cos\\beta \\\\\n(I_2-I_1) \\sin\\beta\\sin\\gamma\\cos\\gamma &\nI_1\\sin^2\\gamma+I_2\\cos^2\\gamma & 0 \\\\\nI_3\\cos\\beta & 0 & I_3 \\\\\n\\end{pmatrix}.\n"
},
{
"math_id": 83,
"text": "\\mathbf{L}"
},
{
"math_id": 84,
"text": "L_i"
},
{
"math_id": 85,
"text": "\n\\mathbf{L} = \n\\mathbf{I}(0)\\;\n\\boldsymbol{\\omega}\\quad\\hbox{or}\\quad L_i = \\frac{\\partial T}{\\partial\\omega_i},\\;\\; i=x,\\,y,\\,z.\n"
},
{
"math_id": 86,
"text": "\n T = \\frac{1}{2} \\left[ \\frac{L_x^2}{I_1} + \\frac{L_y^2}{I_2}+ \\frac{L_z^2}{I_3}\\right].\n"
},
{
"math_id": 87,
"text": "\n\\begin{pmatrix}\np_\\alpha \\\\\np_\\beta \\\\\np_\\gamma \\\\\n\\end{pmatrix}\n\\mathrel\\stackrel{\\mathrm{def}}{=}\n\\begin{pmatrix}\n\\partial T/{\\partial \\dot{\\alpha}}\\\\ \n\\partial T/{\\partial \\dot{\\beta}} \\\\\n\\partial T/{\\partial \\dot{\\gamma}} \\\\\n\\end{pmatrix}\n= \\mathbf{g} \n\\begin{pmatrix} \\; \\,\n\\dot{\\alpha} \\\\ \\dot{\\beta} \\\\ \\dot{\\gamma}\\\\\n\\end{pmatrix},\n"
},
{
"math_id": 88,
"text": "\n2 T =\n\\begin{pmatrix} \np_{\\alpha} & p_{\\beta} & p_{\\gamma}\n\\end{pmatrix}\n\\; \\mathbf{g}^{-1} \\;\n\\begin{pmatrix} \np_{\\alpha} \\\\ p_{\\beta} \\\\ p_{\\gamma}\\\\\n\\end{pmatrix},\n"
},
{
"math_id": 89,
"text": "\n \\sin^2\\beta\\; \\mathbf{g}^{-1} = \n \\begin{pmatrix}\n \\frac{1}{I_1}\\cos^2\\gamma + \\frac{1}{I_2}\\sin^2\\gamma &\n \\left(\\frac{1}{I_2} - \\frac{1}{I_1}\\right)\\sin\\beta\\sin\\gamma\\cos\\gamma &\n -\\frac{1}{I_1}\\cos\\beta\\cos^2\\gamma - \\frac{1}{I_2}\\cos\\beta\\sin^2\\gamma \\\\\n \\left(\\frac{1}{I_2} - \\frac{1}{I_1}\\right)\\sin\\beta\\sin\\gamma\\cos\\gamma &\n \\frac{1}{I_1}\\sin^2\\beta\\sin^2\\gamma + \\frac{1}{I_2}\\sin^2\\beta\\cos^2\\gamma &\n \\left(\\frac{1}{I_1} - \\frac{1}{I_2}\\right)\\sin\\beta\\cos\\beta\\sin\\gamma\\cos\\gamma \\\\\n -\\frac{1}{I_1}\\cos\\beta\\cos^2\\gamma - \\frac{1}{I_2}\\cos\\beta\\sin^2\\gamma &\n \\left(\\frac{1}{I_1} - \\frac{1}{I_2}\\right)\\sin\\beta\\cos\\beta\\sin\\gamma\\cos\\gamma &\n \\frac{1}{I_1}\\cos^2\\beta\\cos^2\\gamma + \\frac{1}{I_2}\\cos^2\\beta\\sin^2\\gamma + \\frac{1}{I_3}\\sin^2\\beta \\\\\n \\end{pmatrix}.\n"
},
{
"math_id": 90,
"text": "-\\hbar^2"
},
{
"math_id": 91,
"text": "\\begin{align}\n T ={} &\\frac{1}{2I_1 \\sin^2\\beta}\n \\left( (p_\\alpha - p_\\gamma\\cos\\beta)\\cos\\gamma -\n p_\\beta\\sin\\beta\\sin\\gamma \\right)^2 +{} \\\\\n &\\frac{1}{2I_2 \\sin^2\\beta}\n \\left( (p_\\alpha - p_\\gamma\\cos\\beta)\\sin\\gamma +\n p_\\beta\\sin\\beta\\cos\\gamma \\right)^2 + \\frac{p_\\gamma^2}{2I_3}. \\\\\n\\end{align}"
},
{
"math_id": 92,
"text": "\np_\\alpha \\longrightarrow -i \\hbar \\frac{\\partial}{\\partial \\alpha}\n"
},
{
"math_id": 93,
"text": "p_\\beta"
},
{
"math_id": 94,
"text": "p_\\gamma"
},
{
"math_id": 95,
"text": "p_\\alpha"
},
{
"math_id": 96,
"text": "\\hat{\\mathcal{P}}_i"
},
{
"math_id": 97,
"text": "\\cos\\beta"
},
{
"math_id": 98,
"text": "\\sin\\beta"
},
{
"math_id": 99,
"text": "-\\tfrac{1}{2}\\hbar^2"
},
{
"math_id": 100,
"text": " q^1,\\,q^2,\\,q^3 \\equiv \\alpha,\\,\\beta,\\,\\gamma"
},
{
"math_id": 101,
"text": "\n\\hat{H} = - \\frac{\\hbar^2}{2}\\;|g|^{-\\frac{1}{2}}\n\\frac{\\partial}{\\partial q^i} |g|^\\frac{1}{2} g^{ij} \\frac{\\partial}{\\partial q^j},\n"
},
{
"math_id": 102,
"text": "|g|"
},
{
"math_id": 103,
"text": "\n|g| = I_1\\, I_2\\, I_3\\, \\sin^2\\beta \\quad \\hbox{and}\\quad g^{ij} = \\left(\\mathbf{g}^{-1}\\right)_{ij}.\n"
},
{
"math_id": 104,
"text": "\\hat{H}"
},
{
"math_id": 105,
"text": "\n\\hat{H} = \\frac{1}{2}\\left[ \\frac{\\mathcal{P}_x^2}{I_1} + \\frac{\\mathcal{P}_y^2}{I_2} +\n\\frac{\\mathcal{P}_z^2}{I_3} \\right].\n"
},
{
"math_id": 106,
"text": "\n\\mathcal{P}^2\\, D^j_{m'm}(\\alpha,\\beta,\\gamma)^* = \\hbar^2 j(j+1) D^j_{m'm}(\\alpha,\\beta,\\gamma)^* \\quad\\hbox{with}\\quad\n\\mathcal{P}^2 = \\mathcal{P}^2_x + \\mathcal{P}_y^2+ \\mathcal{P}_z^2,\n"
},
{
"math_id": 107,
"text": "I=I_1=I_2=I_3"
},
{
"math_id": 108,
"text": " (2j+1)^2 "
},
{
"math_id": 109,
"text": "\\tfrac{\\hbar^2 j(j+1)}{2I}"
},
{
"math_id": 110,
"text": "I_1=I_2"
},
{
"math_id": 111,
"text": "I_3 < I_1 = I_2"
},
{
"math_id": 112,
"text": "\n\\hat{H} = \\frac{1}{2}\\left[ \\frac{\\mathcal{P}^2}{I_1} + \\mathcal{P}_z^2\\left(\\frac{1}{I_3}\n-\\frac{1}{I_1} \\right) \\right],\n"
},
{
"math_id": 113,
"text": "\n\\mathcal{P}_z^2\\, D^j_{m k}(\\alpha,\\beta,\\gamma)^* = \\hbar^2 k^2\\, D^j_{m k}(\\alpha,\\beta,\\gamma)^*.\n"
},
{
"math_id": 114,
"text": "\n\\hat{H}\\,D^j_{m k}(\\alpha,\\beta,\\gamma)^* = E_{jk} D^j_{m k}(\\alpha,\\beta,\\gamma)^*\n\\quad \\hbox{with}\\quad \\frac{1}{\\hbar^2}E_{jk} = \\frac{j(j + 1)}{2I_1} + k^2\\left(\\frac{1}{2I_3} - \\frac{1}{2I_1}\\right).\n"
},
{
"math_id": 115,
"text": "E_{j0}"
},
{
"math_id": 116,
"text": "2j+1"
},
{
"math_id": 117,
"text": "m = -j, -j+1, \\dots, j"
},
{
"math_id": 118,
"text": "2(2j+1)"
},
{
"math_id": 119,
"text": " I_1 \\ne I_2 \\ne I_3 "
}
] | https://en.wikipedia.org/wiki?curid=1243550 |
12437622 | Problems in Latin squares | In mathematics, the theory of Latin squares is an active research area with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Problems posed here appeared in, for instance, the "Loops (Prague)" conferences and the "Milehigh (Denver)" conferences.
Open problems.
Bounds on maximal number of transversals in a Latin square.
A "transversal" in a Latin square of order "n" is a set "S" of "n" cells such that every row and every column contains exactly one cell of "S", and such that the symbols in "S" form {1, ..., "n"}. Let "T"("n") be the maximum number of transversals in a Latin square of order "n". Estimate "T"("n").
"n" "n"!, where "c" > 1 and "d" is about 0.6. A conjecture by Rivin, Vardi and Zimmermann (Rivin et al., 1994) says that you can place at least exp("c" "n" log "n") queens in non-attacking positions on a toroidal chessboard (for some constant "c"). If true this would imply that "T"("n") > exp("c" "n" log "n"). A related question is to estimate the number of transversals in the Cayley tables of cyclic groups of odd order. In other words, how many orthomorphisms do these groups have?
The minimum number of transversals of a Latin square is also an open problem. H. J. Ryser conjectured (Oberwolfach, 1967) that every Latin square of odd order has one. Closely related is the conjecture, attributed to Richard Brualdi, that every Latin square of order "n" has a partial transversal of order at least "n" − 1.
Characterization of Latin subsquares in multiplication tables of Moufang loops.
Describe how all Latin subsquares in multiplication tables of Moufang loops arise.
Densest partial Latin squares with Blackburn property.
A partial Latin square has "Blackburn property" if whenever the cells ("i", "j") and ("k", "l") are occupied by the same symbol, the opposite corners ("i", "l") and ("k", "j") are empty. What is the highest achievable density of filled cells in a partial Latin square with the Blackburn property? In particular, is there some constant "c" > 0 such that we can always fill at least "c" "n"2 cells?
Largest power of 2 dividing the number of Latin squares.
Let formula_0 be the number of Latin squares of order "n". What is the largest integer formula_1 such that formula_2 divides formula_0? Does formula_1 grow quadratically in "n"?
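For very small orders the question can be explored directly: the sketch below counts all Latin squares of order "n" ≤ 5 by row-by-row backtracking and reports the exact power of 2 dividing the count. (Order 5 may already take a few minutes in pure Python; larger orders are out of reach for this naive method.)

```python
from itertools import permutations

def count_latin_squares(n):
    """Count all Latin squares of order n by filling rows with backtracking."""
    rows = list(permutations(range(n)))
    used = [set() for _ in range(n)]          # symbols already used in each column

    def extend(depth):
        if depth == n:
            return 1
        total = 0
        for row in rows:
            if all(row[c] not in used[c] for c in range(n)):
                for c in range(n):
                    used[c].add(row[c])
                total += extend(depth + 1)
                for c in range(n):
                    used[c].remove(row[c])
        return total

    return extend(0)

for n in range(1, 6):
    L = count_latin_squares(n)
    p, m = 0, L
    while m % 2 == 0:
        m //= 2
        p += 1
    print(f"n={n}: L_n = {L}, exactly divisible by 2^{p}")
```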
The known values of formula_0 (which can be written as formula_3, where formula_4 is the number of reduced Latin squares of order "n") suggest that the power of 2 is growing superlinearly. The best current result is that formula_4 is always divisible by "f"!, where "f" is about "n"/2. See (McKay and Wanless, 2003). Two authors noticed the suspiciously high power of 2 (without being able to shed much light on it): (Alter, 1975), (Mullen, 1978). | [
{
"math_id": 0,
"text": "L_n"
},
{
"math_id": 1,
"text": "p(n)"
},
{
"math_id": 2,
"text": "2^{p(n)}"
},
{
"math_id": 3,
"text": "L_n=n!(n-1)!R_n"
},
{
"math_id": 4,
"text": "R_n"
}
] | https://en.wikipedia.org/wiki?curid=12437622 |
12437648 | Antisymmetrizer | Operator in Quantum mechanics ensuring fermionic compliance with the Pauli exclusion principle
In quantum mechanics, an antisymmetrizer formula_0 (also known as an antisymmetrizing operator) is a linear operator that makes a wave function of "N" identical fermions antisymmetric under the exchange of the coordinates of any pair of fermions. After application of formula_0 the wave function satisfies the Pauli exclusion principle. Since formula_0 is a projection operator, application of the antisymmetrizer to a wave function that is already totally antisymmetric has no effect, acting as the identity operator.
Mathematical definition.
Consider a wave function depending on the space and spin coordinates of "N" fermions:
formula_1
where the position vector r"i" of particle "i" is a vector in formula_2 and σi takes on 2"s"+1 values, where "s" is the half-integral intrinsic spin of the fermion. For electrons "s" = 1/2 and σ can have two values ("spin-up": 1/2 and "spin-down": −1/2). It is assumed that the positions of the coordinates in the notation for Ψ have a well-defined meaning. For instance, the 2-fermion function Ψ(1,2) will in general not be the same as Ψ(2,1). This implies that in general formula_3 and therefore we can define meaningfully a "transposition operator" formula_4 that interchanges the coordinates of particles "i" and "j". In general this operator will not be equal to the identity operator (although in special cases it may be).
A transposition has the
parity (also known as signature) −1. The Pauli principle postulates that a wave function of identical fermions must be an eigenfunction of a transposition operator with its parity as eigenvalue
formula_5
Here we associated the transposition operator formula_4 with the permutation of coordinates "π" that acts on the set of "N" coordinates. In this case "π" = ("ij"), where ("ij") is the cycle notation for the transposition of the coordinates of particle "i" and "j".
Transpositions may be composed (applied in sequence). This defines a product between the transpositions that is associative.
It can be shown that an arbitrary permutation of "N" objects can be written as a product of transpositions and that the number of transpositions in this decomposition is of fixed parity. That is, either a permutation is always decomposed into an even number of transpositions (the permutation is called even and has the parity +1), or a permutation is always decomposed into an odd number of transpositions and then it is an odd permutation with parity −1. Denoting the parity of an arbitrary permutation "π" by (−1)"π", it follows that an antisymmetric wave function satisfies
formula_6
where we associated the linear operator formula_7 with the permutation π.
The set of all "N"! permutations with the associative product: "apply one permutation after the other", is a group, known as the permutation group or symmetric group, denoted by "S""N". We define the antisymmetrizer as
formula_8
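On a finite one-particle basis a wave function of "N" particles is just an "N"-index array, and the antisymmetrizer can be applied literally as the signed sum over axis permutations. The sketch below does this for three particles and checks two properties discussed in the next section: the result is unchanged by a second application (projection property) and changes sign under exchange of two particle indices. The array size and the random wave function are arbitrary.

```python
import numpy as np
from math import factorial
from itertools import permutations

def parity(perm):
    """Parity (+1 or -1) of a permutation given as a tuple of indices."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def antisymmetrize(psi):
    """Apply A = (1/N!) * sum over permutations of (-1)^pi * P_pi   (formula_8)."""
    N = psi.ndim
    out = np.zeros_like(psi)
    for perm in permutations(range(N)):
        out += parity(perm) * np.transpose(psi, perm)
    return out / factorial(N)

rng = np.random.default_rng(0)
psi = rng.normal(size=(4, 4, 4))      # arbitrary 3-particle wave function, 4 basis states each
apsi = antisymmetrize(psi)

print(np.allclose(antisymmetrize(apsi), apsi))      # projector: applying A twice changes nothing
print(np.allclose(np.swapaxes(apsi, 0, 1), -apsi))  # antisymmetric under exchange of particles 1, 2
```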
Properties of the antisymmetrizer.
In the representation theory of finite groups the antisymmetrizer is a well-known object, because the set of parities formula_9 forms a one-dimensional (and hence irreducible) representation of the permutation group known as the "antisymmetric representation". The representation being one-dimensional, the set of parities forms the character of the antisymmetric representation. The antisymmetrizer is in fact a character projection operator and is quasi-idempotent (its square is proportional to itself).
This has the consequence that for "any" "N"-particle wave function Ψ(1, ...,"N") we have
formula_10
Either Ψ does not have an antisymmetric component, and then the antisymmetrizer projects onto zero, or it has one and then the antisymmetrizer projects out this antisymmetric component Ψ'.
The antisymmetrizer carries a left and a right representation of the group:
formula_11
with the operator formula_7 representing the coordinate permutation π.
Now it holds, for "any" "N"-particle wave function Ψ(1, ...,"N") with a non-vanishing antisymmetric component, that
formula_12
showing that the non-vanishing component is indeed antisymmetric.
If a wave function is symmetric under any odd parity permutation it has no antisymmetric component. Indeed, assume that the permutation π, represented by the operator formula_7, has odd parity and that Ψ is symmetric, then
formula_13
As an example of an application of this result, we assume that Ψ is a spin-orbital product. Assume further that a spin-orbital occurs twice (is "doubly occupied") in this product, once with coordinate "k" and once with coordinate "q". Then the product is symmetric under the transposition ("k", "q") and hence vanishes. Notice that this result gives the original formulation of the Pauli principle: no two electrons can have the same set of quantum numbers (be in the same spin-orbital).
Permutations of identical particles are unitary (the Hermitian adjoint is equal to the inverse of the operator), and since π and π−1 have the same parity, it follows that the antisymmetrizer is Hermitian,
formula_14
The antisymmetrizer commutes with any observable formula_15 (Hermitian operator corresponding to a physical—observable—quantity)
formula_16
If it were otherwise, measurement of formula_15 could distinguish the particles, in contradiction with the assumption that only the coordinates of indistinguishable particles are affected by the antisymmetrizer.
Connection with Slater determinant.
In the special case that the wave function to be antisymmetrized is a product of spin-orbitals
formula_17
the Slater determinant is created by the antisymmetrizer operating on the product of spin-orbitals, as below:
formula_18
The correspondence follows immediately from the Leibniz formula for determinants, which reads
formula_19
where B is the matrix
formula_20
To see the correspondence we notice that the fermion labels, permuted by the terms in the antisymmetrizer, label different columns (are second indices). The first indices are orbital indices, "n"1, ..., "n"N labeling the rows.
Example.
By the definition of the antisymmetrizer
formula_21
Consider the Slater determinant
formula_22
By the Laplace expansion along the first row of "D"
formula_23
so that
formula_24
By comparing terms we see that
formula_25
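The correspondence with the Slater determinant can also be checked numerically. In the sketch below, three arbitrary orbitals are sampled on three grid points, the product wave function is antisymmetrized as an array, and one component of √(3!) 𝒜Ψ is compared with the determinant of the corresponding matrix, as in the displayed Slater-determinant formula; the orbital values and names are arbitrary.

```python
import numpy as np
from math import factorial
from itertools import permutations

def parity(perm):
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))           # B[k, x] = value of orbital k at grid point x

# product wave function psi_a(1) psi_b(2) psi_c(3) as a 3-index array
product = np.einsum("a,b,c->abc", B[0], B[1], B[2])

# antisymmetrize the product
antisym = np.zeros_like(product)
for perm in permutations(range(3)):
    antisym += parity(perm) * np.transpose(product, perm)
antisym /= factorial(3)

# sqrt(3!) * (A psi) evaluated at coordinates (0, 1, 2) versus (1/sqrt(3!)) * det(B)
lhs = np.sqrt(factorial(3)) * antisym[0, 1, 2]
rhs = np.linalg.det(B) / np.sqrt(factorial(3))
print(np.isclose(lhs, rhs))           # True
```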
Intermolecular antisymmetrizer.
One often meets a wave function of the product form
formula_26 where the total wave function is not antisymmetric, but the factors are antisymmetric,
formula_27
and
formula_28
Here formula_29 antisymmetrizes the first "N""A" particles and formula_30 antisymmetrizes the second set of "N""B" particles. The operators appearing in these two antisymmetrizers represent the elements of the subgroups "S""N""A" and "S""N""B", respectively, of "S""N""A"+"N""B".
Typically, one meets such partially antisymmetric wave functions in the theory of intermolecular forces, where formula_31 is the electronic wave function of molecule "A" and formula_32 is the wave function of molecule "B". When "A" and "B" interact, the Pauli principle requires the antisymmetry of the total wave function, also under intermolecular permutations.
The total system can be antisymmetrized by the total antisymmetrizer formula_33 which consists of the ("N""A" + "N""B")! terms in the group "S""N""A"+"N""B". However, in this way one does not take advantage of the partial antisymmetry that is already present. It is more economical to use the fact that the product of the two subgroups is also a subgroup, and to consider the left cosets of this product group in "S""N""A"+"N""B":
formula_34
where τ is a left coset representative. Since
formula_35
we can write
formula_36
The operator formula_37 represents the coset representative τ (an intermolecular coordinate permutation). Obviously the intermolecular antisymmetrizer formula_38 has a factor "N""A"! "N""B"! fewer terms than the total antisymmetrizer.
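The counting claim is easy to verify: the number of left-coset representatives is the binomial coefficient, and multiplying it by "N""A"! "N""B"! recovers the ("N""A" + "N""B")! terms of the total antisymmetrizer. A short check (variable names are mine):

```python
from math import comb, factorial

for NA, NB in [(2, 2), (3, 2), (5, 5)]:
    total  = factorial(NA + NB)            # terms in the total antisymmetrizer
    cosets = comb(NA + NB, NA)             # terms in the intermolecular antisymmetrizer
    print(NA, NB, cosets, total, total == cosets * factorial(NA) * factorial(NB))
```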
Finally,
formula_39
so that we see that it suffices to act with formula_38 if the wave functions of the subsystems are already antisymmetric. | [
{
"math_id": 0,
"text": " \\mathcal{A}"
},
{
"math_id": 1,
"text": "\n\\Psi(1,2, \\ldots, N)\\quad\\text{with} \\quad i \\leftrightarrow (\\mathbf{r}_i, \\sigma_i),\n"
},
{
"math_id": 2,
"text": "\\mathbb{R}^3"
},
{
"math_id": 3,
"text": " \\Psi(1,2)- \\Psi(2,1) \\ne 0 "
},
{
"math_id": 4,
"text": "\\hat{P}_{ij}"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n\\hat{P}_{ij} \\Psi\\big(1,2,\\ldots,i, \\ldots,j,\\ldots, N\\big)& \\equiv \\Psi\\big(\\pi(1),\\pi(2),\\ldots,\\pi(i), \\ldots,\\pi(j),\\ldots, \\pi(N)\\big) \\\\\n&\\equiv \\Psi(1,2,\\ldots,j, \\ldots,i,\\ldots, N) \\\\\n&= - \\Psi(1,2,\\ldots,i, \\ldots,j,\\ldots, N).\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\n\\hat{P} \\Psi\\big(1,2,\\ldots, N\\big) \\equiv \\Psi\\big(\\pi(1),\\pi(2),\\ldots, \\pi(N)\\big) = (-1)^\\pi \\Psi(1,2,\\ldots, N),\n"
},
{
"math_id": 7,
"text": "\\hat{P}"
},
{
"math_id": 8,
"text": "\n\\mathcal{A} \\equiv \\frac{1}{N!} \\sum_{P \\in S_N} (-1)^\\pi \\hat{P} .\n"
},
{
"math_id": 9,
"text": "\\{ (-1)^\\pi \\}"
},
{
"math_id": 10,
"text": "\n\\mathcal{A}\\Psi(1,\\ldots, N) = \\begin{cases}\n&0 \\\\\n&\\Psi'(1,\\dots, N) \\ne 0.\n\\end{cases}\n"
},
{
"math_id": 11,
"text": "\n \\hat{P} \\mathcal{A} = \\mathcal{A} \\hat{P} = (-1)^\\pi \\mathcal{A},\\qquad \\forall \\pi \\in S_N,\n"
},
{
"math_id": 12,
"text": "\n\\hat{P} \\mathcal{A}\\Psi(1,\\ldots, N) \\equiv \\hat{P} \\Psi'(1,\\ldots, N)=(-1)^\\pi \\Psi'(1,\\ldots, N),\n"
},
{
"math_id": 13,
"text": "\n\\hat{P} \\Psi = \\Psi \\Longrightarrow \\mathcal{A} \\hat{P} \\Psi = \\mathcal{A}\\Psi \\Longrightarrow -\\mathcal{A} \\Psi = \\mathcal{A}\\Psi \\Longrightarrow \\mathcal{A} \\Psi = 0.\n"
},
{
"math_id": 14,
"text": "\n \\mathcal{A}^\\dagger = \\mathcal{A}.\n"
},
{
"math_id": 15,
"text": "\\hat{H}\\,"
},
{
"math_id": 16,
"text": "\n[\\mathcal{A}, \\hat{H}] = 0.\n"
},
{
"math_id": 17,
"text": "\n\\Psi(1,2, \\ldots, N) = \\psi_{n_1}(1) \\psi_{n_2}(2) \\cdots \\psi_{n_N}(N)\n"
},
{
"math_id": 18,
"text": "\n\n\\sqrt{N!}\\ \\mathcal{A} \\Psi(1,2, \\ldots, N) = \n\\frac{1}{\\sqrt{N!}} \n\\begin{vmatrix}\n\\psi_{n_1}(1) & \\psi_{n_1}(2) & \\cdots & \\psi_{n_1}(N) \\\\\n\\psi_{n_2}(1) & \\psi_{n_2}(2) & \\cdots & \\psi_{n_2}(N) \\\\\n\\vdots & \\vdots & & \\vdots \\\\\n\\psi_{n_N}(1) & \\psi_{n_N}(2) & \\cdots & \\psi_{n_N}(N) \\\\\n\\end{vmatrix}\n"
},
{
"math_id": 19,
"text": "\n\\det(\\mathbf{B}) = \n\\sum_{\\pi \\in S_N} (-1)^\\pi B_{1,\\pi(1)}\\cdot B_{2,\\pi(2)}\\cdot B_{3,\\pi(3)}\\cdot\\,\\cdots\\,\\cdot B_{N,\\pi(N)},\n"
},
{
"math_id": 20,
"text": "\n\\mathbf{B} = \n\\begin{pmatrix}\nB_{1,1} & B_{1,2} & \\cdots & B_{1,N} \\\\\nB_{2,1} & B_{2,2} & \\cdots & B_{2,N} \\\\\n\\vdots & \\vdots & & \\vdots \\\\\nB_{N,1} & B_{N,2} & \\cdots & B_{N,N} \\\\\n\\end{pmatrix}.\n"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n\\mathcal{A} \\psi_a(1)\\psi_b(2)\\psi_c(3) = &\n\\frac{1}{6} \\Big( \\psi_a(1)\\psi_b(2)\\psi_c(3) + \\psi_a(3)\\psi_b(1)\\psi_c(2) + \\psi_a(2)\\psi_b(3)\\psi_c(1) \\\\\n&{}-\\psi_a(2)\\psi_b(1)\\psi_c(3) - \\psi_a(3)\\psi_b(2)\\psi_c(1)- \\psi_a(1)\\psi_b(3)\\psi_c(2)\\Big).\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\nD\\equiv\n\\frac{1}{\\sqrt{6}}\n\\begin{vmatrix}\n\\psi_a(1) & \\psi_a(2) & \\psi_a(3) \\\\\n\\psi_b(1) & \\psi_b(2) & \\psi_b(3) \\\\\n\\psi_c(1) & \\psi_c(2) & \\psi_c(3)\n\\end{vmatrix}.\n"
},
{
"math_id": 23,
"text": "\nD = \n\\frac{1}{\\sqrt{6}}\n\\psi_a(1)\n\\begin{vmatrix}\n\\psi_b(2) & \\psi_b(3) \\\\\n\\psi_c(2) & \\psi_c(3)\n\\end{vmatrix}\n-\\frac{1}{\\sqrt{6}}\n\\psi_a(2)\n\\begin{vmatrix}\n\\psi_b(1) & \\psi_b(3) \\\\\n\\psi_c(1) & \\psi_c(3)\n\\end{vmatrix}\n+\\frac{1}{\\sqrt{6}}\n\\psi_a(3)\n\\begin{vmatrix}\n\\psi_b(1) & \\psi_b(2) \\\\\n\\psi_c(1) & \\psi_c(2)\n\\end{vmatrix},\n"
},
{
"math_id": 24,
"text": "\n\\begin{align}\nD=& \\frac{1}{\\sqrt{6}} \\psi_a(1)\\Big( \\psi_b(2) \\psi_c(3) - \\psi_b(3) \\psi_c(2)\\Big)\n- \\frac{1}{\\sqrt{6}} \\psi_a(2)\\Big( \\psi_b(1) \\psi_c(3) - \\psi_b(3) \\psi_c(1)\\Big) \\\\\n& {}+ \\frac{1}{\\sqrt{6}} \\psi_a(3)\\Big( \\psi_b(1) \\psi_c(2) - \\psi_b(2) \\psi_c(1)\\Big) .\n\\end{align}\n"
},
{
"math_id": 25,
"text": "\nD = \\sqrt{6}\\ \\mathcal{A} \\psi_a(1)\\psi_b(2)\\psi_c(3).\n"
},
{
"math_id": 26,
"text": "\\Psi_A(1,2,\\dots,N_A) \\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B)"
},
{
"math_id": 27,
"text": " \n\\mathcal{A}^A \\Psi_A(1,2,\\dots,N_A) = \\Psi_A(1,2,\\dots,N_A)\n"
},
{
"math_id": 28,
"text": "\n\\mathcal{A}^B\\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B) = \\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B).\n"
},
{
"math_id": 29,
"text": "\\mathcal{A}^A"
},
{
"math_id": 30,
"text": "\\mathcal{A}^B"
},
{
"math_id": 31,
"text": "\\Psi_A(1,2,\\dots,N_A)"
},
{
"math_id": 32,
"text": "\\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B)"
},
{
"math_id": 33,
"text": "\\mathcal{A}^{AB}"
},
{
"math_id": 34,
"text": "\nS_{N_A}\\otimes S_{N_B} \\subset S_{N_A+N_B} \\Longrightarrow \\forall \\pi \\in S_{N_A+N_B}:\\quad \\pi = \\tau \\pi_A \\pi_B, \\quad \n\\pi_A\\in S_{N_A}, \\;\\; \\pi_B \\in S_{N_B},\n"
},
{
"math_id": 35,
"text": "\n(-1)^\\pi = (-1)^\\tau (-1)^{\\pi_A} (-1)^{\\pi_B},\n"
},
{
"math_id": 36,
"text": "\n\\mathcal{A}^{AB} = \\tilde{\\mathcal{A}}^{AB} \\mathcal{A}^A \\mathcal{A}^B\\quad\\hbox{with}\\quad\n\\tilde{\\mathcal{A}}^{AB} = \\sum_{T=1}^{C_{AB}}(-1)^\\tau \\hat{T}, \\quad C_{AB} = \\binom{N_A+N_B}{N_A} .\n"
},
{
"math_id": 37,
"text": "\\hat{T}"
},
{
"math_id": 38,
"text": "\\tilde{\\mathcal{A}}^{AB}"
},
{
"math_id": 39,
"text": "\n\\begin{align}\n\\mathcal{A}^{AB}\\Psi_A(1,2,\\dots,N_A)&\\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B)\\\\\n &= \\tilde{\\mathcal{A}}^{AB}\\Psi_A(1,2,\\dots,N_A) \\Psi_B(N_A+1,N_A+2,\\dots,N_A+N_B),\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=12437648 |
1243766 | Horn antenna | Funnel-shaped waveguide radio device
A horn antenna or microwave horn is an antenna that consists of a flaring metal waveguide shaped like a horn to direct radio waves in a beam. Horns are widely used as antennas at UHF and microwave frequencies, above 300 MHz. They are used as feed antennas (called feed horns) for larger antenna structures such as parabolic antennas, as standard calibration antennas to measure the gain of other antennas, and as directive antennas for such devices as radar guns, automatic door openers, and microwave radiometers. Their advantages are moderate directivity, broad bandwidth, low losses, and simple construction and adjustment.
One of the first horn antennas was constructed in 1897 by Bengali-Indian radio researcher Jagadish Chandra Bose in his pioneering experiments with microwaves. The modern horn antenna was invented independently in 1938 by Wilmer Barrow and G. C. Southworth. The development of radar in World War II stimulated horn research to design feed horns for radar antennas. The corrugated horn invented by Kay in 1962 has become widely used as a feed horn for microwave antennas such as satellite dishes and radio telescopes.
An advantage of horn antennas is that since they have no resonant elements, they can operate over a wide range of frequencies, a wide bandwidth. The usable bandwidth of horn antennas is typically of the order of 10:1, and can be up to 20:1 (for example allowing it to operate from 1 GHz to 20 GHz). The input impedance is slowly varying over this wide frequency range, allowing low voltage standing wave ratio (VSWR) over the bandwidth. The gain of horn antennas ranges up to 25 dBi, with 10–20 dBi being typical.
Description.
A horn antenna is used to transmit radio waves from a waveguide (a metal pipe used to carry radio waves) out into space, or collect radio waves into a waveguide for reception. It typically consists of a short length of rectangular or cylindrical metal tube (the waveguide), closed at one end, flaring into an open-ended conical or pyramidal shaped horn on the other end. The radio waves are usually introduced into the waveguide by a coaxial cable attached to the side, with the central conductor projecting into the waveguide to form a quarter-wave monopole antenna. The waves then radiate out the horn end in a narrow beam. In some equipment the radio waves are conducted between the transmitter or receiver and the antenna by a waveguide; in this case the horn is attached to the end of the waveguide. In outdoor horns, such as the feed horns of satellite dishes, the open mouth of the horn is often covered by a plastic sheet transparent to radio waves, to exclude moisture.
How it works.
A horn antenna serves the same function for electromagnetic waves that an acoustical horn does for sound waves in a musical instrument such as a trumpet. It provides a gradual transition structure to match the impedance of a tube to the impedance of free space, enabling the waves from the tube to radiate efficiently into space.
If a simple open-ended waveguide is used as an antenna, without the horn, the sudden end of the conductive walls causes an abrupt impedance change at the aperture, from the wave impedance in the waveguide to the impedance of free space, (about 377 Ω). When radio waves travelling through the waveguide hit the opening, this impedance-step reflects a significant fraction of the wave energy back down the guide toward the source, so that not all of the power is radiated. This is similar to the reflection at an open-ended transmission line or a boundary between optical mediums with a low and high index of refraction, like at a glass surface. The reflected waves cause standing waves in the waveguide, increasing the SWR, wasting energy and possibly overheating the transmitter. In addition, the small aperture of the waveguide (less than one wavelength) causes significant diffraction of the waves issuing from it, resulting in a wide radiation pattern without much directivity.
To improve these poor characteristics, the ends of the waveguide are flared out to form a horn. The taper of the horn changes the impedance gradually along the horn's length. This acts like an impedance matching transformer, allowing most of the wave energy to radiate out the end of the horn into space, with minimal reflection. The taper functions similarly to a tapered transmission line, or an optical medium with a smoothly varying refractive index. In addition, the wide aperture of the horn projects the waves in a narrow beam.
The horn shape that gives minimum reflected power is an exponential taper. Exponential horns are used in special applications that require minimum signal loss, such as satellite antennas and radio telescopes. However conical and pyramidal horns are most widely used, because they have straight sides and are easier to design and fabricate.
Radiation pattern.
The waves travel down a horn as spherical wavefronts, with their origin at the apex of the horn, a point called the phase center. The pattern of electric and magnetic fields at the aperture plane at the mouth of the horn, which determines the radiation pattern, is a scaled-up reproduction of the fields in the waveguide. Because the wavefronts are spherical, the phase increases smoothly from the edges of the aperture plane to the center, because the center point and the edge points lie at different distances from the apex. The difference in phase between the center point and the edges is called the "phase error". This phase error, which increases with the flare angle, reduces the gain and increases the beamwidth, giving horns wider beamwidths than similar-sized plane-wave antennas such as parabolic dishes.
At the flare angle, the radiation of the beam lobe is down about 20 dB from its maximum value.
As the size of a horn (expressed in wavelengths) is increased, the phase error increases, giving the horn a wider radiation pattern. Keeping the beamwidth narrow requires a longer horn (smaller flare angle) to keep the phase error constant. The increasing phase error limits the aperture size of practical horns to about 15 wavelengths; larger apertures would require impractically long horns. This limits the gain of practical horns to about 1000 (30 dBi) and the corresponding minimum beamwidth to about 5–10°.
Types.
Below are the main types of horn antennas. Horns can have different flare angles as well as different expansion curves (elliptic, hyperbolic, etc.) in the E-field and H-field directions, making possible a wide variety of different beam profiles.
Pyramidal horn (fig. a) – a horn antenna with the horn in the shape of a four-sided pyramid, with a rectangular cross section. They are a common type, used with rectangular waveguides, and radiate linearly polarized radio waves.
Sectoral horn – A pyramidal horn with only one pair of sides flared and the other pair parallel. It produces a fan-shaped beam, which is narrow in the plane of the flared sides, but wide in the plane of the narrow sides. These types are often used as feed horns for wide search radar antennas.
E-plane horn (fig. b) – A sectoral horn flared in the direction of the electric or E-field in the waveguide.
H-plane horn (fig. c) – A sectoral horn flared in the direction of the magnetic or H-field in the waveguide.
Conical horn (fig. d) – A horn in the shape of a cone, with a circular cross section. They are used with cylindrical waveguides.
Exponential horn (fig. e) – A horn with curved sides, in which the separation of the sides increases as an exponential function of length. Also called a scalar horn, they can have pyramidal or conical cross sections. Exponential horns have minimum internal reflections, and almost constant impedance and other characteristics over a wide frequency range. They are used in applications requiring high performance, such as feed horns for communication satellite antennas and radio telescopes.
Corrugated horn – A horn with parallel slots or grooves, small compared with a wavelength, covering the inside surface of the horn, transverse to the axis. Corrugated horns have wider bandwidth and smaller sidelobes and cross-polarization, and are widely used as feed horns for satellite dishes and radio telescopes.
Dual-mode conical horn – (the Potter horn) This horn can be used to replace the corrugated horn for use at sub-mm wavelengths where the corrugated horn is lossy and difficult to fabricate.
Diagonal horn – This simple dual-mode horn superficially looks like a pyramidal horn with a square output aperture. On closer inspection, however, the square output aperture is seen to be rotated 45° relative to the waveguide. These horns are typically machined into split blocks and used at sub-mm wavelengths.
Ridged horn – A pyramidal horn with ridges or fins attached to the inside of the horn, extending down the center of the sides. The fins lower the cutoff frequency, increasing the antenna's bandwidth.
Septum horn – A horn which is divided into several subhorns by metal partitions (septums) inside, attached to opposite walls.
Aperture-limited horn – a long narrow horn, long enough so the phase error is a negligible fraction of a wavelength, so it essentially radiates a plane wave. It has an aperture efficiency of 1.0 so it gives the maximum gain and minimum beamwidth for a given aperture size. The gain is not affected by the length but only limited by diffraction at the aperture. Used as feed horns in radio telescopes and other high-resolution antennas.
Open boundary quad-ridged horn antenna – A special type of horn antenna designed as a four-pronged (quad-ridged) structure with open boundaries. It covers a wide frequency range and provides dual linear polarization.
Open boundary double-ridged horn antenna – Similar to the open boundary quad-ridged horn antenna, it is designed to operate over a wide frequency range with low VSWR and high gain.
Optimum horn.
For a given frequency and horn length, there is some flare angle that gives minimum reflection and maximum gain. The internal reflections in straight-sided horns come from the two locations along the wave path where the impedance changes abruptly; the mouth or aperture of the horn, and the throat where the sides begin to flare out. The amount of reflection at these two sites varies with the "flare angle" of the horn (the angle the sides make with the axis). In narrow horns with small flare angles most of the reflection occurs at the mouth of the horn. The gain of the antenna is low because the small mouth approximates an open-ended waveguide, with a large impedance step. As the angle is increased, the reflection at the mouth decreases rapidly and the antenna's gain increases. In contrast, in wide horns with flare angles approaching 90° most of the reflection is at the throat. The horn's gain is again low because the throat approximates an open-ended waveguide. As the angle is decreased, the amount of reflection at this site drops, and the horn's gain again increases.
This discussion shows that there is some flare angle between 0° and 90° which gives maximum gain and minimum reflection. This is called the "optimum horn". Most practical horn antennas are designed as optimum horns. In a pyramidal horn, the dimensions that give an optimum horn are:
formula_0
For a conical horn, the dimensions that give an optimum horn are:
formula_1
where
"aE" is the width of the aperture in the E-field direction
"aH" is the width of the aperture in the H-field direction
"LE" is the slant height of the side in the E-field direction
"LH" is the slant height of the side in the H-field direction
"d" is the diameter of the cylindrical horn aperture
"L" is the slant height of the cone from the apex
"λ" is the wavelength
An optimum horn does not yield maximum gain for a given "aperture size". That is achieved with a very long horn (an "aperture limited" horn). The optimum horn yields maximum gain for a given horn "length". Tables showing dimensions for optimum horns for various frequencies are given in microwave handbooks.
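As a simple illustration of these relations, the short Python sketch below evaluates the optimum-horn aperture formulas given above; the frequency and slant lengths are arbitrary values chosen only for the example.

```python
import math

def optimum_pyramidal_aperture(wavelength, L_E, L_H):
    """Aperture widths of an optimum pyramidal horn: a_E = sqrt(2*lambda*L_E), a_H = sqrt(3*lambda*L_H)."""
    return math.sqrt(2 * wavelength * L_E), math.sqrt(3 * wavelength * L_H)

def optimum_conical_aperture(wavelength, L):
    """Aperture diameter of an optimum conical horn: d = sqrt(3*lambda*L)."""
    return math.sqrt(3 * wavelength * L)

# Hypothetical example: 10 GHz (wavelength about 3 cm) and 20 cm slant lengths.
wavelength = 3e8 / 10e9                         # metres
a_E, a_H = optimum_pyramidal_aperture(wavelength, 0.20, 0.20)
d = optimum_conical_aperture(wavelength, 0.20)
print(f"a_E = {a_E*100:.1f} cm, a_H = {a_H*100:.1f} cm, d = {d*100:.1f} cm")
```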
Gain.
Horns have very little loss, so the directivity of a horn is roughly equal to its gain. The gain "G" of a pyramidal horn antenna (the ratio of the radiated power intensity along its beam axis to the intensity of an isotropic antenna with the same input power) is:
formula_2
For conical horns, the gain is:
formula_3
where
"A" is the area of the aperture,
"d" is the aperture diameter of a conical horn
"λ" is the wavelength,
"eA" is a dimensionless parameter between 0 and 1 called the "aperture efficiency",
The aperture efficiency ranges from 0.4 to 0.8 in practical horn antennas. For optimum pyramidal horns, "eA" = 0.511, while for optimum conical horns "eA" = 0.522, so an approximate figure of 0.5 is often used. The aperture efficiency increases with the length of the horn, and for aperture-limited horns is approximately unity.
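The gain formulas above can likewise be evaluated directly. The following Python sketch assumes a pyramidal aperture of "aE" × "aH" with the optimum aperture efficiency of 0.511; all numerical values are illustrative rather than taken from any particular antenna.

```python
import math

def pyramidal_horn_gain(a_E, a_H, wavelength, e_A=0.511):
    """G = (4*pi*A / lambda^2) * e_A, with aperture area A = a_E * a_H."""
    return 4 * math.pi * a_E * a_H / wavelength**2 * e_A

def conical_horn_gain(d, wavelength, e_A=0.522):
    """G = (pi*d/lambda)^2 * e_A for a conical horn of aperture diameter d."""
    return (math.pi * d / wavelength)**2 * e_A

wavelength = 0.03                               # 10 GHz, in metres (hypothetical)
G = pyramidal_horn_gain(0.11, 0.134, wavelength)
print(f"G = {G:.0f} ({10 * math.log10(G):.1f} dBi)")   # about 105, i.e. roughly 20 dBi
```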
Horn-reflector antenna.
A type of antenna that combines a horn with a parabolic reflector is known as a Hogg-horn, or horn-reflector antenna, invented by Alfred C. Beck and Harald T. Friis in 1941 and further developed by David C. Hogg at Bell Labs in 1961. It is also referred to as the "sugar scoop" due to its characteristic shape. It consists of a horn antenna with a reflector mounted in the mouth of the horn at a 45 degree angle so the radiated beam is at right angles to the horn axis. The reflector is a segment of a parabolic reflector, and the focus of the reflector is at the apex of the horn, so the device is equivalent to a parabolic antenna fed off-axis. The advantage of this design over a standard parabolic antenna is that the horn shields the antenna from radiation coming from angles outside the main beam axis, so its radiation pattern has very small sidelobes. Also, the aperture isn't partially obstructed by the feed and its supports, as with ordinary front-fed parabolic dishes, allowing it to achieve aperture efficiencies of 70% as opposed to 55–60% for front-fed dishes. The disadvantage is that it is far larger and heavier for a given aperture area than a parabolic dish, and must be mounted on a cumbersome turntable to be fully steerable. This design was used for a few radio telescopes and communication satellite ground antennas during the 1960s. Its largest use, however, was as fixed antennas for microwave relay links in the AT&T Long Lines microwave network. Since the 1970s this design has been superseded by shrouded parabolic dish antennas, which can achieve equally good sidelobe performance with a lighter more compact construction. Probably the most photographed and well-known example is the Holmdel Horn Antenna at Bell Labs in Holmdel, New Jersey, with which Arno Penzias and Robert Wilson discovered cosmic microwave background radiation in 1965, for which they won the 1978 Nobel Prize in Physics. Another more recent horn-reflector design is the cass-horn, which is a combination of a horn with a cassegrain parabolic antenna using two reflectors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_E = \\sqrt{2 \\lambda L_E} \\qquad a_H = \\sqrt{3 \\lambda L_H}"
},
{
"math_id": 1,
"text": "d = \\sqrt{3 \\lambda L}"
},
{
"math_id": 2,
"text": "G = \\frac{4 \\pi A}{\\lambda^2} e_A "
},
{
"math_id": 3,
"text": "G = \\left ( \\frac{\\pi d}{\\lambda} \\right )^2 e_A "
}
] | https://en.wikipedia.org/wiki?curid=1243766 |
12438057 | Frobenius covariant | In matrix theory, the Frobenius covariants of a square matrix A are special polynomials of it, namely projection matrices "A""i" associated with the eigenvalues and eigenvectors of A. They are named after the mathematician Ferdinand Frobenius.
Each covariant is a projection on the eigenspace associated with the eigenvalue "λ""i".
Frobenius covariants are the coefficients of Sylvester's formula, which expresses a function of a matrix "f"("A") as a matrix polynomial, namely a linear combination
of that function's values on the eigenvalues of A.
Formal definition.
Let A be a diagonalizable matrix with eigenvalues "λ"1, ..., "λ""k".
The Frobenius covariant "A""i", for "i" = 1..., "k", is the matrix
formula_0
It is essentially the Lagrange polynomial with matrix argument. If the eigenvalue "λ""i" is simple, then as an idempotent projection matrix to a one-dimensional subspace, "A""i" has a unit trace.
Computing the covariants.
The Frobenius covariants of a matrix A can be obtained from any eigendecomposition "A" = "SDS"−1, where S is non-singular and D is diagonal with "D""i","i" = "λ""i".
If A has no multiple eigenvalues, then let "c""i" be the ith right eigenvector of A, that is, the ith column of S; and let "r""i" be the ith left eigenvector of A, namely the ith row of S−1. Then "A""i" = "c""i" "r""i".
If A has an eigenvalue "λ""i" appearing multiple times, then "A""i" = Σ"j" "c""j" "r""j", where the sum is over all rows and columns associated with the eigenvalue "λ""i".
Example.
Consider the two-by-two matrix:
formula_1
This matrix has two eigenvalues, 5 and −2; hence ("A" − 5)("A" + 2) = 0.
The corresponding eigen decomposition is
formula_2
Hence the Frobenius covariants, manifestly projections, are
formula_3
with
formula_4
Note tr "A"1 = tr "A"2 = 1, as required.
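The example above can be reproduced numerically. The following Python/NumPy sketch forms the covariants as outer products of right and left eigenvectors and checks that they are idempotent projections with unit trace that sum to the identity; it assumes the eigenvalues are distinct, as in this example.

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 2.0]])

eigvals, S = np.linalg.eig(A)     # columns of S are right eigenvectors c_i
S_inv = np.linalg.inv(S)          # rows of S_inv are left eigenvectors r_i

# Frobenius covariant A_i = c_i r_i (outer product of the i-th column and i-th row)
covariants = [np.outer(S[:, i], S_inv[i, :]) for i in range(len(eigvals))]

for lam, Ai in zip(eigvals, covariants):
    assert np.allclose(Ai @ Ai, Ai)           # idempotent projection
    assert np.isclose(np.trace(Ai), 1.0)      # unit trace for a simple eigenvalue
    print(f"lambda = {lam:+.0f}\n{Ai}\n")

assert np.allclose(sum(covariants), np.eye(2))          # A_1 + A_2 = I
assert np.allclose(covariants[0] @ covariants[1], 0)    # A_1 A_2 = 0
# Sylvester's formula with f(x) = x recovers A itself.
assert np.allclose(sum(l * Ai for l, Ai in zip(eigvals, covariants)), A)
```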
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " A_i \\equiv \\prod_{j=1 \\atop j \\ne i}^k \\frac{1}{\\lambda_i-\\lambda_j} (A - \\lambda_j I)~. "
},
{
"math_id": 1,
"text": " A = \\begin{bmatrix} 1 & 3 \\\\ 4 & 2 \\end{bmatrix}."
},
{
"math_id": 2,
"text": " A = \\begin{bmatrix} 3 & 1/7 \\\\ 4 & -1/7 \\end{bmatrix} \\begin{bmatrix} 5 & 0 \\\\ 0 & -2 \\end{bmatrix} \\begin{bmatrix} 3 & 1/7 \\\\ 4 & -1/7 \\end{bmatrix}^{-1} = \\begin{bmatrix} 3 & 1/7 \\\\ 4 & -1/7 \\end{bmatrix} \\begin{bmatrix} 5 & 0 \\\\ 0 & -2 \\end{bmatrix} \\begin{bmatrix} 1/7 & 1/7 \\\\ 4 & -3 \\end{bmatrix}. "
},
{
"math_id": 3,
"text": " \\begin{array}{rl}\nA_1 &= c_1 r_1 = \\begin{bmatrix} 3 \\\\ 4 \\end{bmatrix} \\begin{bmatrix} 1/7 & 1/7 \\end{bmatrix} = \\begin{bmatrix} 3/7 & 3/7 \\\\ 4/7 & 4/7 \\end{bmatrix} = A_1^2\\\\\nA_2 &= c_2 r_2 = \\begin{bmatrix} 1/7 \\\\ -1/7 \\end{bmatrix} \\begin{bmatrix} 4 & -3 \\end{bmatrix} = \\begin{bmatrix} 4/7 & -3/7 \\\\ -4/7 & 3/7 \\end{bmatrix}=A_2^2 ~,\n\\end{array} "
},
{
"math_id": 4,
"text": "A_1 A_2 = 0 , \\qquad A_1 + A_2 = I ~."
}
] | https://en.wikipedia.org/wiki?curid=12438057 |
12438552 | EDGE species | Evolutionarily Distinct and Globally Endangered (EDGE) species are animal species which have a high 'EDGE score', a metric combining endangered conservation status with the genetic distinctiveness of the particular taxon. Distinctive species have few closely related species, and EDGE species are often the only surviving member of their genus or even higher taxonomic rank. The extinction of such species would therefore represent a disproportionate loss of unique evolutionary history and biodiversity.
Some EDGE species, such as elephants and pandas, are well-known and already receive considerable conservation attention, but many others, such as the vaquita (the world's rarest cetacean), the bumblebee bat (arguably the world's smallest mammal) and the egg-laying long-beaked echidnas, are highly threatened yet remain poorly understood, and are frequently overlooked by existing conservation frameworks.
The Zoological Society of London launched the EDGE of Existence Programme in 2007 to raise awareness and funds for the conservation of these species. As of 2019, the programme has awarded 97 fellows funds to help conserve 87 different species in over 40 countries.
Calculating EDGE Scores.
ED.
Some species are more distinct than others because they represent a larger amount of unique evolution. Species like the aardvark have few close relatives and have been evolving independently for many millions of years. Others like the domestic dog originated only recently and have many close relatives. Species uniqueness can be measured as an 'Evolutionary Distinctiveness' (ED) score, using a phylogeny, or evolutionary tree. ED scores are calculated relative to a clade of species descended from a common ancestor. The three clades for which the EDGE of Existence Programme has calculated scores are all classes, namely mammals, amphibians, and corals.
The phylogenetic tree has the most recent common ancestor at the root, all the current species as the leaves, and intermediate nodes at each point of branching divergence. The branches are divided into segments (between one node and another node, a leaf, or the root). Each segment is assigned an ED score defined as the timespan it covers (in millions of years) divided by the number of species at the end of the subtree it forms. The ED of a species is the sum of the ED of the segments connecting it to the root. Thus, a long branch which produces few species will have a high ED, as the corresponding species are relatively distinctive, with few close relatives. ED metrics are not exact, because of uncertainties in both the ordering of nodes and the length of segments.
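The branch-by-branch calculation described above can be illustrated on a tiny, entirely hypothetical tree. In the Python sketch below, each branch carries a length in millions of years and the set of species descending from it; a species' ED is the sum, over the branches on its root-to-tip path, of the branch length divided by the number of descendant species.

```python
# Toy illustration of Evolutionary Distinctiveness (ED) on a small, hypothetical tree.
branches = [
    (10.0, {"A", "B", "C"}),   # stem branch of the whole clade
    (40.0, {"A"}),             # long branch leading to the distinctive species A
    (30.0, {"B", "C"}),        # branch subtending the recently diverged pair B, C
    (10.0, {"B"}),
    (10.0, {"C"}),
]

def evolutionary_distinctiveness(species):
    return sum(length / len(descendants)
               for length, descendants in branches
               if species in descendants)

for sp in ("A", "B", "C"):
    print(sp, round(evolutionary_distinctiveness(sp), 2))
# A gets 10/3 + 40 = 43.33 Myr of "unique" evolution, while B and C each get
# 10/3 + 30/2 + 10 = 28.33 Myr; the three ED scores sum to the total tree length.
```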
GE.
GE is a number corresponding to a species' conservation status according to the International Union for Conservation of Nature, with more endangered species having a higher GE: Least Concern = 0, Near Threatened = 1, Vulnerable = 2, Endangered = 3, and Critically Endangered = 4.
EDGE.
The EDGE score of a species is derived from its scores for Evolutionary Distinctness (ED) and for Globally Endangered status (GE) as follows:
formula_0
This means that a doubling in ED affects the EDGE score almost as much as increasing the threat level by one (e.g. from 'vulnerable' to 'endangered'). EDGE scores are an estimate of the expected loss of evolutionary history per unit time.
EDGE species are species which have an above average ED score and are threatened with extinction (critically endangered, endangered or vulnerable). There are currently 564 EDGE mammal species (≈12% of the total). Potential EDGE species are those with high ED scores but whose conservation status is unclear (data deficient or not evaluated).
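The formula above is straightforward to evaluate. The following Python sketch computes an EDGE score for a hypothetical species with 45 million years of unique evolutionary history that is listed as Endangered (GE = 3); the numbers are for illustration only.

```python
import math

def edge_score(ed, ge):
    """EDGE = ln(1 + ED) + GE * ln(2), with GE on the 0 (Least Concern) to 4 (Critically Endangered) scale."""
    return math.log(1 + ed) + ge * math.log(2)

# Hypothetical species: ED = 45 Myr, Endangered (GE = 3).
print(round(edge_score(45, 3), 2))   # ~5.91
```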
Focal species.
Focal species are typically selected from the priority EDGE species (the top 100 amphibians, birds, mammals and reptiles, the top 50 sharks and rays, and the top 25 corals); however, species outside these rankings are also prioritised, since a species can have a very high ED yet fall outside the top 100 EDGE rankings. These species are conserved by 'EDGE Fellows', who collect data on the species and develop conservation action plans.
Top 20 2019/20 focal species
Numbers refer to EDGE rank
1. Largetooth sawfish ("Pristis pristis")
2. Attenborough’s long-beaked echidna ("Zaglossus attenboroughi")
3. Chinese giant salamander ("Andrias davidianus")
4. Green sawfish ("Pristis zijsron")
5. Purple frog ("Nasikabatrachus sahyadrensis")
6. Seychelles palm frog ("Sooglossus pipilodryas")
7. Thomasset's Seychelles frogs ("Sooglossus thomasseti")
8. Hispaniolan solenodon ("Solenodon paradoxus")
9. Chinese crocodile lizard ("Shinisaurus crocodilurus")
10. Bengal florican ("Houbaropsis bengalensis")
11. Black rhinoceros ("Diceros bicornis")
12. Mountainous star coral ("Orbicella faveolata")
13. Ganges river dolphin ("Platanista gangetica")
14. Bactrian camel ("Camelus ferus")
15. Philippine eagle ("Pithecophaga jefferyi")
16. Northern Darwin’s frog ("Rhinoderma rufum")
17. Hawksbill turtle ("Eretmochelys imbricata")
18. Gharial ("Gavialis gangeticus")
19. Chinese pangolin ("Manis pentadactyla")
20. Togo slippery frog ("Conraua derooi")
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{EDGE} = \\ln (1+\\text{ED}) + \\text{GE}\\cdot \\ln (2)= \\ln[(1+\\text{ED})\\cdot 2^{\\text{GE}}]"
}
] | https://en.wikipedia.org/wiki?curid=12438552 |
12439 | Guanine | Chemical compound of DNA and RNA
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Guanine () (symbol G or Gua) is one of the four main nucleotide bases found in the nucleic acids DNA and RNA, the others being adenine, cytosine, and thymine (uracil in RNA). In DNA, guanine is paired with cytosine. The guanine nucleoside is called guanosine.
With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. This unsaturated arrangement means the bicyclic molecule is planar.
Properties.
Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. Guanine has two tautomeric forms, the major keto form (see figures) and rare enol form.
It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen bond donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors. Guanine has the C-6 carbonyl group that acts as the hydrogen bond acceptor, while a group at N-1 and the amino group at C-2 act as the hydrogen bond donors.
Guanine can be hydrolyzed with strong acid to glycine, ammonia, carbon dioxide, and carbon monoxide. First, guanine gets deaminated to become xanthine. Guanine oxidizes more readily than adenine, the other purine-derivative base in DNA. Its high melting point of 350 °C reflects the intermolecular hydrogen bonding between the oxo and amino groups in the molecules in the crystal. Because of this intermolecular bonding, guanine is relatively insoluble in water, but it is soluble in dilute acids and bases.
History.
The first isolation of guanine was reported in 1844 by the German chemist Julius Bodo Unger (1819–1885), who obtained it as a mineral formed from the excreta of sea birds, which is known as guano and which was used as a source of fertilizer; guanine was named in 1846. Between 1882 and 1906, Emil Fischer determined the structure and also showed that uric acid can be converted to guanine.
Synthesis.
Trace amounts of guanine form by the polymerization of ammonium cyanide (NH4CN). Two experiments conducted by Levy et al. showed that heating 10 mol·L−1 NH4CN at 80 °C for 24 hours gave a yield of 0.0007%, while using 0.1 mol·L−1 NH4CN frozen at −20 °C for 25 years gave a 0.0035% yield. These results indicate guanine could arise in frozen regions of the primitive earth. In 1984, Yuasa reported a 0.00017% yield of guanine after the electrical discharge of NH3, CH4, C2H6, and 50 mL of water, followed by a subsequent acid hydrolysis. However, it is unknown whether the guanine detected was simply a contaminant of the reaction.
10NH3 + 2CH4 + 4C2H6 + 2H2O → 2C5H8N5O (guanine) + 25H2
A Fischer–Tropsch synthesis can also be used to form guanine, along with adenine, uracil, and thymine. Heating an equimolar gas mixture of CO, H2, and NH3 to 700 °C for 15 to 24 minutes, followed by quick cooling and then sustained reheating to 100 to 200 °C for 16 to 44 hours with an alumina catalyst, yielded guanine and uracil:
10CO + H2 + 10NH3 → 2C5H8N5O (guanine) + 8H2O
Another possible abiotic route was explored by quenching a high-temperature plasma of a 90% N2–10% CO–H2O gas mixture.
Traube's synthesis involves heating 2,4,5-triamino-1,6-dihydro-6-oxypyrimidine (as the sulfate) with formic acid for several hours.
Biosynthesis.
Guanine is not synthesized de novo; instead, it is split from the more complex molecule guanosine by the enzyme guanosine phosphorylase:
guanosine + phosphate formula_0 guanine + alpha-D-ribose 1-phosphate
Guanine can be synthesized de novo, with inosine monophosphate dehydrogenase serving as the rate-limiting enzyme.
Other occurrences and biological uses.
The word guanine derives from the Spanish loanword ('bird/bat droppings'), which itself is from the Quechua word , meaning 'dung'. As the Oxford English Dictionary notes, guanine is "A white amorphous substance obtained abundantly from guano, forming a constituent of the excrement of birds".
In 1656 in Paris, a Mr. Jaquin extracted from the scales of the fish "Alburnus alburnus" so-called "pearl essence", which is crystalline guanine. In the cosmetics industry, crystalline guanine is used as an additive to various products (e.g., shampoos), where it provides a pearly iridescent effect. It is also used in metallic paints and simulated pearls and plastics. It provides shimmering luster to eye shadow and nail polish. Facial treatments using the droppings, or guano, from Japanese nightingales have been used in Japan and elsewhere, because the guanine in the droppings makes the skin look paler. Guanine crystals are rhombic platelets composed of multiple transparent layers, but they have a high index of refraction that partially reflects and transmits light from layer to layer, thus producing a pearly luster. It can be applied by spray, painting, or dipping. It may irritate the eyes. Its alternatives are mica, faux pearl (from ground shells), and aluminium and bronze particles.
Guanine has a very wide variety of biological uses, ranging in both complexity and versatility; these include camouflage, display, and vision, among other purposes.
Spiders, scorpions, and some amphibians convert ammonia, as a product of protein metabolism in the cells, to guanine, as it can be excreted with minimal water loss.
Guanine is also found in specialized skin cells of fish called iridocytes (e.g., the sturgeon), as well as being present in the reflective deposits of the eyes of deep-sea fish and some reptiles, such as crocodiles and chameleons.
On 8 August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA and RNA (guanine, adenine and related organic molecules) may have been formed extra-terrestrially in outer space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12439 |
12439068 | Medical test | Medical procedure
A medical test is a medical procedure performed to detect, diagnose, or monitor diseases, disease processes, susceptibility, or to determine a course of treatment. Medical tests such as physical and visual exams, diagnostic imaging, genetic testing, and chemical and cellular analysis (relating to clinical chemistry and molecular diagnostics) are typically performed in a medical setting.
Types of tests.
By purpose.
Medical tests can be classified by their purposes, including diagnosis, screening or monitoring.
Diagnostic.
A diagnostic test is a procedure performed to confirm or determine the presence of disease in an individual suspected of having a disease, usually following the report of symptoms, or based on other medical test results. This includes posthumous diagnosis. Examples of such tests are:
Screening.
Screening refers to a medical test or series of tests used to detect or predict the presence of disease in at-risk individuals within a defined group such as a population, family, or workforce. Screenings may be performed to monitor disease prevalence, manage epidemiology, aid in prevention, or strictly for statistical purposes.
Examples of screenings include measuring the level of TSH in the blood of a newborn infant as part of newborn screening for congenital hypothyroidism, checking for lung cancer in non-smoking individuals who are exposed to second-hand smoke in an unregulated working environment, and Pap smear screening for prevention or early detection of cervical cancer.
Monitoring.
Some medical tests are used to monitor the progress of, or response to medical treatment.
By method.
Most test methods can be classified into one of the following broad groups:
By sample location.
In vitro tests can be classified according to the location of the sample being tested, including:
Detection and quantification.
Tests performed in a physical examination are usually aimed at detecting a symptom or sign, and in these cases, a test that detects a symptom or sign is designated a positive test, while a test that indicates absence of a symptom or sign is designated a negative test, as further detailed in a separate section below.
A quantification of a target substance, a cell type or another specific entity is a common output of, for example, most blood tests. This is not only answering "if" a target entity is present or absent, but also "how much" is present. In blood tests, the quantification is relatively well specified, such as given in mass concentration, while most other tests may be quantifications as well although less specified, such as a sign of being "very pale" rather than "slightly pale". Similarly, radiologic images are technically quantifications of radiologic opacity of tissues.
Especially in the taking of a medical history, there is no clear limit between a detecting or quantifying test versus rather "descriptive" information of an individual. For example, questions regarding the occupation or social life of an individual may be regarded as tests that can be regarded as positive or negative for the presence of various risk factors, or they may be regarded as "merely" descriptive, although the latter may be at least as clinically important.
Positive or negative.
The result of a test aimed at detection of an entity may be positive or negative: this has nothing to do with a bad prognosis, but rather means that the test worked or not, and a certain parameter that was evaluated was present or not. For example, a negative screening test for breast cancer means that no sign of breast cancer could be found (which is in fact very positive for the patient).
The classification of tests into either positive or negative gives a binary classification, with resultant ability to perform bayesian probability and performance metrics of tests, including calculations of sensitivity and specificity.
Continuous values.
Tests whose results are of continuous values, such as most blood values, can be interpreted as they are, or they can be converted to a binary ones by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff.
Interpretation.
In the finding of a "pathognomonic" sign or symptom it is almost certain that the target condition is present, and in the absence of finding a "sine qua non" sign or symptom it is almost certain that the target condition is absent. In reality, however, the subjective probability of the presence of a condition is never exactly 100% or 0%, so tests are rather aimed at estimating a post-test probability of a condition or other entity.
Most diagnostic tests basically use a reference group to establish performance data such as predictive values, likelihood ratios and relative risks, which are then used to interpret the post-test probability for an individual.
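As an illustration of how such performance data are applied, the Python sketch below converts a pre-test probability into a post-test probability using a likelihood ratio derived from an assumed sensitivity and specificity; all numerical values are invented for the example.

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, result_positive=True):
    """Bayesian update of a disease probability from a binary test result,
    using the likelihood ratio LR+ = sens/(1-spec) or LR- = (1-sens)/spec."""
    lr = (sensitivity / (1 - specificity) if result_positive
          else (1 - sensitivity) / specificity)
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Assumed values for illustration only: 10% pre-test probability,
# a test with 90% sensitivity and 95% specificity, and a positive result.
print(round(post_test_probability(0.10, 0.90, 0.95), 3))   # ~0.667
```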
In monitoring tests of an individual, the test results from previous tests on that individual may be used as a reference to interpret subsequent tests.
Risks.
Some medical testing procedures have associated health risks, and even require general anesthesia, such as the mediastinoscopy. Other tests, such as the blood test or pap smear have little to no direct risks. Medical tests may also have indirect risks, such as the stress of testing, and riskier tests may be required as follow-up for a (potentially) false positive test result. Consult the health care provider (including physicians, physician assistants, and nurse practitioners) prescribing any test for further information.
Indications.
Each test has its own indications and contraindications. An "indication" is a valid medical reason to perform the test. A "contraindication" is a valid medical reason not to perform the test. For example, a basic cholesterol test may be "indicated" (medically appropriate) for a middle-aged person. However, if the same test was performed on that person very recently, then the existence of the previous test is a contraindication for the test (a medically valid reason to not perform it).
Information bias is the cognitive bias that causes healthcare providers to order tests that produce information that they do not realistically expect or intend to use for the purpose of making a medical decision. Medical tests are indicated when the information they produce will be used. For example, a screening mammogram is not indicated (not medically appropriate) for a woman who is dying, because even if breast cancer is found, she will die before any cancer treatment could begin.
In a simplified fashion, how much a test is indicated for an individual depends largely on its "net benefit" for that individual. Tests are chosen when the expected benefit is greater than the expected harm. The net benefit may roughly be estimated by:
formula_0
where "bn" is the net benefit of performing the test, "Δp" is the absolute difference between the pre- and post-test probability of the condition (such as a disease) that the test is expected to achieve, "ri" is the rate at which such probability differences are expected to result in changes of intervention, "bi" is the benefit of those changes in intervention for the individual, "hi" is the harm of those changes in intervention (such as side effects of treatment), and "ht" is the harm caused by the test itself.
Some additional factors that influence a decision whether a medical test should be performed or not included: cost of the test, availability of additional tests, potential interference with subsequent test (such as an abdominal palpation potentially inducing intestinal activity whose sounds interfere with a subsequent abdominal auscultation), time taken for the test or other practical or administrative aspects. The possible benefits of a diagnostic test may also be weighed against the costs of unnecessary tests and resulting unnecessary follow-up and possibly even unnecessary treatment of incidental findings.
In some cases, tests being performed are expected to have no benefit for the individual being tested. Instead, the results may be useful for the establishment of statistics in order to improve health care for other individuals. Patients may give informed consent to undergo medical tests that will benefit other people.
Patient expectations.
In addition to considerations of the nature of medical testing noted above, other realities can lead to misconceptions and unjustified expectations among patients. These include: Different labs have different normal reference ranges; slightly different values will result from repeating a test; "normal" is defined by a spectrum along a bell curve resulting from the testing of a population, not by "rational, science-based, physiological principles"; sometimes tests are used in the hope of turning something up to give the doctor a clue as to the nature of a given condition; and imaging tests are subject to fallible human interpretation and can show "incidentalomas", most of which "are benign, will never cause symptoms, and do not require further evaluation," although clinicians are developing guidelines for deciding when to pursue diagnoses of incidentalomas.
Standard for the reporting and assessment.
The QUADAS-2 revision is available.
See also.
List of medical tests
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " b_n = \\Delta p \\times r_i \\times ( b_i - h_i ) - h_t"
}
] | https://en.wikipedia.org/wiki?curid=12439068 |
12440145 | Revealed comparative advantage | The revealed comparative advantage is an index used in international economics for calculating the relative advantage or disadvantage of a certain country in a certain class of goods or services as evidenced by trade flows. It is based on the Ricardian comparative advantage concept.
It most commonly refers to an index, called the Balassa index, introduced by Béla Balassa (1965).
In particular, the revealed comparative advantage of country formula_0 in product/commodity/good formula_1 is defined by:
formula_2, where:
That is, the RCA is equal to the proportion of the country's exports that are of the class under consideration, formula_3, divided by the proportion of world exports that are of that class, formula_4.
A comparative advantage is "revealed" if RCA>1. If RCA is less than unity, the country is said to have a comparative disadvantage in the commodity or industry.
The concept of revealed comparative advantage is similar to that of economic base theory, which is the same calculation, but considers employment rather than exports.
Example: in 2010, soybeans represented 0.35% of world trade with exports of $42 billion. Of this total, Brazil exported nearly $11 billion, and since Brazil's total exports for that year were $140 billion, soybeans accounted for 7.9% of Brazil's exports. Because 7.9/0.35 = 22, Brazil exports 22 times its "fair share" of soybean exports, and so we can say that Brazil has a high revealed comparative advantage in soybeans.
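The soybean example can be reproduced with a few lines of Python; the figures below are the rounded ones quoted above, so the result is approximate.

```python
def rca(country_product_exports, country_total_exports,
        world_product_exports, world_total_exports):
    """Balassa revealed comparative advantage index."""
    country_share = country_product_exports / country_total_exports
    world_share = world_product_exports / world_total_exports
    return country_share / world_share

# 2010 soybean example (billions of US dollars): soybeans were 0.35% of world
# trade, so total world exports are roughly 42 / 0.0035.
world_total = 42 / 0.0035
print(round(rca(11, 140, 42, world_total), 1))   # ~22
```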
References.
Obtain RCA for predefined product groups, such as SITC Revision 2 groups, sector classifications based on HS, or UNCTAD's stages of processing, in the WITS "Indicators by Product Group" page.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "RCA_{cp} = \\frac {E_{cp}/\\sum_{p' \\in P} E_{cp'}} {\\sum_{c'\\in C} E_{c'p}/\\sum_{c'\\in C, p'\\in P}E_{c'p'}}"
},
{
"math_id": 3,
"text": "\\frac{E_{cp}}{\\sum_{p'}E_{cp'}}"
},
{
"math_id": 4,
"text": "\\frac{\\sum_{c'} E_{c'p}}{\\sum_{c', p'} E_{c'p'}}"
}
] | https://en.wikipedia.org/wiki?curid=12440145 |
12442391 | Continuous stochastic process | Stochastic process that is a continuous function of time or index parameter
In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for (the sample paths of) a process to have, since it implies that they are well-behaved in some sense, and, therefore, much easier to analyze. It is implicit here that the index of the stochastic process is a continuous variable. Some authors define a "continuous (stochastic) process" as only requiring that the index variable be continuous, without continuity of sample paths: in another terminology, this would be a continuous-time stochastic process, in parallel to a "discrete-time process". Given the possible confusion, caution is needed.
Definitions.
Let (Ω, Σ, P) be a probability space, let "T" be some interval of time, and let "X" : "T" × Ω → "S" be a stochastic process. For simplicity, the rest of this article will take the state space "S" to be the real line R, but the definitions go through "mutatis mutandis" if "S" is R"n", a normed vector space, or even a general metric space.
Continuity almost surely.
Given a time "t" ∈ "T", "X" is said to be continuous with probability one at "t" if
formula_0
Mean-square continuity.
Given a time "t" ∈ "T", "X" is said to be continuous in mean-square at "t" if E[|"X""t"|2] < +∞ and
formula_1
Continuity in probability.
Given a time "t" ∈ "T", "X" is said to be continuous in probability at "t" if, for all "ε" > 0,
formula_2
Equivalently, "X" is continuous in probability at time "t" if
formula_3
Continuity in distribution.
Given a time "t" ∈ "T", "X" is said to be continuous in distribution at "t" if
formula_4
for all points "x" at which "F""t" is continuous, where "F""t" denotes the cumulative distribution function of the random variable "X""t".
Sample continuity.
"X" is said to be sample continuous if "X""t"("ω") is continuous in "t" for P-almost all "ω" ∈ Ω. Sample continuity is the appropriate notion of continuity for processes such as Itō diffusions.
Feller continuity.
"X" is said to be a Feller-continuous process if, for any fixed "t" ∈ "T" and any bounded, continuous and Σ-measurable function "g" : "S" → R, E"x"["g"("X""t")] depends continuously upon "x". Here "x" denotes the initial state of the process "X", and E"x" denotes expectation conditional upon the event that "X" starts at "x".
Relationships.
The relationships between the various types of continuity of stochastic processes are akin to the relationships between the various types of convergence of random variables. In particular, continuity with probability one implies continuity in probability; continuity in mean-square implies continuity in probability; continuity in probability implies continuity in distribution; and continuity with probability one neither implies, nor is implied by, continuity in mean-square.
It is tempting to confuse continuity with probability one with sample continuity. Continuity with probability one at time "t" means that P("A""t") = 0, where the event "A""t" is given by
formula_5
and it is perfectly feasible to check whether or not this holds for each "t" ∈ "T". Sample continuity, on the other hand, requires that P("A") = 0, where
formula_6
"A" is an uncountable union of events, so it may not actually be an event itself, so P("A") may be undefined! Even worse, even if "A" is an event, P("A") can be strictly positive even if P("A""t") = 0 for every "t" ∈ "T". This is the case, for example, with the telegraph process.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{P} \\left( \\left\\{ \\omega \\in \\Omega \\left| \\lim_{s \\to t} \\big| X_{s} (\\omega) - X_{t} (\\omega) \\big| = 0 \\right. \\right\\} \\right) = 1."
},
{
"math_id": 1,
"text": "\\lim_{s \\to t} \\mathbf{E} \\left[ \\big| X_{s} - X_{t} \\big|^{2} \\right] = 0."
},
{
"math_id": 2,
"text": "\\lim_{s \\to t} \\mathbf{P} \\left( \\left\\{ \\omega \\in \\Omega \\left| \\big| X_{s} (\\omega) - X_{t} (\\omega) \\big| \\geq \\varepsilon \\right. \\right\\} \\right) = 0."
},
{
"math_id": 3,
"text": "\\lim_{s \\to t} \\mathbf{E} \\left[ \\frac{\\big| X_{s} - X_{t} \\big|}{1 + \\big| X_{s} - X_{t} \\big|} \\right] = 0."
},
{
"math_id": 4,
"text": "\\lim_{s \\to t} F_{s} (x) = F_{t} (x)"
},
{
"math_id": 5,
"text": "A_{t} = \\left\\{ \\omega \\in \\Omega \\left| \\lim_{s \\to t} \\big| X_{s} (\\omega) - X_{t} (\\omega) \\big| \\neq 0 \\right. \\right\\},"
},
{
"math_id": 6,
"text": "A = \\bigcup_{t \\in T} A_{t}."
}
] | https://en.wikipedia.org/wiki?curid=12442391 |
1244240 | Noise shaping | Digital signal performance enhancement
Noise shaping is a technique typically used in digital audio, image, and video processing, usually in combination with dithering, as part of the process of quantization or bit-depth reduction of a signal. Its purpose is to increase the apparent signal-to-noise ratio of the resultant signal. It does this by altering the spectral shape of the error that is introduced by dithering and quantization, such that the noise power is at a lower level in frequency bands at which noise is considered to be less desirable and at a correspondingly higher level in bands where it is considered to be more desirable. A popular noise shaping algorithm used in image processing is Floyd–Steinberg dithering, and many noise shaping algorithms used in audio processing are based on an absolute threshold of hearing model.
Operation.
Any feedback loop functions as a filter. Noise shaping works by putting quantization noise in a feedback loop designed to filter the noise as desired.
Low-pass boxcar filter example.
For example, consider the feedback system:
formula_0
where b is a constant, n is the cycle number, "x"["n"] is the input sample value, "y"["n"] is the value being quantized, and "e"["n"] is its quantization error:
formula_1
In this model, when any sample's bit depth is reduced, the quantization error is measured and on the next cycle added with the next sample prior to quantization. The effect is that the quantization error is low-pass filtered by a 2-sample boxcar filter (also known as a simple moving average filter). As a result, compared to before, the quantization error has lower power at higher frequencies and higher power at lower frequencies. The filter's cutoff frequency can be adjusted by modifying b, the proportion of error from the previous sample that is fed back.
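A minimal Python sketch of this error-feedback loop follows; the quantization step, feedback coefficient and test signal are arbitrary illustrative choices, and TPDF dither (discussed below) is added before rounding.

```python
import numpy as np

def noise_shaping_quantize(x, step=1.0, b=1.0, dither=True, seed=0):
    """First-order error-feedback (noise-shaping) quantizer from the example above.

    y[n] = x[n] + b*e[n-1] is rounded to the nearest multiple of `step`, and the
    quantization error e[n] = yq[n] - y[n] is fed back on the next cycle.  The
    output error is then e[n] + b*e[n-1]: with b > 0 its power is shifted toward
    low frequencies (the boxcar case described above), while with b = -1 it is
    first-differenced, i.e. pushed toward high frequencies.
    """
    rng = np.random.default_rng(seed)
    out = np.empty_like(x, dtype=float)
    e_prev = 0.0
    for n, xn in enumerate(x):
        y = xn + b * e_prev
        d = step * (rng.random() - rng.random()) if dither else 0.0   # TPDF dither
        yq = step * np.round((y + d) / step)
        e_prev = yq - y                    # error fed back on the next cycle
        out[n] = yq
    return out

# Illustrative use: re-quantize a low-level sine to a coarse step; comparing the
# error spectra for b = +1, b = -1 and b = 0 shows the shaped noise floor.
t = np.arange(48_000) / 48_000.0
x = 0.25 * np.sin(2 * np.pi * 440.0 * t)
shaped = noise_shaping_quantize(x, step=0.05, b=1.0)
```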
Impulse response filters in general.
More generally, any FIR filter or IIR filter can be used to create a more complex frequency response curve. Such filters can be designed using the weighted least squares method. In the case of digital audio, typically the weighting function used is one divided by the absolute threshold of hearing curve, i.e.
formula_2
Dithering.
Adding an appropriate amount of dither during quantization prevents determinable errors correlated to the signal. If dither is not used then noise shaping effectively functions merely as distortion shaping — pushing the distortion energy around to different frequency bands, but it is still distortion. If dither is added to the process as
formula_3
then the quantization error truly becomes noise, and the process indeed yields noise shaping.
In digital audio.
Noise shaping in audio is most commonly applied as a bit-reduction scheme. The most basic form of dither is flat, white noise. The ear, however, is less sensitive to certain frequencies than others at low levels (see Equal-loudness contour). By using noise shaping the quantization error can be effectively spread around so that more of it is focused on frequencies that can't be heard as well and less of it is focused on frequencies that can. The result is that where the ear is most critical the quantization error can be reduced greatly and where the ear is less sensitive the noise is much greater. This can give a perceived noise reduction of 4 bits compared to straight dither. So although 16-bit samples only have 96 dB of dynamic range across the entire spectrum (see quantization distortion calculations), noise-shaped dithering can increase the perceived audio dynamic range to 120 dB.
Noise shaping and 1-bit converters.
Since around 1989, 1-bit delta-sigma modulators have been used in analog-to-digital converters. This involves sampling the audio at a very high rate (2.8224 million samples per second, for example) but only using a single bit. Because only 1 bit is used, this converter only has 6.02 dB of dynamic range. The noise floor, however, is spread throughout the entire non-aliased frequency range below the Nyquist frequency of 1.4112 MHz. Noise shaping is used to lower the noise present in the audible range (20 Hz to 20 kHz) and increase the noise above the audible range. This results in a broadband dynamic range of only 7.78 dB, but it is not consistent among frequency bands, and in the lowest frequencies (the audible range) the dynamic range is much greater — over 100 dB. Noise shaping is inherently built into the delta-sigma modulators.
The 1-bit converter is the basis of the DSD format by Sony. One criticism of the 1-bit converter (and thus the DSD system) is that because only 1 bit is used in both the signal and the feedback loop, adequate amounts of dither cannot be used in the feedback loop and distortion can be heard under some conditions.
Most A/D converters made since 2000 use multi-bit or multi-level delta-sigma modulators that yield more than 1 bit output so that proper dither can be added in the feedback loop. For traditional PCM sampling the signal is then decimated to 44.1 kHz or other appropriate sample rates.
In modern ADCs.
Analog Devices uses what it refers to as a "Noise Shaping Requantizer", and Texas Instruments uses what it refers to as "SNRBoost", to lower the noise floor by approximately 30 dB compared to the surrounding frequencies. This comes at the cost of non-continuous operation but produces a characteristic bathtub shape in the spectrum floor. This can be combined with other techniques, such as Bit-Boost, to further enhance the resolution of the spectrum.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y[n] = x[n] + b \\cdot e[n-1],"
},
{
"math_id": 1,
"text": "\\ e[n] = y_\\text{quantized}[n] - y[n]."
},
{
"math_id": 2,
"text": "\\ W(f) = \\frac{1}{A(f)}."
},
{
"math_id": 3,
"text": "\\ y[n] = x[n] + b \\cdot e[n-1] + \\mathrm{dither},"
}
] | https://en.wikipedia.org/wiki?curid=1244240 |
1244292 | Electromagnetically induced transparency | Electromagnetically induced transparency (EIT) is a coherent optical nonlinearity which renders a medium transparent within a narrow spectral range around an absorption line. Extreme dispersion is also created within this transparency "window" which leads to "slow light", described below. It is in essence a quantum interference effect that permits the propagation of light through an otherwise opaque atomic medium.
Observation of EIT involves two optical fields (highly coherent light sources, such as lasers) which are tuned to interact with three quantum states of a material. The "probe" field is tuned near resonance between two of the states and measures the absorption spectrum of the transition. A much stronger "coupling" field is tuned near resonance at a different transition. If the states are selected properly, the presence of the coupling field will create a spectral "window" of transparency which will be detected by the probe. The coupling laser is sometimes referred to as the "control" or "pump", the latter in analogy to incoherent optical nonlinearities such as spectral hole burning or saturation.
EIT is based on the destructive interference of the transition probability amplitude between atomic states. Closely related to EIT are coherent population trapping (CPT) phenomena.
The quantum interference in EIT can be exploited to laser cool atomic particles, even down to the quantum mechanical ground state of motion. This was used in 2015 to directly image individual atoms trapped in an optical lattice.
Medium requirements.
There are specific restrictions on the configuration of the three states. Two of the three possible transitions between the states must be "dipole allowed", i.e. the transitions can be induced by an oscillating electric field. The third transition must be "dipole forbidden." One of the three states is connected to the other two by the two optical fields. The three types of EIT schemes are differentiated by the energy differences between this state and the other two. The schemes are the ladder, vee, and lambda. Any real material system may contain many triplets of states which could theoretically support EIT, but there are several practical limitations on which levels can actually be used.
Also important are the dephasing rates of the individual states. In any real system at non-zero temperature there are processes which cause a scrambling of the phase of the quantum states. In the gas phase, this means usually collisions. In solids, dephasing is due to interaction of the electronic states with the host lattice. The dephasing of state formula_0 is especially important; ideally formula_0 should be a robust, metastable state.
Currently EIT research uses atomic systems in dilute gases, solid solutions, or more exotic states such as Bose–Einstein condensate. EIT has been demonstrated in electromechanical and optomechanical systems, where it is known as optomechanically induced transparency. Work is also being done in semiconductor nanostructures such as quantum wells, quantum wires and quantum dots.
Theory.
EIT was first proposed theoretically by professor Jakob Khanin and graduate student Olga Kocharovskaya at Gorky State University (renamed to Nizhny Novgorod in 1990), Russia; there are now several different approaches to a theoretical treatment of EIT. One approach is to extend the density matrix treatment used to drive Rabi oscillation of a two-state, single field system. In this picture the probability amplitude for the system to transfer between states can interfere destructively, preventing absorption. In this context, "interference" refers to interference between "quantum events" (transitions) and not optical interference of any kind. As a specific example, consider the lambda scheme shown above. Absorption of the probe is defined by transition from formula_1 to formula_2. The fields can drive population from formula_1-formula_2 directly or from formula_1-formula_2-formula_0-formula_2. The probability amplitudes for the different paths interfere destructively. If formula_0 has a comparatively long lifetime, then the result will be a transparent window completely inside of the formula_1-formula_2 absorption line.
Another approach is the "dressed state" picture, wherein the system + coupling field Hamiltonian is diagonalized and the effect on the probe is calculated in the new basis. In this picture EIT resembles a combination of Autler-Townes splitting and Fano interference between the dressed states. Between the doublet peaks, in the center of the transparency window, the quantum probability amplitudes for the probe to cause a transition to either state cancel.
A polariton picture is particularly important in describing stopped light schemes. Here, the photons of the probe are coherently "transformed" into "dark state polaritons" which are excitations of the medium. These excitations exist (or can be "stored") for a length of time dependent only on the dephasing rates.
Slow light and stopped light.
EIT is only one of many diverse mechanisms which can produce slow light. The Kramers–Kronig relations dictate that a change in absorption (or gain) over a narrow spectral range must be accompanied by a change in refractive index over a similarly narrow region. This rapid and "positive" change in refractive index produces an extremely low group velocity. The first experimental observation of the low group velocity produced by EIT was by Boller, İmamoğlu, and Harris at Stanford University in 1991 in strontium. In 1999 Lene Hau reported slowing light in a medium of ultracold sodium atoms, achieving this by using quantum interference effects responsible for electromagnetically induced transparency (EIT). Her group performed copious research regarding EIT with Stephen E. Harris. "Using detailed numerical simulations, and analytical theory, we study properties of micro-cavities which incorporate materials that exhibit Electro-magnetically Induced Transparency (EIT) or Ultra Slow Light (USL). We find that such systems, while being miniature in size (order wavelength), and integrable, can have some outstanding properties. In particular, they could have lifetimes orders of magnitude longer than other existing systems, and could exhibit non-linear all-optical switching at single photon power levels. Potential applications include miniature atomic clocks, and all-optical quantum information processing." The current record for slow light in an EIT medium is held by Budker, Kimball, Rochester, and Yashchuk at U.C. Berkeley in 1999. Group velocities as low as 8 m/s were measured in a warm thermal rubidium vapor.
"Stopped" light, in the context of an EIT medium, refers to the "coherent" transfer of photons to the quantum system and back again. In principle, this involves switching "off" the coupling beam in an adiabatic fashion while the probe pulse is still inside of the EIT medium. There is experimental evidence of trapped pulses in EIT medium. Authors created a stationary light pulse inside the atomic coherent media. In 2009 researchers from Harvard University and MIT demonstrated a few-photon optical switch for quantum optics based on the slow light ideas. Lene Hau and a team from Harvard University were the first to demonstrate stopped light.
EIT cooling.
EIT has been used to laser cool long strings of atoms to their motional ground state in an ion trap. To illustrate the cooling technique, consider a three level atom as shown with a ground state formula_3, an excited state formula_4, and a stable or metastable state formula_5 that lies in between them. The excited state formula_4 is dipole coupled to formula_5 and formula_3. An intense "coupling" laser drives the formula_6 transition at detuning formula_7 above resonance. Due to the quantum interference of transition amplitudes, a weaker "cooling" laser driving the formula_8 transition at detuning formula_9 above resonance sees a Fano-like feature on the absorption profile. EIT cooling is realized when formula_10, such that the carrier transition formula_11 lies on the dark resonance of the Fano-like feature, where formula_12 is used to label the quantized motional state of the atom. The Rabi frequency formula_13 of the coupling laser is chosen such that the formula_14 "red" sideband lies on the narrow maximum of the Fano-like feature. Conversely the formula_15 "blue" sideband lies in a region of low excitation probability, as shown in the figure below. Due to the large ratio of the excitation probabilities, the cooling limit is lowered in comparison to doppler or sideband cooling (assuming the same cooling rate).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|3\\rangle"
},
{
"math_id": 1,
"text": "|1\\rangle"
},
{
"math_id": 2,
"text": "|2\\rangle"
},
{
"math_id": 3,
"text": "|g\\rangle"
},
{
"math_id": 4,
"text": "|e\\rangle"
},
{
"math_id": 5,
"text": "|m\\rangle"
},
{
"math_id": 6,
"text": "|m \\rangle \\rightarrow |e\\rangle"
},
{
"math_id": 7,
"text": "\\Delta_m"
},
{
"math_id": 8,
"text": "|g \\rangle \\rightarrow |e\\rangle"
},
{
"math_id": 9,
"text": "\\Delta_g"
},
{
"math_id": 10,
"text": "\\Delta_g = \\Delta_m"
},
{
"math_id": 11,
"text": "|g,n \\rangle \\rightarrow |e, n\\rangle"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\Omega_m"
},
{
"math_id": 14,
"text": "|g,n \\rangle \\rightarrow |e, n-1\\rangle"
},
{
"math_id": 15,
"text": "|g,n \\rangle \\rightarrow |e, n+1\\rangle"
}
] | https://en.wikipedia.org/wiki?curid=1244292 |
1244523 | Newton's method in optimization | Method for finding stationary points of a function
In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function formula_0, which are solutions to the equation formula_1. As such, Newton's method can be applied to the derivative formula_2 of a twice-differentiable function formula_3 to find the roots of the derivative (solutions to formula_4), also known as the critical points of formula_3. These solutions may be minima, maxima, or saddle points; see section "Several variables" in Critical point (mathematics) and also section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function formula_3.
Newton's method.
The central problem of optimization is minimization of functions. Let us first consider the case of univariate functions, i.e., functions of a single real variable. We will later consider the more general and more practically useful multivariate case.
Given a twice differentiable function formula_5, we seek to solve the optimization problem
formula_6
Newton's method attempts to solve this problem by constructing a sequence formula_7 from an initial guess (starting point) formula_8 that converges towards a minimizer formula_9 of formula_3 by using a sequence of second-order Taylor approximations of formula_3 around the iterates. The second-order Taylor expansion of "f" around formula_10 is
formula_11
The next iterate formula_12 is defined so as to minimize this quadratic approximation in formula_13, and setting formula_14. If the second derivative is positive, the quadratic approximation is a convex function of formula_13, and its minimum can be found by setting the derivative to zero. Since
formula_15
the minimum is achieved for
formula_16
Putting everything together, Newton's method performs the iteration
formula_17
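A minimal Python sketch of this univariate iteration follows; the objective function and starting point are arbitrary illustrative choices.

```python
def newton_minimize(f_prime, f_double_prime, x0, tol=1e-10, max_iter=100):
    """Univariate Newton's method for optimization: x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: f(x) = x^4 - 3x^3 + 2, so f'(x) = 4x^3 - 9x^2 and f''(x) = 12x^2 - 18x.
x_min = newton_minimize(lambda x: 4 * x**3 - 9 * x**2,
                        lambda x: 12 * x**2 - 18 * x,
                        x0=3.0)
print(x_min)   # converges quadratically to the local minimizer x = 9/4 = 2.25
```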
Geometric interpretation.
The geometric interpretation of Newton's method is that at each iteration, it amounts to the fitting of a parabola to the graph of formula_18 at the trial value formula_10, having the same slope and curvature as the graph at that point, and then proceeding to the maximum or minimum of that parabola (in higher dimensions, this may also be a saddle point), see below. Note that if formula_3 happens to be a quadratic function, then the exact extremum is found in one step.
Higher dimensions.
The above iterative scheme can be generalized to formula_19 dimensions by replacing the derivative with the gradient (different authors use different notation for the gradient, including formula_20), and the reciprocal of the second derivative with the inverse of the Hessian matrix (different authors use different notation for the Hessian, including formula_21). One thus obtains the iterative scheme
formula_22
Often Newton's method is modified to include a small step size formula_23 instead of formula_24:
formula_25
This is often done to ensure that the Wolfe conditions, or much simpler and efficient Armijo's condition, are satisfied at each step of the method. For step sizes other than 1, the method is often referred to as the relaxed or damped Newton's method.
Convergence.
If "f" is a strongly convex function with Lipschitz Hessian, then provided that formula_26 is close enough to formula_27, the sequence formula_28 generated by Newton's method will converge to the (necessarily unique) minimizer formula_9 of formula_29 quadratically fast. That is,
formula_30
Computing the Newton direction.
Finding the inverse of the Hessian in high dimensions to compute the Newton direction formula_31 can be an expensive operation. In such cases, instead of directly inverting the Hessian, it is better to calculate the vector formula_32 as the solution to the system of linear equations
formula_33
which may be solved by various factorizations or approximately (but to great accuracy) using iterative methods. Many of these methods are only applicable to certain types of equations; for example, the Cholesky factorization and conjugate gradient will only work if formula_34 is a positive definite matrix. While this may seem like a limitation, it is often a useful indicator of something gone wrong; for example, if a minimization problem is being approached and formula_34 is not positive definite, then the iterations are converging to a saddle point and not a minimum.
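A sketch of this approach in Python/NumPy is shown below: the Newton direction is obtained from a Cholesky factorization and two triangular solves rather than an explicit inverse, and the factorization fails with an exception exactly when the Hessian is not positive definite. The step size, tolerance, and test function are illustrative assumptions, not part of any standard library interface.

```python
import numpy as np

def damped_newton(grad, hess, x0, step_size=1.0, tol=1e-8, max_iter=100):
    """Newton's method in several variables; the direction p solves H p = -g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        L = np.linalg.cholesky(H)   # raises LinAlgError if H is not positive definite
        p = -np.linalg.solve(L.T, np.linalg.solve(L, g))   # two triangular solves, no explicit inverse
        x = x + step_size * p
    return x

# Illustrative objective f(x) = x0^4 + x0*x1 + (1 + x1)^2 with its gradient and Hessian.
grad = lambda x: np.array([4 * x[0]**3 + x[1], x[0] + 2 * (1 + x[1])])
hess = lambda x: np.array([[12 * x[0]**2, 1.0], [1.0, 2.0]])
print(damped_newton(grad, hess, x0=[0.7, -1.0]))
```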
On the other hand, if a constrained optimization is done (for example, with Lagrange multipliers), the problem may become one of saddle point finding, in which case the Hessian will be symmetric indefinite and the solution of formula_12 will need to be done with a method that will work for such, such as the formula_35 variant of Cholesky factorization or the conjugate residual method.
There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse directly) is built up from changes in the gradient.
If the Hessian is close to a non-invertible matrix, the inverted Hessian can be numerically unstable and the solution may diverge. In this case, certain workarounds have been tried in the past, with varied success on particular problems. One can, for example, modify the Hessian by adding a correction matrix formula_36 so as to make formula_37 positive definite. One approach is to diagonalize the Hessian and choose formula_36 so that formula_37 has the same eigenvectors as the Hessian, but with each negative eigenvalue replaced by formula_38.
An approach exploited in the Levenberg–Marquardt algorithm (which uses an approximate Hessian) is to add a scaled identity matrix to the Hessian, formula_39, with the scale adjusted at every iteration as needed. For large formula_40 and small Hessian, the iterations will behave like gradient descent with step size formula_41. This results in slower but more reliable convergence where the Hessian doesn't provide useful information.
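A hedged sketch of this kind of safeguard is given below; the initial scale, its growth factor, and the indefinite test matrices are arbitrary choices made only for illustration. The added multiple of the identity is increased until the modified Hessian admits a Cholesky factorization, after which the step is obtained by a linear solve.

```python
import numpy as np

def regularized_newton_direction(hessian, gradient, mu=1e-4, growth=10.0, max_tries=30):
    """Sketch of a Levenberg-Marquardt-style safeguard: add a scaled identity
    to the (symmetric) Hessian, increasing the scale until the modified matrix
    is positive definite, then solve for the step.  For large scales the step
    approaches gradient descent with step size 1/scale."""
    n = hessian.shape[0]
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(hessian + mu * np.eye(n))   # positive definite?
            return np.linalg.solve(hessian + mu * np.eye(n), -gradient)
        except np.linalg.LinAlgError:
            mu *= growth
    raise ValueError("could not make the modified Hessian positive definite")

# Hypothetical indefinite Hessian and gradient, for illustration only.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([0.5, 1.0])
print(regularized_newton_direction(H, g))
```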
Some caveats.
Newton's method, in its original version, has several caveats:
The popular modifications of Newton's method, such as the quasi-Newton methods or the Levenberg-Marquardt algorithm mentioned above, also have caveats:
For example, it is usually required that the cost function be (strongly) convex and the Hessian globally bounded or Lipschitz continuous, as mentioned in the section "Convergence" of this article. If one looks at the papers by Levenberg and Marquardt in the reference for the Levenberg–Marquardt algorithm, which are the original sources for the mentioned method, one can see that there is essentially no theoretical analysis in the paper by Levenberg, while the paper by Marquardt only analyses a local situation and does not prove a global convergence result. One can compare with the backtracking line search method for gradient descent, which has good theoretical guarantees under more general assumptions, and which can be implemented and works well in practical large-scale problems such as deep neural networks.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "F'(x)=0"
},
{
"math_id": 2,
"text": "f'"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "f'(x)=0"
},
{
"math_id": 5,
"text": " f:\\mathbb{R}\\to \\mathbb{R}"
},
{
"math_id": 6,
"text": " \\min_{x\\in \\mathbb{R}} f(x) ."
},
{
"math_id": 7,
"text": "\\{x_k\\}"
},
{
"math_id": 8,
"text": "x_0\\in \\mathbb{R}"
},
{
"math_id": 9,
"text": "x_*"
},
{
"math_id": 10,
"text": "x_k"
},
{
"math_id": 11,
"text": "f(x_k + t) \\approx f(x_k) + f'(x_k) t + \\frac{1}{2} f''(x_k) t^2."
},
{
"math_id": 12,
"text": "x_{k+1}"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "x_{k+1}=x_k + t"
},
{
"math_id": 15,
"text": "\\displaystyle 0 = \\frac{\\rm d}{{\\rm d} t} \\left(f(x_k) + f'(x_k) t + \\frac 1 2 f''(x_k) t^2\\right) = f'(x_k) + f'' (x_k) t, "
},
{
"math_id": 16,
"text": " t = -\\frac{f'(x_k)}{f''(x_k)} ."
},
{
"math_id": 17,
"text": " x_{k+1} = x_k + t = x_k - \\frac{f'(x_k)}{f''(x_k)}. "
},
{
"math_id": 18,
"text": "f(x)"
},
{
"math_id": 19,
"text": " d>1"
},
{
"math_id": 20,
"text": "f'(x) = \\nabla f(x) = g_f(x)\\in \\mathbb{R}^d"
},
{
"math_id": 21,
"text": "f''(x) = \\nabla^2 f(x) = H_f(x)\\in \\mathbb{R}^{d\\times d}"
},
{
"math_id": 22,
"text": "x_{k+1} = x_k - [f''(x_k)]^{-1} f'(x_k), \\qquad k \\ge 0."
},
{
"math_id": 23,
"text": " 0 < \\gamma \\le 1 "
},
{
"math_id": 24,
"text": "\\gamma=1"
},
{
"math_id": 25,
"text": "x_{k+1} = x_k - \\gamma [f''(x_k)]^{-1} f' (x_k)."
},
{
"math_id": 26,
"text": "x_0 "
},
{
"math_id": 27,
"text": " x_*=\\arg \\min f(x) "
},
{
"math_id": 28,
"text": "x_0, x_1, x_2, \\dots"
},
{
"math_id": 29,
"text": " f "
},
{
"math_id": 30,
"text": " \\|x_{k+1}-x_*\\| \\leq \\frac{1}{2} \\|x_{k}-x_*\\|^2, \\qquad \\forall k\\geq 0."
},
{
"math_id": 31,
"text": "h = - (f''(x_k))^{-1} f'(x_k)"
},
{
"math_id": 32,
"text": " h "
},
{
"math_id": 33,
"text": "[f''(x_k)] h = - f'(x_k)"
},
{
"math_id": 34,
"text": "f''(x_k)"
},
{
"math_id": 35,
"text": "LDL^\\top"
},
{
"math_id": 36,
"text": "B_k"
},
{
"math_id": 37,
"text": " f''(x_k) + B_k"
},
{
"math_id": 38,
"text": "\\epsilon>0"
},
{
"math_id": 39,
"text": "\\mu I"
},
{
"math_id": 40,
"text": "\\mu "
},
{
"math_id": 41,
"text": "1/\\mu"
}
] | https://en.wikipedia.org/wiki?curid=1244523 |
1244926 | Graveyard orbit | Spacecraft end-of-life orbit
A graveyard orbit, also called a junk orbit or disposal orbit, is an orbit that lies away from common operational orbits. One significant graveyard orbit is a supersynchronous orbit well beyond geosynchronous orbit. Some satellites are moved into such orbits at the end of their operational life to reduce the probability of colliding with operational spacecraft and generating space debris.
Overview.
A graveyard orbit is used when the change in velocity required to perform a de-orbit maneuver is too large. De-orbiting a geostationary satellite requires a delta-v of about 1,500 m/s, whereas re-orbiting it to a graveyard orbit only requires about 11 m/s.
For satellites in geostationary orbit and geosynchronous orbits, the graveyard orbit is a few hundred kilometers beyond the operational orbit. The transfer to a graveyard orbit beyond geostationary orbit requires the same amount of fuel as a satellite needs for about three months of stationkeeping. It also requires a reliable attitude control during the transfer maneuver. While most satellite operators plan to perform such a maneuver at the end of their satellites' operational lives, through 2005 only about one-third succeeded. Given the economic value of the positions at geosynchronous altitude, unless premature spacecraft failure precludes it, satellites are moved to a graveyard orbit prior to decommissioning.
According to the Inter-Agency Space Debris Coordination Committee (IADC) the minimum perigee altitude formula_0 beyond the geostationary orbit is:
formula_1
where formula_2 is the solar radiation pressure coefficient and formula_3 is the aspect area [m2] to mass [kg] ratio of the satellite. This formula includes about 200 km for the GEO-protected zone to also permit orbit maneuvers in GEO without interference with the graveyard orbit. Another roughly 35 km of tolerance must be allowed for the effects of gravitational perturbations (primarily solar and lunar). The remaining part of the equation considers the effects of the solar radiation pressure, which depends on the physical parameters of the satellite.
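As an illustration of how the formula is applied (the satellite parameters here are hypothetical values chosen only for the example), the minimum re-orbit altitude can be evaluated directly:

```python
# Illustrative evaluation of the IADC minimum re-orbit altitude formula above,
# delta_H = 235 km + (1000 * C_R * A/m) km.  The parameter values below are
# hypothetical and chosen only for this example.
def iadc_min_reorbit_altitude_km(c_r, area_to_mass):
    """c_r: solar radiation pressure coefficient; area_to_mass: A/m in m^2/kg."""
    return 235.0 + 1000.0 * c_r * area_to_mass

print(iadc_min_reorbit_altitude_km(c_r=1.2, area_to_mass=0.02))  # 259.0 km above GEO
```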
In order to obtain a license to provide telecommunications services in the United States, the Federal Communications Commission (FCC) requires all geostationary satellites launched after March 18, 2002, to commit to moving to a graveyard orbit at the end of their operational lives. U.S. government regulations require a boost, formula_4, of about 300 km. In 2023 DISH received the first-ever fine by the FCC for failing to de-orbit its EchoStar VII satellite according to the terms of its license.
A spacecraft moved to a graveyard orbit will typically be passivated.
Uncontrolled objects in a near-geostationary Earth orbit (GEO) exhibit a 53-year cycle of orbital inclination due to the interaction of the Earth's tilt with the lunar orbit. The orbital inclination varies by ±7.4°, at a rate of up to 0.8° per year.
Disposal orbit.
While the standard geosynchronous satellite graveyard orbit results in an expected orbital lifetime of millions of years, the increasing number of satellites, the launch of microsatellites, and the FCC approval of large megaconstellations of thousands of satellites for launch by 2022 necessitate new approaches for deorbiting to assure earlier removal of the objects once they have reached end-of-life. In contrast to GEO graveyard orbits, which require three months' worth of fuel (delta-V of 11 m/s) to reach, large satellite networks in LEO require orbits that passively decay into the Earth's atmosphere. For example, both OneWeb and SpaceX have committed to the FCC regulatory authorities that decommissioned satellites will decay to a lower orbit – a disposal orbit – where the satellite orbital altitude would decay due to atmospheric drag and then naturally reenter the atmosphere and burn up within one year of end-of-life.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta{H} \\,"
},
{
"math_id": 1,
"text": "\\Delta{H} = 235\\mbox{ km} + \\left ( 1000 C_R \\frac{A}{m} \\right )\\mbox{ km}"
},
{
"math_id": 2,
"text": "C_R \\,"
},
{
"math_id": 3,
"text": "\\frac{A}{m} \\,"
},
{
"math_id": 4,
"text": "\\Delta{H}"
}
] | https://en.wikipedia.org/wiki?curid=1244926 |
1244992 | Moment problem | Trying to map moments to a measure that generates them
In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure formula_0 to the sequence of moments
formula_1
More generally, one may consider
formula_2
for an arbitrary sequence of functions formula_3.
Introduction.
In the classical setting, formula_0 is a measure on the real line, and formula_4 is the sequence formula_5. In this form the question appears in probability theory, asking whether there is a probability measure having specified mean, variance and so on, and whether it is unique.
There are three named classical moment problems: the Hamburger moment problem in which the support of formula_0 is allowed to be the whole real line; the Stieltjes moment problem, for formula_6; and the Hausdorff moment problem for a bounded interval, which without loss of generality may be taken as formula_7.
The moment problem also extends to complex analysis as the trigonometric moment problem in which the Hankel matrices are replaced by Toeplitz matrices and the support of "μ" is the complex unit circle instead of the real line.
Existence.
A sequence of numbers formula_8 is the sequence of moments of a measure formula_0 if and only if a certain positivity condition is fulfilled; namely, the Hankel matrices formula_9,
formula_10
should be positive semi-definite. This is because a positive semi-definite Hankel matrix corresponds to a linear functional formula_11 such that formula_12 and formula_13 (non-negative on sums of squares of polynomials). Assume formula_11 can be extended to formula_14. In the univariate case, a non-negative polynomial can always be written as a sum of squares, so the linear functional formula_11 is non-negative on all non-negative polynomials in the univariate case. By Haviland's theorem, the linear functional then has a measure representation, that is, formula_15. A condition of similar form is necessary and sufficient for the existence of a measure formula_0 supported on a given interval formula_16.
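As a small numerical sketch of this positivity condition (not part of the classical treatment; the example uses the well-known moments of the standard normal distribution purely for illustration), one can build the Hankel matrices from a moment sequence and test them for positive semi-definiteness:

```python
import numpy as np

# Sketch: test the positivity condition by building the Hankel matrices
# (H_n)_{ij} = m_{i+j} from a moment sequence and checking that each is
# positive semi-definite.  Example sequence: moments of the standard normal
# distribution, m_n = 0 for odd n and (n-1)!! for even n.
moments = [1, 0, 1, 0, 3, 0, 15, 0, 105]   # m_0 ... m_8

def hankel_is_psd(m, n):
    """Check that the (n+1)x(n+1) Hankel matrix of the sequence m is PSD."""
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    return bool(np.all(np.linalg.eigvalsh(H) >= -1e-12))  # tolerance for round-off

print(all(hankel_is_psd(moments, n) for n in range(5)))   # True
```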
One way to prove these results is to consider the linear functional formula_17 that sends a polynomial
formula_18
to
formula_19
If formula_20 are the moments of some measure formula_0 supported on formula_16, then evidently formula_17 is non-negative on every polynomial that is non-negative on formula_16 (condition (1)).
Vice versa, if (1) holds, one can apply the M. Riesz extension theorem and extend formula_17 to a functional on the space of continuous functions with compact support formula_21, so that it is non-negative on every non-negative function in that space (condition (2)).
By the Riesz representation theorem, (2) holds iff there exists a measure formula_0 supported on formula_16, such that
formula_22
for every formula_23.
Thus the existence of the measure formula_0 is equivalent to (1). Using a representation theorem for positive polynomials on formula_16, one can reformulate (1) as a condition on Hankel matrices.
Uniqueness (or determinacy).
The uniqueness of formula_0 in the Hausdorff moment problem follows from the Weierstrass approximation theorem, which states that polynomials are dense under the uniform norm in the space of continuous functions on formula_7. For the problem on an infinite interval, uniqueness is a more delicate question. There are distributions, such as the log-normal distribution, which have finite moments for all positive integers and yet are not determined by them: other, distinct distributions have exactly the same moments.
Formal solution.
When the solution exists, it can be formally written using derivatives of the Dirac delta function as
formula_24.
The expression can be derived by taking the inverse Fourier transform of its characteristic function.
Variations.
An important variation is the truncated moment problem, which studies the properties of measures with fixed first "k" moments (for a finite "k"). Results on the truncated moment problem have numerous applications to extremal problems, optimisation and limit theorems in probability theory.
Probability.
The moment problem has applications to probability theory. The following is commonly used:
<templatestyles src="Math_theorem/styles.css" />
Theorem (Fréchet-Shohat) — If formula_25 is a determinate measure (i.e. its moments determine it uniquely), and the measures formula_26 are such that formula_27 then formula_28 in distribution.
By checking Carleman's condition, we know that the standard normal distribution is a determinate measure, thus we have the following form of the central limit theorem:
<templatestyles src="Math_theorem/styles.css" />
Corollary — If a sequence of probability distributions formula_29 satisfy formula_30 then formula_29 converges to formula_31 in distribution.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "m_n = \\int_{-\\infty}^\\infty x^n \\,d\\mu(x)\\,."
},
{
"math_id": 2,
"text": "m_n = \\int_{-\\infty}^\\infty M_n(x) \\,d\\mu(x)\\,."
},
{
"math_id": 3,
"text": "M_n"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\{x^n : n=1,2,\\dotsc\\}"
},
{
"math_id": 6,
"text": "[0,\\infty)"
},
{
"math_id": 7,
"text": "[0,1]"
},
{
"math_id": 8,
"text": "m_n"
},
{
"math_id": 9,
"text": "H_n"
},
{
"math_id": 10,
"text": "(H_n)_{ij} = m_{i+j}\\,,"
},
{
"math_id": 11,
"text": " \\Lambda"
},
{
"math_id": 12,
"text": "\\Lambda(x^n) = m_n"
},
{
"math_id": 13,
"text": " \\Lambda(f^2) \\geq 0 "
},
{
"math_id": 14,
"text": " \\mathbb{R}[x]^*"
},
{
"math_id": 15,
"text": " \\Lambda(x^n) = \\int_{-\\infty}^{\\infty} x^n d \\mu"
},
{
"math_id": 16,
"text": "[a,b]"
},
{
"math_id": 17,
"text": "\\varphi"
},
{
"math_id": 18,
"text": "P(x) = \\sum_k a_k x^k "
},
{
"math_id": 19,
"text": "\\sum_k a_k m_k."
},
{
"math_id": 20,
"text": "m_k"
},
{
"math_id": 21,
"text": "C_c([a,b])"
},
{
"math_id": 22,
"text": " \\varphi(f) = \\int f \\, d\\mu"
},
{
"math_id": 23,
"text": "f \\in C_c([a,b])"
},
{
"math_id": 24,
"text": " d\\mu(x) = \\rho(x) dx, \\quad \\rho(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n}{n!}\\delta^{(n)}(x)m_n\n"
},
{
"math_id": 25,
"text": "\\mu"
},
{
"math_id": 26,
"text": "\\mu_n"
},
{
"math_id": 27,
"text": "\n \\forall k \\geq 0 \\quad \\lim _{n \\rightarrow \\infty} m_k\\left[\\mu_n\\right]=m_k[\\mu],\n "
},
{
"math_id": 28,
"text": "\\mu_n \\rightarrow \\mu"
},
{
"math_id": 29,
"text": "\\nu_n"
},
{
"math_id": 30,
"text": "m_{2k}[\\nu_n] \\to \\frac{(2k)!}{2^k k!}; \\quad m_{2k+1}[\\nu_n] \\to 0"
},
{
"math_id": 31,
"text": "N(0, 1)"
}
] | https://en.wikipedia.org/wiki?curid=1244992 |
12450 | Gödel's completeness theorem | Fundamental theorem in mathematical logic
Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
The completeness theorem applies to any first-order theory: If "T" is such a theory, and φ is a sentence (in the same language) and every model of "T" is a model of φ, then there is a (first-order) proof of φ using the statements of "T" as axioms. One sometimes says this as "anything true in all models is provable". (This does not contradict Gödel's incompleteness theorem, which is about a formula φu that is unprovable in a certain theory "T" but true in the "standard" model of the natural numbers: φu is false in some other, "non-standard" models of "T".)
The completeness theorem makes a close link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems.
It was first proved by Kurt Gödel in 1929. It was then simplified when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953.
Preliminaries.
There are numerous deductive systems for first-order logic, including systems of natural deduction and Hilbert-style systems. Common to all deductive systems is the notion of a "formal deduction". This is a sequence (or, in some cases, a finite tree) of formulae with a specially designated "conclusion". The definition of a deduction is such that it is finite and that it is possible to verify algorithmically (by a computer, for example, or by hand) that a given sequence (or tree) of formulae is indeed a deduction.
A first-order formula is called "logically valid" if it is true in every structure for the language of the formula (i.e. for any assignment of values to the variables of the formula). To formally state, and then prove, the completeness theorem, it is necessary to also define a deductive system. A deductive system is called "complete" if every logically valid formula is the conclusion of some formal deduction, and the completeness theorem for a particular deductive system is the theorem that it is complete in this sense. Thus, in a sense, there is a different completeness theorem for each deductive system. A converse to completeness is "soundness", the fact that only logically valid formulas are provable in the deductive system.
If some specific deductive system of first-order logic is sound and complete, then it is "perfect" (a formula is provable if and only if it is logically valid), thus equivalent to any other deductive system with the same quality (any proof in one system can be converted into the other).
Statement.
We first fix a deductive system of first-order predicate calculus, choosing any of the well-known equivalent systems. Gödel's original proof assumed the Hilbert-Ackermann proof system.
Gödel's original formulation.
The completeness theorem says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula.
Thus, the deductive system is "complete" in the sense that no additional inference rules are required to prove all the logically valid formulae. A converse to completeness is "soundness", the fact that only logically valid formulae are provable in the deductive system. Together with soundness (whose verification is easy), this theorem implies that a formula is logically valid if and only if it is the conclusion of a formal deduction.
More general form.
The theorem can be expressed more generally in terms of logical consequence. We say that a sentence "s" is a "syntactic consequence" of a theory "T", denoted formula_0, if "s" is provable from "T" in our deductive system. We say that "s" is a "semantic consequence" of "T", denoted formula_1, if "s" holds in every model of "T". The completeness theorem then says that for any first-order theory "T" with a well-orderable language, and any sentence "s" in the language of "T",
<templatestyles src="Block indent/styles.css"/>if formula_1, then formula_0.
Since the converse (soundness) also holds, it follows that formula_1 if and only if formula_0, and thus that syntactic and semantic consequence are equivalent for first-order logic.
This more general theorem is used implicitly, for example, when a sentence is shown to be provable from the axioms of group theory by considering an arbitrary group and showing that the sentence is satisfied by that group.
Gödel's original formulation is deduced by taking the particular case of a theory without any axiom.
Model existence theorem.
The completeness theorem can also be understood in terms of consistency, as a consequence of Henkin's model existence theorem. We say that a theory "T" is "syntactically consistent" if there is no sentence "s" such that both "s" and its negation ¬"s" are provable from "T" in our deductive system. The model existence theorem says that for any first-order theory "T" with a well-orderable language,
<templatestyles src="Block indent/styles.css"/>if formula_2 is syntactically consistent, then formula_2 has a model.
Another version, with connections to the Löwenheim–Skolem theorem, says:
<templatestyles src="Block indent/styles.css"/>Every syntactically consistent, countable first-order theory has a finite or countable model.
Given Henkin's theorem, the completeness theorem can be proved as follows: If formula_3, then formula_4 does not have models. By the contrapositive of Henkin's theorem, then formula_4 is syntactically inconsistent. So a contradiction (formula_5) is provable from formula_4 in the deductive system. Hence formula_6, and then by the properties of the deductive system, formula_0.
As a theorem of arithmetic.
The model existence theorem and its proof can be formalized in the framework of Peano arithmetic. Precisely, we can systematically define a model of any consistent effective first-order theory "T" in Peano arithmetic by interpreting each symbol of "T" by an arithmetical formula whose free variables are the arguments of the symbol. (In many cases, we will need to assume, as a hypothesis of the construction, that "T" is consistent, since Peano arithmetic may not prove that fact.) However, the definition expressed by this formula is not recursive (but is, in general, Δ2).
Consequences.
An important consequence of the completeness theorem is that it is possible to recursively enumerate the semantic consequences of any effective first-order theory, by enumerating all the possible formal deductions from the axioms of the theory, and use this to produce an enumeration of their conclusions.
This comes in contrast with the direct meaning of the notion of semantic consequence, that quantifies over all structures in a particular language, which is clearly not a recursive definition.
Also, it makes the concept of "provability", and thus of "theorem", a clear concept that only depends on the chosen system of axioms of the theory, and not on the choice of a proof system.
Relationship to the incompleteness theorems.
Gödel's incompleteness theorems show that there are inherent limitations to what can be proven within any given first-order theory in mathematics. The "incompleteness" in their name refers to another meaning of "complete" (see model theory – Using the compactness and completeness theorems): A theory formula_2 is complete (or decidable) if every sentence formula_7 in the language of formula_2 is either provable (formula_8) or disprovable (formula_9).
The first incompleteness theorem states that any formula_2 which is consistent, effective and contains Robinson arithmetic ("Q") must be incomplete in this sense, by explicitly constructing a sentence formula_10 which is demonstrably neither provable nor disprovable within formula_2. The second incompleteness theorem extends this result by showing that formula_10 can be chosen so that it expresses the consistency of formula_2 itself.
Since formula_10 cannot be proven in formula_2, the completeness theorem implies the existence of a model of formula_2 in which formula_10 is false. In fact, formula_10 is a Π1 sentence, i.e. it states that some finitistic property is true of all natural numbers; so if it is false, then some natural number is a counterexample. If this counterexample existed within the standard natural numbers, its existence would disprove formula_10 within formula_2; but the incompleteness theorem showed this to be impossible, so the counterexample must not be a standard number, and thus any model of formula_2 in which formula_10 is false must include non-standard numbers.
In fact, the model of "any" theory containing "Q" obtained by the systematic construction of the arithmetical model existence theorem, is "always" non-standard with a non-equivalent provability predicate and a non-equivalent way to interpret its own construction, so that this construction is non-recursive (as recursive definitions would be unambiguous).
Also, if formula_2 is at least slightly stronger than "Q" (e.g. if it includes induction for bounded existential formulas), then Tennenbaum's theorem shows that it has no recursive non-standard models.
Relationship to the compactness theorem.
The completeness theorem and the compactness theorem are two cornerstones of first-order logic. While neither of these theorems can be proven in a completely effective manner, each one can be effectively obtained from the other.
The compactness theorem says that if a formula φ is a logical consequence of a (possibly infinite) set of formulas Γ then it is a logical consequence of a finite subset of Γ. This is an immediate consequence of the completeness theorem, because only a finite number of axioms from Γ can be mentioned in a formal deduction of φ, and the soundness of the deductive system then implies φ is a logical consequence of this finite set. This proof of the compactness theorem is originally due to Gödel.
Conversely, for many deductive systems, it is possible to prove the completeness theorem as an effective consequence of the compactness theorem.
The ineffectiveness of the completeness theorem can be measured along the lines of reverse mathematics. When considered over a countable language, the completeness and compactness theorems are equivalent to each other and equivalent to a weak form of choice known as weak Kőnig's lemma, with the equivalence provable in RCA0 (a second-order variant of Peano arithmetic restricted to induction over Σ01 formulas). Weak Kőnig's lemma is provable in ZF, the system of Zermelo–Fraenkel set theory without axiom of choice, and thus the completeness and compactness theorems for countable languages are provable in ZF. However, the situation is different when the language is of arbitrarily large cardinality since then, though the completeness and compactness theorems remain provably equivalent to each other in ZF, they are also provably equivalent to a weak form of the axiom of choice known as the ultrafilter lemma. In particular, no theory extending ZF can prove either the completeness or compactness theorems over arbitrary (possibly uncountable) languages without also proving the ultrafilter lemma on a set of the same cardinality.
Completeness in other logics.
The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (though does have the completeness property for Henkin semantics), and the set of logically-valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete.
Lindström's theorem states that first-order logic is the strongest (subject to certain constraints) logic satisfying both compactness and completeness.
A completeness theorem can be proved for modal logic or intuitionistic logic with respect to Kripke semantics.
Proofs.
Gödel's original proof of the theorem proceeded by reducing the problem to a special case for formulas in a certain syntactic form, and then handling this form with an "ad hoc" argument.
In modern logic texts, Gödel's completeness theorem is usually proved with Henkin's proof, rather than with Gödel's original proof. Henkin's proof directly constructs a term model for any consistent first-order theory. James Margetson (2004) developed a computerized formal proof using the Isabelle theorem prover. Other proofs are also known.
Further reading.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T\\vdash s"
},
{
"math_id": 1,
"text": "T\\models s"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "T \\models s"
},
{
"math_id": 4,
"text": "T\\cup\\lnot s"
},
{
"math_id": 5,
"text": "\\bot"
},
{
"math_id": 6,
"text": "(T\\cup\\lnot s) \\vdash \\bot"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "T\\vdash S"
},
{
"math_id": 9,
"text": "T\\vdash \\neg S"
},
{
"math_id": 10,
"text": "S_T"
}
] | https://en.wikipedia.org/wiki?curid=12450 |
1245202 | John Earman | American philosopher of physics (born 1942)
John Earman (born 1942) is an American philosopher of physics. He is an emeritus professor in the History and Philosophy of Science department at the University of Pittsburgh. He has also taught at the University of California, Los Angeles, Rockefeller University, and the University of Minnesota, and was president of the Philosophy of Science Association.
Life and career.
John Earman was born in Washington, D.C. in 1942. Earman received his PhD at Princeton University in 1968 with a dissertation on temporal asymmetry (titled "Some Aspects of Temporal Asymmetry") and it was directed by Carl Gustav Hempel and Paul Benacerraf. After holding professorships at UCLA, the Rockefeller University, and the University of Minnesota, he joined the faculty of the History and Philosophy of Science department of the University of Pittsburgh in 1985. He remained at Pittsburgh for the rest of his career.
Earman is a former president of the Philosophy of Science Association and a fellow of the American Academy of Arts and Sciences, and of the American Association for the Advancement of Sciences. He is a member of the Archive Board of the Phil-Sci Archive.
The hole argument.
Earman has notably contributed to debate about the "hole argument". The hole argument was invented for different purposes by Albert Einstein late in 1913 as part of his quest for the general theory of relativity (GTR). It was revived and reformulated in the modern context by John3 (a short form for the "three Johns": John Earman, John Stachel, and John Norton).
With the GTR, the traditional debate between absolutism and relationalism has been shifted to whether or not spacetime is a substance, since the GTR largely rules out the existence of, e.g., absolute positions. The "hole argument" offered by John Earman is a powerful argument against manifold substantialism.
This is a technical mathematical argument but can be paraphrased as follows:
Define a function formula_0 as the identity function over all elements of the manifold formula_1, except on a small neighbourhood (in the topological sense) formula_2 contained in formula_1. Over formula_2, formula_0 differs from the identity by a smooth function.
With use of this function formula_0 we can construct two mathematical models, where the second is generated by applying formula_0 to proper elements of the first, such that the two models are identical prior to the time formula_3, where formula_4 is a time function created by a foliation of spacetime, but differ after formula_3.
These considerations show that, since substantialism allows the construction of holes, the universe must, on that view, be indeterministic. This, Earman argues, is a case against substantialism, as the choice between determinism and indeterminism should be a question of physics, not of our commitment to substantialism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "t=0"
},
{
"math_id": 4,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=1245202 |
1245377 | 5α-Reductase | Enzyme family
5α-Reductases, also known as 3-oxo-5α-steroid 4-dehydrogenases, are enzymes involved in steroid metabolism. They participate in three metabolic pathways: bile acid biosynthesis, androgen and estrogen metabolism. There are three isozymes of 5α-reductase encoded by the genes SRD5A1, SRD5A2, and SRD5A3.
5α-Reductases catalyze the following generalized chemical reaction:
a 3-oxo-5α-steroid + acceptor ⇌ a 3-oxo-Δ4-steroid + reduced acceptor
Where a 3-oxo-5α-steroid and acceptor are substrates, and a corresponding 3-oxo-Δ4-steroid and the reduced acceptor are products. An instance of this generalized reaction that 5α-reductase type 2 catalyzes is:
dihydrotestosterone + NADP+ formula_0 testosterone + NADPH + H+
where dihydrotestosterone is the 3-oxo-5α-steroid, NADP+ is the acceptor and testosterone is the 3-oxo-Δ4-steroid and NADPH the reduced acceptor.
Production and activity.
The enzyme is produced in many tissues in both males and females, in the reproductive tract, testes and ovaries, skin, seminal vesicles, prostate, epididymis and many organs, including the nervous system. There are three isoenzymes of 5α-reductase: steroid 5α-reductase 1, 2, and 3 (SRD5A1, SRD5A2 and SRD5A3).
5α-Reductases act on 3-oxo (3-keto), Δ4,5 C19/C21 steroids as their substrates; "3-keto" refers to the double bond of the third carbon to oxygen. Carbons 4 and 5 also have a double bond, represented by 'Δ4,5'. The reaction involves a stereospecific and permanent break of the Δ4,5 double bond with the help of NADPH as a cofactor. A hydride anion (H−) is also placed on the α face at the fifth carbon, and a proton on the β face at carbon 4.
Distribution with age.
5α-R1 is expressed in fetal scalp and nongenital skin of the back, anywhere from 5 to 50 times less than in the adult. 5α-R2 is expressed in fetal prostates similar to adults. 5α-R1 is expressed mainly in the epithelium and 5α-R2 the stroma of the fetal prostate. Scientists looked for 5α-R2 expression in fetal liver, adrenal, testis, ovary, brain, scalp, chest, and genital skin, using immunoblotting, and were only able to find it in genital skin.
After birth, the 5α-R1 is expressed in more locations, including the liver, skin, scalp and prostate. 5α-R2 is expressed in prostate, seminal vesicles, epididymis, liver, and to a lesser extent the scalp and skin. Hepatic expression of both 5α-R1 and 2 is immediate, but disappears in the skin and scalp at month 18. Then, at puberty, only 5α-R2 is reexpressed in the skin and scalp.
5α-R1 and 5α-R2 appear to be expressed in the prostate in male fetuses and throughout postnatal life. 5α-R1 and 5α-R2 are also expressed, although to different degrees in liver, genital and nongenital skin, prostate, epididymis, seminal vesicle, testis, ovary, uterus, kidney, exocrine pancreas, and the brain.
In adulthood, 5α-R1-3 is ubiquitously expressed.
Substrates.
Specific substrates include testosterone, progesterone, androstenedione, epitestosterone, cortisol, aldosterone, and deoxycorticosterone. Outside of dihydrotestosterone, much of the physiological role of 5α-reduced steroids is unknown. Beyond reducing testosterone to dihydrotestosterone, 5alpha-reductase enzyme isoforms I and II reduce progesterone to dihydroprogesterone (DHP) and deoxycorticosterone to dihydrodeoxycorticosterone (DHDOC). In vitro and animal models suggest subsequent 3alpha-reduction of DHT, DHP and DHDOC lead to steroid metabolites with effects on cerebral function achieved by enhancing GABAergic inhibition. These neuroactive steroid derivatives enhance GABA via allosteric modulation at GABA(A) receptors and have anticonvulsant, antidepressant and anxiolytic effects, and also alter sexual and alcohol related behavior. 5α-dihydrocortisol is present in the aqueous humor of the eye, is synthesized in the lens, and might help make the aqueous humor itself. Allopregnanolone and THDOC are neurosteroids, with the latter having effects on the susceptibility of animals to seizures. In socially isolated mice, 5α-R1 is specifically down-regulated in glutamatergic pyramidal neurons that converge on the amygdala from cortical and hippocampal regions. This down-regulation may account for the appearance of behavioral disorders such as anxiety, aggression, and cognitive dysfunction. 5α-dihydroaldosterone is a potent antinatriuretic agent, although different from aldosterone. Its formation in the kidney is enhanced by restriction of dietary salt, suggesting it may help retain sodium as follows:
"<chem>{Substrate} + {NADPH} + H+ -> {5\alpha-substrate} + NADP+</chem>"
5α-DHP is a major hormone in circulation of normal cycling and pregnant women.
Testosterone.
5α-Reductase is most known for converting testosterone, the male sex hormone, into the more potent dihydrotestosterone:
The major difference is the Δ4,5 double-bond on the A (leftmost) ring. The other differences between the diagrams are unrelated to structure.
List of conversions.
The following reactions are known to be catalyzed by 5α-reductase:
Structure.
5α-Reductase is a membrane-bound enzyme that catalyzes the NADPH-dependent reduction of double bonds in steroid substrates to increase potency. The crystal structure of a homolog of 5α-reductase isoenzymes 1 and 2 has been found in "Proteobacteria" (proteobacteria 5α-reductase). This exists as a monomer with a seven alpha-helix transmembrane structure housing a hydrophobic pocket that holds the cofactor NADPH and monoolein, which occupies the steroid substrate binding pocket. In insect cells, monoolein is not found but is replaced by other androgens and inhibitors. The integral seven transmembrane topology is likely conserved across species, with the N terminus in the endoplasmic reticulum lumen and the C terminus facing the cytosol. High conformational dynamics of the cytosolic region likely regulate NADPH/NADP+ exchange. Sequence conservation across known crystal structures has corroborated high conservation in enzyme structure.
Inhibition.
The mechanism of 5α reductase inhibition is complex, but involves the binding of NADPH to the enzyme followed by the substrate. 5α-Reductase inhibitor drugs are used in benign prostatic hyperplasia, prostate cancer, pattern hair loss (androgenetic alopecia), and hormone replacement therapy for transgender women.
Inhibition of the enzyme can be classified into two categories: steroidal, which are irreversible, and nonsteroidal. There are more steroidal inhibitors, with examples including finasteride (MK-906), dutasteride (GG745), 4-MA, turosteride, MK-386, MK-434, and MK-963. Researchers have pursued synthesis of nonsteroidals to inhibit 5α-reductase due to the undesired side effects of steroidals. The most potent and selective inhibitors of 5α-R1 are found in this class, and include benzoquinolones, nonsteroidal aryl acids, butanoic acid derivatives, and more recognizably, polyunsaturated fatty acids (especially linolenic acid), zinc, and green tea. Riboflavin was also identified as a 5α-reductase inhibitor.
Additionally, it has been claimed that alfatradiol works through this mechanism of activity (5α-reductase), as well as the Ganoderic acids in lingzhi mushroom, and the Saw Palmetto.
Inhibition of 5α-reductase results in decreased conversion of testosterone to DHT, leading to increased testosterone and estradiol. Other enzymes compensate to a degree for the absent conversion, specifically with local expression at the skin of reductive 17β-hydroxysteroid dehydrogenase, oxidative 3α-hydroxysteroid dehydrogenase, and 3β-hydroxysteroid dehydrogenase enzymes.
Gynecomastia, erectile dysfunction, impaired cognitive function, fatigue, hypoglycemia, impaired liver function, constipation, and depression are only a few of the possible side-effects of 5α-reductase inhibition. Long-term side effects that continued even after discontinuation of the drug have been reported.
Finasteride.
Finasteride inhibits two 5α-reductase isoenzymes (II and III), while dutasteride inhibits all three. Finasteride potently inhibits 5α-R2 at a mean inhibitory concentration IC50 of 69 nM, but is less effective with 5α-R1 with an IC50 of 360 nM. Finasteride decreases mean serum level of DHT by 71% after 6 months, and was shown in vitro to inhibit 5α-R3 at a similar potency to 5α-R2 in transfected cell lines.
Dutasteride.
Dutasteride inhibits 5α-reductase isoenzymes type 1 and 2 better than finasteride, leading to a more complete reduction in DHT at 24 weeks (94.7% versus 70.8%). It also reduces intraprostatic DHT 97% in men with prostate cancer at 5 mg/day over three months. A second study with 3.5 mg/day for 4 months decreased intraprostatic DHT even further by 99%. The suppression of DHT in vivo, and the report that dutasteride inhibits 5α-R3 in vitro suggest that dutasteride may be a triple 5α reductase inhibitor.
Congenital deficiencies.
5α-Reductase 1.
5α-Reductase type 1 inactivated male mice have reduced bone mass and forelimb muscle grip strength, which has been proposed to be due to lack of 5α-reductase type 1 expression in bone and muscle. In 5 alpha reductase type 2 deficient males, the type 1 isoenzyme is thought to be responsible for their virilization at puberty.
5α-Reductase 2.
Impaired 5α-reductase 2 activity can result from mutations in the underlying SRD5A2 gene. The condition, known as 5α-reductase 2 deficiency, has a range of presentations as atypical appearances of the external genitalia in males. This is because 5α-reductase 2 catalyzes the transformation of testosterone to the potent androgen dihydrotestosterone, which is required for the proper masculinization of male genitalia.
5α-Reductase 3.
When small interfering RNA is used to knock down the expression of 5α-R3 isozyme in cell lines, there is decreased cell growth, viability, and a decrease in DHT/T ratios. It has also shown the ability to reduce testosterone, androstenedione, and progesterone in androgen stimulated prostate cell lines by adenovirus vectors.
Congenital deficiency of 5α-R3 at the gene SRD5A3 has been linked to a rare, autosomal recessive condition in which patients are born with severe intellectual dysfunction and cerebellar and ocular defects. The presumed defect is in the reduction of the terminal bond of polyprenol to dolichol, an important step in N-glycosylation of proteins, which in turn is important for proper folding of asparagine residues on nascent proteins in the endoplasmic reticulum.
Nervous system.
Affective disorders.
Isolation rearing has been shown to lower protein expression of 5α-reductase isoenzymes 1 and 2 in cortical and subcortical brain regions of rat models. However, the amount of 5α-reduced metabolite remained unaffected. This means isolation rearing likely leads to changes in the expression and activity of 5α-reductase in the brain, leading to dysregulation of dopamine neurotransmission, resulting in early chronic stress. Treatment with finasteride, a 5α-reductase inhibitor, has been shown to mimic the effects of SSRIs in causing sexual dysfunction. Research has shown that 5α-reductase is the rate-limiting enzyme in neurosteroid synthesis, specifically in the conversion of progesterone to allopregnanolone; low levels of allopregnanolone have been tied to depression, anxiety and schizophrenia. Sleep deprivation can enhance 5α-reductase expression and activity in the prefrontal cortex, leading to mania-related symptoms in rats. It is also contested whether the use of 5α-reductase inhibitors is associated with suicidal ideation and depression in patient populations who use them for benign prostatic hyperplasia. These symptoms have been found during active use of inhibitors and in immediate follow-up. However, it is unknown if these symptoms arise naturally from benign prostatic hyperplasia.
Hypothalamic–pituitary–adrenal axis dysfunction.
An alternative mechanism of cortisol regulation operates via 5α-reductase, which catalyzes an A-ring reduction of cortisol, metabolizing the compound. Types 1 and 2 of 5α-reductase are the principal enzymes involved in cortisol clearance through the liver. Excess cortisol has been tied to non-alcoholic fatty liver disease (NAFLD), but in-vitro studies have found that an overexpression of 5α-reductase type 2 can suppress lipogenesis. The key role of 5α-reductase in cortisol breakdown and fat buildup has elucidated some of the side effects of 5α-reductase inhibitors. In randomized studies on human volunteers it was found that 5α-reductase inhibition through the use of dutasteride and finasteride can lead to hepatic lipid accumulation in men. In critical illness, overstimulation of cortisol as part of a stress response can lead to decreased clearance of cortisol through the liver via 5α-reductase and the kidneys via 11β-hydroxysteroid dehydrogenase type 2; long-term elevation of cortisol can lead to Cushing's syndrome.
Nomenclature.
This enzyme belongs to the family of oxidoreductases, to be specific, those acting on the CH-CH group of donor with other acceptors. The systematic name of this enzyme class is "3-oxo-5α-steroid:acceptor Δ4-oxidoreductase". Other names in common use include:
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=1245377 |
12457310 | Bergman metric | In differential geometry, the Bergman metric is a Hermitian metric that can be defined on certain types of complex manifold. It is so called because it is derived from the Bergman kernel, both of which are named after Stefan Bergman.
Definition.
Let formula_0 be a domain and let formula_1 be the Bergman kernel
on "G". We define a Hermitian metric on the tangent bundle formula_2 by
formula_3
for formula_4. Then the length of a tangent vector formula_5 is
given by
formula_6
This metric is called the Bergman metric on "G".
The length of a (piecewise) "C"1 curve formula_7 is
then computed as
formula_8
The distance formula_9 of two points formula_10 is then defined as
formula_11
The distance "dG" is called the "Bergman distance".
The Bergman metric is in fact a positive definite matrix at each point if "G" is a bounded domain. More importantly, the distance "dG" is invariant under
biholomorphic mappings of "G" to another domain formula_12. That is if "f"
is a biholomorphism of "G" and formula_12, then formula_13.
References.
"This article incorporates material from Bergman metric on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "G \\subset {\\mathbb{C}}^n"
},
{
"math_id": 1,
"text": "K(z,w)"
},
{
"math_id": 2,
"text": "T_z{\\mathbb{C}}^n"
},
{
"math_id": 3,
"text": "\ng_{ij} (z)\n:=\n\\frac{\\partial^2}{\\partial z_i\\, \\partial \\bar{z}_j}\n\\log K(z,z) ,\n"
},
{
"math_id": 4,
"text": "z \\in G"
},
{
"math_id": 5,
"text": "\\xi \\in T_z{\\mathbb{C}}^n"
},
{
"math_id": 6,
"text": "\\left\\vert \\xi \\right\\vert_{B,z}:=\\sqrt{\\sum_{i,j=1}^n g_{ij}(z) \\xi_i \\bar{\\xi}_j }."
},
{
"math_id": 7,
"text": "\\gamma \\colon [0,1] \\to {\\mathbb{C}}^n"
},
{
"math_id": 8,
"text": "\n\\ell (\\gamma) =\n\\int_0^1 \\left\\vert \\frac{\\partial \\gamma}{\\partial t}(t) \\right\\vert_{B,\\gamma(t)} dt .\n"
},
{
"math_id": 9,
"text": "d_G(p,q)"
},
{
"math_id": 10,
"text": "p,q \\in G"
},
{
"math_id": 11,
"text": "\nd_G(p,q):=\n\\inf \\{ \\ell (\\gamma) \\mid \\text{ all piecewise }C^1\\text{ curves }\\gamma\\text{ such that }\\gamma(0)=p\\text{ and }\\gamma(1)=q \\} .\n"
},
{
"math_id": 12,
"text": "G'"
},
{
"math_id": 13,
"text": "d_G(p,q) = d_{G'}(f(p),f(q))"
}
] | https://en.wikipedia.org/wiki?curid=12457310 |
12458499 | Liouville's theorem (conformal mappings) | Theorem limiting types of conformal mappings in Euclidean space of dimension > 2
In mathematics, Liouville's theorem, proved by Joseph Liouville in 1850, is a rigidity theorem about conformal mappings in Euclidean space. It states that every smooth conformal mapping on a domain of R"n", where "n" > 2, can be expressed as a composition of translations, similarities, orthogonal transformations and inversions: they are Möbius transformations (in "n" dimensions). This theorem severely limits the variety of possible conformal mappings in R3 and higher-dimensional spaces. By contrast, conformal mappings in R2 can be much more complicated – for example, all simply connected planar domains are conformally equivalent, by the Riemann mapping theorem.
Generalizations of the theorem hold for transformations that are only weakly differentiable . The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping "f" : Ω → R"n" to be conformal:
formula_0
where "Df" is the Jacobian derivative, "T" is the matrix transpose, and "I" is the identity matrix. A weak solution of this system is defined to be an element "f" of the Sobolev space "W"(Ω, R"n") with non-negative Jacobian determinant almost everywhere, such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form
formula_1
where "a", "b" are vectors in R"n", "α" is a scalar, "A" is a rotation matrix, "ε" = 0 or 2, and the matrix in parentheses is "I" or a Householder matrix (so, orthogonal). Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space "W"1,"n", since "f" ∈ "W"("Ω", R"n") then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal however: in even dimensions "n" = 2"k", the theorem also holds for solutions that are only assumed to be in the space "W", and this result is sharp in the sense that there are weak solutions of the Cauchy–Riemann system in "W"1,"p" for any "p" < "k" that are not Möbius transformations. In odd dimensions, it is known that "W"1,"n" is not optimal, but a sharp result is not known.
Similar rigidity results (in the smooth case) hold on any conformal manifold. The group of conformal isometries of an "n"-dimensional conformal Riemannian manifold always has dimension that cannot exceed that of the full conformal group SO("n" + 1, 1). Equality of the two dimensions holds exactly when the conformal manifold is isometric with the "n"-sphere or projective space. Local versions of the result also hold: The Lie algebra of conformal Killing fields in an open set has dimension less than or equal to that of the conformal group, with equality holding if and only if the open set is locally conformally flat.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Df^\\mathrm{T} Df = \\left|\\det Df\\right|^{2/n} I"
},
{
"math_id": 1,
"text": "f(x) = b + \\frac{\\alpha A (x-a)}{|x-a|^\\varepsilon},\\qquad\nDf = \\frac{\\alpha A}{|x-a|^\\varepsilon}\\left(I-\\varepsilon\\frac{x-a}{|x-a|}\\frac{(x-a)^\\mathrm{T}}{|x-a|}\\right),"
}
] | https://en.wikipedia.org/wiki?curid=12458499 |
12461 | Gradient | Multivariate derivative (mathematics)
In vector calculus, the gradient of a scalar-valued differentiable function formula_0 of several variables is the vector field (or vector-valued function) formula_1 whose value at a point formula_2 gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of formula_0. If the gradient of a function is non-zero at a point formula_2, the direction of the gradient is the direction in which the function increases most quickly from formula_2, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function formula_3 may be defined by:
formula_4
where formula_5 is the total infinitesimal change in formula_0 for an infinitesimal displacement formula_6, and is seen to be maximal when formula_6 is in the direction of the gradient formula_1. The nabla symbol formula_7, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.
When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of formula_0 at formula_2. That is, for formula_8, its gradient formula_9 is defined at the point formula_10 in "n"-dimensional space as the vector
formula_12
Note that the above definition of the gradient applies only if the function formula_0 is differentiable at formula_2. There can be functions for which partial derivatives exist in every direction but which fail to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account.
For example, the function formula_13 (with formula_14 at the origin) is not differentiable at the origin, as it does not have a well-defined tangent plane despite having well-defined partial derivatives in every direction at the origin. In this particular example, under rotation of the x-y coordinate system, the above formula for the gradient fails to transform like a vector (the gradient becomes dependent on the choice of basis for the coordinate system) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions, where the formula for the gradient holds, it can be shown to always transform as a vector under a change of basis, so as to always point towards the fastest increase.
The gradient is dual to the total derivative formula_5: the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a "co"tangent vector – a linear functional on vectors. They are related in that the dot product of the gradient of formula_0 at a point formula_2 with another tangent vector formula_16 equals the directional derivative of formula_0 at formula_2 of the function along formula_16; that is, formula_17.
The gradient admits multiple generalizations to more general functions on manifolds; see .
Motivation.
Consider a room where the temperature is given by a scalar field, "T", so at each point ("x", "y", "z") the temperature is "T"("x", "y", "z"), independent of time. At each point in the room, the gradient of "T" at that point will show the direction in which the temperature rises most quickly, moving away from ("x", "y", "z"). The magnitude of the gradient will determine how fast the temperature rises in that direction.
Consider a surface whose height above sea level at point ("x", "y") is "H"("x", "y"). The gradient of "H" at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.
The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, as the dot product measures how much the unit vector along the road aligns with the steepest slope, which is 40% times the cosine of 60°, or 20%.
More generally, if the hill height function "H" is differentiable, then the gradient of "H" dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of "H" along the unit vector.
Notation.
The gradient of a function formula_0 at point formula_18 is usually written as formula_19. It may also be denoted by any of the following:
Definition.
The gradient (or gradient vector field) of a scalar function "f"("x"1, "x"2, "x"3, …, "xn") is denoted ∇"f", where ∇ (nabla) denotes the vector differential operator, del. The notation grad "f" is also commonly used to represent the gradient. The gradient of "f" is defined as the unique vector field whose dot product with any vector v at each point "x" is the directional derivative of "f" along v. That is,
formula_24
where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is "dual" to the gradient; see relationship with derivative.
When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).
The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
Cartesian coordinates.
In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by
formula_25
where i, j, k are the standard unit vectors in the directions of the "x", "y" and "z" coordinates, respectively. For example, the gradient of the function
formula_26
is
formula_27
or
formula_28
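The worked gradient above can be checked numerically. The following sketch compares the analytic gradient of the example function with central finite differences; the evaluation point is an arbitrary choice made for illustration.

```python
import numpy as np

def f(x, y, z):
    return 2*x + 3*y**2 - np.sin(z)

def grad_f_analytic(x, y, z):
    # the gradient computed above: (2, 6y, -cos z)
    return np.array([2.0, 6*y, -np.cos(z)])

def grad_f_numeric(x, y, z, h=1e-6):
    # central finite differences, one coordinate at a time
    return np.array([
        (f(x + h, y, z) - f(x - h, y, z)) / (2*h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2*h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2*h),
    ])

point = (1.0, 2.0, 0.5)
print(grad_f_analytic(*point))   # [ 2.    12.    -0.8776]
print(grad_f_numeric(*point))    # agrees to roughly 1e-9
```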
In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
Cylindrical and spherical coordinates.
In cylindrical coordinates with a Euclidean metric, the gradient is given by:
formula_29
where "ρ" is the axial distance, "φ" is the azimuthal or azimuth angle, "z" is the axial coordinate, and e"ρ", e"φ" and e"z" are unit vectors pointing along the coordinate directions.
In spherical coordinates, the gradient is given by:
formula_30
where "r" is the radial distance, "φ" is the azimuthal angle and "θ" is the polar angle, and e"r", e"θ" and e"φ" are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).
For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
General coordinates.
We consider general coordinates, which we write as "x"1, …, "x""i", …, "x""n", where n is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so "x"2 refers to the second component—not the quantity "x" squared. The index variable "i" refers to an arbitrary element "x""i". Using Einstein notation, the gradient can then be written as:
formula_31 (Note that its dual is formula_32),
where formula_33 and formula_34 refer to the unnormalized local covariant and contravariant bases respectively, formula_35 is the inverse metric tensor, and the Einstein summation convention implies summation over "i" and "j".
If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as formula_36 and formula_37, using the scale factors (also known as Lamé coefficients) formula_38 :
formula_39 (and formula_40),
where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, formula_41, formula_42, and formula_43 are neither contravariant nor covariant.
The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
Relationship with derivative.
Relationship with total derivative.
The gradient is closely related to the total derivative (total differential) formula_5: they are transpose (dual) to each other. Using the convention that vectors in formula_11 are represented by column vectors, and that covectors (linear maps formula_15) are represented by row vectors, the gradient formula_1 and the derivative formula_5 are expressed as a column and row vector, respectively, with the same components, but transpose of each other:
formula_44
formula_45
While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, formula_46, while the derivative is a map from the tangent space to the real numbers, formula_47. The tangent spaces at each point of formula_11 can be "naturally" identified with the vector space formula_11 itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space formula_48 of covectors; thus the value of the gradient at a point can be thought of as a vector in the original formula_11, not just as a tangent vector.
Computationally, given a tangent vector, the vector can be "multiplied" by the derivative (as matrices), which is equal to taking the dot product with the gradient:
formula_49
Differential or (exterior) derivative.
The best linear approximation to a differentiable function
formula_50
at a point formula_51 in formula_11 is a linear map from formula_11 to formula_52 which is often denoted by formula_53 or formula_54 and called the differential or total derivative of formula_0 at formula_51. The function formula_5, which maps formula_51 to formula_53, is called the total differential or exterior derivative of formula_0 and is an example of a differential 1-form.
Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.
The gradient is related to the differential by the formula
formula_55
for any formula_56, where formula_57 is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.
If formula_11 is viewed as the space of (dimension formula_58) column vectors (of real numbers), then one can regard formula_5 as the row vector with components
formula_59
so that formula_60 is given by matrix multiplication. Assuming the standard Euclidean metric on formula_11, the gradient is then the corresponding column vector, that is,
formula_61
Linear approximation to a function.
The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function formula_0 from the Euclidean space formula_11 to formula_52 at any particular point formula_62 in formula_11 characterizes the best linear approximation to formula_0 at formula_62. The approximation is as follows:
formula_63
for formula_51 close to formula_62, where formula_64 is the gradient of formula_0 computed at formula_62, and the dot denotes the dot product on formula_11. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of formula_0 at formula_62.
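A small numerical sketch makes the quality of this approximation visible. The function, the base point and the nearby point below are arbitrary illustrative choices.

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.sin(x1) * np.exp(x2)

def grad_f(x):
    x1, x2 = x
    return np.array([np.cos(x1) * np.exp(x2), np.sin(x1) * np.exp(x2)])

x0 = np.array([0.3, 0.1])            # base point
x  = np.array([0.32, 0.13])          # a nearby point

exact  = f(x)
linear = f(x0) + grad_f(x0) @ (x - x0)
print(exact, linear)                 # differ only by the (quadratic) remainder term
```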
Relationship with <templatestyles src="Template:Visible anchor/styles.css" />Fréchet derivative.
Let "U" be an open set in R"n". If the function "f" : "U" → R is differentiable, then the differential of "f" is the Fréchet derivative of "f". Thus ∇"f" is a function from "U" to the space R"n" such that
formula_65
where · is the dot product.
As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:
The gradient is linear in the sense that if "f" and "g" are two real-valued functions differentiable at the point "a" ∈ R"n", and α and β are two constants, then "αf" + "βg" is differentiable at "a", and moreover formula_66
If "f" and "g" are real-valued functions differentiable at a point "a" ∈ R"n", then the product rule asserts that the product "fg" is differentiable at "a", and formula_67
Suppose that "f" : "A" → R is a real-valued function defined on a subset "A" of R"n", and that "f" is differentiable at a point "a". There are two forms of the chain rule applying to the gradient. First, suppose that the function "g" is a parametric curve; that is, a function "g" : "I" → R"n" maps a subset "I" ⊂ R into R"n". If "g" is differentiable at a point "c" ∈ "I" such that "g"("c")
"a", then formula_68 where ∘ is the composition operator: ("f" ∘ "g")("x") = "f"("g"("x")).
More generally, if instead "I" ⊂ R"k", then the following holds:
formula_69
where ("Dg")T denotes the transpose Jacobian matrix.
For the second form of the chain rule, suppose that "h" : "I" → R is a real valued function on a subset "I" of R, and that "h" is differentiable at the point "f"("a") ∈ "I". Then
formula_70
Further properties and applications.
Level sets.
A level surface, or isosurface, is the set of all points where some function has a given value.
If "f" is differentiable, then the dot product (∇"f" )"x" ⋅ "v" of the gradient at a point "x" with a vector "v" gives the directional derivative of "f" at "x" in the direction "v". It follows that in this case the gradient of "f" is orthogonal to the level sets of "f". For example, a level surface in three-dimensional space is defined by an equation of the form "F"("x", "y", "z") = "c". The gradient of "F" is then normal to the surface.
More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form "F"("P") = 0 such that "dF" is nowhere zero. The gradient of "F" is then normal to the hypersurface.
Similarly, an affine algebraic hypersurface may be defined by an equation "F"("x"1, ..., "x""n") = 0, where "F" is a polynomial. The gradient of "F" is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
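A concrete check of this orthogonality, using the unit sphere as the level surface; the function, the point on the sphere and the two tangent directions below are illustrative choices.

```python
import numpy as np

def F(p):
    return p @ p                 # F(x, y, z) = x^2 + y^2 + z^2

def grad_F(p):
    return 2 * p                 # (2x, 2y, 2z)

p = np.array([0.6, 0.0, 0.8])    # a point on the level surface F = 1 (unit sphere)

# Two directions tangent to the sphere at p (both orthogonal to the radius):
t1 = np.array([0.8, 0.0, -0.6])
t2 = np.array([0.0, 1.0, 0.0])

print(grad_F(p) @ t1, grad_F(p) @ t2)   # both 0: the gradient is normal to the level set
```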
Conservative vector fields and the gradient theorem.
The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
Gradient is direction of steepest ascent.
The gradient of a function formula_8 at point "x" is also the direction of its steepest ascent, i.e. it maximizes its directional derivative:
Let formula_71 be an arbitrary unit vector. With the directional derivative defined as
formula_72
we get, by replacing the function formula_73 with its Taylor series,
formula_74
where formula_75 denotes higher order terms in formula_76.
Dividing by formula_77 and taking the limit yields a term that is bounded from above by the Cauchy–Schwarz inequality
formula_78
Choosing formula_79 maximizes the directional derivative, and equals the upper bound
formula_80
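The claim that the gradient direction maximizes the directional derivative can also be checked empirically by sampling many unit vectors; the function and evaluation point below are arbitrary illustrative choices.

```python
import numpy as np

def grad_f(x):
    # gradient of the illustrative function f(x1, x2) = x1^2 + 3*x1*x2
    return np.array([2*x[0] + 3*x[1], 3*x[0]])

x = np.array([1.0, 2.0])
g = grad_f(x)                                    # [8, 3]

rng = np.random.default_rng(0)
best = -np.inf
for _ in range(10_000):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)                       # random unit vector
    best = max(best, g @ v)                      # directional derivative along v

print(best, np.linalg.norm(g))                   # best approaches |grad f|, attained at v = g/|g|
```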
Generalizations.
Jacobian.
The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative.
Suppose f : R"n" → R"m" is a function such that each of its first-order partial derivatives exist on ℝ"n". Then the Jacobian matrix of f is defined to be an "m"×"n" matrix, denoted by formula_81 or simply formula_82. The ("i","j")th entry is formula_83. Explicitly
formula_84
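A finite-difference sketch of the Jacobian for a small vector-valued function; the function and evaluation point are illustrative assumptions, not taken from the text.

```python
import numpy as np

def f(x):
    # an illustrative map f : R^2 -> R^3
    x1, x2 = x
    return np.array([x1 * x2, np.sin(x1), x1 + x2**2])

def jacobian(f, x, h=1e-6):
    """Finite-difference Jacobian: row i is the (transposed) gradient of f_i."""
    x = np.asarray(x, dtype=float)
    m = f(x).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
    return J

print(jacobian(f, [1.0, 2.0]))
# approximately [[2, 1], [cos(1), 0], [1, 4]]
```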
Gradient of a vector field.
Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity.
In rectangular coordinates, the gradient of a vector field f = ( "f"1, "f"2, "f"3) is defined by:
formula_85
(where the Einstein summation notation is used and the tensor product of the vectors e"i" and e"k" is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
formula_86
In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
formula_87
where "g""jk" are the components of the inverse metric tensor and the e"i" are the coordinate basis vectors.
Expressed more invariantly, the gradient of a vector field f can be defined by the Levi-Civita connection and metric tensor:
formula_88
where ∇"c" is the connection.
Riemannian manifolds.
For any smooth function f on a Riemannian manifold ("M", "g"), the gradient of "f" is the vector field ∇"f" such that for any vector field "X",
formula_89
that is,
formula_90
where "g""x"( , ) denotes the inner product of tangent vectors at "x" defined by the metric "g" and ∂"X" "f" is the function that takes any point "x" ∈ "M" to the directional derivative of "f" in the direction "X", evaluated at "x". In other words, in a coordinate chart "φ" from an open subset of "M" to an open subset of R"n", (∂"X" "f" )("x") is given by:
formula_91
where "X""j" denotes the "j"th component of "X" in this coordinate chart.
So, the local form of the gradient takes the form:
formula_92
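The local formula above can be made concrete with a simple non-Cartesian example. The sketch below uses plane polar coordinates, whose metric is diag(1, r^2), as an assumed illustrative chart, and verifies the defining property g(∇f, X) = ∂X f symbolically.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Plane polar coordinates: metric g = diag(1, r^2), inverse metric g^{ik} = diag(1, 1/r^2)
g_mat = sp.diag(1, r**2)
g_inv = sp.diag(1, 1 / r**2)

f = r**2 * sp.sin(theta)                        # an illustrative scalar function
df = sp.Matrix([sp.diff(f, r), sp.diff(f, theta)])

grad_f = g_inv * df                             # components g^{ik} * df/dx^k
print(grad_f.T)                                 # [2*r*sin(theta), cos(theta)]

# Check g(grad f, X) = X^k * df/dx^k for an arbitrary vector field X:
X = sp.Matrix([r * theta, sp.cos(r)])
print(sp.simplify((grad_f.T * g_mat * X)[0] - (df.T * X)[0]))   # 0
```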
Generalizing the case "M" = R"n", the gradient of a function is related to its exterior derivative, since
formula_93
More precisely, the gradient ∇"f" is the vector field associated to the differential 1-form "df" using the musical isomorphism
formula_94
(called "sharp") defined by the metric "g". The relation between the exterior derivative and the gradient of a function on R"n" is a special case of this in which the metric is the flat metric given by the dot product.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\nabla f"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "f(\\mathbf{r})"
},
{
"math_id": 4,
"text": "df=\\nabla f \\cdot d\\mathbf{r}"
},
{
"math_id": 5,
"text": "df"
},
{
"math_id": 6,
"text": "d\\mathbf{r}"
},
{
"math_id": 7,
"text": "\\nabla"
},
{
"math_id": 8,
"text": "f \\colon \\R^n \\to \\R"
},
{
"math_id": 9,
"text": "\\nabla f \\colon \\R^n \\to \\R^n"
},
{
"math_id": 10,
"text": "p = (x_1,\\ldots,x_n)"
},
{
"math_id": 11,
"text": "\\R^n"
},
{
"math_id": 12,
"text": "\\nabla f(p) = \\begin{bmatrix}\n \\frac{\\partial f}{\\partial x_1}(p) \\\\\n \\vdots \\\\\n \\frac{\\partial f}{\\partial x_n}(p)\n\\end{bmatrix}."
},
{
"math_id": 13,
"text": "f(x,y)=\\frac {x^2 y}{x^2+y^2}"
},
{
"math_id": 14,
"text": "f(0,0)=0"
},
{
"math_id": 15,
"text": "\\R^n \\to \\R"
},
{
"math_id": 16,
"text": "\\mathbf{v}"
},
{
"math_id": 17,
"text": "\\nabla f(p) \\cdot \\mathbf v = \\frac{\\partial f}{\\partial\\mathbf{v}}(p) = df_{p}(\\mathbf{v}) "
},
{
"math_id": 18,
"text": "a"
},
{
"math_id": 19,
"text": "\\nabla f (a)"
},
{
"math_id": 20,
"text": "\\vec{\\nabla} f (a)"
},
{
"math_id": 21,
"text": "\\operatorname{grad} f"
},
{
"math_id": 22,
"text": "\\partial_i f"
},
{
"math_id": 23,
"text": "f_{i}"
},
{
"math_id": 24,
"text": "\\big(\\nabla f(x)\\big)\\cdot \\mathbf{v} = D_{\\mathbf v}f(x)"
},
{
"math_id": 25,
"text": "\\nabla f = \\frac{\\partial f}{\\partial x} \\mathbf{i} + \\frac{\\partial f}{\\partial y} \\mathbf{j} + \\frac{\\partial f}{\\partial z} \\mathbf{k},"
},
{
"math_id": 26,
"text": "f(x,y,z)= 2x+3y^2-\\sin(z)"
},
{
"math_id": 27,
"text": "\\nabla f(x, y, z) = 2\\mathbf{i}+ 6y\\mathbf{j} -\\cos(z)\\mathbf{k}."
},
{
"math_id": 28,
"text": "\\nabla f(x, y, z) = \n\\begin{bmatrix}\n 2 \\\\\n 6y \\\\\n -\\cos z\n\\end{bmatrix}.\n"
},
{
"math_id": 29,
"text": "\\nabla f(\\rho, \\varphi, z) = \\frac{\\partial f}{\\partial \\rho}\\mathbf{e}_\\rho + \\frac{1}{\\rho}\\frac{\\partial f}{\\partial \\varphi}\\mathbf{e}_\\varphi + \\frac{\\partial f}{\\partial z}\\mathbf{e}_z,"
},
{
"math_id": 30,
"text": "\\nabla f(r, \\theta, \\varphi) = \\frac{\\partial f}{\\partial r}\\mathbf{e}_r + \\frac{1}{r}\\frac{\\partial f}{\\partial \\theta}\\mathbf{e}_\\theta + \\frac{1}{r \\sin\\theta}\\frac{\\partial f}{\\partial \\varphi}\\mathbf{e}_\\varphi,"
},
{
"math_id": 31,
"text": "\\nabla f = \\frac{\\partial f}{\\partial x^{i}}g^{ij} \\mathbf{e}_j"
},
{
"math_id": 32,
"text": "\\mathrm{d}f = \\frac{\\partial f}{\\partial x^{i}}\\mathbf{e}^i"
},
{
"math_id": 33,
"text": "\\mathbf{e}_i = \\partial \\mathbf{x}/\\partial x^i"
},
{
"math_id": 34,
"text": "\\mathbf{e}^i = \\mathrm{d}x^i"
},
{
"math_id": 35,
"text": "g^{ij}"
},
{
"math_id": 36,
"text": "\\hat{\\mathbf{e}}_i"
},
{
"math_id": 37,
"text": "\\hat{\\mathbf{e}}^i"
},
{
"math_id": 38,
"text": "h_i= \\lVert \\mathbf{e}_i \\rVert = \\sqrt{g_{i i}} = 1\\, / \\lVert \\mathbf{e}^i \\rVert"
},
{
"math_id": 39,
"text": "\\nabla f = \\frac{\\partial f}{\\partial x^{i}}g^{ij} \\hat{\\mathbf{e}}_{j}\\sqrt{g_{jj}} = \\sum_{i=1}^n \\, \\frac{\\partial f}{\\partial x^{i}} \\frac{1}{h_i} \\mathbf{\\hat{e}}_i"
},
{
"math_id": 40,
"text": "\\mathrm{d}f = \\sum_{i=1}^n \\, \\frac{\\partial f}{\\partial x^{i}} \\frac{1}{h_i} \\mathbf{\\hat{e}}^i"
},
{
"math_id": 41,
"text": "\\mathbf{\\hat{e}}_i"
},
{
"math_id": 42,
"text": "\\mathbf{\\hat{e}}^i"
},
{
"math_id": 43,
"text": "h_i"
},
{
"math_id": 44,
"text": "\\nabla f(p) = \\begin{bmatrix}\\frac{\\partial f}{\\partial x_1}(p) \\\\ \\vdots \\\\ \\frac{\\partial f}{\\partial x_n}(p) \\end{bmatrix} ;"
},
{
"math_id": 45,
"text": "df_p = \\begin{bmatrix}\\frac{\\partial f}{\\partial x_1}(p) & \\cdots & \\frac{\\partial f}{\\partial x_n}(p) \\end{bmatrix} ."
},
{
"math_id": 46,
"text": "\\nabla f(p) \\in T_p \\R^n"
},
{
"math_id": 47,
"text": "df_p \\colon T_p \\R^n \\to \\R"
},
{
"math_id": 48,
"text": "(\\R^n)^*"
},
{
"math_id": 49,
"text": "\n(df_p)(v) = \\begin{bmatrix}\\frac{\\partial f}{\\partial x_1}(p) & \\cdots & \\frac{\\partial f}{\\partial x_n}(p) \\end{bmatrix}\n\\begin{bmatrix}v_1 \\\\ \\vdots \\\\ v_n\\end{bmatrix}\n= \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i}(p) v_i\n= \\begin{bmatrix}\\frac{\\partial f}{\\partial x_1}(p) \\\\ \\vdots \\\\ \\frac{\\partial f}{\\partial x_n}(p) \\end{bmatrix} \\cdot \\begin{bmatrix}v_1 \\\\ \\vdots \\\\ v_n\\end{bmatrix}\n= \\nabla f(p) \\cdot v"
},
{
"math_id": 50,
"text": "f : \\R^n \\to \\R"
},
{
"math_id": 51,
"text": "x"
},
{
"math_id": 52,
"text": "\\R"
},
{
"math_id": 53,
"text": "df_x"
},
{
"math_id": 54,
"text": "Df(x)"
},
{
"math_id": 55,
"text": "(\\nabla f)_x\\cdot v = df_x(v)"
},
{
"math_id": 56,
"text": "v\\in\\R^n"
},
{
"math_id": 57,
"text": "\\cdot"
},
{
"math_id": 58,
"text": "n"
},
{
"math_id": 59,
"text": "\\left( \\frac{\\partial f}{\\partial x_1}, \\dots, \\frac{\\partial f}{\\partial x_n}\\right),"
},
{
"math_id": 60,
"text": "df_x(v)"
},
{
"math_id": 61,
"text": "(\\nabla f)_i = df^\\mathsf{T}_i."
},
{
"math_id": 62,
"text": "x_0"
},
{
"math_id": 63,
"text": "f(x) \\approx f(x_0) + (\\nabla f)_{x_0}\\cdot(x-x_0)"
},
{
"math_id": 64,
"text": "(\\nabla f)_{x_0}"
},
{
"math_id": 65,
"text": "\\lim_{h\\to 0} \\frac{|f(x+h)-f(x) -\\nabla f(x)\\cdot h|}{\\|h\\|} = 0,"
},
{
"math_id": 66,
"text": "\\nabla\\left(\\alpha f+\\beta g\\right)(a) = \\alpha \\nabla f(a) + \\beta\\nabla g (a)."
},
{
"math_id": 67,
"text": "\\nabla (fg)(a) = f(a)\\nabla g(a) + g(a)\\nabla f(a)."
},
{
"math_id": 68,
"text": "(f\\circ g)'(c) = \\nabla f(a)\\cdot g'(c),"
},
{
"math_id": 69,
"text": "\\nabla (f\\circ g)(c) = \\big(Dg(c)\\big)^\\mathsf{T} \\big(\\nabla f(a)\\big),"
},
{
"math_id": 70,
"text": "\\nabla (h\\circ f)(a) = h'\\big(f(a)\\big)\\nabla f(a)."
},
{
"math_id": 71,
"text": " v \\in \\R^n"
},
{
"math_id": 72,
"text": "\\nabla_v f (x) = \\lim_{h \\rightarrow 0} \\frac{f(x + vh) - f(x)}{h},"
},
{
"math_id": 73,
"text": "f(x + vh)"
},
{
"math_id": 74,
"text": "\\nabla_v f (x) = \\lim_{h \\rightarrow 0} \\frac{(f(x) + \\nabla f \\cdot vh + R) - f(x)}{h},"
},
{
"math_id": 75,
"text": "R"
},
{
"math_id": 76,
"text": "vh"
},
{
"math_id": 77,
"text": "h"
},
{
"math_id": 78,
"text": "|\\nabla_v f (x)| = |\\nabla f \\cdot v| \\le |\\nabla f| |v| = |\\nabla f|."
},
{
"math_id": 79,
"text": "v^* = \\nabla f/|\\nabla f|"
},
{
"math_id": 80,
"text": "|\\nabla_{v^*} f (x)| = |(\\nabla f)^2/|\\nabla f|| = |\\nabla f|."
},
{
"math_id": 81,
"text": "\\mathbf{J}_\\mathbb{f}(\\mathbb{x})"
},
{
"math_id": 82,
"text": "\\mathbf{J}"
},
{
"math_id": 83,
"text": "\\mathbf J_{ij} = {\\partial f_i} / {\\partial x_j}"
},
{
"math_id": 84,
"text": "\\mathbf J = \\begin{bmatrix}\n \\dfrac{\\partial \\mathbf{f}}{\\partial x_1} & \\cdots & \\dfrac{\\partial \\mathbf{f}}{\\partial x_n} \\end{bmatrix}\n= \\begin{bmatrix}\n \\nabla^\\mathsf{T} f_1 \\\\ \n \\vdots \\\\\n \\nabla^\\mathsf{T} f_m \n \\end{bmatrix}\n= \\begin{bmatrix}\n \\dfrac{\\partial f_1}{\\partial x_1} & \\cdots & \\dfrac{\\partial f_1}{\\partial x_n}\\\\\n \\vdots & \\ddots & \\vdots\\\\\n \\dfrac{\\partial f_m}{\\partial x_1} & \\cdots & \\dfrac{\\partial f_m}{\\partial x_n} \\end{bmatrix}."
},
{
"math_id": 85,
"text": "\\nabla \\mathbf{f}=g^{jk}\\frac{\\partial f^i}{\\partial x^j} \\mathbf{e}_i \\otimes \\mathbf{e}_k,"
},
{
"math_id": 86,
"text": "\\frac{\\partial f^i}{\\partial x^j} = \\frac{\\partial (f^1,f^2,f^3)}{\\partial (x^1,x^2,x^3)}."
},
{
"math_id": 87,
"text": "\\nabla \\mathbf{f}=g^{jk}\\left(\\frac{\\partial f^i}{\\partial x^j}+{\\Gamma^i}_{jl}f^l\\right) \\mathbf{e}_i \\otimes \\mathbf{e}_k,"
},
{
"math_id": 88,
"text": "\\nabla^a f^b = g^{ac} \\nabla_c f^b ,"
},
{
"math_id": 89,
"text": "g(\\nabla f, X) = \\partial_X f,"
},
{
"math_id": 90,
"text": "g_x\\big((\\nabla f)_x, X_x \\big) = (\\partial_X f) (x),"
},
{
"math_id": 91,
"text": "\\sum_{j=1}^n X^{j} \\big(\\varphi(x)\\big) \\frac{\\partial}{\\partial x_{j}}(f \\circ \\varphi^{-1}) \\Bigg|_{\\varphi(x)},"
},
{
"math_id": 92,
"text": "\\nabla f = g^{ik} \\frac{\\partial f}{\\partial x^k} {\\textbf e}_i ."
},
{
"math_id": 93,
"text": "(\\partial_X f) (x) = (df)_x(X_x) ."
},
{
"math_id": 94,
"text": "\\sharp=\\sharp^g\\colon T^*M\\to TM"
}
] | https://en.wikipedia.org/wiki?curid=12461 |
1246115 | Countersteering | Single-track vehicle steering technique
Countersteering is used by single-track vehicle operators, such as cyclists and motorcyclists, to initiate a turn toward a given direction by momentarily steering counter to the desired direction ("steer left to turn right"). To negotiate a turn successfully, the combined center of mass of the rider and the single-track vehicle must first be leaned in the direction of the turn, and steering briefly in the opposite direction causes that lean. The rider's action of countersteering is sometimes referred to as "giving a steering command".
The scientific literature does not provide a clear and comprehensive definition of countersteering. In fact, "a proper distinction between steer torque and steer angle ... is not always made."
How it works.
When countersteering to turn right, the following is performed: a momentary steering torque is applied to the handlebars toward the left; the front wheel briefly steers to the left and the tire contact patches move to the left of the combined center of mass of rider and machine; the machine therefore leans to the right; the rider then steers the front wheel to the right, into the turn, to maintain the appropriate lean angle and follow the desired curve.
While this appears to be a complex sequence of motions, it is performed by every child who rides a bicycle. The entire sequence goes largely unnoticed by most riders, which is why some assert that they do not do it.
It is also important to distinguish the steering torque necessary to initiate the lean required for a given turn from the sustained steering torque and steering angle necessary to maintain a constant radius and lean angle until it is time to exit the turn.
Need to lean to turn.
A bike can negotiate a curve only when the combined center of mass of bike and rider leans toward the inside of the turn at an angle appropriate for the velocity and the radius of the turn:
formula_0
where formula_1 is the forward speed, formula_2 is the radius of the turn and formula_3 is the acceleration of gravity.
Higher speeds and tighter turns require greater lean angles. If the mass is not first leaned into the turn, the inertia of the rider and bike will cause them to continue in a straight line as the tires track out from under them along the curve. The transition of riding in a straight line to negotiating a turn is a process of leaning the bike into the turn, and the most practical way to cause that lean (of the combined center of mass of bike and rider) is to move the support points in the opposite direction first.
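A short numerical sketch of this relationship; the speeds and turn radius below are assumed illustrative values, not figures from the text.

```python
import math

def lean_angle_deg(v, r, g=9.81):
    """Lean angle from vertical required for a turn of radius r (m) at speed v (m/s)."""
    return math.degrees(math.atan(v**2 / (g * r)))

print(lean_angle_deg(v=14.0, r=30.0))   # ~33.7 degrees (about 50 km/h on a 30 m radius turn)
print(lean_angle_deg(v=7.0,  r=30.0))   # ~9.5 degrees at half that speed, same turn
```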
Stable lean.
As the desired angle is approached, the front wheel must usually be steered into the turn to maintain that angle or the bike will continue to lean with gravity, increasing in rate, until the side contacts the ground. This process often requires little or no physical effort, because the geometry of the steering system of most bikes is designed in such a way that the front wheel has a strong tendency to steer in the direction of a lean.
The actual torque the rider must apply to the handlebars to maintain a steady-state turn is a complex function of bike geometry, mass distribution, rider position, tire properties, turn radius, and forward speed. At low speeds, the steering torque necessary from the rider is usually negative, that is opposite the direction of the turn, even when the steering angle is in the direction of the turn. At higher speeds, the direction of the necessary input torque often becomes positive, that is in the same direction as the turn.
At low speeds.
At low speeds countersteering is equally necessary, but the countersteering is then so subtle that it is hidden by the continuous corrections that are made in balancing the bike, often falling below a just noticeable difference or threshold of perception of the rider. Countersteering at low speed may be further concealed by the ensuing much larger steering angle possible in the direction of the turn.
Gyroscopic effects.
One effect of turning the front wheel is a roll moment caused by gyroscopic precession. The magnitude of this moment is proportional to the moment of inertia of the front wheel, its spin rate (forward motion), the rate that the rider turns the front wheel by applying a torque to the handlebars, and the cosine of the angle between the steering axis and the vertical.
For a sample motorcycle moving at 22 m/s (50 mph) that has a front wheel with a moment of inertia of 0.6 kg·m2, turning the front wheel one degree in half a second generates a roll moment of 3.5 N·m. In comparison, the lateral force on the front tire as it tracks out from under the motorcycle reaches a maximum of 50 N. This, acting on the 0.6 m (2 ft) height of the center of mass, generates a roll moment of 30 N·m.
While the moment from gyroscopic forces is only 12% of this, it can play a significant part because it begins to act as soon as the rider applies the torque, instead of building up more slowly as the wheel out-tracks. This can be especially helpful in motorcycle racing.
Motorcycles.
Deliberately countersteering is essential for safe motorcycle riding, and as a result is generally a part of safe riding courses run by organisations like the Motorcycle Safety Foundation, the Canada Safety Council, or Australian Q-Ride providers. Deliberately countersteering a motorcycle is a much more efficient way to steer than to just lean. At higher speeds the self-balancing property of the bike gets stiffer, and a given input force applied to the handlebars produces smaller changes in lean angle.
Training.
Much of the art of motorcycle cornering is learning how to effectively push the grips into corners and how to maintain proper lean angles through the turn. When the need for a quick swerve to one side suddenly arises in an emergency, it is essential to know, through prior practice, that countersteering is the most efficient way to change the motorcycle's course. Many accidents result when otherwise experienced riders who have never carefully developed this skill encounter an unexpected obstacle.
To encourage an understanding of the phenomena around countersteering, the phrase "positive steering" is sometimes used. Other phrases are "PRESS – To turn, the motorcycle must lean", "To lean the motorcycle, press on the handgrip in the direction of the turn" or "Press left – lean left – go left".
The Motorcycle Safety Foundation teaches countersteering to all students in all of its schools, as do all motorcycle racing schools. Countersteering is included in United States state motorcycle operator manuals and tests, such as Washington, New Jersey, California, and Missouri.
Safety.
According to the Hurt Report, most motorcycle riders in the United States would over-brake and skid the rear wheel and under-brake the front when trying hard to avoid a collision. The ability to countersteer and swerve was essentially absent with many motorcycle operators. The often small amount of initial countersteering input required to get the bike to lean, which may be as little as 0.125 seconds, keeps many riders unaware of the concept.
Multi-track vehicles.
Three-wheeled motorcycles without the ability to lean have no need to be countersteered, and an initial steer torque in one direction does not automatically result in a turn in the other direction. This includes sidecar rigs where the car is rigidly mounted on the bike. The three-wheeled BRP Can-Am Spyder Roadster uses two front wheels which do not lean, and so it steers like a car.
Some sidecars allow the motorcycle to lean independently of the sidecar, and in some cases the sidecar even leans in parallel with the motorcycle. These vehicles must be countersteered the same way as a solo motorcycle. The three-wheeled Piaggio MP3 uses mechanical linkages to lean the two front wheels in parallel with the rear frame, so it is countersteered in the same manner as a two-wheeled motorcycle.
Free-leaning multi-track vehicles must be balanced by countersteering before turning. Multi-track leaning vehicles that are forced-tilted, such as the Carver, are tilted without countersteering the control and are not balanced by the operator. Later versions of the Carver introduced automatic countersteer to increase tilt speed and reduce the force required to tilt the vehicle. Other forced-tilted vehicles may incorporate automatic countersteering. A prototype tilting multi-track free leaning vehicle was developed in 1984 that employs automatic countersteering and does not require any balancing skills.
Countersteering by weight shifting.
With a sufficiently light bike (especially a bicycle), the rider can initiate a lean and turn without using the handlebars by shifting body weight, called counter lean by some authors.
Documented physical experimentation shows that on heavy bikes (many motorcycles) shifting body weight is less effective at initiating leans.
The following is done when countersteering using weight shifting to turn left: the rider shifts body weight, applying a torque to the frame through the seat or footpegs, so that the bike leans to the right while the rider's torso moves to the left; the steering geometry responds to the bike's rightward lean by steering the front wheel to the right; the wheel contact patches move to the right of the combined center of mass, which therefore leans to the left; the front wheel then steers to the left to balance that lean and carry the bike through the turn.
The amount of leftward steering necessary to balance the leftward lean appropriate for the forward speed and radius of the turn is controlled by the torque generated by the rider, again either at the seat or in the torso.
To straighten back out of the turn, the rider simply reverses the procedure for entering it: cause the bike to lean farther to the left; this causes it to steer farther to the left, which moves the wheel contact patches farther to the left, eventually reducing the leftward lean and exiting the turn.
A National Highway Traffic Safety Administration study showed that rider lean has a larger influence on a lighter motorcycle than a heavier one, which helps explain why no-hands steering is less effective on heavy motorcycles. Leaning the torso with respect to the bike does not cause the bike to lean far enough to generate anything but the shallowest turns. No-hands riders may be able to keep a heavy bike centered in a lane and negotiate shallow highway turns, but not much else.
Complex maneuvers are not possible using weight shifting alone because even for a light machine there is insufficient control authority.
Although on a sufficiently light bike (especially a bicycle), the rider can initiate a lean and turn by shifting body weight, there is no evidence that complex maneuvers can be performed by bodyweight alone.
Other uses.
The term countersteering is also used by some authors to refer to the need on bikes to steer in the opposite direction of the turn (negative steering angle) to maintain control in response to significant rear wheel slippage. Motorcycle speedway racing takes place on an oval track with a loose surface of dirt, cinders or shale. Riders slide their machines sideways, powersliding or broadsiding into the turns, using an extreme form of this type of countersteering that is maintained throughout the turn. This also works, without power, for bicycles on loose or slippery surfaces, although it is an advanced technique.
The term is also used in the discussion of the automobile driving technique called drifting.
The Wright Brothers.
Wilbur Wright explained countersteering this way:
<templatestyles src="Template:Blockquote/styles.css" />I have asked dozens of bicycle riders how they turn to the left. I have never found a single person who stated all the facts correctly when first asked. They almost invariably said that to turn to the left, they turned the handlebar to the left and as a result made a turn to the left. But on further questioning them, some would agree that they first turned the handlebar a little to the right, and then as the machine inclined to the left, they turned the handlebar to the left and as a result made the circle, inclining inward.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta = \\arctan \\left (\\frac{v^2}{gr}\\right )"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "g"
}
] | https://en.wikipedia.org/wiki?curid=1246115 |
12461348 | Blom's scheme | Blom's scheme is a symmetric threshold key exchange protocol in cryptography. The scheme was proposed by the Swedish cryptographer Rolf Blom in a series of articles in the early 1980s.
A trusted party gives each participant a secret key and a public identifier, which enables any two participants to independently create a shared key for communicating. However, if an attacker can compromise the keys of at least k users, they can break the scheme and reconstruct every shared key. Blom's scheme is a form of threshold secret sharing.
Blom's scheme is currently used by the HDCP (Version 1.x only) copy protection scheme to generate shared keys for high-definition content sources and receivers, such as HD DVD players and high-definition televisions.
The protocol.
The key exchange protocol involves a trusted party (Trent) and a group of formula_0 users. Let Alice and Bob be two users of the group.
Protocol setup.
Trent chooses a random and secret symmetric matrix formula_1 over the finite field formula_2, where p is a prime number. formula_3 is required when a new user is to be added to the key sharing group.
For example:
formula_4
Inserting a new participant.
New users Alice and Bob want to join the key exchanging group. Trent chooses public identifiers for each of them; i.e., k-element vectors:
formula_5.
For example:
formula_6
Trent then computes their private keys:
formula_7
Using formula_8 as described above:
formula_9
Each will use their private key to compute shared keys with other participants of the group.
Computing a shared key between Alice and Bob.
Now Alice and Bob wish to communicate with one another. Alice has Bob's identifier formula_10 and her private key formula_11.
She computes the shared key formula_12, where formula_13 denotes matrix transpose. Bob does the same, using his private key and her identifier, giving the same result:
formula_14
They will each generate their shared key as follows:
formula_15
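The worked example above can be reproduced with a few lines of code. This is a minimal sketch of the arithmetic only, using the small parameters from the example; it makes no attempt at secure key handling.

```python
import numpy as np

p = 17                                   # prime modulus from the example
D = np.array([[1, 6, 2],                 # Trent's secret symmetric matrix
              [6, 3, 8],
              [2, 8, 2]])

I_alice = np.array([1, 2, 3])            # public identifiers
I_bob   = np.array([5, 3, 1])

# Trent computes each private key as g = D * I (mod p):
g_alice = (D @ I_alice) % p              # [2, 2, 7]
g_bob   = (D @ I_bob) % p                # [8, 13, 2]

# Each party combines its own private key with the other's public identifier:
k_alice_bob = (g_alice @ I_bob) % p      # Alice's result
k_bob_alice = (g_bob @ I_alice) % p      # Bob's result

print(k_alice_bob, k_bob_alice)          # 6 6 (equal because D is symmetric)
```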
Attack resistance.
In order to ensure at least k keys must be compromised before every shared key can be computed by an attacker, identifiers must be k-linearly independent: all sets of k randomly selected user identifiers must be linearly independent. Otherwise, a group of malicious users can compute the key of any other member whose identifier is linearly dependent to theirs. To ensure this property, the identifiers shall be preferably chosen from a MDS-Code matrix (maximum distance separable error correction code matrix). The rows of the MDS-Matrix would be the identifiers of the users. A MDS-Code matrix can be chosen in practice using the code-matrix of the Reed–Solomon error correction code (this error correction code requires only easily understandable mathematics and can be computed extremely quickly).
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle n"
},
{
"math_id": 1,
"text": "\\scriptstyle D_{k,k}"
},
{
"math_id": 2,
"text": "\\scriptstyle GF(p)"
},
{
"math_id": 3,
"text": "\\scriptstyle D"
},
{
"math_id": 4,
"text": "\\begin{align}\n k &= 3\\\\\n p &= 17\\\\\n D &= \\begin{pmatrix} 1&6&2\\\\6&3&8\\\\2&8&2\\end{pmatrix}\\ \\mathrm{mod}\\ 17\n\\end{align}"
},
{
"math_id": 5,
"text": "I_{\\mathrm{Alice}}, I_{\\mathrm{Bob}} \\in GF^k(p)"
},
{
"math_id": 6,
"text": "I_{\\mathrm{Alice}} = \\begin{pmatrix} 1 \\\\ 2 \\\\ 3 \\end{pmatrix}, I_{\\mathrm{Bob}} = \\begin{pmatrix} 5 \\\\ 3 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 7,
"text": "\\begin{align}\n g_{\\mathrm{Alice}} &= DI_{\\mathrm{Alice}}\\\\\n g_{\\mathrm{Bob}} &= DI_{\\mathrm{Bob}}\n\\end{align}"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "\\begin{align}\n g_{\\mathrm{Alice}} &= \\begin{pmatrix} 1&6&2\\\\6&3&8\\\\2&8&2\\end{pmatrix}\\begin{pmatrix} 1 \\\\ 2 \\\\ 3 \\end{pmatrix} = \\begin{pmatrix} 19\\\\36\\\\24\\end{pmatrix}\\ \\mathrm{mod}\\ 17 = \\begin{pmatrix} 2\\\\2\\\\7\\end{pmatrix}\\ \\\\\n g_{\\mathrm{Bob}} &= \\begin{pmatrix} 1&6&2\\\\6&3&8\\\\2&8&2\\end{pmatrix}\\begin{pmatrix} 5 \\\\ 3 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 25\\\\47\\\\36\\end{pmatrix}\\ \\mathrm{mod}\\ 17 = \\begin{pmatrix} 8\\\\13\\\\2\\end{pmatrix}\\ \n\\end{align}"
},
{
"math_id": 10,
"text": "\\scriptstyle I_{\\mathrm{Bob}}"
},
{
"math_id": 11,
"text": "\\scriptstyle g_{\\mathrm{Alice}}"
},
{
"math_id": 12,
"text": "\\scriptstyle k_{\\mathrm{Alice / Bob}} = g_{\\mathrm{Alice}}^T I_{\\mathrm{Bob}}"
},
{
"math_id": 13,
"text": "\\scriptstyle T"
},
{
"math_id": 14,
"text": "k_{\\mathrm{Alice / Bob}} = k_{\\mathrm{Alice / Bob}}^T = (g_{\\mathrm{Alice}}^T I_{\\mathrm{Bob}})^T = (I_{\\mathrm{Alice}}^T D^T I_{\\mathrm{Bob}})^T = I_{\\mathrm{Bob}}^T D I_{\\mathrm{Alice}} = k_{\\mathrm{Bob / Alice}}"
},
{
"math_id": 15,
"text": "\\begin{align}\n k_{\\mathrm{Alice / Bob}} &= \\begin{pmatrix} 2\\\\2\\\\7 \\end{pmatrix}^T \\begin{pmatrix} 5\\\\3\\\\1 \\end{pmatrix} = 2 \\times 5 + 2 \\times 3 + 7 \\times 1 = 23\\ \\mathrm{mod}\\ 17 = 6\\\\\n k_{\\mathrm{Bob / Alice}} &= \\begin{pmatrix} 8\\\\13\\\\2 \\end{pmatrix}^T \\begin{pmatrix} 1\\\\2\\\\3 \\end{pmatrix} = 8 \\times 1 + 13 \\times 2 + 2 \\times 3 = 40\\ \\mathrm{mod}\\ 17 = 6\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=12461348 |
12462 | Gauss (unit) | Unit of magnetic induction
<templatestyles src="Template:Infobox/styles-images.css" />
The gauss (symbol: , sometimes Gs) is a unit of measurement of magnetic induction, also known as magnetic flux density. The unit is part of the Gaussian system of units, which inherited it from the older centimetre–gram–second electromagnetic units (CGS-EMU) system. It was named after the German mathematician and physicist Carl Friedrich Gauss in 1936. One gauss is defined as one maxwell per square centimetre.
As the centimetre–gram–second system of units (cgs system) has been superseded by the International System of Units (SI), the use of the gauss has been deprecated by the standards bodies, but is still regularly used in various subfields of science. The SI unit for magnetic flux density is the tesla (symbol T); one tesla corresponds to 10,000 gauss.
Name, symbol, and metric prefixes.
Albeit not a component of the International System of Units, the usage of the gauss generally follows the rules for SI units. Since the name is derived from a person's name, its symbol is the uppercase letter "G". When the unit is spelled out, it is written in lowercase ("gauss"), unless it begins a sentence. The gauss may be combined with metric prefixes, such as in milligauss, mG (or mGs), or kilogauss, kG (or kGs).
Unit conversions.
formula_0
The gauss is the unit of magnetic flux density B in the system of Gaussian units and is equal to 1 Mx/cm2 or 1 g/(Bi·s2), while the oersted is the unit of the H-field. One tesla (T) corresponds to 10^4 gauss, and one ampere (A) per metre corresponds to 4π × 10^−3 oersted.
The units for magnetic flux Φ, which is the integral of the magnetic B-field over an area, are the weber (Wb) in the SI and the maxwell (Mx) in the CGS-Gaussian system. The conversion factor is 10^8 Mx/Wb, since flux is the integral of field over an area, and area has the units of the square of distance; the factor is therefore the magnetic field conversion factor times the square of the linear distance conversion factor: 10^8 Mx/Wb = 10^4 G/T × (10^2 cm/m)^2.
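A minimal sketch of these conversion factors in code; the numeric sample values are arbitrary illustrations.

```python
def tesla_to_gauss(b_tesla):
    return b_tesla * 1e4           # 1 T corresponds to 10^4 G

def gauss_to_tesla(b_gauss):
    return b_gauss * 1e-4

def weber_to_maxwell(phi_weber):
    return phi_weber * 1e8         # 10^4 (G/T) x (10^2 cm/m)^2

print(tesla_to_gauss(0.5))         # 5000.0 G
print(gauss_to_tesla(0.5))         # 5e-05 T (Earth's field is roughly of this order)
print(weber_to_maxwell(2e-5))      # 2000.0 Mx
```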
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n1\\,{\\rm G} &= {\\rm Mx}{\\cdot}{\\rm cm}^{-2} = \\frac{\\rm g}{{\\rm Bi}{\\cdot}{\\rm s}^2}\\\\\n &\\text{ ≘ } 10^{-4}\\,{\\rm T} = 10^{-4}\\frac{\\rm kg}{{\\rm A}{\\cdot}{\\rm s^2}}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=12462 |
12463 | Glacier | Persistent body of ice that is moving downhill under its own weight
A glacier (; ) is a persistent body of dense ice that is constantly moving downhill under its own weight. A glacier forms where the accumulation of snow exceeds its ablation over many years, often centuries. It acquires distinguishing features, such as crevasses and seracs, as it slowly flows and deforms under stresses induced by its weight. As it moves, it abrades rock and debris from its substrate to create landforms such as cirques, moraines, or fjords. Although a glacier may flow into a body of water, it forms only on land and is distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water.
On Earth, 99% of glacial ice is contained within vast ice sheets (also known as "continental glaciers") in the polar regions, but glaciers may be found in mountain ranges on every continent other than the Australian mainland, including Oceania's high-latitude oceanic island countries such as New Zealand. Between latitudes 35°N and 35°S, glaciers occur only in the Himalayas, Andes, and a few high mountains in East Africa, Mexico, New Guinea and on Zard-Kuh in Iran. With more than 7,000 known glaciers, Pakistan has more glacial ice than any other country outside the polar regions. Glaciers cover about 10% of Earth's land surface. Continental glaciers cover about 98% of Antarctica's surface, with an average ice thickness of roughly two kilometres. Greenland and Patagonia also have huge expanses of continental glaciers. The volume of glaciers, not including the ice sheets of Antarctica and Greenland, has been estimated at 170,000 km3.
Glacial ice is the largest reservoir of fresh water on Earth; together with ice sheets, it holds about 69 percent of the world's freshwater. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. However, within high-altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater.
Since glacial mass is affected by long-term climatic changes, e.g., precipitation, mean temperature, and cloud cover, glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level.
A large piece of compressed ice, or a glacier, appears blue, as large quantities of water appear blue, because water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles. Air bubbles, which give a white color to ice, are squeezed out by pressure increasing the created ice's density.
Etymology and related terms.
The word "glacier" is a loanword from French and goes back, via Franco-Provençal, to the Vulgar Latin ', derived from the Late Latin ', and ultimately Latin "", meaning "ice". The processes and features caused by or related to glaciers are referred to as glacial. The process of glacier establishment, growth and flow is called glaciation. The corresponding area of study is called glaciology. Glaciers are important components of the global cryosphere.
Types.
Classification by size, shape and behavior.
Glaciers are categorized by their morphology, thermal characteristics, and behavior. "Alpine glaciers" form on the crests and slopes of mountains. A glacier that fills a valley is called a "valley glacier", or alternatively, an "alpine glacier" or "mountain glacier". A large body of glacial ice astride a mountain, mountain range, or volcano is termed an "ice cap" or "ice field". Ice caps have an area of less than 50,000 km2 by definition.
Glacial bodies larger than 50,000 km2 are called "ice sheets" or "continental glaciers". Several kilometers deep, they obscure the underlying topography. Only nunataks protrude from their surfaces. The only extant ice sheets are the two that cover most of Antarctica and Greenland. They contain vast quantities of freshwater, enough that if both melted, global sea levels would rise by tens of metres. Portions of an ice sheet or cap that extend into water are called ice shelves; they tend to be thin with limited slopes and reduced velocities. Narrow, fast-moving sections of an ice sheet are called "ice streams". In Antarctica, many ice streams drain into large ice shelves. Some drain directly into the sea, often with an ice tongue, like Mertz Glacier.
"Tidewater glaciers" are glaciers that terminate in the sea, including most glaciers flowing from Greenland, Antarctica, Baffin, Devon, and Ellesmere Islands in Canada, Southeast Alaska, and the Northern and Southern Patagonian Ice Fields. As the ice reaches the sea, pieces break off or calve, forming icebergs. Most tidewater glaciers calve above sea level, which often results in a tremendous impact as the iceberg strikes the water. Tidewater glaciers undergo centuries-long cycles of advance and retreat that are much less affected by climate change than other glaciers.
Classification by thermal state.
Thermally, a "temperate glacier" is at a melting point throughout the year, from its surface to its base. The ice of a "polar glacier" is always below the freezing threshold from the surface to its base, although the surface snowpack may experience seasonal melting. A "subpolar glacier" includes both temperate and polar ice, depending on the depth beneath the surface and position along the length of the glacier. In a similar way, the thermal regime of a glacier is often described by its basal temperature. A "cold-based glacier" is below freezing at the ice-ground interface and is thus frozen to the underlying substrate. A "warm-based glacier" is above or at freezing at the interface and is able to slide at this contact. This contrast is thought to a large extent to govern the ability of a glacier to effectively erode its bed, as sliding ice promotes plucking at rock from the surface below. Glaciers which are partly cold-based and partly warm-based are known as "polythermal".
Formation.
Glaciers form where the accumulation of snow and ice exceeds ablation. A glacier usually originates from a cirque landform (alternatively known as a corrie or cwm) – a typically armchair-shaped geological feature (such as a depression between mountains enclosed by arêtes) – which collects and compresses through gravity the snow that falls into it. This snow accumulates and the weight of the snow falling above compacts it, forming névé (granular snow). Further crushing of the individual snowflakes and squeezing the air from the snow turns it into "glacial ice". This glacial ice will fill the cirque until it "overflows" through a geological weakness or vacancy, such as a gap between two mountains. When the mass of snow and ice reaches sufficient thickness, it begins to move by a combination of surface slope, gravity, and pressure. On steeper slopes, this can occur with a relatively thin accumulation of snow and ice.
In temperate glaciers, snow repeatedly freezes and thaws, changing into granular ice called firn. Under the pressure of the layers of ice and snow above it, this granular ice fuses into denser firn. Over a period of years, layers of firn undergo further compaction and become glacial ice. Glacier ice is slightly more dense than ice formed from frozen water because glacier ice contains fewer trapped air bubbles.
Glacial ice has a distinctive blue tint because it absorbs some red light due to an overtone of the infrared OH stretching mode of the water molecule. (Liquid water appears blue for the same reason. The blue of glacier ice is sometimes misattributed to Rayleigh scattering of bubbles in the ice.)
Structure.
A glacier originates at a location called its glacier head and terminates at its glacier foot, snout, or terminus.
Glaciers are broken into zones based on surface snowpack and melt conditions. The ablation zone is the region where there is a net loss in glacier mass. The upper part of a glacier, where accumulation exceeds ablation, is called the accumulation zone. The equilibrium line separates the ablation zone and the accumulation zone; it is the contour where the amount of new snow gained by accumulation is equal to the amount of ice lost through ablation. In general, the accumulation zone accounts for 60–70% of the glacier's surface area, more if the glacier calves icebergs. Ice in the accumulation zone is deep enough to exert a downward force that erodes underlying rock. After a glacier melts, it often leaves behind a bowl- or amphitheater-shaped depression that ranges in size from large basins like the Great Lakes to smaller mountain depressions known as cirques.
The accumulation zone can be subdivided based on its melt conditions.
The health of a glacier is usually assessed by determining the glacier mass balance or observing terminus behavior. Healthy glaciers have large accumulation zones, more than 60% of their area is snow-covered at the end of the melt season, and they have a terminus with a vigorous flow.
Following the Little Ice Age's end around 1850, glaciers around the Earth have retreated substantially. A slight cooling led to the advance of many alpine glaciers between 1950 and 1985, but since 1985 glacier retreat and mass loss has become larger and increasingly ubiquitous.
Motion.
Glaciers move downhill by the force of gravity and the internal deformation of ice. At the molecular level, ice consists of stacked layers of molecules with relatively weak bonds between layers. When the amount of strain (deformation) is proportional to the stress being applied, ice will act as an elastic solid. Ice needs to reach a sufficient thickness to even start flowing, but once its thickness exceeds about 50 m (160 ft), the stress on the layers above exceeds the inter-layer binding strength, and they move faster than the layers below. This means that small amounts of stress can result in a large amount of strain, causing the deformation to become a plastic flow rather than elastic. Then, the glacier will begin to deform under its own weight and flow across the landscape. According to the Glen–Nye flow law, the relationship between stress and strain, and thus the rate of internal flow, can be modeled as follows:
formula_0
where:
formula_1 = shear strain (flow) rate
formula_2 = stress
formula_3 = a constant between 2 and 4 (typically 3 for most glaciers)
formula_4 = a temperature-dependent constant
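A minimal numerical sketch of the flow law, strain rate = A × stress^n. The stresses and the softness parameter A below are assumed illustrative values rather than figures from the text; A in particular is strongly temperature-dependent.

```python
A = 2.4e-24        # Pa^-3 s^-1, a commonly quoted order of magnitude for temperate ice (assumed)
n = 3              # flow-law exponent typical for glacier ice

for stress_kpa in (50, 100, 200):
    stress = stress_kpa * 1e3                   # Pa
    strain_rate = A * stress ** n               # s^-1
    print(stress_kpa, strain_rate * 3.15e7)     # approximate strain per year

# Doubling the stress multiplies the flow rate by 2^3 = 8: flow is strongly non-linear.
```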
The lowest velocities are near the base of the glacier and along valley sides where friction acts against flow, causing the most deformation. Velocity increases inward toward the center line and upward, as the amount of deformation decreases. The highest flow velocities are found at the surface, representing the sum of the velocities of all the layers below.
Because ice can flow faster where it is thicker, the rate of glacier-induced erosion is directly proportional to the thickness of overlying ice. Consequently, pre-glacial low hollows will be deepened and pre-existing topography will be amplified by glacial action, while nunataks, which protrude above ice sheets, barely erode at all – erosion has been estimated as 5 m per 1.2 million years. This explains, for example, the deep profile of fjords, which can reach a kilometer in depth as ice is topographically steered into them. The extension of fjords inland increases the rate of ice sheet thinning since they are the principal conduits for draining ice sheets. It also makes the ice sheets more sensitive to changes in climate and the ocean.
Although evidence in favor of glacial flow was known by the early 19th century, other theories of glacial motion were advanced, such as the idea that meltwater, refreezing inside glaciers, caused the glacier to dilate and extend its length. As it became clear that glaciers behaved to some degree as if the ice were a viscous fluid, it was argued that "regelation", or the melting and refreezing of ice at a temperature lowered by the pressure on the ice inside the glacier, was what allowed the ice to deform and flow. James Forbes came up with the essentially correct explanation in the 1840s, although it was several decades before it was fully accepted.
Fracture zone and cracks.
The uppermost layers of a glacier are rigid because they are under low pressure. This upper section is known as the "fracture zone" and moves mostly as a single unit over the plastic-flowing lower section. When a glacier moves through irregular terrain, cracks called crevasses develop in the fracture zone. Crevasses form because of differences in glacier velocity. If two rigid sections of a glacier move at different speeds or directions, shear forces cause them to break apart, opening a crevasse. Crevasses are seldom more than a few tens of metres deep but, in some cases, can be several hundred metres deep. Below this depth, the plasticity of the ice prevents the formation of cracks. Intersecting crevasses can create isolated peaks in the ice, called seracs.
Crevasses can form in several different ways. Transverse crevasses are transverse to flow and form where steeper slopes cause a glacier to accelerate. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally. Marginal crevasses form near the edge of the glacier, caused by the reduction in speed caused by friction of the valley walls. Marginal crevasses are largely transverse to flow. Moving glacier ice can sometimes separate from the stagnant ice above, forming a bergschrund. Bergschrunds resemble crevasses but are singular features at a glacier's margins. Crevasses make travel over glaciers hazardous, especially when they are hidden by fragile snow bridges.
Below the equilibrium line, glacial meltwater is concentrated in stream channels. Meltwater can pool in proglacial lakes on top of a glacier or descend into the depths of a glacier via moulins. Streams within or beneath a glacier flow in englacial or sub-glacial tunnels. These tunnels sometimes reemerge at the glacier's surface.
Subglacial processes.
Most of the important processes controlling glacial motion occur in the ice-bed contact, even though it is only a few meters thick. The bed's temperature, roughness and softness define basal shear stress, which in turn defines whether movement of the glacier will be accommodated by motion in the sediments, or whether the glacier will be able to slide. A soft bed, with high porosity and low pore fluid pressure, allows the glacier to move by sediment sliding: the base of the glacier may even remain frozen to the bed, where the underlying sediment slips underneath it like a tube of toothpaste. A hard bed cannot deform in this way; therefore the only way for hard-based glaciers to move is by basal sliding, where meltwater forms between the ice and the bed itself. Whether a bed is hard or soft depends on the porosity and pore pressure; higher porosity decreases the sediment strength (thus increases the shear stress τB).
Porosity may vary through a range of processes.
Bed softness may vary in space or time, and changes dramatically from glacier to glacier. An important factor is the underlying geology; glacial speeds tend to differ more when they change bedrock than when the gradient changes. Further, bed roughness can also act to slow glacial motion. The roughness of the bed is a measure of how many boulders and obstacles protrude into the overlying ice. Ice flows around these obstacles by melting under the high pressure on their stoss side; the resultant meltwater is then forced into the cavity arising in their lee side, where it re-freezes.
As well as affecting the sediment stress, fluid pressure (pw) can affect the friction between the glacier and the bed. High fluid pressure provides a buoyancy force upwards on the glacier, reducing the friction at its base. The fluid pressure is compared to the ice overburden pressure, pi, given by ρgh. Under fast-flowing ice streams, these two pressures will be approximately equal, with an effective pressure (pi – pw) of 30 kPa; i.e. all of the weight of the ice is supported by the underlying water, and the glacier is afloat.
Basal melting and sliding.
Glaciers may also move by basal sliding, where the base of the glacier is lubricated by the presence of liquid water, reducing basal shear stress and allowing the glacier to slide over the terrain on which it sits. Meltwater may be produced by pressure-induced melting, friction or geothermal heat. The more variable the amount of melting at the surface of the glacier, the faster the ice will flow. Basal sliding is dominant in temperate or warm-based glaciers.
τD = ρgh sin α
where τD is the driving stress, ρ the ice density, g the gravitational acceleration, h the ice thickness, and α the ice surface slope in radians.
τB is the basal shear stress, a function of bed temperature and softness.
τF, the shear stress, is the lower of τB and τD. It controls the rate of plastic flow.
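As a rough worked example (the numbers are illustrative assumptions, not measurements from any particular glacier): for ice about 100 m thick on a surface slope of 2°, with ρ ≈ 917 kg/m³ and g ≈ 9.81 m/s², the driving stress is τD ≈ 917 × 9.81 × 100 × sin(2°) ≈ 3.1 × 10^4 Pa, or roughly 31 kPa.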
The presence of basal meltwater depends on both bed temperature and other factors. For instance, the melting point of water decreases under pressure, meaning that water melts at a lower temperature under thicker glaciers. This acts as a "double whammy", because thicker glaciers have a lower heat conductance, meaning that the basal temperature is also likely to be higher. Bed temperature tends to vary in a cyclic fashion. A cool bed has a high strength, reducing the speed of the glacier. This increases the rate of accumulation, since newly fallen snow is not transported away. Consequently, the glacier thickens, with three consequences: firstly, the bed is better insulated, allowing greater retention of geothermal heat.
Secondly, the increased pressure can facilitate melting. Most importantly, τD is increased. These factors will combine to accelerate the glacier. As friction increases with the square of velocity, faster motion will greatly increase frictional heating, with ensuing melting – which causes a positive feedback, increasing ice speed to a faster flow rate still: west Antarctic glaciers are known to reach velocities of up to a kilometre per year.
Eventually, the ice will be surging fast enough that it begins to thin, as accumulation cannot keep up with the transport. This thinning will increase the conductive heat loss, slowing the glacier and causing freezing. This freezing will slow the glacier further, often until it is stationary, whence the cycle can begin again.
The flow of water under the glacial surface can have a large effect on the motion of the glacier itself. Subglacial lakes contain significant amounts of water, which can move fast: cubic kilometres can be transported between lakes over the course of a couple of years. This motion is thought to occur in two main modes: "pipe flow" involves liquid water moving through pipe-like conduits, like a sub-glacial river; "sheet flow" involves motion of water in a thin layer. A switch between the two flow conditions may be associated with surging behaviour. Indeed, the loss of sub-glacial water supply has been linked with the shut-down of ice movement in the Kamb ice stream. The subglacial motion of water is expressed in the surface topography of ice sheets, which slump down into vacated subglacial lakes.
Speed.
The speed of glacial displacement is partly determined by friction. Friction makes the ice at the bottom of the glacier move more slowly than ice at the top. In alpine glaciers, friction is also generated at the valley's sidewalls, which slows the edges relative to the center.
Mean glacial speed varies greatly but is typically around per day. There may be no motion in stagnant areas; for example, in parts of Alaska, trees can establish themselves on surface sediment deposits. In other cases, glaciers can move as fast as per day, such as in Greenland's Jakobshavn Isbræ. Glacial speed is affected by factors such as slope, ice thickness, snowfall, longitudinal confinement, basal temperature, meltwater production, and bed hardness.
A few glaciers have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they accelerate, then return to their previous movement state. These surges may be caused by the failure of the underlying bedrock, the pooling of meltwater at the base of the glacier — perhaps delivered from a supraglacial lake — or the simple accumulation of mass beyond a critical "tipping point". Temporary rates up to per day have occurred when increased temperature or overlying pressure caused bottom ice to melt and water to accumulate beneath a glacier.
In glaciated areas where the glacier moves faster than one km per year, glacial earthquakes occur. These are large scale earthquakes that have seismic magnitudes as high as 6.1. The number of glacial earthquakes in Greenland peaks every year in July, August, and September and increased rapidly in the 1990s and 2000s. In a study using data from January 1993 through October 2005, more events were detected every year since 2002, and twice as many events were recorded in 2005 as there were in any other year.
Ogives.
Ogives or Forbes bands are alternating wave crests and valleys that appear as dark and light bands of ice on glacier surfaces. They are linked to seasonal motion of glaciers; the width of one dark and one light band generally equals the annual movement of the glacier. Ogives are formed when ice from an icefall is severely broken up, increasing ablation surface area during summer. This creates a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives consist only of undulations or color bands and are described as wave ogives or band ogives.
Geography.
Glaciers are present on every continent and in approximately fifty countries, excluding those (Australia, South Africa) that have glaciers only on distant subantarctic island territories. Extensive glaciers are found in Antarctica, Argentina, Chile, Canada, Pakistan, Alaska, Greenland and Iceland. Mountain glaciers are widespread, especially in the Andes, the Himalayas, the Rocky Mountains, the Caucasus, Scandinavian Mountains, and the Alps. Snezhnika glacier in Pirin Mountain, Bulgaria with a latitude of 41°46′09″ N is the southernmost glacial mass in Europe. Mainland Australia currently contains no glaciers, although a small glacier on Mount Kosciuszko was present in the last glacial period. In New Guinea, small, rapidly diminishing, glaciers are located on Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya, and in the Rwenzori Mountains. Oceanic islands with glaciers include Iceland, several of the islands off the coast of Norway including Svalbard and Jan Mayen to the far north, New Zealand and the subantarctic islands of Marion, Heard, Grande Terre (Kerguelen) and Bouvet. During glacial periods of the Quaternary, Taiwan, Hawaii on Mauna Kea and Tenerife also had large alpine glaciers, while the Faroe and Crozet Islands were completely glaciated.
The permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land, amount of snowfall and the winds. Glaciers can be found in all latitudes except from 20° to 27° north and south of the equator where the presence of the descending limb of the Hadley circulation lowers precipitation so much that with high insolation snow lines reach above . Between 19˚N and 19˚S, however, precipitation is higher, and the mountains above usually have permanent snow.
Even at high latitudes, glacier formation is not inevitable. Areas of the Arctic, such as Banks Island, and the McMurdo Dry Valleys in Antarctica are considered polar deserts where glaciers cannot form because they receive little snowfall despite the bitter cold. Cold air, unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria, lowland Siberia, and central and northern Alaska, though extraordinarily cold, had such light snowfall that glaciers could not form.
In addition to the dry, unglaciated polar regions, some mountains and volcanoes in Bolivia, Chile and Argentina are high () and cold, but the relative lack of precipitation prevents snow from accumulating into glaciers. This is because these peaks are located near or in the hyperarid Atacama Desert.
Glacial geology.
Erosion.
Glaciers erode terrain through two principal processes: plucking and abrasion.
As glaciers flow over bedrock, they soften and lift blocks of rock into the ice. This process, called plucking, is caused by subglacial water that penetrates fractures in the bedrock and subsequently freezes and expands. This expansion causes the ice to act as a lever that loosens the rock by lifting it. Thus, sediments of all sizes become part of the glacier's load. If a retreating glacier gains enough debris, it may become a rock glacier, like the Timpanogos Glacier in Utah.
Abrasion occurs when the ice and its load of rock fragments slide over bedrock and function as sandpaper, smoothing and polishing the bedrock below. The pulverized rock this process produces is called rock flour and is made up of rock grains between 0.002 and 0.00625 mm in size. Abrasion leads to steeper valley walls and mountain slopes in alpine settings, which can cause avalanches and rock slides, which add even more material to the glacier. Glacial abrasion is commonly characterized by glacial striations. Glaciers produce these when they contain large boulders that carve long scratches in the bedrock. By mapping the direction of the striations, researchers can determine the direction of the glacier's movement. Similar to striations are chatter marks, lines of crescent-shaped depressions in the rock underlying a glacier. They are formed by abrasion when boulders in the glacier are repeatedly caught and released as they are dragged along the bedrock. The rate of glacier erosion varies. Six factors control erosion rate:
When the bedrock has frequent fractures on the surface, glacial erosion rates tend to increase as plucking is the main erosive force on the surface; when the bedrock has wide gaps between sporadic fractures, however, abrasion tends to be the dominant erosive form and glacial erosion rates become slow. Glaciers in lower latitudes tend to be much more erosive than glaciers in higher latitudes, because they have more meltwater reaching the glacial base and facilitate sediment production and transport under the same moving speed and amount of ice.
Material that becomes incorporated in a glacier is typically carried as far as the zone of ablation before being deposited. Glacial deposits are of two distinct types:
Larger pieces of rock that are encrusted in till or deposited on the surface are called "glacial erratics". They range in size from pebbles to boulders, but as they are often moved great distances, they may be drastically different from the material upon which they are found. Patterns of glacial erratics hint at past glacial motions.
Moraines.
Glacial moraines are formed by the deposition of material from a glacier and are exposed after the glacier has retreated. They usually appear as linear mounds of till, a non-sorted mixture of rock, gravel, and boulders within a matrix of fine powdery material. Terminal or end moraines are formed at the foot or terminal end of a glacier. Lateral moraines are formed on the sides of the glacier. Medial moraines are formed when two different glaciers merge and the lateral moraines of each coalesce to form a moraine in the middle of the combined glacier. Less apparent are ground moraines, also called "glacial drift", which often blanket the surface underneath the glacier downslope from the equilibrium line. The term "moraine" is of French origin. It was coined by peasants to describe alluvial embankments and rims found near the margins of glaciers in the French Alps. In modern geology, the term is used more broadly and is applied to a series of formations, all of which are composed of till. Moraines can also create moraine-dammed lakes.
Drumlins.
Drumlins are asymmetrical, canoe-shaped hills made mainly of till. Their heights vary from 15 to 50 meters, and they can reach a kilometer in length. The steepest side of the hill faces the direction from which the ice advanced ("stoss"), while a longer slope is left in the ice's direction of movement ("lee"). Drumlins are found in groups called "drumlin fields" or "drumlin camps". One of these fields is found east of Rochester, New York; it is estimated to contain about 10,000 drumlins. Although the process that forms drumlins is not fully understood, their shape implies that they are products of the plastic deformation zone of ancient glaciers. It is believed that many drumlins were formed when glaciers advanced over and altered the deposits of earlier glaciers.
Glacial valleys, cirques, arêtes, and pyramidal peaks.
Before glaciation, mountain valleys have a characteristic "V" shape, produced by eroding water. During glaciation, these valleys are often widened, deepened and smoothed to form a U-shaped glacial valley or glacial trough, as it is sometimes called. The erosion that creates glacial valleys truncates any spurs of rock or earth that may have earlier extended across the valley, creating broadly triangular-shaped cliffs called truncated spurs. Within glacial valleys, depressions created by plucking and abrasion can be filled by lakes, called paternoster lakes. If a glacial valley runs into a large body of water, it forms a fjord.
Typically glaciers deepen their valleys more than their smaller tributaries. Therefore, when glaciers recede, the valleys of the tributary glaciers remain above the main glacier's depression and are called hanging valleys.
At the start of a classic valley glacier is a bowl-shaped cirque, which has escarped walls on three sides but is open on the side that descends into the valley. Cirques are where ice begins to accumulate in a glacier. Two glacial cirques may form back to back and erode their backwalls until only a narrow ridge, called an arête, is left. This structure may result in a mountain pass. If multiple cirques encircle a single mountain, they create pointed pyramidal peaks; particularly steep examples are called horns.
Roches moutonnées.
Passage of glacial ice over an area of bedrock may cause the rock to be sculpted into a knoll called a "roche moutonnée," or "sheepback" rock. Roches moutonnées may be elongated, rounded and asymmetrical in shape. They range in length from less than a meter to several hundred meters long. Roches moutonnées have a gentle slope on their up-glacier sides and a steep to vertical face on their down-glacier sides. The glacier abrades the smooth slope on the upstream side as it flows along, but tears rock fragments loose and carries them away from the downstream side via plucking.
Alluvial stratification.
As the water that emerges from the ablation zone moves away from the glacier, it carries fine eroded sediments with it. As the speed of the water decreases, so does its capacity to carry objects in suspension. The water thus gradually deposits the sediment as it runs, creating an outwash plain. When this phenomenon occurs in a valley, it is called a "valley train". When the deposition is in an estuary, the sediments are known as bay mud. Outwash plains and valley trains are usually accompanied by basins known as "kettles". These are small lakes formed when large ice blocks that are trapped in alluvium melt and produce water-filled depressions. Kettle diameters range from 5 m to 13 km, with depths of up to 45 meters. Most are circular in shape because the blocks of ice that formed them were rounded as they melted.
Glacial deposits.
When a glacier's size shrinks below a critical point, its flow stops and it becomes stationary. Meanwhile, meltwater within and beneath the ice leaves stratified alluvial deposits. These deposits, in the forms of columns, terraces and clusters, remain after the glacier melts and are known as "glacial deposits". Glacial deposits that take the shape of hills or mounds are called "kames". Some kames form when meltwater deposits sediments through openings in the interior of the ice. Others are produced by fans or deltas created by meltwater. When the glacial ice occupies a valley, it can form terraces or kames along the sides of the valley. Long, sinuous glacial deposits are called "eskers". Eskers are composed of sand and gravel that was deposited by meltwater streams that flowed through ice tunnels within or beneath a glacier. They remain after the ice melts, with heights exceeding 100 meters and lengths as long as 100 km.
Loess deposits.
Very fine glacial sediments or rock flour is often picked up by wind blowing over the bare surface and may be deposited great distances from the original fluvial deposition site. These eolian loess deposits may be very deep, even hundreds of meters, as in areas of China and the Midwestern United States. Katabatic winds can be important in this process.
Retreat of glaciers due to climate change.
Glaciers, which can be hundreds of thousands of years old, are used to track climate change over long periods of time. Researchers melt or crush samples from glacier ice cores, whose progressively deeper layers represent progressively earlier times in Earth's climate history. The researchers apply various instruments to the content of bubbles trapped in the cores' layers in order to track changes in the atmosphere's composition. Temperatures are deduced from differing relative concentrations of respective gases, confirming that for at least the last million years, global temperatures have been linked to carbon dioxide concentrations.
Human activities in the industrial era have increased the concentration of carbon dioxide and other heat-trapping greenhouse gases in the air, causing current global warming. Human influence is the principal driver of changes to the cryosphere of which glaciers are a part.
Global warming creates positive feedback loops with glaciers. For example, in ice–albedo feedback, rising temperatures increase glacier melt, exposing more of Earth's land and sea surface (which is darker than glacier ice), allowing sunlight to warm the surface rather than being reflected back into space. Reference glaciers tracked by the World Glacier Monitoring Service have lost ice every year since 1988. A study that investigated the period 1995 to 2022 showed that the flow velocities of glaciers in the Alps accelerate and slow down in step with one another, despite the large distances between them. This clearly shows that their speed is controlled by climate change.
Water runoff from melting glaciers causes global sea level to rise, a phenomenon the IPCC terms a "slow onset" event. Impacts at least partially attributable to sea level rise include for example encroachment on coastal settlements and infrastructure, existential threats to small islands and low-lying coasts, losses of coastal ecosystems and ecosystem services, groundwater salinization, and compounding damage from tropical cyclones, flooding, storm surges, and land subsidence.
Isostatic rebound.
Large masses, such as ice sheets or glaciers, can depress the crust of the Earth into the mantle. The depression usually totals a third of the ice sheet or glacier's thickness. After the ice sheet or glacier melts, the mantle begins to flow back to its original position, pushing the crust back up. This post-glacial rebound, which proceeds very slowly after the melting of the ice sheet or glacier, is currently occurring in measurable amounts in Scandinavia and the Great Lakes region of North America.
A geomorphological feature created by the same process on a smaller scale is known as "dilation-faulting". It occurs where previously compressed rock is allowed to return to its original shape more rapidly than can be maintained without faulting. This leads to an effect similar to what would be seen if the rock were hit by a large hammer. Dilation faulting can be observed in recently de-glaciated parts of Iceland and Cumbria.
On other planets.
The polar ice caps of Mars show geologic evidence of glacial deposits. The south polar cap is especially comparable to glaciers on Earth. Topographical features and computer models indicate the existence of more glaciers in Mars' past. At mid-latitudes, between 35° and 65° north or south, Martian glaciers are affected by the thin Martian atmosphere. Because of the low atmospheric pressure, ablation near the surface is solely caused by sublimation, not melting. As on Earth, many glaciers are covered with a layer of rocks which insulates the ice. A radar instrument on board the Mars Reconnaissance Orbiter found ice under a thin layer of rocks in formations called lobate debris aprons (LDAs).
In 2015, as "New Horizons" flew by the Pluto-Charon system, the spacecraft discovered a massive basin covered in a layer of nitrogen ice on Pluto. A large portion of the basin's surface is divided into irregular polygonal features separated by narrow troughs, interpreted as convection cells fuelled by internal heat from Pluto's interior. Glacial flows were also observed near Sputnik Planitia's margins, appearing to flow both into and out of the basin.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
General references.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\Sigma = k \\tau^n,\\,\n"
},
{
"math_id": 1,
"text": "\\Sigma\\,"
},
{
"math_id": 2,
"text": "\\tau\\,"
},
{
"math_id": 3,
"text": "n\\,"
},
{
"math_id": 4,
"text": "k\\,"
}
] | https://en.wikipedia.org/wiki?curid=12463 |
1246675 | Dead time | Time after an event when a detector can't record another event
For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event.
An everyday life example of this is what happens when someone takes a photo using a flash - another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead times can have other effects, such as creating possible exploits in quantum cryptography.
Overview.
The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the ion drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the data acquisition (the conversion time of the analog-to-digital converters and the readout and storage times).
The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so-called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution.
The analog electronics can also introduce dead time; in particular a shaping spectroscopy amplifier needs to integrate a fast rise, slow fall signal over the longest possible time (usually from 0.5 up to 10 microseconds) to attain the best possible resolution, such that the user needs to choose a compromise between event rate and resolution.
Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account.
Finally, digitisation, readout and storage of the event, especially in detection systems with large number of channels like those used in modern High Energy Physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.
From the total time a detection system is running, the dead time must be subtracted to obtain the live time.
Paralyzable and non-paralyzable behaviour.
A detector, or detection system, can be characterized by a "paralyzable" or "non-paralyzable" behaviour.
In a non-paralyzable detector, an event happening during the dead time is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time.
In a paralyzable detector, an event happening during the dead time will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all.
A semi-paralyzable detector exhibits an intermediate behaviour, in which the event arriving during dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.
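The difference between the two behaviours can be made concrete with a short C sketch. It uses the standard textbook relations for the measured rate m at true rate f, namely m = f/(1 + fτ) for a non-paralyzable detector and m = f*exp(-fτ) for a paralyzable one (these closed forms are quoted here as common results, not derived in this article), together with an assumed dead time of 1 μs:
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double tau = 1.0e-6;   /* assumed dead time of 1 microsecond */
    for (double f = 1.0e4; f <= 1.0e8; f *= 10.0) {      /* true event rates, events per second */
        double m_nonpar = f / (1.0 + f * tau);  /* non-paralyzable: saturates at 1/tau */
        double m_par    = f * exp(-f * tau);    /* paralyzable: peaks and then falls towards zero */
        printf("f = %.1e   non-paralyzable m = %.3e   paralyzable m = %.3e\n",
               f, m_nonpar, m_par);
    }
    return 0;
}
Running it shows the non-paralyzable rate levelling off at 1/τ = 10^6 counts per second while the paralyzable rate collapses at high true rates, as described above.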
Analysis.
It will be assumed that the events are occurring randomly with an average frequency of "f". That is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval "dt" is then "f dt". It follows that the probability "P(t)" that an event will occur at time "t" to "t+dt" with no events occurring between "t=0" and time "t" is given by the exponential distribution (Lucke 1974, Meeks 2008):
formula_0
The expected time between events is then
formula_1
Non-paralyzable analysis.
For the non-paralyzable case, with a dead time of formula_2, the probability of measuring an event between formula_3 and formula_4 is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time "t" with no intervening measurements is then given by an exponential distribution shifted by formula_2:
formula_5 for formula_6
formula_7 for formula_8
The expected time between measurements is then
formula_9
In other words, if formula_10 counts are recorded during a particular time interval formula_11 and the dead time is known, the actual number of events ("N") may be estimated by
formula_12
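A minimal C sketch of this correction, with placeholder numbers for the measured counts, dead time and measuring time, applies the formula directly:
#include <stdio.h>

/* non-paralyzable dead-time correction: N is approximately Nm / (1 - Nm * tau / T) */
double corrected_counts(double n_measured, double tau, double total_time)
{
    return n_measured / (1.0 - n_measured * tau / total_time);
}

int main(void)
{
    double n_measured = 90000.0;  /* assumed measured counts */
    double tau        = 1.0e-6;   /* assumed dead time, s */
    double total_time = 1.0;      /* assumed measuring time, s */
    printf("estimated true counts: %.0f\n",
           corrected_counts(n_measured, tau, total_time));
    return 0;
}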
If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if formula_13 are a set of intervals between measurements, then the formula_13 will have a shifted exponential distribution, but if a fixed value "D" is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as "D" is greater than the dead time formula_2. For an exponential distribution, the following relationship holds:
formula_14
where "n" is any integer. If the above function is estimated for many measured intervals with various values of "D" subtracted (and for various values of "n") it should be found that for values of "D" above a certain threshold, the above equation will be nearly true, and the count rate derived from these modified intervals will be equal to the true count rate.
Time-To-Count.
With a modern microprocessor based ratemeter one technique for measuring field strength with detectors (e.g., Geiger–Müller tubes) with a recovery time is Time-To-Count. In this technique, the detector is armed at the same time a counter is started. When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), then the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems used in nuclear power generating stations.
Further reading.
Morris, S.L. and Naftilan, S.A., "Determining Photometric Dead Time by Using Hydrogen Filters", Astron. Astrophys. Suppl. Ser. 107, 71-75, Oct. 1994 | [
{
"math_id": 0,
"text": "P(t)dt=fe^{-ft}dt\\,"
},
{
"math_id": 1,
"text": "\\langle t \\rangle = \\int_0^\\infty tP(t)dt = 1/f"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "t=0"
},
{
"math_id": 4,
"text": "t=\\tau"
},
{
"math_id": 5,
"text": "P_m(t)dt=0\\,"
},
{
"math_id": 6,
"text": "t\\le\\tau\\,"
},
{
"math_id": 7,
"text": "P_m(t)dt=\\frac{fe^{-ft}dt}{\\int_\\tau^\\infty fe^{-ft}dt} = fe^{-f(t-\\tau)}dt"
},
{
"math_id": 8,
"text": "t>\\tau\\,"
},
{
"math_id": 9,
"text": "\\langle t_m \\rangle = \\int_\\tau^\\infty tP_m(t)dt = \\langle t \\rangle+\\tau"
},
{
"math_id": 10,
"text": "N_m"
},
{
"math_id": 11,
"text": "T"
},
{
"math_id": 12,
"text": "N \\approx \\frac{N_m}{1 - N_m \\frac{\\tau}{T}}"
},
{
"math_id": 13,
"text": "t_i"
},
{
"math_id": 14,
"text": "\\frac{\\langle t^n \\rangle}{\\langle t \\rangle^n} = n!"
}
] | https://en.wikipedia.org/wiki?curid=1246675 |
12469994 | Weisz–Prater criterion | Effects of pore diffusion on Rate of heterogeneous chemical reaction
The Weisz–Prater criterion is a method used to estimate the influence of pore diffusion on reaction rates in heterogeneous catalytic reactions. If the criterion is satisfied, pore diffusion limitations are negligible. The criterion is
formula_0
Where formula_1 is the reaction rate per volume of catalyst, formula_2 is the catalyst particle radius, formula_3 is the reactant concentration at the particle surface, and formula_4 is the effective diffusivity. Diffusion is usually in the Knudsen regime when average pore radius is less than 100 nm.
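As an illustration, the criterion is straightforward to evaluate from measured quantities. The following C sketch uses made-up values for the observed rate, particle radius, surface concentration and effective diffusivity (all four numbers are assumptions, not data for any particular catalyst) and compares the resulting number with the commonly used bound of 0.3 discussed below:
#include <stdio.h>

int main(void)
{
    double rate   = 2.0e-4;   /* assumed observed reaction rate per catalyst volume, mol/(m^3 s) */
    double radius = 1.0e-3;   /* assumed catalyst particle radius, m */
    double conc   = 5.0;      /* assumed reactant concentration at the particle surface, mol/m^3 */
    double d_eff  = 1.0e-8;   /* assumed effective diffusivity, m^2/s */

    /* Weisz-Prater number: rate * Rp^2 / (Cs * Deff) */
    double n_wp = rate * radius * radius / (conc * d_eff);

    printf("N_WP = %.4f\n", n_wp);
    if (n_wp <= 0.3)
        printf("pore diffusion limitations can be neglected\n");
    else
        printf("pore diffusion may be significant\n");
    return 0;
}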
For a given effectiveness factor, formula_5, and reaction order, n, the quantity formula_6 is defined by the equation:
formula_7
For small values of beta, this can be approximated using the binomial theorem:
formula_8
Assuming formula_9 with a reaction order formula_10 gives a value of formula_6 equal to 0.1. Therefore, for many conditions, if formula_11 then pore diffusion limitations can be excluded. | [
{
"math_id": 0,
"text": "N_{W-P} = \\dfrac{\\mathfrak{R} R^2_p}{C_s D_{eff}} \\le 3\\beta"
},
{
"math_id": 1,
"text": "\\mathfrak{R}"
},
{
"math_id": 2,
"text": "R_p"
},
{
"math_id": 3,
"text": "C_s"
},
{
"math_id": 4,
"text": "D_{eff}"
},
{
"math_id": 5,
"text": "\\eta"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "\\eta = \\dfrac{3}{R^3_p} \\int_{0}^{R_p} [1-\\beta (1-r/R_p)^n] r^2\\ dr"
},
{
"math_id": 8,
"text": "\\eta = 1-\\dfrac{n \\beta}{4}"
},
{
"math_id": 9,
"text": "\\eta = 0.95"
},
{
"math_id": 10,
"text": "n = 2"
},
{
"math_id": 11,
"text": "N_{W-P} \\le 0.3"
}
] | https://en.wikipedia.org/wiki?curid=12469994 |
12471936 | Hadamard manifold | In mathematics, a Hadamard manifold, named after Jacques Hadamard — more often called a Cartan–Hadamard manifold, after Élie Cartan — is a Riemannian manifold formula_0 that is complete and simply connected and has everywhere non-positive sectional curvature. By Cartan–Hadamard theorem all Cartan–Hadamard manifolds are diffeomorphic to the Euclidean space formula_1 Furthermore it follows from the Hopf–Rinow theorem that every pairs of points in a Cartan–Hadamard manifold may be connected by a unique geodesic segment. Thus Cartan–Hadamard manifolds are some of the closest relatives of formula_1
Examples.
The Euclidean space formula_2 with its usual metric is a Cartan–Hadamard manifold with constant sectional curvature equal to formula_3
Standard formula_4-dimensional hyperbolic space formula_5 is a Cartan–Hadamard manifold with constant sectional curvature equal to formula_6
Properties.
In Cartan–Hadamard manifolds, the map formula_7 is a diffeomorphism for all formula_8
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M, g)"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n."
},
{
"math_id": 2,
"text": "\\mathbb{R}^n"
},
{
"math_id": 3,
"text": "0."
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\mathbb{H}^n"
},
{
"math_id": 6,
"text": "-1."
},
{
"math_id": 7,
"text": "\\exp_p : \\operatorname{T}M_p \\to M"
},
{
"math_id": 8,
"text": "p \\in M."
}
] | https://en.wikipedia.org/wiki?curid=12471936 |
1247265 | Volume integral | Integral over a 3-D domain
In mathematics (particularly multivariable calculus), a volume integral (∭) is an integral over a 3-dimensional domain; that is, it is a special case of multiple integrals. Volume integrals are especially important in physics for many applications, for example, to calculate flux densities, or to calculate mass from a corresponding density function.
In coordinates.
It can also mean a triple integral within a region formula_0 of a function formula_1 and is usually written as:
formula_2
A volume integral in cylindrical coordinates is
formula_3
and a volume integral in spherical coordinates (using the ISO convention for angles with formula_4 as the azimuth and formula_5 measured from the polar axis (see more on conventions)) has the form
formula_6
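As a numerical illustration of the spherical-coordinate form, the short C sketch below approximates the volume of the unit ball by summing the Jacobian r² sin θ over a uniform grid (the resolution is an arbitrary choice); since the integrand is 1 and independent of the azimuth, the φ integral contributes only a factor of 2π, and the result should approach 4π/3 ≈ 4.18879:
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const int n = 1000;                  /* arbitrary grid resolution */
    const double dr = 1.0 / n;           /* radius runs from 0 to 1 */
    const double dtheta = pi / n;        /* polar angle runs from 0 to pi */

    /* f(r, theta, phi) = 1, so only the Jacobian r^2 sin(theta) is summed;
       the integrand does not depend on phi, so that integral is just a factor of 2*pi */
    double volume = 0.0;
    for (int i = 0; i < n; i++) {
        double r = (i + 0.5) * dr;       /* midpoint rule in each variable */
        for (int j = 0; j < n; j++) {
            double theta = (j + 0.5) * dtheta;
            volume += r * r * sin(theta) * dr * dtheta;
        }
    }
    volume *= 2.0 * pi;

    printf("approximate volume: %.6f   (exact 4*pi/3 = %.6f)\n", volume, 4.0 * pi / 3.0);
    return 0;
}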
Example.
Integrating the equation formula_7 over a unit cube yields the following result:
formula_8
So the volume of the unit cube is 1 as expected. This is rather trivial, however, and a volume integral is far more powerful. For instance, if we have a scalar density function on the unit cube, then the volume integral will give the total mass of the cube. For example, for the density function:
formula_9
the total mass of the cube is:
formula_10 | [
{
"math_id": 0,
"text": "D \\subset \\R^3"
},
{
"math_id": 1,
"text": "f(x,y,z),"
},
{
"math_id": 2,
"text": "\\iiint_D f(x,y,z)\\,dx\\,dy\\,dz."
},
{
"math_id": 3,
"text": "\\iiint_D f(\\rho,\\varphi,z) \\rho \\,d\\rho \\,d\\varphi \\,dz,"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "\\iiint_D f(r,\\theta,\\varphi) r^2 \\sin\\theta \\,dr \\,d\\theta\\, d\\varphi ."
},
{
"math_id": 7,
"text": " f(x,y,z) = 1 "
},
{
"math_id": 8,
"text": "\\int_0^1 \\int_0^1 \\int_0^1 1 \\,dx \\,dy \\,dz = \\int_0^1 \\int_0^1 (1 - 0) \\,dy \\,dz = \\int_0^1 \\left(1 - 0\\right) dz = 1 - 0 = 1"
},
{
"math_id": 9,
"text": " \\begin{cases}\nf: \\R^3 \\to \\R \\\\\nf: (x,y,z) \\mapsto x+y+z\n\\end{cases}"
},
{
"math_id": 10,
"text": "\\int_0^1 \\int_0^1 \\int_0^1 (x+y+z) \\,dx \\,dy \\,dz = \\int_0^1 \\int_0^1 \\left(\\frac 1 2 + y + z\\right) dy \\,dz = \\int_0^1 (1 + z) \\, dz = \\frac 3 2"
}
] | https://en.wikipedia.org/wiki?curid=1247265 |
12473239 | Stochastic partial differential equation | Partial differential equations with random force terms and coefficients
Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations.
They have relevance to quantum field theory, statistical mechanics, and spatial modeling.
Examples.
One of the most studied SPDEs is the stochastic heat equation, which may formally be written as
formula_0
where formula_1 is the Laplacian and formula_2 denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation and the Schrödinger equation.
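A crude numerical sketch of the one-dimensional stochastic heat equation in C is given below (an explicit finite-difference scheme on the unit interval with zero boundary values; the grid, time step, Box–Muller generator and use of rand() are all simplifying assumptions made only for illustration). At each step the discrete Laplacian is applied and an independent Gaussian increment scaled by sqrt(dt/dx) is added to each cell, which is a standard way of discretizing space-time white noise:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* standard normal sample via the Box-Muller transform (illustrative only) */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
}

int main(void)
{
    enum { N = 100 };                  /* number of spatial grid cells on [0, 1] */
    const double dx = 1.0 / N;
    const double dt = 0.4 * dx * dx;   /* explicit scheme needs dt <= dx*dx/2 for stability */
    double u[N + 1] = { 0.0 };         /* zero initial data; u[0] = u[N] = 0 (Dirichlet) */
    double unew[N + 1] = { 0.0 };

    for (int step = 0; step < 10000; step++) {
        for (int i = 1; i < N; i++) {
            double lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dx * dx);
            /* discretized space-time white noise: sqrt(dt/dx) * N(0,1) per cell and step */
            unew[i] = u[i] + dt * lap + sqrt(dt / dx) * gauss();
        }
        for (int i = 1; i < N; i++)
            u[i] = unew[i];
    }
    for (int i = 0; i <= N; i += 10)
        printf("u(%.1f) = % .4f\n", i * dx, u[i]);
    return 0;
}
The rough spatial profile printed at the end reflects the limited regularity discussed in the next section: refining the grid does not make the solution smoother.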
Discussion.
One difficulty is their lack of regularity. In one dimensional space, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions.
For linear equations, one can usually find a mild solution via semigroup techniques.
However, problems start to appear when considering non-linear equations. For example
formula_3
where formula_4 is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory. This leads to the need of some form of renormalization.
An early attempt to circumvent such problems for some specific equations was the so called "da Prato–Debussche trick" which involved studying such non-linear equations as perturbations of linear ones. However, this can only be used in very restrictive settings, as it depends on both the non-linear factor and on the regularity of the driving noise term. In recent years, the field has drastically expanded, and now there exists a large machinery to guarantee local existence for a variety of "sub-critical" SPDEs.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\partial_t u = \\Delta u + \\xi\\;,\n"
},
{
"math_id": 1,
"text": "\\Delta"
},
{
"math_id": 2,
"text": "\\xi"
},
{
"math_id": 3,
"text": "\n\\partial_t u = \\Delta u + P(u) + \\xi,\n"
},
{
"math_id": 4,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=12473239 |
1247989 | Tunnel ionization | In physics, tunnel ionization is a process in which electrons in an atom (or a molecule) tunnel through the potential barrier and escape from the atom (or molecule). In an intense electric field, the potential barrier of an atom (molecule) is distorted drastically. Therefore, as the length of the barrier that electrons have to pass decreases, the electrons can escape from the atom's potential more easily. Tunneling ionization is a quantum mechanical phenomenon since in the classical picture an electron does not have sufficient energy to overcome the potential barrier of the atom.
When the atom is in a DC external field, the Coulomb potential barrier is lowered and the electron has an increased, non-zero probability of tunnelling through the potential barrier. In the case of an alternating electric field, the direction of the electric field reverses after the half period of the field. The ionized electron may come back to its parent ion. The electron may recombine with the nucleus (nuclei) and its kinetic energy is released as light (high harmonic generation). If the recombination does not occur, further ionization may proceed by collision between high-energy electrons and a parent atom (molecule). This process is known as non-sequential ionization.
DC tunneling ionization.
Tunneling ionization from the ground state of a hydrogen atom in an electrostatic (DC) field was solved schematically by Lev Landau, using parabolic coordinates. This provides a simplified physical system that gives the proper exponential dependence of the ionization rate on the applied external field. When the applied field is much weaker than the characteristic atomic field formula_6, the ionization rate for this system is given by:
formula_0
Landau expressed this in atomic units, where ℏ = m_e = e = 1. In SI units the previous parameters can be expressed as:
formula_1,
formula_2.
The ionization rate is the total probability current through the outer classical turning point. This rate is found using the WKB approximation to match the ground state hydrogen wavefunction through the suppressed coulomb potential barrier.
A more physically meaningful form for the ionization rate above can be obtained by noting that the Bohr radius and hydrogen atom ionization energy are given by
formula_3,
formula_4,
where formula_5 is the Rydberg energy. Then, the parameters formula_6 and formula_7 can be written as
formula_8, formula_9.
so that the total ionization rate can be rewritten
formula_10.
This form for the ionization rate formula_11 emphasizes that the characteristic electric field needed for ionization formula_12 is proportional to the ratio of the ionization energy formula_13 to the characteristic size of the electron's orbital, a_0. Thus, atoms with low ionization energy (such as alkali metals) with electrons occupying orbitals with high principal quantum number formula_14 (i.e. far down the periodic table) ionize most easily under a DC field. Furthermore, for a hydrogenic atom, the scaling of this characteristic ionization field goes as the cube of the nuclear charge formula_15. This scaling arises because the ionization energy scales as formula_16 and the orbital radius as the inverse of formula_15. More accurate and general formulas for the tunneling from hydrogen orbitals can also be obtained.
As an empirical point of reference, the characteristic electric field formula_6 for the ordinary hydrogen atom is about 5.1×10^11 V/m (or 51 V/Å) and the characteristic frequency formula_17 is about 4.1×10^16 s^-1.
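The DC rate formula above is easy to evaluate numerically. The C sketch below computes formula_6 and formula_17 from the SI expressions given earlier and then the rate for an assumed applied field of 1×10^10 V/m (an arbitrary illustrative choice):
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* fundamental constants in SI units */
    const double me   = 9.10938e-31;    /* electron mass, kg */
    const double q    = 1.60218e-19;    /* elementary charge, C */
    const double hbar = 1.05457e-34;    /* reduced Planck constant, J s */
    const double k    = 8.98755e9;      /* Coulomb constant 1/(4 pi eps0), N m^2 / C^2 */

    /* characteristic atomic field and frequency from the SI expressions in the text */
    double Ea = me * me * pow(q, 5.0) * pow(k, 3.0) / pow(hbar, 4.0);
    double wa = me * pow(q, 4.0) * k * k / pow(hbar, 3.0);

    double E = 1.0e10;                  /* assumed applied DC field in V/m (illustrative only) */
    double w = 4.0 * wa * (Ea / E) * exp(-(2.0 / 3.0) * (Ea / E));

    printf("E_a = %.3e V/m, omega_a = %.3e 1/s\n", Ea, wa);
    printf("ionization rate w = %.3e 1/s for E = %.1e V/m\n", w, E);
    return 0;
}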
AC electric field.
The ionization rate of a hydrogen atom in an alternating electric field, like that of a laser, can be treated, in the appropriate limit, as the DC ionization rate averaged over a single period of the electric field's oscillation. Multiphoton and tunnel ionization of an atom or a molecule describes the same process by which a bound electron, through the absorption of more than one photon from the laser field, is ionized. The difference between them is a matter of definition under different conditions. They can henceforth be called multiphoton ionization (MPI) whenever the distinction is not necessary. The dynamics of the MPI can be described by finding the time evolution of the state of the atom which is described by the Schrödinger equation.
When the intensity of the laser is strong, the "lowest-order perturbation theory" is not sufficient to describe the MPI process. In this case, the laser field on larger distances from the nucleus is more important than the Coulomb potential and the dynamic of the electron in the field should be properly taken into account. The first work in this category was published by Leonid Keldysh. He modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states (the state of a free electron in the electromagnetic field). In this model, the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As is observed from the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. A. M. Perelomov, V. S. Popov and M. V. Terent'ev included the Coulomb interaction at larger internuclear distances. Their model (which is called the PPT model after their initials) was derived for short-range potential and includes the effect of the long-range Coulomb interaction through the first-order correction in the quasi-classical action. In the quasi-static limit, the PPT model approaches the ADK model by M. V. Ammosov, N. B. Delone, and V. P. Krainov.
Many experiments have been carried out on the MPI of rare gas atoms using strong laser pulses, through measuring both the total ion yield and the kinetic energy of the electrons. Here, one only considers the experiments designed to measure the total ion yield. Among these experiments are those by S. L. Chin et al., S. Augst et al. and T. Auguste et al. Chin et al. used a 10.6 μm CO2 laser in their experiment. Due to the very low frequency of the laser, the tunneling is strictly quasi-static, a characteristic that is not easily attainable using pulses in the near infrared or visible region of frequencies. These findings weakened the suspicion on the applicability of models basically founded on the assumption of a structureless atom. S. Larochelle et al. have compared the theoretically predicted ion versus intensity curves of rare gas atoms interacting with intense laser pulses against experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter.
Analytical formula for the rate of MPI.
The dynamics of the MPI can be described by finding the time evolution of the state of the atom which is described by the Schrödinger equation. The form of this equation in the electric field gauge, assuming the single active electron (SAE) approximation and using dipole approximation, is the following
formula_18
where formula_19 is the electric field of the laser and formula_20 is the static Coulomb potential of the atomic core at the position of the active electron. By finding the exact solution of the above equation for a potential formula_21 (with E_i the magnitude of the ionization potential of the atom), the probability current formula_22 is calculated. Then, the total MPI rate from a short-range potential for linear polarization is found from
formula_23
where formula_24 is the frequency of the laser, which is assumed to be polarized in the direction of the formula_25 axis. The effect of the ionic potential, which behaves like formula_26 (formula_15 is the charge of the atomic or ionic core) at a long distance from the nucleus, is calculated through a first-order correction to the semi-classical action. The result is that the effect of the ionic potential is to increase the rate of MPI by a factor of
formula_27
where formula_28 and formula_29 is the peak electric field of the laser. Thus, the total rate of MPI from a state with quantum numbers formula_30 and formula_31 in a laser field for linear polarization is calculated to be
formula_32
where formula_33 is the Keldysh adiabaticity parameter. The coefficients formula_34, formula_35 and formula_36 are given by
formula_37
formula_38
formula_39
The coefficient formula_40 is given by
formula_41,
where
formula_42
formula_43
formula_44
The ADK model is the limit of the PPT model when formula_45 approaches zero (quasi-static limit). In this case, which is known as quasi-static tunnelling (QST), the ionization rate is given by
formula_46.
In practice, the limit for the QST regime is γ below about 1/2. This is justified by the following consideration. Referring to the figure, the ease or difficulty of tunneling can be expressed as the ratio between the equivalent classical time it takes for the electron to tunnel out of the potential barrier and the time during which the potential is bent down. This ratio is indeed γ, since the potential is bent down during half a cycle of the field oscillation and the ratio can be expressed as
formula_47,
where formula_48 is the tunneling time (classical time of flight of an electron through a potential barrier, and formula_49 is the period of laser field oscillation.
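The adiabaticity parameter itself is easy to evaluate. The C sketch below does so in SI units, where the Keldysh parameter takes the form γ = ω·sqrt(2·m·E_i)/(e·F) with F the peak field obtained from the intensity; the 800 nm wavelength, the intensity of 1×10^14 W/cm² and the 15.8 eV ionization potential are all illustrative assumptions:
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c    = 2.99792458e8;   /* speed of light, m/s */
    const double eps0 = 8.85419e-12;    /* vacuum permittivity, F/m */
    const double me   = 9.10938e-31;    /* electron mass, kg */
    const double q    = 1.60218e-19;    /* elementary charge, C */
    const double pi   = 3.14159265358979;

    double lambda    = 800e-9;          /* assumed laser wavelength, m */
    double intensity = 1.0e14 * 1.0e4;  /* assumed intensity: 1e14 W/cm^2, converted to W/m^2 */
    double Ei        = 15.8 * q;        /* assumed ionization potential, J */

    double omega = 2.0 * pi * c / lambda;              /* laser angular frequency, rad/s */
    double field = sqrt(2.0 * intensity / (c * eps0)); /* peak electric field, V/m */
    double gamma = omega * sqrt(2.0 * me * Ei) / (q * field);

    printf("peak field = %.3e V/m, Keldysh parameter gamma = %.2f\n", field, gamma);
    return 0;
}
For these assumed values the result is close to 1, i.e. in the intermediate regime between quasi-static tunnelling and multiphoton ionization mentioned above.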
MPI of molecules.
Contrary to the abundance of theoretical and experimental work on the MPI of rare gas atoms, the amount of research on the prediction of the rate of MPI of neutral molecules was scarce until recently. Walsh et al. have measured the MPI rate of some diatomic molecules interacting with a CO2 laser. They found that these molecules are tunnel-ionized as if they were structureless atoms with an ionization potential equivalent to that of the molecular ground state. A. Talebpour et al. were able to quantitatively fit the ionization yield of diatomic molecules interacting with a Ti:sapphire laser pulse. The conclusion of the work was that the MPI rate of a diatomic molecule can be predicted from the PPT model by assuming that the electron tunnels through a barrier given by formula_50 instead of the barrier formula_51 which is used in the calculation of the MPI rate of atoms. The importance of this finding is in its practicality; the only parameter needed for predicting the MPI rate of a diatomic molecule is the single parameter Z_eff. Using the semi-empirical model for the MPI rate of unsaturated hydrocarbons is feasible. This simplistic view ignores the ionization dependence on the orientation of the molecular axis with respect to the polarization of the electric field of the laser, which is determined by the symmetries of the molecular orbitals. This dependence can be used to follow molecular dynamics using strong field multiphoton ionization.
Tunneling time.
The question of how long a tunneling particle spends inside the barrier region has remained unresolved since the early days of quantum mechanics. It is sometimes suggested that the tunneling time is instantaneous because both the Keldysh and the closely related Buttiker-Landauer times are imaginary (corresponding to the decay of the wavefunction under the barrier). In a recent publication the main competing theories of tunneling time are compared against experimental measurements using the attoclock in strong laser field ionization of helium atoms. Refined attoclock measurements reveal a real and not instantaneous tunneling delay time over a large intensity regime. It is found that the experimental results are compatible with the probability distribution of tunneling times constructed using a Feynman path integral (FPI) formulation. However, later work in atomic hydrogen has demonstrated that most of the tunneling time measured in the experiment is purely from the long-range Coulomb force exerted by the ion core on the outgoing electron.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " w = 4 \\omega_a \\frac{E_a}{\\left|E\\right|} \\exp\\left[ -\\frac{2}{3}\\frac{E_a}{\\left|E\\right|} \\right]"
},
{
"math_id": 1,
"text": "E_a = \\frac{m_\\text{e}^2 e^5}{(4\\pi \\epsilon_0)^3 \\hbar^4} "
},
{
"math_id": 2,
"text": "\\omega_a = \\frac{m_\\text{e} e^4}{(4\\pi \\epsilon_0)^2 \\hbar^3}"
},
{
"math_id": 3,
"text": "a_0 = \\frac{4\\pi \\epsilon_0 \\hbar^2}{m_\\text{e} e^2} "
},
{
"math_id": 4,
"text": "E_\\text{ion}=R_\\text{H} = \\frac{m_\\text{e} e^4}{8 \\epsilon_0^2 h^2} "
},
{
"math_id": 5,
"text": "R_\\text{H} \\approx \\mathrm{13.6\\, eV} "
},
{
"math_id": 6,
"text": " E_a "
},
{
"math_id": 7,
"text": "\\omega_a "
},
{
"math_id": 8,
"text": "E_a = \\frac{2 R_\\text{H}}{e a_0} "
},
{
"math_id": 9,
"text": "\\omega_a = \\frac{2 R_\\text{H}}{\\hbar}"
},
{
"math_id": 10,
"text": " w = 8 \\frac{R_\\text{H}}{\\hbar} \\frac{2 R_\\text{H}/a_0}{\\left|e E\\right|} \\exp\\left[ -\\frac{4}{3}\\frac{R_\\text{H}/a_0}{\\left|eE\\right|} \\right]"
},
{
"math_id": 11,
"text": " w "
},
{
"math_id": 12,
"text": "E_a = {2 E_\\text{ion}} / {e a_0} "
},
{
"math_id": 13,
"text": "E_\\text{ion} "
},
{
"math_id": 14,
"text": " n "
},
{
"math_id": 15,
"text": " Z "
},
{
"math_id": 16,
"text": " \\propto Z^2 "
},
{
"math_id": 17,
"text": " \\omega_a "
},
{
"math_id": 18,
"text": "i\\frac{\\partial}{\\partial t}\\Psi(\\mathbf{r},\\,t)=-\\frac{1}{2m}\\nabla^2\\Psi(\\mathbf{r},\\,t) + (\\mathbf{E}(t)\\cdot\\mathbf{r}+V(\\mathbf{r}))\\Psi(\\mathbf{r},\\,t) ,"
},
{
"math_id": 19,
"text": " \\mathbf{E}(t) "
},
{
"math_id": 20,
"text": " V(r) "
},
{
"math_id": 21,
"text": " \\sqrt{2 E_\\text{i}}.\\delta(\\mathbf{r}) "
},
{
"math_id": 22,
"text": " \\mathbf{J}(\\mathbf{r}, t) "
},
{
"math_id": 23,
"text": " W(\\mathbf{E}, \\omega)=\\lim_{x\\to\\infty}\\int_0^\\frac{2\\pi}{\\omega} \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty \\mathbf{J}(\\mathbf{r}, t)\\,dz\\,dy\\,dt "
},
{
"math_id": 24,
"text": " \\omega "
},
{
"math_id": 25,
"text": " x "
},
{
"math_id": 26,
"text": " {Z} / {r} "
},
{
"math_id": 27,
"text": " I_\\text{PPT}=(2(E_\\text{i})^{\\frac{3}{2}}/F)^{n^{*}} "
},
{
"math_id": 28,
"text": " n^{*}=Z/\\sqrt{2 E_\\text{i} } "
},
{
"math_id": 29,
"text": " F "
},
{
"math_id": 30,
"text": " l "
},
{
"math_id": 31,
"text": " m "
},
{
"math_id": 32,
"text": " W_\\text{PPT}=I_\\text{PPT}W(\\mathbf{E}, \\omega)=|C_{n^{*}l^{*}}|^{2}\\sqrt{\\frac{6}{\\pi}}f_{lm}E_{i}(2(2 E_\\text{i})^{\\frac{3}{2}}/F)^{2n^{*}-|m|-3/2}(1+\\gamma^{2})^{|m/2|+3/4}A_{m}(\\omega, \\gamma)e^{-\\frac{2}{3}g(\\gamma)(2 E_\\text{i})^{\\frac{3}{2}}/F} "
},
{
"math_id": 33,
"text": " \\gamma= \\frac{\\omega \\sqrt {2 E_\\text{i}}}{F} "
},
{
"math_id": 34,
"text": " f_{lm} "
},
{
"math_id": 35,
"text": " g(\\gamma) "
},
{
"math_id": 36,
"text": " C_{n^{*}l^{*}} "
},
{
"math_id": 37,
"text": " f_{lm}= \\frac{(2l+1)(l+|m|){!}}{2^{|m|}|m|{!}(l-|m|){!}} "
},
{
"math_id": 38,
"text": " g(\\gamma)=\\frac{3}{2\\gamma} ((1+\\frac{1}{2\\gamma^{2}})\\sinh^{-1}(\\gamma)-\\frac{\\sqrt{1+\\gamma^{2}}}{2\\gamma})"
},
{
"math_id": 39,
"text": "|C_{n^{*}l^{*}}|^{2}= \\frac{2^{2n^{*}}}{n^{*}\\Gamma(n^{*}+l^{*}+1)\\Gamma(n^{*}-l^{*})}"
},
{
"math_id": 40,
"text": " A_{m}(\\omega, \\gamma)"
},
{
"math_id": 41,
"text": " A_{m}(\\omega, \\gamma)=\\frac{4}{\\sqrt{3\\pi}}\\frac{1}{|m|!}\\frac{\\gamma^{2}}{1+\\gamma^{2}}\\sum_{n>v}^{\\infty}e^{-(n-v)\\alpha(\\gamma)}w_{m}\\left(\\sqrt{\\frac{2\\gamma}{\\sqrt{1+\\gamma^{2}}}(n-v)}\\right)"
},
{
"math_id": 42,
"text": " w_{m}(x)=e^{-x^{2}}\\int_0^x (x^2-y^2)^m e^{y^2}\\,dy "
},
{
"math_id": 43,
"text": " \\alpha(\\gamma)= 2(\\sinh^{-1}(\\gamma)-\\frac{\\gamma}{\\sqrt{1+\\gamma^{2}}})"
},
{
"math_id": 44,
"text": " v= \\frac{E_\\text{i}}{\\omega}(1+\\frac{1}{2\\gamma^{2}}) "
},
{
"math_id": 45,
"text": " \\gamma "
},
{
"math_id": 46,
"text": " W_\\text{ADK}=|C_{n^{*}l^{*}}|^{2}\\sqrt{\\frac{6}{\\pi}}f_{lm}E_{i}(2(2 E_\\text{i})^{\\frac{3}{2}}/F)^{2n^{*}-|m|-3/2}e^{-(2(2 E_\\text{i})^{\\frac{3}{2}}/3F)} "
},
{
"math_id": 47,
"text": " \\gamma =\\frac {\\tau_\\text{T}} {\\frac{1}{2}\\tau_\\text{L}}"
},
{
"math_id": 48,
"text": " \\tau_\\text{T} "
},
{
"math_id": 49,
"text": " \\tau_\\text{L} "
},
{
"math_id": 50,
"text": " {Z_\\text{eff}} / {r} "
},
{
"math_id": 51,
"text": " {1} / {r} "
}
] | https://en.wikipedia.org/wiki?curid=1247989 |
12480332 | Van Wijngaarden transformation | In mathematics and numerical analysis, the van Wijngaarden transformation is a variant on the Euler transform used to accelerate the convergence of an alternating series.
One algorithm to compute Euler's transform runs as follows: Compute a row of partial sums formula_0 and form rows of averages between neighbors formula_1 The first column formula_2 then contains the partial sums of the Euler transform.
Adriaan van Wijngaarden's contribution was to point out that it is better not to carry this procedure through to the very end, but to stop two-thirds of the way. If formula_3 are available, then formula_4 is almost always a better approximation to the sum than formula_5. In many cases the diagonal terms do not converge in one cycle so process of averaging is to be repeated with diagonal terms by bringing them in a row. (For example, this will be needed in a geometric series with ratio formula_6.) This process of successive averaging of the average of partial sum can be replaced by using the formula to calculate the diagonal term.
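A minimal C sketch of this scheme, applied to the thirteen partial sums built from a_0 through a_12 of the alternating series 1 - 1/3 + 1/5 - ... (which sums to π/4), constructs the triangle of averages and prints both the two-thirds entry s[8][4] and the fully averaged entry s[12][0]; the choice of thirteen terms simply mirrors the a_0, ..., a_12 mentioned above:
#include <math.h>
#include <stdio.h>

int main(void)
{
    enum { TERMS = 13 };            /* partial sums s[0][0] .. s[0][12], built from a_0 .. a_12 */
    double s[TERMS][TERMS];

    /* row 0: partial sums of the alternating series sum_n (-1)^n / (2n+1) = pi/4 */
    double sum = 0.0;
    for (int k = 0; k < TERMS; k++) {
        sum += (k % 2 == 0 ? 1.0 : -1.0) / (2.0 * k + 1.0);
        s[0][k] = sum;
    }
    /* every later row averages neighbouring entries of the row above */
    for (int j = 1; j < TERMS; j++)
        for (int k = 0; k < TERMS - j; k++)
            s[j][k] = 0.5 * (s[j - 1][k] + s[j - 1][k + 1]);

    printf("4 * s[8][4]  = %.12f   (stop two-thirds of the way)\n", 4.0 * s[8][4]);
    printf("4 * s[12][0] = %.12f   (fully averaged entry)\n", 4.0 * s[12][0]);
    printf("pi           = %.12f\n", 4.0 * atan(1.0));
    return 0;
}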
For a simple-but-concrete example, recall the Leibniz formula for pi. The algorithm described above produces the following table:
These correspond to the following algorithmic outputs:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s_{0,k} = \\sum_{n=0}^k(-1)^n a_n"
},
{
"math_id": 1,
"text": "s_{j+1,k} = \\frac{s_{j,k}+s_{j,k+1}}2"
},
{
"math_id": 2,
"text": "s_{j,0}"
},
{
"math_id": 3,
"text": "a_0,a_1,\\ldots,a_{12}"
},
{
"math_id": 4,
"text": "s_{8,4}"
},
{
"math_id": 5,
"text": "s_{12,0}"
},
{
"math_id": 6,
"text": "-4"
}
] | https://en.wikipedia.org/wiki?curid=12480332 |
12480975 | Implicit k-d tree | An implicit "k"-d tree is a "k"-d tree defined implicitly above a rectilinear grid. Its split planes' positions and orientations are not given explicitly but implicitly by some recursive splitting-function defined on the hyperrectangles belonging to the tree's nodes. Each inner node's split plane is positioned on a grid plane of the underlying grid, partitioning the node's grid into two subgrids.
Nomenclature and references.
The terms "min/max "k"-d tree" and "implicit "k"-d tree" are sometimes mixed up. This is because the first publication using the term "implicit "k"-d tree" did actually use explicit min/max "k"-d trees but referred to them as "implicit "k"-d trees" to indicate that they may be used to ray trace implicitly given iso surfaces. Nevertheless, this publication used also slim "k"-d trees which are a subset of the implicit "k"-d trees with the restriction that they can only be built over integer hyperrectangles with sidelengths that are powers of two. Implicit "k"-d trees as defined here have recently been introduced, with applications in computer graphics. As it is possible to assign attributes to implicit "k"-d tree nodes, one may refer to an implicit "k"-d tree which has min/max values assigned to its nodes as an "implicit min/max "k"-d tree".
Construction.
Implicit "k"-d trees are generally not constructed explicitly. When accessing a node, its split plane orientation and position are evaluated using the specific splitting-function defining the tree. Different splitting-functions may result in different trees for the same underlying grid.
Splitting-functions.
Splitting-functions may be adapted to special purposes. Underneath two specifications of special splitting-function classes.
A complete splitting function is for example the grid median splitting-function. It creates fairly balanced implicit "k"-d trees by using "k"-dimensional integer hyperrectangles "hyprec[2][k]" belonging to each node of the implicit "k"-d tree. The hyperrectangles define which gridcells of the rectilinear grid belong to their corresponding node. If the volume of this hyperrectangle equals one, the corresponding node is a single grid cell and is therefore not further subdivided and marked as leaf node. Otherwise the hyperrectangle's longest extent is chosen as orientation "o". The corresponding split plane "p" is positioned onto the grid plane that is closest to the hyperrectangle's grid median along that orientation.
Split plane orientation "o": the dimension of the hyperrectangle's longest extent
Split plane position "p":
p = roundDown((hyprec[0][o] + hyprec[1][o]) / 2)
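For illustration, the grid-median rule can also be written compactly (a Python sketch rather than the article's pseudocode; the names are arbitrary):

def grid_median_split(hyprec):
    # hyprec = [mins, maxs]; orientation: dimension of the longest extent
    extents = [hi - lo for lo, hi in zip(hyprec[0], hyprec[1])]
    o = extents.index(max(extents))
    # position: grid plane closest to the grid median along that orientation
    p = (hyprec[0][o] + hyprec[1][o]) // 2   # integer division rounds down
    return o, p

print(grid_median_split([[0, 0, 0], [8, 3, 5]]))   # splits dimension 0 at plane 4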
Assigning attributes to implicit "k"-d tree nodes.
An advantage of implicit "k"-d trees is that their split planes' orientations and positions need not be stored explicitly.
But some applications require, besides the split planes' orientations and positions, further attributes at the inner tree nodes. These attributes may be, for example, single bits or single scalar values, defining whether the subgrids belonging to the nodes are of interest or not. For complete implicit "k"-d trees it is possible to pre-allocate a correctly sized array of attributes and to assign each inner node of the tree to a unique element in that allocated array.
The number of grid cells in the grid is equal to the volume of the integer hyperrectangle belonging to the grid. As a complete implicit "k"-d tree has one inner node less than it has grid cells, it is known in advance how many attributes need to be stored. The relation "volume of integer hyperrectangle to inner nodes" defines, together with the complete splitting-function, a recursive formula assigning to each split plane a unique element in the allocated array. The corresponding algorithm is given in C-like pseudocode below.
// Assigning attributes to inner nodes of a complete implicit k-d tree
// create an integer help hyperrectangle hyprec (its volume vol(hyprec) equals the number of leaves)
int hyprec[2][k] = { { 0, ..., 0 }, { length_1, ..., length_k } };
// allocate the array of attributes for the entire implicit k-d tree once
attr *a = new attr[vol(hyprec) - 1];

attr implicitKdTreeAttributes(int hyprec[2][k], attr *a)
{
    if (vol(hyprec) > 1) // the current node is an inner node
    {
        // evaluate the split plane's orientation o and its position p using the underlying complete splitting-function
        int o, p;
        completeSplittingFunction(hyprec, &o, &p);
        // evaluate the children's integer hyperrectangles hyprec_l and hyprec_r
        int hyprec_l[2][k], hyprec_r[2][k];
        hyprec_l = hyprec;
        hyprec_l[1][o] = p;
        hyprec_r = hyprec;
        hyprec_r[0][o] = p;
        // evaluate the children's memory locations a_l and a_r
        attr *a_l = a + 1;
        attr *a_r = a + vol(hyprec_l);
        // evaluate the children's attributes c_l and c_r recursively
        attr c_l = implicitKdTreeAttributes(hyprec_l, a_l);
        attr c_r = implicitKdTreeAttributes(hyprec_r, a_r);
        // merge the children's attributes to the current attribute c
        attr c = merge(c_l, c_r);
        // store the current attribute and return it
        a[0] = c;
        return c;
    }
    // the current node is a leaf node: return the attribute belonging to the corresponding grid cell
    return attribute(hyprec);
}
It is worth mentioning that this algorithm works for all rectilinear grids. The corresponding integer hyperrectangle does not necessarily have to have sidelengths that are powers of two.
Applications.
Implicit max-"k"-d trees are used for ray casting isosurfaces/MIP (maximum intensity projection). The attribute assigned to each inner node is the maximal scalar value given in the subgrid belonging to the node. Nodes are not traversed if their scalar values are smaller than the searched iso-value/current maximum intensity along the ray. The low storage requirements of the implicit max "k"-d tree and the favorable visualization complexity of ray casting allow ray casting (and even changing the isosurface of) very large scalar fields at interactive frame rates on commodity PCs. Similarly, an implicit min/max "k"-d tree may be used to efficiently evaluate queries such as terrain line of sight.
Complexity.
Given an implicit "k"-d tree spanned over a "k"-dimensional grid with "n" grid cells, assigning attributes to the nodes of the tree takes formula_0 time, storing the attributes takes formula_1 memory, and ray casting an iso-surface/MIP with the corresponding implicit max "k"-d tree takes roughly formula_2 time. | [
{
"math_id": 0,
"text": "\\mathrm{O}(kn)"
},
{
"math_id": 1,
"text": "\\mathrm{O}(n)"
},
{
"math_id": 2,
"text": "\\mathrm{O}(\\log(n))"
}
] | https://en.wikipedia.org/wiki?curid=12480975 |
12481749 | TITAN2D | TITAN2D is a geoflow simulation software application, intended for geological researchers. It is distributed as free software.
Overview.
TITAN2D is a free software application developed by the Geophysical Mass Flow Group at the State University of New York (SUNY) at Buffalo.
TITAN2D was developed for the purpose of simulating granular flows (primarily geological mass flows such as debris avalanches and landslides) over digital elevation models (DEMs) of natural terrain. The code is designed to help scientists and civil protection authorities assess the risk of, and mitigate, hazards due to dry debris flows and avalanches. TITAN2D combines numerical simulations of a flow with digital elevation data of natural terrain supported through a Geographical Information System (GIS) interface such as GRASS.
TITAN2D is capable of multiprocessor runs. A Message Passing Interface (MPI) Application Programming Interface (API) allows for parallel computing on multiple processors, which effectively increases computational power, decreases computing time, and allows for the use of large data sets.
Adaptive gridding allows for the concentration of computing power on regions of special interest. Mesh refinement captures the complex flow features that occur at the leading edge of a flow, as well as locations where rapid changes in topography induce large mass and momentum fluxes. Mesh unrefinement is applied where solution values are relatively constant or small to further improve computational efficiency.
TITAN2D requires an initial volume and shape estimate for the starting material, a basal friction angle, and an internal friction angle for the simulated granular flow. The direct outputs of the program are dynamic representations of a flow's depth and momentum. Secondary or derived outputs include flow velocity, and such field-observable quantities as run-up height, deposit thickness, and inundation area.
Mathematical Model.
The TITAN2D program is based upon a depth-averaged model for an incompressible Coulomb continuum, a "shallow-water" granular flow. The conservation equations for mass and momentum are solved with a Coulomb-type friction term for the interactions between the grains of the media and between the granular material and the basal surface. The resulting hyperbolic system of equations is solved using a parallel, adaptive mesh, Godunov scheme. The basic form of the depth-averaged governing equations appears as follows.
The depth-averaged conservation of mass is:
formula_0
The depth-averaged x,y momentum balances are:
formula_1
formula_2
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\underbrace{\\partial h \\over \\partial t}}_{\n\\begin{smallmatrix}\n \\text{Change}\\\\\n \\text{in mass}\\\\\n \\text{over time}\n\\end{smallmatrix}} \n+ \\underbrace{{\\partial \\overline{hu} \\over \\partial x} + {\\partial \\overline{hv} \\over \\partial y}}_{\n\\begin{smallmatrix}\n \\text{Total spatial}\\\\\n \\text{variation of}\\\\\n \\text{x,y mass fluxes}\n\\end{smallmatrix}}\n = 0"
},
{
"math_id": 1,
"text": "{\\underbrace{\\partial \\overline{hu} \\over \\partial t}}_{\n\\begin{smallmatrix}\n \\text{Change in}\\\\\n \\text{x mass flux}\\\\\n \\text{over time}\n\\end{smallmatrix}}\n+ \\underbrace{{\\partial \\over \\partial x} \\left( \\overline{hu^2}+{1 \\over 2}{k_{ap}g_zh^2}\\right) + {\\partial \\overline{huv} \\over \\partial y}}_{\n\\begin{smallmatrix}\n \\text{Total spatial variation}\\\\\n \\text{of x,y momentum fluxes}\\\\\n \\text{in x-direction}\n\\end{smallmatrix}}\n= \\underbrace{-hk_{ap} \\sgn \\left({\\partial u \\over \\partial y}\\right){\\partial hg_z \\over \\partial y}\\sin \\phi_{int}}_{\n\\begin{smallmatrix}\n \\text{Dissipative internal}\\\\\n \\text{friction force}\\\\\n \\text{in x-direction}\n\\end{smallmatrix}}\n- \\underbrace{{u \\over \\sqrt{u^2+v^2}}\\left[ g_zh\\left(1+{u^2 \\over r_xg_z}\\right) \\right]\\tan \\phi_{bed}}_{\n\\begin{smallmatrix}\n \\text{Dissipative basal}\\\\\n \\text{friction force}\\\\\n \\text{in x-direction}\n\\end{smallmatrix}}\n + \\underbrace{g_xh}_{\n\\begin{smallmatrix}\n \\text{Driving}\\\\\n \\text{gravitational}\\\\\n \\text{force in}\\\\\n \\text{x-direction}\n\\end{smallmatrix}}\n"
},
{
"math_id": 2,
"text": "{\\underbrace{\\partial \\overline{hv} \\over \\partial t}}_{\n\\begin{smallmatrix}\n \\text{Change in}\\\\\n \\text{y mass flux}\\\\\n \\text{over time}\n\\end{smallmatrix}}\n+ \\underbrace{{\\partial \\overline{huv} \\over \\partial x} + {\\partial \\over \\partial y} \\left( \\overline{hv^2}+{1 \\over 2}{k_{ap}g_zh^2}\\right)}_{\n\\begin{smallmatrix}\n \\text{Total spatial variation}\\\\\n \\text{of x,y momentum fluxes}\\\\\n \\text{in y-direction}\n\\end{smallmatrix}}\n= \\underbrace{-hk_{ap} \\sgn \\left({\\partial v \\over \\partial x}\\right){\\partial hg_z \\over \\partial x}\\sin \\phi_{int}}_{\n\\begin{smallmatrix}\n \\text{Dissipative internal}\\\\\n \\text{friction force}\\\\\n \\text{in y-direction}\n\\end{smallmatrix}}\n- \\underbrace{{v \\over \\sqrt{u^2+v^2}}\\left[ g_zh \\left(1+{v^2 \\over r_yg_z}\\right) \\right]\\tan \\phi_{bed}}_{\n\\begin{smallmatrix}\n \\text{Dissipative basal}\\\\\n \\text{friction force}\\\\\n \\text{in y-direction}\n\\end{smallmatrix}}\n + \\underbrace{g_yh}_{\n\\begin{smallmatrix}\n \\text{Driving}\\\\\n \\text{gravitational}\\\\\n \\text{force in}\\\\\n \\text{y-direction}\n\\end{smallmatrix}}\n"
}
] | https://en.wikipedia.org/wiki?curid=12481749 |
12484243 | Wagner VI projection | Pseudocylindrical compromise map projection
Wagner VI is a pseudocylindrical whole Earth map projection. Like the Robinson projection, it is a compromise projection, not having any special attributes other than a pleasing, low-distortion appearance. Wagner VI is equivalent to the Kavrayskiy VII horizontally elongated by a factor of formula_0⁄formula_1. This elongation results in proper preservation of shapes near the equator but slightly more distortion overall. The aspect ratio of this projection is 2:1, as formed by the ratio of the equator to the central meridian. This matches the ratio of Earth's equator to any meridian.
The Wagner VI is defined by:
formula_2
where formula_3 is the longitude and formula_4 is the latitude.
Inverse formula:
formula_5
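With the angles expressed in radians, the forward and inverse formulas translate directly into code; a small Python sketch (function names are illustrative):

import math

def wagner_vi(lam, phi):
    # forward projection: x = lambda * sqrt(1 - 3 (phi/pi)^2), y = phi
    return lam * math.sqrt(1 - 3 * (phi / math.pi) ** 2), phi

def wagner_vi_inverse(x, y):
    # inverse projection via psi = arcsin(sqrt(3) * y / pi)
    psi = math.asin(math.sqrt(3) * y / math.pi)
    return x / math.cos(psi), y

x, y = wagner_vi(math.radians(30), math.radians(60))
print(wagner_vi_inverse(x, y))   # recovers the original longitude and latitude (in radians)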
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2"
},
{
"math_id": 1,
"text": "\\sqrt{3}"
},
{
"math_id": 2,
"text": "\\begin{align} x &= \\lambda \\sqrt{1 - 3\\left(\\frac{\\varphi}{\\pi}\\right)^2} \\\\ y &= \\varphi \\end{align}"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "\\begin{align} \\psi &= \\arcsin\\left({\\frac{\\sqrt{3}}{\\pi}}y\\right) \\\\ \\lambda &= \\frac{x}{\\cos{\\psi}} \\\\ \\varphi &= y \\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=12484243 |
12485744 | Quantum walk | Quantum walks are quantum analogs of classical random walks. In contrast to the classical random walk, where the walker occupies definite states and the randomness arises due to stochastic transitions between states, in quantum walks randomness arises through (1) quantum superposition of states, (2) non-random, reversible unitary evolution and (3) collapse of the wave function due to state measurements.
As with classical random walks, quantum walks admit formulations in both discrete time and continuous time.
Motivation.
Quantum walks are motivated by the widespread use of classical random walks in the design of randomized algorithms and are part of several quantum algorithms. For some oracular problems, quantum walks provide an exponential speedup over any classical algorithm. Quantum walks also give polynomial speedups over classical algorithms for many practical problems, such as the element distinctness problem, the triangle finding problem, and evaluating NAND trees. The well-known Grover search algorithm can also be viewed as a quantum walk algorithm.
Relation to classical random walks.
Quantum walks exhibit very different features from classical random walks. In particular, they do not converge to limiting distributions and due to the power of quantum interference, they may spread significantly faster or slower than their classical equivalents.
Continuous time.
Continuous-time quantum walks arise when one replaces the continuum spatial domain in the Schrödinger equation with a discrete set. That is, instead of having a quantum particle propagate in a continuum, one restricts the set of possible position states to the vertex set formula_0 of some graph formula_1 which can be either finite or countably infinite. Under particular conditions, continuous-time quantum walks can provide a model for universal quantum computation.
Relation to non-relativistic Schrödinger dynamics.
Consider the dynamics of a non-relativistic, spin-less free quantum particle with mass formula_2 propagating on an infinite one-dimensional spatial domain. The particle's motion is completely described by its wave function formula_3 which satisfies the one-dimensional, free particle Schrödinger equation
formula_4
where formula_5 and formula_6 is the reduced Planck's constant. Now suppose that only the spatial part of the domain is discretized, formula_7 being replaced with formula_8 where formula_9 is the separation between the spatial sites the particle can occupy. The wave function becomes the map formula_10 and the second spatial partial derivative becomes the discrete laplacian
formula_11
The evolution equation for a continuous time quantum walk on formula_12 is thus
formula_13
where formula_14 is a characteristic frequency. This construction naturally generalizes to the case that the discretized spatial domain is an arbitrary graph formula_1 and the discrete laplacian formula_15 is replaced by the graph Laplacian formula_16, where formula_17 and formula_18 are the degree matrix and the adjacency matrix, respectively. Common choices of graphs that show up in the study of continuous-time quantum walks are the "d"-dimensional lattices formula_19, cycle graphs formula_20, "d"-dimensional discrete tori formula_21, the "d"-dimensional hypercube formula_22, and random graphs.
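For example, a continuous-time quantum walk on a cycle graph can be simulated numerically by exponentiating the graph Laplacian; the following Python sketch (using NumPy and SciPy, absorbing the characteristic frequency into the time unit and adopting the common sign convention in which the evolution operator is exp(-itL)) computes the vertex occupation probabilities of a walker started on a single vertex:

import numpy as np
from scipy.linalg import expm

def ctqw_cycle(N, t, start=0):
    # adjacency matrix and graph Laplacian L = D - A of the cycle on N vertices
    A = np.zeros((N, N))
    for j in range(N):
        A[j, (j + 1) % N] = A[(j + 1) % N, j] = 1
    L = np.diag(A.sum(axis=1)) - A
    # initial state localized on one vertex, evolved unitarily
    psi0 = np.zeros(N, dtype=complex)
    psi0[start] = 1.0
    psi_t = expm(-1j * t * L) @ psi0
    return np.abs(psi_t) ** 2          # occupation probabilities

print(ctqw_cycle(8, 2.0).round(3))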
Discrete time.
Discrete-time quantum walks on formula_23.
The evolution of a quantum walk in discrete time is specified by the product of two unitary operators: (1) a "coin flip" operator and (2) a conditional shift operator, which are applied repeatedly. The following example is instructive here. Imagine a particle with a spin-1/2-degree of freedom propagating on a linear array of discrete sites. If the number of such sites is countably infinite, we identify the state space with formula_23. The particle's state can then be described by a product state
formula_24
consisting of an internal spin state
formula_25
and a position state
formula_26
where formula_27 is the "coin space" and formula_28 is the space of physical quantum position states. The product formula_29 in this setting is the Kronecker (tensor) product. The conditional shift operator for the quantum walk on the line is given by
formula_30
i.e. the particle jumps right if it has spin up and left if it has spin down. Explicitly, the conditional shift operator acts on product states according to
formula_31
formula_32
If we first rotate the spin with some unitary transformation formula_33 and then apply formula_34, we get a non-trivial quantum motion on formula_23. A popular choice for such a transformation is the Hadamard gate formula_35, which, with respect to the standard "z"-component spin basis, has matrix representation
formula_36
When this choice is made for the coin flip operator, the operator itself is called the "Hadamard coin" and the resulting quantum walk is called the "Hadamard walk". If the walker is initialized at the origin and in the spin-up state, a single time step of the Hadamard walk on formula_23 is
formula_37
Measurement of the system's state at this point would reveal an up spin at position 1 or a down spin at position −1, both with probability 1/2. Repeating the procedure would correspond to a classical simple random walk on formula_23. In order to observe non-classical motion, no measurement is performed on the state at this point (so the wave function is not forced to collapse). Instead, the procedure of rotating the spin with the coin flip operator and conditionally jumping with formula_34 is repeated. This way, quantum correlations are preserved and different position states can interfere with one another, giving a drastically different probability distribution from that of the classical random walk (a Gaussian distribution). Spatially one sees that the distribution is not symmetric: even though the Hadamard coin gives both up and down spin with equal probability, the distribution tends to drift to the right when the initial spin is formula_38. This asymmetry is entirely due to the fact that the Hadamard coin treats the formula_38 and formula_39 states asymmetrically. A symmetric probability distribution arises if the initial state is chosen to be
formula_40
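The Hadamard walk is straightforward to simulate by storing the spin-up and spin-down amplitudes at every site and alternately applying the coin and the conditional shift; the following Python sketch (using NumPy; names are illustrative) reproduces the rightward drift for a walker started at the origin with spin up:

import numpy as np

def hadamard_walk(steps):
    n = 2 * steps + 1                       # sites -steps, ..., steps
    up = np.zeros(n, dtype=complex)         # spin-up amplitudes
    down = np.zeros(n, dtype=complex)       # spin-down amplitudes
    up[steps] = 1.0                         # start at the origin with spin up
    for _ in range(steps):
        # Hadamard coin on the internal state
        new_up = (up + down) / np.sqrt(2)
        new_down = (up - down) / np.sqrt(2)
        # conditional shift: spin up moves right, spin down moves left
        up = np.roll(new_up, 1)
        down = np.roll(new_down, -1)
    return np.abs(up) ** 2 + np.abs(down) ** 2

probs = hadamard_walk(100)
positions = np.arange(-100, 101)
print(positions[np.argmax(probs)])          # the peak lies well to the right of the origin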
Dirac equation.
Consider what happens when we discretize a massive Dirac operator over one spatial dimension. In the absence of a mass term, we have left-movers and right-movers. They can be characterized by an internal degree of freedom, "spin" or a "coin". When we turn on a mass term, this corresponds to a rotation in this internal "coin" space. A quantum walk corresponds to iterating the shift and coin operators repeatedly.
This is very much like Richard Feynman's model of an electron in one spatial and one time dimension. He summed up the zigzagging paths, with left-moving segments corresponding to one spin (or coin), and right-moving segments to the other. See Feynman checkerboard for more details.
The transition probability for a 1-dimensional quantum walk behaves like the Hermite functions, which
(1) oscillate asymptotically in the classically allowed region,
(2) are approximated by the Airy function around the wall of the potential, and
(3) decay exponentially in the classically hidden region.
Realization.
The atomic lattice is the leading quantum platform in terms of scalability. Coined and coinless discrete-time quantum walks can be realized in the atomic lattice via a distance-selective spin-exchange interaction. Remarkably, the platform preserves coherence over a couple of hundred sites and steps in 1, 2 or 3 spatial dimensions. The long-range dipolar interaction allows designing periodic boundary conditions, facilitating quantum walks over topological surfaces.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "G = (V,E)"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "\\psi : \\mathbb{R}\\times \\mathbb{R}_{\\geq 0}\\to\\mathbb{C}"
},
{
"math_id": 4,
"text": "\\textbf{i}\\hbar\\frac{\\partial\\psi}{\\partial t} = -\\frac{\\hbar^2}{2m}\\frac{\\partial^2\\psi}{\\partial x^2}"
},
{
"math_id": 5,
"text": "\\textbf{i} = \\sqrt{-1}"
},
{
"math_id": 6,
"text": "\\hbar"
},
{
"math_id": 7,
"text": "\\mathbb{R}"
},
{
"math_id": 8,
"text": "\\mathbb{Z}_{\\Delta x} \\equiv \\{\\ldots,-2\\,\\Delta x,-\\Delta x,0,\\Delta x, 2\\,\\Delta x,\\ldots\\}"
},
{
"math_id": 9,
"text": "\\Delta x"
},
{
"math_id": 10,
"text": "\\psi : \\mathbb{Z}_{\\Delta x}\\times\\mathbb{R}_{\\geq 0}\\to\\mathbb{C}"
},
{
"math_id": 11,
"text": "\\frac{\\partial^2\\psi}{\\partial x^2} \\to \\frac{L_{\\mathbb{Z}}\\psi(j\\,\\Delta x,t)}{\\Delta x^2} \\equiv \\frac{\\psi\\left((j+1)\\,\\Delta x,t\\right)-2\\psi\\left(j\\,\\Delta x,t\\right)+\\psi\\left((j-1)\\,\\Delta x,t\\right)}{\\Delta x^2}"
},
{
"math_id": 12,
"text": "\\mathbb{Z}_{\\Delta x}"
},
{
"math_id": 13,
"text": "\\textbf{i}\\frac{\\partial\\psi}{\\partial t} = -\\omega_{\\Delta x} L_{\\mathbb{Z}}\\psi"
},
{
"math_id": 14,
"text": "\\omega_{\\Delta x} \\equiv \\hbar/2m\\,\\Delta x^2"
},
{
"math_id": 15,
"text": "L_\\mathbb{Z}"
},
{
"math_id": 16,
"text": "L_G \\equiv D_G - A_G"
},
{
"math_id": 17,
"text": "D_G"
},
{
"math_id": 18,
"text": "A_G"
},
{
"math_id": 19,
"text": "\\mathbb{Z}^d"
},
{
"math_id": 20,
"text": "\\mathbb{Z}/N\\mathbb{Z}"
},
{
"math_id": 21,
"text": "(\\mathbb{Z}/N\\mathbb{Z})^d"
},
{
"math_id": 22,
"text": "\\mathbb{Q}^d"
},
{
"math_id": 23,
"text": "\\mathbb{Z}"
},
{
"math_id": 24,
"text": "|\\Psi\\rangle = |s\\rangle \\otimes |\\psi \\rangle"
},
{
"math_id": 25,
"text": "|s\\rangle \\in \\mathcal{H}_C=\\left\\{a_{\\uparrow}|{\\uparrow}\\rangle + a_{\\downarrow}|{\\downarrow}\\rangle: a_{\\uparrow/\\downarrow} \\in \\mathbb{C} \\right\\}"
},
{
"math_id": 26,
"text": "|\\psi\\rangle \\in \\mathcal{H}_P=\\left\\{ \\sum_{x\\in\\mathbb{Z}}\\alpha_x|x\\rangle: \\sum_{x\\in\\mathbb{Z}} |\\alpha_x|^2 < \\infty \\right\\}"
},
{
"math_id": 27,
"text": "\\mathcal{H}_C = \\mathbb{C}^2"
},
{
"math_id": 28,
"text": "\\mathcal{H}_P = \\ell^2(\\mathbb{Z})"
},
{
"math_id": 29,
"text": "\\otimes"
},
{
"math_id": 30,
"text": "S= |{\\uparrow}\\rangle \\langle{\\uparrow}| \\otimes \\sum\\limits_i |i+1\\rangle\\langle i| + |{\\downarrow}\\rangle \\langle{\\downarrow}| \\otimes \\sum\\limits_i |i-1\\rangle \\langle i|,"
},
{
"math_id": 31,
"text": "S(|{\\uparrow}\\rangle \\otimes |i\\rangle) = |{\\uparrow}\\rangle \\otimes |i+1\\rangle"
},
{
"math_id": 32,
"text": "S(|{\\downarrow}\\rangle \\otimes |i\\rangle) = |{\\downarrow}\\rangle \\otimes |i-1\\rangle"
},
{
"math_id": 33,
"text": "C: \\mathcal{H}_C \\to \\mathcal{H}_C"
},
{
"math_id": 34,
"text": "S"
},
{
"math_id": 35,
"text": "C = H"
},
{
"math_id": 36,
"text": "H = \\frac{1}{\\sqrt{2}}\\begin{pmatrix}1 & \\;\\;1\\\\ 1 & -1\\end{pmatrix}"
},
{
"math_id": 37,
"text": "|{\\uparrow}\\rangle \\otimes |0\\rangle \\;\\,\\overset{H}{\\longrightarrow}\\;\\, \\frac{1}{\\sqrt{2}} (|{\\uparrow}\\rangle + |{\\downarrow}\\rangle) \\otimes |0\\rangle \\;\\,\\overset{S}{\\longrightarrow}\\;\\, \\frac{1}{\\sqrt{2}} (|{\\uparrow}\\rangle \\otimes |1\\rangle + |{\\downarrow}\\rangle \\otimes |{-1}\\rangle)."
},
{
"math_id": 38,
"text": "|{\\uparrow}\\rangle"
},
{
"math_id": 39,
"text": "|{\\downarrow}\\rangle"
},
{
"math_id": 40,
"text": "|\\Psi^{\\text{symm}}_0\\rangle = \\frac{1}{\\sqrt{2}} (|{\\uparrow}\\rangle - \\textbf{i} |{\\downarrow}\\rangle) \\otimes |0\\rangle"
}
] | https://en.wikipedia.org/wiki?curid=12485744 |
1248603 | Excision theorem | Theorem in algebraic topology
In algebraic topology, a branch of mathematics, the excision theorem is a theorem about relative homology and one of the Eilenberg–Steenrod axioms. Given a topological space formula_0 and subspaces formula_1 and formula_2 such that formula_2 is also a subspace of formula_1, the theorem says that, under certain circumstances, we can cut out (excise) formula_2 from both spaces such that the relative homologies of the pairs formula_3 and formula_4 are isomorphic.
This assists in computation of singular homology groups, as sometimes after excising an appropriately chosen subspace we obtain something easier to compute.
Theorem.
Statement.
If formula_5 are as above, we say that formula_2 can be excised if the inclusion map of the pair formula_3 into formula_4 induces an isomorphism on the relative homologies:
formula_6
The theorem states that if the closure of formula_2 is contained in the interior of formula_1, then formula_2 can be excised.
Often, subspaces that do not satisfy this containment criterion still can be excised—it suffices to be able to find a deformation retract of the subspaces onto subspaces that do satisfy it.
Proof Sketch.
The proof of the excision theorem is quite intuitive, though the details are rather involved. The idea is to subdivide the simplices in a relative cycle in formula_4 to get another chain consisting of "smaller" simplices, and continuing the process until each simplex in the chain lies entirely in the interior of formula_1 or the interior of formula_7. Since these form an open cover for formula_0 and simplices are compact, we can eventually do this in a finite number of steps. This process leaves the original homology class of the chain unchanged (this says the subdivision operator is chain homotopic to the identity map on homology).
In the relative homology formula_8, then, this says all the terms contained entirely in the interior of formula_1 can be dropped without affecting the homology class of the cycle. This allows us to show that the inclusion map is an isomorphism, as each relative cycle is equivalent to one that avoids formula_2 entirely.
Applications.
Eilenberg–Steenrod Axioms.
The excision theorem is taken to be one of the Eilenberg-Steenrod axioms.
Mayer-Vietoris Sequences.
The Mayer–Vietoris sequence may be derived by combining the excision theorem and the long exact sequence.
Suspension Theorem for Homology.
The excision theorem may be used to derive the suspension theorem for homology, which says formula_9 for all formula_10, where formula_11 is the suspension of formula_0.
Invariance of Dimension.
If nonempty open sets formula_12 and formula_13 are homeomorphic, then m = n. This follows from the excision theorem, the long exact sequence for the pair formula_14, and the fact that formula_15 deformation retracts onto a sphere.
In particular, formula_16 is not homeomorphic to formula_17 if formula_18.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "(X \\setminus U,A \\setminus U )"
},
{
"math_id": 4,
"text": "(X, A)"
},
{
"math_id": 5,
"text": "U\\subseteq A \\subseteq X"
},
{
"math_id": 6,
"text": "H_n(X \\setminus U,A \\setminus U) \\cong H_n(X,A)"
},
{
"math_id": 7,
"text": "X \\setminus U"
},
{
"math_id": 8,
"text": "H_n(X, A)"
},
{
"math_id": 9,
"text": "\\tilde{H}_n(X) \\cong \\tilde{H}_{n+1}(SX)"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "SX"
},
{
"math_id": 12,
"text": " U\\subset \\mathbb{R}^n"
},
{
"math_id": 13,
"text": " V\\subset \\mathbb{R}^m"
},
{
"math_id": 14,
"text": "(\\mathbb{R}^n,\\mathbb{R}^n-x)"
},
{
"math_id": 15,
"text": " \\mathbb{R}^n-x"
},
{
"math_id": 16,
"text": "\\mathbb{R}^n"
},
{
"math_id": 17,
"text": "\\mathbb{R}^m"
},
{
"math_id": 18,
"text": "m\\neq n"
}
] | https://en.wikipedia.org/wiki?curid=1248603 |
1248647 | Relative homology | In algebraic topology, a branch of mathematics, the (singular) homology of a topological space relative to a subspace is a construction in singular homology, for pairs of spaces. The relative homology is useful and important in several ways. Intuitively, it helps determine what part of an absolute homology group comes from which subspace.
Definition.
Given a subspace formula_0, one may form the short exact sequence
formula_1
where formula_2 denotes the singular chains on the space "X". The boundary map on formula_2 descends to formula_3 and therefore induces a boundary map formula_4 on the quotient. If we denote this quotient by formula_5, we then have a complex
formula_6
By definition, the nth relative homology group of the pair of spaces formula_7 is
formula_8
One says that relative homology is given by the relative cycles, chains whose boundaries are chains on "A", modulo the relative boundaries (chains that are homologous to a chain on "A", i.e., chains that would be boundaries, modulo "A" again).
Properties.
The above short exact sequences specifying the relative chain groups give rise to a chain complex of short exact sequences. An application of the snake lemma then yields a long exact sequence
formula_9
The connecting map "formula_10" takes a relative cycle, representing a homology class in formula_11, to its boundary (which is a cycle in "A").
It follows that formula_12, where formula_13 is a point in "X", is the "n"-th reduced homology group of "X". In other words, formula_14 for all formula_15. When formula_16, formula_17 is a free module of rank one less than that of formula_18. The connected component containing formula_13 becomes trivial in relative homology.
The excision theorem says that removing a sufficiently nice subset formula_19 leaves the relative homology groups formula_11 unchanged. If formula_20 has a neighbourhood formula_21 in formula_22 that deformation retracts to formula_20, then using the long exact sequence of pairs and the excision theorem, one can show that formula_11 is the same as the "n"-th reduced homology groups of the quotient space formula_23.
Relative homology readily extends to the triple formula_24 for formula_25.
One can define the Euler characteristic for a pair formula_26 by
formula_27
The exactness of the sequence implies that the Euler characteristic is "additive", i.e., if formula_25, one has
formula_28
Local homology.
The formula_29-th local homology group of a space formula_22 at a point formula_13, denoted
formula_30
is defined to be the relative homology group formula_31. Informally, this is the "local" homology of formula_22 close to formula_13.
Local homology of the cone CX at the origin.
One easy example of local homology is calculating the local homology of the cone (topology) of a space at the origin of the cone. Recall that the cone is defined as the quotient space
formula_32
where formula_33 has the subspace topology. Then the origin formula_34 is the equivalence class of points formula_35. Using the intuition that the local homology group formula_36 of formula_37 at formula_13 captures the homology of formula_37 "near" the origin, we should expect this to be formula_38, since formula_39 deformation retracts onto formula_22. Computing the local homology can then be done using the long exact sequence in homology
formula_40
Because the cone of a space is contractible, the middle homology groups are all zero, giving the isomorphism
formula_41
since formula_39 is homotopy equivalent to formula_22.
In algebraic geometry.
Note that the previous construction can be carried out in algebraic geometry for the affine cone of a projective variety formula_22, using local cohomology.
Local homology of a point on a smooth manifold.
The local homology can also be computed at a point formula_42 of a manifold formula_43. Let formula_44 be a compact neighborhood of formula_42 homeomorphic to the closed disk formula_45 and let formula_46. Using the excision theorem there is an isomorphism of relative homology groups
formula_47
hence the local homology of a point reduces to the local homology of a point in a closed ball formula_48. Because of the homotopy equivalence
formula_49
and the fact
formula_50
the only non-trivial part of the long exact sequence of the pair formula_51 is
formula_52
hence the only non-zero local homology group is formula_53.
Functoriality.
Just as in absolute homology, continuous maps between spaces induce homomorphisms between relative homology groups. In fact, this map is exactly the induced map on homology groups, but it descends to the quotient.
Let formula_7 and formula_54 be pairs of spaces such that formula_0 and formula_55, and let formula_56 be a continuous map. Then there is an induced map formula_57 on the (absolute) chain groups. If formula_58, then formula_59. Let
formula_60
be the natural projections which take elements to their equivalence classes in the quotient groups. Then the map formula_61 is a group homomorphism. Since formula_62, this map descends to the quotient, inducing a well-defined map formula_63 on the quotient chain groups.
Chain maps induce homomorphisms between homology groups, so formula_64 induces a map formula_65 on the relative homology groups.
Examples.
One important use of relative homology is the computation of the homology groups of quotient spaces formula_23. In the case that formula_20 is a subspace of formula_22 fulfilling the mild regularity condition that there exists a neighborhood of formula_20 that has formula_20 as a deformation retract, then the group formula_66 is isomorphic to formula_67. We can immediately use this fact to compute the homology of a sphere. We can realize formula_68 as the quotient of an n-disk by its boundary, i.e. formula_69. Applying the exact sequence of relative homology gives the following: formula_70
Because the disk is contractible, we know its reduced homology groups vanish in all dimensions, so the above sequence collapses to the short exact sequence:
formula_71
Therefore, we get isomorphisms formula_72. We can now proceed by induction to show that formula_73. Now because formula_74 is the deformation retract of a suitable neighborhood of itself in formula_75, we get that formula_76.
Another insightful geometric example is given by the relative homology of formula_77 where formula_78. Then we can use the long exact sequence
formula_79
Using exactness of the sequence we can see that formula_80 contains a loop formula_81 counterclockwise around the origin. Since the cokernel of formula_82 fits into the exact sequence
formula_83
it must be isomorphic to formula_84. One generator for the cokernel is the formula_85-chain formula_86 since its boundary map is
formula_87
Notes.
^ i.e., the boundary formula_88 maps formula_89 to formula_90
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A\\subseteq X"
},
{
"math_id": 1,
"text": "0\\to C_\\bullet(A) \\to C_\\bullet(X)\\to \nC_\\bullet(X) /C_\\bullet(A) \\to 0 ,"
},
{
"math_id": 2,
"text": "C_\\bullet(X)"
},
{
"math_id": 3,
"text": "C_\\bullet(A)"
},
{
"math_id": 4,
"text": "\\partial'_\\bullet"
},
{
"math_id": 5,
"text": "C_n(X,A):=C_n(X)/C_n(A)"
},
{
"math_id": 6,
"text": "\\cdots\\longrightarrow C_n(X,A) \\xrightarrow{\\partial'_n} C_{n-1}(X,A) \\longrightarrow \\cdots ."
},
{
"math_id": 7,
"text": "(X,A)"
},
{
"math_id": 8,
"text": "H_n(X,A) := \\ker\\partial'_n/\\operatorname{im}\\partial'_{n+1}."
},
{
"math_id": 9,
"text": "\\cdots \\to H_n(A) \\stackrel{i_*}{\\to} H_n(X) \\stackrel{j_*}{\\to} H_n (X,A) \\stackrel{\\partial}{\\to} H_{n-1}(A) \\to \\cdots ."
},
{
"math_id": 10,
"text": "\\partial"
},
{
"math_id": 11,
"text": "H_n(X,A)"
},
{
"math_id": 12,
"text": "H_n(X,x_0)"
},
{
"math_id": 13,
"text": "x_0"
},
{
"math_id": 14,
"text": "H_i(X,x_0) = H_i(X)"
},
{
"math_id": 15,
"text": "i > 0"
},
{
"math_id": 16,
"text": "i = 0"
},
{
"math_id": 17,
"text": "H_0(X,x_0)"
},
{
"math_id": 18,
"text": "H_0(X)"
},
{
"math_id": 19,
"text": "Z \\subset A"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "V"
},
{
"math_id": 22,
"text": "X"
},
{
"math_id": 23,
"text": "X/A"
},
{
"math_id": 24,
"text": "(X,Y,Z)"
},
{
"math_id": 25,
"text": "Z \\subset Y \\subset X"
},
{
"math_id": 26,
"text": "Y \\subset X"
},
{
"math_id": 27,
"text": "\\chi (X, Y) = \\sum _{j=0} ^n (-1)^j \\operatorname{rank} H_j (X, Y) . "
},
{
"math_id": 28,
"text": "\n\\chi (X, Z) = \\chi (X, Y) + \\chi (Y, Z) ."
},
{
"math_id": 29,
"text": "n"
},
{
"math_id": 30,
"text": "H_{n,\\{x_0\\}}(X)"
},
{
"math_id": 31,
"text": "H_n(X,X\\setminus \\{x_0\\})"
},
{
"math_id": 32,
"text": "CX = (X\\times I)/(X\\times\\{0\\}) ,"
},
{
"math_id": 33,
"text": "X \\times \\{0\\}"
},
{
"math_id": 34,
"text": "x_0 = 0"
},
{
"math_id": 35,
"text": "[X\\times 0]"
},
{
"math_id": 36,
"text": "H_{*,\\{x_0\\}}(CX)"
},
{
"math_id": 37,
"text": "CX"
},
{
"math_id": 38,
"text": "H_*(X)"
},
{
"math_id": 39,
"text": "CX \\setminus \\{x_0\\}"
},
{
"math_id": 40,
"text": "\\begin{align}\n\\to &H_n(CX\\setminus \\{x_0 \\})\\to H_n(CX) \\to H_{n,\\{x_{0}\\}}(CX)\\\\\n\\to & H_{n-1}(CX\\setminus \\{x_0 \\})\\to H_{n-1}(CX) \\to H_{n-1,\\{x_{0}\\}}(CX).\n\\end{align}"
},
{
"math_id": 41,
"text": "\\begin{align}\nH_{n,\\{x_0\\}}(CX) & \\cong \nH_{n-1}(CX \\setminus \\{ x_0 \\}) \\\\\n& \\cong H_{n-1}(X),\n\\end{align}"
},
{
"math_id": 42,
"text": "p"
},
{
"math_id": 43,
"text": "M"
},
{
"math_id": 44,
"text": "K"
},
{
"math_id": 45,
"text": "\\mathbb{D}^n = \\{ x \\in \\R^n : |x| \\leq 1 \\}"
},
{
"math_id": 46,
"text": "U = M \\setminus K"
},
{
"math_id": 47,
"text": "\\begin{align}\nH_n(M,M\\setminus\\{p\\}) &\\cong H_n(M\\setminus U, M\\setminus (U\\cup \\{p\\})) \\\\\n&= H_n(K, K\\setminus\\{p\\}),\n\\end{align}"
},
{
"math_id": 48,
"text": "\\mathbb{D}^n"
},
{
"math_id": 49,
"text": "\\mathbb{D}^n \\setminus \\{0\\} \\simeq S^{n-1}"
},
{
"math_id": 50,
"text": "H_k(\\mathbb{D}^n) \\cong \\begin{cases}\n\\Z & k = 0 \\\\\n0 & k \\neq 0 ,\n\\end{cases}"
},
{
"math_id": 51,
"text": "(\\mathbb{D},\\mathbb{D}\\setminus\\{0\\})"
},
{
"math_id": 52,
"text": "0 \\to H_{n,\\{0\\}}(\\mathbb{D}^n) \\to H_{n-1}(S^{n-1}) \\to 0 ,"
},
{
"math_id": 53,
"text": "H_{n,\\{0\\}}(\\mathbb{D}^n)"
},
{
"math_id": 54,
"text": "(Y,B)"
},
{
"math_id": 55,
"text": "B\\subseteq Y"
},
{
"math_id": 56,
"text": "f\\colon X\\to Y"
},
{
"math_id": 57,
"text": "f_\\#\\colon C_n(X)\\to C_n(Y)"
},
{
"math_id": 58,
"text": "f(A)\\subseteq B"
},
{
"math_id": 59,
"text": "f_\\#(C_n(A))\\subseteq C_n(B)"
},
{
"math_id": 60,
"text": "\\begin{align}\n \\pi_X&:C_n(X)\\longrightarrow C_n(X)/C_n(A) \\\\\n \\pi_Y&:C_n(Y)\\longrightarrow C_n(Y)/C_n(B) \\\\\n\\end{align}"
},
{
"math_id": 61,
"text": "\\pi_Y\\circ f_\\#\\colon C_n(X)\\to C_n(Y)/C_n(B)"
},
{
"math_id": 62,
"text": "f_\\#(C_n(A))\\subseteq C_n(B)=\\ker\\pi_Y"
},
{
"math_id": 63,
"text": "\\varphi\\colon C_n(X)/C_n(A)\\to C_n(Y)/C_n(B)"
},
{
"math_id": 64,
"text": "f"
},
{
"math_id": 65,
"text": "f_*\\colon H_n(X,A)\\to H_n(Y,B)"
},
{
"math_id": 66,
"text": "\\tilde H_n(X/A)"
},
{
"math_id": 67,
"text": " H_n(X,A)"
},
{
"math_id": 68,
"text": "S^n"
},
{
"math_id": 69,
"text": "S^n = D^n/S^{n-1}"
},
{
"math_id": 70,
"text": "\\cdots\\to \\tilde H_n(D^n)\\rightarrow H_n(D^n,S^{n-1})\\rightarrow \\tilde H_{n-1}(S^{n-1})\\rightarrow \\tilde H_{n-1}(D^n)\\to \\cdots."
},
{
"math_id": 71,
"text": "0\\rightarrow H_n(D^n,S^{n-1}) \\rightarrow \\tilde H_{n-1}(S^{n-1}) \\rightarrow 0. "
},
{
"math_id": 72,
"text": "H_n(D^n,S^{n-1})\\cong \\tilde H_{n-1}(S^{n-1})"
},
{
"math_id": 73,
"text": "H_n(D^n,S^{n-1})\\cong \\Z"
},
{
"math_id": 74,
"text": "S^{n-1}"
},
{
"math_id": 75,
"text": "D^n"
},
{
"math_id": 76,
"text": "H_n(D^n,S^{n-1})\\cong \\tilde H_n(S^n)\\cong \\Z"
},
{
"math_id": 77,
"text": "(X=\\Complex^*, D = \\{1,\\alpha\\})"
},
{
"math_id": 78,
"text": "\\alpha \\neq 0, 1"
},
{
"math_id": 79,
"text": "\n\\begin{align}\n0 &\\to H_1(D)\\to H_1(X) \\to H_1(X,D) \\\\\n& \\to H_0(D)\\to H_0(X) \\to H_0(X,D)\n\\end{align}\n=\n\\begin{align}\n0 & \\to 0 \\to \\Z \\to H_1(X,D) \\\\\n& \\to \\Z^{\\oplus 2} \\to \\Z \\to 0\n\\end{align}\n"
},
{
"math_id": 80,
"text": "H_1(X,D)"
},
{
"math_id": 81,
"text": "\\sigma"
},
{
"math_id": 82,
"text": "\\phi\\colon \\Z \\to H_1(X,D)"
},
{
"math_id": 83,
"text": " 0 \\to \\operatorname{coker}(\\phi) \\to \\Z^{\\oplus 2} \\to \\Z \\to 0"
},
{
"math_id": 84,
"text": "\\Z"
},
{
"math_id": 85,
"text": "1"
},
{
"math_id": 86,
"text": "[1,\\alpha]"
},
{
"math_id": 87,
"text": "\\partial([1,\\alpha]) = [\\alpha] - [1]"
},
{
"math_id": 88,
"text": "\\partial\\colon C_n(X)\\to C_{n-1}(X)"
},
{
"math_id": 89,
"text": "C_n(A)"
},
{
"math_id": 90,
"text": "C_{n-1}(A)"
}
] | https://en.wikipedia.org/wiki?curid=1248647 |
1248704 | Ideal theory | Theory of ideals in commutative rings in mathematics
In mathematics, ideal theory is the theory of ideals in commutative rings. While the notion of an ideal exists also for non-commutative rings, a much more substantial theory exists only for commutative rings (and this article therefore only considers ideals in commutative rings.)
Throughout this article, rings refer to commutative rings. See also the article ideal (ring theory) for basic operations such as sums or products of ideals.
Ideals in a finitely generated algebra over a field.
Ideals in a finitely generated algebra over a field (that is, a quotient of a polynomial ring over a field) behave somewhat better than those in a general commutative ring. First, in contrast to the general case, if formula_0 is a finitely generated algebra over a field, then the radical of an ideal in formula_0 is the intersection of all maximal ideals containing the ideal (because formula_0 is a Jacobson ring). This may be thought of as an extension of Hilbert's Nullstellensatz, which concerns the case when formula_0 is a polynomial ring.
Topology determined by an ideal.
If "I" is an ideal in a ring "A", then it determines the topology on "A" where a subset "U" of "A" is open if, for each "x" in "U",
formula_1
for some integer formula_2. This topology is called the "I"-adic topology. It is also called an "a"-adic topology if formula_3 is generated by an element formula_4.
For example, take formula_5, the ring of integers, and formula_6 an ideal generated by a prime number "p". For each nonzero integer formula_7, define formula_8 when formula_9 with formula_10 prime to formula_11. Then, clearly,
formula_12
where formula_13 denotes an open ball of radius formula_14 with center formula_7. Hence, the formula_11-adic topology on formula_15 is the same as the metric space topology given by formula_16. As a metric space, formula_15 can be completed. The resulting complete metric space has the structure of a ring extending the ring structure of formula_15; this ring is denoted as formula_17 and is called the ring of "p"-adic integers.
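Concretely, the p-adic absolute value used above can be computed directly; a small Python sketch (taking the value at 0 to be 0, by the usual convention):

def p_adic_abs(x, p):
    # |x|_p = p**(-n) where x = p**n * y with y prime to p
    if x == 0:
        return 0.0
    n = 0
    while x % p == 0:
        x //= p
        n += 1
    return float(p) ** (-n)

print(p_adic_abs(12, 2))   # 12 = 2**2 * 3, so |12|_2 = 0.25
print(p_adic_abs(9, 2))    # 9 is prime to 2, so |9|_2 = 1.0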
Ideal class group.
In a Dedekind domain "A" (e.g., a ring of integers in a number field or the coordinate ring of a smooth affine curve) with the field of fractions formula_18, a nonzero ideal formula_19 is invertible in the following sense: there exists a fractional ideal formula_20 (that is, an "A"-submodule of formula_18) such that formula_21, where the product on the left is a product of submodules of "K". In other words, fractional ideals form a group under this product. The quotient of the group of fractional ideals by the subgroup of principal ideals is then the ideal class group of "A".
In a general ring, an ideal may not be invertible (in fact, already the definition of a fractional ideal is not clear). However, over a Noetherian integral domain, it is still possible to develop some theory generalizing the situation in Dedekind domains. For example, Ch. VII of Bourbaki's "Algèbre commutative" gives such a theory.
The ideal class group of "A", when it can be defined, is closely related to the Picard group of the spectrum of "A" (often the two are the same; e.g., for Dedekind domains).
In algebraic number theory, especially in class field theory, it is more convenient to use a generalization of an ideal class group called an idele class group.
Closure operations.
There are several operations on ideals that play the role of closures. The most basic one is the radical of an ideal. Another is the integral closure of an ideal. Given an irredundant primary decomposition formula_22, the intersection of those formula_23 whose radicals are minimal (i.e., do not contain the radical of any other formula_24) is uniquely determined by formula_19; this intersection is then called the unmixed part of formula_19. It is also a closure operation.
Given ideals formula_25 in a ring formula_0, the ideal
formula_26
is called the saturation of formula_19 with respect to formula_27 and is a closure operation (this notion is closely related to the study of local cohomology).
See also tight closure.
Local cohomology in ideal theory.
Local cohomology can sometimes be used to obtain information on an ideal. This section assumes some familiarity with sheaf theory and scheme theory.
Let formula_28 be a module over a ring formula_29 and formula_19 an ideal. Then formula_28 determines the sheaf formula_30 on formula_31 (the restriction to "Y" of the sheaf associated to "M"). Unwinding the definition, one sees:
formula_32.
Here, formula_33 is called the ideal transform of formula_28 with respect to formula_19.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "x + I^n \\subset U."
},
{
"math_id": 2,
"text": "n > 0"
},
{
"math_id": 3,
"text": "I = aA"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "A = \\mathbb{Z}"
},
{
"math_id": 6,
"text": "I = pA"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "|x|_p = p^{-n}"
},
{
"math_id": 9,
"text": "x = p^n y"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "x + p^n A = B(x, p^{-(n-1)})"
},
{
"math_id": 13,
"text": "B(x, r) = \\{ z \\in \\mathbb{Z} \\mid |z - x|_p < r \\}"
},
{
"math_id": 14,
"text": "r"
},
{
"math_id": 15,
"text": "\\mathbb{Z}"
},
{
"math_id": 16,
"text": "d(x, y) = |x - y|_p"
},
{
"math_id": 17,
"text": "\\mathbb{Z}_p"
},
{
"math_id": 18,
"text": "K"
},
{
"math_id": 19,
"text": "I"
},
{
"math_id": 20,
"text": "I^{-1}"
},
{
"math_id": 21,
"text": "I \\, I^{-1} = A"
},
{
"math_id": 22,
"text": "I = \\cap Q_i"
},
{
"math_id": 23,
"text": "Q_i"
},
{
"math_id": 24,
"text": "Q_j"
},
{
"math_id": 25,
"text": "I, J"
},
{
"math_id": 26,
"text": "(I : J^{\\infty}) = \\{ f \\in A \\mid fJ^n \\subset I, n \\gg 0 \\} = \\bigcup_{n > 0} \\operatorname{Ann}_A((J^n + I)/I)"
},
{
"math_id": 27,
"text": "J"
},
{
"math_id": 28,
"text": "M"
},
{
"math_id": 29,
"text": "R"
},
{
"math_id": 30,
"text": "\\widetilde{M}"
},
{
"math_id": 31,
"text": "Y = \\operatorname{Spec}(R) - V(I)"
},
{
"math_id": 32,
"text": "\\Gamma_I(M) := \\Gamma(Y, \\widetilde{M}) = \\varinjlim \\operatorname{Hom}(I^n, M)"
},
{
"math_id": 33,
"text": "\\Gamma_I(M)"
}
] | https://en.wikipedia.org/wiki?curid=1248704 |
12487279 | Search theory | In microeconomics, search theory studies buyers or sellers who cannot instantly find a trading partner, and must therefore search for a partner prior to transacting. It involves determining the best approach to use when looking for a specific item or person in a sizable, uncharted environment. The goal of the theory is to determine the best search strategy, one that maximises the chance of finding the target while minimising search-related expenses.
Search theory clarifies how buyers and sellers choose when to accept a matching offer for a transaction. Search theory also provides an explanation for why frictional unemployment happens as people look for jobs and corporations look for new employees.
Search theory has been used primarily to explain labor market inefficiencies, but it can also be applied to all forms of "buyers" and "sellers", whether of products, homes, or even spouses/partners. In a frictionless market the clearing price would be reached quickly as supply and demand react freely; however, this does not happen in the real world, and search theory tries to explain how trade occurs despite this. Real-world transactions involve discrete quantities of goods and services, imperfect and expensive information, and possible physical or other barriers separating the parties looking to conduct business, such as a potential employee and an employer, or a buyer and a seller of goods; their search for one another is hindered by these obstacles. The obstacles can take the form of geographical separation, differing expectations regarding price and specifications, and slow response and negotiation times from one of the parties.
Search theory has been applied in labor economics to analyze frictional unemployment resulting from job hunting by workers. In consumer theory, it has been applied to analyze purchasing decisions. From a worker's perspective, an acceptable job would be one that pays a high wage, one that offers desirable benefits, and/or one that offers pleasant and safe working conditions. From a consumer's perspective, a product worth purchasing would have sufficiently high quality and be offered at a sufficiently low price. In both cases, whether a given job or product is acceptable depends on the searcher's beliefs about the alternatives available in the market.
More precisely, search theory studies an individual's optimal strategy when choosing from a series of potential opportunities of random quality, under the assumption that delaying choice is costly. Search models illustrate how best to balance the cost of delay against the value of the option to try again. Mathematically, search models are optimal stopping problems.
Macroeconomists have extended search theory by studying general equilibrium models in which one or more types of searchers interact. These macroeconomic theories have been called 'matching theory', or 'search and matching theory'.
Foundation of search theory.
In a traditional economic equilibrium, small changes in supply or demand have only a small effect on the price. However, in a pairwise matching setting, even slight imbalances can have significant effects on the allocation of resources. For example, in a marriage market with slightly more men than women, all matching rents go to women, and vice versa. Furthermore, the unique nature of the items for sale in a matching market makes it challenging to model as a traditional market. This poses a challenge for online matching services that aim to organize such markets efficiently. Search frictions therefore affect equilibrium outcomes in matching markets, and search theory examines the role of option value in decision-making, including where to search and how long to search. It highlights the relationship between risk and option value, and search can be modeled as sequential or simultaneous.
Simultaneous search.
The simultaneous search model in economics was first introduced by George Stigler in 1961. In Stigler's simultaneous search model, a consumer selects how many searches to conduct while sampling prices from a distribution. For some distributions, the optimal sample size can be calculated as a straightforward one-variable optimization problem and expressed in closed form. It is assumed that prices are drawn from a non-degenerate distribution F(p) on [0, 1]. A consumer chooses a fixed sample size n to minimize the expected total cost C (expected purchase cost plus search cost) of purchasing the product. With n independent draws, the distribution of the lowest price is
formula_0.
Therefore, the expected purchase outlay is:
formula_1
The expected lowest price decreases as the number of searches increases, but at a diminishing rate; this satisfies the second-order condition. The optimal sample size (n*) satisfies the first-order condition: the reduction in expected outlay from the n*-th search, P(n*-1) - P(n*), is at least the search cost c, which in turn is greater than the reduction from one further search, P(n*) - P(n*+1).
formula_2.
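As a numerical illustration (assuming, for concreteness, prices uniformly distributed on [0, 1] and a single unit purchased, so that the expected minimum of n draws is 1/(n+1)), the optimal fixed sample size balances the falling expected price against the linear search cost; a small Python sketch:

def expected_total_cost(n, c):
    # expected minimum price of n independent uniform draws plus total search cost
    return 1.0 / (n + 1) + c * n

def optimal_sample_size(c, n_max=1000):
    return min(range(1, n_max + 1), key=lambda n: expected_total_cost(n, c))

print(optimal_sample_size(0.01))   # cheap search: sample many sellers (here 9)
print(optimal_sample_size(0.10))   # expensive search: sample only a couple (here 2)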
Sequential search.
In sequential search, a consumer looks at products or services one at a time until an acceptable one is found; J. J. McCall introduced this type of search to economics. In economics, the sequential search model is used to examine how consumers choose which goods or services to purchase when they have incomplete (asymmetric) information about those goods' quality.
Consumers in sequential search models must choose whether to stop looking for a better good or service or to buy what they have found so far. The model makes the assumption that customers have some idea of what they want and what the standard of the good or service should be. Models of sequential search have been used in many disciplines, including finance and labour economics. Sequential search models are used in labour economics to examine how employees look for work and how employers hire new employees. Sequential search models are used in the field of finance to examine how investors look for information on stocks and other financial assets.
The assumption that consumers know what they are looking for and what the standard of the product or service should be is one of the limitations of sequential search models. This presumption might not always be accurate in practical circumstances. Another drawback is that sequential search models don't account for the possibility that customers could find out more about the calibre of a good or service as they search further.
Search from a known distribution.
George J. Stigler proposed thinking of searching for bargains or jobs as an economically important problem. John J. McCall proposed a dynamic model of job search, based on the mathematical method of optimal stopping, on which much later work has been based. McCall's paper studied the problem of which job offers an unemployed worker should accept, and which reject, when the distribution of alternatives is known and constant, and the value of money is constant. Holding fixed job characteristics, he characterized the job search decision in terms of the reservation wage, that is, the lowest wage the worker is willing to accept. The worker's optimal strategy is simply to reject any wage offer lower than the reservation wage, and accept any wage offer higher than the reservation wage.
The reservation wage may change over time if some of the conditions assumed by McCall are not met. For example, a worker who fails to find a job might lose skills or face stigma, in which case the distribution of potential offers that worker might receive will get worse, the longer he or she is unemployed. In this case, the worker's optimal reservation wage will decline over time. Likewise, if the worker is risk averse, the reservation wage will decline over time if the worker gradually runs out of money while searching. The reservation wage would also differ for two jobs of different characteristics; that is, there will be a compensating differential between different types of jobs.
An interesting observation about McCall's model is that greater variance of offers may make the searcher better off, and prolong optimal search, even if he or she is risk averse. This is because when there is more variation in wage offers (holding fixed the mean), the searcher may want to wait longer (that is, set a higher reservation wage) in hopes of receiving an exceptionally high wage offer. The possibility of receiving some exceptionally low offers has less impact on the reservation wage, since bad offers can be turned down.
While McCall framed his theory in terms of the wage search decision of an unemployed worker, similar insights are applicable to a consumer's search for a low price. In that context, the highest price a consumer is willing to pay for a particular good is called the reservation price.
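The reservation wage in a McCall-style model can be computed by iterating on the value of continued search; the following Python sketch assumes a stylized version with a discrete wage-offer distribution, per-period income c while searching, discount factor beta, and an accepted wage earned forever (all names and numbers are illustrative):

import numpy as np

def reservation_wage(wages, probs, c, beta, tol=1e-10):
    # value of searching solves V = c + beta * E[max(w / (1 - beta), V)]
    v = 0.0
    while True:
        v_new = c + beta * np.sum(probs * np.maximum(wages / (1 - beta), v))
        if abs(v_new - v) < tol:
            return (1 - beta) * v_new      # reservation wage w* = (1 - beta) V
        v = v_new

wages = np.linspace(10, 60, 51)            # equally likely wage offers
probs = np.full(51, 1 / 51)
print(reservation_wage(wages, probs, c=25, beta=0.99))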
Search from known distributions and heterogeneous costs.
Opportunities might provide payoffs drawn from different distributions, and the cost of sampling may vary from one opportunity to another. Problems of this kind are referred to as Pandora's box problems, introduced by Martin Weitzman. Boxes have different opening costs. Pandora opens boxes, but only enjoys the best opportunity she has found. With formula_3 the payoff she discovered in box formula_4, formula_5 the cost she paid to open it, and formula_6 the set of boxes she has opened, Pandora receives
formula_7
It can be shown that Pandora associates a reservation value with each box. Her optimal strategy is to open the boxes in decreasing order of reservation value until the best payoff among the opened boxes exceeds the highest reservation value of the remaining boxes. This strategy is referred to as Pandora's rule.
In fact, Pandora's rule remains the optimal sampling strategy for complex payoff functions. Wojciech Olszewski and Richard Weber show that Pandora's rule is optimal if she maximizes
formula_8
for formula_9 continuous, non-negative, non-decreasing, symmetric and submodular.
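A hedged Python sketch of Pandora's rule, assuming for simplicity that each box pays a single prize with some probability (and nothing otherwise), so that the reservation value z of a box is characterized by the indifference condition cost = prob * (prize - z):

import random

def reservation_value(prize, prob, cost):
    # z solves cost = prob * (prize - z); valid when cost <= prob * prize
    return prize - cost / prob

def pandora(boxes, seed=0):
    # open boxes in decreasing reservation-value order; stop when the best
    # payoff found so far exceeds every remaining reservation value
    rng = random.Random(seed)
    order = sorted(boxes, key=lambda b: reservation_value(*b), reverse=True)
    best, total_cost = 0.0, 0.0
    for prize, prob, cost in order:
        if best >= reservation_value(prize, prob, cost):
            break
        total_cost += cost
        best = max(best, prize if rng.random() < prob else 0.0)
    return best - total_cost               # best payoff kept, minus all opening costs

boxes = [(10.0, 0.5, 1.0), (20.0, 0.2, 1.0), (6.0, 0.9, 0.5)]   # (prize, prob, cost)
print(pandora(boxes))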
Endogenizing the price distribution.
Studying optimal search from a given distribution of prices led economists to ask why the same good should ever be sold, in equilibrium, at more than one price. After all, this is by definition a violation of the law of one price. However, when buyers do not have perfect information about where to find the lowest price (that is, whenever search is necessary), not all sellers may wish to offer the same price, because there is a trade-off between the frequency and the profitability of their sales. That is, firms may be indifferent between posting a high price (thus selling infrequently, only to those consumers with the highest reservation prices) and a low price (at which they will sell more often, because it will fall below the reservation price of more consumers).
Search from an unknown distribution.
When the searcher does not even know the distribution of offers, then there is an additional motive for search: by searching longer, more is learned about the range of offers available. Search from one or more unknown distributions is called a multi-armed bandit problem. The name comes from the slang term 'one-armed bandit' for a casino slot machine, and refers to the case in which the only way to learn about the distribution of rewards from a given slot machine is by actually playing that machine. Optimal search strategies for an unknown distribution have been analyzed using "allocation indices" such as the Gittins index.
Matching theory.
More recently, job search, and other types of search, have been incorporated into macroeconomic models, using a framework called 'matching theory'. Peter A. Diamond, Dale Mortensen, and Christopher A. Pissarides won the 2010 Nobel prize in economics for their work on matching theory.
In models of matching in the labor market, two types of search interact. That is, the rate at which new jobs are formed is assumed to depend both on workers' search decisions, and on firms' decisions to open job vacancies. While some matching models include a distribution of different wages, others are simplified by ignoring wage differences, and just imply that workers pass through an unemployment spell of random length before beginning work.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Fn(p)=1-[1-F(p)]n"
},
{
"math_id": 1,
"text": "P(n)=K\\int [(1-F(p)]^n dp "
},
{
"math_id": 2,
"text": "P(n^*-1)-P(n^*)\\geq c> P(n^*)-P(n*+1) "
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "c_i"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "\\max_{i \\in S} x_i - \\sum_{i \\in S} c_i"
},
{
"math_id": 8,
"text": "u \\left(x_1, ... ,x_S\\right) - \\sum_{i}^S c_i"
},
{
"math_id": 9,
"text": "u"
}
] | https://en.wikipedia.org/wiki?curid=12487279 |
12487536 | Point reflection | Geometric symmetry operation
In geometry, a point reflection (also called a point inversion or central inversion) is a transformation of affine space in which every point is reflected across a specific fixed point. When dealing with crystal structures and in the physical sciences the terms inversion symmetry, inversion center or centrosymmetric are more commonly used.
A point reflection is an involution: applying it twice is the identity transformation. It is equivalent to a homothetic transformation with scale factor −1. The point of inversion is also called homothetic center.
An object that is invariant under a point reflection is said to possess point symmetry; if it is invariant under point reflection through its center, it is said to possess central symmetry or to be centrally symmetric. A point group including a point reflection among its symmetries is called "centrosymmetric".
In Euclidean space, a point reflection is an isometry (preserves distance). In the Euclidean plane, a point reflection is the same as a half-turn rotation (180° or π radians); a point reflection through the object's centroid is the same as a half-turn "spin".
Terminology.
The term "reflection" is loose, and considered by some an abuse of language, with "inversion" preferred; however, "point reflection" is widely used. Such maps are involutions, meaning that they have order 2 – they are their own inverse: applying them twice yields the identity map – which is also true of other maps called "reflections". More narrowly, a "reflection" refers to a reflection in a hyperplane (formula_0 dimensional affine subspace – a point on the line, a line in the plane, a plane in 3-space), with the hyperplane being fixed, but more broadly "reflection" is applied to any involution of Euclidean space, and the fixed set (an affine space of dimension "k", where formula_1) is called the "mirror". In dimension 1 these coincide, as a point is a hyperplane in the line.
In terms of linear algebra, assuming the origin is fixed, involutions are exactly the diagonalizable maps with all eigenvalues either 1 or −1. Reflection in a hyperplane has a single −1 eigenvalue (and multiplicity formula_0 on the 1 eigenvalue), while point reflection has only the −1 eigenvalue (with multiplicity "n").
The term "inversion" should not be confused with inversive geometry, where "inversion" is defined with respect to a circle.
Examples.
In two dimensions, a point reflection is the same as a rotation of 180 degrees. In three dimensions, a point reflection can be described as a 180-degree rotation composed with reflection across the plane of rotation, perpendicular to the axis of rotation. In dimension "n", point reflections are orientation-preserving if "n" is even, and orientation-reversing if "n" is odd.
Formula.
Given a vector a in the Euclidean space R"n", the formula for the reflection of a across the point p is
formula_2
In the case where p is the origin, point reflection is simply the negation of the vector a.
In Euclidean geometry, the inversion of a point "X" with respect to a point "P" is a point "X"* such that "P" is the midpoint of the line segment with endpoints "X" and "X"*. In other words, the vector from "X" to "P" is the same as the vector from "P" to "X"*.
The formula for the inversion in "P" is
x* = 2p − x
where p, x and x* are the position vectors of "P", "X" and "X"* respectively.
This mapping is an isometric involutive affine transformation which has exactly one fixed point, which is "P".
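As a quick numerical illustration (a minimal sketch; the function name and example vectors are chosen here and are not part of the article), the formula x* = 2p − x can be applied in any dimension:

import numpy as np

def point_reflect(x, p):
    # Reflect the point x through the center p: x* = 2p - x (works in any dimension).
    return 2.0 * np.asarray(p, dtype=float) - np.asarray(x, dtype=float)

x, p = np.array([3.0, -1.0, 2.0]), np.array([1.0, 1.0, 0.0])
print(point_reflect(x, p))                                    # [-1.  3. -2.]
print(np.allclose(point_reflect(point_reflect(x, p), p), x))  # True: the map is an involution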
Point reflection as a special case of uniform scaling or homothety.
When the inversion point "P" coincides with the origin, point reflection is equivalent to a special case of uniform scaling: uniform scaling with scale factor equal to −1. This is an example of linear transformation.
When "P" does not coincide with the origin, point reflection is equivalent to a special case of homothetic transformation: homothety with homothetic center coinciding with P, and scale factor −1. (This is an example of non-linear affine transformation.)
Point reflection group.
The composition of two point reflections is a translation. Specifically, point reflection at p followed by point reflection at q is translation by the vector 2(q − p).
The set consisting of all point reflections and translations is a Lie subgroup of the Euclidean group. It is a semidirect product of R"n" with a cyclic group of order 2, the latter acting on R"n" by negation. It is precisely the subgroup of the Euclidean group that fixes the hyperplane at infinity pointwise.
In the case "n" = 1, the point reflection group is the full isometry group of the line.
Point reflection in analytic geometry.
Given the point formula_3 and its reflection formula_4 with respect to the point formula_5, the latter is the midpoint of the segment formula_6;
formula_7
Hence, the equations to find the coordinates of the reflected point are
formula_8
A particular case is that in which the point C has coordinates formula_9 (see the paragraph below):
formula_10
Properties.
In even-dimensional Euclidean space, say 2"N"-dimensional space, the inversion in a point "P" is equivalent to "N" rotations over angles π in each plane of an arbitrary set of "N" mutually orthogonal planes intersecting at "P". These rotations are mutually commutative. Therefore, inversion in a point in even-dimensional space is an orientation-preserving isometry or direct isometry.
In odd-dimensional Euclidean space, say (2"N" + 1)-dimensional space, it is equivalent to "N" rotations over π in each plane of an arbitrary set of "N" mutually orthogonal planes intersecting at "P", combined with the reflection in the 2"N"-dimensional subspace spanned by these rotation planes. Therefore, it "reverses" rather than preserves orientation; it is an indirect isometry.
Geometrically in 3D it amounts to rotation about an axis through "P" by an angle of 180°, combined with reflection in the plane through "P" which is perpendicular to the axis; the result does not depend on the orientation (in the other sense) of the axis. Notations for the type of operation, or the type of group it generates, are formula_11, "C""i", "S"2, and 1×. The group type is one of the three symmetry group types in 3D without any pure rotational symmetry, see cyclic symmetries with "n" = 1.
The following point groups in three dimensions contain inversion:
Closely related to inversion in a point is reflection with respect to a plane, which can be thought of as an "inversion in a plane".
Inversion centers in crystallography.
Molecules contain an inversion center when a point exists through which all atoms can reflect while retaining symmetry. In crystallography, the presence of inversion centers distinguishes between centrosymmetric and non-centrosymmetric compounds. Crystal structures are composed of various polyhedra, categorized by their coordination number and bond angles. For example, four-coordinate polyhedra are classified as tetrahedra, while five-coordinate environments can be square pyramidal or trigonal bipyramidal depending on the bonding angles. All crystalline compounds come from a repetition of an atomic building block known as a unit cell, and these unit cells define which polyhedra form and in what order. These polyhedra link together via corner-, edge- or face sharing, depending on which atoms share common bonds. Polyhedra containing inversion centers are known as centrosymmetric, while those without are non-centrosymmetric. Six-coordinate octahedra are an example of centrosymmetric polyhedra, as the central atom acts as an inversion center through which the six bonded atoms retain symmetry. Tetrahedra, on the other hand, are non-centrosymmetric as an inversion through the central atom would result in a reversal of the polyhedron. Polyhedra with an odd (versus even) coordination number are not centrosymmetric.
Real polyhedra in crystals often lack the uniformity anticipated in their bonding geometry. Common irregularities found in crystallography include distortions and disorder. Distortion involves the warping of polyhedra due to nonuniform bonding lengths, often due to differing electrostatic attraction between heteroatoms. For instance, a titanium center will likely bond evenly to six oxygens in an octahedra, but distortion would occur if one of the oxygens were replaced with a more electronegative fluorine. Distortions will not change the inherent geometry of the polyhedra—a distorted octahedron is still classified as an octahedron, but strong enough distortions can have an effect on the centrosymmetry of a compound. Disorder involves a split occupancy over two or more sites, in which an atom will occupy one crystallographic position in a certain percentage of polyhedra and the other in the remaining positions. Disorder can influence the centrosymmetry of certain polyhedra as well, depending on whether or not the occupancy is split over an already-present inversion center.
Centrosymmetry applies to the crystal structure as a whole, not just individual polyhedra. Crystals are classified into thirty-two crystallographic point groups which describe how the different polyhedra arrange themselves in space in the bulk structure. Of these thirty-two point groups, eleven are centrosymmetric. The presence of noncentrosymmetric polyhedra does not guarantee that the point group will be noncentrosymmetric—two non-centrosymmetric shapes can be oriented in space in a manner which contains an inversion center between the two. Two tetrahedra facing each other can have an inversion center in the middle, because the orientation allows for each atom to have a reflected pair. The converse is also true, as multiple centrosymmetric polyhedra can be arranged to form a noncentrosymmetric point group.
Non-centrosymmetric insulating compounds are piezoelectric and can be useful for application in nonlinear optics. The lack of symmetry via inversion centers can allow for areas of the crystal to interact differently with incoming light. The wavelength, frequency and intensity of light are subject to change as the electromagnetic radiation interacts with different energy states throughout the structure. Potassium titanyl phosphate, KTiOPO4 (KTP), crystallizes in the non-centrosymmetric, orthorhombic Pna21 space group, and is a useful non-linear crystal. KTP is used for frequency-doubling neodymium-doped lasers, utilizing a nonlinear optical property known as second-harmonic generation. The applications for nonlinear materials are still being researched, but these properties stem from the presence (or absence) of an inversion center.
Inversion with respect to the origin.
Inversion with respect to the origin corresponds to additive inversion of the position vector, and also to scalar multiplication by −1. The operation commutes with every other linear transformation, but not with translation: it is in the center of the general linear group. "Inversion" without indicating "in a point", "in a line" or "in a plane", means this inversion; in physics 3-dimensional reflection through the origin is also called a parity transformation.
In mathematics, reflection through the origin refers to the point reflection of Euclidean space R"n" across the origin of the Cartesian coordinate system. Reflection through the origin is an orthogonal transformation corresponding to scalar multiplication by formula_12, and can also be written as formula_13, where formula_14 is the identity matrix. In three dimensions, this sends formula_15, and so forth.
Representations.
As a scalar matrix, it is represented in every basis by a matrix with formula_12 on the diagonal, and, together with the identity, is the center of the orthogonal group formula_16.
It is a product of "n" orthogonal reflections (reflection through the axes of any orthogonal basis); note that orthogonal reflections commute.
In 2 dimensions, it is in fact rotation by 180 degrees, and in dimension formula_17, it is rotation by 180 degrees in "n" orthogonal planes; note again that rotations in orthogonal planes commute.
Properties.
It has determinant formula_18 (from the representation by a matrix or as a product of reflections). Thus it is orientation-preserving in even dimension, thus an element of the special orthogonal group SO(2"n"), and it is orientation-reversing in odd dimension, thus not an element of SO(2"n" + 1) and instead providing a splitting of the map formula_19, showing that formula_20 as an internal direct product.
Analogously, it is a longest element of the orthogonal group, with respect to the generating set of reflections: elements of the orthogonal group all have length at most "n" with respect to the generating set of reflections, and reflection through the origin has length "n," though it is not unique in this: other maximal combinations of rotations (and possibly reflections) also have maximal length.
Geometry.
In SO(2"r"), reflection through the origin is the farthest point from the identity element with respect to the usual metric. In O(2"r" + 1), reflection through the origin is not in SO(2"r"+1) (it is in the non-identity component), and there is no natural sense in which it is a "farther point" than any other point in the non-identity component, but it does provide a base point in the other component.
Clifford algebras and spin groups.
It should "not" be confused with the element formula_22 in the spin group. This is particularly confusing for even spin groups, as formula_23, and thus in formula_24 there are both formula_12 and two lifts of formula_13.
Reflection through the origin extends to an automorphism of a Clifford algebra, called the "main involution" or "grade involution."
Reflection through the origin lifts to a pseudoscalar.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n-1"
},
{
"math_id": 1,
"text": "1 \\leq k \\leq n-1"
},
{
"math_id": 2,
"text": "\\mathrm{Ref}_\\mathbf{p}(\\mathbf{a}) = 2\\mathbf{p} - \\mathbf{a}."
},
{
"math_id": 3,
"text": "P(x,y)"
},
{
"math_id": 4,
"text": "P'(x',y')"
},
{
"math_id": 5,
"text": "C(x_c,y_c)"
},
{
"math_id": 6,
"text": "\\overline{PP'}"
},
{
"math_id": 7,
"text": "\\begin{cases}x_c=\\frac{x+x'}{2} \\\\ y_c=\\frac{y+y'}{2}\\end{cases}"
},
{
"math_id": 8,
"text": "\\begin{cases}x'=2x_c-x \\\\ y'=2y_c-y\\end{cases}"
},
{
"math_id": 9,
"text": "(0,0)"
},
{
"math_id": 10,
"text": "\\begin{cases}x'=-x \\\\ y'=-y\\end{cases}"
},
{
"math_id": 11,
"text": "\\overline{1}"
},
{
"math_id": 12,
"text": "-1"
},
{
"math_id": 13,
"text": "-I"
},
{
"math_id": 14,
"text": "I"
},
{
"math_id": 15,
"text": "(x, y, z) \\mapsto (-x, -y, -z)"
},
{
"math_id": 16,
"text": "O(n)"
},
{
"math_id": 17,
"text": "2n"
},
{
"math_id": 18,
"text": "(-1)^n"
},
{
"math_id": 19,
"text": "O(2n+1) \\to \\pm 1"
},
{
"math_id": 20,
"text": "O(2n + 1) = SO(2n + 1) \\times \\{\\pm I\\}"
},
{
"math_id": 21,
"text": "Q(-v) = Q(v)"
},
{
"math_id": 22,
"text": "-1 \\in \\mathrm{Spin}(n)"
},
{
"math_id": 23,
"text": "-I \\in SO(2n)"
},
{
"math_id": 24,
"text": "\\operatorname{Spin}(n)"
}
] | https://en.wikipedia.org/wiki?curid=12487536 |
12487614 | Radar engineering | Technical design of the components of a radar and its operation
Radar engineering is the design of technical aspects pertaining to the components of a radar and their ability to detect the return energy from moving scatterers — determining an object's position or obstruction in the environment. This includes field of view in terms of solid angle and maximum unambiguous range and velocity, as well as angular, range and velocity resolution. Radar sensors are classified by application, architecture, radar mode, platform, and propagation window.
Applications of radar include adaptive cruise control, autonomous landing guidance, radar altimeter, air traffic management, early-warning radar, fire-control radar, forward warning collision sensing, ground penetrating radar, surveillance, and weather forecasting.
Architecture choice.
The angle of a target is detected by scanning the field of view with a highly directive beam. This is done electronically, with a phased array antenna, or mechanically by rotating a physical antenna. The emitter and the receiver can be in the same place, as with the monostatic radars, or be separated as in the bistatic radars. Finally, the radar wave emitted can be continuous or pulsed. The choice of the architecture depends on the sensors to be used.
Scanning antenna.
An electronically scanned array (ESA), or a phased array, offers advantages over mechanically scanned antennas such as instantaneous beam scanning, the availability of multiple concurrent agile beams, and concurrently operating radar modes. Figures of merit of an ESA are the bandwidth, the effective isotropically radiated power (EIRP), the GR/T quotient, and the field of view. EIRP is the product of the transmit gain, GT, and the transmit power, PT. GR/T is the quotient of the receive gain and the antenna noise temperature. A high EIRP and GR/T are a prerequisite for long-range detection. Design choices are:
formula_3
formula_4
Note that formula_1 is not a function of frequency. A constant phase shift over frequency has important applications as well, albeit in wideband pattern synthesis. For example, the generation of wideband monopulse formula_5 receive patterns depends on a feed network which combines two subarrays using a wideband hybrid coupler.
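As a small numerical illustration of formula_4 (a sketch only; the element spacing and true-time delay below are arbitrary example values, and the angle is measured from the array axis as in the formula above):

import math

c = 299_792_458.0        # speed of light in m/s
d = 0.015                # element spacing in metres (example value)
delta_tau = 20e-12       # true-time delay between adjacent elements in seconds (example value)

theta = math.acos(c * delta_tau / d)   # steering angle, independent of the operating frequency
print(round(math.degrees(theta), 1))   # about 66.4 degrees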
FMCW versus pulse-Doppler.
The range and velocity of a target are detected through pulse delay ranging and the Doppler effect (pulse-Doppler), or through the frequency modulation (FM) ranging and range differentiation. The range resolution is limited by the instantaneous signal bandwidth of the radar sensor in both pulse-Doppler and frequency modulated continuous wave (FMCW) radars. Monostatic monopulse-Doppler radar sensors offer advantages over FMCW radars, such as:
Bistatic versus monostatic.
Bistatic radars have a spatially dislocated transmitter and receiver. In this case, sensors at the transmitting antenna report back to the system the angular position of the scanning beam, while the energy-detecting sensors are located at the other antenna. Time synchronisation is crucial for interpreting the data, as the receiving antenna is not moving.
Monostatic radars have a spatially co-located transmitter and receiver. In this case, the emission has to be isolated from the reception sensors, as the energy emitted is far greater than the energy returned.
Platform.
Radar clutter is platform-dependent. Examples of platforms are airborne, car-borne, ship-borne, space-borne, and ground-based platforms.
Propagation window.
The radar frequency is selected based on size and technology readiness level considerations. The radar frequency is also chosen in order to optimize the radar cross-section (RCS) of the envisioned target, which is frequency-dependent. Examples of propagation windows are the 3 GHz (S), 10 GHz (X), 24 GHz (K), 35 GHz (Ka), 77 GHz (W), 94 GHz (W) propagation windows.
Radar Mode.
Radar modes for point targets include search and track. Radar modes for distributed targets include ground mapping and imaging. The radar mode sets the radar waveform.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta \\tau"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "k \\, d \\, \\cos{\\theta} = \\beta \\left ( f \\right ) = 2 \\, \\pi \\, \\frac{c}{\\lambda_0} \\, \\Delta \\tau"
},
{
"math_id": 4,
"text": "\\theta = \\arccos { \\left ( \\frac{c}{d} \\, \\Delta \\tau \\right )}"
},
{
"math_id": 5,
"text": "\\Sigma/\\Delta"
}
] | https://en.wikipedia.org/wiki?curid=12487614 |
1249107 | Standard rate turn | Aircraft maneuvering is referenced to a standard rate turn, also known as a rate one turn (ROT).
A standard rate turn is defined as a 3° per second turn, which completes a 360° turn in 2 minutes. This is known as a 2-minute turn, or rate one (180°/min). Fast airplanes, or aircraft on certain precision approaches, use a half standard rate ('rate half' in some countries), but the definition of standard rate does not change.
Usage.
Standardized turn rates are often employed in approaches and holding patterns to provide a reference for controllers and pilots so that each will know what the other is expecting. The pilot banks the airplane such that the turn and slip indicator points to the mark appropriate for that aircraft and then uses a clock to time the turn. The pilot can roll out at any desired direction depending on the length of time in the turn.
During a constant-bank level turn, increasing airspeed decreases the rate of turn, and increases the turn radius. A "rate half turn" (1.5° per second) is normally used when flying faster than 250 kn. The term "rate two turn" (6° per second) is used on some low speed aircraft.
Instrumentation.
Instruments, either the turn and slip indicator or the turn coordinator, have the standard rate or half standard rate turn clearly marked. Slower aircraft are equipped with 2-minute turn indicators while faster aircraft are often equipped with 4-minute turn indicators.
Formulae.
Angle of bank formula.
The formula for calculating the angle of bank for a specific true airspeed (TAS) in SI units (or other coherent system) is:
formula_0
where formula_1 is the angle of bank, formula_2 is true airspeed, formula_3 is the radius of the turn, and formula_4 is the acceleration due to gravity.
For a rate-one turn and velocity in knots (nautical miles per hour, symbol kn), this comes to
formula_5.
A convenient approximation for the bank angle in degrees is
formula_6
For aircraft holding purposes, the International Civil Aviation Organization (ICAO) mandates that all turns should be made, "at a bank angle of 25° or at a rate of 3° per second, whichever requires the lesser bank." By the above formula, a rate-one turn at a TAS greater than 180 knots would require a bank angle of more than 25°. Therefore, faster aircraft just use 25° for their turns.
Radius of turn formula.
One might also want to calculate the radius formula_3 of a Rate 1, 2 or 3 turn at a specific TAS.
formula_7
Where formula_8 is the rate of turn.
If the velocity and the angle of bank is given,
formula_9
where g is the gravitational acceleration. This is a simplified formula that ignores slip and returns zero for 90° of bank.
In metres (where gravity is approximately 9.81 metres per second per second, and velocity is given in metres per second):
formula_10
Or in feet (where velocity is given in knots):
formula_11
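The formulas above can be evaluated with a short script such as the following sketch (illustrative only; gravity is taken as 9.81 m/s², 1 kn is taken as 0.514444 m/s, and the function names are invented for this example):

import math

KN_TO_MS = 0.514444
G = 9.81

def rate_one_bank_angle_deg(tas_kn):
    # Exact bank angle in degrees for a standard rate (3 degrees per second) turn.
    v = tas_kn * KN_TO_MS
    r = v / math.radians(3.0)              # turn radius in metres, since v = omega * r
    return math.degrees(math.atan(v * v / (r * G)))

def turn_radius_m(tas_kn, bank_deg):
    # Turn radius in metres for a given true airspeed (knots) and bank angle (degrees).
    v = tas_kn * KN_TO_MS
    return v * v / (G * math.tan(math.radians(bank_deg)))

tas = 120.0
phi = rate_one_bank_angle_deg(tas)
print(round(phi, 1), round(tas / 10 + 7, 1))   # exact bank angle vs. the rule-of-thumb approximation
print(round(turn_radius_m(tas, phi)))          # radius of the rate-one turn in metres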
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi = \\arctan \\frac{v_\\mathrm{t}^2}{rg}"
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "v_\\mathrm{t}"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "\\phi = \\arctan \\frac{v_\\mathrm{t} [\\mathrm{kn}]} {364}"
},
{
"math_id": 6,
"text": "\\phi / ^\\circ \\approx \\frac{v_\\mathrm{t} [\\mathrm{kn}]} {10} + 7"
},
{
"math_id": 7,
"text": "r[\\mathrm{nmi}] = \\frac{v_\\mathrm{t} [\\mathrm{kn}]} {20 \\pi \\omega_\\mathrm{turn} [^\\circ / \\mathrm{s}] }"
},
{
"math_id": 8,
"text": "\\omega_\\mathrm{turn}"
},
{
"math_id": 9,
"text": "r\\; = \\frac{v_\\mathrm{t}^2} {g \\tan \\phi}"
},
{
"math_id": 10,
"text": "r[\\mathrm{m}] = \\frac{v_\\mathrm{t}^2} {9.81\\ \\mathrm{m/s^2} \\tan \\phi}"
},
{
"math_id": 11,
"text": "r[\\mathrm{ft}] = \\frac{(v_\\mathrm{t} [\\mathrm{kn}])^2} {11.294 \\tan \\phi}"
}
] | https://en.wikipedia.org/wiki?curid=1249107 |
1249147 | Pressure tank | A pressure tank or pressurizer is used in a piping system to maintain a desired pressure. Applications include buffering water pressure in homes.
A simple well water control system.
Referring to the figure on the left, a submersible water pump is installed in a well. The pressure switch turns the water pump on when it senses a pressure that is less than "P"lo and turns it off when it senses a pressure greater than "P"hi. While the pump is on, the pressure tank fills up. The pressure tank is then depleted as it supplies water within the specified pressure range; this prevents "short-cycling", in which the pump would try to establish the proper pressure by rapidly cycling between "P"lo and "P"hi.
A simple pressure tank would be just a tank which held water with an air space above the water which would compress as more water entered the tank. Modern systems isolate the water from the pressurized air using a flexible rubber or plastic diaphragm or bladder, because otherwise the air will dissolve in the water and be removed from the tank by usage. Eventually there will be little or no air and the tank will become "waterlogged" causing short-cycling, and will need to be drained to restore operation. The diaphragm or bladder may itself exert a pressure on the water, but it is usually small and will be neglected in the following discussion.
Referring to the diagram on the right, a pressure tank is generally pressurized when empty with a "charging pressure" "P"c, which is usually about 2 psi below the turn-on pressure "P"lo (Case 1). The total volume of the tank is "V"t. When in use, the air in the tank will be compressed to pressure "P" and there will be a volume "V" of water in the tank (Case 2). In the following development, all pressures are gauge pressures, which are the pressures above atmospheric pressure ("P"a, which is altitude dependent). The ideal gas law may be written for both cases, and the amount of air in each case is equal:
formula_0
formula_1
where "N" is the number of molecules of gas (equal in both cases), "k" is the Boltzmann constant and "T" is the temperature. Assuming that the temperature is equal for both cases, the above equations can be solved for the water pressure/volume relationship in the tank:
formula_2
formula_3
Tanks are generally specified by their total volume "V"t and the "drawdown" (Δ"V"), which is the amount of water the tank will eject as the tank pressure goes from "P"hi to "P"lo, which are established by the pressure switch:
formula_4
The reason for the charging pressure can now be seen: The larger the charging pressure, the larger the drawdown. However, a charging pressure above "P"lo will not allow the pump to turn on when the water pressure is below "P"lo, so it is kept a bit below "P"lo. Another important parameter is the drawdown factor ("f"Δ"V"), which is the ratio of the drawdown to the total tank volume:
formula_5
This factor is independent of the tank size so that the drawdown can be calculated for any tank, given its total volume, atmospheric pressure, charging pressure, and the limiting pressures established by the pressure switch.
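As a numerical illustration of the drawdown formulas (a sketch; all pressures are gauge pressures and the example values below are arbitrary but typical of a household system):

def drawdown_factor(p_a, p_c, p_lo, p_hi):
    # Fraction of the total tank volume delivered as the pressure falls from P_hi to P_lo.
    return (p_a + p_c) * (p_hi - p_lo) / ((p_hi + p_a) * (p_lo + p_a))

# Example: 14.7 psi atmosphere, 28 psi charging pressure, 30/50 psi switch settings, 20 gallon tank
p_a, p_c, p_lo, p_hi, v_t = 14.7, 28.0, 30.0, 50.0, 20.0
f = drawdown_factor(p_a, p_c, p_lo, p_hi)
print(round(f, 3), round(f * v_t, 2))   # drawdown factor and drawdown in gallons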
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(P_\\text{c}+P_\\text{a})V_\\text{t} = NkT \\qquad \\text{Case 1}"
},
{
"math_id": 1,
"text": "(P+P_\\text{a})(V_\\text{t}-V) = NkT \\qquad \\mathrm{Case 2}"
},
{
"math_id": 2,
"text": "V(P) = V_\\text{t}\\frac{P-P_\\text{c}}{P+P_\\text{a}}"
},
{
"math_id": 3,
"text": "P(V) = \\frac{P_\\text{a} V+P_\\text{c} V_\\text{t}}{V_\\text{t}-V}"
},
{
"math_id": 4,
"text": "\\Delta V = V(P_\\text{hi}) - V(P_\\text{lo}) = V_\\text{t} \\frac{(P_\\text{a}+P_\\text{c})(P_\\text{hi}-P_\\text{lo})}{(P_\\text{hi}+P_\\text{a})(P_\\text{lo}+P_\\text{a})}"
},
{
"math_id": 5,
"text": "f_{\\Delta V}= \\frac{(P_\\text{a}+P_\\text{c})(P_\\text{hi}-P_\\text{lo})}{(P_\\text{hi}+P_\\text{a})(P_\\text{lo}+P_\\text{a})}"
}
] | https://en.wikipedia.org/wiki?curid=1249147 |
1249738 | Supertrace | In the theory of superalgebras, if "A" is a commutative superalgebra, "V" is a free right "A"-supermodule and "T" is an endomorphism from "V" to itself, then the supertrace of "T", str("T") is defined by the following trace diagram:
More concretely, if we write out "T" in block matrix form after the decomposition into even and odd subspaces as follows,
formula_0
then the supertrace
str("T") = the ordinary trace of "T"00 − the ordinary trace of "T"11.
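For a purely numerical block matrix (entries in the real numbers rather than a general commutative superalgebra), the supertrace can be computed as in the following minimal sketch, where p and q denote the even and odd dimensions; the function name is chosen for this example only:

import numpy as np

def supertrace(T, p):
    # Supertrace of a (p+q) x (p+q) block matrix: tr(T00) - tr(T11).
    T = np.asarray(T)
    return np.trace(T[:p, :p]) - np.trace(T[p:, p:])

# Example with p = 2 even and q = 1 odd dimensions:
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
print(supertrace(T, p=2))   # (1 + 3) - 5 = -1.0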
Let us show that the supertrace does not depend on a basis.
Suppose e1, ..., ep are the even basis vectors and e"p"+1, ..., e"p"+"q" are the odd basis vectors. Then, the components of "T", which are elements of "A", are defined as
formula_1
The grading of "T""i""j" is the sum of the gradings of "T", e"i", e"j" mod 2.
A change of basis to e1', ..., ep', e("p"+1)', ..., e("p"+"q")' is given by the supermatrix
formula_2
and the inverse supermatrix
formula_3
where of course, "AA"−1 = "A"−1"A" = 1 (the identity).
We can now check explicitly that the supertrace is basis independent. In the case where "T" is even, we have
formula_4
In the case where "T" is odd, we have
formula_5
The ordinary trace is not basis independent, so the appropriate trace to use in the Z2-graded setting is the supertrace.
The supertrace satisfies the property
formula_6
for all "T"1, "T"2 in End("V"). In particular, the supertrace of a supercommutator is zero.
In fact, one can define a supertrace more generally for any associative superalgebra "E" over a commutative superalgebra "A" as a linear map tr: "E" -> "A" which vanishes on supercommutators. Such a supertrace is not uniquely defined; it can always at least be modified by multiplication by an element of "A".
Physics applications.
In supersymmetric quantum field theories, in which the action integral is invariant under a set of symmetry transformations (known as supersymmetry transformations) whose algebras are superalgebras, the supertrace has a variety of applications. In such a context, the supertrace of the mass matrix for the theory can be written as a sum over spins of the traces of the mass matrices for particles of different spin:
formula_7
In anomaly-free theories where only renormalizable terms appear in the superpotential, the above supertrace can be shown to vanish, even when supersymmetry is spontaneously broken.
The contribution to the effective potential arising at one loop (sometimes referred to as the Coleman–Weinberg potential) can also be written in terms of a supertrace. If formula_8 is the mass matrix for a given theory, the one-loop potential can be written as
formula_9
where formula_10 and formula_11 are the respective tree-level mass matrices for the separate bosonic and fermionic degrees of freedom in the theory and formula_12 is a cutoff scale.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T=\\begin{pmatrix}T_{00}&T_{01}\\\\T_{10}&T_{11}\\end{pmatrix}"
},
{
"math_id": 1,
"text": "T(\\mathbf{e}_j)=\\mathbf{e}_i T^i_j.\\,"
},
{
"math_id": 2,
"text": "\\mathbf{e}_{i'}=\\mathbf{e}_i A^i_{i'}"
},
{
"math_id": 3,
"text": "\\mathbf{e}_i=\\mathbf{e}_{i'} (A^{-1})^{i'}_i,\\,"
},
{
"math_id": 4,
"text": "\\operatorname{str}(A^{-1} T A)=(-1)^{|i'|} (A^{-1})^{i'}_j T^j_k A^k_{i'}=(-1)^{|i'|}(-1)^{(|i'|+|j|)(|i'|+|j|)}T^j_k A^k_{i'} (A^{-1})^{i'}_j=(-1)^{|j|} T^j_j\n=\\operatorname{str}(T)."
},
{
"math_id": 5,
"text": "\\operatorname{str}(A^{-1} T A)=(-1)^{|i'|} (A^{-1})^{i'}_j T^j_k A^k_{i'}=(-1)^{|i'|}(-1)^{(1+|j|+|k|)(|i'|+|j|)}T^j_k (A^{-1})^{i'}_j A^k_{i'} =(-1)^{|j|} T^j_j\n=\\operatorname{str}(T)."
},
{
"math_id": 6,
"text": "\\operatorname{str}(T_1 T_2) = (-1)^{|T_1||T_2|} \\operatorname{str}(T_2 T_1)"
},
{
"math_id": 7,
"text": "\\operatorname{str}[M^2]=\\sum_s(-1)^{2s} (2s+1)\\operatorname{tr}[m_s^2]."
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "V_{eff}^{1-loop}=\\dfrac{1}{64\\pi^2}\\operatorname{str}\\bigg[M^4\\ln\\Big(\\dfrac{M^2}{\\Lambda^2}\\Big)\\bigg] = \n\\dfrac{1}{64\\pi^2}\\operatorname{tr}\\bigg[m_{B}^4\\ln\\Big(\\dfrac{m_{B}^2}{\\Lambda^2}\\Big)-\nm_{F}^4\\ln\\Big(\\dfrac{m_{F}^2}{\\Lambda^2}\\Big)\\bigg]"
},
{
"math_id": 10,
"text": "m_B"
},
{
"math_id": 11,
"text": "m_F"
},
{
"math_id": 12,
"text": "\\Lambda"
}
] | https://en.wikipedia.org/wiki?curid=1249738 |
12498127 | Risk measure | In financial mathematics, a risk measure is used to determine the amount of an asset or set of assets (traditionally currency) to be kept in reserve. The purpose of this reserve is to make the risks taken by financial institutions, such as banks and insurance companies, acceptable to the regulator. In recent years attention has turned to convex and coherent risk measurement.
Mathematically.
A risk measure is defined as a mapping from a set of random variables to the real numbers. This set of random variables represents portfolio returns. The common notation for a risk measure associated with a random variable formula_0 is formula_1. A risk measure formula_2 should have certain properties:
Normalized: formula_3
Translative: formula_4
Monotone: formula_5
Set-valued.
In a situation with formula_6-valued portfolios such that risk can be measured in formula_7 of the assets, then a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs.
Mathematically.
A set-valued risk measure is a function formula_8, where formula_9 is a formula_10-dimensional Lp space, formula_11, and formula_12 where formula_13 is a constant solvency cone and formula_14 is the set of portfolios of the formula_15 reference assets. formula_16 must have the following properties:
formula_17
formula_18
formula_19
Examples.
Variance.
Variance (or standard deviation) is not a risk measure in the above sense. This can be seen since it has neither the translation property nor monotonicity. That is, formula_20 for all formula_21, and a simple counterexample for monotonicity can be found. The standard deviation is a deviation risk measure. To avoid any confusion, note that deviation risk measures, such as variance and standard deviation are sometimes called risk measures in different fields.
Relation to acceptance set.
There is a one-to-one correspondence between an acceptance set and a corresponding risk measure. As defined below it can be shown that formula_22 and formula_23. A risk measure formula_2 induces the acceptance set formula_25, and a set-valued risk measure formula_16 induces the acceptance set formula_26; conversely, an acceptance set formula_27 induces the scalar risk measure formula_28 and the set-valued risk measure formula_29.
Relation with deviation risk measure.
There is a one-to-one relationship between a deviation risk measure "D" and an expectation-bounded risk measure formula_24, where for any formula_30, formula_31 and formula_32.
formula_24 is called expectation bounded if it satisfies formula_33 for any nonconstant "X" and formula_34 for any constant "X".
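As a small numerical illustration of this relationship (a sketch; the simulated returns and the choice of the standard deviation as the deviation measure "D" are examples only):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=0.05, scale=0.2, size=100_000)   # simulated portfolio returns

def rho(x):
    # Expectation-bounded risk measure built from the deviation measure D = standard deviation.
    return np.std(x) - np.mean(x)

a = 0.1
print(np.isclose(rho(X + a), rho(X) - a))   # translation property of a risk measure
print(rho(X) > np.mean(-X))                 # expectation bounded, since std(X) > 0 for nonconstant X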
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\rho(X)"
},
{
"math_id": 2,
"text": "\\rho: \\mathcal{L} \\to \\mathbb{R} \\cup \\{+\\infty\\}"
},
{
"math_id": 3,
"text": "\\rho(0) = 0"
},
{
"math_id": 4,
"text": "\\mathrm{If}\\; a \\in \\mathbb{R} \\; \\mathrm{and} \\; Z \\in \\mathcal{L} ,\\;\\mathrm{then}\\; \\rho(Z + a) = \\rho(Z) - a"
},
{
"math_id": 5,
"text": "\\mathrm{If}\\; Z_1,Z_2 \\in \\mathcal{L} \\;\\mathrm{and}\\; Z_1 \\leq Z_2 ,\\; \\mathrm{then} \\; \\rho(Z_2) \\leq \\rho(Z_1)"
},
{
"math_id": 6,
"text": "\\mathbb{R}^d"
},
{
"math_id": 7,
"text": "m \\leq d"
},
{
"math_id": 8,
"text": "R: L_d^p \\rightarrow \\mathbb{F}_M"
},
{
"math_id": 9,
"text": "L_d^p"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "\\mathbb{F}_M = \\{D \\subseteq M: D = cl (D + K_M)\\}"
},
{
"math_id": 12,
"text": "K_M = K \\cap M"
},
{
"math_id": 13,
"text": "K"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "R"
},
{
"math_id": 17,
"text": "K_M \\subseteq R(0) \\text{ and } R(0) \\cap -\\operatorname{int}K_M = \\emptyset"
},
{
"math_id": 18,
"text": "\\forall X \\in L_d^p, \\forall u \\in M: R(X + u1) = R(X) - u"
},
{
"math_id": 19,
"text": "\\forall X_2 - X_1 \\in L_d^p(K) \\Rightarrow R(X_2) \\supseteq R(X_1)"
},
{
"math_id": 20,
"text": "Var(X + a) = Var(X) \\neq Var(X) - a"
},
{
"math_id": 21,
"text": "a \\in \\mathbb{R}"
},
{
"math_id": 22,
"text": "R_{A_R}(X) = R(X)"
},
{
"math_id": 23,
"text": "A_{R_A} = A"
},
{
"math_id": 24,
"text": "\\rho"
},
{
"math_id": 25,
"text": "A_{\\rho} = \\{X \\in L^p: \\rho(X) \\leq 0\\}"
},
{
"math_id": 26,
"text": "A_R = \\{X \\in L^p_d: 0 \\in R(X)\\}"
},
{
"math_id": 27,
"text": "A"
},
{
"math_id": 28,
"text": "\\rho_A(X) = \\inf\\{u \\in \\mathbb{R}: X + u1 \\in A\\}"
},
{
"math_id": 29,
"text": "R_A(X) = \\{u \\in M: X + u1 \\in A\\}"
},
{
"math_id": 30,
"text": "X \\in \\mathcal{L}^2"
},
{
"math_id": 31,
"text": "D(X) = \\rho(X - \\mathbb{E}[X])"
},
{
"math_id": 32,
"text": "\\rho(X) = D(X) - \\mathbb{E}[X]"
},
{
"math_id": 33,
"text": "\\rho(X) > \\mathbb{E}[-X]"
},
{
"math_id": 34,
"text": "\\rho(X) = \\mathbb{E}[-X]"
}
] | https://en.wikipedia.org/wiki?curid=12498127 |
12499410 | Network motif | Type of sub-graph
Network motifs are recurrent and statistically significant subgraphs or patterns of a larger graph. All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs.
Network motifs are sub-graphs that repeat themselves in a specific network or even among various networks. Each of these sub-graphs, defined by a particular pattern of interactions between vertices, may reflect a framework in which particular functions are achieved efficiently. Indeed, motifs are of notable importance largely because they may reflect functional properties. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging.
Definitions.
Let G = (V, E) and G′ = (V′, E′) be two graphs. Graph G′ is a "sub-graph" of graph G (written as G′ ⊆ G) if V′ ⊆ V and E′ ⊆ E ∩ (V′ × V′). If G′ ⊆ G and G′ contains all of the edges 〈u, v〉 ∈ E with u, v ∈ V′, then G′ is an "induced sub-graph" of G. We call G′ and G isomorphic (written as G′ ↔ G), if there exists a bijection (one-to-one correspondence) f:V′ → V with 〈u, v〉 ∈ E′ ⇔ 〈f(u), f(v)〉 ∈ E for all u, v ∈ V′. The mapping f is called an isomorphism between G and G′.
When G″ ⊂ G and there exists an isomorphism between the sub-graph G″ and a graph G′, this mapping represents an "appearance" of G′ in G. The number of appearances of graph G′ in G is called the frequency FG of G′ in G. A graph is called "recurrent" (or "frequent") in G when its "frequency" FG(G′) is above a predefined threshold or cut-off value. We use terms "pattern" and "frequent sub-graph" in this review interchangeably. There is an ensemble Ω(G) of random graphs corresponding to the null-model associated to G. We should choose N random graphs uniformly from Ω(G) and calculate the frequency for a particular frequent sub-graph G′ in G. If the frequency of G′ in G is higher than its arithmetic mean frequency in N random graphs Ri, where 1 ≤ i ≤ N, we call this recurrent pattern "significant" and hence treat G′ as a "network motif" for G. For a small graph G′, the network G, and a set of randomized networks R(G) ⊆ Ω(R), where R(G) = N, the Z-score of the frequency of G′ is given by
formula_0
where μR(G′) and σR(G′) stand for the mean and standard deviation of the frequency in set R(G), respectively. The larger the Z(G′), the more significant is the sub-graph G′ as a motif. Alternatively, another measurement in statistical hypothesis testing that can be considered in motif detection is the "p"-value, given as the probability of FR(G′) ≥ FG(G′) (as its null-hypothesis), where FR(G′) indicates the frequency of G' in a randomized network. A sub-graph with "p"-value less than a threshold (commonly 0.01 or 0.05) will be treated as a significant pattern. The "p"-value for the frequency of G′ is defined as
formula_1
where N indicates the number of randomized networks, i is defined over an ensemble of randomized networks, and the Kronecker delta function δ(c(i)) is one if the condition c(i) holds. The concentration of a particular n-size sub-graph G′ in network G refers to the ratio of the sub-graph appearance in the network to the total "n"-size non-isomorphic sub-graphs' frequencies, which is formulated by
formula_2
where index i is defined over the set of all non-isomorphic n-size graphs. Another statistical measurement is defined for evaluating network motifs, but it is rarely used in known algorithms. This measurement was introduced by Picard "et al." in 2008 and uses the Poisson distribution, rather than the Gaussian normal distribution that is implicitly being used above.
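Given the sub-graph counts produced by some enumeration tool, the significance measures defined above can be computed with a few lines of code, as in this sketch (illustrative only; the counts used in the example are made up):

import statistics

def z_score(f_original, f_random):
    # Z-score of a sub-graph frequency against an ensemble of randomized networks.
    mu = statistics.mean(f_random)
    sigma = statistics.stdev(f_random)
    return (f_original - mu) / sigma

def p_value(f_original, f_random):
    # Fraction of randomized networks whose frequency is at least the original frequency.
    return sum(f >= f_original for f in f_random) / len(f_random)

def concentration(f_target, f_all_same_size):
    # Frequency of one n-node sub-graph relative to all non-isomorphic n-node sub-graphs.
    return f_target / sum(f_all_same_size)

random_counts = [11, 9, 14, 8, 12, 10, 13, 9, 11, 12]   # counts of the sub-graph in 10 randomized networks
print(z_score(42, random_counts), p_value(42, random_counts))
print(concentration(42, [42, 120, 300]))   # share of this sub-graph among all same-size sub-graphs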
In addition, three specific concepts of sub-graph frequency have been proposed. As the figure illustrates, the first frequency concept F1 considers all matches of a graph in original network. This definition is similar to what we have introduced above. The second concept F2 is defined as the maximum number of edge-disjoint instances of a given graph in original network. And finally, the frequency concept F3 entails matches with disjoint edges and nodes. Therefore, the two concepts F2 and F3 restrict the usage of elements of the graph, and as can be inferred, the frequency of a sub-graph declines by imposing restrictions on network element usage. As a result, a network motif detection algorithm would pass over more candidate sub-graphs if we insist on frequency concepts F2 and F3.
History.
The study of network motifs was pioneered by Holland and Leinhardt who introduced the concept of a triad census of networks. They introduced methods to enumerate various types of subgraph configurations, and test whether the subgraph counts are statistically different from those expected in random networks.
This idea was further generalized in 2002 by Uri Alon and his group when network motifs were discovered in the gene regulation (transcription) network of the bacteria "E. coli" and then in a large set of natural networks. Since then, a considerable number of studies have been conducted on the subject. Some of these studies focus on the biological applications, while others focus on the computational theory of network motifs.
The biological studies endeavor to interpret the motifs detected for biological networks. For example, in work following, the network motifs found in "E. coli" were discovered in the transcription networks of other bacteria as well as yeast and higher organisms. A distinct set of network motifs were identified in other types of biological networks such as neuronal networks and protein interaction networks.
The computational research has focused on improving existing motif detection tools to assist the biological investigations and allow larger networks to be analyzed. Several different algorithms have been provided so far, which are elaborated in the next section in chronological order.
Most recently, the acc-MOTIF tool to detect network motifs was released.
Motif discovery algorithms.
Various solutions have been proposed for the challenging problem of network motif (NM) discovery. These algorithms can be classified under various paradigms such as exact counting methods, sampling methods, pattern growth methods and so on. However, motif discovery problem comprises two main steps: first, calculating the number of occurrences of a sub-graph and then, evaluating the sub-graph significance. The recurrence is significant if it is detectably far more than expected. Roughly speaking, the expected number of appearances of a sub-graph can be determined by a Null-model, which is defined by an ensemble of random networks with some of the same properties as the original network.
Until 2004, the only exact counting method for NM detection was the brute-force one proposed by Milo "et al.". This algorithm was successful for discovering small motifs, but using this method for finding even size 5 or 6 motifs was not computationally feasible. Hence, a new approach to this problem was needed.
Here, a review on computational aspects of major algorithms is given and their related benefits and drawbacks from an algorithmic perspective are discussed.
Classification of algorithms.
The table below lists the motif discovery algorithms that will be described in this section. They can be divided into two general categories: those based on exact counting and those using statistical sampling and estimations instead. Because the second group does not count all the occurrences of a subgraph in the main network, the algorithms belonging to this group are faster, but they might yield biased and unrealistic results.
In the next level, the exact counting algorithms can be classified to network-centric and subgraph-centric methods. The algorithms of the first class search the given network for all subgraphs of a given size, while the algorithms falling into the second class first generate different possible non-isomorphic graphs of the given size, and then explore the network for each generated subgraph separately. Each approach has its advantages and disadvantages which are discussed below.
The table also indicates whether an algorithm can be used for directed or undirected networks as well as induced or non-induced subgraphs. For more information refer to the provided web links or lab addresses.
mfinder.
Kashtan "et al." published "mfinder", the first motif-mining tool, in 2004. It implements two kinds of motif finding algorithms: a full enumeration and the first sampling method.
Their sampling discovery algorithm was based on "edge sampling" throughout the network. This algorithm estimates concentrations of induced sub-graphs and can be utilized for motif discovery in directed or undirected networks. The sampling procedure of the algorithm starts from an arbitrary edge of the network that leads to a sub-graph of size two, and then expands the sub-graph by choosing a random edge that is incident to the current sub-graph. After that, it continues choosing random neighboring edges until a sub-graph of size n is obtained. Finally, the sampled sub-graph is expanded to include all of the edges that exist in the network between these n nodes. When an algorithm uses a sampling approach, taking unbiased samples is the most important issue that the algorithm might address. The sampling procedure, however, does not take samples uniformly and therefore Kashtan "et al." proposed a weighting scheme that assigns different weights to the different sub-graphs within network. The underlying principle of weight allocation is exploiting the information of the sampling probability for each sub-graph, i.e. the probable sub-graphs will obtain comparatively less weights in comparison to the improbable sub-graphs; hence, the algorithm must calculate the sampling probability of each sub-graph that has been sampled. This weighting technique assists "mfinder" to determine sub-graph concentrations impartially.
In sharp contrast to exhaustive search, the computational time of the algorithm is, surprisingly, asymptotically independent of the network size. An analysis of the computational time of the algorithm has shown that it takes O(nn) for each sample of a sub-graph of size n from the network. On the other hand, there is no analysis of the classification time of sampled sub-graphs, which requires solving the "graph isomorphism" problem for each sub-graph sample. Additionally, an extra computational effort is imposed on the algorithm by the sub-graph weight calculation. Moreover, the algorithm may sample the same sub-graph multiple times – spending time without gathering any information. In conclusion, by taking advantage of sampling, the algorithm performs more efficiently than an exhaustive search algorithm; however, it only determines sub-graph concentrations approximately. Because of its main implementation, this algorithm can find motifs only up to size 6, and it reports just the most significant motif rather than all of them. Also, it is necessary to mention that this tool has no option of visual presentation. The sampling algorithm is shown briefly:
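The following minimal Python sketch illustrates the edge-sampling step described above (an illustration of the idea only, without the weighting scheme or the probability bookkeeping of the full "mfinder" algorithm; the graph is assumed to be undirected and represented as an adjacency dictionary with comparable node labels):

import random

def sample_subgraph(adj, n, rng=random):
    # Grow the node set of one connected n-node sub-graph, starting from a random edge.
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = rng.choice(edges)
    nodes = {u, v}
    while len(nodes) < n:
        # candidate edges incident to the current sub-graph that bring in a new node
        frontier = [(a, b) for a in nodes for b in adj[a] if b not in nodes]
        if not frontier:
            break              # the component is smaller than n; a real implementation would resample
        a, b = rng.choice(frontier)
        nodes.add(b)
    return nodes               # the induced sub-graph on these nodes is the sample

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
print(sample_subgraph(adj, 3))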
FPF (Mavisto).
Schreiber and Schwöbbermeyer proposed an algorithm named "flexible pattern finder (FPF)" for extracting frequent sub-graphs of an input network and implemented it in a system named "Mavisto". Their algorithm exploits the "downward closure" property which is applicable for frequency concepts F2 and F3. The downward closure property asserts that the frequency for sub-graphs decrease monotonically by increasing the size of sub-graphs; however, this property does not hold necessarily for frequency concept F1. FPF is based on a "pattern tree" (see figure) consisting of nodes that represents different graphs (or patterns), where the parent of each node is a sub-graph of its children nodes; in other words, the corresponding graph of each pattern tree's node is expanded by adding a new edge to the graph of its parent node.
At first, the FPF algorithm enumerates and maintains the information of all matches of a sub-graph located at the root of the pattern tree. Then, one-by-one it builds child nodes of the previous node in the pattern tree by adding one edge supported by a matching edge in the target graph, and tries to expand all of the previous information about matches to the new sub-graph (child node). In next step, it decides whether the frequency of the current pattern is lower than a predefined threshold or not. If it is lower and if downward closure holds, FPF can abandon that path and not traverse further in this part of the tree; as a result, unnecessary computation is avoided. This procedure is continued until there is no remaining path to traverse.
The advantage of the algorithm is that it does not consider infrequent sub-graphs and tries to finish the enumeration process as soon as possible; therefore, it only spends time for promising nodes in the pattern tree and discards all other nodes. As an added bonus, the pattern tree notion permits FPF to be implemented and executed in a parallel manner since it is possible to traverse each path of the pattern tree independently. However, FPF is most useful for frequency concepts F2 and F3, because downward closure is not applicable to F1. Nevertheless, the pattern tree is still practical for F1 if the algorithm runs in parallel. Another advantage of the algorithm is that the implementation of this algorithm has no limitation on motif size, which makes it more amenable to improvements. The pseudocode of FPF (Mavisto) is shown below:
ESU (FANMOD).
The sampling bias of Kashtan "et al." provided great impetus for designing better algorithms for the NM discovery problem. Although Kashtan "et al." tried to settle this drawback by means of a weighting scheme, this method imposed an undesired overhead on the running time as well a more complicated implementation. This tool is one of the most useful ones, as it supports visual options and also is an efficient algorithm with respect to time. But, it has a limitation on motif size as it does not allow searching for motifs of size 9 or higher because of the way the tool is implemented.
Wernicke introduced an algorithm named "RAND-ESU" that provides a significant improvement over "mfinder". This algorithm, which is based on the exact enumeration algorithm "ESU", has been implemented as an application called "FANMOD". "RAND-ESU" is a NM discovery algorithm applicable for both directed and undirected networks, effectively exploits an unbiased node sampling throughout the network, and prevents overcounting sub-graphs more than once. Furthermore, "RAND-ESU" uses a novel analytical approach called "DIRECT" for determining sub-graph significance instead of using an ensemble of random networks as a Null-model. The "DIRECT" method estimates the sub-graph concentration without explicitly generating random networks. Empirically, the DIRECT method is more efficient in comparison with the random network ensemble in case of sub-graphs with a very low concentration; however, the classical Null-model is faster than the "DIRECT" method for highly concentrated sub-graphs. In the following, we detail the "ESU" algorithm and then we show how this exact algorithm can be modified efficiently to "RAND-ESU" that estimates sub-graphs concentrations.
The algorithms "ESU" and "RAND-ESU" are fairly simple, and hence easy to implement. "ESU" first finds the set of all induced sub-graphs of size k; let Sk be this set. "ESU" can be implemented as a recursive function; the running of this function can be displayed as a tree-like structure of depth k, called the ESU-Tree (see figure). Each of the ESU-Tree nodes indicates the status of the recursive function that entails two consecutive sets SUB and EXT. SUB refers to nodes in the target network that are adjacent and establish a partial sub-graph of size |SUB| ≤ k. If |SUB| = k, the algorithm has found an induced complete sub-graph, so Sk = SUB ∪ Sk. However, if |SUB| < k, the algorithm must expand SUB to achieve cardinality k. This is done by the EXT set that contains all the nodes that satisfy two conditions: First, each of the nodes in EXT must be adjacent to at least one of the nodes in SUB; second, their numerical labels must be larger than the label of the first element in SUB. The first condition makes sure that the expansion of SUB nodes yields a connected graph and the second condition causes ESU-Tree leaves (see figure) to be distinct; as a result, it prevents overcounting. Note that the EXT set is not a static set, so in each step it may expand by some new nodes that do not breach the two conditions. The next step of ESU involves classification of sub-graphs placed in the ESU-Tree leaves into non-isomorphic size-k graph classes; consequently, ESU determines sub-graph frequencies and concentrations. This stage has been implemented simply by employing McKay's "nauty" algorithm, which classifies each sub-graph by performing a graph isomorphism test. Therefore, ESU finds the set of all induced k-size sub-graphs in a target graph by a recursive algorithm and then determines their frequency using an efficient tool.
The procedure of implementing "RAND-ESU" is quite straightforward and is one of the main advantages of "FANMOD". One can change the "ESU" algorithm to explore just a portion of the ESU-Tree leaves by applying a probability value 0 ≤ pd ≤ 1 for each level of the ESU-Tree and oblige "ESU" to traverse each child node of a node in level d-1 with probability pd. This new algorithm is called "RAND-ESU". Evidently, when pd = 1 for all levels, "RAND-ESU" acts like "ESU". For pd = 0 the algorithm finds nothing. Note that this procedure ensures that the chances of visiting each leaf of the ESU-Tree are the same, resulting in "unbiased" sampling of sub-graphs through the network. The probability of visiting each leaf is Πdpd and this is identical for all of the ESU-Tree leaves; therefore, this method guarantees unbiased sampling of sub-graphs from the network. Nonetheless, determining the value of pd for 1 ≤ d ≤ k is another issue that must be settled manually by an expert in order to get precise results for sub-graph concentrations. While there is no lucid prescript for this matter, Wernicke provides some general observations that may help in determining pd values. In summary, "RAND-ESU" is a very fast algorithm for NM discovery in the case of induced sub-graphs that supports an unbiased sampling method. Although the main "ESU" algorithm and so the "FANMOD" tool are known for discovering induced sub-graphs, there is a trivial modification to "ESU" which makes it possible to find non-induced sub-graphs, too. The pseudo code of "ESU (FANMOD)" is shown below:
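Since the original pseudocode is not reproduced here, the following short Python sketch illustrates the ESU enumeration described above (the adjacency-dictionary representation, integer vertex labels and function names are choices made for this illustration; RAND-ESU would additionally descend into each child only with probability pd):

def esu(adj, k):
    # Enumerate the node sets of all connected induced k-node sub-graphs (ESU).
    subgraphs = []

    def extend(sub, ext, v):
        if len(sub) == k:
            subgraphs.append(frozenset(sub))
            return
        while ext:
            w = ext.pop()
            # exclusive neighbours of w (not in SUB, not adjacent to SUB) with a label larger than v
            new_ext = ext | {u for u in adj[w]
                             if u > v and u not in sub and all(u not in adj[s] for s in sub)}
            extend(sub | {w}, new_ext, v)

    for v in adj:
        extend({v}, {u for u in adj[v] if u > v}, v)
    return subgraphs

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
print(sorted(sorted(s) for s in esu(adj, 3)))   # [[1, 2, 3], [1, 2, 4], [2, 3, 4]]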
NeMoFinder.
Chen "et al." introduced a new NM discovery algorithm called "NeMoFinder", which adapts the idea in "SPIN" to extract frequent trees and after that expands them into non-isomorphic graphs. "NeMoFinder" utilizes frequent size-n trees to partition the input network into a collection of size-n graphs, afterward finding frequent size-n sub-graphs by expansion of frequent trees edge-by-edge until getting a complete size-n graph Kn. The algorithm finds NMs in undirected networks and is not limited to extracting only induced sub-graphs. Furthermore, "NeMoFinder" is an exact enumeration algorithm and is not based on a sampling method. As Chen "et al." claim, "NeMoFinder" is applicable for detecting relatively large NMs, for instance, finding NMs up to size-12 from the whole "S. cerevisiae" (yeast) PPI network as the authors claimed.
"NeMoFinder" consists of three main steps. First, finding frequent size-n trees, then utilizing repeated size-n trees to divide the entire network into a collection of size-n graphs, finally, performing sub-graph join operations to find frequent size-n sub-graphs. In the first step, the algorithm detects all non-isomorphic size-n trees and mappings from a tree to the network. In the second step, the ranges of these mappings are employed to partition the network into size-n graphs. Up to this step, there is no distinction between "NeMoFinder" and an exact enumeration method. However, a large portion of non-isomorphic size-n graphs still remain. "NeMoFinder" exploits a heuristic to enumerate non-tree size-n graphs by the obtained information from the preceding steps. The main advantage of the algorithm is in the third step, which generates candidate sub-graphs from previously enumerated sub-graphs. This generation of new size-n sub-graphs is done by joining each previous sub-graph with derivative sub-graphs from itself called "cousin sub-graphs". These new sub-graphs contain one additional edge in comparison to the previous sub-graphs. However, there exist some problems in generating new sub-graphs: There is no clear method to derive cousins from a graph, joining a sub-graph with its cousins leads to redundancy in generating particular sub-graph more than once, and cousin determination is done by a canonical representation of the adjacency matrix which is not closed under join operation. "NeMoFinder" is an efficient network motif finding algorithm for motifs up to size 12 only for protein-protein interaction networks, which are presented as undirected graphs. And it is not able to work on directed networks which are so important in the field of complex and biological networks. The pseudocode of "NeMoFinder" is shown below:
Grochow–Kellis.
Grochow and Kellis proposed an "exact" algorithm for enumerating sub-graph appearances. The algorithm is based on a "motif-centric" approach, which means that the frequency of a given sub-graph, called the "query graph", is exhaustively determined by searching for all possible mappings from the query graph into the larger network. It is claimed that a "motif-centric" method has some beneficial features in comparison to "network-centric" methods. First of all, it avoids the increased complexity of sub-graph enumeration. Also, by using mapping instead of enumerating, it enables an improvement in the isomorphism test. To improve the performance of the algorithm, since exhaustive enumeration of mappings is inefficient, the authors introduced a fast method called "symmetry-breaking conditions". During straightforward sub-graph isomorphism tests, a sub-graph may be mapped to the same sub-graph of the query graph multiple times. In the Grochow–Kellis (GK) algorithm symmetry-breaking is used to avoid such multiple mappings. Here we introduce the GK algorithm and the symmetry-breaking condition which eliminates redundant isomorphism tests.
The GK algorithm discovers the whole set of mappings of a given query graph to the network in two major steps. It starts with the computation of the symmetry-breaking conditions of the query graph. Next, by means of a branch-and-bound method, the algorithm tries to find every possible mapping from the query graph to the network that meets the associated symmetry-breaking conditions. An example of the usage of symmetry-breaking conditions in the GK algorithm is demonstrated in the figure.
As mentioned above, the symmetry-breaking technique is a simple mechanism that precludes spending time finding a sub-graph more than once due to its symmetries. Note that computing the symmetry-breaking conditions requires finding all automorphisms of a given query graph. Even though there is no known efficient (polynomial-time) algorithm for the graph automorphism problem, this problem can be tackled efficiently in practice by McKay's tools. As claimed, using symmetry-breaking conditions in NM detection saves a great deal of running time. Moreover, it can be inferred from the results that using the symmetry-breaking conditions results in high efficiency particularly for directed networks in comparison to undirected networks. The symmetry-breaking conditions used in the GK algorithm are similar to the restriction which the "ESU" algorithm applies to the labels in the EXT and SUB sets. In conclusion, the GK algorithm computes the exact number of appearances of a given query graph in a large complex network, and exploiting symmetry-breaking conditions improves the algorithm's performance. Also, the GK algorithm is one of the known algorithms having no limitation on motif size in its implementation, so it can potentially find motifs of any size.
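A minimal Python sketch of the mapping search is given below for illustration; it is not the authors' implementation. The symmetry-breaking conditions are assumed to have been computed beforehand and are represented here, hypothetically, as ordering constraints between mapped node labels; with an empty constraint list the routine simply counts every (non-induced) mapping of the query graph, including automorphic duplicates.

```python
def count_query_mappings(q_adj, g_adj, sym_conditions=()):
    """Count mappings of a connected query graph into a network by backtracking,
    in the spirit of the Grochow-Kellis algorithm (query edges must map to
    network edges). q_adj, g_adj: dicts node -> set of neighbours (undirected).
    sym_conditions: pairs (a, b) of query nodes meaning f[a] < f[b] must hold."""
    # order query nodes so that each one after the first touches an earlier one
    order, seen = [], set()
    stack = [next(iter(q_adj))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        stack.extend(q_adj[u])

    count = 0

    def ok(f):
        # enforce a condition only once both of its endpoints are mapped
        return all(a not in f or b not in f or f[a] < f[b] for a, b in sym_conditions)

    def extend(i, f, used):
        nonlocal count
        if i == len(order):
            count += 1
            return
        u = order[i]
        mapped_neigh = [f[x] for x in q_adj[u] if x in f]
        # candidates must be adjacent to the images of all already-mapped neighbours
        cand = set(g_adj) if not mapped_neigh else set.intersection(*(g_adj[y] for y in mapped_neigh))
        for v in cand - used:
            f[u] = v
            if ok(f):
                extend(i + 1, f, used | {v})
            del f[u]

    extend(0, {}, set())
    return count
```

In the actual algorithm, one ordering constraint per automorphism-equivalence class of the query graph is what removes the duplicate mappings.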
Color-coding approach.
Most algorithms in the field of NM discovery are used to find induced sub-graphs of a network. In 2008, Noga Alon "et al." introduced an approach for finding non-induced sub-graphs too. Their technique works on undirected networks such as PPI ones. Also, it counts non-induced trees and bounded treewidth sub-graphs. This method is applied for sub-graphs of size up to 10.
This algorithm counts the number of non-induced occurrences of a tree T with k = O(log n) vertices in a network G with n vertices as follows: first, every vertex of G is colored independently and uniformly at random with one of k colors; second, the "colorful" occurrences of T (those in which every vertex received a distinct color) are counted exactly by dynamic programming, which is feasible because trees and, more generally, bounded-treewidth graphs admit such a decomposition; finally, since a fixed occurrence of T becomes colorful with probability k!/k^k, the coloring and counting steps are repeated and the average colorful count is rescaled to estimate the total number of occurrences.
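A minimal Python sketch of the colour-coding idea, restricted for brevity to the simplest tree, a path on k vertices, is shown below. It only illustrates the technique (random colouring, dynamic programming over colour sets, rescaling by the colourful probability), not the authors' algorithm, and the parameters are arbitrary.

```python
import math
import random
from collections import defaultdict

def estimate_path_count(adj, k, trials=200, seed=0):
    """Colour-coding estimate of the number of non-induced simple paths on k >= 2
    vertices. adj: dict node -> set of neighbours (undirected graph)."""
    rng = random.Random(seed)
    p_colorful = math.factorial(k) / k ** k        # chance a fixed path becomes colourful
    total = 0.0
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in adj}             # step 1: random colouring
        # step 2: dp[(v, S)] = number of colourful paths ending at v with colour set S
        dp = defaultdict(int)
        for v in adj:
            dp[(v, frozenset([color[v]]))] = 1
        for _ in range(k - 1):
            ndp = defaultdict(int)
            for (v, S), cnt in dp.items():
                for u in adj[v]:
                    if color[u] not in S:
                        ndp[(u, S | {color[u]})] += cnt
            dp = ndp
        colourful = sum(dp.values()) // 2          # each undirected path counted from both ends
        total += colourful / p_colorful            # step 3: rescale
    return total / trials
```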
As available PPI networks are far from complete and error-free, this approach is suitable for NM discovery in such networks. As the Grochow–Kellis algorithm and this one are the most popular approaches for non-induced sub-graphs, it is worth mentioning that the algorithm introduced by Alon "et al." is less time-consuming than the Grochow–Kellis algorithm.
MODA.
Omidi "et al." introduced a new algorithm for motif detection named "MODA" which is applicable for induced and non-induced NM discovery in undirected networks. It is based on the motif-centric approach discussed in the Grochow–Kellis algorithm section. It is very important to distinguish motif-centric algorithms such as MODA and GK algorithm because of their ability to work as query-finding algorithms. This feature allows such algorithms to be able to find a single motif query or a small number of motif queries (not all possible sub-graphs of a given size) with larger sizes. As the number of possible non-isomorphic sub-graphs increases exponentially with sub-graph size, for large size motifs (even larger than 10), the network-centric algorithms, those looking for all possible sub-graphs, face a problem. Although motif-centric algorithms also have problems in discovering all possible large size sub-graphs, but their ability to find small numbers of them is sometimes a significant property.
Using a hierarchical structure called an "expansion tree", the "MODA" algorithm is able to extract NMs of a given size systematically and, similar to "FPF", avoids enumerating unpromising sub-graphs; "MODA" takes into consideration only those potential queries (or candidate sub-graphs) that would result in frequent sub-graphs. Despite the fact that "MODA" resembles "FPF" in using a tree-like structure, the expansion tree is applicable merely for computing frequency concept F1. As we will discuss next, the advantage of this algorithm is that it does not carry out the sub-graph isomorphism test for "non-tree" query graphs. Additionally, it utilizes a sampling method in order to speed up the running time of the algorithm.
Here is the main idea: by a simple criterion one can generalize a mapping of a k-size graph into the network to its same-size supergraphs. For example, suppose there is a mapping fG of a graph G with k nodes into the network and we have a same-size graph G′ with one more edge ⟨u, v⟩; fG will also map G′ into the network if there is an edge ⟨fG(u), fG(v)⟩ in the network. As a result, we can exploit the mapping set of a graph to determine the frequencies of its same-order supergraphs in O(1) time without carrying out sub-graph isomorphism testing. The algorithm starts ingeniously with minimally connected query graphs of size k and finds their mappings in the network via sub-graph isomorphism. After that, with conservation of the graph size, it expands the previously considered query graphs edge by edge and computes the frequency of these expanded graphs as mentioned above. The expansion process continues until reaching the complete graph Kk (fully connected).
As discussed above, the algorithm starts by computing sub-tree frequencies in the network and then expands sub-trees edge by edge. One way to implement this idea is through an expansion tree Tk for each k. The figure shows the expansion tree for size-4 sub-graphs. Tk organizes the running process and provides query graphs in a hierarchical manner. Strictly speaking, the expansion tree Tk is simply a directed acyclic graph (DAG), with its root number k indicating the graph size existing in the expansion tree and each of its other nodes containing the adjacency matrix of a distinct k-size query graph. Nodes in the first level of Tk are all distinct k-size trees, and by traversing Tk in depth, query graphs expand by one edge at each level. A query graph in a node is a sub-graph of the query graph in the node's child, with one edge of difference. The longest path in Tk consists of (k² − 3k + 4)/2 edges and is the path from the root to the leaf node holding the complete graph. Generating expansion trees can be done by a simple routine which is explained in.
"MODA" traverses Tk and when it extracts query trees from the first level of Tk it computes their mapping sets and saves these mappings for the next step. For non-tree queries from Tk, the algorithm extracts the mappings associated with the parent node in Tk and determines which of these mappings can support the current query graphs. The process will continue until the algorithm gets the complete query graph. The query tree mappings are extracted using the Grochow–Kellis algorithm. For computing the frequency of non-tree query graphs, the algorithm employs a simple routine that takes O(1) steps. In addition, "MODA" exploits a sampling method where the sampling of each node in the network is linearly proportional to the node degree, the probability distribution is exactly similar to the well-known Barabási-Albert preferential attachment model in the field of complex networks. This approach generates approximations; however, the results are almost stable in different executions since sub-graphs aggregate around highly connected nodes. The pseudocode of "MODA" is shown below:
Kavosh.
A recently introduced algorithm named "Kavosh" aims at improved main memory usage. "Kavosh" can be used to detect NMs in both directed and undirected networks. The main idea of the enumeration is similar to the "GK" and "MODA" algorithms: first find all k-size sub-graphs in which a particular node participates, then remove that node, and subsequently repeat this process for the remaining nodes.
For counting the sub-graphs of size k that include a particular node, trees with maximum depth of k, rooted at this node and based on neighborhood relationship, are implicitly built. Children of each node include both incoming and outgoing adjacent nodes. To descend the tree, a child is chosen at each level with the restriction that a particular child can be included only if it has not been included at any upper level. After having descended to the lowest level possible, the tree is ascended again and the process is repeated with the stipulation that nodes visited in earlier paths of a descendant are now considered unvisited nodes. A final restriction in building trees is that all children in a particular tree must have numerical labels larger than the label of the root of the tree. The restrictions on the labels of the children are similar to the conditions which the "GK" and "ESU" algorithms use to avoid overcounting sub-graphs.
The protocol for extracting sub-graphs makes use of the compositions of an integer. For the extraction of sub-graphs of size k, all possible compositions of the integer k-1 must be considered. The compositions of k-1 consist of all possible manners of expressing k-1 as a sum of positive integers. Summations in which the order of the summands differs are considered distinct. A composition can be expressed as k2, k3, ..., km where k2 + k3 + ... + km = k-1. To count sub-graphs based on the composition, ki nodes are selected from the i-th level of the tree to be nodes of the sub-graphs (i = 2, 3, ..., m). The k-1 selected nodes along with the node at the root define a sub-graph within the network. After discovering a sub-graph involved as a match in the target network, in order to be able to evaluate the size of each class according to the target network, "Kavosh" employs the "nauty" algorithm in the same way as "FANMOD". The enumeration part of the Kavosh algorithm is shown below:
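As a small illustration of the composition step (not Kavosh's actual enumeration code), the following Python routine generates all ordered compositions of an integer; names are illustrative.

```python
def compositions(n):
    """All ordered ways of writing n as a sum of positive integers
    (the compositions of n used by Kavosh with n = k - 1)."""
    if n == 0:
        return [[]]
    result = []
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            result.append([first] + rest)
    return result

# e.g. for sub-graphs of size k = 4, the compositions of k - 1 = 3 are
# [1, 1, 1], [1, 2], [2, 1] and [3]; a composition [k2, ..., km] means selecting
# k_i nodes from the i-th level of the neighbourhood tree rooted at the current node.
print(compositions(3))
```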
Recently a "Cytoscape" plugin called "CytoKavosh" is developed for this software. It is available via "Cytoscape" web page .
G-Tries.
In 2010, Pedro Ribeiro and Fernando Silva proposed a novel data structure for storing a collection of sub-graphs, called a "g-trie". This data structure, which is conceptually akin to a prefix tree, stores sub-graphs according to their structures and finds occurrences of each of these sub-graphs in a larger graph. One of the noticeable aspects of this data structure is that, when it comes to network motif discovery, only the sub-graphs that occur in the main network need to be evaluated. So, there is no need to search a random network for sub-graphs that are not in the main network. This can be one of the time-consuming parts in algorithms in which all sub-graphs in random networks are derived.
A "g-trie" is a multiway tree that can store a collection of graphs. Each tree node contains information about a single graph vertex and its corresponding edges to ancestor nodes. A path from the root to a leaf corresponds to one single graph. Descendants of a g-trie node share a common sub-graph. Constructing a "g-trie" is well described in. After constructing a "g-trie", the counting part takes place. The main idea in counting process is to backtrack by all possible sub-graphs, but at the same time do the isomorphism tests. This backtracking technique is essentially the same technique employed by other motif-centric approaches like "MODA" and "GK" algorithms. Taking advantage of common substructures in the sense that at a given time there is a partial isomorphic match for several different candidate sub-graphs.
Among the mentioned algorithms, "G-Tries" is the fastest. However, the excessive use of memory is the drawback of this algorithm, which might limit the size of motifs discoverable by a personal computer with average memory.
ParaMODA and NemoMap.
ParaMODA and NemoMap are fast algorithms published in 2017 and 2018, respectively. They are not as scalable as many of the others.
Comparison.
Tables and figure below show the results of running the mentioned algorithms on different standard networks. These results are taken from the corresponding sources, thus they should be treated individually.
Well-established motifs and their functions.
Much experimental work has been devoted to understanding network motifs in gene regulatory networks. These networks control which genes are expressed in the cell in response to biological signals. The network is defined such that genes are nodes, and directed edges represent the control of one gene by a transcription factor (regulatory protein that binds DNA) encoded by another gene. Thus, network motifs are patterns of genes regulating each other's transcription rate. When analyzing transcription networks, it is seen that the same network motifs appear again and again in diverse organisms from bacteria to human. The transcription networks of "E. coli" and yeast, for example, are made of three main motif families that make up almost the entire network. The leading hypothesis is that the network motifs were independently selected by evolutionary processes in a converging manner, since the creation or elimination of regulatory interactions is fast on an evolutionary time scale, relative to the rate at which genes change. Furthermore, experiments on the dynamics generated by network motifs in living cells indicate that they have characteristic dynamical functions. This suggests that network motifs serve as building blocks in gene regulatory networks that are beneficial to the organism.
The functions associated with common network motifs in transcription networks were explored and demonstrated by several research projects both theoretically and experimentally. Below are some of the most common network motifs and their associated function.
Negative auto-regulation (NAR).
One of the simplest and most abundant network motifs in "E. coli" is negative auto-regulation, in which a transcription factor (TF) represses its own transcription. This motif was shown to perform two important functions. The first function is response acceleration. NAR was shown to speed up the response to signals both theoretically and experimentally. This was first shown in a synthetic transcription network and later on in the natural context of the SOS DNA repair system of "E. coli". The second function is increased stability of the auto-regulated gene product concentration against stochastic noise, thus reducing variations in protein levels between different cells.
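The response-acceleration effect can be illustrated with a toy rate-equation model in Python. This is only a sketch: the production and degradation rates and the repression threshold below are arbitrary illustrative values, not measured parameters, and the response time is measured, as usual, as the time for each circuit to reach half of its own steady state.

```python
def rise_time(production, alpha=1.0, dt=1e-3, t_max=20.0):
    """Time for X to reach half of its steady state under
    dX/dt = production(X) - alpha * X, integrated by forward Euler."""
    x, t = 0.0, 0.0
    while t < t_max:                      # run long enough to settle at the steady state
        x += dt * (production(x) - alpha * x)
        t += dt
    x_ss = x
    x, t = 0.0, 0.0
    while x < 0.5 * x_ss:
        x += dt * (production(x) - alpha * x)
        t += dt
    return t

simple = lambda x: 1.0                            # simple (unregulated) production
K, n, beta = 0.1, 4, 10.0                         # strong negative auto-regulation
nar = lambda x: beta / (1.0 + (x / K) ** n)       # production repressed by X itself

print("simple regulation rise time:", round(rise_time(simple), 3))      # ~0.69 (= ln 2 / alpha)
print("negative auto-regulation rise time:", round(rise_time(nar), 3))  # much shorter
```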
Positive auto-regulation (PAR).
Positive auto-regulation (PAR) occurs when a transcription factor enhances its own rate of production. In contrast to the NAR motif, this motif slows the response time compared to simple regulation. In the case of a strong PAR the motif may lead to a bimodal distribution of protein levels in cell populations.
Feed-forward loops (FFL).
This motif is commonly found in many gene systems and organisms. The motif consists of three genes and three regulatory interactions. The target gene C is regulated by two TFs, A and B, and in addition TF B is itself regulated by TF A. Since each of the regulatory interactions may be either positive or negative, there are eight possible types of FFL motifs. Two of those eight types, the coherent type 1 FFL (C1-FFL) (where all interactions are positive) and the incoherent type 1 FFL (I1-FFL) (A activates C and also activates B which represses C), are found much more frequently in the transcription networks of "E. coli" and yeast than the other six types. In addition to the structure of the circuitry, the way in which the signals from A and B are integrated by the C promoter should also be considered. In most cases the FFL is either an AND gate (both A and B are required for C activation) or an OR gate (either A or B is sufficient for C activation), but other input functions are also possible.
Coherent type 1 FFL (C1-FFL).
The C1-FFL with an AND gate was shown to have the function of a 'sign-sensitive delay' element and a persistence detector, both theoretically and experimentally with the arabinose system of "E. coli". This means that this motif can provide pulse filtration, in which short pulses of signal will not generate a response but persistent signals will generate a response after a short delay. The shut-off of the output when a persistent pulse ends will be fast. The opposite behavior emerges in the case of a sum gate, with fast response and delayed shut-off, as was demonstrated in the flagella system of "E. coli". De novo evolution of C1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to filter out an idealized short signal pulse, but for non-idealized noise, a dynamics-based system of feed-forward regulation with different topology was instead favored.
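A toy Python simulation of the AND-gate C1-FFL's pulse-filtering behaviour is sketched below; the rates, threshold and pulse lengths are arbitrary illustrative values rather than parameters of any real system.

```python
def c1_ffl_and(pulse_length, beta=1.0, alpha=1.0, K=0.5, dt=1e-3, t_end=10.0):
    """Toy C1-FFL with an AND gate: the input A directly activates C and also
    activates B; C is produced only while A is on AND B exceeds the threshold K.
    Returns True if C is ever switched on during the run."""
    b, t, c_on = 0.0, 0.0, False
    while t < t_end:
        a_on = t < pulse_length                 # input pulse of the given length
        b += dt * ((beta if a_on else 0.0) - alpha * b)
        if a_on and b > K:                      # AND-gate input function of the C promoter
            c_on = True
        t += dt
    return c_on

print("short pulse (0.3) switches C on:", c1_ffl_and(0.3))   # False: the pulse is filtered out
print("long pulse  (3.0) switches C on:", c1_ffl_and(3.0))   # True, after a short delay
```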
Incoherent type 1 FFL (I1-FFL).
The I1-FFL is a pulse generator and response accelerator. The two signal pathways of the I1-FFL act in opposite directions, where one pathway activates Z and the other represses it. When the repression is complete, this leads to pulse-like dynamics. It was also demonstrated experimentally that the I1-FFL can serve as a response accelerator in a way which is similar to the NAR motif. The difference is that the I1-FFL can speed up the response of any gene and not necessarily a transcription factor gene. An additional function was assigned to the I1-FFL network motif: it was shown both theoretically and experimentally that the I1-FFL can generate non-monotonic input functions in both synthetic and native systems. Finally, expression units that incorporate incoherent feedforward control of the gene product provide adaptation to the amount of DNA template and can be superior to simple combinations of constitutive promoters. Feedforward regulation displayed better adaptation than negative feedback, and circuits based on RNA interference were the most robust to variation in DNA template amounts. De novo evolution of I1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to generate a pulse, with I1-FFLs being more evolutionarily accessible, but not superior, relative to an alternative motif in which it is the output rather than the input that activates the repressor.
Multi-output FFLs.
In some cases the same regulators X and Y regulate several Z genes of the same system. By adjusting the strength of the interactions this motif was shown to determine the temporal order of gene activation. This was demonstrated experimentally in the flagella system of "E. coli".
Single-input modules (SIM).
This motif occurs when a single regulator regulates a set of genes with no additional regulation. This is useful when the genes are cooperatively carrying out a specific function and therefore always need to be activated in a synchronized manner. By adjusting the strength of the interactions, it can create a temporal expression program of the genes it regulates.
In the literature, Multiple-input modules (MIM) arose as a generalization of SIM. However, the precise definitions of SIM and MIM have been a source of inconsistency. There are attempts to provide orthogonal definitions for canonical motifs in biological networks and algorithms to enumerate them, especially SIM, MIM and Bi-Fan (2x2 MIM).
Dense overlapping regulons (DOR).
This motif occurs in the case that several regulators combinatorially control a set of genes with diverse regulatory combinations. This motif was found in "E. coli" in various systems such as carbon utilization, anaerobic growth, stress response and others. In order to better understand the function of this motif, one has to obtain more information about the way the multiple inputs are integrated by the genes. Kaplan "et al." have mapped the input functions of the sugar utilization genes in "E. coli", showing diverse shapes.
Activity motifs.
An interesting generalization of network motifs, activity motifs are over-occurring patterns that can be found when nodes and edges in the network are annotated with quantitative features. For instance, when edges in metabolic pathways are annotated with the magnitude or timing of the corresponding gene expression, some patterns are over-occurring given the underlying network structure.
Criticism.
An assumption (sometimes more, sometimes less implicit) behind the preservation of a topological sub-structure is that it is of particular functional importance. This assumption has recently been questioned. Some authors have argued that motifs, like "bi-fan motifs", might show a variety of behaviors depending on the network context, and therefore the structure of the motif does not necessarily determine function. Network structure certainly does not always indicate function; this is an idea that has been around for some time, for an example see the Sin operon.
Most analyses of motif function are carried out looking at the motif operating in isolation. Recent research provides good evidence that network context, i.e. the connections of the motif to the rest of the network, is too important to draw inferences on function from local structure only — the cited paper also reviews the criticisms and alternative explanations for the observed data. An analysis of the impact of a single motif module on the global dynamics of a network is studied in. Yet another recent work suggests that certain topological features of biological networks naturally give rise to the common appearance of canonical motifs, thereby questioning whether frequencies of occurrences are reasonable evidence that the structures of motifs are selected for their functional contribution to the operation of networks.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Z(G^\\prime) = \\frac{F_G(G^\\prime) - \\mu_R(G^\\prime)}{\\sigma_R(G^\\prime)}"
},
{
"math_id": 1,
"text": "P(G^\\prime) = \\frac{1}{N}\\sum_{i=1}^N \\delta(c(i)) \\quad c(i): F_R^i(G^\\prime) \\ge F_G(G^\\prime)"
},
{
"math_id": 2,
"text": "C_G(G^\\prime) = \\frac{F_G(G^\\prime)}{\\sum_i F_G(G_i)}"
}
] | https://en.wikipedia.org/wiki?curid=12499410 |
1250001 | Geostationary ring | In orbital mechanics, the geostationary ring is the region of space around the Earth that includes geostationary orbits and the volume of space which can be reached by uncontrolled objects which begin in geostationary orbits and are subsequently perturbed. Objects in geostationary orbit can be perturbed by anomalies in the gravitational field of the Earth, by the gravitational effects of Sun and Moon, and by solar radiation pressure.
A precessional motion of the orbital plane is caused by the oblateness of the Earth (formula_0) and the gravitational effects of the Sun and Moon. This motion has a period of about 53 years. The two parameters describing the direction of the orbit plane in space, the right ascension of the ascending node and the inclination, are affected by this precession. The maximum inclination reached during the 53-year cycle is about 15 degrees. Therefore, the definition of the geostationary ring foresees a declination range from -15 degrees to +15 degrees. In addition, solar radiation pressure induces an eccentricity that leads to a variation of the orbit radius by ± 75 kilometers in some cases. This leads to the definition of the geostationary ring as a segment of space around the geostationary orbit that ranges from 75 km below GEO to 75 km above GEO and from -15 degrees to 15 degrees declination.
The number of objects in the ring is increasing, and is a source of concern that the risk of collision with space debris in this region is particularly high.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J_2"
}
] | https://en.wikipedia.org/wiki?curid=1250001 |
12501014 | Spectral risk measure | A Spectral risk measure is a risk measure given as a weighted average of outcomes where bad outcomes are, typically, included with larger weights. A spectral risk measure is a function of portfolio returns and outputs the amount of the numeraire (typically a currency) to be kept in reserve. A spectral risk measure is always a coherent risk measure, but the converse does not always hold. An advantage of spectral measures is the way in which they can be related to risk aversion, and particularly to a utility function, through the weights given to the possible portfolio returns.
Definition.
Consider a portfolio formula_0 (denoting the portfolio payoff). Then a spectral risk measure formula_1, where formula_2 is a non-negative, non-increasing, right-continuous, integrable function defined on formula_3 such that formula_4, is defined by
formula_5
where formula_6 is the cumulative distribution function for "X".
Suppose there are formula_7 equiprobable outcomes with the corresponding payoffs given by the order statistics formula_8. Let formula_9. The measure formula_10 defined by formula_11 is a spectral measure of risk if formula_9 satisfies the conditions:
1. Nonnegativity: formula_12 for all formula_13,
2. Normalization: formula_14,
3. Monotonicity: formula_15 is non-increasing, that is, formula_16 if formula_17 and formula_18.
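A minimal Python sketch of the discrete definition above is shown below; the sample data, the use of expected shortfall as the example weight vector and all names are illustrative assumptions, not part of the original text.

```python
import numpy as np

def spectral_risk(payoffs, phi, delta=1.0):
    """Discrete spectral risk measure: -delta * sum_s phi_s * X_{s:S}, where the
    equiprobable payoffs are sorted ascending (order statistics) and phi is
    non-negative, sums to one and is non-increasing."""
    x = np.sort(np.asarray(payoffs, dtype=float))
    phi = np.asarray(phi, dtype=float)
    assert np.all(phi >= 0) and np.isclose(phi.sum(), 1.0) and np.all(np.diff(phi) <= 0)
    return -delta * np.sum(phi * x)

# Expected shortfall at level alpha as a spectral measure: uniform weight on the
# worst alpha-fraction of outcomes, zero elsewhere.
S, alpha = 1000, 0.05
payoffs = np.random.default_rng(0).normal(loc=0.01, scale=0.1, size=S)
phi = np.zeros(S)
phi[: int(alpha * S)] = 1.0 / int(alpha * S)
print("ES(5%):", spectral_risk(payoffs, phi))
```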
Properties.
Spectral risk measures are also coherent. Every spectral risk measure formula_19 satisfies:
1. Positive homogeneity: for every portfolio "X" and positive value formula_20, formula_21;
2. Translation-invariance: for every portfolio "X" and formula_22, formula_23;
3. Monotonicity: for all portfolios "X" and "Y" such that formula_24, formula_25;
4. Sub-additivity: for all portfolios "X" and "Y", formula_26;
5. Law-invariance: for all portfolios "X" and "Y" with cumulative distribution functions formula_6 and formula_27 respectively, if formula_28 then formula_29;
6. Comonotonic additivity: for every comonotonic pair of random variables "X" and "Y", formula_30. Note that "X" and "Y" are comonotonic if for every formula_31.
In some texts the input "X" is interpreted as losses rather than payoff of a portfolio. In this case, the translation-invariance property would be given by formula_32, and the monotonicity property by formula_33 instead of the above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "M_{\\phi}: \\mathcal{L} \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "[0,1]"
},
{
"math_id": 4,
"text": "\\int_0^1 \\phi(p)dp = 1"
},
{
"math_id": 5,
"text": "M_{\\phi}(X) = -\\int_0^1 \\phi(p) F_X^{-1}(p) dp"
},
{
"math_id": 6,
"text": "F_X"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "X_{1:S}, ... X_{S:S}"
},
{
"math_id": 9,
"text": "\\phi\\in\\mathbb{R}^S"
},
{
"math_id": 10,
"text": "M_{\\phi}:\\mathbb{R}^S\\rightarrow \\mathbb{R}"
},
{
"math_id": 11,
"text": "M_{\\phi}(X)=-\\delta\\sum_{s=1}^S\\phi_sX_{s:S}"
},
{
"math_id": 12,
"text": "\\phi_s\\geq0 "
},
{
"math_id": 13,
"text": "s=1, \\dots, S"
},
{
"math_id": 14,
"text": "\\sum_{s=1}^S\\phi_s=1"
},
{
"math_id": 15,
"text": "\\phi_s"
},
{
"math_id": 16,
"text": "\\phi_{s_1}\\geq\\phi_{s_2}"
},
{
"math_id": 17,
"text": "{s_1}<{s_2}"
},
{
"math_id": 18,
"text": "{s_1}, {s_2}\\in\\{1,\\dots,S\\}"
},
{
"math_id": 19,
"text": "\\rho: \\mathcal{L} \\to \\mathbb{R}"
},
{
"math_id": 20,
"text": "\\lambda > 0"
},
{
"math_id": 21,
"text": "\\rho(\\lambda X) = \\lambda \\rho(X)"
},
{
"math_id": 22,
"text": "\\alpha \\in \\mathbb{R}"
},
{
"math_id": 23,
"text": "\\rho(X + a) = \\rho(X) - a"
},
{
"math_id": 24,
"text": "X \\geq Y"
},
{
"math_id": 25,
"text": "\\rho(X) \\leq \\rho(Y)"
},
{
"math_id": 26,
"text": "\\rho(X+Y) \\leq \\rho(X) + \\rho(Y)"
},
{
"math_id": 27,
"text": "F_Y"
},
{
"math_id": 28,
"text": "F_X = F_Y"
},
{
"math_id": 29,
"text": "\\rho(X) = \\rho(Y)"
},
{
"math_id": 30,
"text": "\\rho(X+Y) = \\rho(X) + \\rho(Y)"
},
{
"math_id": 31,
"text": "\\omega_1,\\omega_2 \\in \\Omega: \\; (X(\\omega_2) - X(\\omega_1))(Y(\\omega_2) - Y(\\omega_1)) \\geq 0"
},
{
"math_id": 32,
"text": "\\rho(X+a) = \\rho(X) + a"
},
{
"math_id": 33,
"text": "X \\geq Y \\implies \\rho(X) \\geq \\rho(Y)"
}
] | https://en.wikipedia.org/wiki?curid=12501014 |
12501672 | Discounted maximum loss | Discounted maximum loss, also known as worst-case risk measure, is the present value of the worst-case scenario for a financial portfolio.
In investment, in order to protect the value of an investment, one must consider all possible alternatives to the initial investment. How one does this comes down to personal preference; however, the worst possible alternative is generally considered to be the benchmark against which all other options are measured. The present value of this worst possible outcome is the discounted maximum loss.
Definition.
Given a finite state space formula_0, let formula_1 be a portfolio with profit formula_2 for formula_3. If formula_4 is the order statistic the discounted maximum loss is simply formula_5, where formula_6 is the discount factor.
Given a general probability space formula_7, let formula_1 be a portfolio with discounted return formula_8 for state formula_9. Then the discounted maximum loss can be written as formula_10 where formula_11 denotes the essential infimum.
Example.
As an example, assume that a portfolio is currently worth 100, and the discount factor is 0.8 (corresponding to an interest rate of 25%):
In this case the maximum loss is from 100 down to 20, i.e. 80, so the discounted maximum loss is simply formula_16
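As a quick illustration in Python (the intermediate outcome values below are made up; only the worst outcome of 20, the initial value of 100 and the discount factor of 0.8 come from the example):

```python
def discounted_max_loss(current_value, future_values, discount):
    """Present value of the worst-case loss relative to the current portfolio value."""
    return discount * (current_value - min(future_values))

# Only 100, 20 and 0.8 are from the example; the other outcomes are hypothetical.
print(discounted_max_loss(100, [150, 120, 100, 60, 20], 0.8))   # 64.0
```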
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "X_s"
},
{
"math_id": 3,
"text": "s\\in S"
},
{
"math_id": 4,
"text": "X_{1:S},...,X_{S:S}"
},
{
"math_id": 5,
"text": "-\\delta X_{1:S}"
},
{
"math_id": 6,
"text": "\\delta"
},
{
"math_id": 7,
"text": "(\\Omega,\\mathcal{F},\\mathbb{P})"
},
{
"math_id": 8,
"text": "\\delta X(\\omega)"
},
{
"math_id": 9,
"text": "\\omega \\in \\Omega"
},
{
"math_id": 10,
"text": "-\\operatorname{ess.inf} \\delta X = -\\sup \\delta \\{x \\in \\mathbb{R}: \\mathbb{P}(X \\geq x) = 1\\}"
},
{
"math_id": 11,
"text": "\\operatorname{ess.inf}"
},
{
"math_id": 12,
"text": "\\alpha = 0"
},
{
"math_id": 13,
"text": "\\rho_{\\max}"
},
{
"math_id": 14,
"text": "\\rho"
},
{
"math_id": 15,
"text": "\\rho(X) \\leq \\rho_{\\max}(X)"
},
{
"math_id": 16,
"text": "80\\times0.8=64"
}
] | https://en.wikipedia.org/wiki?curid=12501672 |
12505519 | Chief series | In abstract algebra, a chief series is a maximal normal series for a group.
It is similar to a composition series, though the two concepts are distinct in general: a chief series is a maximal "normal" series, while a composition series is a maximal "subnormal" series.
Chief series can be thought of as breaking the group down into less complicated pieces, which may be used to characterize various qualities of the group.
Definition.
A chief series is a maximal normal series for a group. Equivalently, a chief series is a composition series of the group "G" under the action of inner automorphisms.
In detail, if "G" is a group, then a chief series of "G" is a finite collection of normal subgroups "N""i" ⊆ "G",
formula_0
such that each quotient group "N""i"+1/"N""i", for "i" = 1, 2..., "n" − 1, is a minimal normal subgroup of "G"/"N""i". Equivalently, there does not exist any subgroup "A" normal in "G" such that "N""i" < "A" < "N""i"+1 for any "i". In other words, a chief series may be thought of as "full" in the sense that no normal subgroup of "G" may be added to it.
The factor groups "N""i"+1/"N""i" in a chief series are called the chief factors of the series. Unlike composition factors, chief factors are not necessarily simple. That is, there may exist a subgroup "A" normal in "N""i"+1 with "N""i" < "A" < "N""i"+1, but "A" is not normal in "G". However, the chief factors are always characteristically simple, that is, they have no proper nontrivial characteristic subgroups. In particular, a finite chief factor is a direct product of isomorphic simple groups.
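As a standard illustration (not taken from the text above), the symmetric group "S"4 has the chief series

```latex
1 \,\triangleleft\, V_4 \,\triangleleft\, A_4 \,\triangleleft\, S_4,
\qquad
V_4/1 \cong C_2 \times C_2, \quad A_4/V_4 \cong C_3, \quad S_4/A_4 \cong C_2 .
```

The chief factor "V"4/1 is characteristically simple but not simple; a composition series would refine it through a subgroup of order 2, which is normal in "V"4 but not in "S"4, illustrating the difference between the two kinds of series.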
Properties.
Existence.
Finite groups always have a chief series, though infinite groups need not have a chief series. For example, the group of integers Z with addition as the operation does not have a chief series. To see this, note Z is cyclic and abelian, and so all of its subgroups are normal and cyclic as well. Supposing there exists a chief series "N""i" leads to an immediate contradiction: "N"1 is cyclic and thus is generated by some integer "a", however the subgroup generated by 2"a" is a nontrivial normal subgroup properly contained in "N"1, contradicting the definition of a chief series.
Uniqueness.
When a chief series for a group exists, it is generally not unique. However, a form of the Jordan–Hölder theorem states that the chief factors of a group are unique up to isomorphism, independent of the particular chief series they are constructed from. In particular, the number of chief factors is an invariant of the group "G", as well as the isomorphism classes of the chief factors and their multiplicities.
Other properties.
In abelian groups, chief series and composition series are identical, as all subgroups are normal.
Given any normal subgroup "N" ⊆ "G", one can always find a chief series in which "N" is one of the elements (assuming a chief series for "G" exists in the first place.) Also, if "G" has a chief series and "N" is normal in "G", then both "N" and "G"/"N" have chief series. The converse also holds: if "N" is normal in "G" and both "N" and "G"/"N" have chief series, "G" has a chief series as well.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1=N_0\\subseteq N_1\\subseteq N_2\\subseteq\\cdots\\subseteq N_n=G,"
}
] | https://en.wikipedia.org/wiki?curid=12505519 |
12506030 | Set-valued function | Function whose values are sets (mathematics)
A set-valued function (or correspondence) is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Set-valued functions are also known as multivalued functions in some references, but herein and in many other references in mathematical analysis, a multivalued function is a set-valued function f that has a further continuity property, namely that the choice of an element in the set formula_0 defines a corresponding element in each set formula_1 for y close to x, and thus defines locally an ordinary function.
Examples.
The argmax of a function is, in general, multivalued. For example, formula_2.
Set-valued analysis.
Set-valued analysis is the study of sets in the spirit of mathematical analysis and general topology.
Instead of considering collections of only points, set-valued analysis considers collections of sets. If a collection of sets is endowed with a topology, or inherits an appropriate topology from an underlying topological space, then the convergence of sets can be studied.
Much of set-valued analysis arose through the study of mathematical economics and optimal control, partly as a generalization of convex analysis; the term "variational analysis" is used by authors such as R. Tyrrell Rockafellar and Roger J-B Wets, Jonathan Borwein and Adrian Lewis, and Boris Mordukhovich. In optimization theory, the convergence of approximating subdifferentials to a subdifferential is important in understanding necessary or sufficient conditions for any minimizing point.
There exist set-valued extensions of the following concepts from point-valued analysis: continuity, differentiation, integration, implicit function theorem, contraction mappings, measure theory, fixed-point theorems, optimization, and topological degree theory. In particular, equations are generalized to inclusions, while differential equations are generalized to differential inclusions.
One can distinguish multiple concepts generalizing continuity, such as the closed graph property and upper and lower hemicontinuity. There are also various generalizations of measure to multifunctions.
Applications.
Set-valued functions arise in optimal control theory, especially differential inclusions and related subjects as game theory, where the Kakutani fixed-point theorem for set-valued functions has been applied to prove existence of Nash equilibria. This, among many other properties loosely associated with approximability of upper hemicontinuous multifunctions via continuous functions, explains why upper hemicontinuity is preferred over lower hemicontinuity.
Nevertheless, lower semi-continuous multifunctions usually possess continuous selections as stated in the Michael selection theorem, which provides another characterisation of paracompact spaces. Other selection theorems, like Bressan-Colombo directional continuous selection, Kuratowski and Ryll-Nardzewski measurable selection theorem, Aumann measurable selection, and Fryszkowski selection for decomposable maps are important in optimal control and the theory of differential inclusions.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "f(y)"
},
{
"math_id": 2,
"text": "\\operatorname{argmax}_{x \\in \\mathbb{R}} \\cos(x) = \\{2 \\pi k\\mid k \\in \\mathbb{Z}\\}"
}
] | https://en.wikipedia.org/wiki?curid=12506030 |
12506060 | CIECAM02 | Color appearance model
In colorimetry, CIECAM02 is the color appearance model published in 2002 by the International Commission on Illumination (CIE) Technical Committee 8-01 ("Color Appearance Modelling for Color Management Systems") and the successor of CIECAM97s.
The two major parts of the model are its chromatic adaptation transform, CIECAT02, and its equations for calculating mathematical correlates for the six technically defined dimensions of color appearance: brightness (luminance), lightness, colorfulness, chroma, saturation, and hue.
Brightness is the subjective appearance of how bright an object appears given its surroundings and how it is illuminated. Lightness is the subjective appearance of how light a color appears to be. Colorfulness is the degree of difference between a color and gray. Chroma is the colorfulness relative to the brightness of another color that appears white under similar viewing conditions. This allows for the fact that a surface of a given chroma displays increasing colorfulness as the level of illumination increases. Saturation is the colorfulness of a color relative to its own brightness. Hue is the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, green, blue, and yellow, the so-called unique hues. The colors that make up an object’s appearance are best described in terms of lightness and chroma when talking about the colors that make up the object’s surface, and in terms of brightness, saturation and colorfulness when talking about the light that is emitted by or reflected off the object.
CIECAM02 takes for its input the tristimulus values of the stimulus, the tristimulus values of an adapting white point, adapting background, and surround luminance information, and whether or not observers are discounting the illuminant (color constancy is in effect). The model can be used to predict these appearance attributes or, with forward and reverse implementations for distinct viewing conditions, to compute corresponding colors.
The Windows Color System introduced in Windows Vista uses Canon's Kyuanos (キュアノス) technology for mapping image gamuts between output devices, which in turn uses CIECAM02 for color matching.
Viewing conditions.
The inner circle is the "stimulus", from which the tristimulus values should be measured in CIE XYZ using the 2° standard observer. The intermediate circle is the "proximal field", extending out another 2°. The outer circle is the "background", reaching out to 10°, from which the relative luminance (Yb) need be measured. If the proximal field is the same color as the background, the background is considered to be adjacent to the stimulus. Beyond the circles which comprise the "display field" ("display area", "viewing area") is the "surround field" (or "peripheral area"), which can be considered to be the entire room. The totality of the proximal field, background, and surround is called the "adapting field" (the field of view that supports adaptation—extends to the limit of vision).
When referring to the literature, it is also useful to be aware of the difference between the terms "adopted white point" (the computational white point) and the "adapted white point" (the observer white point). The distinction may be important in mixed mode illumination, where psychophysical phenomena come into play. This is a subject of research.
Parameter decision table.
CIECAM02 defines three surround(ing)s – average, dim, and dark – with associated parameters defined here for reference in the rest of this article: average ("F" = 1.0, "c" = 0.69, "N""c" = 1.0), dim ("F" = 0.9, "c" = 0.59, "N""c" = 0.9), and dark ("F" = 0.8, "c" = 0.525, "N""c" = 0.8).
For intermediate conditions, these values can be linearly interpolated.
The absolute luminance of the adapting field, which is a quantity that will be needed later, should be measured with a photometer. If one is not available, it can be calculated using a reference white:
formula_0
where "Y""b" is the relative luminance of background, the "E""w" = "πL""W" is the illuminance of the reference white in lux, "L""W" is the absolute luminance of the reference white in cd/m2, and "Y""w" is the relative luminance of the reference white in the adapting field. If unknown, the adapting field can be assumed to have average reflectance ("gray world" assumption): "L""A" = "L""W" / 5.
"Note": Care should be taken not to confuse "L""W", the absolute luminance of the reference white in cd/m2, and "L""w" the red cone response in the LMS color space.
Chromatic adaptation.
CAT02.
Given a set of tristimulus values in XYZ, the corresponding LMS values can be determined by the M"CAT02" transformation matrix (calculated using the CIE 1931 2° standard colorimetric observer). The sample color in the "test" illuminant is:
formula_1.
Once in LMS, the white point can be adapted to the desired degree by choosing the parameter "D". For the general CAT02, the "corresponding" color in the reference illuminant is:
formula_2
where the "Y""w" / "Y""wr" factor accounts for the two illuminants having the same chromaticity but different reference whites. The subscripts indicate the cone response for white under the test ("w") and reference illuminant ("wr"). The degree of adaptation (discounting) "D" can be set to zero for no adaptation (stimulus is considered self-luminous) and unity for complete adaptation (color constancy). In practice, it ranges from 0.65 to 1.0, as can be seen from the diagram. Intermediate values can be calculated by:
formula_3
where surround "F" is as defined above and "L""A" is the "adapting field luminance" in cd/m2.
In CIECAM02, the reference illuminant has equal energy ("L""wr" = "M""wr" = "S""wr" = 100) and the reference white is the "perfect reflecting diffuser" (i.e., unity reflectance, and "Y""wr" = 100), hence:
formula_4
Furthermore, if the reference whites in both illuminants have the same "Y" tristimulus value ("Y""wr" = "Y""w"), then:
formula_5
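A minimal Python sketch of this chromatic adaptation step is given below; the matrix and formulas are the ones stated above, while the function names, default surround factor and adapting luminance are illustrative assumptions.

```python
import numpy as np

M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def degree_of_adaptation(F, L_A):
    """D = F * (1 - (1/3.6) * exp(-(L_A + 42) / 92))."""
    return F * (1.0 - (1.0 / 3.6) * np.exp(-(L_A + 42.0) / 92.0))

def cat02_adapt(xyz, xyz_w, F=1.0, L_A=64.0):
    """Adapt a sample (XYZ) toward the equal-energy reference illuminant used by
    CIECAM02, given the adopted white (XYZ_w), surround factor F and adapting
    luminance L_A."""
    lms = M_CAT02 @ np.asarray(xyz, dtype=float)
    lms_w = M_CAT02 @ np.asarray(xyz_w, dtype=float)
    D = degree_of_adaptation(F, L_A)
    scale = xyz_w[1] * D / lms_w + (1.0 - D)    # (Y_w / L_w) * D + 1 - D, channel by channel
    return scale * lms
```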
Post-adaptation.
After adaptation, the cone responses are converted to the Hunt–Pointer–Estévez space by going to XYZ and back:
formula_6
formula_7
Note that the matrix above, which was inherited from CIECAM97s, has the unfortunate property that since 0.38971 + 0.68898 – 0.07868 = 1.00001, 1⃗ ≠ MH1⃗ and that consequently gray has non-zero chroma, an issue which CAM16 aims to address.
Finally, the response is compressed based on the generalized Michaelis–Menten equation (as depicted aside):
formula_8
formula_9
"F""L" is the luminance level adaptation factor.
formula_10
As previously mentioned, if the luminance level of the background is unknown, it can be estimated from the absolute luminance of the white point as "L""A" = "L""W" / 5 using the "medium gray" assumption. (The expression for "F""L" is given in terms of 5"L""A" for convenience.) In photopic conditions, the luminance level adaptation factor ("F""L") is proportional to the cube root of the luminance of the adapting field ("L""A"). In scotopic conditions, it is proportional to "L""A" (meaning no luminance level adaptation). The photopic threshold is roughly "L""W" = 1 (see "F""L"–"L""A" graph above).
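The luminance adaptation factor and the response compression can be sketched in Python as follows (illustrative names; inputs are assumed non-negative, as a full implementation would also handle the sign of the compressed signal):

```python
import numpy as np

def luminance_adaptation_factor(L_A):
    """F_L = 0.2 k^4 (5 L_A) + 0.1 (1 - k^4)^2 (5 L_A)^(1/3), with k = 1 / (5 L_A + 1)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k ** 4 * (5.0 * L_A) + 0.1 * (1.0 - k ** 4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0)

def compress(lms_hpe, F_L):
    """Post-adaptation compression of the Hunt-Pointer-Estevez cone signals
    (non-negative inputs assumed): 400 t / (27.13 + t) + 0.1, with t = (F_L x / 100)^0.42."""
    t = (F_L * np.asarray(lms_hpe, dtype=float) / 100.0) ** 0.42
    return 400.0 * t / (27.13 + t) + 0.1
```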
Appearance correlates.
CIECAM02 defines correlates for yellow-blue, red-green, brightness, and colorfulness. Let us make some preliminary definitions.
formula_11
The correlate for red–green ("a") is the magnitude of the departure of "C"1 from the criterion for unique yellow ("C"1 = "C"2 / 11), and the correlate for yellow–blue ("b") is based on the mean of the magnitude of the departures of "C"1 from unique red ("C"1 = "C"2) and unique green ("C"1 = "C"3).
formula_12
The 4.5 factor accounts for the fact that there are fewer cones at shorter wavelengths (the eye is less sensitive to blue). The order of the terms is such that b is positive for yellowish colors (rather than blueish).
The hue angle ("h") can be found by converting the rectangular coordinate ("a", "b") into polar coordinates:
formula_13
To calculate the eccentricity ("e""t") and hue composition ("H"), determine which quadrant the hue is in with the aid of the following table. Choose "i" such that "h""i" ≤ "h"′ < "h""i"+1, where "h"′ = "h" if "h" > "h"1 and "h"′ = "h" + 360° otherwise. The unique-hue data are: Red ("i" = 1, "h""i" = 20.14, "e""i" = 0.8, "H""i" = 0), Yellow ("i" = 2, "h""i" = 90.00, "e""i" = 0.7, "H""i" = 100), Green ("i" = 3, "h""i" = 164.25, "e""i" = 1.0, "H""i" = 200), Blue ("i" = 4, "h""i" = 237.53, "e""i" = 1.2, "H""i" = 300), and Red ("i" = 5, "h""i" = 380.14, "e""i" = 0.8, "H""i" = 400).
formula_14
Calculate the achromatic response "A":
formula_15
where
formula_16.
The correlate of lightness is
formula_17
where "c" is the impact of surround (see above), and
formula_18.
The correlate of brightness is
formula_19.
Then calculate a temporary quantity "t."
formula_20
The correlate of chroma is
formula_21.
The correlate of colorfulness is
formula_22.
The correlate of saturation is
formula_23.
Color spaces.
The appearance correlates of CIECAM02, "J", "a", and "b", form a uniform color space that can be used to calculate color differences, as long as a viewing condition is fixed. A more commonly-used derivative is the CAM02 Uniform Color Space (CAM02-UCS), an extension with tweaks to better match experimental data.
CIECAM02 as a model of human visual processing.
Like many color models, CIECAM02 aims to model the human perception of color. The CIECAM02 model has been shown to be a more plausible model of neural activity in the primary visual cortex, compared to the earlier CIELAB model. Specifically, both its achromatic response "A" and red-green correlate "a" can be matched to EMEG activity (entrainment), each with their own characteristic delay. | [
{
"math_id": 0,
"text": "\n L_A = \\frac{E_w}{\\pi} \\frac{Y_b}{Y_w} = \\frac{L_W Y_b}{Y_w}\n"
},
{
"math_id": 1,
"text": "\n \\begin{bmatrix}\n L\\\\\n M\\\\\n S\n \\end{bmatrix}\n =\n \\mathbf{M}_\\mathit{CAT02}\n \\begin{bmatrix}\n X\\\\\n Y\\\\\n Z\n \\end{bmatrix},\\quad\n \\mathbf{M}_\\mathit{CAT02}\n =\n \\begin{bmatrix}\n \\;\\;\\,0.7328 & 0.4296 & -0.1624\\\\\n -0.7036 & 1.6975 & \\;\\;\\,0.0061\\\\\n \\;\\;\\,0.0030 & 0.0136 & \\;\\;\\,0.9834\n \\end{bmatrix}\n"
},
{
"math_id": 2,
"text": "\\begin{align}\n L_c &= \\Big(\\frac{Y_w L_{wr}}{Y_{wr} L_w} D + 1-D\\Big)L\\\\\n M_c &=\\Big(\\frac{Y_w M_{wr}}{Y_{wr} M_w} D + 1-D\\Big)M\\\\\n S_c &= \\Big(\\frac{Y_w S_{wr}}{Y_{wr} S_w} D + 1-D\\Big)S\\\\\n\\end{align}"
},
{
"math_id": 3,
"text": "\n D = F \\left( 1 - \\textstyle{\\frac{1}{3.6}} e^{-(L_A + 42) / 92} \\right)"
},
{
"math_id": 4,
"text": "\\begin{align}\n L_c &= \\Big(\\frac{Y_w}{L_w} D + 1-D\\Big)L\\\\\n M_c &=\\Big(\\frac{Y_w}{M_w} D + 1-D\\Big)M\\\\\n S_c &= \\Big(\\frac{Y_w}{S_w} D + 1-D\\Big)S\\\\\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n L_c &= \\Big(\\frac{L_{wr}}{L_w} D + 1-D\\Big)L\\\\\n M_c &=\\Big(\\frac{M_{wr}}{M_w} D + 1-D\\Big)M\\\\\n S_c &= \\Big(\\frac{S_{wr}}{S_w} D + 1-D\\Big)S\\\\\n\\end{align}"
},
{
"math_id": 6,
"text": "\n \\begin{bmatrix}\n L' \\\\\n M' \\\\\n S'\n \\end{bmatrix}\n =\n \\mathbf{M}_H\n \\begin{bmatrix}\n X_c \\\\\n Y_c \\\\\n Z_c\n \\end{bmatrix}\n =\n \\mathbf{M}_H\n \\mathbf{M}_{CAT02}^{-1}\n \\begin{bmatrix}\n L_c \\\\\n M_c \\\\\n S_c\n \\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\n \\mathbf{M}_H\n =\n \\begin{bmatrix}\n \\;\\;\\,0.38971 & 0.68898 & -0.07868 \\\\\n -0.22981 & 1.18340 & \\;\\;\\,0.04641 \\\\\n \\;\\;\\,0.00000 & 0.00000 & \\;\\;\\,1.00000\n \\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n k = \\frac{1}{5 L_A + 1}\n"
},
{
"math_id": 9,
"text": "\n F_L = \\textstyle{\\frac{1}{5}} k^4 \\left( 5 L_A \\right) + \\textstyle{\\frac{1}{10}} {(1 - k^4)}^2 {\\left( 5 L_A \\right)}^{1/3}\n"
},
{
"math_id": 10,
"text": "\\begin{align}\n L'_a &= \\frac{400 {\\left(F_L L'/100\\right)}^{0.42}}{27.13 + {\\left(F_L L'/100\\right)}^{0.42}} + 0.1 \\\\\n M'_a &= \\frac{400 {\\left(F_L M'/100\\right)}^{0.42}}{27.13 + {\\left(F_L M'/100\\right)}^{0.42}} + 0.1 \\\\\n S'_a &= \\frac{400 {\\left(F_L S'/100\\right)}^{0.42}}{27.13 + {\\left(F_L S'/100\\right)}^{0.42}} + 0.1\n\\end{align}"
},
{
"math_id": 11,
"text": "\\begin{align}\n C_1 &= L^\\prime_a - M^\\prime_a \\\\\n C_2 &= M^\\prime_a - S^\\prime_a \\\\\n C_3 &= S^\\prime_a - L^\\prime_a\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\n a &= C_1 - \\textstyle{\\frac{1}{11}}C_2\n &= L^\\prime_a - \\textstyle{\\frac{12}{11}} M^\\prime_a + \\textstyle{\\frac{1}{11}} S^\\prime_a \\\\\n b &= \\textstyle{\\frac{1}{2}} \\left( C_2 - C_1 + C_1 - C_3 \\right) / 4.5\n &= \\textstyle{\\frac{1}{9}} \\left( L^\\prime_a + M^\\prime_a - 2S^\\prime_a \\right)\n\\end{align}"
},
{
"math_id": 13,
"text": "\n h = \\angle (a, b) = \\operatorname{atan2}(b, a),\\ (0 \\le h < 360^\\circ) \n"
},
{
"math_id": 14,
"text": "\\begin{align}\n H &= H_i + \\frac{100 (h^\\prime - h_i) / e_i}{(h^\\prime - h_i) / e_i + (h_{i+1} - h^\\prime) / e_{i+1}} \\\\\n e_t &= \\textstyle{\\frac{1}{4}} \\left[ \\cos\\left( \\textstyle{\\frac{\\pi}{180}}h + 2\\right) + 3.8 \\right]\n\\end{align}"
},
{
"math_id": 15,
"text": "\n A = (2 L^\\prime_a + M^\\prime_a + \\textstyle{\\frac{1}{20}} S^\\prime_a - 0.305) N_{bb}\n"
},
{
"math_id": 16,
"text": "\\begin{align}\n &N_{bb} = N_{cb} = 0.725 n^{-0.2} \\\\\n &n = Y_b / Y_w\n\\end{align}"
},
{
"math_id": 17,
"text": "\n J = 100 \\left( A / A_w \\right)^{c z}\n"
},
{
"math_id": 18,
"text": "\n z = 1.48 + \\sqrt{n}\n"
},
{
"math_id": 19,
"text": "\n Q = \\left(4 / c \\right) \\sqrt{\\textstyle{\\frac{1}{100}} J} \\left(A_w + 4\\right) F_L^{1/4}\n"
},
{
"math_id": 20,
"text": "\n t = \\frac{ \\textstyle{\\frac{50\\,000}{13}} N_c N_{cb} e_t \\sqrt{a^2+b^2} }\n { L_a^\\prime + M_a^\\prime + \\textstyle{\\frac{21}{20}} S_a^\\prime }\n"
},
{
"math_id": 21,
"text": "\n C = t^{0.9} \\sqrt {\\textstyle{\\frac{1}{100}} J} (1.64 - 0.29^n)^{0.73}\n"
},
{
"math_id": 22,
"text": "\n M = C \\cdot F_L^{1/4}\n"
},
{
"math_id": 23,
"text": "\n s = 100 \\sqrt {M / Q}\n"
}
] | https://en.wikipedia.org/wiki?curid=12506060 |
1250649 | Tata Institute of Fundamental Research | Public research institute in Mumbai, India
Tata Institute of Fundamental Research (TIFR) is an Indian Research Institute under the Department of Atomic Energy of the Government of India. It is a public deemed university located at Navy Nagar, Colaba in Mumbai. It also has a campus in Bangalore, International Centre for Theoretical Sciences (ICTS), and an affiliated campus in Serilingampally near Hyderabad. TIFR conducts research primarily in the natural sciences, the biological sciences and theoretical computer science.
History.
In 1944, Homi J. Bhabha, known for his role in the development of the Indian atomic energy programme, wrote to the Sir Dorabji Tata Trust requesting financial assistance to set up a scientific research institute. With support from J.R.D. Tata, then chairman of the Tata Group, TIFR was founded on 1 June 1945, and Homi Bhabha was appointed its first director. The institute initially operated within the campus of the Indian Institute of Science, Bangalore before relocating to Mumbai later that year. TIFR's new campus in Colaba was designed by Chicago-based architect Helmuth Bartsch and was inaugurated by Prime Minister Jawaharlal Nehru on 15 January 1962.
Shortly after Indian Independence, in 1949, the Council of Scientific and Industrial Research (CSIR) designated TIFR to be the centre for all large-scale projects in nuclear research. The first theoretical physics group was set up by Bhabha's students B.M. Udgaonkar and K.S. Singhvi. In December 1950, Bhabha organised an international conference at TIFR on elementary particle physics. Several world-renowned scientists attended the conference, including Rudolf Peierls, Léon Rosenfeld, William Fowler as well as Meghnad Saha, Vikram Sarabhai and others providing expertise from India. In the 1950s, TIFR gained prominence in the field of cosmic ray physics, with the setting up of research facilities in Ooty and in the Kolar gold mines.
In 1957, India's first digital computer, TIFRAC was built in TIFR. Acting on the suggestions of British physiologist Archibald Hill, Bhabha invited Obaid Siddiqi to set up a research group in molecular biology. This ultimately resulted in the establishment of the National Centre for Biological Sciences (NCBS), Bangalore twenty years later. In 1970, TIFR started research in radio astronomy with the setting up of the Ooty Radio Telescope. Encouraged by the success of ORT, Govind Swarup persuaded J. R. D. Tata to help set up the Giant Metrewave Radio Telescope near Pune, India.
TIFR attained official deemed university status in June 2002. To meet the ever-growing demand for space for research labs and accommodation, the institute is setting up a new campus in Hyderabad.
Research.
Research at TIFR is distributed across three schools, working in the mathematical sciences, the natural sciences, and technology and computer science.
School of Mathematics.
Since its birth in the 1950s, several contributions to mathematics have come from TIFR School of Mathematics. Notable contributions from TIFR mathematicians include Raghavan Narasimhan's proof of the embedding of open Riemann surfaces in formula_0, C. S. Seshadri's work on projective modules over polynomial rings and M. S. Narasimhan's results in the theory of pseudo differential operators.
Narasimhan and Seshadri wrote a seminal paper on stable vector bundles, work which has been recognised as one of the most influential articles in the area. M. S. Raghunathan started research at TIFR on algebraic and discrete groups, and was recognised for his work on rigidity.
School of Natural Sciences.
The School of Natural Sciences is further split into seven departments working in several areas of physics, chemistry and biology.
Within physics, the Department of Theoretical Physics (DTP) was set up by Bhabha and conducts research in high energy physics and condensed matter physics. The department has worked on major advances in this period, such as quantum field theory, string theory, and superconductivity. The current faculty includes Sandip Trivedi, Shiraz Minwalla, Abhijit Gadde, and Gautam Mandal. Several early faculty members at the institution were renowned in their fields. These include Ashoke Sen, who conducted seminal work on string theory, specifically S-duality, while at this institution. Other distinguished members were Spenta Wadia, Sunil Mukhi, Deepak Dhar and Nandini Trivedi.
The Department of Astrophysics works in areas like stellar binaries, gravitational waves and cosmology. TIFR is involved in building India's first gravitational wave detector.
The High Energy Physics Department, TIFR has been involved in major accelerator projects like the KEK, Tevatron, LEP and the LHC. TIFR also runs the Pelletron particle accelerator facility. Bhabha's motivation resulted in the development of an NMR spectrometer for solid state studies.
The Department of Condensed Matter Physics and Material Sciences also conducts experimental research in high-temperature superconductivity, nanoelectronics and nanophotonics.
School of Technology and Computer Science.
The School of Technology and Computer Science grew out of early activities carried out at TIFR for building digital computers. Today, its activities cover areas such as Algorithms, Complexity Theory, Formal Method, Applied Probability, Learning Theory, Mathematical Finance, Information Theory, Communications, etc.
Department of Biological Sciences.
The Department of Biological Sciences was set up by Obaid Siddiqi in the early 1960s as a molecular biology group. Over the years it has expanded to encompass various other branches of modern biology. The department has fourteen labs covering various aspects of modern molecular and cell biology.
Affiliated institutes.
TIFR also includes institutes outside its main campus in Colaba and Mumbai:
Visiting Students Research Programme.
The Visiting Students Research Programme (VSRP) is a summer programme conducted annually by the Tata Institute of Fundamental Research. VSRP is offered in the subjects Physics and Astronomy, Chemistry, Mathematics, Biology and Computer Science.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}^3"
}
] | https://en.wikipedia.org/wiki?curid=1250649 |
12511846 | Bunyakovsky conjecture | The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial formula_0 in one variable with integer coefficients to give infinitely many prime values in the sequenceformula_1 It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for formula_0 to have the desired prime-producing property:
Bunyakovsky's conjecture is that these conditions are sufficient: if formula_0 satisfies (1)–(3), then formula_3 is prime for infinitely many positive integers formula_4.
A seemingly weaker yet equivalent statement to Bunyakovsky's conjecture is that for every integer polynomial formula_0 that satisfies (1)–(3), formula_3 is prime for "at least one" positive integer formula_4: but then, since the translated polynomial formula_5 still satisfies (1)–(3), in view of the weaker statement formula_6 is prime for at least one positive integer formula_7, so that formula_3 is indeed prime for infinitely many positive integers formula_4. Bunyakovsky's conjecture is a special case of Schinzel's hypothesis H, one of the most famous open problems in number theory.
Discussion of three conditions.
The first condition is necessary because if the leading coefficient is negative then formula_8 for all large formula_9, and thus formula_3 is not a (positive) prime number for large positive integers formula_4. (This merely satisfies the sign convention that primes are positive.)
The second condition is necessary because if formula_10 where the polynomials formula_11 and formula_12 have integer coefficients, then we have formula_13 for all integers formula_4; but formula_11 and formula_12 take the values 0 and formula_14 only finitely many times, so formula_3 is composite for all large formula_4.
The second condition also fails for the polynomials reducible over the rationals.
For example, the integer-valued polynomial formula_15 doesn't satisfy the condition (2) since formula_16, so at least one of the latter two factors must be a divisor of formula_17 in order to have formula_18 prime, which holds only if formula_19. The corresponding values are formula_20, so these are the only such primes for integral formula_9 since all of these numbers are prime. This isn't a counterexample to Bunyakovsky conjecture since the condition (2) fails.
The third condition, that the numbers formula_3 have gcd 1, is obviously necessary, but is somewhat subtle, and is best understood by a counterexample. Consider formula_21, which has positive leading coefficient and is irreducible, and the coefficients are relatively prime; however formula_3 is "even" for all integers formula_4, and so is prime only finitely many times (namely at formula_22, when formula_23).
In practice, the easiest way to verify the third condition is to find one pair of positive integers formula_24 and formula_4 such that formula_6 and formula_3 are relatively prime. In general, for any integer-valued polynomial formula_25 we can use formula_26 for any integer formula_24, so the gcd is given by values of formula_0 at any consecutive formula_27 integers. In the example above, we have formula_28 and so the gcd is formula_29, which implies that formula_30 has even values on the integers.
Alternatively, when an integer polynomial formula_0 is written in the basis of binomial coefficient polynomials:
formula_31
each coefficient formula_32 is an integer and formula_33 In the example above, this is:
formula_34
and the coefficients in the right side of the equation have gcd 2.
Using this gcd formula, it can be proved that formula_35 holds if and only if there are positive integers formula_24 and formula_4 such that formula_6 and formula_3 are relatively prime.
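As an illustration of this criterion, the following Python sketch (the helper name and the sample polynomials are chosen purely for illustration) computes the gcd of the values of an integer-valued polynomial of degree d from d + 1 consecutive arguments:

```python
from math import gcd
from functools import reduce

def value_gcd(f, degree, start=0):
    # gcd of f over degree + 1 consecutive integers,
    # which equals gcd of f(n) over all integers n for an integer-valued f.
    return reduce(gcd, (f(start + i) for i in range(degree + 1)))

# x^2 + x + 2 fails condition (3): every value is even.
print(value_gcd(lambda n: n * n + n + 2, 2))   # 2
# x^2 + 1 satisfies condition (3).
print(value_gcd(lambda n: n * n + 1, 2))       # 1
```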
Examples.
A simple quadratic polynomial.
Some prime values of the polynomial formula_36 are listed in the following table. (Values of formula_9 form OEIS sequence ; those of formula_37 form .)
That formula_38 should be prime infinitely often is a problem first raised by Euler, and it is also the fifth Hardy–Littlewood conjecture and the fourth of Landau's problems. Despite the extensive numerical evidence, it is not known whether this sequence extends indefinitely.
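A short Python check (with an arbitrary search bound) reproduces the beginning of this sequence of values of formula_9 for which formula_38 is prime:

```python
from sympy import isprime

# n for which n^2 + 1 is prime
print([n for n in range(1, 100) if isprime(n * n + 1)])
# starts 1, 2, 4, 6, 10, 14, 16, 20, 24, 26, ...
```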
Cyclotomic polynomials.
The cyclotomic polynomials formula_39 for formula_40 satisfy the three conditions of Bunyakovsky's conjecture, so for all "k", there should be infinitely many natural numbers "n" such that formula_41 is prime. It can be shown that if for all "k", there exists an integer "n" > 1 with formula_41 prime, then for all "k", there are infinitely many natural numbers "n" with formula_41 prime.
The following sequence gives the smallest natural number "n" > 1 such that formula_41 is prime, for formula_40:
3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 6, 2, 4, 3, 2, 10, 2, 22, 2, 2, 4, 6, 2, 2, 2, 2, 2, 14, 3, 61, 2, 10, 2, 14, 2, 15, 25, 11, 2, 5, 5, 2, 6, 30, 11, 24, 7, 7, 2, 5, 7, 19, 3, 2, 2, 3, 30, 2, 9, 46, 85, 2, 3, 3, 3, 11, 16, 59, 7, 2, 2, 22, 2, 21, 61, 41, 7, 2, 2, 8, 5, 2, 2, ... (sequence in the OEIS).
This sequence is known to contain some large terms: the 545th term is 2706, the 601st is 2061, and the 943rd is 2042. This case of Bunyakovsky's conjecture is widely believed, but again it is not known that the sequence extends indefinitely.
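The first terms of this sequence can be reproduced (for small "k", and within an arbitrary search bound) with SymPy, which provides the cyclotomic polynomials; the helper name and the bound below are illustrative choices:

```python
from sympy import cyclotomic_poly, isprime
from sympy.abc import x

def smallest_prime_argument(k, limit=10_000):
    # Smallest n > 1 with Phi_k(n) prime, or None if none is found below limit.
    phi_k = cyclotomic_poly(k, x)
    for n in range(2, limit):
        if isprime(phi_k.subs(x, n)):
            return n
    return None

print([smallest_prime_argument(k) for k in range(1, 21)])
# [3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 6, 2, 4]
```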
Usually, there is an integer formula_4 between 2 and formula_42 (where formula_43 is Euler's totient function, so formula_42 is the degree of formula_41) such that formula_41 is prime, but there are exceptions; the first few are:
1, 2, 25, 37, 44, 68, 75, 82, 99, 115, 119, 125, 128, 159, 162, 179, 183, 188, 203, 213, 216, 229, 233, 243, 277, 289, 292, ...
Partial results: only Dirichlet's theorem.
To date, the only case of Bunyakovsky's conjecture that has been proved is that of polynomials of degree 1. This is Dirichlet's theorem, which states that when formula_44 and formula_24 are relatively prime integers there are infinitely many prime numbers formula_45. This is Bunyakovsky's conjecture for formula_46 (or formula_47 if formula_48).
The third condition in Bunyakovsky's conjecture for a linear polynomial formula_49 is equivalent to formula_44 and formula_24 being relatively prime.
No single case of Bunyakovsky's conjecture for degree greater than 1 is proved, although numerical evidence in higher degree is consistent with the conjecture.
Generalized Bunyakovsky conjecture.
Given formula_50 polynomials with positive degrees and integer coefficients, each satisfying the three conditions, assume that for any prime formula_51 there is an formula_4 such that none of the values of the formula_52 polynomials at formula_4 are divisible by formula_51. Given these assumptions, it is conjectured that there are infinitely many positive integers formula_4 such that all values of these formula_52 polynomials at formula_53 are prime. This conjecture is equivalent to the generalized Dickson conjecture and Schinzel's hypothesis H.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "f(1), f(2), f(3),\\ldots."
},
{
"math_id": 2,
"text": "f(1), f(2), f(3),\\ldots"
},
{
"math_id": 3,
"text": "f(n)"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "f(x+n)"
},
{
"math_id": 6,
"text": "f(m)"
},
{
"math_id": 7,
"text": "m>n"
},
{
"math_id": 8,
"text": "f(x) < 0"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "f(x) = g(x)h(x)"
},
{
"math_id": 11,
"text": "g(x)"
},
{
"math_id": 12,
"text": "h(x)"
},
{
"math_id": 13,
"text": "f(n) = g(n)h(n)"
},
{
"math_id": 14,
"text": "\\pm 1"
},
{
"math_id": 15,
"text": "P(x)=(1/12)\\cdot x^4+(11/12)\\cdot x^2+2"
},
{
"math_id": 16,
"text": "P(x)=(1/12)\\cdot (x^4+11x^2+24)=(1/12)\\cdot(x^2+3)\\cdot(x^2+8)"
},
{
"math_id": 17,
"text": "12"
},
{
"math_id": 18,
"text": "P(x)"
},
{
"math_id": 19,
"text": "|x| \\le 3"
},
{
"math_id": 20,
"text": "2, 3, 7, 17"
},
{
"math_id": 21,
"text": "f(x) = x^2 + x + 2"
},
{
"math_id": 22,
"text": "n =0,-1"
},
{
"math_id": 23,
"text": "f(n)=2"
},
{
"math_id": 24,
"text": "m"
},
{
"math_id": 25,
"text": "f(x) = c_0 + c_1x + \\cdots + c_dx^d"
},
{
"math_id": 26,
"text": "\\gcd \\{f(n)\\}_{n\\geq 1} = \\gcd(f(m),f(m+1),\\dots,f(m+d))"
},
{
"math_id": 27,
"text": "d+1"
},
{
"math_id": 28,
"text": "f(-1)=2, f(0)=2, f(1)=4"
},
{
"math_id": 29,
"text": "2"
},
{
"math_id": 30,
"text": "x^2 + x + 2"
},
{
"math_id": 31,
"text": "\nf(x) = a_0 + a_1\\binom{x}{1} + \\cdots + a_d\\binom{x}{d},\n"
},
{
"math_id": 32,
"text": "a_i"
},
{
"math_id": 33,
"text": "\\gcd\\{f(n)\\}_{n \\geq 1} = \\gcd(a_0,a_1,\\dots,a_d)."
},
{
"math_id": 34,
"text": "\nx^2 + x + 2 = 2\\binom{x}{2} + 2\\binom{x}{1} + 2,\n"
},
{
"math_id": 35,
"text": "\\gcd\\{f(n)\\}_{n \\geq 1} =1"
},
{
"math_id": 36,
"text": "f(x) = x^2 + 1"
},
{
"math_id": 37,
"text": "x^2 + 1"
},
{
"math_id": 38,
"text": "n^2+1"
},
{
"math_id": 39,
"text": "\\Phi_k(x)"
},
{
"math_id": 40,
"text": "k=1,2,3,\\ldots"
},
{
"math_id": 41,
"text": "\\Phi_k(n)"
},
{
"math_id": 42,
"text": "\\phi(k)"
},
{
"math_id": 43,
"text": "\\phi"
},
{
"math_id": 44,
"text": "a"
},
{
"math_id": 45,
"text": "p \\equiv a \\pmod m"
},
{
"math_id": 46,
"text": "f(x) = a + mx"
},
{
"math_id": 47,
"text": "a - mx"
},
{
"math_id": 48,
"text": "m < 0"
},
{
"math_id": 49,
"text": "mx + a"
},
{
"math_id": 50,
"text": "k \\geq 1"
},
{
"math_id": 51,
"text": "p"
},
{
"math_id": 52,
"text": "k"
},
{
"math_id": 53,
"text": "x = n"
}
] | https://en.wikipedia.org/wiki?curid=12511846 |
1251430 | Ł | Letter of the Latin alphabet
Ł or ł, described in English as L with stroke, is a letter of the Polish, Kashubian, Sorbian, Belarusian Latin, Ukrainian Latin, Wymysorys, Navajo, Dëne Sųłıné, Inupiaq, Zuni, Hupa, Sm'álgyax, Nisga'a, and Dogrib alphabets, several proposed alphabets for the Venetian language, and the ISO 11940 romanization of the Thai script. In some Slavic languages, it represents the continuation of the Proto-Slavic non-palatal ⟨L⟩ (dark L), except in Polish, Kashubian, and Sorbian, where it evolved further into . In most non-European languages, it represents a voiceless alveolar lateral fricative or similar sound.
Glyph shape Ł.
In normal typefaces, the letter has a stroke approximately in the middle of the vertical stem, crossing it at an angle between 70° and 45°, never horizontally. In cursive handwriting and typefaces that imitate it, the capital letter has a horizontal stroke through the middle and looks very similar to the pound sign £. In the cursive lowercase letter, the stroke is also horizontal and placed on top of the letter instead of going through the middle of the stem, which would not be distinguishable from the letter t. The stroke is either straight or slightly wavy, depending on the style. Unlike ⟨l⟩, the letter ⟨ł⟩ is usually written without a noticeable loop at the top. Most publicly available multilingual cursive typefaces, including commercial ones, feature an incorrect glyph for ⟨ł⟩.
A rare variant of the ł glyph is a cursive double-ł ligature, used in words such as ', ' or "" (archaic: Allah), where the strokes at the top of the letters are joined into a single stroke.
Polish Ł.
In Polish, ⟨Ł⟩ is used to distinguish the historical dark (velarized) L [ɫ] from clear L [l]. The Polish ⟨Ł⟩ now sounds the same as the English ⟨W⟩, [w] as in "water" (except for older speakers in some eastern dialects where it still sounds velarized).
In 1440, Jakub Parkoszowic proposed a letter resembling formula_0 to represent clear L. For dark L he suggested "l" with a stroke running in the opposite direction to the modern version. The latter was introduced in 1514–1515 by Stanisław Zaborowski in his . L with stroke originally represented a velarized alveolar lateral approximant , a pronunciation that is preserved in the eastern part of Poland and among the Polish minority in Lithuania, Belarus, and Ukraine. This pronunciation is similar to Russian unpalatalised ⟨Л⟩ in native words and grammar forms.
In modern Polish, Ł is usually pronounced (as [w] in English "wet"). This pronunciation first appeared among Polish lower classes in the 16th century. It was considered an uncultured accent by the upper classes (who pronounced ⟨Ł⟩ as ) until the mid-20th century, when this distinction gradually began to fade.
The shift from to in Polish has affected all instances of dark L, even word-initially or intervocalically, e.g. "ładny" ("pretty, nice") is pronounced , "słowo" ("word") is , and "ciało" ("body") is . Ł often alternates with clear L, such as the plural forms of adjectives and verbs in the past tense that are associated with masculine personal nouns, e.g. "mały" → "mali" ( → ). Alternation is also common in declension of nouns, e.g. from nominative to locative, "tło" → "na tle" ( → ).
Polish final Ł also often corresponds to Ukrainian word-final ⟨В⟩ Ve (Cyrillic) and Belarusian ⟨Ў⟩ Short U (Cyrillic). Thus, "he gave" is "dał" in Polish, "дав" in Ukrainian, "даў" in Belarusian (all pronounced ), but "дал" in Russian.
Examples.
Notable figures
Some examples of words with 'ł':
In contexts where Ł is not readily available as a glyph, basic L is used instead. Thus, the surname Małecki would be spelled Malecki in a foreign country.
In the 1980s, when some computers available in Poland lacked Polish diacritics, it was common practice to use a pound sterling sign (£) for Ł. This practice ceased as soon as DOS-based and Mac computers came with a code page for such characters.
Other languages.
In Belarusian Łacinka (both in the 1929 and 1962 versions), ⟨Ł⟩ corresponds to Cyrillic ⟨Л⟩ (El), and is normally pronounced (almost exactly as in English "pull").
In Navajo and Elaponke, ⟨Ł⟩ is used for a voiceless alveolar lateral fricative , like the Welsh double L.
⟨Ł⟩ is used in orthographic transcription of Ahtna, an Athabaskan language spoken in Alaska; it represents a breathy lateral fricative. It is also used in Tanacross, a related Athabaskan language.
When transcribing Armenian into the Latin alphabet, ⟨Ł⟩ may be used to write the letter ⟨Ղ⟩ , for example Ղուկաս => Łukas. In Classical Armenian, ⟨Ղ⟩ was pronounced as , which morphed into in both standard varieties of modern Armenian. Other transcriptions of ⟨Ղ⟩ include ⟨Ṙ⟩, ⟨Ġ⟩ or ⟨Gh⟩. | [
{
"math_id": 0,
"text": "\\ \\ell"
}
] | https://en.wikipedia.org/wiki?curid=1251430 |
1251473 | Solid torus | 3-dimensional object
In mathematics, a solid torus is the topological space formed by sweeping a disk around a circle. It is homeomorphic to the Cartesian product formula_0 of the disk and the circle, endowed with the product topology.
A standard way to visualize a solid torus is as a toroid, embedded in 3-space. However, it should be distinguished from a torus, which has the same visual appearance: the torus is the two-dimensional space on the boundary of a toroid, while the solid torus also includes the compact interior space enclosed by the torus.
A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a "solid torus" include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels.
Topological properties.
The solid torus is a connected, compact, orientable 3-dimensional manifold with boundary. The boundary is homeomorphic to formula_1, the ordinary torus.
Since the disk formula_2 is contractible, the solid torus has the homotopy type of a circle, formula_3. Therefore the fundamental group and homology groups are isomorphic to those of the circle:
formula_4
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S^1 \\times D^2"
},
{
"math_id": 1,
"text": "S^1 \\times S^1"
},
{
"math_id": 2,
"text": "D^2"
},
{
"math_id": 3,
"text": "S^1"
},
{
"math_id": 4,
"text": "\\begin{align}\n \\pi_1\\left(S^1 \\times D^2\\right) &\\cong \\pi_1\\left(S^1\\right) \\cong \\mathbb{Z}, \\\\\n H_k\\left(S^1 \\times D^2\\right) &\\cong H_k\\left(S^1\\right) \\cong \\begin{cases}\n \\mathbb{Z} & \\text{if } k = 0, 1, \\\\\n 0 & \\text{otherwise}. \n \\end{cases}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1251473 |
1251559 | Club set | Set theory concept
In mathematics, particularly in mathematical logic and set theory, a club set is a subset of a limit ordinal that is closed under the order topology, and is unbounded (see below) relative to the limit ordinal. The name "club" is a contraction of "closed and unbounded".
Formal definition.
Formally, if formula_0 is a limit ordinal, then a set formula_1 is "closed" in formula_0 if and only if for every formula_2 if formula_3 then formula_4 Thus, if the limit of some sequence from formula_5 is less than formula_6 then the limit is also in formula_7
If formula_0 is a limit ordinal and formula_8 then formula_5 is unbounded in formula_0 if for any formula_2 there is some formula_9 such that formula_10
If a set is both closed and unbounded, then it is a club set. Closed proper classes are also of interest (every proper class of ordinals is unbounded in the class of all ordinals).
For example, the set of all countable limit ordinals is a club set with respect to the first uncountable ordinal; but it is not a club set with respect to any higher limit ordinal, since it is neither closed nor unbounded.
If formula_0 is an uncountable initial ordinal, then the set of all limit ordinals formula_11 is closed unbounded in formula_12 In fact a club set is nothing else but the range of a normal function (i.e. increasing and continuous).
More generally, if formula_13 is a nonempty set and formula_14 is a cardinal, then formula_15 (the set of subsets of formula_13 of cardinality formula_14) is "club" if every union of a subset of formula_5 is in formula_5 and every subset of formula_13 of cardinality less than formula_14 is contained in some element of formula_5 (see stationary set).
The closed unbounded filter.
Let formula_16 be a limit ordinal of uncountable cofinality formula_17 For some formula_18, let formula_19 be a sequence of closed unbounded subsets of formula_20 Then formula_21 is also closed unbounded. To see this, one can note that an intersection of closed sets is always closed, so we just need to show that this intersection is unbounded. So fix any formula_22 and for each "n" < ω choose from each formula_23 an element formula_24 which is possible because each is unbounded. Since this is a collection of fewer than formula_25 ordinals, all less than formula_26 their least upper bound must also be less than formula_26 so we can call it formula_27 This process generates a countable sequence formula_28 The limit of this sequence must in fact also be the limit of the sequence formula_29 and since each formula_23 is closed and formula_25 is uncountable, this limit must be in each formula_30 and therefore this limit is an element of the intersection that is above formula_31 which shows that the intersection is unbounded. QED.
From this, it can be seen that if formula_16 is a regular cardinal, then formula_32 is a non-principal formula_16-complete proper filter on the set formula_0 (that is, on the poset formula_33).
If formula_16 is a regular cardinal then club sets are also closed under diagonal intersection.
In fact, if formula_16 is regular and formula_34 is any filter on formula_26 closed under diagonal intersection, containing all sets of the form formula_35 for formula_36 then formula_34 must include all club sets.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa"
},
{
"math_id": 1,
"text": "C\\subseteq\\kappa"
},
{
"math_id": 2,
"text": "\\alpha < \\kappa,"
},
{
"math_id": 3,
"text": "\\sup(C \\cap \\alpha) = \\alpha \\neq 0,"
},
{
"math_id": 4,
"text": "\\alpha \\in C."
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "\\kappa,"
},
{
"math_id": 7,
"text": "C."
},
{
"math_id": 8,
"text": "C \\subseteq \\kappa"
},
{
"math_id": 9,
"text": "\\beta \\in C"
},
{
"math_id": 10,
"text": "\\alpha < \\beta."
},
{
"math_id": 11,
"text": "\\alpha < \\kappa"
},
{
"math_id": 12,
"text": "\\kappa."
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "\\lambda"
},
{
"math_id": 15,
"text": "C \\subseteq [X]^\\lambda"
},
{
"math_id": 16,
"text": "\\kappa \\,"
},
{
"math_id": 17,
"text": "\\lambda \\,."
},
{
"math_id": 18,
"text": "\\alpha < \\lambda \\,"
},
{
"math_id": 19,
"text": "\\langle C_\\xi : \\xi < \\alpha\\rangle \\,"
},
{
"math_id": 20,
"text": "\\kappa \\,."
},
{
"math_id": 21,
"text": "\\bigcap_{\\xi < \\alpha} C_\\xi \\,"
},
{
"math_id": 22,
"text": "\\beta_0 < \\kappa \\,,"
},
{
"math_id": 23,
"text": "C_\\xi \\,"
},
{
"math_id": 24,
"text": "\\beta_{n+1}^\\xi > \\beta_{n} \\,,"
},
{
"math_id": 25,
"text": "\\lambda \\,"
},
{
"math_id": 26,
"text": "\\kappa \\,,"
},
{
"math_id": 27,
"text": "\\beta_{n+1} \\,."
},
{
"math_id": 28,
"text": "\\beta_0,\\beta_1,\\beta_2, \\ldots \\,."
},
{
"math_id": 29,
"text": "\\beta_0^\\xi,\\beta_1^\\xi,\\beta_2^\\xi, \\ldots \\,,"
},
{
"math_id": 30,
"text": "C_\\xi \\,,"
},
{
"math_id": 31,
"text": "\\beta_0 \\,,"
},
{
"math_id": 32,
"text": "\\{S \\subseteq \\kappa : \\exists C \\subseteq S \\text{ such that } C \\text{ is closed unbounded in } \\kappa\\}"
},
{
"math_id": 33,
"text": "(\\wp(\\kappa), \\subseteq)"
},
{
"math_id": 34,
"text": "\\mathcal{F} \\,"
},
{
"math_id": 35,
"text": "\\{\\xi < \\kappa : \\xi \\geq \\alpha\\} \\,"
},
{
"math_id": 36,
"text": "\\alpha < \\kappa \\,,"
}
] | https://en.wikipedia.org/wiki?curid=1251559 |
12516446 | Expected value of sample information | In decision theory, the expected value of sample information (EVSI) is the expected increase in utility that a decision-maker could obtain from gaining access to a sample of additional observations before making a decision. The additional information obtained from the sample may allow them to make a more informed, and thus better, decision, thus resulting in an increase in expected utility. EVSI attempts to estimate what this improvement would be before seeing actual sample data; hence, EVSI is a form of what is known as "preposterior analysis". The use of EVSI in decision theory was popularized by Robert Schlaifer and Howard Raiffa in the 1960s.
Formulation.
Let
formula_0
It is common (but not essential) in EVSI scenarios for formula_1, formula_2 and formula_3, which is to say that each observation is an unbiased sensor reading of the underlying state formula_4, with each sensor reading being independent and identically distributed.
The utility from the optimal decision based only on the prior, without making any further observations, is given by
formula_5
If the decision-maker could gain access to a single sample, formula_6, the optimal posterior utility would be
formula_7
where formula_8 is obtained from Bayes' rule:
formula_9
formula_10
Since they don't know what sample would actually be obtained if one were obtained, they must average over all possible samples to obtain the expected utility given a sample:
formula_11
The expected value of sample information is then defined as
formula_12
Computation.
It is seldom feasible to carry out the integration over the space of possible observations in E[U|SI] analytically, so the computation of EVSI usually requires a Monte Carlo simulation. The method involves randomly simulating a sample, formula_13, then using it to compute the posterior formula_14 and maximizing utility based on formula_14. This whole process is then repeated many times, for formula_15 to obtain a Monte Carlo sample of optimal utilities. These are averaged to obtain the expected utility given a hypothetical sample.
Example.
A regulatory agency is to decide whether to approve a new treatment. Before making the final approve/reject decision, they ask what the value would be of conducting a further trial study on formula_16 subjects. This question is answered by the EVSI.
The diagram shows an influence diagram for computing the EVSI in this example.
The model classifies the outcome for any given subject into one of five categories:
formula_17 {"Cure", "Improvement", "Ineffective", "Mild side-effect", "Serious side-effect"}
For each of these outcomes, the model assigns a utility equal to an estimated patient-equivalent monetary value of the outcome.
A decision state, formula_4 in this example is a vector of five numbers between 0 and 1 that sum to 1, giving the proportion of future patients that will experience each of the five possible outcomes. For example, a state formula_18 denotes the case where 5% of patients are cured, 60% improve, 20% find the treatment ineffective, 10% experience mild side-effects and 5% experience dangerous side-effects.
The prior, formula_19 is encoded using a Dirichlet distribution, requiring five numbers (that don't sum to 1) whose relative values capture the expected relative proportion of each outcome, and whose sum encodes the strength of this prior belief. In the diagram, the parameters of the Dirichlet distribution are contained in the variable "dirichlet alpha prior", while the prior distribution itself is in the chance variable "Prior". The probability density graph of the marginals is shown here:
In the chance variable "Trial data", trial data is simulated as a Monte Carlo sample from a Multinomial distribution. For example, when Trial_size=100, each Monte Carlo sample of "Trial_data" contains a vector that sums to 100 showing the number of subjects in the simulated study that experienced each of the five possible outcomes. The following result table depicts the first 8 simulated trial outcomes:
Combining this trial data with a Dirichlet prior requires only adding the outcome frequencies to the Dirichlet prior alpha values, resulting in a Dirichlet posterior distribution for each simulated trial. For each of these, the decision to approve is made based on whether the mean utility is positive; using a utility of zero when the treatment is not approved, the "pre-posterior utility" is obtained. Repeating the computation for a range of possible trial sizes, an EVSI is obtained at each candidate trial size, as depicted in this graph:
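The following Python sketch reproduces this kind of preposterior calculation under stated assumptions: the utility values and the Dirichlet prior parameters below are hypothetical, and the utility is linear in the outcome proportions, so the optimal decision depends only on the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-equivalent utilities for the five outcomes:
# cure, improvement, ineffective, mild side-effect, serious side-effect.
u = np.array([100.0, 40.0, 0.0, -20.0, -500.0])
alpha_prior = np.array([2.0, 10.0, 4.0, 3.0, 1.0])   # hypothetical Dirichlet prior

def expected_utility(alpha):
    # Expected utility of approving under a Dirichlet(alpha) belief.
    return u @ (alpha / alpha.sum())

prior_value = max(0.0, expected_utility(alpha_prior))  # rejecting gives utility 0

def evsi(trial_size, n_sims=20_000):
    post_values = np.empty(n_sims)
    for i in range(n_sims):
        x = rng.dirichlet(alpha_prior)            # simulate a "true" state
        counts = rng.multinomial(trial_size, x)   # simulate trial data
        post_values[i] = max(0.0, expected_utility(alpha_prior + counts))
    return post_values.mean() - prior_value

for n in (10, 30, 100, 300):
    print(n, round(evsi(n), 2))
```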
Comparison to related measures.
Expected value of sample information (EVSI) is a relaxation of the expected value of perfect information (EVPI) metric, which encodes the increase of utility that would be obtained if one were to learn the true underlying state, formula_4. Essentially EVPI indicates the value of perfect information, while EVSI indicates the value of "some limited and incomplete" information.
The expected value of including uncertainty (EVIU) compares the value of modeling uncertain information as compared to modeling a situation without taking uncertainty into account. Since the impact of uncertainty on computed results is often analysed using Monte Carlo methods, EVIU appears to be very similar to "the value of carrying out an analysis using a Monte Carlo sample", which closely resembles in statement the notion captured with EVSI. However, EVSI and EVIU are quite distinct; a notable difference is the manner in which EVSI uses Bayesian updating to incorporate the simulated sample.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{ll}\nd\\in D & \\mbox{the decision being made, chosen from space } D \n\\\\\nx\\in X & \\mbox{an uncertain state, with true value in space } X\n\\\\\nz \\in Z & \\mbox{an observed sample composed of } n \\mbox{ observations } \\langle z_1,z_2,..,z_n \\rangle\n\\\\\nU(d,x) & \\mbox{the utility of selecting decision } d \\mbox{ from } x\n\\\\\np(x) & \\mbox{the prior subjective probability distribution (density function) on } x\n\\\\\np(z|x) & \\mbox{the conditional prior probability of observing the sample } z\n\\end{array}\n"
},
{
"math_id": 1,
"text": "Z_i=X"
},
{
"math_id": 2,
"text": "p(z|x)=\\prod p(z_i|x)"
},
{
"math_id": 3,
"text": "\\int z p(z|x) dz = x"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "\nE[U] = \\max_{d\\in D} ~ \\int_X U(d,x) p(x) ~ dx.\n"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "\nE[U|z] = \\max_{d\\in D} ~ \\int_X U(d,x) p(x|z) ~ dx\n"
},
{
"math_id": 8,
"text": "p(x|z)"
},
{
"math_id": 9,
"text": "\np(x|z) = {{p(z|x) p(x)}\\over{p(z)}};\n"
},
{
"math_id": 10,
"text": "\np(z) = \\int p(z|x) p(x) ~ dx.\n"
},
{
"math_id": 11,
"text": "\nE[U|SI] = \\int_Z E[U|z] p(z) dz = \\int_Z \\max_{d\\in D} ~ \\int_X U(d,x) p(z|x) p(x) ~ dx ~ dz."
},
{
"math_id": 12,
"text": "\n\\begin{array}{rl}\nEVSI & = E[U|SI] - E[U] \\\\\n& = \\left(\\int_Z \\max_{d\\in D} ~ \\int_X U(d,x) p(z|x) p(x) ~ dx ~ dz\\right)\n - \\left(\\max_{d\\in D} ~ \\int_X U(d,x) p(x) ~ dx\\right).\n\\end{array}\n"
},
{
"math_id": 13,
"text": "z^i=\\langle z^i_1,z^i_2,..,z^i_n\\rangle"
},
{
"math_id": 14,
"text": "p(x|z^i)"
},
{
"math_id": 15,
"text": "i=1,..,M"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "Z_i ="
},
{
"math_id": 18,
"text": "x=[5\\%,60\\%,20\\%,10\\%,5\\%]"
},
{
"math_id": 19,
"text": "p(x)"
}
] | https://en.wikipedia.org/wiki?curid=12516446 |
1251702 | Suspension (topology) | In topology, a branch of mathematics, the suspension of a topological space "X" is intuitively obtained by stretching "X" into a cylinder and then collapsing both end faces to points. One views "X" as "suspended" between these end points. The suspension of "X" is denoted by SX or susp("X").
There is a variation of the suspension for pointed space, which is called the reduced suspension and denoted by Σ"X". The "usual" suspension "SX" is sometimes called the unreduced suspension, unbased suspension, or free suspension of "X", to distinguish it from Σ"X."
Free suspension.
The (free) suspension formula_0 of a topological space formula_1 can be defined in several ways.
1. formula_0 is the quotient space formula_2 In other words, it can be constructed as follows:
2. Another way to write this is:
formula_6
Where formula_7 are two points, and for each "i" in {0,1}, formula_8 is the projection to the point formula_9 (a function that maps everything to formula_9). That means, the suspension formula_0 is the result of constructing the cylinder formula_3, and then attaching it by its faces, formula_10 and formula_11, to the points formula_7 along the projections formula_12.
3. One can view formula_0 as two cones on "X," glued together at their base.
4. formula_0 can also be defined as the join formula_13 where formula_14 is a discrete space with two points.
5. In homotopy type theory, formula_0 can be defined as a higher inductive type generated by two point constructors
S : formula_0
N : formula_0
and a path constructor
formula_15
Properties.
In rough terms, "S" increases the dimension of a space by one: for example, it takes an "n"-sphere to an ("n" + 1)-sphere for "n" ≥ 0.
Given a continuous map formula_16 there is a continuous map formula_17 defined by formula_18 where square brackets denote equivalence classes. This makes formula_19 into a functor from the category of topological spaces to itself.
Reduced suspension.
If "X" is a pointed space with basepoint "x"0, there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension Σ"X" of "X" is the quotient space:
formula_20.
This is equivalent to taking "SX" and collapsing the line ("x"0 × "I") joining the two ends to a single point. The basepoint of the pointed space Σ"X" is taken to be the equivalence class of ("x"0, 0).
One can show that the reduced suspension of "X" is homeomorphic to the smash product of "X" with the unit circle "S"1.
formula_21
For well-behaved spaces, such as CW complexes, the reduced suspension of "X" is homotopy equivalent to the unbased suspension.
Adjunction of reduced suspension and loop space functors.
Σ gives rise to a functor from the category of pointed spaces to itself. An important property of this functor is that it is left adjoint to the functor formula_22 taking a pointed space formula_1 to its loop space formula_23. In other words, we have a natural isomorphism
formula_24
where formula_1 and formula_25 are pointed spaces and formula_26 stands for continuous maps that preserve basepoints. This adjunction can be understood geometrically, as follows: formula_27 arises out of formula_1 if a pointed circle is attached to every non-basepoint of formula_1, and the basepoints of all these circles are identified and glued to the basepoint of formula_1. Now, to specify a pointed map from formula_27 to formula_25, we need to give pointed maps from each of these pointed circles to formula_25. This is to say we need to associate to each element of formula_1 a loop in formula_25 (an element of the loop space formula_28), and the trivial loop should be associated to the basepoint of formula_1: this is a pointed map from formula_1 to formula_28. (The continuity of all involved maps needs to be checked.)
The adjunction is thus akin to currying, taking maps on cartesian products to their curried form, and is an example of Eckmann–Hilton duality.
This adjunction is a special case of the adjunction explained in the article on smash products.
Applications.
The reduced suspension can be used to construct a homomorphism of homotopy groups, to which the Freudenthal suspension theorem applies. In homotopy theory, the phenomena which are preserved under suspension, in a suitable sense, make up stable homotopy theory.
Examples.
Some examples of suspensions are:
Desuspension.
Desuspension is an operation partially inverse to suspension.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SX"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "(X \\times [0,1])/(X\\times \\{0\\})\\big/ ( X\\times \\{1\\})."
},
{
"math_id": 3,
"text": "X \\times [0,1]"
},
{
"math_id": 4,
"text": " X\\times \\{0\\}"
},
{
"math_id": 5,
"text": " X\\times \\{1\\}"
},
{
"math_id": 6,
"text": "SX := v_0 \\cup_{p_0}(X \\times [0,1])\\cup_{p_1} v_1\\ =\\ \\varinjlim_{i \\in \\{0,1\\}} \\bigl( (X \\times [0,1]) \\hookleftarrow (X\\times \\{i\\}) \\xrightarrow{p_i} v_i\\bigr),"
},
{
"math_id": 7,
"text": "v_0, v_1"
},
{
"math_id": 8,
"text": "p_i"
},
{
"math_id": 9,
"text": "v_i"
},
{
"math_id": 10,
"text": "X\\times\\{0\\}"
},
{
"math_id": 11,
"text": "X\\times\\{1\\}"
},
{
"math_id": 12,
"text": "p_i: \\bigl( X\\times\\{i\\} \\bigr)\\to v_i"
},
{
"math_id": 13,
"text": "X\\star S^0,"
},
{
"math_id": 14,
"text": "S^0"
},
{
"math_id": 15,
"text": "Merid: \\bigl( X\\bigr)\\to (N=S)"
},
{
"math_id": 16,
"text": "f:X\\rightarrow Y,"
},
{
"math_id": 17,
"text": "Sf:SX\\rightarrow SY"
},
{
"math_id": 18,
"text": "Sf([x,t]):=[f(x),t],"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "\\Sigma X = (X\\times I)/(X\\times\\{0\\}\\cup X\\times\\{1\\}\\cup \\{x_0\\}\\times I)"
},
{
"math_id": 21,
"text": "\\Sigma X \\cong S^1 \\wedge X"
},
{
"math_id": 22,
"text": "\\Omega"
},
{
"math_id": 23,
"text": "\\Omega X"
},
{
"math_id": 24,
"text": " \\operatorname{Maps}_*\\left(\\Sigma X,Y\\right) \\cong \\operatorname{Maps}_*\\left(X,\\Omega Y\\right)"
},
{
"math_id": 25,
"text": "Y"
},
{
"math_id": 26,
"text": "\\operatorname{Maps}_*"
},
{
"math_id": 27,
"text": "\\Sigma X"
},
{
"math_id": 28,
"text": "\\Omega Y"
}
] | https://en.wikipedia.org/wiki?curid=1251702 |
1251777 | Thermal desorption spectroscopy | Method for observing interactions between molecules and surfaces
Temperature programmed desorption (TPD) is the method of observing desorbed molecules from a surface when the surface temperature is increased. When experiments are performed using well-defined surfaces of single-crystalline samples in a continuously pumped ultra-high vacuum (UHV) chamber, then this experimental technique is often also referred to as thermal desorption spectroscopy or thermal desorption spectrometry (TDS).
Desorption.
When molecules or atoms come in contact with a surface, they adsorb onto it, minimizing their energy by forming a bond with the surface. The binding energy varies with the combination of the adsorbate and surface. If the surface is heated, at one point, the energy transferred to the adsorbed species will cause it to desorb. The temperature at which this happens is known as the desorption temperature. Thus TPD shows information on the binding energy.
Measurement.
Since TPD observes the mass of desorbed molecules, it shows what molecules are adsorbed on the surface. Moreover, TPD distinguishes between different adsorption conditions of the same molecule from the differences between the desorption temperatures of molecules desorbing from different sites at the surface, e.g. terraces vs. steps. TPD also obtains the amounts of adsorbed molecules on the surface from the intensity of the peaks of the TPD spectrum, and the total amount of adsorbed species is shown by the integral of the spectrum.
To measure TPD, one needs a mass spectrometer, such as a quadrupole mass spectrometer or a time-of-flight (TOF) mass spectrometer, under ultrahigh vacuum (UHV) conditions. The amount of adsorbed molecules is measured by increasing the temperature at a heating rate of typically 2 K/s to 10 K/s. Several masses may be simultaneously measured by the mass spectrometer, and the intensity of each mass as a function of temperature is obtained as a TDS spectrum.
The heating procedure is often controlled by the PID control algorithm, with the controller being either a computer or specialised equipment such as a Eurotherm.
Other methods of measuring desorption are Thermal Gravimetric Analysis (TGA) or using infrared detectors, thermal conductivity detectors etc.
Quantitative interpretation of TPD data.
TDS spectrum 1 and 2 are typical examples of a TPD measurement. Both are examples of NO desorbing from a single crystal in high vacuum. The crystal was mounted on a titanium filament and heated with current. The desorbing NO was measured using a mass spectrometer monitoring the atomic mass of 30.
Before 1990, analysis of a TPD spectrum was usually done using a so-called simplified method, the "Redhead" method, assuming the exponential prefactor and the desorption energy to be independent of the surface coverage. After 1990, and with the use of computer algorithms, TDS spectra were analyzed using the "complete analysis method" or the "leading edge method". These methods assume the exponential prefactor and the desorption energy to be dependent on the surface coverage. Several available methods of analyzing TDS are described and compared in an article by A. M. de Jong and J. W. Niemantsverdriet. During parameter optimization/estimation, using the integral has been found to create a more well-behaved objective function than the differential.
Theoretical Introduction.
Thermal desorption is described by the Polanyi–Wigner equation derived from the Arrhenius equation.
formula_0
where
formula_1 the desorption rate [mol/(cm2 s)] as a function of formula_2,
formula_3 order of desorption,
formula_2 surface coverage,
formula_4 pre-exponential factor [Hz] as a function of formula_2,
formula_5 activation energy of desorption [kJ/mol] as a function of formula_2,
formula_6 gas constant [J/(K mol)],
formula_7 temperature [K].
This equation is difficult to apply in practice because several variables are functions of the coverage and influence each other. The "complete analysis method" calculates the pre-exponential factor and the activation energy at several coverages. This calculation can be simplified. First we assume the pre-exponential factor and the activation energy to be independent of the coverage.
We also assume a linear heating rate:
formula_8
where:
formula_9 the heating rate in [K/s],
formula_10 the start temperature in [K],
formula_11 the time in [s].
We assume that the pump rate of the system is indefinitely large, thus no gasses will absorb during the desorption. The change in pressure during desorption is described as:
(equation 2)
formula_12
where:
formula_13 the pressure in the system,
formula_14 the time in [s].
formula_15,
formula_16 the sample surface [m2],
formula_17 a constant,
formula_18 volume of the system [m3],
formula_19 the desorption rate [mol/(cm2 s)],
formula_20,
formula_21 the pump rate,
formula_18 volume of the system [m3],
We assume that formula_22 is indefinitely large so molecules do not re-adsorb during the desorption process, and we assume that formula_23 is indefinitely small compared to formula_24 and thus:
formula_25
Equations 2 and 3 lead to the conclusion that the desorption rate is a function of the change in pressure. One can therefore use experimental quantities that are a function of the pressure, such as the intensity measured by a mass spectrometer, to determine the desorption rate.
Since we assumed the pre-exponential factor and the activation energy to be independent of the coverage, thermal desorption is described with a simplified Arrhenius equation:
formula_26
where:
formula_19 the desorption rate[mol/(cm2 s)],
formula_3 order of desorption,
formula_27 surface coverage,
formula_28 pre-exponential factor [Hz],
formula_29 activation energy of desorption [kJ/mol],
formula_6 gas constant,
formula_7 temperature [K].
Using the aforementioned Redhead method (a method less precise than the "complete analysis" or the "leading edge" method) and the temperature maximum formula_30, one can determine the activation energy:
for n=1
formula_31
for n=2
formula_32
M. Ehasi and K. Christmann described a simple method to determine the activation energy for second-order desorption.
Equation 6 can be changed into:
formula_33
where:
formula_34 is the surface area of a TDS or TPD peak.
A graph of formula_35 versus formula_36 results in a straight line with a slope equal to formula_37.
Thus in a first-order reaction formula_30 is independent of the surface coverage, whereas for higher orders it is not; by changing the surface coverage one can therefore determine formula_38. Usually a fixed value of the pre-exponential factor is used and formula_39 is known; with these values one can derive formula_40 iteratively from formula_30.
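As an illustration, the first-order Redhead relation can be solved for the activation energy by a simple fixed-point iteration; the peak temperature, heating rate and pre-exponential factor below are hypothetical example values.

```python
import numpy as np

R = 8.314          # gas constant, J/(K mol)
nu = 1.0e13        # assumed pre-exponential factor, Hz
beta = 2.0         # assumed heating rate, K/s
T_m = 400.0        # hypothetical observed peak temperature, K

# First-order Redhead relation: E/(R*T_m^2) = (nu/beta) * exp(-E/(R*T_m)),
# rearranged as the fixed point E = R*T_m*ln(nu*R*T_m^2 / (beta*E)).
E = 1.0e5          # initial guess, J/mol
for _ in range(50):
    E = R * T_m * np.log(nu * R * T_m**2 / (beta * E))

print(E / 1000.0, "kJ/mol")
# Compare with the common Redhead approximation E ~ R*T_m*(ln(nu*T_m/beta) - 3.64):
print(R * T_m * (np.log(nu * T_m / beta) - 3.64) / 1000.0, "kJ/mol")
```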
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nr(\\sigma) = -\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t} = v(\\sigma) \\sigma^n e^{-E_\\text{act}(\\sigma)/RT},\n"
},
{
"math_id": 1,
"text": " r(\\sigma) "
},
{
"math_id": 2,
"text": " \\sigma "
},
{
"math_id": 3,
"text": " n "
},
{
"math_id": 4,
"text": " v(\\sigma) "
},
{
"math_id": 5,
"text": " E_\\text{act}(\\sigma) "
},
{
"math_id": 6,
"text": " R "
},
{
"math_id": 7,
"text": " T "
},
{
"math_id": 8,
"text": "\n T(t) = T_0 + (\\beta t),\n"
},
{
"math_id": 9,
"text": "\t\\beta "
},
{
"math_id": 10,
"text": "T_0 "
},
{
"math_id": 11,
"text": "t "
},
{
"math_id": 12,
"text": "\n\\frac{\\mathrm{d}P}{\\mathrm{d}t} + P/\\alpha = \\frac{\\mathrm{d}(a\\,r(t))}{\\mathrm{d}t},\n"
},
{
"math_id": 13,
"text": " P "
},
{
"math_id": 14,
"text": " t "
},
{
"math_id": 15,
"text": " a = A/KV "
},
{
"math_id": 16,
"text": " A "
},
{
"math_id": 17,
"text": " K "
},
{
"math_id": 18,
"text": " V "
},
{
"math_id": 19,
"text": " r(t) "
},
{
"math_id": 20,
"text": " \\alpha = V/S "
},
{
"math_id": 21,
"text": " S "
},
{
"math_id": 22,
"text": "S"
},
{
"math_id": 23,
"text": "\nP/\\alpha "
},
{
"math_id": 24,
"text": " \\frac{\\mathrm{d}P}{\\mathrm{d}t} "
},
{
"math_id": 25,
"text": " a\\,r(t) = \\frac{\\mathrm{d}P}{\\mathrm{d}t}. "
},
{
"math_id": 26,
"text": "\nr(t) = -\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t} = v_n \\sigma^n e^{-E_{\\mathrm{act}}/RT},\n"
},
{
"math_id": 27,
"text": "\\sigma "
},
{
"math_id": 28,
"text": " v_n "
},
{
"math_id": 29,
"text": " E_\\text{act} "
},
{
"math_id": 30,
"text": "T_m"
},
{
"math_id": 31,
"text": " \nE_\\text{act}/{RT_m}^2 = v_1/\\beta e^{-E_\\text{act}/RT_m},\n"
},
{
"math_id": 32,
"text": " \nE_\\text{act}/{RT_m}^2 = \\sigma_0 v_2/\\beta e^{-E_{\\mathrm{act}}/RT_m}.\n"
},
{
"math_id": 33,
"text": " \n\\ln(\\sigma_0 {T_m}^2) = -E_\\text{act}/RT + \\ln({\\beta - E_\\text{act}/v_2R}),\n"
},
{
"math_id": 34,
"text": " \\sigma_0 "
},
{
"math_id": 35,
"text": " \\ln(\\sigma_0 {T_m}^2) "
},
{
"math_id": 36,
"text": " 1/T_m "
},
{
"math_id": 37,
"text": "-E_\\text{act}/R"
},
{
"math_id": 38,
"text": "n"
},
{
"math_id": 39,
"text": "\\beta"
},
{
"math_id": 40,
"text": "E_\\text{act}"
}
] | https://en.wikipedia.org/wiki?curid=1251777 |
12518245 | Pinhole camera model | Model of 3D points projected onto planar image via a lens-less aperture
The pinhole camera model describes the mathematical relationship between the coordinates of a point in three-dimensional space and its projection onto the image plane of an "ideal" pinhole camera, where the camera aperture is described as a point and no lenses are used to focus light. The model does not include, for example, geometric distortions or blurring of unfocused objects caused by lenses and finite sized apertures. It also does not take into account that most practical cameras have only discrete image coordinates. This means that the pinhole camera model can only be used as a first order approximation of the mapping from a 3D scene to a 2D image. Its validity depends on the quality of the camera and, in general, decreases from the center of the image to the edges as lens distortion effects increase.
Some of the effects that the pinhole camera model does not take into account can be compensated, for example by applying suitable coordinate transformations on the image coordinates; other effects are sufficiently small to be neglected if a high quality camera is used. This means that the pinhole camera model often can be used as a reasonable description of how a camera depicts a 3D scene, for example in computer vision and computer graphics.
Geometry.
The geometry related to the mapping of a pinhole camera is illustrated in the figure. The figure contains the following basic objects:
The "pinhole" aperture of the camera, through which all projection lines must pass, is assumed to be infinitely small, a point. In the literature this point in 3D space is referred to as the "optical (or lens or camera) center".
Formulation.
Next we want to understand how the coordinates formula_3 of point Q depend on the coordinates formula_1 of point P. This can be done with the help of the following figure which shows the same scene as the previous figure but now from above, looking down in the negative direction of the X2 axis.
In this figure we see two similar triangles, both having parts of the projection line (green) as their hypotenuses. The catheti of the left triangle are formula_4 and "f" and the catheti of the right triangle are formula_5 and formula_6. Since the two triangles are similar it follows that
formula_7 or formula_8
A similar investigation, looking in the negative direction of the X1 axis gives
formula_9 or formula_10
This can be summarized as
formula_11
which is an expression that describes the relation between the 3D coordinates formula_12 of point P and its image coordinates formula_13 given by point Q in the image plane.
Rotated image and the virtual image plane.
The mapping from 3D to 2D coordinates described by a pinhole camera is a perspective projection followed by a 180° rotation in the image plane. This corresponds to how a real pinhole camera operates; the resulting image is rotated 180° and the relative size of projected objects depends on their distance to the focal point and the overall size of the image depends on the distance "f" between the image plane and the focal point. In order to produce an unrotated image, which is what we expect from a camera, there are two possibilities:
In both cases, the resulting mapping from 3D coordinates to 2D image coordinates is given by the expression above, but without the negation, thus
formula_14
In homogeneous coordinates.
The mapping from 3D coordinates of points in space to 2D image coordinates can also be represented in homogeneous coordinates. Let formula_15 be a representation of a 3D point in homogeneous coordinates (a 4-dimensional vector), and let formula_16 be a representation of the image of this point in the pinhole camera (a 3-dimensional vector). Then the following relation holds
formula_17
where formula_18 is the formula_19 camera matrix and the formula_20 means equality between elements of projective spaces. This implies that the left and right hand sides are equal up to a non-zero scalar multiplication. A consequence of this relation is that also formula_18 can be seen as an element of a projective space; two camera matrices are equivalent if they are equal up to a scalar multiplication. This description of the pinhole camera mapping, as a linear transformation formula_18 instead of as a fraction of two linear expressions, makes it possible to simplify many derivations of relations between 3D and 2D coordinates.
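A minimal numerical sketch (with an arbitrary focal distance and test point, and the simplest possible camera matrix, i.e. without principal-point offset, skew or pixel scaling) shows that the fractional form and the homogeneous matrix form give the same image coordinates:

```python
import numpy as np

f = 0.035                                    # focal distance, e.g. 35 mm (arbitrary)

def project(x):
    # Direct projection onto the virtual image plane (no 180° rotation).
    x1, x2, x3 = x
    return np.array([f * x1 / x3, f * x2 / x3])

# The same mapping written with a 3x4 camera matrix and homogeneous coordinates.
C = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)

p = np.array([0.2, -0.1, 2.0])               # a 3D point in camera coordinates
y_h = C @ np.append(p, 1.0)                  # homogeneous image point
y = y_h[:2] / y_h[2]                         # dehomogenise

print(project(p))   # [ 0.0035  -0.00175]
print(y)            # identical, up to floating point
```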
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": " (x_1, x_2, x_3) "
},
{
"math_id": 2,
"text": "x_3"
},
{
"math_id": 3,
"text": " (y_1, y_2) "
},
{
"math_id": 4,
"text": " -y_1 "
},
{
"math_id": 5,
"text": " x_1 "
},
{
"math_id": 6,
"text": " x_3 "
},
{
"math_id": 7,
"text": " \\frac{-y_1}{f} = \\frac{x_1}{x_3} "
},
{
"math_id": 8,
"text": " y_1 = -\\frac{f \\, x_1}{x_3} "
},
{
"math_id": 9,
"text": " \\frac{-y_2}{f} = \\frac{x_2}{x_3} "
},
{
"math_id": 10,
"text": " y_2 = -\\frac{f \\, x_2}{x_3} "
},
{
"math_id": 11,
"text": " \\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} = -\\frac{f}{x_3} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} "
},
{
"math_id": 12,
"text": " (x_1,x_2,x_3) "
},
{
"math_id": 13,
"text": " (y_1,y_2) "
},
{
"math_id": 14,
"text": " \\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} = \\frac{f}{x_3} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} "
},
{
"math_id": 15,
"text": " \\mathbf{x} "
},
{
"math_id": 16,
"text": " \\mathbf{y} "
},
{
"math_id": 17,
"text": " \\mathbf{y} \\sim \\mathbf{C} \\, \\mathbf{x} "
},
{
"math_id": 18,
"text": " \\mathbf{C} "
},
{
"math_id": 19,
"text": " 3 \\times 4 "
},
{
"math_id": 20,
"text": "\\, \\sim "
}
] | https://en.wikipedia.org/wiki?curid=12518245 |
12520196 | Cross slope | Cross slope, cross fall or camber is a geometric feature of pavement surfaces: the transverse slope with respect to the horizon. It is a very important safety factor. Cross slope is provided to create a drainage gradient so that water will run off the surface to a drainage system such as a street gutter or ditch. Inadequate cross slope will contribute to aquaplaning. On straight sections of normal two-lane roads, the pavement cross section is usually highest in the center and drains to both sides. In horizontal curves, the cross slope is banked into superelevation to reduce steering effort and lateral force required to go around the curve. All water drains to the inside of the curve. If the cross slope magnitude oscillates within , the body and payload of high (heavy) vehicles will experience high roll vibration.
Cross slope is usually expressed as a percentage:
formula_0.
Cross slope is the angle around a vertical axis between:
Typical values range from 2 percent for straight segments to 10 percent for sharp superelevated curves. It may also be expressed as a fraction of an inch in rise over a one-foot run (e.g. 1⁄4 inch per foot).
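As a minimal arithmetic check, the rise-per-foot example above converts to a percentage as follows (both quantities expressed in inches):

```python
def cross_slope_percent(rise, run):
    # Cross slope as a percentage: rise over run times 100.
    return rise / run * 100.0

# 1/4 inch of rise per one-foot (12 inch) run
print(cross_slope_percent(0.25, 12.0))   # about 2.08 %
```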
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{cross slope} = \\frac{\\text{rise}}{\\text{run}} \\times 100\\%"
}
] | https://en.wikipedia.org/wiki?curid=12520196 |
12527335 | Cosmic time | Time coordinate used in cosmology
Cosmic time, or cosmological time, is the time coordinate commonly used in the Big Bang models of physical cosmology. This concept of time avoids some issues related to relativity by being defined within a solution to the equations of general relativity widely used in cosmology.
Problems with absolute time.
Albert Einstein's theory of special relativity showed that simultaneity is not absolute. An observer located halfway between two lightning strikes may believe they occurred at the same time, while another observer close to one of the strikes will claim it occurred first and the other strike came after. This coupling of space and time, Minkowski spacetime, complicates scientific time comparisons.
However, Einstein's theory of general relativity provides a partial solution. In general relativity, spacetime is defined in relation to the distribution of mass. A "clock" conceptually linked to a mass will provide a well defined time measurement for all co-moving masses. Cosmic time is based on this concept of a clock.
Definition.
Cosmic time formula_0 is a measure of time by a physical clock with zero peculiar velocity in the absence of matter over-/under-densities (to prevent time dilation due to relativistic effects or confusions caused by expansion of the universe). Unlike other measures of time such as temperature, redshift, particle horizon, or Hubble horizon, the cosmic time (similar and complementary to the co-moving coordinates) is blind to the expansion of the universe.
Cosmic time is the standard time coordinate for specifying the Friedmann–Lemaître–Robertson–Walker solutions of Einstein field equations of general relativity.
Such a time coordinate may be defined for a homogeneous, expanding universe so that the universe has the same density everywhere at each moment in time (the fact that this is possible means that the universe is, by definition, homogeneous). The clocks measuring cosmic time should move along the Hubble flow.
Reference point.
There are two main ways for establishing a reference point for the cosmic time.
Lookback time.
The present time can be used as the cosmic reference point, creating lookback time. This can be described in terms of the time light has taken to arrive here from a distant object.
Age of the universe.
Alternatively, the Big Bang may be taken as reference to define formula_0 as the age of the universe, also known as time since the big bang. The current physical cosmology estimates the present age as 13.8 billion years.
The instant formula_1 does not necessarily have to correspond to a physical event (such as the cosmological singularity); rather, it refers to the point at which the scale factor would vanish for a standard cosmological model such as ΛCDM. For technical purposes, concepts such as the average temperature of the universe (in units of eV) or the particle horizon are used when the early universe is the objective of a study, since understanding the interaction among particles is more relevant than their time coordinate or age.
In mathematical terms, a cosmic time on spacetime formula_2 is a fibration formula_3. This fibration, having the parameter formula_0, is made of three-dimensional manifolds formula_4.
Relation to redshift.
Astronomical observations and theoretical models may use redshift as a time-like parameter. Cosmic time and redshift z are related. In the case of a flat universe without dark energy, the cosmic time can be expressed as:
formula_5
Here formula_6 is the Hubble constant and formula_7 is the density parameter, the ratio of the density of the universe formula_8 to the critical density formula_9 in the Friedmann equation for a flat universe:
formula_10
Uncertainties in the value of these parameters make the time values derived from redshift measurements model dependent.
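As a rough illustration (not part of the original article; the parameter values below are illustrative assumptions, not measured inputs), the high-redshift approximation formula_5 can be evaluated numerically in a few lines of Python:

```python
import math

def cosmic_time_from_redshift(z, H0_km_s_Mpc=70.0, Omega0=0.3):
    """Approximate cosmic time t(z) ~ 2 / (3 H0 sqrt(Omega0)) * z**(-3/2),
    valid for z >> 1/Omega0 in a flat universe without dark energy."""
    Mpc_in_km = 3.0857e19          # kilometres per megaparsec
    H0 = H0_km_s_Mpc / Mpc_in_km   # Hubble constant in 1/s
    t_seconds = 2.0 / (3.0 * H0 * math.sqrt(Omega0)) * z ** -1.5
    return t_seconds / (3.156e7 * 1e9)  # seconds -> billions of years

# Example: cosmic time at redshift z = 20 (well within the z >> 1/Omega0 regime)
print(f"t(z=20) ~ {cosmic_time_from_redshift(20):.3f} Gyr")
```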
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "t=0"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "t \\colon M \\to R"
},
{
"math_id": 4,
"text": "S_t"
},
{
"math_id": 5,
"text": " t(z) \\approx \\frac {2}{3 H_0 {\\Omega_0}^{1/2} } z^{-3/2}\\ , \\ z \\gg 1/\\Omega_0."
},
{
"math_id": 6,
"text": "H_0"
},
{
"math_id": 7,
"text": "\\Omega_0 = \\rho/\\rho_\\text{crit}"
},
{
"math_id": 8,
"text": "\\rho(t)"
},
{
"math_id": 9,
"text": "\\rho_c(t)"
},
{
"math_id": 10,
"text": "\\rho_c(t) = \\frac{3H^2(t)}{8\\pi G}"
}
] | https://en.wikipedia.org/wiki?curid=12527335 |
125276 | Matrix addition | Notions of sums for matrices in linear algebra
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together.
For a vector, formula_0, adding two matrices would have the geometric effect of applying each matrix transformation separately onto formula_0, then adding the transformed vectors.
formula_1
However, there are other operations that could also be considered addition for matrices, such as the direct sum and the Kronecker sum.
Entrywise sum.
Two matrices must have an equal number of rows and columns to be added. In that case, the sum of two matrices A and B will be a matrix which has the same number of rows and columns as A and B. The sum of A and B, denoted A + B, is computed by adding corresponding elements of A and B:
formula_2
Or more concisely (assuming that A + B = C):
formula_3
For example:
formula_4
Similarly, it is also possible to subtract one matrix from another, as long as they have the same dimensions. The difference of A and B, denoted A − B, is computed by subtracting elements of B from corresponding elements of A, and has the same dimensions as A and B. For example:
formula_5
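As an illustration (not part of the original article), the entrywise sum and difference above can be reproduced in Python with NumPy, whose `+` and `-` operators act elementwise on arrays of equal shape:

```python
import numpy as np

A = np.array([[1, 3],
              [1, 0],
              [1, 2]])
B = np.array([[0, 0],
              [7, 5],
              [2, 1]])

print(A + B)  # [[1 3] [8 5] [3 3]]   -- matches the sum example above
print(A - B)  # [[1 3] [-6 -5] [-1 1]] -- matches the difference example
```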
Direct sum.
Another operation, which is used less often, is the direct sum (denoted by ⊕). The Kronecker sum is also denoted ⊕; the context should make the usage clear. The direct sum of any pair of matrices A of size "m" × "n" and B of size "p" × "q" is a matrix of size ("m" + "p") × ("n" + "q") defined as:
formula_6
For instance,
formula_7
The direct sum of matrices is a special type of block matrix. In particular, the direct sum of square matrices is a block diagonal matrix.
The adjacency matrix of the union of disjoint graphs (or multigraphs) is the direct sum of their adjacency matrices. Any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
In general, the direct sum of "n" matrices is:
formula_8
where the zeros are actually blocks of zeros (i.e., zero matrices).
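A sketch of the direct sum in Python (assuming NumPy and SciPy are available; `scipy.linalg.block_diag` builds exactly this kind of block-diagonal arrangement):

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])

# Direct sum: A and B placed along the diagonal, zero blocks elsewhere
print(block_diag(A, B))
# [[1 3 2 0 0]
#  [2 3 1 0 0]
#  [0 0 0 1 6]
#  [0 0 0 0 1]]
```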
Kronecker sum.
The Kronecker sum is different from the direct sum, but is also denoted by ⊕. It is defined using the Kronecker product ⊗ and normal matrix addition. If A is "n"-by-"n", B is "m"-by-"m" and formula_9 denotes the "k"-by-"k" identity matrix then the Kronecker sum is defined by:
formula_10
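Expressed directly from the definition formula_10, a small Python sketch (illustrative only, using NumPy's `kron` and `eye`):

```python
import numpy as np

def kronecker_sum(A, B):
    """Kronecker sum of an n-by-n matrix A and an m-by-m matrix B:
    A (+) B = A (x) I_m + I_n (x) B."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(kronecker_sum(A, B))  # a 4-by-4 matrix
```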
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{v}\\!"
},
{
"math_id": 1,
"text": "\\mathbf{A}\\vec{v} + \\mathbf{B}\\vec{v} = (\\mathbf{A} + \\mathbf{B})\\vec{v}\\!"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\mathbf{A}+\\mathbf{B} & = \\begin{bmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} & a_{m2} & \\cdots & a_{mn} \\\\\n\\end{bmatrix} +\n\n\\begin{bmatrix}\n b_{11} & b_{12} & \\cdots & b_{1n} \\\\\n b_{21} & b_{22} & \\cdots & b_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n b_{m1} & b_{m2} & \\cdots & b_{mn} \\\\\n\\end{bmatrix} \\\\\n& = \\begin{bmatrix}\n a_{11} + b_{11} & a_{12} + b_{12} & \\cdots & a_{1n} + b_{1n} \\\\\n a_{21} + b_{21} & a_{22} + b_{22} & \\cdots & a_{2n} + b_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} + b_{m1} & a_{m2} + b_{m2} & \\cdots & a_{mn} + b_{mn} \\\\\n\\end{bmatrix} \\\\\n\n\\end{align}\\,\\!"
},
{
"math_id": 3,
"text": "c_{ij}=a_{ij}+b_{ij}"
},
{
"math_id": 4,
"text": "\n \\begin{bmatrix}\n 1 & 3 \\\\\n 1 & 0 \\\\\n 1 & 2\n \\end{bmatrix}\n+\n \\begin{bmatrix}\n 0 & 0 \\\\\n 7 & 5 \\\\\n 2 & 1\n \\end{bmatrix}\n=\n \\begin{bmatrix}\n 1+0 & 3+0 \\\\\n 1+7 & 0+5 \\\\\n 1+2 & 2+1\n \\end{bmatrix}\n=\n \\begin{bmatrix}\n 1 & 3 \\\\\n 8 & 5 \\\\\n 3 & 3\n \\end{bmatrix}\n"
},
{
"math_id": 5,
"text": "\n\\begin{bmatrix}\n 1 & 3 \\\\\n 1 & 0 \\\\ \n 1 & 2\n\\end{bmatrix}\n-\n\\begin{bmatrix}\n 0 & 0 \\\\\n 7 & 5 \\\\\n 2 & 1\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 1-0 & 3-0 \\\\\n 1-7 & 0-5 \\\\\n 1-2 & 2-1\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 1 & 3 \\\\\n -6 & -5 \\\\\n -1 & 1\n\\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "\n \\mathbf{A} \\oplus \\mathbf{B} =\n \\begin{bmatrix} \\mathbf{A} & \\boldsymbol{0} \\\\ \\boldsymbol{0} & \\mathbf{B} \\end{bmatrix} =\n \\begin{bmatrix}\n a_{11} & \\cdots & a_{1n} & 0 & \\cdots & 0 \\\\\n \\vdots & \\ddots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m 1} & \\cdots & a_{mn} & 0 & \\cdots & 0 \\\\\n 0 & \\cdots & 0 & b_{11} & \\cdots & b_{1q} \\\\\n \\vdots & \\ddots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & \\cdots & 0 & b_{p1} & \\cdots & b_{pq}\n \\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\n \\begin{bmatrix}\n 1 & 3 & 2 \\\\\n 2 & 3 & 1\n \\end{bmatrix}\n\\oplus\n \\begin{bmatrix}\n 1 & 6 \\\\\n 0 & 1\n \\end{bmatrix}\n=\n \\begin{bmatrix}\n 1 & 3 & 2 & 0 & 0 \\\\\n 2 & 3 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 & 6 \\\\\n 0 & 0 & 0 & 0 & 1\n \\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n\\bigoplus_{i=1}^{n} \\mathbf{A}_{i} = \\operatorname{diag}( \\mathbf{A}_1, \\mathbf{A}_2, \\mathbf{A}_3, \\ldots, \\mathbf{A}_n) =\n\\begin{bmatrix}\n \\mathbf{A}_1 & \\boldsymbol{0} & \\cdots & \\boldsymbol{0} \\\\\n \\boldsymbol{0} & \\mathbf{A}_2 & \\cdots & \\boldsymbol{0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\boldsymbol{0} & \\boldsymbol{0} & \\cdots & \\mathbf{A}_n \\\\\n\\end{bmatrix}\\,\\!"
},
{
"math_id": 9,
"text": "\\mathbf{I}_k"
},
{
"math_id": 10,
"text": " \\mathbf{A} \\oplus \\mathbf{B} = \\mathbf{A} \\otimes \\mathbf{I}_m + \\mathbf{I}_n \\otimes \\mathbf{B}. "
}
] | https://en.wikipedia.org/wiki?curid=125276 |
12527733 | Stick number | Smallest number of edges of an equivalent polygonal path for a knot
In the mathematical theory of knots, the stick number is a knot invariant that intuitively gives the smallest number of straight "sticks" stuck end to end needed to form a knot. Specifically, given any knot formula_0, the stick number of formula_0, denoted by formula_1, is the smallest number of edges of a polygonal path equivalent to formula_0.
Known values.
Six is the lowest stick number for any nontrivial knot. There are few knots whose stick number can be determined exactly. Gyo Taek Jin determined the stick number of a formula_2-torus knot formula_3 in case the parameters formula_4 and formula_5 are not too far from each other:
<templatestyles src="Block indent/styles.css"/>formula_6, if formula_7
The same result was found independently around the same time by a research group around Colin Adams, but for a smaller range of parameters.
Bounds.
The stick number of a knot sum can be upper bounded by the stick numbers of the summands:
formula_8
Related invariants.
The stick number of a knot formula_0 is related to its crossing number formula_9 by the following inequalities:
formula_10
These inequalities are both tight for the trefoil knot, which has a crossing number of 3 and a stick number of 6.
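A quick numerical check of these bounds (an illustrative Python sketch, not part of the original article):

```python
import math

def stick_number_bounds(crossing_number):
    """Lower and upper bounds on the stick number given the crossing number c(K)."""
    lower = 0.5 * (7 + math.sqrt(8 * crossing_number + 1))
    upper = 1.5 * (crossing_number + 1)
    return lower, upper

# Trefoil knot: crossing number 3, stick number 6 -- both bounds are tight
print(stick_number_bounds(3))  # (6.0, 6.0)
```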
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "\\operatorname{stick}(K)"
},
{
"math_id": 2,
"text": "(p,q)"
},
{
"math_id": 3,
"text": "T(p,q)"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "q"
},
{
"math_id": 6,
"text": "\\operatorname{stick}(T(p,q)) = 2q"
},
{
"math_id": 7,
"text": "2 \\le p < q \\le 2p."
},
{
"math_id": 8,
"text": "\\text{stick}(K_1\\#K_2)\\le \\text{stick}(K_1)+ \\text{stick}(K_2)-3 \\, "
},
{
"math_id": 9,
"text": "c(K)"
},
{
"math_id": 10,
"text": "\\frac12(7+\\sqrt{8\\,\\text{c}(K)+1}) \\le \\text{stick}(K)\\le \\frac32 (c(K)+1)."
}
] | https://en.wikipedia.org/wiki?curid=12527733 |
125280 | Matrix multiplication | Mathematical operation in linear algebra
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering.
Computing matrix products is a central operation in all computational applications of linear algebra.
Notation.
This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. A; vectors in lowercase bold, e.g. a; and entries of vectors and matrices are italic (they are numbers from a field), e.g. "A" and "a". Index notation is often the clearest way to express definitions, and is used as standard in the literature. The entry in row i, column j of matrix A is indicated by (A)"ij", "A""ij" or "a""ij". In contrast, a single subscript, e.g. A1, A2, is used to select a matrix (not a matrix entry) from a collection of matrices.
Definitions.
Matrix times matrix.
If A is an "m" × "n" matrix and B is an "n" × "p" matrix,
formula_0
the "matrix product" C = AB (denoted without multiplication signs or dots) is defined to be the "m" × "p" matrix
formula_1
such that
formula_2
for "i" = 1, ..., "m" and "j" = 1, ..., "p".
That is, the entry "c""ij" of the product is obtained by multiplying term-by-term the entries of the ith row of A and the jth column of B, and summing these n products. In other words, "c""ij" is the dot product of the ith row of A and the jth column of B.
Therefore, AB can also be written as
formula_3
Thus the product AB is defined if and only if the number of columns in A equals the number of rows in B, in this case "n".
In most scenarios, the entries are numbers, but they may be any kind of mathematical objects for which an addition and a multiplication are defined, that are associative, and such that the addition is commutative, and the multiplication is distributive with respect to the addition. In particular, the entries may be matrices themselves (see block matrix).
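The defining sum formula_2 translates directly into a triple loop. The following Python sketch (illustrative, not an optimized routine) computes the product of an m×n and an n×p matrix given as nested lists:

```python
def matmul(A, B):
    """Product of an m-by-n matrix A and an n-by-p matrix B (nested lists)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 1],
     [2, 1, 1]]
B = [[1, 2],
     [0, 1],
     [4, 0]]
print(matmul(A, B))  # [[5, 2], [6, 5]]
```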
Matrix times vector.
A vector formula_4 of length formula_5 can be viewed as a column vector, corresponding to an formula_6 matrix formula_7 whose entries are given by formula_8 If formula_9 is an formula_10 matrix, the matrix-times-vector product denoted by formula_11 is then the vector formula_12 that, viewed as a column vector, is equal to the formula_13 matrix formula_14 In index notation, this amounts to:
formula_15
One way of looking at this is that the changes from "plain" vector to column vector and back are assumed and left implicit.
Vector times matrix.
Similarly, a vector formula_4 of length formula_5 can be viewed as a row vector, corresponding to a formula_16 matrix. To make it clear that a row vector is meant, it is customary in this context to represent it as the transpose of a column vector; thus, one will see notations such as formula_17 The identity formula_18 holds. In index notation, if formula_9 is an formula_19 matrix, formula_20 amounts to:
formula_21
Vector times vector.
The dot product formula_22 of two vectors formula_23 and formula_24 of equal length is equal to the single entry of the formula_25 matrix resulting from multiplying these vectors as a row and a column vector, thus: formula_26 (or formula_27 which results in the same formula_25 matrix).
Illustration.
The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B.
formula_28
The values at the intersections, marked with circles in figure to the right, are:
formula_29
Fundamental applications.
Historically, matrix multiplication has been introduced for facilitating and clarifying computations in linear algebra. This strong relationship between matrix multiplication and linear algebra remains fundamental in all mathematics, as well as in physics, chemistry, engineering and computer science.
Linear maps.
If a vector space has a finite basis, its vectors are each uniquely represented by a finite sequence of scalars, called a coordinate vector, whose elements are the coordinates of the vector on the basis. These coordinate vectors form another vector space, which is isomorphic to the original vector space. A coordinate vector is commonly organized as a column matrix (also called a "column vector"), which is a matrix with only one column. So, a column vector represents both a coordinate vector, and a vector of the original vector space.
A linear map A from a vector space of dimension n into a vector space of dimension m maps a column vector
formula_30
onto the column vector
formula_31
The linear map A is thus defined by the matrix
formula_32
and maps the column vector formula_4 to the matrix product
formula_33
If B is another linear map from the preceding vector space of dimension m, into a vector space of dimension p, it is represented by a "p" × "m" matrix formula_34 A straightforward computation shows that the matrix of the composite map "B" ∘ "A" is the matrix product formula_35 The general formula ("B" ∘ "A")("x") = "B"("A"("x")) that defines the function composition is instanced here as a specific case of associativity of matrix product (see below):
formula_36
Geometric rotations.
Using a Cartesian coordinate system in a Euclidean plane, the rotation by an angle formula_37 around the origin is a linear map.
More precisely,
formula_38
where the source point formula_39 and its image formula_40 are written as column vectors.
The composition of the rotation by formula_37 and that by formula_41 then corresponds to the matrix product
formula_42
where appropriate trigonometric identities are employed for the second equality.
That is, the composition corresponds to the rotation by angle formula_43, as expected.
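A numerical illustration of this composition (a Python sketch; the angle values are arbitrary examples):

```python
import numpy as np

def rotation(angle):
    """2-by-2 matrix of the rotation by `angle` radians about the origin."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s],
                     [s,  c]])

alpha, beta = 0.3, 0.5
# Composing the rotation by alpha and then by beta ...
composed = rotation(beta) @ rotation(alpha)
# ... gives the rotation by alpha + beta, up to rounding error.
print(np.allclose(composed, rotation(alpha + beta)))  # True
```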
Resource allocation in economics.
As an example, a fictitious factory uses 4 kinds of basic commodities, formula_47 to produce 3 kinds of intermediate goods, formula_48, which in turn are used to produce 3 kinds of final products, formula_49. The matrices
formula_50 and formula_51
provide the amount of basic commodities needed for a given amount of intermediate goods, and the amount of intermediate goods needed for a given amount of final products, respectively.
For example, to produce one unit of intermediate good formula_52, one unit of basic commodity formula_53, two units of formula_54, no units of formula_55, and one unit of formula_45 are needed, corresponding to the first column of formula_56.
Using matrix multiplication, compute
formula_57
this matrix directly provides the amounts of basic commodities needed for given amounts of final goods. For example, the bottom left entry of formula_44 is computed as formula_58, reflecting that formula_59 units of formula_45 are needed to produce one unit of formula_46. Indeed, one formula_45 unit is needed for formula_52, one for each of two formula_60, and formula_61 for each of the four formula_62 units that go into the formula_46 unit, see picture.
In order to produce e.g. 100 units of the final product formula_46, 80 units of formula_63, and 60 units of formula_64, the necessary amounts of basic goods can be computed as
formula_65
that is, formula_66 units of formula_53, formula_67 units of formula_54, formula_68 units of formula_55, formula_69 units of formula_45 are needed.
Similarly, the product matrix formula_44 can be used to compute the needed amounts of basic goods for other final-good amount data.
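The numbers in this example can be reproduced with a short NumPy sketch (illustrative, not part of the original article):

```python
import numpy as np

A = np.array([[1, 0, 1],   # basic commodities needed per unit of m1, m2, m3
              [2, 1, 1],
              [0, 1, 1],
              [1, 1, 2]])
B = np.array([[1, 2, 1],   # intermediate goods needed per unit of f1, f2, f3
              [2, 3, 1],
              [4, 2, 2]])

AB = A @ B
print(AB)                             # [[5 4 3] [8 9 5] [6 5 3] [11 9 6]]
print(AB @ np.array([100, 80, 60]))   # [1000 1820 1180 2180]
```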
System of linear equations.
The general form of a system of linear equations is
formula_70
Using the same notation as above, such a system is equivalent to the single matrix equation
formula_71
Dot product, bilinear form and sesquilinear form.
The dot product of two column vectors is the unique entry of the matrix product
formula_72
where formula_73 is the row vector obtained by transposing formula_4. (As usual, a 1×1 matrix is identified with its unique entry.)
More generally, any bilinear form over a vector space of finite dimension may be expressed as a matrix product
formula_74
and any sesquilinear form may be expressed as
formula_75
where formula_76 denotes the conjugate transpose of formula_4 (conjugate of the transpose, or equivalently transpose of the conjugate).
General properties.
Matrix multiplication shares some properties with usual multiplication. However, matrix multiplication is not defined if the number of columns of the first factor differs from the number of rows of the second factor, and it is non-commutative, even when the product remains defined after changing the order of the factors.
Non-commutativity.
An operation is commutative if, given two elements A and B such that the product formula_77 is defined, then formula_78 is also defined, and formula_79
If A and B are matrices of respective sizes "m" × "n" and "p" × "q", then formula_77 is defined if "n" = "p", and formula_78 is defined if "q" = "m". Therefore, if one of the products is defined, the other one need not be defined. If "m" = "q" ≠ "n" = "p", the two products are defined, but have different sizes; thus they cannot be equal. Only if "m" = "q" = "n" = "p", that is, if A and B are square matrices of the same size, are both products defined and of the same size. Even in this case, one has in general
formula_80
For example
formula_81
but
formula_82
This example may be expanded for showing that, if A is an "n" × "n" matrix with entries in a field F, then formula_83 for every "n" × "n" matrix B with entries in F, if and only if formula_84 where "c" ∈ "F", and I is the "n" × "n" identity matrix. If, instead of a field, the entries are supposed to belong to a ring, then one must add the condition that c belongs to the center of the ring.
One special case where commutativity does occur is when D and E are two (square) diagonal matrices (of the same size); then DE = ED. Again, if the matrices are over a general ring rather than a field, the corresponding entries in each must also commute with each other for this to hold.
Distributivity.
The matrix product is distributive with respect to matrix addition. That is, if A, B, C, D are matrices of respective sizes "m" × "n", "n" × "p", "n" × "p", and "p" × "q", one has (left distributivity)
formula_85
and (right distributivity)
formula_86
This results from the distributivity for coefficients by
formula_87
formula_88
Product with a scalar.
If A is a matrix and c a scalar, then the matrices formula_89 and formula_90 are obtained by left or right multiplying all entries of A by c. If the scalars have the commutative property, then formula_91
If the product formula_44 is defined (that is, the number of columns of A equals the number of rows of B), then
formula_92 and formula_93
If the scalars have the commutative property, then all four matrices are equal. More generally, all four are equal if "c" belongs to the center of a ring containing the entries of the matrices, because in this case, "c"X = X"c" for all matrices X.
These properties result from the bilinearity of the product of scalars:
formula_94
formula_95
Transpose.
If the scalars have the commutative property, the transpose of a product of matrices is the product, in the reverse order, of the transposes of the factors. That is
formula_96
where T denotes the transpose, that is the interchange of rows and columns.
This identity does not hold for noncommutative entries, since the order between the entries of A and B is reversed, when one expands the definition of the matrix product.
Complex conjugate.
If A and B have complex entries, then
formula_97
where * denotes the entry-wise complex conjugate of a matrix.
This results from applying to the definition of matrix product the fact that the conjugate of a sum is the sum of the conjugates of the summands and the conjugate of a product is the product of the conjugates of the factors.
Transposition acts on the indices of the entries, while conjugation acts independently on the entries themselves. It results that, if A and B have complex entries, one has
formula_98
where † denotes the conjugate transpose (conjugate of the transpose, or equivalently transpose of the conjugate).
Associativity.
Given three matrices A, B and C, the products (AB)C and A(BC) are defined if and only if the number of columns of A equals the number of rows of B, and the number of columns of B equals the number of rows of C (in particular, if one of the products is defined, then the other is also defined). In this case, one has the associative property
formula_99
As for any associative operation, this allows omitting parentheses, and writing the above products as ABC.
This extends naturally to the product of any number of matrices provided that the dimensions match. That is, if A1, A2, ..., A"n" are matrices such that the number of columns of A"i" equals the number of rows of A"i"+1 for "i" = 1, ..., "n" – 1, then the product
formula_100
is defined and does not depend on the order of the multiplications, if the order of the matrices is kept fixed.
These properties may be proved by straightforward but complicated summation manipulations. This result also follows from the fact that matrices represent linear maps. Therefore, the associative property of matrices is simply a specific case of the associative property of function composition.
Computational complexity depends on parenthesization.
Although the result of a sequence of matrix products does not depend on the order of operation (provided that the order of the matrices is not changed), the computational complexity may depend dramatically on this order.
For example, if A, B and C are matrices of respective sizes 10×30, 30×5, 5×60, computing (AB)C needs 10×30×5 + 10×5×60 = 4,500 multiplications, while computing A(BC) needs 30×5×60 + 10×30×60 = 27,000 multiplications.
Algorithms have been designed for choosing the best order of products; see Matrix chain multiplication. When the number n of matrices increases, it has been shown that the choice of the best order has a complexity of formula_101
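The operation counts above follow from the fact that multiplying an m×n matrix by an n×p matrix costs m·n·p scalar multiplications. A small Python sketch of that comparison (a hypothetical helper, not a standard library routine):

```python
def chain_cost(dims, order):
    """Number of scalar multiplications for one parenthesization of A*B*C.
    `dims` lists the three matrix shapes; `order` names which product is formed first."""
    (a, b), (_, c), (_, d) = dims          # shapes: a*b, b*c, c*d
    if order == "(AB)C":
        return a * b * c + a * c * d       # AB first, then (AB) times C
    else:                                  # "A(BC)"
        return b * c * d + a * b * d       # BC first, then A times (BC)

dims = [(10, 30), (30, 5), (5, 60)]
print(chain_cost(dims, "(AB)C"))   # 4500
print(chain_cost(dims, "A(BC)"))   # 27000
```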
Application to similarity.
Any invertible matrix formula_102 defines a similarity transformation (on square matrices of the same size as formula_102)
formula_103
Similarity transformations map product to products, that is
formula_104
In fact, one has
formula_105
Square matrices.
Let us denote by formula_106 the set of "n"×"n" square matrices with entries in a ring R, which, in practice, is often a field.
In formula_106, the product is defined for every pair of matrices. This makes formula_106 a ring, which has the identity matrix I as identity element (the matrix whose diagonal entries are equal to 1 and all other entries are 0). This ring is also an associative R-algebra.
If "n" > 1, many matrices do not have a multiplicative inverse. For example, a matrix such that all entries of a row (or a column) are 0 does not have an inverse. If it exists, the inverse of a matrix A is denoted A−1, and, thus verifies
formula_107
A matrix that has an inverse is an invertible matrix. Otherwise, it is a singular matrix.
A product of matrices is invertible if and only if each factor is invertible. In this case, one has
formula_108
When R is commutative, and, in particular, when it is a field, the determinant of a product is the product of the determinants. As determinants are scalars, and scalars commute, one has thus
formula_109
The other matrix invariants do not behave as well with products. Nevertheless, if R is commutative, AB and BA have the same trace, the same characteristic polynomial, and the same eigenvalues with the same multiplicities. However, the eigenvectors are generally different if AB ≠ BA.
Powers of a matrix.
One may raise a square matrix to any nonnegative integer power by multiplying it by itself repeatedly in the same way as for ordinary numbers. That is,
formula_110
formula_111
formula_112
Computing the kth power of a matrix needs "k" – 1 times the time of a single matrix multiplication, if it is done with the trivial algorithm (repeated multiplication). As this may be very time consuming, one generally prefers using exponentiation by squaring, which requires less than 2 log2 "k" matrix multiplications, and is therefore much more efficient.
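A minimal sketch of exponentiation by squaring for matrices (illustrative; assumes NumPy and integer entries):

```python
import numpy as np

def matrix_power(A, k):
    """Raise the square matrix A to the nonnegative integer power k
    using exponentiation by squaring (O(log k) matrix multiplications)."""
    result = np.eye(A.shape[0], dtype=A.dtype)  # A**0 is the identity
    base = A.copy()
    while k > 0:
        if k & 1:                 # multiply in the current square when this bit of k is set
            result = result @ base
        base = base @ base        # square
        k >>= 1
    return result

A = np.array([[1, 1],
              [1, 0]])
print(matrix_power(A, 10))  # powers of this matrix contain Fibonacci numbers
```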
An easy case for exponentiation is that of a diagonal matrix. Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the kth power of a diagonal matrix is obtained by raising the entries to the power k:
formula_113
Abstract algebra.
The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems. Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the "n" × "n" matrices over a ring form a ring, which is noncommutative except if "n" = 1 and the ground ring is commutative.
A square matrix may have a multiplicative inverse, called an inverse matrix. In the common case where the entries belong to a commutative ring R, a matrix has an inverse if and only if its determinant has a multiplicative inverse in R. The determinant of a product of square matrices is the product of the determinants of the factors. The "n" × "n" matrices that have an inverse form a group under matrix multiplication, the subgroups of which are called matrix groups. Many classical groups (including all finite groups) are isomorphic to matrix groups; this is the starting point of the theory of group representations.
Matrices are the morphisms of a category, the category of matrices. The objects are the natural numbers that measure the size of matrices, and the composition of morphisms is matrix multiplication. The source of a morphism is the number of columns of the corresponding matrix, and the target is the number of rows.
Computational complexity.
The matrix multiplication algorithm that results from the definition requires, in the worst case, "n"3 multiplications and ("n" − 1)"n"2 additions of scalars to compute the product of two square "n"×"n" matrices. Its computational complexity is therefore "O"("n"3), in a model of computation for which the scalar operations take constant time.
Rather surprisingly, this complexity is not optimal, as shown in 1969 by Volker Strassen, who provided an algorithm, now called Strassen's algorithm, with a complexity of formula_114
Strassen's algorithm can be parallelized to further improve the performance.
As of 2024, the best peer-reviewed matrix multiplication algorithm is by Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou and has complexity "O"("n"2.371552).
It is not known whether matrix multiplication can be performed in "n"2 + o(1) time. This would be optimal, since one must read the "n"2 elements of a matrix in order to multiply it with another matrix.
Since matrix multiplication forms the basis for many algorithms, and many operations on matrices even have the same complexity as matrix multiplication (up to a multiplicative constant), the computational complexity of matrix multiplication appears throughout numerical linear algebra and theoretical computer science.
Generalizations.
Other types of products of matrices include:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{A}=\\begin{pmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} & a_{m2} & \\cdots & a_{mn} \\\\\n\\end{pmatrix},\\quad\\mathbf{B}=\\begin{pmatrix}\n b_{11} & b_{12} & \\cdots & b_{1p} \\\\\n b_{21} & b_{22} & \\cdots & b_{2p} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n b_{n1} & b_{n2} & \\cdots & b_{np} \\\\\n\\end{pmatrix}"
},
{
"math_id": 1,
"text": "\\mathbf{C} = \\begin{pmatrix}\n c_{11} & c_{12} & \\cdots & c_{1p} \\\\\n c_{21} & c_{22} & \\cdots & c_{2p} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n c_{m1} & c_{m2} & \\cdots & c_{mp} \\\\\n\\end{pmatrix}"
},
{
"math_id": 2,
"text": " c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \\cdots + a_{in} b_{nj} = \\sum_{k=1}^n a_{ik} b_{kj}, "
},
{
"math_id": 3,
"text": "\\mathbf{C} = \\begin{pmatrix}\n a_{11}b_{11} +\\cdots + a_{1n}b_{n1} & a_{11}b_{12} +\\cdots + a_{1n}b_{n2} & \\cdots & a_{11}b_{1p} +\\cdots + a_{1n}b_{np} \\\\\n a_{21}b_{11} +\\cdots + a_{2n}b_{n1} & a_{21}b_{12} +\\cdots + a_{2n}b_{n2} & \\cdots & a_{21}b_{1p} +\\cdots + a_{2n}b_{np} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1}b_{11} +\\cdots + a_{mn}b_{n1} & a_{m1}b_{12} +\\cdots + a_{mn}b_{n2} & \\cdots & a_{m1}b_{1p} +\\cdots + a_{mn}b_{np} \\\\\n\\end{pmatrix} \n"
},
{
"math_id": 4,
"text": "\\mathbf x"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "n\\times1"
},
{
"math_id": 7,
"text": "\\mathbf X"
},
{
"math_id": 8,
"text": "\\mathbf X_{i1}=\\mathbf x_i."
},
{
"math_id": 9,
"text": "\\mathbf A"
},
{
"math_id": 10,
"text": "m\\times n"
},
{
"math_id": 11,
"text": "\\mathbf {Ax}"
},
{
"math_id": 12,
"text": "\\mathbf y"
},
{
"math_id": 13,
"text": "m\\times1"
},
{
"math_id": 14,
"text": "\\mathbf{AX}."
},
{
"math_id": 15,
"text": "y_i=\\sum_{j=1}^n a_{ij}x_j."
},
{
"math_id": 16,
"text": "1\\times n"
},
{
"math_id": 17,
"text": "\\mathbf{x}^\\mathrm{T}\\mathbf{A}."
},
{
"math_id": 18,
"text": "\\mathbf{x}^\\mathrm{T}\\mathbf{A}=(\\mathbf{A}^\\mathrm{T}\\mathbf{x})^\\mathrm{T}"
},
{
"math_id": 19,
"text": "n\\times p"
},
{
"math_id": 20,
"text": "\\mathbf{x}^\\mathrm{T}\\mathbf{A}=\\mathbf{y}^\\mathrm{T}"
},
{
"math_id": 21,
"text": "y_k=\\sum_{j=1}^n x_j a_{jk}."
},
{
"math_id": 22,
"text": "\\mathbf a\\cdot\\mathbf b"
},
{
"math_id": 23,
"text": "\\mathbf a"
},
{
"math_id": 24,
"text": "\\mathbf b"
},
{
"math_id": 25,
"text": "1\\times 1"
},
{
"math_id": 26,
"text": "\\mathbf{a}^\\mathrm{T}\\mathbf{b}"
},
{
"math_id": 27,
"text": "\\mathbf{b}^\\mathrm{T}\\mathbf{a},"
},
{
"math_id": 28,
"text": "\n\\overset{4\\times 2 \\text{ matrix}}{\\begin{bmatrix}\na_{11} & a_{12} \\\\\n\\cdot & \\cdot \\\\\na_{31} & a_{32} \\\\\n\\cdot & \\cdot \\\\\n\\end{bmatrix}}\n\\overset{2\\times 3\\text{ matrix}}{\\begin{bmatrix}\n\\cdot & b_{12} & b_{13} \\\\\n\\cdot & b_{22} & b_{23} \\\\\n\\end{bmatrix}}\n\n= \\overset{4\\times 3\\text{ matrix}}{\\begin{bmatrix}\n\\cdot & c_{12} & \\cdot \\\\\n\\cdot & \\cdot & \\cdot \\\\\n\\cdot & \\cdot & c_{33} \\\\\n\\cdot & \\cdot & \\cdot \\\\\n\\end{bmatrix}}\n"
},
{
"math_id": 29,
"text": "\\begin{align}\nc_{12} & = a_{11} b_{12} + a_{12} b_{22} \\\\\nc_{33} & = a_{31} b_{13} + a_{32} b_{23} .\n\\end{align}"
},
{
"math_id": 30,
"text": "\\mathbf x=\\begin{pmatrix}x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n\\end{pmatrix}"
},
{
"math_id": 31,
"text": "\\mathbf y= A(\\mathbf x)= \\begin{pmatrix}a_{11}x_1+\\cdots + a_{1n}x_n\\\\ a_{21}x_1+\\cdots + a_{2n}x_n \\\\ \\vdots \\\\ a_{m1}x_1+\\cdots + a_{mn}x_n\\end{pmatrix}."
},
{
"math_id": 32,
"text": "\\mathbf{A}=\\begin{pmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} & a_{m2} & \\cdots & a_{mn} \\\\\n\\end{pmatrix}, "
},
{
"math_id": 33,
"text": "\\mathbf y = \\mathbf {Ax}."
},
{
"math_id": 34,
"text": "\\mathbf B."
},
{
"math_id": 35,
"text": "\\mathbf {BA}."
},
{
"math_id": 36,
"text": "(\\mathbf{BA})\\mathbf x = \\mathbf{B}(\\mathbf {Ax}) = \\mathbf{BAx}."
},
{
"math_id": 37,
"text": "\\alpha"
},
{
"math_id": 38,
"text": " \\begin{bmatrix} x' \\\\ y' \\end{bmatrix} =\n \\begin{bmatrix} \\cos \\alpha & - \\sin \\alpha \\\\ \\sin \\alpha & \\cos \\alpha \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix},"
},
{
"math_id": 39,
"text": "(x,y)"
},
{
"math_id": 40,
"text": "(x',y')"
},
{
"math_id": 41,
"text": "\\beta"
},
{
"math_id": 42,
"text": "\\begin{bmatrix} \\cos \\beta & - \\sin \\beta \\\\ \\sin \\beta & \\cos \\beta \\end{bmatrix} \n \\begin{bmatrix} \\cos \\alpha & - \\sin \\alpha \\\\ \\sin \\alpha & \\cos \\alpha \\end{bmatrix}\n = \\begin{bmatrix} \\cos \\beta \\cos \\alpha - \\sin \\beta \\sin \\alpha & - \\cos \\beta \\sin \\alpha - \\sin \\beta \\cos \\alpha \\\\\n \\sin \\beta \\cos \\alpha + \\cos \\beta \\sin \\alpha & - \\sin \\beta \\sin \\alpha + \\cos \\beta \\cos \\alpha \\end{bmatrix}\n = \\begin{bmatrix} \\cos (\\alpha+\\beta) & - \\sin(\\alpha+\\beta) \\\\ \\sin(\\alpha+\\beta) & \\cos(\\alpha+\\beta) \\end{bmatrix},"
},
{
"math_id": 43,
"text": "\\alpha+\\beta"
},
{
"math_id": 44,
"text": "\\mathbf{AB}"
},
{
"math_id": 45,
"text": "b_4"
},
{
"math_id": 46,
"text": "f_1"
},
{
"math_id": 47,
"text": "b_1, b_2, b_3, b_4"
},
{
"math_id": 48,
"text": "m_1, m_2, m_3"
},
{
"math_id": 49,
"text": "f_1, f_2, f_3"
},
{
"math_id": 50,
"text": "\\mathbf{A} = \\begin{pmatrix} 1 & 0 & 1 \\\\ 2 & 1 & 1 \\\\ 0 & 1 & 1 \\\\ 1 & 1 & 2 \\\\ \\end{pmatrix} "
},
{
"math_id": 51,
"text": "\\mathbf{B} = \\begin{pmatrix} 1 & 2 & 1 \\\\ 2 & 3 & 1 \\\\ 4 & 2 & 2 \\\\ \\end{pmatrix} "
},
{
"math_id": 52,
"text": "m_1"
},
{
"math_id": 53,
"text": "b_1"
},
{
"math_id": 54,
"text": "b_2"
},
{
"math_id": 55,
"text": "b_3"
},
{
"math_id": 56,
"text": "\\mathbf{A}"
},
{
"math_id": 57,
"text": "\\mathbf{AB} = \\begin{pmatrix} 5 & 4 & 3 \\\\ 8 & 9 & 5 \\\\\\ 6 & 5 & 3 \\\\ 11 & 9 & 6 \\\\ \\end{pmatrix} ;"
},
{
"math_id": 58,
"text": "1 \\cdot 1 + 1 \\cdot 2 + 2 \\cdot 4 = 11"
},
{
"math_id": 59,
"text": "11"
},
{
"math_id": 60,
"text": "m_2"
},
{
"math_id": 61,
"text": "2"
},
{
"math_id": 62,
"text": "m_3"
},
{
"math_id": 63,
"text": "f_2"
},
{
"math_id": 64,
"text": "f_3"
},
{
"math_id": 65,
"text": "(\\mathbf{AB}) \\begin{pmatrix} 100 \\\\ 80 \\\\ 60 \\\\ \\end{pmatrix} = \\begin{pmatrix} 1000 \\\\ 1820 \\\\ 1180 \\\\ 2180 \\end{pmatrix} ,"
},
{
"math_id": 66,
"text": "1000"
},
{
"math_id": 67,
"text": "1820"
},
{
"math_id": 68,
"text": "1180"
},
{
"math_id": 69,
"text": "2180"
},
{
"math_id": 70,
"text": "\\begin{matrix}a_{11}x_1+\\cdots + a_{1n}x_n=b_1,\n\\\\ a_{21}x_1+\\cdots + a_{2n}x_n =b_2,\n\\\\ \\vdots\n\\\\ a_{m1}x_1+\\cdots + a_{mn}x_n =b_m. \\end{matrix}"
},
{
"math_id": 71,
"text": "\\mathbf{Ax}=\\mathbf b."
},
{
"math_id": 72,
"text": "\\mathbf x^\\mathsf T \\mathbf y,"
},
{
"math_id": 73,
"text": "\\mathbf x^\\mathsf T"
},
{
"math_id": 74,
"text": "\\mathbf x^\\mathsf T \\mathbf {Ay},"
},
{
"math_id": 75,
"text": "\\mathbf x^\\dagger \\mathbf {Ay},"
},
{
"math_id": 76,
"text": "\\mathbf x^\\dagger"
},
{
"math_id": 77,
"text": "\\mathbf{A}\\mathbf{B}"
},
{
"math_id": 78,
"text": "\\mathbf{B}\\mathbf{A}"
},
{
"math_id": 79,
"text": "\\mathbf{A}\\mathbf{B}=\\mathbf{B}\\mathbf{A}."
},
{
"math_id": 80,
"text": "\\mathbf{A}\\mathbf{B} \\neq \\mathbf{B}\\mathbf{A}."
},
{
"math_id": 81,
"text": "\\begin{pmatrix} 0 & 1 \\\\ 0 & 0 \\end{pmatrix}\\begin{pmatrix} 0 & 0 \\\\ 1 & 0 \\end{pmatrix}=\\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix},"
},
{
"math_id": 82,
"text": "\\begin{pmatrix} 0 & 0 \\\\ 1 & 0 \\end{pmatrix}\\begin{pmatrix} 0 & 1 \\\\ 0 & 0 \\end{pmatrix} = \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix}."
},
{
"math_id": 83,
"text": "\\mathbf{A}\\mathbf{B} = \\mathbf{B}\\mathbf{A}"
},
{
"math_id": 84,
"text": "\\mathbf{A}=c\\,\\mathbf{I}"
},
{
"math_id": 85,
"text": "\\mathbf{A}(\\mathbf{B} + \\mathbf{C}) = \\mathbf{AB} + \\mathbf{AC},"
},
{
"math_id": 86,
"text": "(\\mathbf{B} + \\mathbf{C} )\\mathbf{D} = \\mathbf{BD} + \\mathbf{CD}."
},
{
"math_id": 87,
"text": "\\sum_k a_{ik}(b_{kj} + c_{kj}) = \\sum_k a_{ik}b_{kj} + \\sum_k a_{ik}c_{kj} "
},
{
"math_id": 88,
"text": "\\sum_k (b_{ik} + c_{ik}) d_{kj} = \\sum_k b_{ik}d_{kj} + \\sum_k c_{ik}d_{kj}. "
},
{
"math_id": 89,
"text": "c\\mathbf{A}"
},
{
"math_id": 90,
"text": "\\mathbf{A}c"
},
{
"math_id": 91,
"text": "c\\mathbf{A} = \\mathbf{A}c."
},
{
"math_id": 92,
"text": " c(\\mathbf{AB}) = (c \\mathbf{A})\\mathbf{B}"
},
{
"math_id": 93,
"text": " (\\mathbf{A} \\mathbf{B})c=\\mathbf{A}(\\mathbf{B}c)."
},
{
"math_id": 94,
"text": "c \\left(\\sum_k a_{ik}b_{kj}\\right) = \\sum_k (c a_{ik} ) b_{kj} "
},
{
"math_id": 95,
"text": "\\left(\\sum_k a_{ik}b_{kj}\\right) c = \\sum_k a_{ik} ( b_{kj}c). "
},
{
"math_id": 96,
"text": " (\\mathbf{AB})^\\mathsf{T} = \\mathbf{B}^\\mathsf{T}\\mathbf{A}^\\mathsf{T} "
},
{
"math_id": 97,
"text": " (\\mathbf{AB})^* = \\mathbf{A}^*\\mathbf{B}^* "
},
{
"math_id": 98,
"text": " (\\mathbf{AB})^\\dagger = \\mathbf{B}^\\dagger\\mathbf{A}^\\dagger ,"
},
{
"math_id": 99,
"text": "(\\mathbf{AB})\\mathbf{C}=\\mathbf{A}(\\mathbf{BC})."
},
{
"math_id": 100,
"text": " \\prod_{i=1}^n \\mathbf{A}_i = \\mathbf{A}_1\\mathbf{A}_2\\cdots\\mathbf{A}_n "
},
{
"math_id": 101,
"text": "O(n \\log n)."
},
{
"math_id": 102,
"text": "\\mathbf{P}"
},
{
"math_id": 103,
"text": "S_\\mathbf{P}(\\mathbf{A}) = \\mathbf{P}^{-1} \\mathbf{A} \\mathbf{P}."
},
{
"math_id": 104,
"text": "S_\\mathbf{P}(\\mathbf{AB}) = S_\\mathbf{P}(\\mathbf{A})S_\\mathbf{P}(\\mathbf{B})."
},
{
"math_id": 105,
"text": "\\mathbf{P}^{-1} (\\mathbf{AB}) \\mathbf{P} \n= \\mathbf{P}^{-1} \\mathbf{A}(\\mathbf{P}\\mathbf{P}^{-1})\\mathbf{B} \\mathbf{P}\n=(\\mathbf{P}^{-1} \\mathbf{A}\\mathbf{P})(\\mathbf{P}^{-1}\\mathbf{B} \\mathbf{P})."
},
{
"math_id": 106,
"text": "\\mathcal M_n(R)"
},
{
"math_id": 107,
"text": " \\mathbf{A}\\mathbf{A}^{-1} = \\mathbf{A}^{-1}\\mathbf{A} = \\mathbf{I}. "
},
{
"math_id": 108,
"text": "(\\mathbf{A}\\mathbf{B})^{-1} = \\mathbf{B}^{-1}\\mathbf{A}^{-1}."
},
{
"math_id": 109,
"text": " \\det(\\mathbf{AB}) = \\det(\\mathbf{BA}) =\\det(\\mathbf{A})\\det(\\mathbf{B}). "
},
{
"math_id": 110,
"text": "\\mathbf{A}^0 = \\mathbf{I},"
},
{
"math_id": 111,
"text": "\\mathbf{A}^1 = \\mathbf{A},"
},
{
"math_id": 112,
"text": "\\mathbf{A}^k = \\underbrace{\\mathbf{A}\\mathbf{A}\\cdots\\mathbf{A}}_{k\\text{ times}}."
},
{
"math_id": 113,
"text": "\n \\begin{bmatrix}\n a_{11} & 0 & \\cdots & 0 \\\\\n 0 & a_{22} & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\cdots & a_{nn}\n \\end{bmatrix}^k =\n\\begin{bmatrix}\n a_{11}^k & 0 & \\cdots & 0 \\\\\n 0 & a_{22}^k & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\cdots & a_{nn}^k\n \\end{bmatrix}.\n"
},
{
"math_id": 114,
"text": "O( n^{\\log_{2}7}) \\approx O(n^{2.8074})."
},
{
"math_id": 115,
"text": "\\mathbf{a}\\mathbf{b}^\\mathsf{T}"
}
] | https://en.wikipedia.org/wiki?curid=125280 |
1252844 | Saltwater intrusion | Movement of saline water into freshwater aquifers
Saltwater intrusion is the movement of saline water into freshwater aquifers, which can lead to groundwater quality degradation, including drinking water sources, and other consequences. Saltwater intrusion can naturally occur in coastal aquifers, owing to the hydraulic connection between groundwater and seawater. Because saline water has a higher mineral content than freshwater, it is denser and has a higher water pressure. As a result, saltwater can push inland beneath the freshwater. In other topologies, submarine groundwater discharge can push fresh water into saltwater.
Certain human activities, especially groundwater pumping from coastal freshwater wells, have increased saltwater intrusion in many coastal areas. Water extraction drops the level of fresh groundwater, reducing its water pressure and allowing saltwater to flow further inland. Other contributors to saltwater intrusion include navigation channels or agricultural and drainage channels, which provide conduits for saltwater to move inland. Sea level rise caused by climate change also contributes to saltwater intrusion. Saltwater intrusion can also be worsened by extreme events like hurricane storm surges.
Hydrology.
At the coastal margin, fresh groundwater flowing from inland areas meets with saline groundwater from the ocean. The fresh groundwater flows from inland areas towards the coast where elevation and groundwater levels are lower. Because saltwater has a higher content of dissolved salts and minerals, it is denser than freshwater, causing it to have a higher hydraulic head than freshwater. Hydraulic head refers to the liquid pressure exerted by a water column: a water column with higher hydraulic head will move into a water column with lower hydraulic head, if the columns are connected.
The higher pressure and density of saltwater causes it to move into coastal aquifers in a wedge shape under the freshwater. The saltwater and freshwater meet in a transition zone where mixing occurs through dispersion and diffusion. Ordinarily the inland extent of the saltwater wedge is limited because fresh groundwater levels, or the height of the freshwater column, increases as land elevation gets higher.
Causes.
Groundwater extraction.
Groundwater extraction is the primary cause of saltwater intrusion. Groundwater is the main source of drinking water in many coastal areas of the United States, and extraction has increased over time. Under baseline conditions, the inland extent of saltwater is limited by higher pressure exerted by the freshwater column, owing to its higher elevation. Groundwater extraction can lower the level of the freshwater table, reducing the pressure exerted by the freshwater column and allowing the denser saltwater to move inland laterally. In Cape May, New Jersey, since the 1940s water withdrawals have lowered groundwater levels by up to 30 meters, reducing the water table to below sea level and causing widespread intrusion and contamination of water supply wells.
Groundwater extraction can also lead to well contamination by causing upwelling, or upcoming, of saltwater from the depths of the aquifer. Under baseline conditions, a saltwater wedge extends inland, underneath the freshwater because of its higher density. Water supply wells located over or near the saltwater wedge can draw the saltwater upward, creating a saltwater cone that might reach and contaminate the well. Some aquifers are predisposed towards this type of intrusion, such as the Lower Floridan aquifer: though a relatively impermeable rock or clay layer separates fresh groundwater from saltwater, isolated cracks breach the confining layer, promoting upward movement of saltwater. Pumping of groundwater strengthens this effect by lowering the water table, reducing the downward push of freshwater.
Canals and drainage networks.
The construction of canals and drainage networks can lead to saltwater intrusion. Canals provide conduits for saltwater to be carried inland, as does the deepening of existing channels for navigation purposes. In Sabine Lake Estuary in the Gulf of Mexico, large-scale waterways have allowed saltwater to move into the lake, and upstream into the rivers feeding the lake. Additionally, channel dredging in the surrounding wetlands to facilitate oil and gas drilling has caused land subsidence, further promoting inland saltwater movement.
Drainage networks constructed to drain flat coastal areas can lead to intrusion by lowering the freshwater table, reducing the water pressure exerted by the freshwater column. Saltwater intrusion in southeast Florida has occurred largely as a result of drainage canals built between 1903 into the 1980s to drain the Everglades for agricultural and urban development. The main cause of intrusion was the lowering of the water table, though the canals also conveyed seawater inland until the construction of water control gates.
Solutions.
Seawater intrusion (SWI) into rivers can lead to many negative consequences, especially for agricultural activities and ecosystems in upstream areas of rivers. Many solutions have been developed to prevent or reduce the negative effects of seawater intrusion. One sustainable solution for rivers is the use of air bubble curtains, which can effectively address SWI issues in rivers.
Effect on water supply.
Many coastal communities around the United States are experiencing saltwater contamination of water supply wells, and this problem has been seen for decades. Many Mediterranean coastal aquifers suffer from seawater intrusion effects. The consequences of saltwater intrusion for supply wells vary widely, depending on the extent of the intrusion, the intended use of the water, and whether the salinity exceeds standards for the intended use. In some areas such as Washington State, intrusion only reaches portions of the aquifer, affecting only certain water supply wells. Other aquifers have faced more widespread salinity contamination, significantly affecting groundwater supplies for the region. For instance, in Cape May, New Jersey, where groundwater extraction has lowered water tables by up to 30 meters, saltwater intrusion has caused closure of over 120 water supply wells since the 1940s.
Ghyben–Herzberg relation.
The first physical formulations of saltwater intrusion were made by Willem Badon-Ghijben in 1888 and 1889 as well as Alexander Herzberg in 1901, thus called the Ghyben–Herzberg relation. They derived analytical solutions to approximate the intrusion behavior, which are based on a number of assumptions that do not hold in all field cases.
In the equation, formula_0 the thickness of the freshwater zone above sea level is represented as formula_1 and that below sea level is represented as formula_2. The two thicknesses formula_1 and formula_2 are related by formula_3 and formula_4, where formula_3 is the density of freshwater and formula_5 is the density of saltwater. Freshwater has a density of about 1.000 grams per cubic centimeter (g/cm3) at 20 °C, whereas that of seawater is about 1.025 g/cm3. The equation can be simplified to formula_6.
The Ghyben–Herzberg ratio states that, for every meter of fresh water in an unconfined aquifer above sea level, there will be forty meters of fresh water in the aquifer below sea level.
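A minimal Python sketch of the Ghyben–Herzberg relation (the density values are those quoted in the paragraph above):

```python
def saltwater_interface_depth(h, rho_f=1.000, rho_s=1.025):
    """Depth z of the freshwater/saltwater interface below sea level for a
    freshwater head h above sea level (Ghyben-Herzberg relation)."""
    return rho_f / (rho_s - rho_f) * h

# One metre of fresh water above sea level corresponds to about 40 m below it.
print(saltwater_interface_depth(1.0))  # 40.0
```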
In the 20th century the vastly increased computing power available allowed the use of numerical methods (usually finite differences or finite elements) that need fewer assumptions and can be applied more generally.
Modeling.
Modeling of saltwater intrusion is considered difficult. Some typical difficulties that arise are:
Mitigation and management.
Saltwater is also an issue where a lock separates saltwater from freshwater (for example the Hiram M. Chittenden Locks in Washington). In this case a collection basin was built from which the saltwater can be pumped back to the sea. Some of the intruding saltwater is also pumped to the fish ladder to make it more attractive to migrating fish.
As groundwater salinization becomes a relevant problem, more comprehensive initiatives should be applied, ranging from local technical and engineering solutions to rules or regulatory instruments for whole aquifers or regions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " z = \\frac{ \\rho_f} {(\\rho_s-\\rho_f)} h"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "z"
},
{
"math_id": 3,
"text": "\\rho_f"
},
{
"math_id": 4,
"text": "\\rho_s"
},
{
"math_id": 5,
"text": "\\rho_s "
},
{
"math_id": 6,
"text": "z\\ = 40 h"
}
] | https://en.wikipedia.org/wiki?curid=1252844 |
1252846 | Graph rewriting | Creating a new graph from an existing graph
In computer science, graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically. It has numerous applications, ranging from software engineering (software construction and also software verification) to layout algorithms and picture generation.
Graph transformations can be used as a computation abstraction. The basic idea is that if the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.
Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form formula_0, with formula_1 being called pattern graph (or left-hand side) and formula_2 being called replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving the subgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case of labeled graphs, such as in string-regulated graph grammars.
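As a very rough illustration of this match-and-rewrite loop (a simplified sketch, not the DPO or SPO formalism described below; it assumes the NetworkX library for the subgraph-isomorphism matching), consider a rule whose pattern is a two-edge path a-b-c and whose replacement simply adds the shortcut edge a-c, so all pattern nodes are preserved:

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Pattern graph L: the path a-b-c.
pattern = nx.Graph([("a", "b"), ("b", "c")])

def apply_shortcut_rule(host):
    """Apply the rule to one occurrence of the pattern that still changes the graph.
    Returns True if an edge was added, False when no such occurrence remains."""
    matcher = isomorphism.GraphMatcher(host, pattern)
    for match in matcher.subgraph_isomorphisms_iter():
        occ = {p: h for h, p in match.items()}   # pattern node -> host node
        if not host.has_edge(occ["a"], occ["c"]):
            host.add_edge(occ["a"], occ["c"])    # the "replacement": add the shortcut
            return True
    return False

host = nx.path_graph(4)            # host graph 0-1-2-3
while apply_shortcut_rule(host):   # rewrite until no application changes the graph
    pass
print(sorted(host.edges()))        # all shortcut edges have been added
```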
Sometimes graph grammar is used as a synonym for "graph rewriting system", especially in the context of formal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language – instead of simply transforming a given state (host graph) into a new state.
Graph rewriting approaches.
Algebraic approach.
The algebraic approach to graph rewriting is based upon category theory. The algebraic approach is further divided into sub-approaches, the most common of which are the "double-pushout (DPO) approach" and the "single-pushout (SPO) approach". Other sub-approaches include the "sesqui-pushout" and the "pullback approach".
From the perspective of the DPO approach a graph rewriting rule is a pair of morphisms in the category of graphs and graph homomorphisms between them: formula_3, also written formula_4, where formula_5 is injective. The graph K is called "invariant" or sometimes the "gluing graph". A "rewriting step" or "application" of a rule r to a "host graph" G is defined by two pushout diagrams both originating in the same morphism formula_6, where D is a "context graph" (this is where the name "double"-pushout comes from). Another graph morphism formula_7 models an occurrence of L in G and is called a "match". Practical understanding of this is that formula_1 is a subgraph that is matched from formula_8 (see subgraph isomorphism problem), and after a match is found, formula_1 is replaced with formula_2 in host graph formula_8 where formula_9 serves as an interface, containing the nodes and edges which are preserved when applying the rule. The graph formula_9 is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graph formula_8.
In contrast, a graph rewriting rule of the SPO approach is a single morphism in the category of labeled multigraphs and "partial mappings" that preserve the multigraph structure: formula_10. Thus a rewriting step is defined by a single pushout diagram. Practical understanding of this is similar to the DPO approach. The difference is that there is no interface between the host graph G and the graph G' being the result of the rewriting step.
From the practical perspective, the key distinction between DPO and SPO is how they deal with the deletion of nodes with adjacent edges, in particular, how they avoid that such deletions may leave behind "dangling edges". The DPO approach only deletes a node when the rule specifies the deletion of all adjacent edges as well (this "dangling condition" can be checked for a given match), whereas the SPO approach simply disposes the adjacent edges, without requiring an explicit specification.
There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, called "matrix graph grammars".
Determinate graph rewriting.
Yet another approach to graph rewriting, known as "determinate" graph rewriting, came out of logic and database theory. In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined.
Term graph rewriting.
Another approach to graph rewriting is term graph rewriting, which involves the processing or transformation of term graphs (also known as "abstract semantic graphs") by a set of syntactic rewrite rules.
Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can perform automated verification and logical programming since they are well-suited to representing quantified statements in first order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings.
The TERMGRAPH conference focuses entirely on research into term graph rewriting and its applications.
Classes of graph grammar and graph rewriting system.
Graph rewriting systems naturally group into classes according to the kind of representation of graphs that are used and how the rewrites are expressed. The term graph grammar, otherwise equivalent to graph rewriting system or graph replacement system, is most often used in classifications. Some common types are:
Implementations and applications.
Graphs are an expressive, visual and mathematically precise formalism for modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L \\rightarrow R"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "r = (L \\leftarrow K \\rightarrow R)"
},
{
"math_id": 4,
"text": "L \\supseteq K \\subseteq R"
},
{
"math_id": 5,
"text": "K \\rightarrow L"
},
{
"math_id": 6,
"text": "k\\colon K\\rightarrow D"
},
{
"math_id": 7,
"text": "m\\colon L\\rightarrow G"
},
{
"math_id": 8,
"text": "G"
},
{
"math_id": 9,
"text": "K"
},
{
"math_id": 10,
"text": "r\\colon L\\rightarrow R"
}
] | https://en.wikipedia.org/wiki?curid=1252846 |
12528650 | KeY | The KeY tool is used in formal verification of Java programs. It accepts specifications written in the Java Modeling Language to Java source files. These are transformed into theorems of dynamic logic and then compared against program semantics that are likewise defined in terms of dynamic logic. KeY is significantly powerful in that it supports both interactive (i.e. by hand) and fully automated correctness proofs. Failed proof attempts can be used for a more efficient debugging or verification-based testing. There have been several extensions to KeY in order to apply it to the verification of C programs or hybrid systems. KeY is jointly developed by Karlsruhe Institute of Technology, Germany; Technische Universität Darmstadt, Germany; and Chalmers University of Technology in Gothenburg, Sweden and is licensed under the GPL.
Overview.
The usual user input to KeY consists of a Java source file with annotations in JML. Both are translated to KeY's internal representation, dynamic logic. From the given specifications, several proof obligations arise which are to be discharged, i.e. a proof has to be found. To this end, the program is symbolically executed, with the resulting changes to program variables stored in so-called "updates". Once the program has been processed completely, there remains a first-order logic proof obligation. At the heart of the KeY system lies a first-order theorem prover based on sequent calculus, which is used to close the proof. Inference rules are captured in so-called "taclets", which are written in a simple dedicated language that describes changes to a sequent.
Java Card DL.
The theoretical foundation of KeY is a formal logic called Java Card DL. DL stands for Dynamic Logic. It is a version of a first-order dynamic logic tailored to Java Card programs. As such, it for example allows statements (formulas) like formula_0, which intuitively says that the post-condition formula_1 must hold in all program states reachable by executing the Java Card program formula_2 in any state that satisfies the pre-condition formula_3. This is equivalent to formula_4 in Hoare calculus if formula_3 and formula_1 are purely first order. Dynamic logic, however, extends Hoare logic in that formulas may contain nested program modalities such as formula_5, or that quantification over formulas which contain modalities is possible. There is also a dual modality formula_6 which includes termination. This dynamic logic can be seen as a special multi-modal logic (with an infinite number of modalities) where for each Java block formula_2 there are modalities formula_5 and formula_6.
Deduction component.
At the heart of the KeY system lies a first-order theorem prover based on a sequent calculus. A sequent is of the form formula_7 where formula_8 (assumptions) and formula_9 (propositions) are sets of formulas with the intuitive meaning that formula_10 holds true. By means of deduction, an initial sequent representing the proof obligation is shown to be constructible from just fundamental first-order axioms (such as equality formula_11).
Symbolic execution of Java code.
During that, program modalities are eliminated by symbolic execution. For instance, the formula formula_12 is logically equivalent to formula_13. As this example shows, symbolic execution in dynamic logic is very similar to calculating weakest preconditions. Both formula_14 and formula_15 essentially denote the same thing – with two exceptions: Firstly, formula_16 is a function of some meta-calculus while formula_14 really is a formula of the given calculus. Secondly, symbolic execution runs through the program "forward" just as an actual execution would. To save intermediate results of assignments, KeY introduces a concept called "updates", which are similar to substitutions but are only applied once the program modality has been eliminated. Syntactically, updates consist of parallel (side-effect-free) assignments written in curly braces in front of a modality. An example of symbolic execution with updates: formula_17 is transformed to formula_18 in the first step and to formula_19 in the second step. The modality then is empty and "backwards application" of the update to the postcondition yields a precondition where formula_20 could take any value.
Example.
Suppose one wants to prove that the following method calculates the product of some non-negative integers formula_20 and formula_21.
int foo (int x, int y) {
    int z = 0;
    while (y > 0) {
        if (y % 2 == 0) {
            x = x*2;
            y = y/2;
        } else {
            y = y/2;
            z = z+x;
            x = x*2;
        }
    }
    return z;
}
One thus starts the proof with the premise formula_22 and the to-be-shown conclusion formula_23. Note that tableaux of sequent calculi are usually written "upside-down", i.e., the starting sequent appears at the bottom and deduction steps go upwards. The proof can be seen in the figure on the right.
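Independently of the deductive proof, the intended behaviour is easy to spot-check by ordinary testing, in the spirit of the verification-based testing mentioned above. The following Python sketch, which is not part of KeY and uses names of our own choosing, simply transcribes the Java method and confirms the postcondition on a small range of non-negative inputs; a passing test is of course not a proof.
def foo(x, y):
    # Python transcription of the Java method above; // is integer division,
    # matching Java's int division for non-negative operands
    z = 0
    while y > 0:
        if y % 2 == 0:
            x = x * 2
            y = y // 2
        else:
            y = y // 2
            z = z + x
            x = x * 2
    return z

# exhaustive check of the postcondition z = x*y on a small range of inputs
assert all(foo(x, y) == x * y for x in range(50) for y in range(50))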
Additional features.
Symbolic Execution Debugger.
The "Symbolic Execution Debugger" visualizes the control flow of a program as a symbolic execution tree that contains all feasible execution paths through the program up to a certain point. It is provided as a plugin to the Eclipse development platform.
Test Case Generator.
KeY is usable as a model-based testing tool that can generate unit tests for Java programs. The model from which test data and the test case are derived consists of a formal specification (provided in JML) and a symbolic execution tree of the implementation under test which is computed by the KeY system.
Distribution and Variants of the KeY System.
KeY is free software written in Java and licensed under GPL. It can be downloaded from the project website in source; currently there are no pre-compiled binaries available. As another possibility, KeY can be executed directly via Java web start without the need for compilation and installation.
KeY-Hoare.
"KeY-Hoare" is built on top of KeY and features a Hoare calculus with state updates. State updates are a means of describing state transitions in a Kripke structure. This calculus can be seen as a subset to the one that is used in the main branch of KeY. Due to the simplicity of the Hoare calculus, this implementation is essentially meant to exemplify formal methods in undergraduate classes.
KeYmaera/KeYmaeraX.
"KeYmaera" (previously called HyKeY) is a deductive verification tool for hybrid systems based on a calculus for the differential dynamic logic dL .
It extends the KeY tool with computer algebra systems like Mathematica and corresponding algorithms and proof strategies such that it can be used for practical verification of hybrid systems.
KeYmaera has been developed at the University of Oldenburg and the Carnegie Mellon University. The name of the tool was chosen as a homophone to Chimera, the hybrid animal from ancient Greek mythology.
KeYmaeraX developed at the Carnegie Mellon University is the successor of KeYmaera. It has been completely rewritten.
KeY for C.
"KeY for C" is an adaption of the KeY System to MISRA C, a subset of the C programming language. This variant is no longer supported.
ASMKeY.
There is also an adaptation to use KeY for the symbolic execution of Abstract State Machines, that was developed at ETH Zürich. This variant is no longer supported.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi \\rightarrow [\\alpha]\\psi"
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\{\\phi\\}\\alpha\\{\\psi\\}"
},
{
"math_id": 5,
"text": "[\\alpha]"
},
{
"math_id": 6,
"text": "\\langle\\alpha\\rangle"
},
{
"math_id": 7,
"text": "\\Gamma \\vdash \\Delta"
},
{
"math_id": 8,
"text": "\\Gamma"
},
{
"math_id": 9,
"text": "\\Delta"
},
{
"math_id": 10,
"text": "\\bigwedge_{\\gamma\\in\\Gamma} \\gamma \\rightarrow \\bigvee_{\\delta\\in\\Delta}\\delta"
},
{
"math_id": 11,
"text": "e\\ \\dot{=}\\ e"
},
{
"math_id": 12,
"text": "x\\ \\dot{=}\\ 0 \\rightarrow [x++;]x\\ \\dot{=}\\ 1"
},
{
"math_id": 13,
"text": "x\\ \\dot{=}\\ 0 \\rightarrow x\\ \\dot{=}\\ 0"
},
{
"math_id": 14,
"text": "[\\alpha]\\psi"
},
{
"math_id": 15,
"text": "wp(\\alpha,\\psi)"
},
{
"math_id": 16,
"text": "wp"
},
{
"math_id": 17,
"text": "[x= 3; x=x+1;]x\\ \\dot{=}\\ 4"
},
{
"math_id": 18,
"text": "\\{x:= 3\\}[x=x+1;]x\\ \\dot{=}\\ 4"
},
{
"math_id": 19,
"text": "\\{x:= 4\\}[]x\\ \\dot{=}\\ 4"
},
{
"math_id": 20,
"text": "x"
},
{
"math_id": 21,
"text": "y"
},
{
"math_id": 22,
"text": "x \\geq 0 \\land y \\geq 0"
},
{
"math_id": 23,
"text": "z\\ \\dot{=}\\ x \\cdot y"
}
] | https://en.wikipedia.org/wiki?curid=12528650 |
12528854 | Hermite constant | Constant relating to close packing of spheres
In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be.
The constant "γn" for integers "n" > 0 is defined as follows. For a lattice "L" in Euclidean space R"n" with unit covolume, i.e. vol(R"n"/"L") = 1, let "λ"1("L") denote the least length of a nonzero element of "L". Then √"γn" is the maximum of "λ"1("L") over all such lattices "L".
The square root in the definition of the Hermite constant is a matter of historical convention.
Alternatively, the Hermite constant "γn" can be defined as the square of the maximal systole of a flat "n"-dimensional torus of unit volume.
Example.
The Hermite constant is known in dimensions 1–8 and 24.
For "n" = 2, one has "γ"2 = . This value is attained by the hexagonal lattice of the Eisenstein integers.
Estimates.
It is known that
formula_0
A stronger estimate due to Hans Frederick Blichfeldt is
formula_1
where formula_2 is the gamma function.
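As a quick numerical illustration, both upper bounds can be evaluated directly in Python; the function names below are our own, and math.gamma is the standard-library gamma function.
from math import gamma, pi

def bound_exponential(n):
    # (4/3)^((n-1)/2)
    return (4.0 / 3.0) ** ((n - 1) / 2)

def bound_blichfeldt(n):
    # (2/pi) * Gamma(2 + n/2)^(2/n)
    return (2.0 / pi) * gamma(2 + n / 2) ** (2.0 / n)

for n in range(1, 9):
    print(n, bound_exponential(n), bound_blichfeldt(n))
For n = 2 the first bound gives (4/3)^(1/2) ≈ 1.1547, which coincides with the value γ2 = 2/√3 attained by the hexagonal lattice.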
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma_n \\le \\left( \\frac 4 3 \\right)^\\frac{n-1}{2}."
},
{
"math_id": 1,
"text": "\\gamma_n \\le \\left( \\frac 2 \\pi \\right)\\Gamma\\left(2 + \\frac n 2\\right)^\\frac{2}{n},"
},
{
"math_id": 2,
"text": "\\Gamma(x)"
}
] | https://en.wikipedia.org/wiki?curid=12528854 |
12529188 | Weak value | Quantity in quantum mechanics
In quantum mechanics (and computation), a weak value is a quantity related to a shift of a measuring device's pointer when usually there is pre- and postselection. It should not be confused with a weak measurement, which is often defined in conjunction. The weak value was first defined by Yakir Aharonov, David Albert, and Lev Vaidman, published in Physical Review Letters 1988, and is related to the two-state vector formalism. There is also a way to obtain weak values without postselection.
Definition and Derivation.
There are many excellent review articles on weak values; here we briefly cover the basics.
Definition.
We will denote the initial state of a system as formula_0, while the final state of the system is denoted as formula_1. We will refer to the initial and final states of the system as the pre- and post-selected quantum mechanical states. With respect to these states, the "weak value" of the observable formula_2 is defined as:
formula_3
Notice that if formula_4 then the weak value is equal to the usual expected value in the initial state formula_5 or the final state formula_6. In general the weak value quantity is a complex number. The weak value of the observable becomes large when the post-selected state, formula_1, approaches being orthogonal to the pre-selected state, formula_0, i.e. formula_7. If formula_8 is larger than the largest eigenvalue of formula_2 or smaller than the smallest eigenvalue of formula_2 the weak value is said to be anomalous.
As an example consider a spin 1/2 particle. Take formula_2 to be the Pauli Z operator formula_9 with eigenvalues formula_10. Using the initial state
formula_11
and the final state
formula_12
we can calculate the weak value to be
formula_13
For formula_14 the weak value is anomalous.
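The spin-1/2 example above is easy to check numerically. The sketch below (using NumPy; the variable names are our own illustrative choices) evaluates the defining expression for an angle larger than π/2 and reproduces tan(α/2), a value outside the eigenvalue range ±1.
import numpy as np

def weak_value(A, psi_i, psi_f):
    # <psi_f| A |psi_i> / <psi_f|psi_i>
    return (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
alpha = 2.5                                   # larger than pi/2, so the value is anomalous
psi_i = np.array([np.cos(alpha / 2) + np.sin(alpha / 2),
                  np.cos(alpha / 2) - np.sin(alpha / 2)], dtype=complex) / np.sqrt(2)
psi_f = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(weak_value(sigma_z, psi_i, psi_f))      # approximately 3.01, i.e. tan(alpha/2)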
Derivation.
Here we follow the presentation given by Duck, Stevenson, and Sudarshan (with some notational updates from Kofman et al.), which makes explicit when the approximations used to derive the weak value are valid.
Consider a quantum system that you want to measure by coupling it to an ancillary (also quantum) measuring device. The observable to be measured on the system is formula_15. The system and ancilla are coupled via the Hamiltonian
formula_16
where the coupling constant is integrated over an interaction time formula_17 and formula_18 is the canonical commutator. The Hamiltonian generates the unitary
formula_19
Take the initial state of the ancilla to have a Gaussian distribution
formula_20
the position wavefunction of this state is
formula_21
The initial state of the system is given by formula_22 above; the state formula_23, jointly describing the initial state of the system and ancilla, is given then by:
formula_24
Next the system and ancilla interact via the unitary formula_25. After this one performs a projective measurement of the projectors formula_26 on the system. If we postselect (or condition) on getting the outcome formula_27, then the (unnormalized) final state of the meter is
formula_28
To arrive at this conclusion, we use the first order series expansion of formula_29 on line (I), and we require that
formula_30
On line (II) we use the approximation that formula_31 for small formula_32. This final approximation is only valid when
formula_33
As formula_34 is the generator of translations, the ancilla's wavefunction is now given by
formula_35
This is the original wavefunction, shifted by an amount formula_36. By Busch's theorem the system and meter wavefunctions are necessarily disturbed by the measurement. There is a certain sense in which the protocol that allows one to measure the weak value is minimally disturbing, but there is still disturbance.
Applications.
Quantum metrology and tomography.
At the end of the original weak value paper the authors suggested weak values could be used in quantum metrology:
<templatestyles src="Template:Quote_box/styles.css" />
Another striking aspect of this experiment becomes evident when we consider it as a device for measuring
a small gradient of the magnetic field ... yields a tremendous amplification.
Aharonov, Albert, Vaidman
This suggestion was followed by Hosten and Kwiat and later by Dixon et al. It appears to be an interesting line of research that could result in improved quantum sensing technology.
Additionally in 2011, weak measurements of many photons prepared in the same pure state, followed by strong measurements of a complementary variable, were used to perform quantum tomography (i.e. reconstruct the state in which the photons were prepared).
Quantum foundations.
Weak values have been used to examine some of the paradoxes in the foundations of quantum theory. This relies to a large extent on whether weak values are deemed to be relevant to describe properties of quantum systems, a point which is not obvious since weak values are generally different from eigenvalues. For example, the research group of Aephraim M. Steinberg at the University of Toronto confirmed Hardy's paradox experimentally using joint weak measurement of the locations of entangled pairs of photons. (also see)
Building on weak measurements, Howard M. Wiseman proposed a weak value measurement of the velocity of a quantum particle at a precise position, which he termed its "naïvely observable velocity". In 2010, a first experimental observation of trajectories of a photon in a double-slit interferometer was reported, which displayed the qualitative features predicted in 2001 by Partha Ghose for photons in the de Broglie-Bohm interpretation. Following up on Wiseman's weak velocity measurement, Johannes Fankhauser and Patrick Dürr suggest in a paper that weak velocity measurements constitute no new arguments, let alone empirical evidence, in favor of or against standard de Broglie-Bohm theory. According to the authors such measurements could not provide direct experimental evidence displaying the shape of particle trajectories, even if it is assumed that some deterministic particle trajectories exist.
Quantum computation.
Weak values have been applied in quantum computing to obtain large speed-ups in time complexity. In a paper, Arun Kumar Pati describes a new kind of quantum computer using weak value amplification and post-selection (WVAP), and implements a search algorithm which (given a successful post-selection) can find the target state in a single run with time complexity formula_37, beating the well-known Grover's algorithm.
Criticisms.
Criticisms of weak values include philosophical and practical criticisms. Some noted researchers such as Asher Peres, Tony Leggett, David Mermin, and Charles H. Bennett are critical of weak values.
Recently, it has been shown that the pre- and postselection of a quantum system recovers a completely hidden interference phenomenon in the measurement apparatus. Studying the interference pattern shows that what is interpreted as an amplification using the weak value is a pure phase effect and the weak value plays no role in its interpretation. This phase effect increases the degree of entanglement, which lies behind the effectiveness of the pre- and postselection in the parameter estimation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\psi_i\\rangle"
},
{
"math_id": 1,
"text": "|\\psi_f\\rangle"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": " A_w = \\frac{\\langle\\psi_f|A|\\psi_i\\rangle}{\\langle\\psi_f|\\psi_i\\rangle}."
},
{
"math_id": 4,
"text": "|\\psi_f\\rangle = |\\psi_i\\rangle"
},
{
"math_id": 5,
"text": "\\langle\\psi_i|A|\\psi_i\\rangle"
},
{
"math_id": 6,
"text": "\\langle\\psi_f|A|\\psi_f\\rangle"
},
{
"math_id": 7,
"text": "|\\langle\\psi_f|\\psi_i\\rangle| \\ll 1"
},
{
"math_id": 8,
"text": " A_w "
},
{
"math_id": 9,
"text": "A= \\sigma_z "
},
{
"math_id": 10,
"text": " \\pm 1"
},
{
"math_id": 11,
"text": " |\\psi_i\\rangle= \\frac{1}{\\sqrt{2}}\\begin{pmatrix}\\cos\\frac{\\alpha}{2}+\\sin\\frac{\\alpha}{2} \\\\ \\cos\\frac{\\alpha}{2}-\\sin\\frac{\\alpha}{2}\\end{pmatrix}"
},
{
"math_id": 12,
"text": " |\\psi_f\\rangle=\\frac{1}{\\sqrt{2}}\\begin{pmatrix}1 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 13,
"text": " A_w = (\\sigma_z)_w = \\tan\\frac{\\alpha}{2}."
},
{
"math_id": 14,
"text": "| \\alpha |>\\frac{\\pi}{2} "
},
{
"math_id": 15,
"text": " A "
},
{
"math_id": 16,
"text": "H = \\gamma A \\otimes p,"
},
{
"math_id": 17,
"text": " \\gamma = \\int_{t_i}^{t_f} g(t) dt \\ll 1 "
},
{
"math_id": 18,
"text": " [q, p] =i "
},
{
"math_id": 19,
"text": "U= \\exp[-i \\gamma A\\otimes p]."
},
{
"math_id": 20,
"text": "|\\Phi\\rangle = \\frac{1}{(2\\pi \\sigma^2)^{1/4}}\\int dq' \\exp[-q'^2/4\\sigma^2]|q'\\rangle,"
},
{
"math_id": 21,
"text": "\\Phi(q) =\\langle q|\\Phi\\rangle = \\frac{1}{(2\\pi \\sigma^2)^{1/4}} \\exp[-q^2/4\\sigma^2]."
},
{
"math_id": 22,
"text": " |\\psi_i\\rangle "
},
{
"math_id": 23,
"text": "|\\Psi\\rangle"
},
{
"math_id": 24,
"text": "|\\Psi\\rangle =|\\psi_i\\rangle \\otimes |\\Phi\\rangle."
},
{
"math_id": 25,
"text": "U |\\Psi\\rangle"
},
{
"math_id": 26,
"text": "\\{ |\\psi_f\\rangle\\langle \\psi_f |, I- |\\psi_f\\rangle\\langle \\psi_f |\\}"
},
{
"math_id": 27,
"text": " |\\psi_f\\rangle\\langle \\psi_f |"
},
{
"math_id": 28,
"text": "\\begin{align}\n|\\Phi_f \\rangle\n&= \\langle \\psi_f |U |\\psi_i\\rangle \\otimes |\\Phi\\rangle\\\\\n&\\approx \\langle \\psi_f |(I\\otimes I -i \\gamma A\\otimes p ) |\\psi_i\\rangle \\otimes|\\Phi\\rangle \\quad \\text{(I)}\\\\\n&= \\langle \\psi_f|\\psi_i\\rangle (1 -i \\gamma A_w p ) |\\Phi\\rangle\\\\\n&\\approx \\langle \\psi_f|\\psi_i\\rangle \\exp(-i \\gamma A_w p) |\\Phi\\rangle. \\quad \\text{(II)}\n\\end{align}"
},
{
"math_id": 29,
"text": "U"
},
{
"math_id": 30,
"text": "\\begin{align}\n\\frac{|\\gamma|}{\\sigma} \\left|\\frac{\\langle \\psi_f |A^n |\\psi_i \\rangle}{ \\langle \\psi_f| A |\\psi_i \\rangle }\\right|^{1/(n-1)} \\ll 1, \\quad (n = 2, 3, \\dots)\n\\end{align}"
},
{
"math_id": 31,
"text": "e^{-x}\\approx 1-x"
},
{
"math_id": 32,
"text": "x"
},
{
"math_id": 33,
"text": "|\\gamma A_w|/\\sigma \\ll 1."
},
{
"math_id": 34,
"text": " p "
},
{
"math_id": 35,
"text": "\\Phi_f(q) = \\Phi(q-\\gamma A_w)."
},
{
"math_id": 36,
"text": " \\gamma A_w "
},
{
"math_id": 37,
"text": "O(\\log N)"
}
] | https://en.wikipedia.org/wiki?curid=12529188 |
125297 | Dynamic programming | Problem optimization method
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have "optimal substructure".
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation.
Overview.
Mathematical optimization.
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time.
This is done by defining a sequence of value functions "V"1, "V"2, ..., "V""n" taking "y" as an argument representing the state of the system at times "i" from 1 to "n".
The definition of "V""n"("y") is the value obtained in state "y" at the last time "n".
The values "V""i" at earlier times "i" = "n" −1, "n" − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation.
For "i" = 2, ..., "n", "V""i"−1 at any state "y" is calculated from "V""i" by maximizing a simple function (usually the sum) of the gain from a decision at time "i" − 1 and the function "V""i" at the new state of the system if this decision is made.
Since "V""i" has already been calculated for the needed states, the above operation yields "V""i"−1 for those states.
Finally, "V"1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
Control theory.
In control theory, a typical problem is to find an admissible control formula_0 which causes the system formula_1 to follow an admissible trajectory formula_2 on a continuous time interval formula_3 that minimizes a cost function
formula_4
The solution to this problem is an optimal control law or policy formula_5, which produces an optimal trajectory formula_2 and a cost-to-go function formula_6. The latter obeys the fundamental equation of dynamic programming:
formula_7
a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which formula_8 and formula_9. One finds the minimizing formula_10 in terms of formula_11, formula_12, and the unknown function formula_13, and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition formula_14. In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship.
Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:
formula_15
at the formula_16-th stage of formula_17 equally spaced discrete time intervals, and where formula_18 and formula_19 denote discrete approximations to formula_20 and formula_21. This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.
Example from economics: Ramsey's problem of optimal saving.
In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate formula_22. A discrete approximation to the transition equation of capital is given by
formula_23
where formula_24 is consumption, formula_16 is capital, and formula_20 is a production function satisfying the Inada conditions. An initial capital stock formula_25 is assumed.
Let formula_26 be consumption in period t, and assume consumption yields utility formula_27 as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor b each period, where formula_28. Let formula_29 be capital in period t. Assume initial capital is a given amount formula_30, and suppose that this period's capital and consumption determine next period's capital as formula_31, where A is a positive constant and formula_32. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:
formula_33 subject to formula_34 for all formula_35
Written this way, the problem looks complicated, because it involves solving for all the choice variables formula_36. (The capital formula_37 is not a choice variable—the consumer's initial capital is taken as given.)
The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of "value functions" formula_38, for formula_39 which represent the value of having any amount of capital k at each time t. There is (by assumption) no utility from having capital after death, formula_40.
The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each formula_35, the Bellman equation is
formula_41 subject to formula_34
This problem is much simpler than the one we wrote down before, because it involves only two decision variables, formula_26 and formula_42. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time t, his current capital formula_29 is given, and he only needs to choose current consumption formula_26 and saving formula_42.
To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as k. formula_43 is already known, so using the Bellman equation once we can calculate formula_44, and so on until we get to formula_45, which is the "value" of the initial decision problem for the whole lifetime. In other words, once we know formula_46, we can calculate formula_47, which is the maximum of formula_48, where formula_49 is the choice variable and formula_50.
Working backwards, it can be shown that the value function at time formula_51 is
formula_52
where each formula_53 is a constant, and the optimal amount to consume at time formula_51 is
formula_54
which can be simplified to
formula_55
We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period T, the last period of life.
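This conclusion can also be checked numerically. The following Python sketch (all names are ours) performs the backward induction described in this section on a discrete capital grid; it is a rough approximation rather than the closed-form solution, but it reproduces the qualitative result that the consumed share of output grows with t and approaches 1 in the final period.
import math

def consumption_shares(A, a, b, k0, T, grid=400):
    # Rough numerical sketch of the backward induction described above:
    # V_{T+1}(k) = 0 and V_t(k) = max_c { ln(c) + b*V_{t+1}(A*k**a - c) },
    # solved on a discrete capital grid rather than in closed form.
    k_max = 1.5 * max(k0, A * k0 ** a)
    ks = [k_max * (i + 1) / grid for i in range(grid)]
    i0 = min(range(grid), key=lambda i: abs(ks[i] - k0))
    V_next = [0.0] * grid                        # V_{T+1} = 0: no utility after death
    shares = []                                  # optimal consumed share of output at k near k0
    for t in range(T, -1, -1):
        V_t = [0.0] * grid
        for i, k in enumerate(ks):
            output = A * k ** a
            best_val, best_c = -math.inf, 0.0
            for j, k_next in enumerate(ks):      # choose next period's capital
                c = output - k_next
                if c <= 0:
                    break                        # grid is increasing, no further feasible choices
                val = math.log(c) + b * V_next[j]
                if val > best_val:
                    best_val, best_c = val, c
            V_t[i] = best_val
            if i == i0:
                shares.append(best_c / output)
        V_next = V_t
    return list(reversed(shares))                # share consumed in periods t = 0, ..., T

# e.g. consumption_shares(A=1.0, a=0.5, b=0.9, k0=1.0, T=5) grows with t
# (up to discretization error) and is close to 1 in the final period.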
Computer science.
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to "non-overlapping" sub-problems, the strategy is called "divide and conquer" instead. This is why merge sort and quick sort are not classified as dynamic programming problems.
"Optimal substructure" means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph "G=(V,E)", the shortest path "p" from a vertex "u" to a vertex "v" exhibits optimal substructure: take any intermediate vertex "w" on this shortest path "p". If "p" is truly the shortest path, then it can be split into sub-paths "p1" from "u" to "w" and "p2" from "w" to "v" such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in "Introduction to Algorithms"). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does.
"Overlapping" sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence: "F""i" = "F""i"−1 + "F""i"−2, with base case "F"1 = "F"2 = 1. Then "F"43 = "F"42 + "F"41, and "F"42 = "F"41 + "F"40. Now "F"41 is being solved in the recursive sub-trees of both "F"43 as well as "F"42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once.
This can be achieved in either of two ways:
* "Top-down approach": the problem is broken into subproblems recursively, and the result of each subproblem is memoized (stored) the first time it is computed, so that it is computed only once.
* "Bottom-up approach": the subproblems are solved first, and their solutions are combined to build up solutions to successively larger subproblems.
Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as "call-by-need"). Some languages make it possible portably (e.g. Scheme, Common Lisp, Perl or D). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the "M." adverb. In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language.
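For example, in Python the standard-library decorator functools.lru_cache provides this kind of automatic memoization; the short sketch below (our own illustration) caches each Fibonacci value the first time it is computed.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each distinct argument is evaluated once; later calls are looked up in the cache
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

print(fib(43))   # computed in linear rather than exponential time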
Bioinformatics.
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in the US and by Georgii Gurskii and Alexander Zasedatelev in the Soviet Union. Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.
Examples: computer algorithms.
Dijkstra's algorithm for the shortest path problem.
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.
In fact, Dijkstra's explanation of the logic behind the algorithm, namely
<templatestyles src="Template:Blockquote/styles.css" />Problem 2. Find the path of minimum total length between two given nodes formula_56 and formula_57.
We use the fact that, if formula_58 is a node on the minimal path from formula_56 to formula_57, knowledge of the latter implies the knowledge of the minimal path from formula_56 to formula_58.
is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.
Fibonacci sequence.
Using dynamic programming in the calculation of the "n"th member of the Fibonacci sequence improves its performance greatly. Here is a naïve implementation, based directly on the mathematical definition:
function fib(n)
if n <= 1 return n
return fib(n − 1) + fib(n − 2)
Notice that if we call, say, codice_0, we produce a call tree that calls the function on the same value many different times:
In particular, codice_6 was calculated three times from scratch. In larger examples, many more values of codice_7, or "subproblems", are recalculated, leading to an exponential time algorithm.
Now, suppose we have a simple map object, "m", which maps each value of codice_7 that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O("n") time instead of exponential time (but requires O("n") space):
var m := map(0 → 0, 1 → 1)
function fib(n)
if "key n is not in "map m
m[n] := fib(n − 1) + fib(n − 2)
return m[n]
This technique of saving values that have already been calculated is called "memoization"; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
In the bottom-up approach, we calculate the smaller values of codice_7 first, then build larger values from them. This method also uses O("n") time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O("n") space to store the map.
function fib(n)
if n = 0
return 0
else
var previousFib := 0, currentFib := 1
repeat n − 1 times "// loop is skipped if n = 1"
var newFib := previousFib + currentFib
previousFib := currentFib
currentFib := newFib
return currentFib
In both examples, we only calculate codice_6 one time, and then use it to calculate both codice_11 and codice_12, instead of computing it every time either of them is evaluated.
A type of balanced 0–1 matrix.
Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n / 2 zeros and n / 2 ones. We ask how many different assignments there are for a given formula_17. For example, when n = 4, five possible solutions are
formula_59
There are at least three possible approaches: brute force, backtracking, and dynamic programming.
Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n / 2 zeros and n / 2 ones). As there are formula_60 possible assignments and formula_61 sensible assignments, this strategy is not practical except maybe up to formula_62.
Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least n / 2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical for n larger than six, since the number of solutions is already 116,963,796,250 for n = 8, as we shall see.
Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider k × n boards, where 1 ≤ k ≤ n, whose formula_16 rows contain formula_63 zeros and formula_63 ones. The function "f" to which memoization is applied maps vectors of "n" pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of formula_64 (formula_17 arguments or one vector of formula_17 elements). The process of subproblem creation involves iterating over every one of formula_65 possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the k × n board and recursively compute the number of solutions to the remaining (k − 1) × n board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a 1 × n board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of n / 2 formula_66 and n / 2 formula_67 pairs or not.
For example, in the first two boards shown above the sequences of vectors would be
The number of solutions (sequence in the OEIS) is
formula_68
Links to the MAPLE implementation of the dynamic programming approach may be found among the external links.
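For illustration only, a rough Python transcription of the memoized counting scheme described above might look as follows (the function and variable names are ours). It memoizes on the sorted tuple of per-column pairs, which is legitimate because permuting columns does not change the number of solutions.
from functools import lru_cache
from itertools import combinations

def count_balanced(n):
    # number of n x n 0-1 matrices with exactly n/2 zeros and n/2 ones
    # in every row and every column (n even)
    @lru_cache(maxsize=None)
    def f(cols):
        # cols: sorted tuple of (zeros still needed, ones still needed), one pair per column
        if all(z == 0 and o == 0 for z, o in cols):
            return 1
        total = 0
        # place the next row: choose which n/2 columns receive a one
        for one_positions in combinations(range(n), n // 2):
            new_cols = []
            feasible = True
            for j, (z, o) in enumerate(cols):
                z, o = (z, o - 1) if j in one_positions else (z - 1, o)
                if z < 0 or o < 0:
                    feasible = False
                    break
                new_cols.append((z, o))
            if feasible:
                total += f(tuple(sorted(new_cols)))
        return total

    return f(tuple((n // 2, n // 2) for _ in range(n)))

print(count_balanced(4))   # 90 admissible boards for n = 4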
Checkerboard.
Consider a checkerboard with "n" × "n" squares and a cost function codice_13 which returns a cost associated with square codice_14 (codice_15 being the row, codice_16 being the column). For instance (on a 5 × 5 checkerboard),
Thus codice_17
Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank; assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on codice_18 can move to codice_19, codice_20 or codice_21.
This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function codice_22 as
"q"("i", "j") = the minimum cost to reach square ("i", "j").
Starting at rank codice_23 and descending to rank codice_24, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank codice_23 and rank codice_24.
The function codice_22 is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus codice_13. For instance:
formula_69
Now, let us define codice_22 in somewhat more general terms:
formula_70
The first line of this equation deals with a board modeled as squares indexed on codice_24 at the lowest bound and codice_23 at the highest bound. The second line specifies what happens at the first rank; providing a base case. The third line, the recursion, is the important part. It represents the codice_32 terms in the example. From this definition we can derive straightforward recursive code for codice_33. In the following pseudocode, codice_23 is the size of the board, codice_13 is the cost function, and codice_36 returns the minimum of a number of values:
function minCost(i, j)
if j < 1 or j > n
return infinity
else if i = 1
return c(i, j)
else
return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j)
This function only computes the path cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits the overlapping sub-problems attribute. That is, it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array codice_37 rather than using a function. This avoids recomputation; all the values needed for array codice_37 are computed ahead of time only once. Precomputed values for codice_14 are simply looked up whenever needed.
We also need to know what the actual shortest path is. To do this, we use another array codice_40; a "predecessor array". This array records the path to any square codice_41. The predecessor of codice_41 is modeled as an offset relative to the index (in codice_37) of the precomputed path cost of codice_41. To reconstruct the complete path, we lookup the predecessor of codice_41, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following pseudocode:
function computeShortestPathArrays()
for x from 1 to n
q[1, x] := c(1, x)
for y from 1 to n
q[y, 0] := infinity
q[y, n + 1] := infinity
for y from 2 to n
for x from 1 to n
m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
q[y, x] := m + c(y, x)
if m = q[y-1, x-1]
p[y, x] := -1
else if m = q[y-1, x]
p[y, x] := 0
else
p[y, x] := 1
Now the rest is a simple matter of finding the minimum and printing it.
function computeShortestPath()
computeShortestPathArrays()
minIndex := 1
min := q[n, 1]
for i from 2 to n
if q[n, i] < min
minIndex := i
min := q[n, i]
printPath(n, minIndex)
function printPath(y, x)
print(x)
print("<-")
if y = 2
print(x + p[y, x])
else
printPath(y-1, x + p[y, x])
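The pseudocode above translates almost line for line into Python. The sketch below (our own transcription, with the cost function passed in as a parameter) fills the arrays q and p bottom-up and then walks the predecessor offsets back from the cheapest square on the last rank.
def shortest_checker_path(c, n):
    # c(y, x): cost of the square on rank y, file x (both 1-indexed)
    INF = float('inf')
    q = [[INF] * (n + 2) for _ in range(n + 1)]   # path costs, with sentinel columns 0 and n+1
    p = [[0] * (n + 2) for _ in range(n + 1)]     # predecessor offsets (-1, 0 or +1)
    for x in range(1, n + 1):
        q[1][x] = c(1, x)
    for y in range(2, n + 1):
        for x in range(1, n + 1):
            m = min(q[y - 1][x - 1], q[y - 1][x], q[y - 1][x + 1])
            q[y][x] = m + c(y, x)
            p[y][x] = -1 if m == q[y - 1][x - 1] else (0 if m == q[y - 1][x] else 1)
    x = min(range(1, n + 1), key=lambda i: q[n][i])   # cheapest square on the last rank
    cost, path = q[n][x], []
    for y in range(n, 0, -1):                         # walk the predecessors back
        path.append((y, x))
        x += p[y][x]
    return cost, list(reversed(path))

# example: uniform unit costs on a 5 x 5 board
print(shortest_checker_path(lambda y, x: 1, 5))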
Sequence alignment.
In genetics, sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost.
The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either:
* inserting the first character of B, and performing an optimal alignment of A and the tail of B,
* deleting the first character of A, and performing the optimal alignment of the tail of A and B, or
* replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B.
The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum.
Different variants exist, see Smith–Waterman algorithm and Needleman–Wunsch algorithm.
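As a concrete illustration of this tabulation, the following Python sketch computes a simple Levenshtein-style alignment cost with unit insertion, deletion and substitution costs; the costs and names are our own illustrative choices, and Smith–Waterman and Needleman–Wunsch use more elaborate scoring schemes.
def edit_distance(a, b, sub_cost=1, ins_cost=1, del_cost=1):
    # cost[i][j] = minimum cost of editing a[:i] into b[:j]
    m, n = len(a), len(b)
    cost = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        cost[i][0] = i * del_cost
    for j in range(1, n + 1):
        cost[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            replace = cost[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else sub_cost)
            insert = cost[i][j - 1] + ins_cost
            delete = cost[i - 1][j] + del_cost
            cost[i][j] = min(replace, insert, delete)
    return cost[m][n]

print(edit_distance("kitten", "sitting"))   # 3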
Tower of Hanoi puzzle.
The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
* Only one disk may be moved at a time.
* Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
* No disk may be placed on top of a smaller disk.
The dynamic programming solution consists of solving the functional equation
S(n,h,t) = S(n-1,h, not(h,t)) ; S(1,h,t) ; S(n-1,not(h,t),t)
where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and
S(n, h, t) := solution to a problem consisting of n disks that are to be moved from rod h to rod t.
For n=1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left).
The number of moves required by this solution is 2"n" − 1. If the objective is to maximize the number of moves (without cycling) then the dynamic programming functional equation is slightly more complicated and 3"n" − 1 moves are required.
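A direct transcription of this functional equation into Python might look as follows (a sketch with our own naming; the rods are labelled 1, 2, 3 and a solution is returned as a list of (from, to) moves).
def S(n, h, t):
    # solution to moving n disks from rod h to rod t, as a list of moves
    if n == 1:
        return [(h, t)]
    other = 6 - h - t                     # the third rod, i.e. not(h, t)
    return S(n - 1, h, other) + S(1, h, t) + S(n - 1, other, t)

moves = S(4, 1, 3)
print(len(moves))                         # 2**4 - 1 = 15 moves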
Egg dropping puzzle.
The following is a description of the instance of this famous puzzle involving N=2 eggs and a building with H=36 floors:
Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing (using U.S. English terminology, in which the first floor is at ground level). We make a few assumptions:
* An egg that survives a fall can be used again.
* A broken egg must be discarded.
* The effect of a fall is the same for all eggs.
* If an egg breaks when dropped, then it would break if dropped from a higher window.
* If an egg survives a fall, then it would survive a shorter fall.
* It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive the 36th-floor windows.
If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the lowest number of egg-droppings that is guaranteed to work in all cases?
To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where
"n" = number of test eggs available, "n" = 0, 1, 2, 3, ..., "N" − 1.
"k" = number of (consecutive) floors yet to be tested, "k" = 0, 1, 2, ..., "H" − 1.
For instance, "s" = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is "s" = ("N","H") where "N" denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs ("n" = 0) or when "k" = 0, whichever occurs first. If termination occurs at state "s" = (0,"k") and "k" > 0, then the test failed.
Now, let
"W"("n","k") = minimum number of trials required to identify the value of the critical floor under the worst-case scenario given that the process is in state "s" = ("n","k").
Then it can be shown that
"W"("n","k") = 1 + min{max("W"("n" − 1, "x" − 1), "W"("n","k" − "x")): "x" = 1, 2, ..., "k" }
with "W"("n",0) = 0 for all "n" > 0 and "W"(1,"k") = "k" for all "k". It is easy to solve this equation iteratively by systematically increasing the values of "n" and "k".
Faster DP solution using a different parametrization.
Notice that the above DP solution takes formula_71 time. This can be improved to formula_72 time by binary searching on the optimal formula_73 in the above recurrence, since formula_74 is increasing in formula_73 while formula_75 is decreasing in formula_73, so a local minimum of formula_76 is a global minimum. Also, by storing the optimal formula_73 for each cell in the DP table and referring to its value for the previous cell, the optimal formula_73 for each cell can be found in constant time, improving it to formula_77 time. However, there is an even faster solution that involves a different parametrization of the problem:
Let formula_16 be the total number of floors such that the eggs break when dropped from the formula_16th floor (The example above is equivalent to taking formula_78).
Let formula_79 be the minimum floor from which the egg must be dropped to be broken.
Let formula_80 be the maximum number of values of formula_79 that are distinguishable using formula_11 tries and formula_17 eggs.
Then formula_81 for all formula_82.
Let formula_83 be the floor from which the first egg is dropped in the optimal strategy.
If the first egg broke, formula_79 is from formula_84 to formula_83 and distinguishable using at most formula_85 tries and formula_86 eggs.
If the first egg did not break, formula_79 is from formula_87 to formula_16 and distinguishable using formula_85 tries and formula_17 eggs.
Therefore, formula_88.
Then the problem is equivalent to finding the minimum formula_73 such that formula_89.
To do so, we could compute formula_90 in order of increasing formula_11, which would take formula_91 time.
Thus, if we separately handle the case of formula_92, the algorithm would take formula_93 time.
But the recurrence relation can in fact be solved, giving formula_94, which can be computed in formula_95 time using the identity formula_96 for all formula_97.
Since formula_98 for all formula_99, we can binary search on formula_11 to find formula_73, giving an formula_100 algorithm.
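Assuming the usual closed form for this recurrence, namely that the number of floors that can be handled with t tries and n eggs is the sum of the binomial coefficients C(t, i) for i = 1, ..., n (the parametrization in the text may differ by a unit offset, depending on whether the outcome in which no egg breaks is counted), a binary search over t might look as follows in Python (names ours).
from math import comb

def f(t, n):
    # number of floors that can be handled with t tries and n eggs
    # (closed form assumed here: sum of binomial coefficients C(t, i), i = 1..n)
    return sum(comb(t, i) for i in range(1, n + 1))

def min_tries(n_eggs, floors):
    lo, hi = 0, floors                   # binary search for the smallest t with f(t, n) >= floors
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid, n_eggs) >= floors:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_tries(2, 36))    # 8, as in the two-egg, 36-floor instance above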
Matrix chain multiplication.
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply the chain of matrices A1 × A2 × ... × An. Matrix multiplication is not commutative, but is associative; and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example:
((A1 × A2) × A3) × ... An
A1×(((A2×A3)× ... ) × An)
(A1 × A2) × (A3 × ... An)
and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result, however they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration).
For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in the two ways shown below:
* A×(B×C): computing B×C takes n×p×s scalar multiplications, and multiplying A by the result takes another m×n×s, for a total of n×p×s + m×n×s scalar multiplications.
* (A×B)×C: computing A×B takes m×n×p scalar multiplications, and multiplying the result by C takes another m×p×s, for a total of m×n×p + m×p×s scalar multiplications.
Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000+100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parenthesis.
Therefore, our conclusion is that the order of parenthesis matters, and that our task is to find the optimal order of parenthesis.
At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parenthesis. The dynamic programming solution is presented below.
Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. Ai × ... × Aj, i.e. i<=j). We split the chain at some matrix k, such that i <= k < j, and try to find out which combination produces minimum m[i,j].
The formula is:
if i = j, m[i,j]= 0
if i < j, m[i,j]= min over all possible values of k (m[i,k]+m[k+1,j] + formula_101)
where "k" ranges from "i" to "j" − 1.
This formula can be coded as shown below, where input parameter "chain" is the chain of matrices, i.e. A1, A2, ... An:
function OptimalMatrixChainParenthesis(chain)
n = length(chain)
for i = 1, n
m[i,i] = 0 "// Since it takes no calculations to multiply one matrix"
for len = 2, n
for i = 1, n - len + 1
j = i + len -1
m[i,j] = infinity "// So that the first calculation updates"
for k = i, j-1
q = m[i, k] + m[k+1, j] + formula_101
if q < m[i, j] "// The new order of parentheses is better than what we had"
m[i, j] = q "// Update"
s[i, j] = k "// Record which k to split on, i.e. where to place the parenthesis"
So far, we have calculated values for all possible "m"["i", "j"], the minimum number of calculations to multiply a chain from matrix "i" to matrix "j", and we have recorded the corresponding "split point" "s"["i", "j"]. For example, if we are multiplying chain A1×A2×A3×A4, and it turns out that "m"[1, 3] = 100 and "s"[1, 3] = 2, that means that the optimal placement of parentheses for matrices 1 to 3 is (A1×A2)×A3, and multiplying those matrices will require 100 scalar calculations.
This algorithm will produce "tables" "m"[, ] and "s"[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n]. Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices.
Therefore, the next step is to actually split the chain, i.e. to place the parenthesis where they (optimally) belong. For this purpose we could use the following algorithm:
function PrintOptimalParenthesis(s, i, j)
if i = j
print "A"i
else
print "("
PrintOptimalParenthesis(s, i, s[i, j])
PrintOptimalParenthesis(s, s[i, j] + 1, j)
print ")"
Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like.
To actually multiply the matrices using the proper splits, we need the following algorithm:
function MatrixChainMultiply(chain from 1 to n) // returns the final matrix, i.e. A1×A2×... ×An
OptimalMatrixChainParenthesis(chain from 1 to n) // this will produce s[ . ] and m[ . ] "tables"
OptimalMatrixMultiplication(s, chain from 1 to n) // actually multiply
function OptimalMatrixMultiplication(s, i, j) // returns the result of multiplying a chain of matrices from Ai to Aj in optimal way
if i < j
// keep on splitting the chain and multiplying the matrices in left and right sides
LeftSide = OptimalMatrixMultiplication(s, i, s[i, j])
RightSide = OptimalMatrixMultiplication(s, s[i, j] + 1, j)
return MatrixMultiply(LeftSide, RightSide)
else if i = j
return Ai // matrix at position i
else
print "error, i <= j must hold"
function MatrixMultiply(A, B) // function that multiplies two matrices
if columns(A) = rows(B)
for i = 1, rows(A)
for j = 1, columns(B)
C[i, j] = 0
for k = 1, columns(A)
C[i, j] = C[i, j] + A[i, k]*B[k, j]
return C
else
print "error, incompatible dimensions."
History of the name.
The term "dynamic programming" was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions, and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form.
Bellman explains the reasoning behind the term "dynamic programming" in his autobiography, "Eye of the Hurricane: An Autobiography":
<templatestyles src="Template:Blockquote/styles.css" />I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, "Where did the name, dynamic programming, come from?" The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word "research". I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word "programming". I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.
The word "dynamic" was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive. The word "programming" referred to the use of the method to find an optimal "program", in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases "linear programming" and "mathematical programming", a synonym for mathematical optimization.
The above explanation of the origin of the term may be inaccurate: According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{u}^{\\ast}"
},
{
"math_id": 1,
"text": "\\dot{\\mathbf{x}}(t) = \\mathbf{g} \\left( \\mathbf{x}(t), \\mathbf{u}(t), t \\right)"
},
{
"math_id": 2,
"text": "\\mathbf{x}^{\\ast}"
},
{
"math_id": 3,
"text": "t_{0} \\leq t \\leq t_{1}"
},
{
"math_id": 4,
"text": "J = b \\left( \\mathbf{x}(t_{1}), t_{1} \\right) + \\int_{t_{0}}^{t_{1}} f \\left( \\mathbf{x}(t), \\mathbf{u}(t), t \\right) \\mathrm{d} t"
},
{
"math_id": 5,
"text": "\\mathbf{u}^{\\ast} = h(\\mathbf{x}(t), t)"
},
{
"math_id": 6,
"text": "J^{\\ast}"
},
{
"math_id": 7,
"text": "- J_{t}^{\\ast} = \\min_{\\mathbf{u}} \\left\\{ f \\left( \\mathbf{x}(t), \\mathbf{u}(t), t \\right) + J_{x}^{\\ast \\mathsf{T}} \\mathbf{g} \\left( \\mathbf{x}(t), \\mathbf{u}(t), t \\right) \\right\\}"
},
{
"math_id": 8,
"text": "J_{x}^{\\ast} = \\frac{\\partial J^{\\ast}}{\\partial \\mathbf{x}} = \\left[ \\frac{\\partial J^{\\ast}}{\\partial x_{1}} ~~~~ \\frac{\\partial J^{\\ast}}{\\partial x_{2}} ~~~~ \\dots ~~~~ \\frac{\\partial J^{\\ast}}{\\partial x_{n}} \\right]^{\\mathsf{T}}"
},
{
"math_id": 9,
"text": "J_{t}^{\\ast} = \\frac{\\partial J^{\\ast}}{\\partial t}"
},
{
"math_id": 10,
"text": "\\mathbf{u}"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\\mathbf{x}"
},
{
"math_id": 13,
"text": "J_{x}^{\\ast}"
},
{
"math_id": 14,
"text": "J \\left( t_{1} \\right) = b \\left( \\mathbf{x}(t_{1}), t_{1} \\right)"
},
{
"math_id": 15,
"text": "J_{k}^{\\ast} \\left( \\mathbf{x}_{n-k} \\right) = \\min_{\\mathbf{u}_{n-k}} \\left\\{ \\hat{f} \\left( \\mathbf{x}_{n-k}, \\mathbf{u}_{n-k} \\right) + J_{k-1}^{\\ast} \\left( \\hat{\\mathbf{g}} \\left( \\mathbf{x}_{n-k}, \\mathbf{u}_{n-k} \\right) \\right) \\right\\}"
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "\\hat{f}"
},
{
"math_id": 19,
"text": "\\hat{\\mathbf{g}}"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "\\mathbf{g}"
},
{
"math_id": 22,
"text": "\\beta \\in (0,1)"
},
{
"math_id": 23,
"text": "k_{t+1} = \\hat{g} \\left( k_{t}, c_{t} \\right) = f(k_{t}) - c_{t}"
},
{
"math_id": 24,
"text": "c"
},
{
"math_id": 25,
"text": "k_{0} > 0"
},
{
"math_id": 26,
"text": "c_t"
},
{
"math_id": 27,
"text": "u(c_t)=\\ln(c_t)"
},
{
"math_id": 28,
"text": "0<b<1"
},
{
"math_id": 29,
"text": "k_t"
},
{
"math_id": 30,
"text": "k_0>0"
},
{
"math_id": 31,
"text": "k_{t+1}=Ak^a_t - c_t"
},
{
"math_id": 32,
"text": "0<a<1"
},
{
"math_id": 33,
"text": "\\max \\sum_{t=0}^T b^t \\ln(c_t)"
},
{
"math_id": 34,
"text": "k_{t+1}=Ak^a_t - c_t \\geq 0"
},
{
"math_id": 35,
"text": "t=0,1,2,\\ldots,T"
},
{
"math_id": 36,
"text": "c_0, c_1, c_2, \\ldots , c_T"
},
{
"math_id": 37,
"text": "k_0"
},
{
"math_id": 38,
"text": "V_t(k)"
},
{
"math_id": 39,
"text": "t=0,1,2,\\ldots,T,T+1"
},
{
"math_id": 40,
"text": "V_{T+1}(k)=0"
},
{
"math_id": 41,
"text": "V_t(k_t) \\, = \\, \\max \\left( \\ln(c_t) + b V_{t+1}(k_{t+1}) \\right)"
},
{
"math_id": 42,
"text": "k_{t+1}"
},
{
"math_id": 43,
"text": "V_{T+1}(k)"
},
{
"math_id": 44,
"text": "V_T(k)"
},
{
"math_id": 45,
"text": "V_0(k)"
},
{
"math_id": 46,
"text": "V_{T-j+1}(k)"
},
{
"math_id": 47,
"text": "V_{T-j}(k)"
},
{
"math_id": 48,
"text": "\\ln(c_{T-j}) + b V_{T-j+1}(Ak^a-c_{T-j})"
},
{
"math_id": 49,
"text": "c_{T-j}"
},
{
"math_id": 50,
"text": "Ak^a-c_{T-j} \\ge 0"
},
{
"math_id": 51,
"text": "t=T-j"
},
{
"math_id": 52,
"text": "V_{T-j}(k) \\, = \\, a \\sum_{i=0}^j a^ib^i \\ln k + v_{T-j}"
},
{
"math_id": 53,
"text": "v_{T-j}"
},
{
"math_id": 54,
"text": "c_{T-j}(k) \\, = \\, \\frac{1}{\\sum_{i=0}^j a^ib^i} Ak^a"
},
{
"math_id": 55,
"text": "\\begin{align}\nc_{T}(k) & = Ak^a\\\\\nc_{T-1}(k) & = \\frac{Ak^a}{1+ab}\\\\\nc_{T-2}(k) & = \\frac{Ak^a}{1+ab+a^2b^2}\\\\\n&\\dots\\\\\nc_2(k) & = \\frac{Ak^a}{1+ab+a^2b^2+\\ldots+a^{T-2}b^{T-2}}\\\\\nc_1(k) & = \\frac{Ak^a}{1+ab+a^2b^2+\\ldots+a^{T-2}b^{T-2}+a^{T-1}b^{T-1}}\\\\\nc_0(k) & = \\frac{Ak^a}{1+ab+a^2b^2+\\ldots+a^{T-2}b^{T-2}+a^{T-1}b^{T-1}+a^Tb^T}\n\\end{align}"
},
{
"math_id": 56,
"text": "P"
},
{
"math_id": 57,
"text": "Q"
},
{
"math_id": 58,
"text": "R"
},
{
"math_id": 59,
"text": "\\begin{bmatrix}\n0 & 1 & 0 & 1 \\\\\n1 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 1 \\\\\n1 & 0 & 1 & 0\n\\end{bmatrix} \\text{ and } \\begin{bmatrix}\n0 & 0 & 1 & 1 \\\\\n0 & 0 & 1 & 1 \\\\\n1 & 1 & 0 & 0 \\\\\n1 & 1 & 0 & 0\n\\end{bmatrix} \\text{ and } \\begin{bmatrix}\n1 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 1 \\\\\n1 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 1\n\\end{bmatrix} \\text{ and } \\begin{bmatrix}\n1 & 0 & 0 & 1 \\\\\n0 & 1 & 1 & 0 \\\\\n0 & 1 & 1 & 0 \\\\\n1 & 0 & 0 & 1\n\\end{bmatrix} \\text{ and } \\begin{bmatrix}\n1 & 1 & 0 & 0 \\\\\n1 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 1 \\\\\n0 & 0 & 1 & 1\n\\end{bmatrix}."
},
{
"math_id": 60,
"text": "2^{n^2}"
},
{
"math_id": 61,
"text": "\\tbinom{n}{n/2}^n"
},
{
"math_id": 62,
"text": "n=6"
},
{
"math_id": 63,
"text": "n/2"
},
{
"math_id": 64,
"text": " f((n/2, n/2), (n/2, n/2), \\ldots (n/2, n/2)) "
},
{
"math_id": 65,
"text": "\\tbinom{n}{n/2}"
},
{
"math_id": 66,
"text": "(0, 1)"
},
{
"math_id": 67,
"text": "(1, 0)"
},
{
"math_id": 68,
"text": " 1,\\, 2,\\, 90,\\, 297200,\\, 116963796250,\\, 6736218287430460752, \\ldots "
},
{
"math_id": 69,
"text": "q(A) = \\min(q(B),q(C),q(D))+c(A) \\, "
},
{
"math_id": 70,
"text": "q(i,j)=\\begin{cases} \\infty & j < 1 \\text{ or }j > n \\\\ c(i, j) & i = 1 \\\\ \\min(q(i-1, j-1), q(i-1, j), q(i-1, j+1)) + c(i,j) & \\text{otherwise.}\\end{cases}"
},
{
"math_id": 71,
"text": "O( n k^2 )"
},
{
"math_id": 72,
"text": "O( n k \\log k )"
},
{
"math_id": 73,
"text": "x"
},
{
"math_id": 74,
"text": "W(n-1,x-1)"
},
{
"math_id": 75,
"text": "W(n,k-x)"
},
{
"math_id": 76,
"text": "\\max(W(n-1,x-1),W(n,k-x))"
},
{
"math_id": 77,
"text": "O( n k )"
},
{
"math_id": 78,
"text": "k=37"
},
{
"math_id": 79,
"text": "m"
},
{
"math_id": 80,
"text": "f(t,n)"
},
{
"math_id": 81,
"text": "f(t,0) = f(0,n) = 1"
},
{
"math_id": 82,
"text": "t,n \\geq 0"
},
{
"math_id": 83,
"text": "a"
},
{
"math_id": 84,
"text": "1"
},
{
"math_id": 85,
"text": "t-1"
},
{
"math_id": 86,
"text": "n-1"
},
{
"math_id": 87,
"text": "a+1"
},
{
"math_id": 88,
"text": "f(t,n) = f(t-1,n-1) + f(t-1,n)"
},
{
"math_id": 89,
"text": "f(x,n) \\geq k"
},
{
"math_id": 90,
"text": "\\{ f(t,i) : 0 \\leq i \\leq n \\}"
},
{
"math_id": 91,
"text": "O( n x )"
},
{
"math_id": 92,
"text": "n=1"
},
{
"math_id": 93,
"text": "O( n \\sqrt{k} )"
},
{
"math_id": 94,
"text": "f(t,n) = \\sum_{i=0}^{n}{ \\binom{t}{i} }"
},
{
"math_id": 95,
"text": "O(n)"
},
{
"math_id": 96,
"text": "\\binom{t}{i+1} = \\binom{t}{i} \\frac{t-i}{i+1}"
},
{
"math_id": 97,
"text": "i \\geq 0"
},
{
"math_id": 98,
"text": "f(t,n) \\leq f(t+1,n)"
},
{
"math_id": 99,
"text": "t \\geq 0"
},
{
"math_id": 100,
"text": "O( n \\log k )"
},
{
"math_id": 101,
"text": "p_{i-1}*p_k*p_j"
}
] | https://en.wikipedia.org/wiki?curid=125297 |
1252991 | Orbital hybridisation | Mixing (superposition) of atomic orbitals in chemistry
In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new "hybrid orbitals" (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies.
History and uses.
Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C-H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called "hybrid" orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C-H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures.
Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity.
Overview.
Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen.
Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as "s-p-three") hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form formula_0, where N is a normalisation constant (here 1/2) and pσ is a p orbital directed along the C-H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is formula_1 in this example. Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character or the weight of the p component is N2λ2 = 3/4.
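The s and p weights of a general hybrid N(s + λpσ) follow directly from normalisation: N = 1/√(1 + λ2), so the s character is N2 and the p character is N2λ2. The following minimal Python sketch of this arithmetic is illustrative only (the function name is not standard nomenclature):

import math

def hybrid_composition(lam):
    # Hybrid of the form N(s + lam * p_sigma); normalisation gives N = 1/sqrt(1 + lam^2).
    N = 1.0 / math.sqrt(1.0 + lam**2)
    s_character = N**2            # weight of the s component
    p_character = (N * lam)**2    # weight of the p component
    return N, s_character, p_character

# sp3 hybrid in methane: lam = sqrt(3) gives N = 1/2, 25% s character, 75% p character.
print(hybrid_composition(math.sqrt(3)))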
Types of hybridisation.
sp3.
Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms.
Carbon's ground state configuration is 1s2 2s2 2p2.
This configuration suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H-C-H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation.
The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals.
The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds.
Quantum mechanically, the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids.
In CH4, four sp3 hybrid orbitals are overlapped by hydrogen 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength.
sp2.
Other carbon compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons.
For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised.
This mixing forms a total of three sp2 orbitals with one remaining p orbital. In ethene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data.
sp.
The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model, the 2s orbital is mixed with only one of the three p orbitals,
resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
Hybridisation and molecule shape.
Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories.
spx hybridisation.
As the valence orbitals of main group elements are the one s and three p orbitals with the corresponding octet rule, spx hybridization is used to model the shape of these molecules.
spxdy hybridisation.
As the valence orbitals of transition metals are the five d, one s and three p orbitals with the corresponding 18-electron rule, spxdy hybridisation is used to model the shape of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridization due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons.
sdx hybridisation.
In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules.
Hybridisation of hypervalent molecules.
Octet expansion.
In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations.
In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding.
Resonance.
In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule.
Hybridisation in computational VB theory.
While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined "a priori" but is instead variationally optimized to find the lowest energy solution and then reported. This means that all artificial constraints, specifically two constraints, on orbital hybridisation are lifted:
This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values.
Isovalent hybridisation.
Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described.
The hybridization of bond orbitals is determined by Bent's rule: "Atomic character concentrates in orbitals directed towards electropositive substituents".
For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character and does "not" imply that a hybrid orbital is formed from one s and four p orbitals on oxygen since the 2p subshell of oxygen only contains three p orbitals.
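The interorbital angle quoted here can be checked with Coulson's orthogonality relation for two equivalent hybrids, 1 + λiλj cos θij = 0, a standard result that is not derived in this article. A short Python sketch assuming that relation:

import math

def interorbital_angle(lam_sq):
    # Coulson orthogonality relation for two equivalent sp^(lam_sq) hybrids:
    # 1 + lam_sq * cos(theta) = 0  =>  theta = arccos(-1/lam_sq)
    return math.degrees(math.acos(-1.0 / lam_sq))

print(interorbital_angle(3.0))   # sp3: about 109.47 degrees
print(interorbital_angle(4.0))   # sp4 bonding hybrids of water: about 104.48 degrees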
Hybridisation defects.
Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals, which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3, consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg.
However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane.
Photoelectron spectra.
One misconception concerning orbital hybridization is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state would be ionization energy, which yields two values in agreement with experimental results.
Localized vs canonical molecular orbitals.
Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is, therefore "equivalent" to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value.
Two localized representations.
Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals.
For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N(s + \\sqrt 3 p\\sigma)"
},
{
"math_id": 1,
"text": "\\color{blue}\\sqrt 3"
}
] | https://en.wikipedia.org/wiki?curid=1252991 |
12531023 | Gray-footed chipmunk | Species of rodent in the family Sciuridae
<templatestyles src="Template:Taxobox/core/styles.css" />
The gray-footed chipmunk (Neotamias canipes) is a terrestrial, forest-dwelling species of chipmunk, a rodent in the family Sciuridae. It is endemic to New Mexico and to the Sierra Diablo and Guadalupe Mountains in the Trans-Pecos region of Texas in the United States. Its natural habitat is coniferous forest. First discovered in 1902, it is distinguished by the unique gray dorsal colouring on the hind feet, hence the common name. The species shows sexual dimorphism, with the female larger than the male.
Taxonomy.
The species name comes from the Latin "cantitia", meaning "gray in color", and "pes", meaning "foot", and the genus name comes from the Greek word "Tamias", meaning "a distributor".
"N. canipes" was formerly considered a subspecies of "N. cinereicollis", the gray-collared chipmunk, but it was brought to species status in 1960.
There are two distinguished subspecies of "N. canipes":
Description.
The gray-footed chipmunk is commonly distinguished by the dorsal gray colouration of the hind feet, hence the common name. The pelage is grayish; the lateral sides of the head are marked with five brown and four white stripes. Only three brown and two white stripes are found on the lateral sides of the body; there are also black or brown stripes on the back. The dorsal face of the tail is coloured black, and the underside reddish brown. The abdominal region of the pelage is white. They are bilaterally symmetric. Winter pelage is similar to summer pelage, except for more gray colour on the dorsal surface and a paler tone on the lateral sides.
Gray-footed chipmunks exhibit sexual dimorphism: the female is larger than the male. This is common among many species of chipmunks.
The gray-footed chipmunk's physical characteristics vary between mountain ranges. Its weight is commonly between 65 and 75 g (2.3 to 2.65 oz). In the Sacramento Mountains, the total length varies between 227 and 264 mm (8.9 to 10.4 in), hind foot length between 34 and 36 mm, and tail length between 91 and 108 mm (3.6 to 4.25 in). However, in the Guadalupe Mountains of Texas and the White Mountains of New Mexico, total length varies between 210 and 250 mm (8.25 to 9.4 in), hind foot length between 32 and 35 mm (1.25 to 1.4 in), and tail length between 92 and 115 mm (3.6 to 4.5 in).
The dental formula of the gray-footed chipmunk is formula_0, meaning they have two incisors, no canines, four premolars, and six molars for the upper teeth. Lower teeth are identical except for only having two premolars. They have 22 teeth in total.
The chipmunk has a karyotype of "A".
Distribution and habitat.
Gray-footed chipmunks are native to the southeastern mountain ranges of New Mexico, such as the Sacramento, Gallinas, and Jicarilla. They are also native to mountain ranges in Texas, such as the Guadalupe Mountains in the Trans-Pecos region. The chipmunks are found primarily at elevations of 1,600 m to 3,600 m (5,250 ft to 11,800 ft), but have been recorded at lower elevations. The type locality is in the Guadalupe Mountains, 2,133 m (7,000 ft) above sea level.
The preferred habitat of the gray-footed chipmunk are coniferous forests with an abundance of pines and firs, pinyon-juniper woodlands, and rocky hillsides. For nesting and to avoid predation, they prefer areas with an abundance of fallen trees and rock crevices.
The size of the gray-footed chipmunk territory has not been reported, but few species of chipmunks have territories exceeding one hectare (2.47 acres). It is suspected that the territory of the gray-footed chipmunk ranges from 0.2 to 4.0 hectares (0.5 to 9.9 acres). It is likely that the chipmunks are territorial and somewhat sedentary.
Global populations of the chipmunk are unknown, however it is thought that the number most likely exceeds 100,000 individuals.
Diet.
Gray-footed chipmunks are omnivorous, their diet consisting of acorns, seeds of Douglas fir, gooseberries, mushrooms, juniper berries, and insects. In late summer and autumn, gray-footed chipmunks consume primarily acorns for hibernation, but do not usually gain weight. Instead, they rely on caches of acorns and other seeds (Abies, Picea, Pinus) to survive the winter.
Predation.
Many carnivores, including northern goshawks and other raptors, prey on the gray-footed chipmunk. When threatened, the chipmunk seeks protection in rock crevices and burrows; the colour of its fur provides some camouflage. They have been found to climb trees to seek protection.
Breeding.
Females deliver a litter of around four young annually, from mid-May through August. Little else is known of their breeding habits; however, they are most likely similar to those of other species of "Neotamias". It is likely that they breed polygynously. After emerging from hibernation, females undergo estrus in the spring. Gestation usually lasts for one month. Lactation typically lasts one to two months; however, this varies across "Neotamias" species. Males do not provide parental care; the female raises her young in a burrow or nest until they can survive on their own. In late April, young typically gain their independence and are capable of breeding the following year. Young are typically mature in early autumn.
Behaviour.
The gray-footed chipmunk is diurnal and is mostly active shortly after dawn, when it feeds and forages. The chipmunk is primarily terrestrial and is generally found among rock crevices and fallen logs. It tends to seek protection among rock crevices and thick brush, but has also been found to climb trees.
The chipmunks will engage in torpor during the winter, but unlike most hibernating mammals, they do not gain extra weight to survive the winter and rely on caches of acorns and other seeds to sustain themselves. They are heterothermic endotherms, meaning that body temperature decreases in the winter during hibernation, and body temperature increases during the summer. They are also homeothermic endotherms due to the relative stability of their body temperatures during the winter hibernation and the summer activity.
Communication.
Communication between individual gray-footed chipmunks is achieved through chipping sounds. These sounds are described as a "chipper" or a "chuck-chuck-chuck". They remain silent when threatened, but they do have an alarm call, a higher-pitched "chipper" sound with short intervals between vocalization peaks. Not much is known about the chipmunk's body communication, but other members of "Neotamias" communicate via tail and body position.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1.0.2.3}{1.0.1.3}"
}
] | https://en.wikipedia.org/wiki?curid=12531023 |
12531229 | Autoregressive fractionally integrated moving average | In statistics, autoregressive fractionally integrated moving average models are time series models that generalize ARIMA ("autoregressive integrated moving average") models by allowing non-integer values of the differencing parameter. These models are useful in modeling time series with long memory—that is, in which deviations from the long-run mean decay more slowly than an exponential decay. The acronyms "ARFIMA" or "FARIMA" are often used, although it is also conventional to simply extend the "ARIMA("p", "d", "q")" notation for models, by simply allowing the order of differencing, "d", to take fractional values. Fractional differencing and the ARFIMA model were introduced in the early 1980s by Clive Granger, Roselyne Joyeux, and Jonathan Hosking.
Basics.
In an ARIMA model, the "integrated" part of the model includes the differencing operator (1 − "B") (where "B" is the backshift operator) raised to an integer power. For example,
formula_0
where
formula_1
so that
formula_2
In a "fractional" model, the power is allowed to be fractional, with the meaning of the term identified using the following formal binomial series expansion
formula_3
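For illustration, the coefficients of this expansion can be generated from the recurrence π0 = 1, πk = πk−1(k − 1 − d)/k. A minimal Python sketch (the function names are illustrative) that computes the truncated weights and applies them to a finite series:

import numpy as np

def frac_diff_weights(d, n_terms):
    # Coefficients of (1 - B)^d from the binomial expansion, truncated at n_terms.
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    # Apply (1 - B)^d to a series x, truncating the expansion at the sample length.
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

print(frac_diff_weights(0.4, 5))   # 1, -0.4, -0.12, -0.064, -0.0416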
ARFIMA(0, "d", 0).
The simplest autoregressive fractionally integrated model, ARFIMA(0, "d", 0), is, in standard notation,
formula_4
where this has the interpretation
formula_5
ARFIMA(0, "d", 0) is similar to fractional Gaussian noise (fGn): with "d" = "H" − 1⁄2, their covariances have the same power-law decay. The advantage of fGn over ARFIMA(0,"d",0) is that many asymptotic relations hold for finite samples. The advantage of ARFIMA(0,"d",0) over fGn is that it has an especially simple spectral density—
formula_6
—and it is a particular case of ARFIMA("p", "d", "q"), which is a versatile family of models.
General form: ARFIMA("p", "d", "q").
An ARFIMA model shares the same form of representation as the ARIMA("p", "d", "q") process, specifically:
formula_7
In contrast to the ordinary ARIMA process, the "difference parameter", "d", is allowed to take non-integer values.
Enhancement to ordinary ARMA models.
The enhancement to ordinary ARMA models is as follows:
The point of the pre-filtering is to reduce the low frequencies in the data set that can cause non-stationarities, which ARMA models cannot handle well (or at all), but only by enough that the reductions can be recovered after the model is built.
Fractional differencing and the inverse operation fractional integration (both directions being used in the ARFIMA modeling and forecasting process) can be thought of as digital filtering and "unfiltering" operations. As such, it is useful to study the frequency response of such filters to know which frequencies are kept and which are attenuated or discarded.
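As an illustration of such a study, the gain of the fractional differencing filter (1 − B)^d at angular frequency ω is |1 − exp(−iω)|^d = (2 sin(ω/2))^d, which vanishes only at ω = 0. A small Python sketch (the parameter value is chosen only for illustration):

import numpy as np

d = 0.4                                        # illustrative differencing order
omega = np.linspace(1e-3, np.pi, 500)          # frequencies in radians per sample
gain = np.abs(1.0 - np.exp(-1j * omega)) ** d  # equals (2*np.sin(omega/2))**d

# The gain tends to 0 as omega -> 0 (only the zero frequency is removed entirely),
# low frequencies are attenuated, and frequencies near pi are boosted by up to 2**d.
print(gain[0], gain[-1])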
Note that any filtering that would substitute for fractional differencing and integration in this AR(FI)MA model should be invertible in the same way as differencing and integration (summing), to avoid information loss. For example, a high-pass filter that completely discards many low frequencies (unlike the fractional differencing high-pass filter, which completely discards only frequency 0, i.e. constant behavior in the input signal, and merely attenuates other low frequencies) may not work so well, because after fitting ARMA terms to the filtered series, the reverse operation to return the ARMA forecast to its original units would not be able to re-boost those discarded low frequencies, since they were cut to zero.
Such frequency response studies may suggest other similar families of (reversible) filters that might be useful replacements for the "FI" part of the ARFIMA modeling flow, such as the well-known, easy to implement, and minimal distortion high-pass Butterworth filter or similar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1-B)^2=1-2B+B^2 \\,,"
},
{
"math_id": 1,
"text": "B^2X_t=X_{t-2} \\, ,"
},
{
"math_id": 2,
"text": "(1-B)^2X_t = X_t -2X_{t-1} + X_{t-2}."
},
{
"math_id": 3,
"text": "\\begin{align}\n(1 - B)^d &= \\sum_{k=0}^{\\infty} \\; {d \\choose k} \\; (-B)^k \\\\\n& = \\sum_{k=0}^{\\infty} \\; \\frac{\\prod_{a=0}^{k-1} (d - a)\\ (-B)^k}{k!}\\\\\n&=1-dB+\\frac{d(d-1)}{2!}B^2 -\\cdots \\, .\n\\end{align}"
},
{
"math_id": 4,
"text": " (1 - B)^d X_t= \\varepsilon_t, "
},
{
"math_id": 5,
"text": " X_t-dX_{t-1}+\\frac{d(d-1)}{2!}X_{t-2} -\\cdots = \\varepsilon_t . "
},
{
"math_id": 6,
"text": "f(\\lambda) = \\frac{1}{2\\pi} \\left(2 \\sin\\left(\\frac{\\lambda}{2}\\right)\\right)^{-2d}"
},
{
"math_id": 7,
"text": "\n\\left(\n 1 - \\sum_{i=1}^p \\phi_i B^i\n\\right)\n\\left(\n 1-B\n\\right)^d\nX_t\n=\n\\left(\n 1 + \\sum_{i=1}^q \\theta_i B^i\n\\right) \\varepsilon_t \\, .\n"
}
] | https://en.wikipedia.org/wiki?curid=12531229 |
1253303 | Induced metric | In mathematics and theoretical physics, the induced metric is the metric tensor defined on a submanifold that is induced from the metric tensor on a manifold into which the submanifold is embedded, through the pullback. It may be determined using the following formula (using the Einstein summation convention), which is the component form of the pullback operation:
formula_0
Here formula_1, formula_2 describe the indices of coordinates formula_3 of the submanifold while the functions formula_4 encode the embedding into the higher-dimensional manifold whose tangent indices are denoted formula_5, formula_6.
Example – Curve in 3D.
Let
formula_7
be a map from the domain of the curve formula_8 with parameter formula_9 into the Euclidean manifold formula_10. Here formula_11 are constants.
Then there is a metric given on formula_10 as
formula_12.
and we compute
formula_13
Therefore formula_14
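The pullback above can be reproduced with a computer algebra system. The following SymPy sketch (variable names are illustrative) differentiates the embedding and sums the squares of the components:

import sympy as sp

tau, a, b, m, n = sp.symbols('tau a b m n', real=True)

# Embedding of the curve into R^3
x1 = (a + b*sp.cos(n*tau)) * sp.cos(m*tau)
x2 = (a + b*sp.cos(n*tau)) * sp.sin(m*tau)
x3 = b*sp.sin(n*tau)

# Induced metric component: g_tautau = sum_mu (dx^mu/dtau)^2, since g_munu = delta_munu
g_tautau = sp.simplify(sp.diff(x1, tau)**2 + sp.diff(x2, tau)**2 + sp.diff(x3, tau)**2)
print(g_tautau)   # equivalent to b**2*n**2 + m**2*(a + b*cos(n*tau))**2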
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g_{ab} = \\partial_a X^\\mu \\partial_b X^\\nu g_{\\mu\\nu}\\ "
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "\\xi^a"
},
{
"math_id": 4,
"text": "X^\\mu(\\xi^a)"
},
{
"math_id": 5,
"text": "\\mu"
},
{
"math_id": 6,
"text": "\\nu"
},
{
"math_id": 7,
"text": "\n \\Pi\\colon \\mathcal{C} \\to \\mathbb{R}^3,\\ \\tau \\mapsto \\begin{cases}\\begin{align}x^1&= (a+b\\cos(n\\cdot \\tau))\\cos(m\\cdot \\tau)\\\\x^2&=(a+b\\cos(n\\cdot \\tau))\\sin(m\\cdot \\tau)\\\\x^3&=b\\sin(n\\cdot \\tau).\\end{align} \\end{cases}"
},
{
"math_id": 8,
"text": "\\mathcal{C}"
},
{
"math_id": 9,
"text": "\\tau"
},
{
"math_id": 10,
"text": "\\mathbb{R}^3"
},
{
"math_id": 11,
"text": "a,b,m,n\\in\\mathbb{R}"
},
{
"math_id": 12,
"text": "g=\\sum\\limits_{\\mu,\\nu}g_{\\mu\\nu}\\mathrm{d}x^\\mu\\otimes \\mathrm{d}x^\\nu\\quad\\text{with}\\quad\ng_{\\mu\\nu} = \\begin{pmatrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{pmatrix}\n"
},
{
"math_id": 13,
"text": "g_{\\tau\\tau}=\\sum\\limits_{\\mu,\\nu}\\frac{\\partial x^\\mu}{\\partial \\tau}\\frac{\\partial x^\\nu}{\\partial \\tau}\\underbrace{g_{\\mu\\nu}}_{\\delta_{\\mu\\nu}} = \\sum\\limits_\\mu\\left(\\frac{\\partial x^\\mu}{\\partial \\tau}\\right)^2=m^2 a^2+2m^2ab\\cos(n\\cdot \\tau)+m^2b^2\\cos^2(n\\cdot \\tau)+b^2n^2\n"
},
{
"math_id": 14,
"text": "g_\\mathcal{C}=(m^2 a^2+2m^2ab\\cos(n\\cdot \\tau)+m^2b^2\\cos^2(n\\cdot \\tau)+b^2n^2) \\, \\mathrm{d}\\tau\\otimes \\mathrm{d}\\tau"
}
] | https://en.wikipedia.org/wiki?curid=1253303 |
1253305 | Wheeler–DeWitt equation | Field equation, part of a theory that attempts to combine quantum mechanics and general relativity
In theoretical physics and applied mathematics, the Wheeler–DeWitt equation is a field equation attributed to John Archibald Wheeler and Bryce DeWitt. The equation attempts to mathematically combine the ideas of quantum mechanics and general relativity, a step towards a theory of quantum gravity.
In this approach, time plays a role different from what it does in non-relativistic quantum mechanics, leading to the so-called 'problem of time'. More specifically, the equation describes the quantum version of the Hamiltonian constraint using metric variables. Its commutation relations with the diffeomorphism constraints generate the Bergmann–Komar "group" (which "is" the diffeomorphism group on-shell).
Motivation and background.
In canonical gravity, spacetime is foliated into spacelike submanifolds. The three-metric (i.e., metric on the hypersurface) is formula_0 and given by
formula_1
In that equation the Latin indices run over the values 1, 2, 3 and the Greek indices run over the values 1, 2, 3, 4. The three-metric formula_0 is the field, and we denote its conjugate momenta as formula_2. The Hamiltonian is a constraint (characteristic of most relativistic systems)
formula_3
where formula_4 and formula_5 is the Wheeler–DeWitt metric. In index-free notation, the Wheeler–DeWitt metric on the space of positive definite quadratic forms "g" in three dimensions is
formula_6
Quantization "puts hats" on the momenta and field variables; that is, the functions of numbers in the classical case become operators that modify the state function in the quantum case. Thus we obtain the operator
formula_7
Working in "position space", these operators are
formula_8
formula_9
One can apply the operator to a general wave functional of the metric formula_10 where:
formula_11
which would give a set of constraints amongst the coefficients formula_12. This means that the amplitudes for formula_13 gravitons at certain positions are related to the amplitudes for a different number of gravitons at different positions. Or, one could use the two-field formalism, treating formula_14 as an independent field so that the wave function is formula_15.
Mathematical formalism.
The Wheeler–DeWitt equation is a functional differential equation on the space of three-dimensional spatial metrics. It is ill-defined in the general case, but very important in theoretical physics, especially in quantum gravity. The equation has the form of an operator acting on a wave functional; the functional reduces to a function in cosmology. Contrary to the general case, the Wheeler–DeWitt equation is well defined in minisuperspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state. Bryce DeWitt first published this equation in 1967 under the name "Einstein–Schrödinger equation"; it was later renamed the "Wheeler–DeWitt equation".
Hamiltonian constraint.
Simply speaking, the Wheeler–DeWitt equation says
formula_16
where formula_17 is the Hamiltonian constraint in quantized general relativity and formula_18 stands for the wave function of the universe. Unlike ordinary quantum field theory or quantum mechanics, the Hamiltonian is a first class constraint on physical states. We also have an independent constraint for each point in space.
Although the symbols formula_19 and formula_18 may appear familiar, their interpretation in the Wheeler–DeWitt equation is substantially different from non-relativistic quantum mechanics. formula_18 is no longer a spatial wave function in the traditional sense of a complex-valued function that is defined on a 3-dimensional space-like surface and normalized to unity. Instead it is a functional of field configurations on all of spacetime. This wave function contains all of the information about the geometry and matter content of the universe. formula_19 is still an operator that acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case, and the Hamiltonian no longer determines the evolution of the system, so the Schrödinger equation formula_20 no longer applies. This property is known as timelessness. Various attempts to incorporate time in a fully quantum framework have been made, starting with the "Page and Wootters mechanism" and other subsequent proposals. The reemergence of time was also proposed as arising from quantum correlations between an evolving system and a reference quantum clock system; the concept of system-time entanglement was introduced as a quantifier of the actual distinguishable evolution undergone by the system.
Momentum constraint.
We also need to augment the Hamiltonian constraint with momentum constraints
formula_21
associated with spatial diffeomorphism invariance.
In minisuperspace approximations, we only have one Hamiltonian constraint (instead of infinitely many of them).
In fact, the principle of general covariance in general relativity implies that global evolution per se does not exist; the time formula_22 is just a label we assign to one of the coordinate axes. Thus, what we think about as time evolution of any physical system is just a gauge transformation, similar to that of QED induced by U(1) local gauge transformation formula_23 where formula_24 plays the role of local time. The role of a Hamiltonian is simply to restrict the space of the "kinematic" states of the Universe to that of "physical" states—the ones that follow gauge orbits. For this reason we call it a "Hamiltonian constraint." Upon quantization, physical states become wave functions that lie in the kernel of the Hamiltonian operator.
In general, the Hamiltonian vanishes for a theory with general covariance or time-scaling invariance.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma_{ij}"
},
{
"math_id": 1,
"text": "g_{\\mu\\nu}\\,\\mathrm{d}x^{\\mu}\\,\\mathrm{d}x^{\\nu}=(-\\,N^2+\\beta_k\\beta^k)\\,\\mathrm{d}t^2+2\\beta_k\\,\\mathrm{d}x^k\\,\\mathrm{d}t+\\gamma_{ij}\\,\\mathrm{d}x^i\\,\\mathrm{d}x^j."
},
{
"math_id": 2,
"text": "\\pi^{ij}"
},
{
"math_id": 3,
"text": "\\mathcal{H}=\\frac{1}{2\\sqrt{\\gamma}}G_{ijkl}\\pi^{ij}\\pi^{kl}-\\sqrt{\\gamma}\\,{}^{(3)}\\!R=0"
},
{
"math_id": 4,
"text": "\\gamma=\\det(\\gamma_{ij})"
},
{
"math_id": 5,
"text": "G_{ijkl}=(\\gamma_{ik}\\gamma_{jl}+\\gamma_{il}\\gamma_{jk}-\\gamma_{ij}\\gamma_{kl})"
},
{
"math_id": 6,
"text": "\\operatorname{tr}((g^{-1}dg)^2)-(\\operatorname{tr}(g^{-1}dg))^2."
},
{
"math_id": 7,
"text": "\\widehat{\\mathcal{H}}=\\frac{1}{2\\sqrt{\\gamma}}\\widehat{G}_{ijkl}\\widehat{\\pi}^{ij}\\widehat{\\pi}^{kl}-\\sqrt{\\gamma}\\,{}^{(3)}\\!\\widehat{R}."
},
{
"math_id": 8,
"text": " \\hat{\\gamma}_{ij}(t,x^k) \\to \\gamma_{ij}(t,x^k)"
},
{
"math_id": 9,
"text": " \\hat{\\pi}^{ij}(t,x^k) \\to -i \\frac{\\delta}{\\delta \\gamma_{ij}(t,x^k)}. "
},
{
"math_id": 10,
"text": "\\widehat{\\mathcal{H}} \\Psi[\\gamma] =0 "
},
{
"math_id": 11,
"text": " \\Psi[\\gamma] = a + \\int \\psi(x) \\gamma(x) dx^3+ \\int\\int \\psi(x,y)\\gamma(x)\\gamma(y) dx^3 dy^3 +... "
},
{
"math_id": 12,
"text": "\\psi(x,y,...)"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "\\omega(g)"
},
{
"math_id": 15,
"text": "\\Psi[\\gamma,\\omega]"
},
{
"math_id": 16,
"text": "\\hat{H}(x) |\\psi\\rangle = 0"
},
{
"math_id": 17,
"text": "\\hat{H}(x)"
},
{
"math_id": 18,
"text": "|\\psi\\rangle"
},
{
"math_id": 19,
"text": "\\hat{H}"
},
{
"math_id": 20,
"text": "\\hat{H} |\\psi\\rangle = i \\hbar \\partial / \\partial t |\\psi\\rangle "
},
{
"math_id": 21,
"text": "\\vec{\\mathcal{P}}(x) \\left| \\psi \\right\\rangle = 0"
},
{
"math_id": 22,
"text": "t"
},
{
"math_id": 23,
"text": " \\psi \\rightarrow e^{i\\theta(\\vec{r} )} \\psi"
},
{
"math_id": 24,
"text": "\\theta(\\vec{r})"
}
] | https://en.wikipedia.org/wiki?curid=1253305 |
12533877 | Revenue equivalence | Revenue equivalence is a concept in auction theory that states that given certain conditions, any mechanism that results in the same outcomes (i.e. allocates items to the same bidders) also has the same expected revenue.
Notation.
There is a set formula_0 of possible outcomes.
There are formula_1 agents which have different valuations for each outcome. The valuation of agent formula_2 (also called its "type") is represented as a function:
formula_3
which expresses the value it has for each alternative, in monetary terms.
The agents have quasilinear utility functions; this means that, if the outcome is formula_4 and in addition the agent receives a payment formula_5 (positive or negative), then the total utility of agent formula_2 is:
formula_6
The vector of all value-functions is denoted by formula_7.
For every agent formula_2, the vector of all value-functions of the "other" agents is denoted by formula_8. So formula_9.
A "mechanism" is a pair of functions:
The agents' types are independent identically-distributed random variables. Thus, a mechanism induces a Bayesian game in which a player's strategy is his reported type as a function of his true type. A mechanism is said to be Bayesian-Nash incentive compatible if there is a Bayesian Nash equilibrium in which all players report their true type.
Statement.
Under these assumptions, the revenue equivalence theorem then says the following.
For any two Bayesian-Nash incentive compatible mechanisms, if:
then:
Example.
A classic example is the pair of auction mechanisms: the first price auction and the second price auction. The first-price auction has a variant which is Bayesian-Nash incentive compatible; the second-price auction is dominant-strategy incentive compatible, which is even stronger than Bayesian-Nash incentive compatibility. The two mechanisms fulfill the conditions of the theorem because:
Indeed, the expected payment for each player is the same in both auctions, and the auctioneer's revenue is the same; see the page on first-price sealed-bid auction for details.
Equivalence of auction mechanisms in single item auctions.
In fact, we can use revenue equivalence to prove that many types of auctions are revenue equivalent. For example, the first price auction, second price auction, and the all-pay auction are all revenue equivalent when the bidders are symmetric (that is, their valuations are independent and identically distributed).
Second price auction.
Consider the second price single item auction, in which the player with the highest bid pays the second highest bid. It is optimal for each player formula_2 to bid its own value formula_15.
Suppose formula_2 wins the auction, and pays the second highest bid, or formula_16. The revenue from this auction is simply formula_16.
First price auction.
In the first price auction, where the player with the highest bid simply pays its bid, if all players bid using the bidding function formula_17 then this is a Nash equilibrium.
In other words, if each player bids such that they bid the expected value of second highest bid, assuming that theirs was the highest, then no player has any incentive to deviate. If this were true, then it is easy to see that the expected revenue from this auction is also formula_16 if formula_2 wins the auction.
Proof.
To prove this, suppose that player 1 bids formula_18 where formula_19, effectively bluffing that its value is formula_20 rather than formula_7. We want to find the value of formula_20 that maximizes the player's expected payoff.
The probability of winning is then formula_21. The expected cost of this bid is formula_22. Then a player's expected payoff is
formula_23
Let formula_24, a random variable. Then we can rewrite the above as
formula_25.
Using the general fact that formula_26, we can rewrite the above as
formula_27.
Taking derivatives with respect to formula_20, we obtain
formula_28.
Thus bidding with your value formula_7 maximizes the player's expected payoff. Since formula_29 is monotone increasing, we verify that this is indeed a maximum point.
English auction.
In the open ascending price auction (aka English auction), a buyer's dominant strategy is to remain in the auction until the asking price is equal to his value. Then, if he is the last one remaining in the arena, he wins and pays the second-highest bid.
Consider the case of two buyers, each with a value that is an independent draw from a distribution with support [0,1], cumulative distribution function F(v) and probability density function f(v). If buyers behave according to their dominant strategies, then a buyer with value v wins if his opponent's value x is lower. Thus his win probability is
formula_30
and his expected payment is
formula_31
The expected payment conditional upon winning is therefore
formula_32
Multiplying both sides by F(v) and differentiating by v yields the following differential equation for e(v).
formula_33.
Rearranging this equation,
formula_34
Let B(v) be the equilibrium bid function in the sealed first-price auction. We establish revenue equivalence by showing that B(v)=e(v), that is, the equilibrium payment by the winner in one auction is equal to the equilibrium expected payment by the winner in the other.
Suppose that a buyer has value v and bids b. His opponent bids according to the equilibrium bidding strategy. The support of the opponent's bid distribution is [0,B(1)]. Thus any bid of at least B(1) wins with probability 1. Therefore, the best bid b lies in the interval [0,B(1)] and so we can write this bid as b = B(x) where x lies in [0,1]. If the opponent has value y he bids B(y). Therefore, the win probability is
formula_35.
The buyer's expected payoff is his win probability times his net gain if he wins, that is,
formula_36.
Differentiating, the necessary condition for a maximum is
formula_37.
That is if B(x) is the buyer's best response it must satisfy this first order condition. Finally we note that for B(v) to be the equilibrium bid function, the buyer's best response must be B(v). Thus x=v.
Substituting for x in the necessary condition,
formula_38.
Note that this differential equation is identical to that for e(v). Since e(0)=B(0)=0 it follows that formula_39.
Using revenue equivalence to predict bidding functions.
We can use revenue equivalence to predict the bidding function of a player in a game. Consider the two player version of the second price auction and the first price auction, where each player's value is drawn uniformly from formula_40.
Second price auction.
The expected payment of the first player in the second price auction can be computed as follows:
formula_41
Since players bid truthfully in a second price auction, we can replace all prices with players' values. If player 1 wins, he pays what player 2 bids, or formula_42. Player 1 himself bids formula_43. Since payment is zero when player 1 loses, the above is
formula_44
Since formula_45 come from a uniform distribution, we can simplify this to
formula_46
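The simplification can be checked by direct integration: player 1 pays formula_42 whenever it is below formula_43 and nothing otherwise, and the value of player 2 is uniform on [0, 1]. A short SymPy check (illustrative only):

import sympy as sp

v1, v2 = sp.symbols('v1 v2', nonnegative=True)

# Expected payment of player 1: pays v2 when v2 < v1 (density of v2 is 1 on [0, 1]).
expected_payment = sp.integrate(v2, (v2, 0, v1))
print(expected_payment)   # v1**2/2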
First price auction.
We can use revenue equivalence to generate the correct symmetric bidding function in the first price auction. Suppose that in the first price auction, each player has the bidding function formula_47, where this function is unknown at this point.
The expected payment of player 1 in this game is then
formula_41 (as above)
Now, a player simply pays what the player bids, and let's assume that players with higher values still win, so that the probability of winning is simply a player's value, as in the second price auction. We will later show that this assumption was correct. Again, a player pays nothing if he loses the auction. We then obtain
formula_48
By the revenue equivalence principle, we can equate this expression to the expected payment in the second-price auction that we calculated above:
formula_49
From this, we can infer the bidding function:
formula_50
Note that with this bidding function, the player with the higher value still wins. We can show that this is the correct equilibrium bidding function in an additional way, by thinking about how a player should maximize his bid given that all other players are bidding using this bidding function. See the page on first-price sealed-bid auction.
All-pay auctions.
Similarly, we know that the expected payment of player 1 in the second price auction is formula_51, and this must be equal to the expected payment in the all-pay auction, i.e.
formula_52
Thus, the bidding function for each player in the all-pay auction is formula_53
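These equilibrium bidding functions can also be compared numerically. The Monte Carlo sketch below (a simple check, not a proof) draws two uniform values and applies the bids derived above; all three auction formats give an expected revenue of about 1/3:

import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, size=(2, 1_000_000))   # two bidders with uniform values

second_price = np.minimum(v[0], v[1])            # winner pays the losing (truthful) bid
first_price  = np.maximum(v[0], v[1]) / 2        # winner pays b(v) = v/2
all_pay      = (v[0]**2 + v[1]**2) / 2           # everyone pays b(v) = v^2/2

for name, revenue in [("second price", second_price),
                      ("first price", first_price),
                      ("all-pay", all_pay)]:
    print(name, revenue.mean())                  # each is close to 1/3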
Implications.
An important implication of the theorem is that any single-item auction which unconditionally gives the item to the highest bidder is going to have the same expected revenue. This means that, if we want to increase the auctioneer's revenue, the outcome function must be changed. One way to do this is to set a Reservation price on the item. This changes the Outcome function since now the item is not always given to the highest bidder. By carefully selecting the reservation price, an auctioneer can get a substantially higher expected revenue.
Limitations.
The revenue-equivalence theorem breaks down in some important cases:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "v_i : X \\longrightarrow R_{\\geq 0}"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "p_i"
},
{
"math_id": 6,
"text": "u_i := v_i(x) + p_i"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "v_{-i}"
},
{
"math_id": 9,
"text": "v \\equiv (v_i,v_{-i})"
},
{
"math_id": 10,
"text": "Outcome"
},
{
"math_id": 11,
"text": "x\\in X"
},
{
"math_id": 12,
"text": "Payment"
},
{
"math_id": 13,
"text": "(p_1,\\dots,p_n)"
},
{
"math_id": 14,
"text": "v_i^0"
},
{
"math_id": 15,
"text": "b_i=v_i"
},
{
"math_id": 16,
"text": "\\max_{j \\neq i} b_j"
},
{
"math_id": 17,
"text": "b(v) = E(\\max_{j \\neq i} v_j ~ | ~ v_j \\le v ~ \\forall ~ j),"
},
{
"math_id": 18,
"text": "b(z)"
},
{
"math_id": 19,
"text": " z < v"
},
{
"math_id": 20,
"text": "z"
},
{
"math_id": 21,
"text": "Pr(\\max_{i > 1} v_i < z)"
},
{
"math_id": 22,
"text": "E(\\max_{i>1} v_i~|~v_i<z~\\forall~i)"
},
{
"math_id": 23,
"text": "Pr(\\max_{i > 1} v_i < z)(v-E(\\max_{i>1} v_i~|~v_i<z~\\forall~i))"
},
{
"math_id": 24,
"text": "X=\\max_{i > 1} v_i"
},
{
"math_id": 25,
"text": "Pr(X < z)(v-E(X~|X \\le z))"
},
{
"math_id": 26,
"text": "E(X~|~X\\le z)\\cdot Pr(X<z) = \\int_0^z Pr(X<z) - Pr(X<y)dy"
},
{
"math_id": 27,
"text": "Pr(X<z)\\cdot v - Pr(X<z) \\cdot z + \\int_0^z Pr(X < y)dy"
},
{
"math_id": 28,
"text": "Pr(X<z)'(v-z)=0 \\Rightarrow v = z"
},
{
"math_id": 29,
"text": "Pr(X<z)"
},
{
"math_id": 30,
"text": "w=\\Pr \\{x<v\\}\\equiv F(v)"
},
{
"math_id": 31,
"text": "C(v)=\\int\\limits_{0}^{v}{{}}xf(x)dx"
},
{
"math_id": 32,
"text": "e(v)=\\frac{C(v)}{F(v)}=\\frac{\\int\\limits_{0}^{v}{{}}xf(x)dx}{F(v)}"
},
{
"math_id": 33,
"text": "{e}'(v)F(v)+e(v)f(v)=vf(v)"
},
{
"math_id": 34,
"text": "{e}'(v)=\\frac{f(v)}{F(v)}(v-e(v))"
},
{
"math_id": 35,
"text": "w=\\Pr \\{b<B(y)\\}=\\Pr \\{B(x)<B(y)\\}=\\Pr \\{x<y\\}=F(v)"
},
{
"math_id": 36,
"text": "U=w(v-B(x))=F(x)(v-B(x))"
},
{
"math_id": 37,
"text": "{U}'(x)=f(x)(v-B(x))-F(x){B}'(x)=F(x)\\left(\\frac{f(x)}{F(x)}(v-B(x))-{B}'(x)\\right)=0"
},
{
"math_id": 38,
"text": "\\frac{f(v)}{F(v)}(v-B(v))-{B}'(v)=0"
},
{
"math_id": 39,
"text": "B(v)=e(v)"
},
{
"math_id": 40,
"text": "[0,1]"
},
{
"math_id": 41,
"text": "E(\\text{Payment}~|~\\text{Player 1 wins})P(\\text{Player 1 wins}) + E(\\text{Payment}~|~\\text{Player 1 loses})P(\\text{Player 1 loses})"
},
{
"math_id": 42,
"text": "p_2=v_2"
},
{
"math_id": 43,
"text": "p_1=v_1"
},
{
"math_id": 44,
"text": "E(v_2~|~v_2 < v_1)P(v_2 < v_1) + 0"
},
{
"math_id": 45,
"text": "v_1,v_2"
},
{
"math_id": 46,
"text": "\\frac{v_1}{2} \\cdot v_1 = \\frac{v_1^2}{2}"
},
{
"math_id": 47,
"text": "b(v)"
},
{
"math_id": 48,
"text": "b(v_1) \\cdot v_1 + 0"
},
{
"math_id": 49,
"text": "b(v_1) \\cdot v_1 = \\frac{v_1^2}{2}"
},
{
"math_id": 50,
"text": "b(v_1) = \\frac{v_1}{2}"
},
{
"math_id": 51,
"text": "\\frac{v_1^2}{2}"
},
{
"math_id": 52,
"text": "\\frac{v_1^2}{2} = b(v_1)"
},
{
"math_id": 53,
"text": "\\frac{v^2}{2}"
}
] | https://en.wikipedia.org/wiki?curid=12533877 |
12537045 | Sums of powers | List of mathematical contexts in which exponentiated terms are summed
In mathematics and statistics, sums of powers occur in a number of contexts:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Dmbox/styles.css" />
Index of articles associated with the same name
This includes a list of related items that share the same name (or similar names). If an internal link incorrectly led you here, you may wish to change the link to point directly to the intended article. | [
{
"math_id": 0,
"text": "1^k + 2^k + 3^k + \\cdots + n^k"
},
{
"math_id": 1,
"text": "a^2=b^4+c^4"
},
{
"math_id": 2,
"text": "a^4=b^4+c^2"
},
{
"math_id": 3,
"text": "x^k+y^k=z^k"
},
{
"math_id": 4,
"text": "|x/a|^k+|y/b|^k=1"
},
{
"math_id": 5,
"text": "a^4 + b^4 + c^4 + d^4 = (a + b + c + d)^4 "
},
{
"math_id": 6,
"text": "\\sum_{i=1}^{n} a_i^k = \\sum_{j=1}^{m} b_j^k."
},
{
"math_id": 7,
"text": "\n\\varphi^{n+1} = \\varphi^n + \\varphi^{n-1}."
},
{
"math_id": 8,
"text": "1^k+2^k+\\cdots+m^k=(m+1)^k"
},
{
"math_id": 9,
"text": "\\sum_{i=k}^{n} z^i = \\frac{z^{k}-z^{n+1}}{1-z}."
}
] | https://en.wikipedia.org/wiki?curid=12537045 |
1253782 | Relativistic quantum chemistry | Theories of quantum chemistry explained via relativistic mechanics
Relativistic quantum chemistry combines relativistic mechanics with quantum chemistry to calculate elemental properties and structure, especially for the heavier elements of the periodic table. A prominent example is an explanation for the color of gold: due to relativistic effects, it is not silvery like most other metals.
The term relativistic effects was developed in light of the history of quantum mechanics. Initially, quantum mechanics was developed without considering the theory of relativity. Relativistic effects are those discrepancies between values calculated by models that consider relativity and those that do not. Relativistic effects are important for heavier elements with high atomic numbers, such as lanthanides and actinides.
Relativistic effects in chemistry can be considered to be perturbations, or small corrections, to the non-relativistic theory of chemistry, which is developed from the solutions of the Schrödinger equation. These corrections affect the electrons differently depending on the electron speed compared with the speed of light. Relativistic effects are more prominent in heavy elements because only in these elements do electrons attain sufficient speeds for the elements to have properties that differ from what non-relativistic chemistry predicts.
History.
Beginning in 1935, Bertha Swirles described a relativistic treatment of a many-electron system, despite Paul Dirac's 1929 assertion that the only imperfections remaining in quantum mechanics "give rise to difficulties only when high-speed particles are involved and are therefore of no importance in the consideration of the atomic and molecular structure and ordinary chemical reactions in which it is, indeed, usually sufficiently accurate if one neglects relativity variation of mass and velocity and assumes only Coulomb forces between the various electrons and atomic nuclei".
Theoretical chemists by and large agreed with Dirac's sentiment until the 1970s, when relativistic effects were observed in heavy elements. The Schrödinger equation had been developed without considering relativity in Schrödinger's 1926 article. Relativistic corrections were made to the Schrödinger equation (see Klein–Gordon equation) to describe the fine structure of atomic spectra, but this development and others did not immediately trickle into the chemical community. Since atomic spectral lines were largely in the realm of physics and not in that of chemistry, most chemists were unfamiliar with relativistic quantum mechanics, and their attention was on lighter elements typical for the organic chemistry focus of the time.
Dirac's opinion on the role relativistic quantum mechanics would play for chemical systems is wrong for two reasons. First, electrons in "s" and "p" atomic orbitals travel at a significant fraction of the speed of light. Second, relativistic effects give rise to indirect consequences that are especially evident for "d" and "f" atomic orbitals.
Qualitative treatment.
One of the most important and familiar results of relativity is that the relativistic mass of the electron increases as
formula_0
where formula_1 are the electron rest mass, velocity of the electron, and speed of light respectively. The figure at the right illustrates this relativistic effect as a function of velocity.
This has an immediate implication on the Bohr radius (formula_2), which is given by
formula_3
where formula_4 is the reduced Planck constant, and α is the fine-structure constant (a relativistic correction for the Bohr model).
Bohr calculated that a 1s orbital electron of a hydrogen atom orbiting at the Bohr radius of 0.0529 nm travels at nearly 1/137 the speed of light. One can extend this to a larger element with an atomic number "Z" by using the expression formula_5 for a 1s electron, where "v" is its orbital velocity, i.e., its instantaneous speed along the orbit. For gold with "Z" = 79, "v" ≈ 0.58"c", so the 1s electron will be moving at 58% of the speed of light. Substituting this in for "v"/"c" in the equation for the relativistic mass, one finds that "m"rel = 1.22"m"e; putting this into the expression for the Bohr radius above, one finds that the radius shrinks to about 82% of its nonrelativistic value.
If one substitutes the "relativistic mass" into the equation for the Bohr radius it can be written
formula_6
It follows that
formula_7
At right, the above ratio of the relativistic and nonrelativistic Bohr radii has been plotted as a function of the electron velocity. Notice how the relativistic model shows the radius decreases with increasing velocity.
When the Bohr treatment is extended to hydrogenic atoms, the Bohr radius becomes
formula_8
where formula_9 is the principal quantum number, and "Z" is an integer for the atomic number. In the Bohr model, the angular momentum is given as formula_10. Substituting into the equation above and solving for formula_11 gives
formula_12
From this point, atomic units can be used to simplify the expression into
formula_13
Substituting this into the expression for the Bohr ratio mentioned above gives
formula_14
At this point one can see that a low value of formula_9 and a high value of formula_15 results in formula_16. This fits with intuition: electrons with lower principal quantum numbers will have a higher probability density of being nearer to the nucleus. A nucleus with a large charge will cause an electron to have a high velocity. A higher electron velocity means an increased electron relativistic mass, and as a result the electrons will be near the nucleus more of the time and thereby contract the radius for small principal quantum numbers.
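To make the contraction concrete, the following minimal Python sketch evaluates v/c = Zα/n and the resulting ratios for the 1s electrons of hydrogen, silver, and gold; the element list and the approximate value used for the fine-structure constant are illustrative choices.
```python
import math

ALPHA = 1 / 137.035999   # fine-structure constant (approximate value)

def bohr_1s_relativity(Z, n=1):
    """Return (v/c, m_rel/m_e, a_rel/a_0) for a hydrogen-like electron."""
    v_over_c = Z * ALPHA / n
    gamma = 1 / math.sqrt(1 - v_over_c**2)     # relativistic mass increase
    return v_over_c, gamma, 1 / gamma          # the radius scales as 1/m_rel

for name, Z in [("H", 1), ("Ag", 47), ("Au", 79)]:
    v_over_c, gamma, radius_ratio = bohr_1s_relativity(Z)
    print(f"{name:2s}: v/c = {v_over_c:.3f}, m_rel/m_e = {gamma:.3f}, "
          f"a_rel/a_0 = {radius_ratio:.3f}")
# Au: v/c ≈ 0.577, m_rel ≈ 1.22 m_e, so the 1s radius falls to ≈ 0.82 a_0
```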
Periodic table deviations.
Mercury.
Mercury (Hg) is a liquid down to approximately −39 °C, its melting point. Bonding forces are weaker for Hg–Hg bonds than for their immediate neighbors such as cadmium (m.p. 321 °C) and gold (m.p. 1064 °C). The lanthanide contraction only partially accounts for this anomaly. Because the 6s2 orbital is contracted by relativistic effects and may therefore only weakly contribute to any chemical bonding, Hg–Hg bonding must be mostly the result of van der Waals forces.
Mercury gas is mostly monatomic, Hg(g). Hg2(g) rarely forms and has a low dissociation energy, as expected due to the lack of strong bonds.
The relationship between Au2(g) and Hg(g) is analogous to that between H2(g) and He(g): gold, with a single 6s electron, can form a bound diatomic molecule, whereas mercury, with its filled 6s2 shell, cannot. The relativistic contraction of the 6s2 orbital leads to gaseous mercury sometimes being referred to as a pseudo noble gas.
Color of gold and caesium.
The reflectivity of aluminium (Al), silver (Ag), and gold (Au) is shown in the graph to the right. The human eye sees electromagnetic radiation with a wavelength near 600 nm as yellow. Gold absorbs blue light more than it absorbs other visible wavelengths of light; the reflected light reaching the eye is therefore lacking in blue compared with the incident light. Since yellow is complementary to blue, this makes a piece of gold under white light appear yellow to human eyes.
The electronic transition from the 5d orbital to the 6s orbital is responsible for this absorption. An analogous transition occurs in silver, but the relativistic effects are smaller than in gold. While silver's 4d orbital experiences some relativistic expansion and its 5s orbital some contraction, the 4d–5s distance in silver is much greater than the 5d–6s distance in gold. The relativistic effects increase the 5d orbital's distance from the atom's nucleus and decrease the 6s orbital's distance. Due to the decreased 6s orbital distance, the electronic transition primarily absorbs in the violet/blue region of the visible spectrum, as opposed to the UV region.
Caesium, the heaviest of the alkali metals that can be collected in quantities sufficient for viewing, has a golden hue, whereas the other alkali metals are silver-white. However, relativistic effects are not very significant at "Z" = 55 for caesium (not far from "Z" = 47 for silver). The golden color of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium, this frequency is in the ultraviolet, but for caesium it reaches the blue-violet end of the visible spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially, while other colors (having lower frequency) are reflected; hence it appears yellowish.
Lead–acid battery.
Without relativity, lead ("Z" = 82) would be expected to behave much like tin ("Z" = 50), so tin–acid batteries should work just as well as the lead–acid batteries commonly used in cars. However, calculations show that about 10 V of the 12 V produced by a 6-cell lead–acid battery arises purely from relativistic effects, explaining why tin–acid batteries do not work.
Inert-pair effect.
In Tl(I) (thallium), Pb(II) (lead), and Bi(III) (bismuth) complexes a 6s2 electron pair exists. The inert pair effect is the tendency of this pair of electrons to resist oxidation due to a relativistic contraction of the 6s orbital.
Other effects.
Additional phenomena commonly caused by relativistic effects are the following:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_\\text{rel} = \\frac{m_\\text{e}}{\\sqrt{1 - (v_\\text{e}/c)^2}},"
},
{
"math_id": 1,
"text": "m_e, v_e, c"
},
{
"math_id": 2,
"text": "a_0"
},
{
"math_id": 3,
"text": "a_0 = \\frac{\\hbar}{m_\\text{e} c \\alpha},"
},
{
"math_id": 4,
"text": "\\hbar"
},
{
"math_id": 5,
"text": "v \\approx \\frac{Zc}{137}"
},
{
"math_id": 6,
"text": "a_\\text{rel} = \\frac{\\hbar \\sqrt{1 - (v_\\text{e}/c)^2}}{m_\\text{e} c \\alpha}."
},
{
"math_id": 7,
"text": "\\frac{a_\\text{rel}}{a_0} = \\sqrt{1 - (v_\\text{e}/c)^2}."
},
{
"math_id": 8,
"text": "r = \\frac{n^2}{Z} a_0 = \\frac{n^2 \\hbar^2 4 \\pi \\varepsilon_0}{m_\\text{e}Ze^2},"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "mv_\\text{e}r = n\\hbar"
},
{
"math_id": 11,
"text": "v_\\text{e}"
},
{
"math_id": 12,
"text": "\\begin{align}\nr &= \\frac{n^2 a_0}{Z} = \\frac{n \\hbar}{m v_\\text{e}}, \\\\\nv_\\text{e} &= \\frac{Z}{n^2 a_0} \\frac{n \\hbar}{m}, \\\\\n\\frac{v_\\text{e}}{c} &= \\frac{Z \\alpha}{n} = \\frac{Z e^2}{4 \\pi \\varepsilon_0 \\hbar c n}.\n\\end{align}"
},
{
"math_id": 13,
"text": "v_\\text{e} = \\frac{Z}{n}."
},
{
"math_id": 14,
"text": "\\frac{a_\\text{rel}}{a_0} = \\sqrt{1 - \\left(\\frac{Z}{nc}\\right)^2}."
},
{
"math_id": 15,
"text": "Z"
},
{
"math_id": 16,
"text": "\\frac{a_\\text{rel}}{a_0} < 1"
}
] | https://en.wikipedia.org/wiki?curid=1253782 |
1253853 | Sound attenuator | A sound attenuator, or duct silencer, sound trap, or muffler, is a noise control acoustical treatment of Heating Ventilating and Air-Conditioning (HVAC) ductwork designed to reduce transmission of noise through the ductwork, either from equipment into occupied spaces in a building, or between occupied spaces.
In its simplest form, a sound attenuator consists of a baffle within the ductwork. These baffles often contain sound-absorbing materials. The physical dimensions and baffle configuration of sound attenuators are selected to attenuate a specific range of frequencies. Unlike conventional internally-lined ductwork, which is only effective at attenuating mid- and high-frequency noise, sound attenuators can achieve broader band attenuation in relatively short lengths. Certain types of sound attenuators are essentially a Helmholtz resonator used as a passive noise-control device.
Configuration.
Generally, sound attenuators consist of the following elements:
Sound attenuators are available in circular and rectangular form factors. Prefabricated rectangular sound attenuators typically come in 3, 5, 7, or 9-ft lengths. The width and height of the sound attenuators are often determined by the surrounding ductwork, though extended media options are available for improved attenuation. The baffles of rectangular sound attenuators are commonly referred to as splitters, whereas circular sound attenuators contain a bullet-shaped baffle.
Sound attenuators are typically classified as "Low," "Medium," or "High" based on performance characteristics and/or duct velocity. An example classification scheme is listed below.
Properties.
The acoustical properties of commercially available sound attenuators are tested in accordance with ASTM E477: Standard Test Method for Laboratory Measurements of Acoustical and Airflow Performance of Duct Liner Materials and Prefabricated Silencers. These tests are conducted at NVLAP-accredited facilities and then reported by the manufacturer in marketing or engineering bulletins. Outside of the US, sound attenuators are tested in accordance with British Standard 4718 (legacy) or ISO 7235.
Dynamic insertion loss.
The dynamic insertion loss of a sound attenuator is the amount of attenuation, in decibels, provided by the silencer under flow conditions. While flow conditions in typical low-velocity duct systems rarely exceed 2000–3000 ft/min, sound attenuators for steam vents must withstand airflow velocities in the 15,000–20,000 ft/min range. The acoustic performance of a sound attenuator is tested over a range of airflow velocities, and for forward and reverse flow conditions. Forward flow is when the air and sound waves propagate in the same direction. The insertion loss of a silencer is defined as
formula_0
where:
formula_1= Radiated sound power from the duct with the attenuator
formula_2= Radiated sound power from the duct without the attenuator
Some manufacturers report the static insertion loss of the silencer, which is typically measured with a loudspeaker in lieu of a fan to represent a zero flow condition. These values can be useful in the design of smoke evacuation systems, where sound attenuators are used to attenuate exterior noise that breaks into the exhaust ductwork.
The insertion loss of a sound attenuator is sometimes referred to as transmission loss.
Regenerated noise.
The internal baffles of a sound attenuator constrict airflow, which in turn generates turbulent noise. Noise generated by a sound attenuator is directly related to the airflow velocity at the constriction, and changes proportionally with the face area of the sound attenuator.
The change in generated noise can be expressed as
formula_3
where:
formula_4= The new face area of the sound attenuator
formula_5= Reference face area of the sound attenuator
For example, if the attenuator doubles in width while maintaining a constant airflow velocity, the generated noise will increase by 3 dB. Conversely, if the attenuator shrinks by a factor of 10 while keeping the airflow velocity constant, the generated noise will decrease by 10 dB. Since turbulence-generated noise caused by duct fittings changes at a rate of formula_6 of the velocity ratio, airflow velocities are a critical component of attenuator sizing.
Regenerated noise should always be reviewed, but it is usually only a concern in very quiet rooms (e.g. concert halls, recording studios, music rehearsal rooms) or when the ductwork velocity is greater than 1500 ft/min.
There is a prediction formula that can be used to estimate duct silencer regenerated noise if no measured data exist (a numerical sketch follows the symbol definitions below)
formula_7
where:
formula_8 = sound power level generated by the sound attenuator (dB)
formula_9 = velocity at the constricted cross-area (ft/min)
formula_10 = reference velocity (196.8 ft/min)
formula_11 = number of air passages (number of splitters)
formula_12 = height or circumference of the sound attenuator (in)
formula_13 = reference dimension (0.0394 in)
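As a rough numerical illustration of this estimator, the sketch below evaluates the formula in Python for a hypothetical silencer; the passage velocity, number of air passages, and silencer height used are assumed example values rather than data for any actual product.
```python
import math

def regenerated_noise_lw(velocity_fpm, n_passages, height_in,
                         v_ref=196.8, h_ref=0.0394):
    """Estimate the sound power level (dB) regenerated by a duct silencer.

    Implements Lw = 55*log10(V/V0) + 10*log10(N) + 10*log10(H/H0) - 45,
    with V in ft/min and H in inches, as given above.
    """
    return (55 * math.log10(velocity_fpm / v_ref)
            + 10 * math.log10(n_passages)
            + 10 * math.log10(height_in / h_ref)
            - 45)

# Hypothetical example: 1500 ft/min passage velocity, 4 air passages, 24 in high.
lw = regenerated_noise_lw(velocity_fpm=1500, n_passages=4, height_in=24)
print(f"Estimated regenerated sound power level ≈ {lw:.1f} dB")   # ≈ 37 dB
```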
Pressure drop.
Similar to other duct fittings, sound attenuators cause pressure drop. Catalog pressure drop values obtained through ASTM E477 assume ideal, laminar airflow, which is not always found in field installations. The ASHRAE Handbook provides pressure drop correction factors for different inlet and outlet conditions. These correction factors are used whenever there is a turbulent wake within 3 to 5 duct diameters upstream or downstream of the attenuator.
Where sound attenuator dimensions differ from surrounding duct dimensions, transitions to and from the sound attenuator should be smooth and gradual. Abrupt transitions cause the pressure drop and regenerated noise to significantly increase.
The pressure drop through a sound attenuator is typically higher than the pressure drop for an equivalent length of lined duct. However, significantly longer lengths of lined duct are required to achieve equal attenuation, at which point the pressure drop of large extents of lined duct is significantly greater than incurred through a single sound attenuator.
Friction losses due to dissipative sound attenuators can be expressed as
formula_14
where:
formula_15 = ratio of the sound attenuator perimeter and area
formula_16 = length of the duct
formula_17 = The friction loss coefficient
formula_18 = density of air
formula_19 = passage velocity
The perimeter, area, and length of the sound attenuator are also parameters which affect its pressure drop. Friction loss at the sound attenuator is directly proportional to its noise attenuation performance, whereby greater attenuation usually equates to greater pressure drop.
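The friction-loss relation can be evaluated in the same way. In the minimal sketch below, the perimeter, cross-sectional area, length, loss coefficient, and passage velocity are all assumed illustrative values in SI units, so the result is in pascals (equivalent to N/m²).
```python
def silencer_friction_loss(perimeter_m, area_m2, length_m, k_f,
                           passage_velocity_ms, rho=1.2):
    """Friction loss (N/m^2) = (P/A) * l * K_f * 0.5 * rho * v_p^2."""
    return (perimeter_m / area_m2) * length_m * k_f * 0.5 * rho * passage_velocity_ms**2

# Hypothetical 1.5 m long attenuator with a 0.6 m x 0.6 m face,
# an assumed loss coefficient of 0.05, and an 8 m/s passage velocity.
dp = silencer_friction_loss(perimeter_m=2.4, area_m2=0.36, length_m=1.5,
                            k_f=0.05, passage_velocity_ms=8.0)
print(f"Estimated friction loss ≈ {dp:.1f} Pa")   # ≈ 19.2 Pa
```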
Design variations.
Prefabricated sound attenuators rose to prominence in the late 1950s-early 1960s. Several manufacturers were among the first to produce and test prefabricated sound attenuators: Koppers, Industrial Acoustics Company, Industrial Sound Control, and Elof Hansson.
Though rectangular dissipative attenuators are the most common variant of attenuators used today in architectural acoustics noise control, other design options exist.
Reactive silencers.
Reactive silencers are very common in muffler design of automobiles and trucks. Attenuation is primarily achieved through sound reflection, area change, and tuned chambers. The design of reactive silencers from scratch is mathematically intensive, so manufacturers often have a number of prefabricated designs.
Dissipative silencers.
Dissipative silencers attenuate sound by transferring sound energy to heat. Dissipative silencers are used when broadband attenuation with low pressure drop is desired. In typical ductwork, high frequencies propagate down the duct as a beam, and minimally interact with the outer, lined edges. Sound attenuators with baffles that break the line of sight or elbow attenuators with a bend provide better high frequency attenuation than conventional lined ductwork. Generally, longer attenuators with thicker baffles will have a greater insertion loss over a wider frequency range.
These types of attenuators are commonly used on air handling units, ducted fan coil units, and at the air intake of compressors, gas turbines, and other ventilated equipment enclosures. On certain air handling unit or fan applications, it is common to use a co-planar silencer—a dissipative silencer that is sized for the fan and mounted directly to the fan outlet. This is a common feature in fan array design.
Crosstalk silencers.
Crosstalk silencers are purpose-built sound attenuators that prevent crosstalk between two closed, private spaces. Their design typically incorporates one or more bends to form a "Z" or "U" shape. This bend increases the efficacy of the sound attenuator without significantly increasing its overall length. Crosstalk attenuators are passive devices and should be sized for extremely low pressure drops, typically less than 0.05 inches w.g.
Exhaust registers.
In the early 1970s, American SF Products, Inc. created the KGE Exhaust Register, which was an air distribution device with an integral sound attenuator.
Noise control implementation.
First, the project noise control engineer (or acoustician), mechanical engineer, and equipment representative select the quietest possible equipment which meets the mechanical requirements and budget constraints of the project. Then, the noise control engineer typically calculates the noise along the path without the attenuator first. The required sound attenuator insertion loss is the difference between the calculated path level and the target background noise level. If no attenuator selection is feasible, the noise control engineer and mechanical engineer must re-evaluate the path between the equipment and the sound attenuator. When space constraints do not allow for a straight attenuator, an elbow or transitional attenuator can be used.
Duct silencers are prominently featured in systems where fiberglass internal duct liner is prohibited. While the effect of fiberglass liner on indoor air quality is generally insignificant, many higher education projects have adopted limits on internal fiberglass liner. In these situations, the project acoustician must rely on duct silencers as the primary means of fan noise and duct-borne noise attenuation.
Sound attenuators are typically located near ducted mechanical equipment, to attenuate noise which propagates down the duct. This creates a trade-off: the sound attenuator should be located near the fan and yet the air is typically more turbulent closer to fans and dampers. Ideally, sound attenuators should straddle the wall of the mechanical equipment room provided there are no fire dampers. If a sound attenuator is located over occupied space, the noise control engineer should confirm that duct breakout noise is not an issue prior to the attenuator. If there is significant distance between the attenuator and the mechanical room penetration, additional duct cladding (such as external fiberglass blanket or gypsum lagging) may be required to prevent noise from breaking into the duct and bypassing the attenuator.
Sound attenuators can also be used outdoors to quiet cooling towers, air intake of emergency generators, and exhaust fans. Larger equipment will require an array of sound attenuators, otherwise known as an attenuator bank. | [
{
"math_id": 0,
"text": "IL\\ (dB)=10\\log( \\frac{W_0}{W_m})"
},
{
"math_id": 1,
"text": "W_0"
},
{
"math_id": 2,
"text": "W_m"
},
{
"math_id": 3,
"text": "Generated\\ Noise\\ (dB)=10\\log( \\frac{A_1}{A_0})"
},
{
"math_id": 4,
"text": "A_1"
},
{
"math_id": 5,
"text": "A_0"
},
{
"math_id": 6,
"text": "50log"
},
{
"math_id": 7,
"text": "Lw=55log(V/V_0)+10log(N)+10log(H/H_0)-45"
},
{
"math_id": 8,
"text": "Lw"
},
{
"math_id": 9,
"text": "V"
},
{
"math_id": 10,
"text": "V_0"
},
{
"math_id": 11,
"text": "N"
},
{
"math_id": 12,
"text": "H"
},
{
"math_id": 13,
"text": "H_0"
},
{
"math_id": 14,
"text": " Friction\\ Loss=\\frac{P}{A}l(K_f\\frac{1}{2}\\rho v_p^2), \\ N/m^2 "
},
{
"math_id": 15,
"text": " \\frac{P}{A} "
},
{
"math_id": 16,
"text": "l"
},
{
"math_id": 17,
"text": "K_f"
},
{
"math_id": 18,
"text": " \\rho "
},
{
"math_id": 19,
"text": " v_p^2 "
}
] | https://en.wikipedia.org/wiki?curid=1253853 |
12539451 | Moiety conservation | Concept in biochemistry
Moiety conservation is the conservation of a subgroup in a chemical species, which is cyclically transferred from one molecule to another. In biochemistry, moiety conservation can have profound effects on the system's dynamics.
Moiety-conserved cycles in biochemistry.
A typical example of a conserved moiety in biochemistry is the Adenosine diphosphate (ADP) subgroup that remains unchanged when it is phosphorylated to create adenosine triphosphate (ATP) and then dephosphorylated back to ADP forming a conserved cycle. Moiety-conserved cycles in nature exhibit unique network control features which can be elucidated using techniques such as metabolic control analysis. Other examples in metabolism include NAD/NADH, NADP/NADPH, CoA/Acetyl-CoA. Conserved cycles also exist in large numbers in protein signaling networks when proteins get phosphorylated and dephosphorylated.
Most, if not all, of these cycles are time-scale-dependent. For example, although a protein in a phosphorylation cycle is conserved during the interconversion, over a longer time scale there will be low levels of protein synthesis and degradation, which change the level of the protein moiety. The same applies to cycles involving ATP, NAD, etc. Thus, although the concept of a moiety-conserved cycle in biochemistry is a useful approximation, over time scales that include significant net synthesis and degradation of the moiety, the approximation is no longer valid. When invoking the conserved-moiety assumption on a particular moiety, we are, in effect, assuming the system is closed to that moiety.
Identifying conserved cycles.
Conserved cycles in a biochemical network can be identified by examination of the stoichiometry matrix, formula_0. The stoichiometry matrix for a simple cycle with species A and AP is given by:
formula_1
The rates of change of A and AP can be written using the equation:
formula_2
Expanding the expression leads to:
formula_3
Note that formula_4. This means that formula_5, where formula_6 is the total mass of moiety formula_7.
Given an arbitrary system:
formula_8
elementary row operations can be applied to both sides such that the stoichiometric matrix is reduced to its echelon form, formula_9, giving:
formula_10
The elementary operations are captured in the formula_11 matrix. We can partition formula_11 to match the point in the echelon matrix where the zero rows begin, such that:
formula_12
By multiplying out the lower partition, we obtain:
formula_13
The formula_14 matrix will contain entries corresponding to the conserved cycle participants.
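In practice this reduction is easy to automate. The sketch below uses SymPy and the equivalent left null-space formulation (rather than tracking the elementary-operation matrix formula_11 explicitly) to recover the conservation relation for the simple A/AP cycle; the species labels are illustrative.
```python
import sympy as sp

# Stoichiometry matrix for the simple cycle: rows are species [A, AP],
# columns are reactions [v1, v2] as in the text (v1 produces A, v2 consumes A).
N = sp.Matrix([[ 1, -1],
               [-1,  1]])

# Conserved moieties correspond to left null-space vectors y with y^T N = 0,
# i.e. the null space of N transposed; each basis vector is one conservation law.
species = ["A", "AP"]
for y in N.T.nullspace():
    terms = " + ".join(f"{c}*{s}" for c, s in zip(y, species) if c != 0)
    print(f"conserved total: {terms} = constant")
# prints: conserved total: 1*A + 1*AP = constant
```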
Conserved cycles and computer models.
The presence of conserved moieties can affect how computer simulation models are constructed. Moiety-conserved cycles will reduce the number of differential equations required to solve a system. For example, a simple cycle has only one independent variable. The other variable can be computed using the difference between the total mass and the independent variable. The set of differential equations for the two-cycle is given by:
formula_15
These can be reduced to one differential equation and one linear algebraic equation:
formula_16
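A minimal simulation of this reduced system follows, assuming simple mass-action kinetics (v1 = k1·AP producing A, v2 = k2·A consuming A) with illustrative rate constants and total moiety; only A is integrated, and AP is recovered from the algebraic constraint.
```python
from scipy.integrate import solve_ivp

# Assumed mass-action kinetics and parameters (illustrative only):
#   v1 = k1 * AP (produces A),  v2 = k2 * A (consumes A),  T = A + AP.
k1, k2, T = 1.0, 0.5, 10.0

def dA_dt(t, y):
    A = y[0]
    AP = T - A                       # AP recovered from the linear algebraic equation
    v1, v2 = k1 * AP, k2 * A
    return [v1 - v2]                 # only one differential equation remains

sol = solve_ivp(dA_dt, (0.0, 10.0), y0=[1.0])
A_end = sol.y[0, -1]
print(f"A  at t = 10 ≈ {A_end:.3f}")        # approaches k1*T/(k1+k2) ≈ 6.667
print(f"AP at t = 10 ≈ {T - A_end:.3f}")    # approaches k2*T/(k1+k2) ≈ 3.333
```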
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\boldsymbol{N}"
},
{
"math_id": 1,
"text": "\n\\boldsymbol{N}=\\begin{bmatrix}\n1 & -1 \\\\\n-1 & 1\n\\end{bmatrix}"
},
{
"math_id": 2,
"text": "\n\\begin{bmatrix}\n\\frac{dA}{dt} \\\\\n\\frac{dAP}{dt}\n\\end{bmatrix}=\n\\left[\\begin{array}{rr}\n1 & -1 \\\\\n-1 & 1\n\\end{array}\\right]\n\\left[\\begin{array}{r}\nv_1 \\\\\nv_2\n\\end{array}\\right]\n"
},
{
"math_id": 3,
"text": "\n\\begin{align}\n\\frac{dA}{dt} &= v_1 - v_2 \\\\[4pt]\n\\frac{dAP}{dt} &= v_2 - v_1\n\\end{align}\n"
},
{
"math_id": 4,
"text": " dA/dt + dAP/dt = 0"
},
{
"math_id": 5,
"text": "A + AP = T"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": " \\boldsymbol{N} \\boldsymbol{v} = \\frac{d\\boldsymbol{x}}{dt}"
},
{
"math_id": 9,
"text": "\\boldsymbol{M}"
},
{
"math_id": 10,
"text": "\n\\begin{bmatrix}\n\\boldsymbol{M} \\\\\n\\boldsymbol{0}\n\\end{bmatrix} \\boldsymbol{v} = \\boldsymbol{E} \\frac{d\\boldsymbol{x}}{dt}\n"
},
{
"math_id": 11,
"text": "\\boldsymbol{E}"
},
{
"math_id": 12,
"text": "\n\\begin{bmatrix}\n\\boldsymbol{M} \\\\\n\\boldsymbol{0}\n\\end{bmatrix} \\boldsymbol{v} = \n\\begin{bmatrix}\n\\boldsymbol{X} \\\\\n\\boldsymbol{Y}\n\\end{bmatrix}\n\\frac{d\\boldsymbol{x}}{dt}\n"
},
{
"math_id": 13,
"text": "\n\\boldsymbol{Y}\n\\frac{d\\boldsymbol{x}}{dt} = 0\n"
},
{
"math_id": 14,
"text": "\\boldsymbol{Y}"
},
{
"math_id": 15,
"text": "\n\\begin{aligned}\n\\frac{d A}{d t} &=v_1-v_2 \\\\[4pt]\n\\frac{d AP}{d t}&=v_2-v_1\n\\end{aligned}\n"
},
{
"math_id": 16,
"text": "\n\\begin{aligned}\nAP &=T-A \\\\[4pt]\n\\frac{dA}{d t} &= v_1-v_2\n\\end{aligned}\n"
}
] | https://en.wikipedia.org/wiki?curid=12539451 |
12539517 | Bulldozer (microarchitecture) | Microarchitecture by AMD
The AMD Bulldozer Family 15h is a microprocessor microarchitecture for the FX and Opteron line of processors, developed by AMD for the desktop and server markets. Bulldozer is the codename for this family of microarchitectures. It was released on October 12, 2011, as the successor to the K10 microarchitecture.
Bulldozer is designed from scratch, not a development of earlier processors. The core is specifically aimed at computing products with TDPs of 10 to 125 watts. AMD claims dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores.
The "Bulldozer" cores support most of the instruction sets implemented by Intel processors (Sandy Bridge) available at its introduction (including SSSE3, SSE4.1, SSE4.2, AES, CLMUL, and AVX) as well as new instruction sets proposed by AMD; ABM, XOP, FMA4 and F16C. Only Bulldozer GEN4 (Excavator) supports AVX2 instruction sets.
Overview.
According to AMD, Bulldozer-based CPUs are based on GlobalFoundries' 32 nm Silicon on insulator (SOI) process technology and reuse the approach of DEC for multitasking computer performance, with the argument that, according to press notes, it "balances dedicated and shared computer resources to provide a highly compact, high units count design that is easily replicated on a chip for performance scaling." In other words, by eliminating some of the "redundant" elements that naturally creep into multicore designs, AMD hoped to take better advantage of its hardware capabilities while using less power.
Bulldozer-based implementations built on 32nm SOI with HKMG arrived in October 2011 for both servers and desktops. The server segment included the dual chip (16-core) Opteron processor codenamed "Interlagos" (for Socket G34) and single chip (4, 6 or 8 cores) "Valencia" (for Socket C32), while the "Zambezi" (4, 6 and 8 cores) targeted desktops on Socket AM3+.
Bulldozer is the first major redesign of AMD’s processor architecture since 2003, when the firm launched its K8 processors, and also features two 128-bit FMA-capable FPUs which can be combined into one 256-bit FPU. This design is accompanied by two integer clusters, each with 4 pipelines (the fetch/decode stage is shared). Bulldozer also introduced shared L2 cache in the new architecture. AMD calls this "design" a "Module". A 16-core processor design would feature eight of these "modules", but the operating system will recognize each "module" as two logical cores.
The modular architecture consists of multithreaded shared L2 cache and FlexFPU, which uses simultaneous multithreading. Each physical integer core, two per module, is single threaded, in contrast with Intel's Hyperthreading, where two virtual simultaneous threads share the resources of a single physical core.
In a retrospective review, Jeremy Laird of APC magazine commented on Bulldozer's issues, noting that it was slower than the outgoing Phenom II K10 design and that the PC software ecosystem had not yet "embraced" the multi-threaded model. By his observation, these issues caused a big loss for AMD: the company lost over 1 billion USD in 2012, and some industry observers were predicting bankruptcy by mid-2015. The company later managed to return to profit. Reasons cited for regaining profitability were the earlier divestment of in-house manufacturing into GlobalFoundries, the subsequent outsourcing of manufacturing to TSMC, and the new Ryzen CPU design.
Architecture.
Bulldozer core.
Bulldozer made use of "Clustered Multithreading" (CMT), a technique where some parts of the processor are shared between two threads and some parts are unique for each thread. Prior examples of such an approach to unconventional multithreading can be traced back to Sun Microsystems' 2005 UltraSPARC T1 CPU.
In terms of hardware complexity and functionality, a Bulldozer CMT module is equal to a dual-core processor in its integer calculation capabilities, and to either a single-core processor or a handicapped dual-core in terms of floating-point computational power, depending on whether the code is saturated in floating point instructions in both threads running on the same CMT module, and whether the FPU is performing 128-bit or 256-bit floating point operations. The reason for this is that for each two integer cores, that is, within the same module, there is a single floating-point unit consisting of a pair of 128-bit FMAC execution units.
CMT is in some way a simpler but similar design philosophy to SMT; both designs try to utilize execution units efficiently; in either method, when two threads compete for some execution pipelines, there is a loss in performance in one or more of the threads. Due to dedicated integer cores, the Bulldozer family modules performed roughly like a dual-core, dual-threaded processor during sections of code that were either wholly integer or a mix of integer and floating-point calculations; yet, due to the SMT use of the shared floating-point pipelines, the module would perform similarly to a single-core, dual-threaded SMT processor (SMT2) for a pair of threads saturated with floating-point instructions. (Both of these last two comparisons make the assumption that the processor possesses an equally wide and capable execution core, integer-wise and floating-point-wise, respectively.)
Both CMT and SMT are at peak effectiveness while running integer and floating-point code on a pair of threads. CMT stays at peak effectiveness while working on a pair of threads both consisting of integer code, while under SMT one or both threads will underperform due to competition for integer execution units. The disadvantage of CMT is a greater number of idle integer execution units in the single-threaded case. In the single-threaded case, CMT is limited to using at most half of the integer execution units in its module, while SMT imposes no such limit. A large SMT core with integer circuitry as wide and fast as two CMT cores could in theory momentarily have up to twice the integer performance in a single-threaded case. (More realistically for general code as a whole, Pollack's Rule estimates a speedup factor of formula_0, or approximately a 40% increase in performance.)
CMT processors and a typical SMT processor are similar in their efficient shared use of the L2 cache between a pair of threads.
The longer pipeline allowed the Bulldozer family of processors to achieve a much higher clock frequency compared to its K10 predecessors. While this increased frequencies and throughput, the longer pipeline also increased latencies and increased branch misprediction penalties.
The issue widths (and peak instruction executions per cycle) of a Jaguar, K10, and Bulldozer core are 2, 3, and 4 respectively. This made Bulldozer a more superscalar design compared to Jaguar/Bobcat. However, due to K10's somewhat wider core (in addition to the lack of refinements and optimizations in a first generation design) the Bulldozer architecture typically performed with somewhat lower IPC compared to its K10 predecessors. It was not until the refinements made in Piledriver and Steamroller, that the IPC of the Bulldozer family distinctly began to exceed that of K10 processors such as Phenom II.
Processors.
The first revenue shipments of Bulldozer-based Opteron processors was announced on September 7, 2011. The FX-4100, FX-6100, FX-8120 and FX-8150 were released in October 2011; with remaining FX series AMD processors released at the end of the first quarter of 2012.
Desktop.
Major Sources: CPU-World and Xbit-Labs
Server.
There are two series of Bulldozer-based processors for servers: Opteron 4200 series (Socket C32, code named Valencia, with up to four modules) and Opteron 6200 series (Socket G34, code named Interlagos, with up to 8 modules).
False advertising lawsuit.
In November 2015, AMD was sued under the California Consumers Legal Remedies Act and Unfair Competition Law for allegedly misrepresenting the specifications of Bulldozer chips. The class-action lawsuit, filed on 26 October in the US District Court for the Northern District of California, claims that each Bulldozer module is in fact a single CPU core with a few dual-core traits, rather than a true dual-core design. In August 2019, AMD agreed to settle the suit for $12.1M.
Performance.
Performance on Linux.
On 24 October 2011, the first generation tests done by Phoronix confirmed that the performance of Bulldozer CPU was somewhat less than expected. In several tests, the CPU performed similarly to the older generation Phenom 1060T.
The performance later substantially increased, as various compiler optimizations and CPU driver fixes were released.
Performance on Windows.
The first Bulldozer CPUs were met with a mixed response. It was discovered that the FX-8150 performed poorly in benchmarks that were not highly threaded, falling behind the second-generation Intel Core i* series processors and being matched or even outperformed by AMD's own Phenom II X6 at lower clock speeds. In highly threaded benchmarks, the FX-8150 performed on par with the Phenom II X6, and the Intel Core i7 2600K, depending on the benchmark. Given the overall more consistent performance of the Intel Core i5 2500K at a lower price, these results left many reviewers underwhelmed. The processor was found to be extremely power-hungry under load, especially when overclocked, compared to Intel's Sandy Bridge.
On 13 October 2011, AMD stated on its blog that "there are some in our community who feel the product performance did not meet their expectations", but showed benchmarks on actual applications where it outperformed the Sandy Bridge i7 2600k and AMD X6 1100T.
In January 2012, Microsoft released two hotfixes for Windows 7 and Server 2008 R2 that marginally improve the performance of Bulldozer CPUs by addressing the thread scheduling concerns raised after the release of Bulldozer.
On 6 March 2012, AMD posted a knowledge base article stating that there was a compatibility problem with FX processors, and certain games on the widely used digital game distribution platform, Steam. AMD stated that they had provided a BIOS update to several motherboard manufacturers (namely: Asus, Gigabyte Technology, MSI, and ASRock) that would fix the problem.
In September 2014, AMD CEO Rory Read conceded the Bulldozer design had not been a "game-changing part", and that AMD had to live with the design for four years.
Overclocking.
On 31 August 2011, AMD and a group of well-known overclockers including Brian McLachlan, Sami Mäkinen, Aaron Schradin, and Simon Solotko managed to set a new world record for CPU frequency using the unreleased and overclocked FX-8150 Bulldozer processor. Before that day, the record sat at 8.309 GHz, but the Bulldozer combined with liquid helium cooling reached a new high of 8.429 GHz. The record has since been overtaken at 8.58 GHz by Andre Yang using liquid nitrogen. On August 22, 2014 and using an FX-8370 (Piledriver), The Stilt from Team Finland achieved a maximum CPU frequency of 8.722 GHz.
The CPU clock frequency records set by overclocked Bulldozer CPUs were only broken almost a decade later by overclocks of Intel's 13th generation Core Raptor Lake CPUs in October 2022.
Revisions.
"Piledriver" is the AMD codename for its improved second-generation microarchitecture based on "Bulldozer". AMD "Piledriver" cores are found in Socket FM2 "Trinity" and "Richland" based series of APUs and CPUs and the Socket AM3+ "Vishera" based FX-series of CPUs. Piledriver was the last generation in the Bulldozer family to be available for socket AM3+ and to be available with an L3 cache. The Piledriver processors available for FM2 (and its mobile variant) sockets did not come with a L3 cache, as the L2 cache is the last-level cache for all FM2/FM2+ processors.
"Steamroller" is the AMD codename for its third-generation microarchitecture based on an improved version of "Piledriver". "Steamroller" cores are found in the Socket FM2+ "Kaveri" based series of APUs and CPUs.
"Excavator" is the codename for the fourth-generation "Bulldozer" core. "Excavator" was implemented as 'Carrizo' A-series APUs, "Bristol Ridge" A-series APUs, and Athlon x4 CPUs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}"
}
] | https://en.wikipedia.org/wiki?curid=12539517 |
12541 | Gematria | Numerology method
Gematria (; or gimatria , plural or , ) is the practice of assigning a numerical value to a name, word or phrase by reading it as a number, or sometimes by using an alphanumerical cipher. The letters of the alphabets involved have standard numerical values, but a word can yield several values if a cipher is used.
According to Aristotle (384–322 BCE), isopsephy, based on the Milesian numbering of the Greek alphabet developed in the Greek city of Miletus, was part of the Pythagorean tradition, which originated in the 6th century BCE. The first evidence of use of Hebrew letters as numbers dates to 78 BCE; gematria is still used in Jewish culture. Similar systems have been used in other languages and cultures, derived from or inspired by either Greek isopsephy or Hebrew gematria, and include Arabic abjad numerals and English gematria.
The most common form of Hebrew gematria is used in the Talmud and Midrash, and elaborately by many post-Talmudic commentators. It involves reading words and sentences as numbers, assigning numerical instead of phonetic value to each letter of the Hebrew alphabet. When read as numbers, they can be compared and contrasted with other words or phrases – cf. the Hebrew proverb (, lit. 'wine entered, secret went out', i.e. ). The gematric value of ('wine') is 70 (=10; =10; =50) and this is also the gematric value of ('secret', =60; =6; =4).
Although a type of gematria system ('Aru') was employed by the ancient Babylonian culture, their writing script was logographic, and the numerical assignments they made were to whole words. Aru was very different from the Milesian systems used by Greek and Hebrew cultures, which used alphabetic writing scripts. The value of words with Aru were assigned in an entirely arbitrary manner and correspondences were made through tables, and so cannot be considered a true form of gematria.
Gematria sums can involve single words, or a string of lengthy calculations. A short example of Hebrew numerology that uses gematria is the word (, lit. 'alive'), which is composed of two letters that (using the assignments in the table shown below) add up to 18. This has made 18 a "lucky number" among the Jewish people.
In early Jewish sources, the term can also refer to other forms of calculation or letter manipulation, for example atbash.
Etymology.
Classical scholars agree that the Hebrew word "gematria" was derived from the Greek word γεωμετρία "geōmetriā", "geometry", though some scholars believe it to derive from Greek γραμματεια "grammateia" "knowledge of writing". It is likely that both Greek words had an influence on the formation of the Hebrew word. Some hold it to derive from the order of the Greek alphabet, gamma being the third letter of the Greek alphabet ("gamma tria").
The word has been extant in English since at least the 17th century from translations of works by Giovanni Pico della Mirandola. It is largely used in Jewish texts, notably in those associated with the Kabbalah. Neither the concept nor the term appears in the Hebrew Bible itself.
History.
The first documented use of gematria is from an Assyrian inscription dating to the 8th century BCE, commissioned by Sargon II. In this inscription, Sargon II states: "the king built the wall of Khorsabad 16,283 cubits long to correspond with the numerical value of his name."
The practice of using alphabetic letters to represent numbers developed in the Greek city of Miletus, and is thus known as the Milesian system. Early examples include vase graffiti dating to the 6th century BCE. Aristotle wrote that the Pythagorean tradition, founded in the 6th century BCE by Pythagoras of Samos, practiced isopsephy, the Greek predecessor of gematria. Pythagoras was a contemporary of the philosophers Anaximander, Anaximenes, and the historian Hecataeus, all of whom lived in Miletus, across the sea from Samos. The Milesian system was in common use by the reign of Alexander the Great (336–323 BCE) and was adopted by other cultures during the subsequent Hellenistic period. It was officially adopted in Egypt during the reign of Ptolemy II Philadelphus (284–246 BCE).
In early biblical texts, numbers were written out in full using Hebrew number words. The first evidence of the use of Hebrew letters as numerals appears during the late Hellenistic period, in 78 BCE. Scholars have identified gematria in the Hebrew Bible, the canon of which was fixed during the Hasmonean dynasty (c. 140 BCE to 37 BCE), though some scholars argue it was not fixed until the second century CE or even later. The Hasmonean king of Judea, Alexander Jannaeus (died 76 BCE) had coins inscribed in Aramaic with the Phoenician alphabet, marking the 20th and 25th years of his reign using the letters K and KE ( and ).
Some old Mishnaic texts may preserve very early usage of this number system, but no surviving written documents exist, and some scholars believe these texts were passed down orally and during the early stages before the Bar Kochba rebellion were never written. Gematria is not known to be found in the Dead Sea scrolls, a vast body of texts from 100 BCE – 100 CE, or in any of the documents found from the Bar-Kochba revolt circa 150 CE.
According to Proclus in his commentary on the "Timaeus" of Plato written in the 5th century, the author Theodorus Asaeus from a century earlier interpreted the word "soul" (ψυχή) based on gematria and an inspection of the graphical aspects of the letters that make up the word. According to Proclus, Theodorus learned these methods from the writings of Numenius of Apamea and Amelius. Proclus rejects these methods by appealing to the arguments against them put forth by the Neoplatonic philosopher Iamblichus. The first argument was that some letters have the same numerical value but opposite meaning. His second argument was that the form of letters changes over the years, and so their graphical qualities cannot hold any deeper meaning. Finally, he puts forth the third argument that when one uses all sorts of methods as addition, subtraction, division, multiplication, and even ratios, the infinite ways in which these can be combined allow virtually any number to be produced to suit any purpose.
Some scholars propose that at least two cases of gematria appear in the New Testament. According to one theory, the reference to the miraculous "catch of 153 fish" in John 21:11 is an application of gematria derived from the name of the spring called 'EGLaIM in Ezekiel 47:10. The appearance of this gematria in John 21:11 has been connected to one of the Dead Sea Scrolls, namely 4Q252, which also applies the same gematria of 153 derived from Ezekiel 47 to state that Noah arrived at Mount Ararat on the 153rd day after the beginning of the flood. Some historians see gematria behind the reference to the number of the name of the Beast in Revelation as 666, which corresponds to the numerical value of the Hebrew transliteration of the Greek name "Neron Kaisar", referring to the 1st century Roman emperor who persecuted the early Christians. Another possible influence on the use of 666 in Revelation goes back to reference to Solomon's intake of 666 talents of gold in 1 Kings 10:14.
Gematria makes several appearances in various Christian and Jewish texts written in the first centuries of the common era. One appearance of gematria in the early Christian period is in the Epistle of Barnabas 9:6–7, which dates to sometime between 70 and 132 CE. There, the 318 servants of Abraham in Genesis 14:14 is used to indicate that Abraham looked forward to the coming of Jesus as the numerical value of some of the letters in the Greek name for Jesus as well as the 't' representing a symbol for the cross also equaled 318. Another example is a Christian interpolation in the Sibylline Oracles, where the symbolic significance of the value of 888 (equal to the numerical value of "Iesous", the Latinized rendering of the Greek version of Jesus' name) is asserted. Irenaeus also heavily criticized the interpretation of letters by the Gnostic Marcus. Because of their association with Gnosticism and the criticisms of Irenaeus as well as Hippolytus of Rome and Epiphanius of Salamis, this form of interpretation never became popular in Christianity—though it does appear in at least some texts. Another two examples can be found in 3 Baruch, a text that may have either been composed by a Jew or a Christian sometime between the 1st and 3rd centuries. In the first example, a snake is stated to consume a cubit of ocean every day, but is unable to ever finish consuming it, because the oceans are also refilled by 360 rivers. The number 360 is given because the numerical value of the Greek word for snake, "δράκων", when transliterated to Hebrew () is 360. In a second example, the number of giants stated to have died during the Deluge is 409,000. The Greek word for 'deluge', "κατακλυσμός", has a numerical value of 409 when transliterated in Hebrew characters, thus leading the author of 3 Baruch to use it for the number of perished giants.
Gematria is often used in Rabbinic literature. One example is that the numerical value of "The Satan" () in Hebrew is 364, and so it was said that the Satan had authority to prosecute Israel for 364 days before his reign ended on the Day of Atonement, an idea which appears in Yoma 20a and Peskita 7a. Yoma 20a states: "Rami bar Ḥama said: The numerological value of the letters that constitute the word HaSatan is three hundred and sixty four: Heh has a value of five, sin has a value of three hundred, tet has a value of nine, and nun has a value of fifty. Three hundred and sixty-four days of the solar year, which is three hundred and sixty-five days long, Satan has license to prosecute." Genesis 14:14 states that Abraham took 318 of his servants to help him rescue some of his kinsmen, which was taken in Peskita 70b to be a reference to Eleazar, whose name has a numerical value of 318.
The total value of the letters of the Islamic Basmala, i.e. the phrase "Bismillah al-Rahman al-Rahim" ("In the name of God, the Most Gracious, the Most Merciful"), according to the standard Abjadi system of numerology, is 786. This number has therefore acquired a significance in folk Islam and Near Eastern folk magic and also appears in many instances of pop-culture, such as its appearance in the 2006 song '786 All is War' by the band Fun-Da-Mental. A recommendation of reciting the basmala 786 times in sequence is recorded in Al-Buni. Sündermann (2006) reports that a contemporary "spiritual healer" from Syria recommends the recitation of the basmala 786 times over a cup of water, which is then to be ingested as medicine. The use of gematria is still pervasive in many parts of Asia and Africa.
Methods of Hebrew gematria.
Standard encoding.
In standard gematria ("mispar hechrechi"), each letter is given a numerical value between 1 and 400, as shown in the following table. In "mispar gadol", the five final letters are given their own values, ranging from 500 to 900. It is possible that this well-known cipher was used to conceal other more hidden ciphers in Jewish texts. For instance, a scribe may discuss a sum using the 'standard gematria' cipher, but may intend the sum to be checked with a different secret cipher.
A mathematical formula for finding a letter's corresponding number in "mispar gadol" is:
formula_0
where "x" is the position of the letter in the language letters index (regular order of letters), and the floor and modulo functions are used.
Vowels.
The value of the Hebrew vowels is not usually counted, but some lesser-known methods include the vowels as well. The most common vowel values are as follows (a less common alternative value, based on the digit sum, is given in parentheses):
Sometimes, the names of the vowels are spelled out and their gematria is calculated using standard methods.
Other methods.
There are many different methods used to calculate the numerical value for the individual Hebrew/Aramaic words, phrases or whole sentences. Gematria is the 29th of 32 hermeneutical rules countenanced by the Rabbis of the Talmud for valid aggadic interpretation of the Torah. More advanced methods are usually used for the most significant Biblical verses, prayers, names of God, etc. These methods include:
Related transformations.
Within the wider topic of gematria are included the various alphabet transformations, where one letter is substituted by another based on a logical scheme:
Most of the above-mentioned methods and ciphers are listed by Rabbi Moshe Cordevero.
Some authors provide lists of as many as 231 various replacement ciphers, related to the 231 mystical Gates of the "Sefer Yetzirah".
Dozens of other far more advanced methods are used in Kabbalistic literature, without any particular names. In Ms. Oxford 1,822, one article lists 75 different forms of gematria. Some known methods are recursive in nature and are reminiscent of graph theory or make a lot of use of combinatorics. Rabbi Elazar Rokeach (born c. 1176 – died 1238) often used multiplication, instead of addition, for the above-mentioned methods. For example, spelling out the letters of a word and then multiplying the squares of each letter value in the resulting string produces very large numbers, in orders of trillions. The spelling process can be applied recursively, until a certain pattern (e.g., all the letters of the word "Talmud") is found; the gematria of the resulting string is then calculated. The same author also used the sums of all possible unique letter combinations, which add up to the value of a given letter. For example, the letter Hei, which has the standard value of 5, can be produced by combining formula_9, formula_10, formula_11, formula_12, formula_13, or formula_14, which adds up to formula_15. Sometimes combinations of repeating letters are not allowed (e.g., formula_14 is valid, but formula_11 is not). The original letter itself can also be viewed as a valid combination.
Variant spellings of some letters can be used to produce sets of different numbers, which can be added up or analyzed separately. Many various complex formal systems and recursive algorithms, based on graph-like structural analysis of the letter names and their relations to each other, modular arithmetic, pattern search and other highly advanced techniques, are found in the "Sefer ha-Malchut" by Rabbi David ha-Levi of the Draa Valley, a Spanish-Moroccan Kabbalist of the 15th–16th century. Rabbi David ha-Levi's methods also consider the numerical values and other properties of the vowels.
Kabbalistic astrology uses some specific methods to determine the astrological influences on a particular person. According to one method, the gematria of the person's name is added to the gematria of his or her mother's name; the result is then divided by 7 and 12. The remainders signify a particular planet and Zodiac sign.
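A minimal sketch of the remainder calculation just described; the example input values are hypothetical, and the mapping from each remainder to a particular planet or Zodiac sign is not specified here.
```python
def astrological_indices(name_value, mother_name_value):
    """Return (remainder mod 7, remainder mod 12) of the combined gematria value."""
    total = name_value + mother_name_value
    return total % 7, total % 12

# Hypothetical example values for the two names:
print(astrological_indices(318, 70))   # -> (3, 4)
```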
Transliterated Hebrew.
Historically, hermetic and esoteric groups of the 19th and 20th centuries in the UK and in France used a transliterated Hebrew cipher with the Latin alphabet. In particular, the transliterated cipher was taught to members of the Hermetic Order of the Golden Dawn. In 1887, S.L. MacGregor Mathers, who was one of the order's founders, published the transliterated cipher in "The Kabbalah Unveiled" in the Mathers table.
As a former member of the Golden Dawn, Aleister Crowley used the transliterated cipher extensively in his writings for his two magical orders the A∴A∴ and Ordo Templi Orientis (O.T.O). Many other occult authors belonging to various esoteric groups have either mentioned the cipher or published it in their books, including Paul Foster Case of the Builders of the Adytum (B.O.T.A).
Use in non-Semitic languages.
Greek.
According to Aristotle (384–322 BCE), isopsephy, an early Milesian system using the Greek alphabet, was part of the Pythagorean tradition, which originated in the 6th century BCE.
Plato (c. 427–347 BCE) offers a discussion in the "Cratylus" involving a view of words and names as referring (more or less accurately) to the "essential nature" of a person or object; this view may have influenced, and is central to, isopsephy.
A sample of graffiti at Pompeii (destroyed under volcanic ash in 79 CE) reads "I love the girl whose name is phi mu epsilon (545)".
Other examples of use in Greek come primarily from the Christian literature. Davies and Allison state that, unlike rabbinic sources, isopsephy is always explicitly stated as being used.
Latin.
During the Renaissance, systems of gematria were devised for the Classical Latin alphabet. There were a number of variations of these which were popular in Europe.
In 1525, Christoph Rudolff included a Classical Latin gematria in his work "Nimble and beautiful calculation via the artful rules of algebra [which] are so commonly called "coss"":
A=1 B=2 C=3 D=4 E=5 F=6 G=7 H=8 I=9 K=10 L=11 M=12
N=13 O=14 P=15 Q=16 R=17 S=18 T=19 U=20 W=21 X=22 Y=23 Z=24
At the beginning of the "Apocalypsis in Apocalypsin" (1532), the German monk Michael Stifel (also known as Steifel) describes the natural order and trigonal number alphabets, claiming to have invented the latter. He used the trigonal alphabet to interpret the prophecy in the Biblical Book of Revelation, and predicted the world would end at 8am on October 19, 1533. The official Lutheran reaction to Steifel's prophecy shows that this type of activity was not welcome. Belief in the power of numbers was unacceptable in reformed circles, and gematria was not part of the reformation agenda.
An analogue of the Greek system of isopsephy using the Latin alphabet appeared in 1583, in the works of the French poet Étienne Tabourot. This cipher and variations of it were published or referred to in the major work of Italian Pietro Bongo "Numerorum Mysteria," and a 1651 work by Georg Philipp Harsdörffer, and by Athanasius Kircher in 1665, and in a 1683 volume of "Cabbalologia" by Johann Henning, where it was simply referred to as the "1683 alphabet". It was mentioned in the work of Johann Christoph Männling "The European Helicon or Muse Mountain", in 1704, and it was also called the "Alphabetum Cabbalisticum Vulgare" in "Die verliebte und galante Welt" by Christian Friedrich Hunold in 1707. It was used by Leo Tolstoy in his 1865 work "War and Peace" to identify Napoleon with the number of the Beast.
English.
English Qabalah refers to several different systems of mysticism related to Hermetic Qabalah that interpret the letters of the English alphabet via an assigned set of numerological significances. The first system of English gematria was used by the poet John Skelton in 1523 in his poem "The Garland of Laurel".
The Agrippa code was used with English as well as Latin. It was defined by Heinrich Cornelius Agrippa in 1532, in his work "De Occulta Philosophia". Agrippa based his system on the order of the Classical Latin alphabet using a ranked valuation as in isopsephy, appending the four additional letters in use at the time after Z, including J (600) and U (700), which were still considered letter variants. Agrippa was the mentor of Welsh magician John Dee, who makes reference to the Agrippa code in Theorem XVI of his 1564 book, "Monas Hieroglyphica".
Since the death of Aleister Crowley (1875–1947), a number of people have proposed numerical correspondences for English gematria in order to achieve a deeper understanding of Crowley's "The Book of the Law" (1904). One such system, the English Qaballa, was discovered by English magician James Lees on November 26, 1976. The founding of Lees' magical order (O∴A∴A∴) in 1974 and his discovery of EQ are chronicled in "All This and a Book" by Cath Thompson.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x) = 10^{\\left \\lfloor \\frac{x-1}{9} \\right \\rfloor} \\times ((x-1 \\mod 9) + 1),"
},
{
"math_id": 1,
"text": "1+30+80=111"
},
{
"math_id": 2,
"text": "1 + (1 + 8) + (1 + 8 + 4) = 23"
},
{
"math_id": 3,
"text": "1+8+4=13"
},
{
"math_id": 4,
"text": "(1 + 30 + 80) = 111"
},
{
"math_id": 5,
"text": "(2 + 10 + 400) = 412"
},
{
"math_id": 6,
"text": "1 + 40 + 400 => 1 + 4 + 4 = 9"
},
{
"math_id": 7,
"text": "(1 + 4 + 4) + (1 + 4 + 4) = 18 <=> 1 + 8 = 9"
},
{
"math_id": 8,
"text": "(1 + 4 + 4) + (1 + 4 + 4) + (1 + 4 + 4) = 27 <=> 2 + 7 = 9"
},
{
"math_id": 9,
"text": "1 + 1 + 1 + 1 + 1"
},
{
"math_id": 10,
"text": "2 + 1 + 1 + 1"
},
{
"math_id": 11,
"text": "3 + 1 + 1"
},
{
"math_id": 12,
"text": "4+1"
},
{
"math_id": 13,
"text": "2 + 2 + 1"
},
{
"math_id": 14,
"text": "2+3"
},
{
"math_id": 15,
"text": "30"
}
] | https://en.wikipedia.org/wiki?curid=12541 |
12542581 | Cut point | In topology, a cut-point is a point of a connected space such that its removal causes the resulting space to be disconnected. If the removal of a point does not disconnect the space, that point is called a non-cut point.
For example, every point of a line is a cut-point, while no point of a circle is a cut-point.
Cut-points are useful in determining whether two connected spaces are homeomorphic by counting the number of cut-points in each space. If two spaces have different numbers of cut-points, they are not homeomorphic. A classic example is using cut-points to show that lines and circles are not homeomorphic.
Cut-points are also useful in the characterization of topological continua, a class of spaces which combine the properties of compactness and connectedness and include many familiar spaces such as the unit interval, the circle, and the torus.
Definition.
Formal definitions.
A cut-point of a connected T1 topological space "X" is a point "p" in "X" such that "X" - {"p"} is not connected. A point which is not a cut-point is called a non-cut point.
A non-empty connected topological space X is a cut-point space if every point in X is a cut point of X.
Irreducible cut-point spaces.
Definitions.
A cut-point space is irreducible if no proper subset of it is a cut-point space.
The Khalimsky line: Let formula_0 be the set of the integers and formula_1 where formula_2 is a basis for a topology on formula_0. The Khalimsky line is the set formula_0 endowed with this topology. It is a cut-point space: for instance, removing the point 0 splits it into the open sets of negative and positive integers. Moreover, it is irreducible.
See also.
Cut point (graph theory) | [
{
"math_id": 0,
"text": "\\mathbb{Z}"
},
{
"math_id": 1,
"text": "B=\\{ \\{2i-1,2i,2i+1\\} : i \\in \\mathbb{Z} \\} \\cup \\{ \\{2i+1\\} : i \\in \\mathbb{Z}\\}"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "X"
}
] | https://en.wikipedia.org/wiki?curid=12542581 |
12543 | Groupoid | Category where every morphism is invertible; generalization of a group
In mathematics, especially in category theory and homotopy theory, a groupoid (less often Brandt groupoid or virtual group) generalises the notion of group in several equivalent ways. A groupoid can be seen as a:
In the presence of dependent typing, a category in general can be viewed as a typed monoid, and similarly, a groupoid can be viewed as simply a typed group. The morphisms take one from one object to another, and form a dependent family of types, thus morphisms might be typed formula_0, formula_1, say. Composition is then a total function: formula_2, so that formula_3.
Special cases include:
Groupoids are often used to reason about geometrical objects such as manifolds. Heinrich Brandt (1927) introduced groupoids implicitly via Brandt semigroups.
Definitions.
Algebraic.
A groupoid can be viewed as an algebraic structure consisting of a set with a binary partial function.
Precisely, it is a non-empty set formula_4 with a unary operation formula_5 and a partial function formula_6. Here * is not a binary operation because it is not necessarily defined for all pairs of elements of formula_4. The precise conditions under which formula_7 is defined are not articulated here and vary by situation.
The operations formula_8 and −1 have the following axiomatic properties: For all formula_9, formula_10, and formula_11 in formula_4,
Two easy and convenient properties follow from these axioms:
Category theoretic.
A groupoid is a small category in which every morphism is an isomorphism, i.e., invertible. More explicitly, a groupoid "G" is a set "G"0 of "objects" with
If "f" is an element of "G"("x","y") then "x" is called the source of "f", written "s"("f"), and "y" is called the target of "f", written "t"("f").
A groupoid "G" is sometimes denoted as formula_30, where formula_31 is the set of all morphisms, and the two arrows formula_32 represent the source and the target.
More generally, one can consider a groupoid object in an arbitrary category admitting finite fiber products.
Comparing the definitions.
The algebraic and category-theoretic definitions are equivalent, as we now show. Given a groupoid in the category-theoretic sense, let "G" be the disjoint union of all of the sets "G"("x","y") (i.e. the sets of morphisms from "x" to "y"). Then formula_33 and formula_34 become partial operations on "G", and formula_34 will in fact be defined everywhere. We define ∗ to be formula_33 and −1 to be formula_34, which gives a groupoid in the algebraic sense. Explicit reference to "G"0 (and hence to formula_35) can be dropped.
Conversely, given a groupoid "G" in the algebraic sense, define an equivalence relation formula_36 on its elements by
formula_37 iff "a" ∗ "a"−1 = "b" ∗ "b"−1. Let "G"0 be the set of equivalence classes of formula_36, i.e. formula_38. Denote "a" ∗ "a"−1 by formula_39 if formula_40 with formula_41.
Now define formula_42 as the set of all elements "f" such that formula_43 exists. Given formula_44 and formula_45 their composite is defined as formula_46. To see that this is well defined, observe that since formula_47 and formula_48 exist, so does formula_49. The identity morphism on "x" is then formula_39, and the category-theoretic inverse of "f" is "f"−1.
Sets in the definitions above may be replaced with classes, as is generally the case in category theory.
Vertex groups and orbits.
Given a groupoid "G", the vertex groups or isotropy groups or object groups in "G" are the subsets of the form "G"("x","x"), where "x" is any object of "G". It follows easily from the axioms above that these are indeed groups, as every pair of elements is composable and inverses are in the same vertex group.
The orbit of a groupoid "G" at a point formula_50 is given by the set formula_51 containing every point that can be joined to x by a morphism in G. If two points formula_52 and formula_53 are in the same orbits, their vertex groups formula_54 and formula_55 are isomorphic: if formula_56 is any morphism from formula_52 to formula_53, then the isomorphism is given by the mapping formula_57.
Orbits form a partition of the set X, and a groupoid is called transitive if it has only one orbit (equivalently, if it is connected as a category). In that case, all the vertex groups are isomorphic (on the other hand, this is not a sufficient condition for transitivity; see the section below for counterexamples).
Subgroupoids and morphisms.
A subgroupoid of formula_58 is a subcategory formula_59 that is itself a groupoid. It is called wide or full if it is wide or full as a subcategory, i.e., respectively, if formula_60 or formula_61 for every formula_62.
A groupoid morphism is simply a functor between two (category-theoretic) groupoids.
Particular kinds of morphisms of groupoids are of interest. A morphism formula_63 of groupoids is called a fibration if for each object formula_52 of formula_64 and each morphism formula_10 of formula_65 starting at formula_66 there is a morphism formula_67 of formula_64 starting at formula_52 such that formula_68. A fibration is called a covering morphism or covering of groupoids if further such an formula_67 is unique. The covering morphisms of groupoids are especially useful because they can be used to model covering maps of spaces.
It is also true that the category of covering morphisms of a given groupoid formula_65 is equivalent to the category of actions of the groupoid formula_65 on sets.
Examples.
Topology.
Given a topological space formula_69, let formula_70 be the set formula_69. The morphisms from the point formula_71 to the point formula_72 are equivalence classes of continuous paths from formula_71 to formula_72, with two paths being equivalent if they are homotopic.
Two such morphisms are composed by first following the first path, then the second; the homotopy equivalence guarantees that this composition is associative. This groupoid is called the fundamental groupoid of formula_69, denoted formula_73 (or sometimes, formula_74). The usual fundamental group formula_75 is then the vertex group for the point formula_52.
The orbits of the fundamental groupoid formula_73 are the path-connected components of formula_69. Accordingly, the fundamental groupoid of a path-connected space is transitive, and we recover the known fact that the fundamental groups at any base point are isomorphic. Moreover, in this case, the fundamental groupoid and the fundamental groups are equivalent as categories (see the section below for the general theory).
An important extension of this idea is to consider the fundamental groupoid formula_76 where formula_77 is a chosen set of "base points". Here formula_76 is a (wide) subgroupoid of formula_73, where one considers only paths whose endpoints belong to formula_78. The set formula_78 may be chosen according to the geometry of the situation at hand.
Equivalence relation.
If formula_69 is a setoid, i.e. a set with an equivalence relation formula_36, then a groupoid "representing" this equivalence relation can be formed as follows:
The vertex groups of this groupoid are always trivial; moreover, this groupoid is in general not transitive and its orbits are precisely the equivalence classes. There are two extreme examples:
Čech groupoid.
A Čech groupoid is a special kind of groupoid associated to an equivalence relation given by an open cover formula_92 of some manifold formula_69. Its objects are given by the disjoint union
formula_93,
and its arrows are the intersections
formula_94.
The source and target maps are then given by the induced maps formula_95 and the inclusion map formula_96, giving the structure of a groupoid. In fact, this can be further extended by setting formula_97 as the formula_98-iterated fiber product, where the formula_99 represents formula_98-tuples of composable arrows. The structure map of the fiber product is implicitly the target map, since formula_100 is a cartesian diagram where the maps to formula_101 are the target maps. This construction can be seen as a model for some ∞-groupoids. Also, another artifact of this construction is that k-cocycles formula_102 for some constant sheaf of abelian groups can be represented as a function formula_103, giving an explicit representation of cohomology classes.
Group action.
If the group formula_4 acts on the set formula_69, then we can form the action groupoid (or transformation groupoid) representing this group action as follows:
More explicitly, the "action groupoid" is a small category with formula_106 and formula_107 and with source and target maps formula_108 and formula_109. It is often denoted formula_110 (or formula_111 for a right action). Multiplication (or composition) in the groupoid is then formula_112 which is defined provided formula_113.
For formula_52 in formula_69, the vertex group consists of those formula_114 with formula_115, which is just the isotropy subgroup at formula_52 for the given action (which is why vertex groups are also called isotropy groups). Similarly, the orbits of the action groupoid are the orbit of the group action, and the groupoid is transitive if and only if the group action is transitive.
Another way to describe formula_4-sets is the functor category formula_116, where formula_117 is the groupoid (category) with one element and isomorphic to the group formula_4. Indeed, every functor formula_118 of this category defines a set formula_119 and for every formula_104 in formula_4 (i.e. for every morphism in formula_117) induces a bijection formula_120 : formula_121. The categorical structure of the functor formula_118 assures us that formula_118 defines a formula_4-action on the set formula_4. The (unique) representable functor formula_118 : formula_122 is the Cayley representation of formula_4. In fact, this functor is isomorphic to formula_123 and so sends formula_124 to the set formula_125 which is by definition the "set" formula_4 and the morphism formula_104 of formula_117 (i.e. the element formula_104 of formula_4) to the permutation formula_120 of the set formula_4. We deduce from the Yoneda embedding that the group formula_4 is isomorphic to the group formula_126, a subgroup of the group of permutations of formula_4.
Finite set.
Consider the group action of formula_127 on the finite set formula_128 which takes each number to its negative, so formula_129 and formula_130. The quotient groupoid formula_131 is the set of equivalence classes from this group action formula_132, and formula_133 has a group action of formula_127 on it.
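A small Python sketch of this example, computing the orbits and the isotropy (vertex) groups of the action groupoid directly from the definitions above; writing Z/2 multiplicatively as {1, −1} is a representational choice made only for this sketch:
X = [-2, -1, 0, 1, 2]
G = [1, -1]                                 # Z/2 acting by sign flip
def act(g, x):
    return g * x
morphisms = [(g, x) for g in G for x in X]  # arrows (g, x) with source x and target g*x
orbits = {frozenset(act(g, x) for g in G) for x in X}
stabilizers = {x: [g for g in G if act(g, x) == x] for x in X}
print(len(morphisms))                       # 10 arrows in total
print(sorted(map(sorted, orbits)))          # [[-2, 2], [-1, 1], [0]]: three orbits
print(stabilizers[0], stabilizers[1])       # [1, -1] at the fixed point 0, [1] elsewhere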
Quotient variety.
Any finite group formula_134 that maps to formula_135 gives a group action on the affine space formula_136 (since this is the group of automorphisms). Then, a quotient groupoid can be of the form formula_137, which has one point with stabilizer formula_134 at the origin. Examples like these form the basis for the theory of orbifolds. Another commonly studied family of orbifolds are weighted projective spaces formula_138 and subspaces of them, such as Calabi–Yau orbifolds.
Fiber product of groupoids.
Given a diagram of groupoids with groupoid morphisms
formula_139
where formula_140 and formula_141, we can form the groupoid formula_142 whose objects are triples formula_143, where formula_144, formula_145, and formula_146 in formula_147. Morphisms can be defined as a pair of morphisms formula_148 where formula_149 and formula_150 such that for triples formula_151, there is a commutative diagram in formula_147 of formula_152, formula_153 and the formula_154.
Homological algebra.
A two term complex
formula_155
of objects in a concrete Abelian category can be used to form a groupoid. It has as objects the set formula_156 and as arrows the set formula_157; the source morphism is just the projection onto formula_156 while the target morphism is the addition of projection onto formula_158 composed with formula_159 and projection onto formula_156. That is, given formula_160, we have
formula_161
Of course, if the abelian category is the category of coherent sheaves on a scheme, then this construction can be used to form a presheaf of groupoids.
Puzzles.
While puzzles such as the Rubik's Cube can be modeled using group theory (see Rubik's Cube group), certain puzzles are better modeled as groupoids.
The transformations of the fifteen puzzle form a groupoid (not a group, as not all moves can be composed). This groupoid acts on configurations.
Mathieu groupoid.
The Mathieu groupoid is a groupoid introduced by John Horton Conway acting on 13 points such that the elements fixing a point form a copy of the Mathieu group M12.
Relation to groups.
If a groupoid has only one object, then the set of its morphisms forms a group. Using the algebraic definition, such a groupoid is literally just a group. Many concepts of group theory generalize to groupoids, with the notion of functor replacing that of group homomorphism.
Every transitive/connected groupoid - that is, as explained above, one in which any two objects are connected by at least one morphism - is isomorphic to an action groupoid (as defined above) formula_162. By transitivity, there will only be one orbit under the action.
Note that the isomorphism just mentioned is not unique, and there is no natural choice. Choosing such an isomorphism for a transitive groupoid essentially amounts to picking one object formula_163, a group isomorphism formula_164 from formula_165 to formula_4, and for each formula_52 other than formula_163, a morphism in formula_4 from formula_163 to formula_52.
If a groupoid is not transitive, then it is isomorphic to a disjoint union of groupoids of the above type, also called its connected components (possibly with different groups formula_4 and sets formula_69 for each connected component).
In category-theoretic terms, each connected component of a groupoid is equivalent (but not isomorphic) to a groupoid with a single object, that is, a single group. Thus any groupoid is equivalent to a multiset of unrelated groups. In other words, for equivalence instead of isomorphism, one does not need to specify the sets formula_69, but only the groups formula_166 For example,
The collapse of a groupoid into a mere collection of groups loses some information, even from a category-theoretic point of view, because it is not natural. Thus when groupoids arise in terms of other structures, as in the above examples, it can be helpful to maintain the entire groupoid. Otherwise, one must choose a way to view each formula_54 in terms of a single group, and this choice can be arbitrary. In the example from topology, one would have to make a coherent choice of paths (or equivalence classes of paths) from each point formula_71 to each point formula_72 in the same path-connected component.
As a more illuminating example, the classification of groupoids with one endomorphism does not reduce to purely group theoretic considerations. This is analogous to the fact that the classification of vector spaces with one endomorphism is nontrivial.
Morphisms of groupoids come in more kinds than those of groups: we have, for example, fibrations, covering morphisms, universal morphisms, and quotient morphisms. Thus a subgroup formula_167 of a group formula_4 yields an action of formula_4 on the set of cosets of formula_167 in formula_4 and hence a covering morphism formula_71 from, say, formula_168 to formula_4, where formula_168 is a groupoid with vertex groups isomorphic to formula_167. In this way, presentations of the group formula_4 can be "lifted" to presentations of the groupoid formula_168, and this is a useful way of obtaining information about presentations of the subgroup formula_167. For further information, see the books by Higgins and by Brown in the References.
Category of groupoids.
The category whose objects are groupoids and whose morphisms are groupoid morphisms is called the groupoid category, or the category of groupoids, and is denoted by Grpd.
The category Grpd is, like the category of small categories, Cartesian closed: for any groupoids formula_169 we can construct a groupoid formula_170 whose objects are the morphisms formula_171 and whose arrows are the natural equivalences of morphisms. Thus if formula_172 are just groups, then such arrows are the conjugacies of morphisms. The main result is that for any groupoids formula_173 there is a natural bijection
formula_174
This result is of interest even if all the groupoids formula_173 are just groups.
Another important property of Grpd is that it is both complete and cocomplete.
Relation to Cat.
The inclusion formula_175 has both a left and a right adjoint:
formula_176
formula_177
Here, formula_178 denotes the localization of a category that inverts every morphism, and formula_179 denotes the subcategory of all isomorphisms.
Relation to sSet.
The nerve functor formula_180 embeds Grpd as a full subcategory of the category of simplicial sets. The nerve of a groupoid is always a Kan complex.
The nerve has a left adjoint
formula_181
Here, formula_73 denotes the fundamental groupoid of the simplicial set X.
Groupoids in Grpd.
There is an additional structure which can be derived from groupoids internal to the category of groupoids: double groupoids. Because Grpd is a 2-category, these objects form a 2-category instead of a 1-category since there is extra structure. Essentially, these are groupoids formula_182 with functors formula_183 and an embedding given by an identity functor formula_184. One way to think about these 2-groupoids is that they contain objects, morphisms, and squares which can compose together vertically and horizontally. For example, given squares formula_185 and formula_186 with formula_9 the same morphism, they can be vertically conjoined, giving a diagram formula_187 which can be converted into another square by composing the vertical arrows. There is a similar composition law for horizontal attachments of squares.
Groupoids with geometric structures.
When studying geometrical objects, the arising groupoids often carry a topology, turning them into topological groupoids, or even some differentiable structure, turning them into Lie groupoids. These last objects can be also studied in terms of their associated Lie algebroids, in analogy to the relation between Lie groups and Lie algebras.
Groupoids arising from geometry often possess further structures which interact with the groupoid multiplication. For instance, in Poisson geometry one has the notion of a symplectic groupoid, which is a Lie groupoid endowed with a compatible symplectic form. Similarly, one can have groupoids with a compatible Riemannian metric, or complex structure, etc.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g:A \\rightarrow B"
},
{
"math_id": 1,
"text": "h:B \\rightarrow C"
},
{
"math_id": 2,
"text": "\\circ : (B \\rightarrow C) \\rightarrow (A \\rightarrow B) \\rightarrow A \\rightarrow C "
},
{
"math_id": 3,
"text": "h \\circ g : A \\rightarrow C "
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "{}^{-1}:G\\to G,"
},
{
"math_id": 6,
"text": "*:G\\times G \\rightharpoonup G"
},
{
"math_id": 7,
"text": "*"
},
{
"math_id": 8,
"text": "\\ast"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "a*b"
},
{
"math_id": 13,
"text": "b*c"
},
{
"math_id": 14,
"text": "(a * b) * c"
},
{
"math_id": 15,
"text": "a * (b * c)"
},
{
"math_id": 16,
"text": "a^{-1} * a"
},
{
"math_id": 17,
"text": "a*{a^{-1}}"
},
{
"math_id": 18,
"text": "a*b*{b^{-1}} = a"
},
{
"math_id": 19,
"text": "{a^{-1}} * a * b = b"
},
{
"math_id": 20,
"text": "(a^{-1})^{-1} = a"
},
{
"math_id": 21,
"text": "(a*b)^{-1} = b^{-1} * a^{-1}"
},
{
"math_id": 22,
"text": "\\mathrm{id}_x"
},
{
"math_id": 23,
"text": "\\mathrm{comp}_{x,y,z} : G(y, z)\\times G(x, y) \\rightarrow G(x, z): (g, f) \\mapsto gf"
},
{
"math_id": 24,
"text": "\\mathrm{inv}: G(x, y) \\rightarrow G(y, x): f \\mapsto f^{-1}"
},
{
"math_id": 25,
"text": "f\\ \\mathrm{id}_x = f"
},
{
"math_id": 26,
"text": "\\mathrm{id}_y\\ f = f"
},
{
"math_id": 27,
"text": "(h g) f = h (g f)"
},
{
"math_id": 28,
"text": "f f^{-1} = \\mathrm{id}_y"
},
{
"math_id": 29,
"text": "f^{-1} f = \\mathrm{id}_x"
},
{
"math_id": 30,
"text": "G_1 \\rightrightarrows G_0"
},
{
"math_id": 31,
"text": "G_1"
},
{
"math_id": 32,
"text": "G_1 \\to G_0"
},
{
"math_id": 33,
"text": "\\mathrm{comp}"
},
{
"math_id": 34,
"text": "\\mathrm{inv}"
},
{
"math_id": 35,
"text": "\\mathrm{id}"
},
{
"math_id": 36,
"text": "\\sim"
},
{
"math_id": 37,
"text": "a \\sim b"
},
{
"math_id": 38,
"text": "G_0:=G/\\!\\!\\sim"
},
{
"math_id": 39,
"text": "1_x"
},
{
"math_id": 40,
"text": "a\\in G"
},
{
"math_id": 41,
"text": "x\\in G_0"
},
{
"math_id": 42,
"text": "G(x, y)"
},
{
"math_id": 43,
"text": "1_x*f*1_y"
},
{
"math_id": 44,
"text": "f \\in G(x,y)"
},
{
"math_id": 45,
"text": "g \\in G(y, z),"
},
{
"math_id": 46,
"text": "gf:=f*g \\in G(x,z)"
},
{
"math_id": 47,
"text": "(1_x*f)*1_y"
},
{
"math_id": 48,
"text": "1_y*(g*1_z)"
},
{
"math_id": 49,
"text": "(1_x*f*1_y)*(g*1_z)=f*g"
},
{
"math_id": 50,
"text": "x \\in X"
},
{
"math_id": 51,
"text": "s(t^{-1}(x)) \\subseteq X"
},
{
"math_id": 52,
"text": "x"
},
{
"math_id": 53,
"text": "y"
},
{
"math_id": 54,
"text": "G(x)"
},
{
"math_id": 55,
"text": "G(y)"
},
{
"math_id": 56,
"text": "f"
},
{
"math_id": 57,
"text": "g\\to fgf^{-1}"
},
{
"math_id": 58,
"text": "G \\rightrightarrows X"
},
{
"math_id": 59,
"text": "H \\rightrightarrows Y"
},
{
"math_id": 60,
"text": "X = Y"
},
{
"math_id": 61,
"text": "G(x,y)=H(x,y)"
},
{
"math_id": 62,
"text": "x,y \\in Y"
},
{
"math_id": 63,
"text": "p: E \\to B"
},
{
"math_id": 64,
"text": "E"
},
{
"math_id": 65,
"text": "B"
},
{
"math_id": 66,
"text": "p(x)"
},
{
"math_id": 67,
"text": "e"
},
{
"math_id": 68,
"text": "p(e)=b"
},
{
"math_id": 69,
"text": "X"
},
{
"math_id": 70,
"text": "G_0"
},
{
"math_id": 71,
"text": "p"
},
{
"math_id": 72,
"text": "q"
},
{
"math_id": 73,
"text": "\\pi_1(X)"
},
{
"math_id": 74,
"text": "\\Pi_1(X)"
},
{
"math_id": 75,
"text": "\\pi_1(X,x)"
},
{
"math_id": 76,
"text": "\\pi_1(X,A)"
},
{
"math_id": 77,
"text": "A\\subset X"
},
{
"math_id": 78,
"text": "A"
},
{
"math_id": 79,
"text": "(y,x)"
},
{
"math_id": 80,
"text": "x\\sim y"
},
{
"math_id": 81,
"text": "(z,y)"
},
{
"math_id": 82,
"text": "(z,x)"
},
{
"math_id": 83,
"text": "X \\times X"
},
{
"math_id": 84,
"text": "s = t = id_X"
},
{
"math_id": 85,
"text": "\\{x\\}"
},
{
"math_id": 86,
"text": "f: X_0 \\to Y"
},
{
"math_id": 87,
"text": "X_0\\times_YX_0 \\subset X_0\\times X_0"
},
{
"math_id": 88,
"text": "Y"
},
{
"math_id": 89,
"text": "X_0"
},
{
"math_id": 90,
"text": "X_1 = X_0\\times_YX_0"
},
{
"math_id": 91,
"text": "X_1 \\rightrightarrows X_0"
},
{
"math_id": 92,
"text": "\\mathcal{U} = \\{U_i\\}_{i\\in I}"
},
{
"math_id": 93,
"text": "\\mathcal{G}_0 = \\coprod U_i"
},
{
"math_id": 94,
"text": "\\mathcal{G}_1 = \\coprod U_{ij}"
},
{
"math_id": 95,
"text": "\\begin{align}\ns = \\phi_j: U_{ij} \\to U_j\\\\\nt = \\phi_i: U_{ij} \\to U_i\n\\end{align}"
},
{
"math_id": 96,
"text": "\\varepsilon: U_i \\to U_{ii}"
},
{
"math_id": 97,
"text": "\\mathcal{G}_n = \\mathcal{G}_1\\times_{\\mathcal{G}_0} \\cdots \\times_{\\mathcal{G}_0}\\mathcal{G}_1"
},
{
"math_id": 98,
"text": "n"
},
{
"math_id": 99,
"text": "\\mathcal{G}_n"
},
{
"math_id": 100,
"text": "\\begin{matrix}\nU_{ijk} & \\to & U_{ij} \\\\\n\\downarrow & & \\downarrow \\\\\nU_{ik} & \\to & U_{i}\n\\end{matrix}"
},
{
"math_id": 101,
"text": "U_i"
},
{
"math_id": 102,
"text": "[\\sigma] \\in \\check{H}^k(\\mathcal{U},\\underline{A})"
},
{
"math_id": 103,
"text": "\\sigma:\\coprod U_{i_1\\cdots i_k} \\to A"
},
{
"math_id": 104,
"text": "g"
},
{
"math_id": 105,
"text": "gx = y"
},
{
"math_id": 106,
"text": "\\mathrm{ob}(C)=X"
},
{
"math_id": 107,
"text": "\\mathrm{hom}(C)=G\\times X"
},
{
"math_id": 108,
"text": "s(g,x) = x"
},
{
"math_id": 109,
"text": "t(g,x) = gx"
},
{
"math_id": 110,
"text": "G \\ltimes X"
},
{
"math_id": 111,
"text": "X\\rtimes G"
},
{
"math_id": 112,
"text": "(h,y)(g,x) = (hg,x)"
},
{
"math_id": 113,
"text": "y=gx"
},
{
"math_id": 114,
"text": "(g,x)"
},
{
"math_id": 115,
"text": "gx=x"
},
{
"math_id": 116,
"text": "[\\mathrm{Gr},\\mathrm{Set}]"
},
{
"math_id": 117,
"text": "\\mathrm{Gr}"
},
{
"math_id": 118,
"text": "F"
},
{
"math_id": 119,
"text": "X=F(\\mathrm{Gr})"
},
{
"math_id": 120,
"text": "F_g"
},
{
"math_id": 121,
"text": "X\\to X"
},
{
"math_id": 122,
"text": "\\mathrm{Gr} \\to \\mathrm{Set}"
},
{
"math_id": 123,
"text": "\\mathrm{Hom}(\\mathrm{Gr},-)"
},
{
"math_id": 124,
"text": "\\mathrm{ob}(\\mathrm{Gr})"
},
{
"math_id": 125,
"text": "\\mathrm{Hom}(\\mathrm{Gr},\\mathrm{Gr})"
},
{
"math_id": 126,
"text": "\\{F_g\\mid g\\in G\\}"
},
{
"math_id": 127,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 128,
"text": "X = \\{-2, -1, 0, 1, 2\\}"
},
{
"math_id": 129,
"text": "-2 \\mapsto 2"
},
{
"math_id": 130,
"text": "1 \\mapsto -1"
},
{
"math_id": 131,
"text": "[X/G]"
},
{
"math_id": 132,
"text": "\\{[0],[1],[2]\\}"
},
{
"math_id": 133,
"text": "[0]"
},
{
"math_id": 134,
"text": "\nG\n"
},
{
"math_id": 135,
"text": "\nGL(n)\n"
},
{
"math_id": 136,
"text": "\n\\mathbb{A}^n\n"
},
{
"math_id": 137,
"text": "\n[\\mathbb{A}^n/G]\n"
},
{
"math_id": 138,
"text": "\\mathbb{P}(n_1,\\ldots, n_k)"
},
{
"math_id": 139,
"text": "\n\\begin{align}\n & & X \\\\\n & & \\downarrow \\\\\nY &\\rightarrow & Z \n\\end{align}\n"
},
{
"math_id": 140,
"text": "\nf:X\\to Z\n"
},
{
"math_id": 141,
"text": "\ng:Y\\to Z\n"
},
{
"math_id": 142,
"text": "\nX\\times_ZY\n"
},
{
"math_id": 143,
"text": "\n(x,\\phi,y)\n"
},
{
"math_id": 144,
"text": "\nx \\in \\text{Ob}(X)\n"
},
{
"math_id": 145,
"text": "\ny \\in \\text{Ob}(Y)\n"
},
{
"math_id": 146,
"text": "\n\\phi: f(x) \\to g(y)\n"
},
{
"math_id": 147,
"text": "\nZ\n"
},
{
"math_id": 148,
"text": "\n(\\alpha,\\beta)\n"
},
{
"math_id": 149,
"text": "\n\\alpha: x \\to x'\n"
},
{
"math_id": 150,
"text": "\n\\beta: y \\to y'\n"
},
{
"math_id": 151,
"text": "\n(x,\\phi,y), (x',\\phi',y')\n"
},
{
"math_id": 152,
"text": "\nf(\\alpha):f(x) \\to f(x')\n"
},
{
"math_id": 153,
"text": "\ng(\\beta):g(y) \\to g(y')\n"
},
{
"math_id": 154,
"text": "\n\\phi,\\phi'\n"
},
{
"math_id": 155,
"text": "\nC_1 \\overset{d}{\\rightarrow}C_0\n"
},
{
"math_id": 156,
"text": "C_0"
},
{
"math_id": 157,
"text": "C_1\\oplus C_0"
},
{
"math_id": 158,
"text": "C_1"
},
{
"math_id": 159,
"text": "d"
},
{
"math_id": 160,
"text": "c_1 + c_0 \\in C_1\\oplus C_0"
},
{
"math_id": 161,
"text": "\nt(c_1 + c_0) = d(c_1) + c_0.\n"
},
{
"math_id": 162,
"text": "(G, X)"
},
{
"math_id": 163,
"text": "x_0"
},
{
"math_id": 164,
"text": "h"
},
{
"math_id": 165,
"text": "G(x_0)"
},
{
"math_id": 166,
"text": "G."
},
{
"math_id": 167,
"text": "H"
},
{
"math_id": 168,
"text": "K"
},
{
"math_id": 169,
"text": "H,K"
},
{
"math_id": 170,
"text": "\\operatorname{GPD}(H,K)"
},
{
"math_id": 171,
"text": " H \\to K "
},
{
"math_id": 172,
"text": " H,K "
},
{
"math_id": 173,
"text": " G,H,K "
},
{
"math_id": 174,
"text": "\\operatorname{Grpd}(G \\times H, K) \\cong \\operatorname{Grpd}(G, \\operatorname{GPD}(H,K))."
},
{
"math_id": 175,
"text": "i : \\mathbf{Grpd} \\to \\mathbf{Cat}"
},
{
"math_id": 176,
"text": " \\hom_{\\mathbf{Grpd}}(C[C^{-1}], G) \\cong \\hom_{\\mathbf{Cat}}(C, i(G)) "
},
{
"math_id": 177,
"text": " \\hom_{\\mathbf{Cat}}(i(G), C) \\cong \\hom_{\\mathbf{Grpd}}(G, \\mathrm{Core}(C)) "
},
{
"math_id": 178,
"text": "C[C^{-1}]"
},
{
"math_id": 179,
"text": "\\mathrm{Core}(C)"
},
{
"math_id": 180,
"text": "N : \\mathbf{Grpd} \\to \\mathbf{sSet}"
},
{
"math_id": 181,
"text": " \\hom_{\\mathbf{Grpd}}(\\pi_1(X), G) \\cong \\hom_{\\mathbf{sSet}}(X, N(G)) "
},
{
"math_id": 182,
"text": "\\mathcal{G}_1,\\mathcal{G}_0"
},
{
"math_id": 183,
"text": "s,t: \\mathcal{G}_1 \\to \\mathcal{G}_0"
},
{
"math_id": 184,
"text": "i:\\mathcal{G}_0 \\to\\mathcal{G}_1"
},
{
"math_id": 185,
"text": "\\begin{matrix}\n\\bullet & \\to & \\bullet \\\\\n\\downarrow & & \\downarrow \\\\\n\\bullet & \\xrightarrow{a} & \\bullet\n\\end{matrix}\n"
},
{
"math_id": 186,
"text": "\\begin{matrix}\n\\bullet & \\xrightarrow{a} & \\bullet \\\\\n\\downarrow & & \\downarrow \\\\\n\\bullet & \\to & \\bullet\n\\end{matrix}"
},
{
"math_id": 187,
"text": "\\begin{matrix}\n\\bullet & \\to & \\bullet \\\\\n\\downarrow & & \\downarrow \\\\\n\\bullet & \\xrightarrow{a} & \\bullet \\\\\n\\downarrow & & \\downarrow \\\\\n\\bullet & \\to & \\bullet\n\\end{matrix}"
}
] | https://en.wikipedia.org/wiki?curid=12543 |
1254323 | Brent's method | Root-finding algorithm
In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker. Consequently, the method is also known as the Brent–Dekker method.
Modern improvements on Brent's method include Chandrupatla's method, which is simpler and faster for functions that are flat around their roots; Ridders' method, which performs exponential interpolation instead of quadratic, providing a simpler closed formula for the iterations; and the ITP method, which is a hybrid between regula falsi and bisection that achieves optimal worst-case and asymptotic guarantees.
Dekker's method.
The idea to combine the bisection method with the secant method goes back to Dekker.
Suppose that we want to solve the equation "f"("x") = 0. As with the bisection method, we need to initialize Dekker's method with two points, say "a"0 and "b"0, such that "f"("a"0) and "f"("b"0) have opposite signs. If "f" is continuous on ["a"0, "b"0], the intermediate value theorem guarantees the existence of a solution between "a"0 and "b"0.
Three points are involved in every iteration: "b""k" is the current iterate, i.e., the current guess for the root of "f"; "a""k" is the "contrapoint", i.e., a point such that "f"("a""k") and "f"("b""k") have opposite signs, so that the interval ["a""k", "b""k"] contains the solution, and such that |"f"("b""k")| ≤ |"f"("a""k")|, making "b""k" the better guess; and "b""k"−1 is the previous iterate (for the first iteration, "b""k"−1 = "a"0).
Two provisional values for the next iterate are computed. The first one is given by linear interpolation, also known as the secant method:
formula_0
and the second one is given by the bisection method
formula_1
If the result of the secant method, "s", lies strictly between "b""k" and "m", then it becomes the next iterate ("b""k"+1 = "s"), otherwise the midpoint is used ("b""k"+1 = "m").
Then, the value of the new contrapoint is chosen such that "f"("a""k"+1) and "f"("b""k"+1) have opposite signs. If "f"("a""k") and "f"("b""k"+1) have opposite signs, then the contrapoint remains the same: "a""k"+1 = "a""k". Otherwise, "f"("b""k"+1) and "f"("b""k") have opposite signs, so the new contrapoint becomes "a""k"+1 = "b""k".
Finally, if |"f"("a""k"+1)| < |"f"("b""k"+1)|, then "a""k"+1 is probably a better guess for the solution than "b""k"+1, and hence the values of "a""k"+1 and "b""k"+1 are exchanged.
This ends the description of a single iteration of Dekker's method.
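A minimal Python sketch of the single iteration just described (the function, a bracketing pair "a", "b" and the previous iterate are supplied by the caller; tolerance handling and termination are omitted):
def dekker_step(f, a, b, b_prev):
    # secant proposal from the current iterate b and the previous iterate b_prev
    if f(b) != f(b_prev):
        s = b - f(b) * (b - b_prev) / (f(b) - f(b_prev))
    else:
        s = (a + b) / 2
    m = (a + b) / 2                          # bisection proposal (midpoint)
    b_new = s if min(b, m) < s < max(b, m) else m
    # choose the new contrapoint so that f(a_new) and f(b_new) have opposite signs
    a_new = a if f(a) * f(b_new) < 0 else b
    # keep the better guess (the one with smaller |f|) in b
    if abs(f(a_new)) < abs(f(b_new)):
        a_new, b_new = b_new, a_new
    return a_new, b_new, b                   # b becomes the previous iterate next time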
Dekker's method performs well if the function "f" is reasonably well-behaved. However, there are circumstances in which every iteration employs the secant method, but the iterates "b""k" converge very slowly (in particular, |"b""k" − "b""k"−1| may be arbitrarily small). Dekker's method requires far more iterations than the bisection method in this case.
Brent's method.
Brent proposed a small modification to avoid the problem with Dekker's method. He inserts an additional test which must be satisfied before the result of the secant method is accepted as the next iterate. Two inequalities must be simultaneously satisfied:
Given a specific numerical tolerance formula_2, if the previous step used the bisection method, the inequality formula_3 must hold to perform interpolation, otherwise the bisection method is performed and its result used for the next iteration.
If the previous step performed interpolation, then the inequality formula_4 is used instead to choose the next action: interpolation (when the inequality holds) or the bisection method (when it does not).
Also, if the previous step used the bisection method, the inequality formula_5
must hold, otherwise the bisection method is performed and its result used for the next iteration. If the previous step performed interpolation, then the inequality formula_6
is used instead.
This modification ensures that at the kth iteration, a bisection step will be performed in at most formula_7 additional iterations, because the above conditions force consecutive interpolation step sizes to halve every two iterations, and after at most formula_7 iterations, the step size will be smaller than formula_2, which invokes a bisection step. Brent proved that his method requires at most "N"^2 iterations, where "N" denotes the number of iterations for the bisection method. If the function "f" is well-behaved, then Brent's method will usually proceed by either inverse quadratic or linear interpolation, in which case it will converge superlinearly.
Furthermore, Brent's method uses inverse quadratic interpolation instead of linear interpolation (as used by the secant method). If "f"("b""k"), "f"("a""k") and "f"("b""k"−1) are distinct, it slightly increases the efficiency. As a consequence, the condition for accepting "s" (the value proposed by either linear interpolation or inverse quadratic interpolation) has to be changed: "s" has to lie between (3"a""k" + "b""k") / 4 and "b""k".
Algorithm.
input "a", "b", and (a pointer to) a function for "f"
calculate "f"("a")
calculate "f"("b")
if "f"("a")"f"("b") ≥ 0 then
exit function because the root is not bracketed.
end if
if |"f"("a")| < |"f"("b")| then
swap ("a","b")
end if
"c" := "a"
set mflag
repeat until "f"("b" or "s") = 0 or |"b" − "a"| is small enough "(convergence)"
if "f"("a") ≠ "f"("c") and "f"("b") ≠ "f"("c") then
formula_8 "(inverse quadratic interpolation)"
else
formula_9 "(secant method)"
end if
if "(condition 1)" "s" is not or
"(condition 2)" (mflag is set and |"s"−"b"| ≥ |"b"−"c"|/2) or
"(condition 3)" (mflag is cleared and |"s"−"b"| ≥ |"c"−"d"|/2) or
"(condition 4)" (mflag is set and |"b"−"c"| < |δ|) or
"(condition 5)" (mflag is cleared and |"c"−"d"| < |δ|) then
formula_10 "(bisection method)"
set mflag
else
clear mflag
end if
calculate "f"("s")
"d" := "c" "(d is assigned for the first time here; it won't be used above on the first iteration because mflag is set)"
"c" := "b"
if "f"("a")"f"("s") < 0 then
"b" := "s"
else
"a" := "s"
end if
if |"f"("a")| < |"f"("b")| then
swap ("a","b")
end if
end repeat
output "b" "or s (return the root)"
Example.
Suppose that we are seeking a zero of the function defined by "f"("x") = ("x" + 3)("x" − 1)2.
We take ["a"0, "b"0] = [−4, 4/3] as our initial interval.
We have "f"("a"0) = −25 and "f"("b"0) = 0.48148 (all numbers in this section are rounded), so the conditions "f"("a"0) "f"("b"0) < 0 and |"f"("b"0)| ≤ |"f"("a"0)| are satisfied.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " s = \\begin{cases} b_k - \\frac{b_k-b_{k-1}}{f(b_k)-f(b_{k-1})} f(b_k), & \\mbox{if } f(b_k)\\neq f(b_{k-1}) \\\\ m & \\mbox{otherwise } \\end{cases} "
},
{
"math_id": 1,
"text": " m = \\frac{a_k+b_k}{2}. "
},
{
"math_id": 2,
"text": "\\delta"
},
{
"math_id": 3,
"text": " |\\delta| < |b_k - b_{k-1}| "
},
{
"math_id": 4,
"text": "|\\delta| < |b_{k-1} - b_{k-2}|"
},
{
"math_id": 5,
"text": "|s-b_k| < \\begin{matrix} \\frac12 \\end{matrix} |b_k - b_{k-1}|"
},
{
"math_id": 6,
"text": "|s-b_k| < \\begin{matrix} \\frac12 \\end{matrix} |b_{k-1} - b_{k-2}|"
},
{
"math_id": 7,
"text": "2\\log_2(|b_{k-1}-b_{k-2}|/\\delta)"
},
{
"math_id": 8,
"text": " s := \\frac{af(b)f(c)}{(f(a)-f(b))(f(a)-f(c))} + \\frac{bf(a)f(c)}{(f(b)-f(a))(f(b)-f(c))} + \\frac{cf(a)f(b)}{(f(c)-f(a))(f(c)-f(b))} "
},
{
"math_id": 9,
"text": " s := b - f(b) \\frac{b-a}{f(b)-f(a)} "
},
{
"math_id": 10,
"text": " s := \\frac{a+b}{2} "
}
] | https://en.wikipedia.org/wiki?curid=1254323 |
1254566 | Numerical differentiation | Use of numerical analysis to estimate derivatives of functions
In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.
Finite differences.
The simplest method is to use finite difference approximations.
A simple two-point estimation is to compute the slope of a nearby secant line through the points ("x", "f"("x")) and ("x" + "h", "f"("x" + "h")). Here h is a small number that represents a small change in x; it can be either positive or negative. The slope of this line is
formula_0
This expression is Newton's difference quotient (also known as a first-order divided difference).
The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to h. As h approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of "f" at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:
formula_1
Since immediately substituting 0 for h results in formula_2 indeterminate form, calculating the derivative directly can be unintuitive.
Equivalently, the slope could be estimated by employing positions "x" − "h" and x.
Another two-point formula is to compute the slope of a nearby secant line through the points ("x" − "h", "f"("x" − "h")) and ("x" + "h", "f"("x" + "h")). The slope of this line is
formula_3
This formula is known as the symmetric difference quotient. In this case the first-order errors cancel, so the slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to formula_4. Hence for small values of h this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at x, the value of the function at x is not involved.
The estimation error is given by
formula_5
where formula_6 is some point between formula_7 and formula_8.
This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision.
The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including TI-82, TI-83, TI-84, TI-85, all of which use this method with "h" = 0.001.
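A short Python comparison of the two quotients above, using the calculator step "h" = 0.001 and a function whose derivative is known exactly (the choice of sin and of the evaluation point is arbitrary):
import math
f, x, h = math.sin, 1.0, 0.001
forward = (f(x + h) - f(x)) / h             # Newton's difference quotient, error ~ h
central = (f(x + h) - f(x - h)) / (2 * h)   # symmetric difference quotient, error ~ h**2
exact = math.cos(x)
print(forward - exact, central - exact)     # roughly -4.2e-4 versus -9e-8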
Step size.
An important consideration in practice when the function is calculated using floating-point arithmetic of finite precision is the choice of step size, h. If chosen too small, the subtraction will yield a large rounding error. In fact, all the finite-difference formulae are ill-conditioned and due to cancellation will produce a value of zero if h is small enough. If chosen too large, the slope of the secant line is calculated accurately, but that slope may be a poor estimate of the slope of the tangent.
For basic central differences, the optimal step is the cube-root of machine epsilon.
For the numerical derivative formula evaluated at x and "x" + "h", a choice for h that is small without producing a large rounding error is formula_9 (though not when "x" = 0), where the machine epsilon ε is typically of the order of 2.2×10^−16 for double precision. A formula for h that balances the rounding error against the secant error for optimum accuracy is
formula_10
(though not when formula_11), and to employ it will require knowledge of the function.
For computer calculations the problems are exacerbated because, although x necessarily holds a representable floating-point number in some precision (32 or 64-bit, "etc".), "x" + "h" almost certainly will not be exactly representable in that precision. This means that "x" + "h" will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that ("x" + "h") − "x" will "not" equal h; the two function evaluations will not be exactly h apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as "h" = 0.1 will not be a round number in binary; it is 0.000110011001100... in binary. A possible approach is as follows:
h := sqrt(eps) * x;
xph := x + h;
dx := xph - x;
slope := (F(xph) - F(x)) / dx;
However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that "dx" and h are the same. With C and similar languages, a directive that "xph" is a volatile variable will prevent this.
Other methods.
Higher-order methods.
Higher-order methods for approximating the derivative, as well as methods for higher derivatives, exist.
Given below is the five-point method for the first derivative (five-point stencil in one dimension):
formula_12
where formula_13.
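A direct transcription of the five-point formula into Python (the step size is left to the caller; the test values are illustrative):
import math
def five_point_derivative(f, x, h):
    # five-point stencil for the first derivative; truncation error is O(h**4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)
print(five_point_derivative(math.sin, 1.0, 1e-3))   # close to cos(1) = 0.5403...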
For other stencil configurations and derivative orders, the Finite Difference Coefficients Calculator is a tool that can be used to generate derivative approximation methods for any stencil with any derivative order (provided a solution exists).
Higher derivatives.
Using Newton's difference quotient,
formula_14
the following can be shown (for "n" > 0):
formula_15
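Dropping the limit and keeping a small fixed "h" gives a forward-difference approximation of the "n"th derivative, transcribed here into Python (math.comb requires Python 3.8 or later; as a one-sided formula it is only first-order accurate in "h"):
import math
def nth_derivative(f, x, n, h=1e-2):
    # forward finite difference built from Newton's difference quotient
    return sum((-1) ** (k + n) * math.comb(n, k) * f(x + k * h)
               for k in range(n + 1)) / h ** n
print(nth_derivative(math.sin, 1.0, 2))   # near -sin(1) = -0.841..., with O(h) error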
Complex-variable methods.
The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if formula_16 is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near formula_17, then there are stable methods. For example, the first derivative can be calculated by the complex-step derivative formula:
formula_18
The recommended step size to obtain accurate derivatives for a range of conditions is formula_19.
This formula can be obtained by Taylor series expansion:
formula_20
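In Python the complex-step formula is a one-liner; because no subtraction of nearly equal quantities occurs, there is no cancellation error (the test function below is an arbitrary choice):
import cmath
def complex_step_derivative(f, x, h=1e-200):
    # f must accept complex arguments and be real-valued on the real line
    return f(x + 1j * h).imag / h
print(complex_step_derivative(cmath.sin, 1.0))   # agrees with cos(1) to full precision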
The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers, resulting in multicomplex derivatives.
formula_21
where the formula_22 denote the multicomplex imaginary units; formula_23. The formula_24 operator extracts the formula_25th component of a multicomplex number of level formula_26, e.g., formula_27 extracts the real component and formula_28 extracts the last, “most imaginary” component. The method can be applied to mixed derivatives, e.g. for a second-order derivative
formula_29
A C++ implementation of multicomplex arithmetics is available.
In general, derivatives of any order can be calculated using Cauchy's integral formula:
formula_30
where the integration is done numerically.
Using complex variables for numerical differentiation was started by Lyness and Moler in 1967. Their algorithm is applicable to higher-order derivatives.
A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner. An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.
Differential quadrature.
Differential quadrature is the approximation of derivatives by using weighted sums of function values. Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with "quadrature", meaning numerical integration, where weighted sums are used in methods such as Simpson's method or the trapezoidal rule. There are various methods for determining the weight coefficients, for example, the Savitzky–Golay filter. Differential quadrature is used to solve partial differential equations.
There are further methods for computing derivatives from noisy data.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{f(x + h) - f(x)}{h}."
},
{
"math_id": 1,
"text": "f'(x) = \\lim_{h \\to 0} \\frac{f(x + h) - f(x)}{h}."
},
{
"math_id": 2,
"text": "\\frac{0}{0}"
},
{
"math_id": 3,
"text": "\\frac{f(x + h) - f(x - h)}{2h}."
},
{
"math_id": 4,
"text": "h^2"
},
{
"math_id": 5,
"text": "R = \\frac{-f^{(3)}(c)}{6} h^2,"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "x - h"
},
{
"math_id": 8,
"text": "x + h"
},
{
"math_id": 9,
"text": "\\sqrt{\\varepsilon} x"
},
{
"math_id": 10,
"text": "h = 2\\sqrt{\\varepsilon\\left|\\frac{f(x)}{f''(x)}\\right|}"
},
{
"math_id": 11,
"text": "f''(x) = 0"
},
{
"math_id": 12,
"text": "f'(x) = \\frac{-f(x + 2h) + 8 f(x + h) - 8 f(x - h) + f(x - 2h)}{12h} + \\frac{h^4}{30} f^{(5)}(c),"
},
{
"math_id": 13,
"text": "c \\in [x - 2h, x + 2h]"
},
{
"math_id": 14,
"text": "f'(x) = \\lim_{h \\to 0} \\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 15,
"text": "f^{(n)}(x) = \\lim_{h\\to 0} \\frac{1}{h^n} \\sum_{k=0}^n (-1)^{k+n} \\binom{n}{k} f(x + kh)"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "f'(x) = \\frac{\\Im(f(x + \\mathrm{i}h))}{h} + O(h^2), \\quad \\mathrm{i^2}:=-1."
},
{
"math_id": 19,
"text": "h = 10^{-200}"
},
{
"math_id": 20,
"text": "f(x+\\mathrm{i}h) = f(x) + \\mathrm{i}h f'(x) - \\tfrac{1}{2!} h^2 f''(x) - \\tfrac{\\mathrm{i}}{3!} h^3 f^{(3)}(x) + \\cdots."
},
{
"math_id": 21,
"text": "f^{(n)}(x) \\approx \\frac{\\mathcal{C}^{(n)}_{n^2-1}(f(x + \\mathrm{i}^{(1)} h + \\cdots + \\mathrm{i}^{(n)} h))}{h^n}"
},
{
"math_id": 22,
"text": "\\mathrm{i}^{(k)}"
},
{
"math_id": 23,
"text": "\\mathrm{i}^{(1)} \\equiv \\mathrm{i}"
},
{
"math_id": 24,
"text": "\\mathcal{C}^{(n)}_k"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "\\mathcal{C}^{(n)}_0"
},
{
"math_id": 28,
"text": "\\mathcal{C}^{(n)}_{n^2-1}"
},
{
"math_id": 29,
"text": "\\frac{\\partial^2 f(x, y)}{\\partial x \\,\\partial y} \\approx \\frac{\\mathcal{C}^{(2)}_3(f(x + \\mathrm{i}^{(1)} h, y + \\mathrm{i}^{(2)} h))}{h^2}"
},
{
"math_id": 30,
"text": "f^{(n)}(a) = \\frac{n!}{2\\pi i} \\oint_\\gamma \\frac{f(z)}{(z - a)^{n+1}} \\,\\mathrm{d}z,"
}
] | https://en.wikipedia.org/wiki?curid=1254566 |
12545981 | Topological entropy in physics | Type of physical entropy
The topological entanglement entropy or "topological entropy", usually denoted by formula_0, is a number characterizing many-body states that possess topological order.
A non-zero topological entanglement entropy reflects the presence of long-range quantum entanglement in a many-body quantum state. The topological entanglement entropy thus links topological order with the pattern of long-range quantum entanglement.
Given a topologically ordered state, the topological entropy can be extracted from the asymptotic behavior of the von Neumann entropy measuring the quantum entanglement between a spatial block and the rest of the system. The entanglement entropy of a simply connected region of boundary length "L", within an infinite two-dimensional topologically ordered state, has the following form for large "L":
formula_1
where formula_2 is the topological entanglement entropy.
The topological entanglement entropy is equal to the logarithm of the total quantum dimension of the quasiparticle excitations of the state.
For example, the simplest fractional quantum Hall states, the Laughlin states at filling fraction 1/"m", have "γ" = ½log("m"). The "Z"2 fractionalized states, such as topologically ordered states of
"Z"2 spin-liquid, quantum dimer models on non-bipartite lattices, and Kitaev's toric code state, are characterized "γ" = log(2).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": " S_L \\; \\longrightarrow \\; \\alpha L -\\gamma +\\mathcal{O}(L^{-\\nu}) \\; , \\qquad \\nu>0 \\,\\!"
},
{
"math_id": 2,
"text": "-\\gamma"
}
] | https://en.wikipedia.org/wiki?curid=12545981 |
1254615 | Arithmetic underflow | Computer programming condition
The term arithmetic underflow (also floating point underflow, or just underflow) is a condition in a computer program where the result of a calculation is a number smaller in absolute value than the smallest value the computer can actually represent in memory on its central processing unit (CPU).
Arithmetic underflow can occur when the true result of a floating point operation is smaller in magnitude (that is, closer to zero) than the smallest value representable as a normal floating point number in the target datatype. Underflow can in part be regarded as negative overflow of the exponent of the floating point value. For example, if the exponent part can represent values from −128 to 127, then a result with a value less than −128 may cause underflow.
For integers, the term "integer underflow" typically refers to a special kind of integer overflow or "integer wraparound" condition whereby the result of subtraction would result in a value less than the minimum allowed for a given integer type, i.e. the ideal result was closer to negative infinity than the output type's representable value closest to negative infinity.
Underflow gap.
The interval between −"fminN" and "fminN", where "fminN" is the smallest positive normal floating point value, is called the underflow gap. This is because the size of this interval is many orders of magnitude larger than the distance between adjacent normal floating point values just outside the gap. For instance, if the floating point datatype can represent 20 bits, the underflow gap is 2^21 times larger than the absolute distance between adjacent floating point values just outside the gap.
In older designs, the underflow gap had just one usable value, zero. When an underflow occurred, the true result was replaced by zero (either directly by the hardware, or by system software handling the primary underflow condition). This replacement is called "flush to zero".
The 1985 edition of IEEE 754 introduced subnormal numbers. The subnormal numbers (including zero) fill the underflow gap with values where the absolute distance between adjacent values is the same as for adjacent values just outside the underflow gap. This enables "gradual underflow", where a nearest subnormal value is used, just as a nearest normal value is used when possible. Even when using gradual underflow, the nearest value may be zero.
The absolute distance between adjacent floating point values just outside the gap is called the machine epsilon, typically characterized by the largest value whose sum with the value 1 will result in the answer with value 1 in that floating point scheme. This can be written as formula_0, where formula_1 is a function which converts the real value into the floating point representation. While the machine epsilon is not to be confused with the underflow level (assuming subnormal numbers), it is closely related. The machine epsilon is dependent on the number of bits which make up the significand, whereas the underflow level depends on the number of digits which make up the exponent field. In most floating point systems, the underflow level is smaller than the machine epsilon.
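A minimal Python sketch, assuming an IEEE 754 binary64 platform with gradual underflow enabled (no flush-to-zero mode), illustrating the smallest normal value, the subnormal values inside the underflow gap, and the machine epsilon described above:

```python
import sys

# IEEE 754 binary64 constants exposed by the standard library
fmin_normal = sys.float_info.min        # smallest positive *normal* value, ~2.2e-308
machine_eps = sys.float_info.epsilon    # gap between 1.0 and the next float, ~2.2e-16
smallest_subnormal = fmin_normal * machine_eps   # ~5e-324 for binary64

# Gradual underflow: halving the smallest normal value does not flush to zero,
# it lands on a subnormal value inside the underflow gap.
x = fmin_normal / 2
print(x > 0.0)           # True  -> subnormal, not zero
print(x < fmin_normal)   # True  -> inside the underflow gap

# Machine epsilon, roughly "the largest value whose sum with 1 still gives 1"
print(1.0 + machine_eps / 2 == 1.0)   # True under round-to-nearest
print(1.0 + machine_eps == 1.0)       # False
```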
Handling of underflow.
The occurrence of an underflow may set a ("sticky") status bit, raise an exception, at the hardware level generate an interrupt, or may cause some combination of these effects.
As specified in IEEE 754, the underflow condition is only signaled if there is also a loss of precision. Typically this is determined as the final result being inexact.
However, if the user is trapping on underflow, this may happen regardless of consideration for loss of precision. The default handling in IEEE 754 for underflow (as well as other exceptions) is to record as a floating point status that underflow has occurred. This is specified for the application-programming level, but often also interpreted as how to handle it at the hardware level.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "fl(1+\\epsilon) = fl(1)"
},
{
"math_id": 1,
"text": "fl()"
}
] | https://en.wikipedia.org/wiki?curid=1254615 |
1254860 | Mode 7 | Graphics mode on the Super NES video game console
Mode 7 is a graphics mode on the Super Nintendo Entertainment System video game console that allows a background layer to be rotated and scaled on a scanline-by-scanline basis to create many different depth effects. It also supports wrapping effects such as translation and reflection.
The most famous of these effects is the application of a perspective effect on a background layer by scaling and rotating the background layer in this manner. This transforms the background layer into a two-dimensional horizontal texture-mapped plane that trades height for depth. Thus, an impression of three-dimensional graphics is achieved.
Mode 7 was one of Nintendo's prominent selling points for the Super NES platform in publications such as "Nintendo Power" and "Super NES Player's Guide". Similar "faux" 3D techniques have been presented on a few 2D systems other than the Super NES, in select peripherals and games.
Overview.
The Super NES console has eight graphics modes, numbered from 0 to 7, for displaying background layers. The last one (background mode 7) has a single layer that can be scaled and rotated. Two-dimensional affine transformations can produce any combination of translation, scaling, reflection, rotation, and shearing. However, many games create additional effects by setting a different transformation matrix for each scanline. In this way, pseudo-perspective, curved surface, and distortion effects can be achieved.
Mode 7 graphics are generated for each pixel by mapping screen coordinates to background coordinates using an affine transformation and sampling the corresponding background color. The 2D affine transformation is specified for each scanline by 6 parameters: formula_0, formula_1, formula_2, and formula_3 (which together define the matrix formula_4), and formula_5 and formula_6 (which define the vector formula_7, the origin). Specifically, screen coordinate formula_8 is translated to the origin coordinate system, the matrix is applied, and the result is translated back to the original coordinate system to obtain formula_9.
In 2D matrix notation:
formula_10
formula_11.
All arithmetic is carried out on 16-bit signed fixed point numbers, while all offsets are limited to 13 bits. The radix point is between bits 7 and 8.
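The per-scanline mapping can be sketched in software as follows. This is an illustration of the arithmetic only, not of the actual hardware: the function and array names are invented for the example, and ordinary floating point is used in place of the console's 16-bit signed fixed-point representation.

```python
import numpy as np

def mode7_scanline(background, params, width, height):
    """Map screen pixels to background texels, one affine transform per scanline.

    background : 2D array indexed as background[y, x]
    params(y)  : returns (a, b, c, d, x0, y0) for scanline y
    """
    bg_h, bg_w = background.shape
    out = np.zeros((height, width), dtype=background.dtype)
    for y in range(height):
        a, b, c, d, x0, y0 = params(y)
        for x in range(width):
            # r' = M (r - r0) + r0, as in the matrix equation above
            dx, dy = x - x0, y - y0
            u = a * dx + b * dy + x0
            v = c * dx + d * dy + y0
            # wrap around the background, mimicking the tiled wrapping mode
            out[y, x] = background[int(v) % bg_h, int(u) % bg_w]
    return out
```

Supplying a formula_4 that shrinks with the scanline index "y" yields the familiar pseudo-perspective ground plane.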
Usage in games.
This graphical method is suited to racing games, and is used extensively for the overworld sections of role-playing games such as Square's popular 1994 game "Final Fantasy VI". The effect enables developers to create the impression of sprawling worlds that continue toward the horizon.
A particular utilization technique with Mode 7 allows pixels of the background layer to be in front of sprites. Examples include the second and fifth stage of ', the second and fifth stage of ', the introduction screen of "", when a player falls off the stage in "Super Mario Kart", some cinematics in "Super Metroid", and in some boss battles in "Super Mario World".
Mode 7-type effects can be implemented on the Super NES without the hardware acceleration of Mode 7, such as "Axelay"'s rolling pin vertical scrolling; and then it uses Mode 7 in one boss and in the end credits sequence.
Many Mode 7 games were remade for Game Boy Advance using effects implemented by software.
The Sega Genesis has no hardware-native feature comparable to Mode 7. However, as in "Tales of Phantasia" and "Star Ocean"'s sprite effect add-ins, some comparable technical feats were programmed entirely in software, as in "Dick Vitale's "Awesome, Baby!" College Hoops" and "Zero Tolerance". The Sega CD, an add-on for the Genesis, added scaling and rotation support on hardware level, as used by "Sonic CD" and '. Similarly, such Amiga games include ', "Lionheart", "Obitus", and "Brian the Lion".
Filip Hautekeete and Peter Vermeulen created a demo showcasing an emulated interpretation of the Mode 7 graphics mode found in the Super NES to test the hardware capabilities of the Atari Jaguar. Impressed with the demo, Atari Corporation decided to make a game that combined "F-Zero" and "Super Mario Kart" with a "cutesy" atmosphere, becoming the starting point of "Atari Karts".
Selection of Mode 7 games.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\na\n"
},
{
"math_id": 1,
"text": "\nb\n"
},
{
"math_id": 2,
"text": "\nc\n"
},
{
"math_id": 3,
"text": "\nd\n"
},
{
"math_id": 4,
"text": "\n\\mathbf{M}\n"
},
{
"math_id": 5,
"text": "\nx_0\n"
},
{
"math_id": 6,
"text": "\ny_0\n"
},
{
"math_id": 7,
"text": "\n\\mathbf{r}_0\n"
},
{
"math_id": 8,
"text": "\n\\mathbf{r}\n"
},
{
"math_id": 9,
"text": "\n\\mathbf{r}^\\prime\n"
},
{
"math_id": 10,
"text": "\n\\mathbf{r}^\\prime = \\mathbf{M} (\\mathbf{r} - \\mathbf{r}_0) + \\mathbf{r}_0\n"
},
{
"math_id": 11,
"text": "\n\\begin{bmatrix} x' \\\\ y' \\end{bmatrix}\n =\n\\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix}\n\\left(\n\\begin{bmatrix} x \\\\ y \\end{bmatrix}\n- \\begin{bmatrix} x_0 \\\\ y_0 \\end{bmatrix}\n\\right)\n+ \\begin{bmatrix} x_0 \\\\ y_0 \\end{bmatrix}\n"
}
] | https://en.wikipedia.org/wiki?curid=1254860 |
12550286 | AB magnitude | The AB magnitude system is an astronomical magnitude system. Unlike many other magnitude systems, it is based on flux measurements that are calibrated in absolute units, namely spectral flux densities.
Definition.
The "monochromatic" AB magnitude is defined as the logarithm of a spectral flux density with the usual scaling of astronomical magnitudes and a zero-point of about janskys (symbol Jy), where 1 Jy = 10−26 W Hz−1 m−2 = 10−23 erg s−1 Hz−1 cm−2 ("about" because the true definition of the zero point is based on magnitudes as shown below). If the spectral flux density is denoted "fν", the monochromatic AB magnitude is:
formula_0
or, with "fν" still in janskys,
formula_1
The exact definition is stated relative to the cgs units of erg s^−1 cm^−2 Hz^−1:
formula_2
Note: there is a sign error in the original Oke & Gunn (1983) equation.
Inverting this leads to the true definition of the numerical value "" often cited:
formula_3 erg s^−1 cm^−2 Hz^−1
Actual measurements are always made across some continuous range of wavelengths. The "bandpass" AB magnitude is defined so that the zero point corresponds to a bandpass-averaged spectral flux density of about 3631 Jy:
formula_4
where "e"("ν") is the "equal-energy" filter response function and the ("hν")−1 term assumes that the detector is a photon-counting device such as a CCD or photomultiplier. (Filter responses are sometimes expressed as quantum efficiencies, that is, in terms of their response per photon, rather than per unit energy. In those cases the ("hν")−1 term has been folded into the definition of "e"("ν") and should not be included.)
The STMAG system is similarly defined, but for constant flux per unit wavelength interval instead.
AB stands for "absolute" in the sense that no relative reference object is used (unlike using Vega as a baseline object). This must not be confused with absolute magnitude in the sense of the apparent brightness of an object if seen from a distance of 10 parsecs.
Expression in terms of "fλ".
In some fields, spectral flux densities are expressed per unit wavelength, "fλ", rather than per unit frequency, "fν". At any specific wavelength,
formula_5
where "fν" is measured per frequency (say, in hertz), and "fλ" is measured per wavelength (say, in centimeters). If the wavelength unit is ångströms,
formula_6
This can then be plugged into the equations above.
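A short Python sketch of these two steps: converting "fλ" (in erg s^−1 cm^−2 Å^−1) to "fν" in janskys, and then converting the result to a monochromatic AB magnitude. The function names and example numbers are illustrative only:

```python
import math

def fnu_jy_from_flambda(flambda_cgs, wavelength_angstrom):
    """Convert f_lambda [erg s^-1 cm^-2 A^-1] to f_nu [Jy] via f_nu = lambda^2 f_lambda / c."""
    return 3.34e4 * wavelength_angstrom**2 * flambda_cgs

def ab_magnitude(fnu_jy):
    """Monochromatic AB magnitude from a spectral flux density in janskys."""
    return -2.5 * math.log10(fnu_jy / 3631.0)

# Example: f_lambda = 1e-17 erg s^-1 cm^-2 A^-1 at 5500 A
fnu = fnu_jy_from_flambda(1e-17, 5500.0)
print(ab_magnitude(fnu))   # ~21.4 for these illustrative numbers
```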
The "pivot wavelength" of a given bandpass is the value of "λ" that makes the above conversion exact for observations made in that bandpass. For an equal-energy response function as defined above, it is
formula_7
For a response function expressed in the quantum-efficiency convention, it is:
formula_8
Conversion from other magnitude systems.
Magnitudes in the AB system can be converted to other systems. However, because all magnitude systems involve integration of some assumed source spectrum over some assumed passband, such conversions are not necessarily trivial to calculate, and precise conversions depend on the actual bandpass of the observations in question. Various authors have computed conversions for standard situations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_\\text{AB} \\approx -2.5 \\log_{10} \\left(\\frac{f_{\\nu}}{3\\,631\\text{ Jy}}\\right),"
},
{
"math_id": 1,
"text": "m_\\text{AB} = -2.5 \\log_{10} f_{\\nu} + 8.90."
},
{
"math_id": 2,
"text": "m_\\text{AB} = -2.5 \\log_{10} f_{\\nu} - 48.60."
},
{
"math_id": 3,
"text": "f_{\\nu,0} = 10^{\\tfrac{48.60}{-2.5}} \\approx 3.631 \\times 10^{-20}"
},
{
"math_id": 4,
"text": "m_\\text{AB} \\approx -2.5 \\log_{10} \\left(\\frac{\\int f_\\nu {(h\\nu)}^{-1} e(\\nu)\\, \\mathrm{d}\\nu}{\\int 3\\,631\\text{ Jy}\\, {(h\\nu)}^{-1} e(\\nu)\\, \\mathrm{d}\\nu}\\right),"
},
{
"math_id": 5,
"text": "f_\\nu = \\frac{\\lambda^2}{c} f_\\lambda,"
},
{
"math_id": 6,
"text": "\\frac{f_\\nu}{\\text{Jy}} = 3.34 \\times 10^{4} \\left(\\frac{\\lambda}{\\AA{}}\\right)^2 \\frac{f_\\lambda}{\\text{erg} \\text{ cm}^{-2} \\text{ s}^{-1} \\AA^{-1}}."
},
{
"math_id": 7,
"text": "\\lambda_\\text{piv} = \\sqrt{\\frac{\\int e(\\lambda) \\lambda\\, \\mathrm{d}\\lambda}{\\int e(\\lambda) \\lambda^{-1}\\, \\mathrm{d}\\lambda}}."
},
{
"math_id": 8,
"text": "\\lambda_\\text{piv} = \\sqrt{\\frac{\\int e(\\lambda)\\, \\mathrm{d}\\lambda}{\\int e(\\lambda) \\lambda^{-2}\\, \\mathrm{d}\\lambda}}."
}
] | https://en.wikipedia.org/wiki?curid=12550286 |
1255316 | Taut foliation | In mathematics, tautness is a rigidity property of foliations. A taut foliation is a codimension 1 foliation of a closed manifold with the property that every leaf meets a transverse circle. By transverse circle, is meant a closed loop that is always transverse to the tangent field of the foliation.
If the foliated manifold has non-empty tangential boundary, then a codimension 1 foliation is taut if every leaf meets a transverse circle or a transverse arc with endpoints on the tangential boundary. Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface. Furthermore, for compact manifolds the existence, for every leaf formula_0, of a transverse circle meeting formula_0, implies the existence of a single transverse circle meeting every leaf.
Taut foliations were brought to prominence by the work of William Thurston and David Gabai.
Relation to Reebless foliations.
Taut foliations are closely related to the concept of Reebless foliation. A taut foliation cannot have a Reeb component, since the component would act like a "dead-end" from which a transverse curve could never escape; consequently, the boundary torus of the Reeb component has no transverse circle puncturing it. A Reebless foliation can fail to be taut but the only leaves of the foliation with no puncturing transverse circle must be compact, and in particular, homeomorphic to a torus.
Properties.
The existence of a taut foliation implies various useful properties about a closed 3-manifold. For example, a closed, orientable 3-manifold, which admits a taut foliation with no sphere leaf, must be irreducible, covered by formula_1, and have negatively curved fundamental group.
Rummler–Sullivan theorem.
By a theorem of Hansklaus Rummler and Dennis Sullivan, the following conditions are equivalent for transversely orientable codimension one foliations formula_2 of closed, orientable, smooth manifolds M: formula_3 is taut; there is a flow transverse to formula_3 that preserves some volume form on M; and there is a Riemannian metric on M for which the leaves of formula_3 are least-area surfaces.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " L "
},
{
"math_id": 1,
"text": "\\mathbb R^3"
},
{
"math_id": 2,
"text": "\\left(M,{\\mathcal{F}}\\right)"
},
{
"math_id": 3,
"text": "\\mathcal{F}"
}
] | https://en.wikipedia.org/wiki?curid=1255316 |
12554375 | Slender-body theory | In fluid dynamics and electrostatics, slender-body theory is a methodology that can be used to take advantage of the slenderness of a body to obtain an approximation to a field surrounding it and/or the net effect of the field on the body. Principal applications are to Stokes flow — at very low Reynolds numbers — and in electrostatics.
Theory for Stokes flow.
Consider a slender body of length formula_0 and typical diameter formula_1 with formula_2, surrounded by fluid of viscosity formula_3 whose motion is governed by the Stokes equations. Note that Stokes' paradox implies that the limit of infinite aspect ratio formula_4 is singular, as no Stokes flow can exist around an infinite cylinder.
Slender-body theory allows us to derive an approximate relationship between the velocity of the body at each point along its length and the force per unit length experienced by the body at that point.
Let the axis of the body be described by formula_5, where formula_6 is an arc-length coordinate, and formula_7 is time. By virtue of the slenderness of the body, the force exerted on the fluid at the surface of the body may be approximated by a distribution of Stokeslets along the axis with force density formula_8 per unit length. formula_9 is assumed to vary only over lengths much greater than formula_10, and the fluid velocity at the surface adjacent to formula_5 is well-approximated by formula_11.
The fluid velocity formula_12 at a general point formula_13 due to such a distribution can be written in terms of an integral of the Oseen tensor (named after Carl Wilhelm Oseen), which acts as a Green's function for a single Stokeslet. We have
formula_14
where formula_15 is the identity tensor.
Asymptotic analysis can then be used to show that the leading-order contribution to the integral for a point formula_13 on the surface of the body adjacent to position formula_16 comes from the force distribution at formula_17. Since formula_18, we approximate formula_19. We then obtain
formula_20
where formula_21.
The expression may be inverted to give the force density in terms of the motion of the body:
formula_22
Two canonical results that follow immediately are for the drag force formula_23 on a rigid cylinder (length formula_0, radius formula_10) moving a velocity formula_24 either parallel to its axis or perpendicular to it. The parallel case gives
formula_25
while the perpendicular case gives
formula_26
with only a factor of two difference.
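A small numerical illustration of these two results; the function name and parameter values are arbitrary choices for the sketch:

```python
import math

def slender_body_drag(mu, length, radius, u, parallel=True):
    """Leading-order slender-body drag on a rigid cylinder moving at speed u.

    F ~ 2*pi*mu*L*u / ln(L/a) for motion along the axis,
    F ~ 4*pi*mu*L*u / ln(L/a) for motion perpendicular to it.
    """
    factor = 2.0 if parallel else 4.0
    return factor * math.pi * mu * length * u / math.log(length / radius)

# Example: a 1 mm long, 1 micron radius fibre in water (mu ~ 1e-3 Pa s) moving at 10 um/s
mu, L, a, u = 1e-3, 1e-3, 1e-6, 1e-5
print(slender_body_drag(mu, L, a, u, parallel=True))    # ~9.1e-12 N
print(slender_body_drag(mu, L, a, u, parallel=False))   # twice that
```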
Note that the dominant length scale in the above expressions is the longer length formula_0; the shorter length has only a weak effect through the logarithm of the aspect ratio. In slender-body theory results, there are formula_27 corrections to the logarithm, so even for relatively large values of formula_28 the error terms will not be that small. | [
{
"math_id": 0,
"text": "\\ell"
},
{
"math_id": 1,
"text": "2a"
},
{
"math_id": 2,
"text": "\\ell \\gg a"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "\\ell/a \\rightarrow \\infty"
},
{
"math_id": 5,
"text": "\\boldsymbol{X}(s,t)"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "\\boldsymbol{f}(s)"
},
{
"math_id": 9,
"text": "\\boldsymbol{f}"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "\\partial\\boldsymbol{X}/\\partial t"
},
{
"math_id": 12,
"text": "\\boldsymbol{u}(\\boldsymbol{x})"
},
{
"math_id": 13,
"text": "\\boldsymbol{x}"
},
{
"math_id": 14,
"text": " \\boldsymbol{u}(\\boldsymbol{x}) = \\int_0^\\ell \\frac{\\boldsymbol{f}(s)}{8\\pi\\mu} \\cdot \\left( \\frac{\\mathbf{I}}{|\\boldsymbol{x} - \\boldsymbol{X}|} + \\frac{(\\boldsymbol{x} - \\boldsymbol{X})(\\boldsymbol{x} - \\boldsymbol{X})}{|\\boldsymbol{x} - \\boldsymbol{X}|^3} \\right) \\, \\mathrm{d}s "
},
{
"math_id": 15,
"text": "\\mathbf{I}"
},
{
"math_id": 16,
"text": "s_0"
},
{
"math_id": 17,
"text": "|s- s_0| = O(a)"
},
{
"math_id": 18,
"text": "a \\ll \\ell"
},
{
"math_id": 19,
"text": "\\boldsymbol{f}(s) \\approx \\boldsymbol{f}(s_0)"
},
{
"math_id": 20,
"text": "\\frac{\\partial \\boldsymbol{X}}{\\partial t} \\sim \\frac{\\ln(\\ell/a)}{4\\pi\\mu} \\boldsymbol{f}(s) \\cdot \\Bigl( \\mathbf{I} + \\boldsymbol{X}'\\boldsymbol{X}' \\Bigr)"
},
{
"math_id": 21,
"text": "\\boldsymbol{X}' = \\partial \\boldsymbol{X}/\\partial s"
},
{
"math_id": 22,
"text": "\\boldsymbol{f}(s) \\sim \\frac{4\\pi\\mu}{\\ln(\\ell/a)} \\frac{\\partial \\boldsymbol{X}}{\\partial t} \\cdot \\Bigl( \\mathbf{I} - \\textstyle\\frac{1}{2} \\boldsymbol{X}'\\boldsymbol{X}' \\Bigr) "
},
{
"math_id": 23,
"text": "F"
},
{
"math_id": 24,
"text": "u"
},
{
"math_id": 25,
"text": "F \\sim \\frac{2\\pi\\mu\\ell u}{\\ln(\\ell/a)}"
},
{
"math_id": 26,
"text": "F \\sim \\frac{4\\pi\\mu\\ell u}{\\ln(\\ell/a)}"
},
{
"math_id": 27,
"text": "O(1)"
},
{
"math_id": 28,
"text": "\\ell/a"
}
] | https://en.wikipedia.org/wiki?curid=12554375 |
12555662 | Active contour model | Active contour model, also called snakes, is a framework in computer vision introduced by Michael Kass, Andrew Witkin, and Demetri Terzopoulos for delineating an object outline from a possibly noisy 2D image. The snakes model is popular in computer vision, and snakes are widely used in applications like object tracking, shape recognition, segmentation, edge detection and stereo matching.
A snake is an energy minimizing, deformable spline influenced by constraint and image forces that pull it towards object contours and internal forces that resist deformation. Snakes may be understood as a special case of the general technique of matching a deformable model to an image by means of energy minimization. In two dimensions, the active shape model represents a discrete version of this approach, taking advantage of the point distribution model to restrict the shape range to an explicit domain learnt from a training set.
Snakes do not solve the entire problem of finding contours in images, since the method requires knowledge of the desired contour shape beforehand. Rather, they depend on other mechanisms such as interaction with a user, interaction with some higher level image understanding process, or information from image data adjacent in time or space.
Motivation.
In computer vision, contour models describe the boundaries of shapes in an image. Snakes in particular are designed to solve problems where the approximate shape of the boundary is known. By being a deformable model, snakes can adapt to differences and noise in stereo matching and motion tracking. Additionally, the method can find Illusory contours in the image by ignoring missing boundary information.
Compared to classical feature extraction techniques, snakes have multiple advantages: they autonomously and adaptively search for a minimum-energy state; external image forces act upon the snake in an intuitive manner; incorporating Gaussian smoothing into the image energy function gives control over the scale of features found; and they can be used to track dynamic objects over time.
The key drawbacks of the traditional snakes are their sensitivity to local minima (and hence to initialization), the tendency to overlook minute features during energy minimization over the entire contour, and the dependence of their accuracy on the convergence policy.
Energy formulation.
A simple elastic snake is defined by a set of "n" points formula_0 for formula_1, the internal elastic energy term formula_2, and the external edge-based energy term formula_3. The purpose of the internal energy term is to control the deformations made to the snake, and the purpose of the external energy term is to control the fitting of the contour onto the image. The external energy is usually a combination of the forces due to the image itself formula_4 and the constraint forces introduced by the user formula_5
The energy function of the snake is the sum of its external energy and internal energy, or
formula_6
Internal energy.
The internal energy of the snake is composed of the continuity of the contour formula_7 and the smoothness of the contour formula_8.
formula_9
This can be expanded as
formula_10
where formula_11 and formula_12 are user-defined weights; these control the internal energy function's sensitivity to the amount of stretch in the snake and the amount of curvature in the snake, respectively, and thereby control the number of constraints on the shape of the snake.
In practice, a large weight formula_11 for the continuity term penalizes changes in distances between points in the contour. A large weight formula_12 for the smoothness term penalizes oscillations in the contour and will cause the contour to act as a thin plate.
Image energy.
Energy in the image is some function of the features of the image. This is one of the most common points of modification in derivative methods. Features in images and images themselves can be processed in many and various ways.
For an image formula_13, lines, edges, and terminations present in the image, the general formulation of energy due to the image is
formula_14
where formula_15, formula_16, formula_17 are weights of these salient features. Higher weights indicate that the salient feature will have a larger contribution to the image force.
Line functional.
The line functional is the intensity of the image, which can be represented as
formula_18
The sign of formula_15 will determine whether the line will be attracted to either dark lines or light lines.
Some smoothing or noise reduction may be applied to the image first, in which case the line functional becomes
formula_19
Edge functional.
The edge functional is based on the image gradient. One implementation of this is
formula_20
A snake originating far from the desired object contour may erroneously converge to some local minimum. Scale space continuation can be used in order to avoid these local minima. This is achieved by using a blur filter on the image and reducing the amount of blur as the calculation progresses to refine the fit of the snake. The energy functional using scale space continuation is
formula_21
where formula_22 is a Gaussian with standard deviation formula_23. Minima of this function fall on the zero-crossings of formula_24 which define edges as per Marr–Hildreth theory.
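A minimal sketch of this scale-space edge energy using SciPy's Laplacian-of-Gaussian filter; the coarse-to-fine schedule in the comment is one possible choice rather than a prescribed one:

```python
import numpy as np
from scipy import ndimage

def edge_energy(image, sigma):
    """Scale-space edge energy  E_edge = -|G_sigma * laplacian(I)|^2.

    Larger sigma gives a smoother energy landscape; reducing sigma as the
    snake converges refines the fit (scale space continuation).
    """
    log_response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    return -(log_response ** 2)

# Coarse-to-fine schedule: start heavily blurred, finish sharp, e.g.
# for sigma in (8.0, 4.0, 2.0, 1.0):
#     energy = edge_energy(image, sigma)
#     ...run a few snake iterations on this energy...
```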
Termination functional.
Curvature of level lines in a slightly smoothed image can be used to detect corners and terminations in an image. Using this method, let formula_25 be the image smoothed by
formula_26
with gradient angle
formula_27
unit vectors along the gradient direction
formula_28
and unit vectors perpendicular to the gradient direction
formula_29
The termination functional of energy can be represented as
formula_30
Constraint energy.
Some systems, including the original snakes implementation, allowed for user interaction to guide the snakes, not only in initial placement but also in their energy terms. Such constraint energy formula_31 can be used to interactively guide the snakes towards or away from particular features.
Optimization through gradient descent.
Given an initial guess for a snake, the energy function of the snake is iteratively minimized. Gradient descent minimization is one of the simplest optimizations which can be used to minimize snake energy. Each iteration takes one step in the negative gradient of the point with controlled step size formula_32 to find local minima. This gradient-descent minimization can be implemented as
formula_33
Where formula_34 is the force on the snake, which is defined by the negative of the gradient of the energy field.
formula_35
Assuming the weights formula_36 and formula_37 are constant with respect to formula_38, this iterative method can be simplified to
formula_39
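A compact sketch of one such descent step for a closed contour, using the common sign convention in which the internal force is the elasticity term minus the rigidity term, and discretizing the derivatives with circular finite differences. The array layout, step size and external-gradient callback are assumptions of this example rather than part of the original formulation:

```python
import numpy as np

def snake_step(v, grad_e_ext, alpha, beta, gamma):
    """One gradient-descent step for a closed snake.

    v          : (n, 2) array of contour points
    grad_e_ext : function mapping (n, 2) points -> (n, 2) external energy gradient
    """
    # Finite-difference approximations of d2v/ds2 and d4v/ds4 on a closed contour
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)
    d4 = (np.roll(v, -2, axis=0) - 4 * np.roll(v, -1, axis=0) + 6 * v
          - 4 * np.roll(v, 1, axis=0) + np.roll(v, 2, axis=0))
    # Internal force: elasticity pulls toward neighbours, rigidity resists bending
    internal = alpha * d2 - beta * d4
    # Move each point down the energy gradient with step size gamma
    return v + gamma * (internal - grad_e_ext(v))
```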
Discrete approximation.
In practice, images have finite resolution and can only be integrated over finite time steps formula_40. As such, discrete approximations must be made for practical implementation of snakes.
The energy function of the snake can be approximated by using the discrete points on the snake.
formula_41
Consequentially, the forces of the snake can be approximated as
formula_42
Gradient approximation can be done through any finite approximation method with respect to "s", such as Finite difference.
Numerical instability due to discrete time.
The introduction of discrete time into the algorithm can produce updates in which the snake is moved past the minimum it is attracted to; this can cause oscillations around the minimum or lead to a different minimum being found.
This can be avoided through tuning the time step such that the step size is never greater than a pixel due to the image forces. However, in regions of low energy, the internal energies will dominate the update.
Alternatively, the image forces can be normalized for each step such that the image forces only update the snake by one pixel. This can be formulated as
formula_43
where formula_44 is near the value of the pixel size. This avoids the problem of dominating internal energies that arise from tuning the time step.
Numerical instability due to discrete space.
The energies in a continuous image may have zero-crossing that do not exist as a pixel in the image. In this case, a point in the snake would oscillate between the two pixels that neighbor this zero-crossing. This oscillation can be avoided by using interpolation between pixels instead of nearest neighbor.
Some variants of snakes.
The default method of snakes has various limitations and corner cases where the convergence performs poorly. Several alternatives exist which address issues of the default method, though with their own trade-offs. A few are listed here.
GVF snake model.
The gradient vector flow (GVF) snake model addresses two issues with snakes: the poor convergence of contours initialized far from the object boundary (limited capture range), and the poor convergence into concave regions of the boundary.
In 2D, the GVF vector field formula_45 minimizes the energy functional
formula_46
where formula_47 is a controllable smoothing term. This can be solved by solving the Euler equations
formula_48
formula_49
This can be solved through iteration towards a steady-state value.
formula_50
formula_51
This result replaces the default external force.
formula_52
The primary issue with using GVF is that the smoothing term formula_47 causes rounding of the edges of the contour. Reducing the value of formula_47 reduces the rounding but weakens the amount of smoothing.
The balloon model.
The balloon model addresses these problems with the default active contour model: a snake is not attracted to distant edges; a snake will shrink inwards if no substantial image forces act upon it; and a snake larger than the minimum-energy contour will eventually shrink into it, whereas a smaller one will not grow outwards to find it.
The balloon model introduces an inflation term into the forces acting on the snake
formula_53
where formula_54 is the normal unitary vector of the curve at formula_55 and formula_56 is the magnitude of the force. formula_56 should have the same magnitude as the image normalization factor formula_57 and be smaller in value than formula_57 to allow forces at image edges to overcome the inflation force.
Three issues arise from using the balloon model: instead of shrinking, the snake expands into the minimum and will not find contours smaller than itself; the outward force makes the final contour slightly larger than the true minimum (this can be mitigated by reducing the inflation force once a stable solution is found); and the inflation force can overpower weak edges, causing the contour to pass over them.
Diffusion snakes model.
The diffusion snake model addresses the sensitivity of snakes to noise, clutter, and occlusion. It implements a modification of the Mumford–Shah functional and its cartoon limit and incorporates statistical shape knowledge. The default image energy functional formula_4 is replaced with
formula_58
where formula_59 is based on a modified Mumford–Shah functional
formula_60
where formula_61 is the piecewise smooth model of the image formula_62 of domain formula_63. Boundaries formula_64 are defined as
formula_65
where formula_66 are quadratic B-spline basis functions and formula_67 are the control points of the splines. The modified cartoon limit is obtained as formula_68 and is a valid configuration of formula_59.
The functional formula_69 is based on training from binary images of various contours and is controlled in strength by the parameter formula_70. For a Gaussian distribution of control point vectors formula_71 with mean control point vector formula_72 and covariance matrix formula_73 , the quadratic energy that corresponds to the Gaussian probability is
formula_74
The strength of this method relies on the strength of the training data as well as the tuning of the modified Mumford–Shah functional. Different snakes will require different training data sets and tunings.
Geometric active contours.
Geometric active contours, also called geodesic active contours (GAC) or conformal active contours, employ ideas from Euclidean curve shortening evolution. Contours split and merge depending on the detection of objects in the image. These models are largely inspired by level sets, and have been extensively employed in medical image computing.
For example, the gradient descent curve evolution equation of GAC is
formula_75
where formula_76 is a halting function, "c" is a Lagrange multiplier, formula_77 is the curvature, and formula_78 is the unit inward normal. This particular form of curve evolution equation is only dependent on the velocity in the normal direction. It therefore can be rewritten equivalently in an Eulerian form by inserting the level set function formula_79 into it as follows
formula_80
This simple yet powerful level-set reformulation enables active contours to handle topology changes during the gradient descent curve evolution. It has inspired tremendous progress in the related fields, and using numerical methods to solve the level-set reformulation is now commonly known as the level-set method. Although the level set method has become quite a popular tool for implementing active contours, Wang and Chan argued that not all curve evolution equations should be "directly" solved by it.
More recent developments in active contours address modeling of regional properties, incorporation of flexible shape priors and fully automatic segmentation, etc.
Statistical models combining local and global features have been formulated by Lankton and Allen Tannenbaum.
Relations to graph cuts.
Graph cuts, or max-flow/min-cut, is a generic method for minimizing a particular form of energy called Markov random field (MRF) energy. The Graph cuts method has been applied to image segmentation as well, and it sometimes outperforms the level set method when the model is MRF or can be approximated by MRF.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf v_i "
},
{
"math_id": 1,
"text": "i=0, \\ldots, n-1 "
},
{
"math_id": 2,
"text": "E_\\text{internal} "
},
{
"math_id": 3,
"text": "E_\\text{external} "
},
{
"math_id": 4,
"text": " E_\\text{image} "
},
{
"math_id": 5,
"text": " E_\\text{con}"
},
{
"math_id": 6,
"text": " E_\\text{snake}^*=\\int\\limits_0^1E_\\text{snake}(\\mathbf{v}(s))\\,ds=\\int\\limits_0^1 ( E_\\text{internal}(\\mathbf{v}(s))+E_\\text{image}(\\mathbf{v}(s))+E_\\text{con}(\\mathbf{v}(s)) )\\,ds"
},
{
"math_id": 7,
"text": " E_\\text{cont} "
},
{
"math_id": 8,
"text": " E_\\text{curv} "
},
{
"math_id": 9,
"text": "E_\\text{internal}=E_\\text{cont}+E_\\text{curv}"
},
{
"math_id": 10,
"text": "E_\\text{internal}=\\frac{1}{2}(\\alpha\\,\\!(s)\\left | \\mathbf{v}_s(s) \\right \\vert^2) + \\frac{1}{2} (\\beta\\,\\!(s)\\left | \\mathbf{v}_{ss}(s) \\right \\vert ^2) =\\frac{1}{2} \\bigg(\\alpha\\,\\!(s) \\left \\| \\frac{d\\bar v}{ds}(s) \\right \\Vert^2 + \\beta\\,\\!(s) \\left \\| \\frac{d^2\\bar v}{ds^2}(s) \\right \\Vert ^2 \\bigg)"
},
{
"math_id": 11,
"text": " \\alpha (s) "
},
{
"math_id": 12,
"text": " \\beta (s) "
},
{
"math_id": 13,
"text": " I(x,y) "
},
{
"math_id": 14,
"text": "E_\\text{image}=w_\\text{line}E_\\text{line}+w_\\text{edge}E_\\text{edge}+w_\\text{term} E_\\text{term}, "
},
{
"math_id": 15,
"text": "w_\\text{line}"
},
{
"math_id": 16,
"text": "w_\\text{edge}"
},
{
"math_id": 17,
"text": "w_\\text{term}"
},
{
"math_id": 18,
"text": "E_\\text{line}= I(x,y)"
},
{
"math_id": 19,
"text": "E_\\text{line}= \\operatorname{filter}(I(x,y))"
},
{
"math_id": 20,
"text": "E_\\text{edge}=-\\left | \\nabla I(x,y)\\right \\vert ^2."
},
{
"math_id": 21,
"text": "E_\\text{edge}=-\\left | G_\\sigma \\cdot \\nabla^2 I \\right \\vert ^2"
},
{
"math_id": 22,
"text": " G_\\sigma "
},
{
"math_id": 23,
"text": " \\sigma "
},
{
"math_id": 24,
"text": " G_\\sigma \\, \\nabla^2 I "
},
{
"math_id": 25,
"text": " C(x,y) "
},
{
"math_id": 26,
"text": " C(x,y)= G_\\sigma \\cdot I(x,y) "
},
{
"math_id": 27,
"text": " \\theta = \\arctan \\left( \\frac {C_y}{C_x} \\right),"
},
{
"math_id": 28,
"text": " \\mathbf n = (\\cos \\theta,\\sin \\theta),"
},
{
"math_id": 29,
"text": "\\mathbf n_\\perp = (-\\sin \\theta,\\cos \\theta). "
},
{
"math_id": 30,
"text": " E_\\text{term}={\\partial \\theta\\over\\partial n_\\perp} = {\\partial^2 C / \\partial n_\\perp^2 \\over \\partial C / \\partial n} = {{C_{yy} C_x^2-2C_{xy} C_xC_y+C_{xx} C_y^2}\\over(C_x^2+C_y^2)^{3/2}}"
},
{
"math_id": 31,
"text": " E_{con} "
},
{
"math_id": 32,
"text": " \\gamma "
},
{
"math_id": 33,
"text": "\\bar v_i \\leftarrow \\bar v_i + F_\\text{snake}(\\bar v_i)"
},
{
"math_id": 34,
"text": " F_\\text{snake}(\\bar v_i) "
},
{
"math_id": 35,
"text": " F_\\text{snake}(\\bar v_i) = - \\nabla E_\\text{snake} (\\bar v_i) = - \\Bigg( w_\\text{internal} \\, \\nabla E_\\text{internal} (\\bar v_i)+w_\\text{external} \\, \\nabla E_\\text{external} (\\bar v_i) \\Bigg) "
},
{
"math_id": 36,
"text": " \\alpha(s) "
},
{
"math_id": 37,
"text": " \\beta(s) "
},
{
"math_id": 38,
"text": " s "
},
{
"math_id": 39,
"text": "\\bar v_i \\leftarrow \\bar v_i - \\gamma \\Bigg\\{ w_\\text{internal} \\bigg[ \\alpha \\frac{\\partial ^2 \\bar v}{\\partial s^2} (\\bar v_i)+\\beta \\frac{\\partial ^4 \\bar v}{\\partial s^4} (\\bar v_i) \\bigg] + \\nabla E_\\text{ext} (\\bar v_i) \\Bigg\\} "
},
{
"math_id": 40,
"text": " \\tau "
},
{
"math_id": 41,
"text": "E_\\text{snake}^*\\approx \\sum_1^n E_\\text{snake}(\\bar v_i)"
},
{
"math_id": 42,
"text": "F_\\text{snake}^*\\approx - \\sum_{i=1}^n \\nabla E_\\text{snake} (\\bar v_i). "
},
{
"math_id": 43,
"text": " F_\\text{image} = - k \\frac{\\nabla E_\\text{image}}{\\|\\nabla E_\\text{image}\\|} "
},
{
"math_id": 44,
"text": " \\tau k "
},
{
"math_id": 45,
"text": " F_\\text{GVF} "
},
{
"math_id": 46,
"text": " E_\\text{GVF} = \\iint \\mu(u_x^2+u_y^2+v_x^2+v_y^2)+|\\nabla f|^2|\\mathbf v-\\nabla f|^2 \\, dx \\, dy "
},
{
"math_id": 47,
"text": " \\mu "
},
{
"math_id": 48,
"text": " \\mu \\, \\nabla^2 u - \\Bigg (u - \\frac{\\partial}{\\partial x}F_\\text{ext}\\Bigg )\\Bigg (\\frac{\\partial}{\\partial x} F_\\text{ext}(x,y)^2 + \\frac{\\partial}{\\partial y}F_\\text{ext}(x,y)^2\\Bigg ) = 0 "
},
{
"math_id": 49,
"text": " \\mu \\, \\nabla^2 v - \\Bigg (v - \\frac{\\partial}{\\partial y}F_\\text{ext}\\Bigg )\\Bigg (\\frac{\\partial}{\\partial x} F_\\text{ext}(x,y)^2 + \\frac{\\partial}{\\partial y} F_\\text{ext}(x,y)^2\\Bigg ) = 0 "
},
{
"math_id": 50,
"text": " u_{i+1} = u_i + \\mu \\, \\nabla^2 u_i - \\Bigg (u_i - \\frac{\\partial}{\\partial x} F_\\text{ext}\\Bigg )\\Bigg (\\frac{\\partial}{\\partial x} F_\\text{ext}(x,y)^2 + \\frac{\\partial}{\\partial y} F_\\text{ext}(x,y)^2\\Bigg ) "
},
{
"math_id": 51,
"text": " v_{i+1} = v_i + \\mu \\, \\nabla^2 v_i - \\Bigg (v_i - \\frac{\\partial}{\\partial y}F_\\text{ext}\\Bigg )\\Bigg (\\frac{\\partial}{\\partial x}F_\\text{ext}(x,y)^2 + \\frac{\\partial}{\\partial y} F_\\text{ext}(x,y)^2\\Bigg ) "
},
{
"math_id": 52,
"text": " F_\\text{ext}^* = F_\\text{GVF} "
},
{
"math_id": 53,
"text": "F_\\text{inflation} = k_1 \\vec n (s) "
},
{
"math_id": 54,
"text": " \\vec n (s) "
},
{
"math_id": 55,
"text": " v(s) "
},
{
"math_id": 56,
"text": " k_1 "
},
{
"math_id": 57,
"text": " k "
},
{
"math_id": 58,
"text": "\nE_\\text{image}^* = E_i + \\alpha E_c\n"
},
{
"math_id": 59,
"text": " E_i "
},
{
"math_id": 60,
"text": "\nE[J,B] = \\frac{1}{2} \\int_D (I(\\vec x) - J(\\vec x))^2 \\, d \\vec x + \\lambda \\frac{1}{2} \\int_{D/B} \\vec \\nabla J(\\vec x) \\cdot \\vec \\nabla J(\\vec x) \\, d \\vec x + \\nu \\int_0^1 \\Bigg ( \\frac{d}{ds} B(s) \\Bigg )^2 \\, ds\n"
},
{
"math_id": 61,
"text": " J(\\vec x) "
},
{
"math_id": 62,
"text": " I(\\vec x) "
},
{
"math_id": 63,
"text": " D "
},
{
"math_id": 64,
"text": " B(s) "
},
{
"math_id": 65,
"text": "\nB(s) = \\sum_{n=1}^N \\vec p_n B_n (s)\n"
},
{
"math_id": 66,
"text": " B_n (s) "
},
{
"math_id": 67,
"text": " \\vec p_n "
},
{
"math_id": 68,
"text": " \\lambda \\to \\infty "
},
{
"math_id": 69,
"text": "E_c"
},
{
"math_id": 70,
"text": " \\alpha "
},
{
"math_id": 71,
"text": " \\vec z "
},
{
"math_id": 72,
"text": " \\vec z_0 "
},
{
"math_id": 73,
"text": " \\Sigma "
},
{
"math_id": 74,
"text": "\nE_c(\\vec z) = \\frac{1}{2} (\\vec z - \\vec z_0)^t \\Sigma^*(\\vec z - \\vec z_0)\n"
},
{
"math_id": 75,
"text": "\n\\frac{\\partial C}{\\partial t} = g(I)(c+\\kappa)\\vec{N}-\\langle \\, \\nabla g,\\vec{N}\\rangle\\vec{N}\n"
},
{
"math_id": 76,
"text": " g(I) "
},
{
"math_id": 77,
"text": " \\kappa "
},
{
"math_id": 78,
"text": " \\vec{N} "
},
{
"math_id": 79,
"text": " \\Phi "
},
{
"math_id": 80,
"text": "\n\\frac{\\partial \\Phi}{\\partial t} = |\\nabla \\Phi| \\operatorname{div} \\Bigg ( g(I)\\frac{ \\nabla \\Phi}{|\\nabla \\Phi|} \\Bigg ) + c g(I) |\\nabla \\Phi|\n"
}
] | https://en.wikipedia.org/wiki?curid=12555662 |
12555671 | Relaxation (iterative method) | In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.
Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations. They are also used for the solution of linear equations for linear least-squares problems and also for systems of linear inequalities, such as those arising in linear programming. They have also been developed for solving nonlinear systems of equations.
Relaxation methods are important especially in the solution of linear systems used to model elliptic partial differential equations, such as Laplace's equation and its generalization, Poisson's equation. These equations describe boundary-value problems, in which the solution-function's values are specified on boundary of a domain; the problem is to compute a solution also on its interior. Relaxation methods are used to solve the linear equations resulting from a discretization of the differential equation, for example by finite differences.
Iterative relaxation of solutions is commonly dubbed smoothing because with certain equations, such as Laplace's equation, it resembles repeated application of a local smoothing filter to the solution vector. These are not to be confused with relaxation methods in mathematical optimization, which approximate a difficult problem by a simpler problem whose "relaxed" solution provides information about the solution of the original problem.
Model problem of potential theory.
When φ is a smooth real-valued function on the real numbers, its second derivative can be approximated by:
formula_0
Using this in both dimensions for a function φ of two arguments at the point ("x", "y"), and solving for φ("x", "y"), results in:
formula_1
To approximate the solution of the Poisson equation:
formula_2
numerically on a two-dimensional grid with grid spacing "h", the relaxation method assigns the given values of function φ to the grid points near the boundary and arbitrary values to the interior grid points, and then repeatedly performs the assignment
φ := φ* on the interior points, where φ* is defined by:
formula_3
until convergence.
The method is easily generalized to other numbers of dimensions.
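A minimal Python sketch of this scheme in two dimensions, using simultaneous (Jacobi-style) updates; updating the grid in place instead gives the Gauss–Seidel variant. The grid size, boundary values and sweep count are arbitrary choices for the example:

```python
import numpy as np

def relax_poisson(phi, f, h, sweeps):
    """Jacobi-style relaxation for the 2D Poisson equation on a square grid.

    phi : 2D array with boundary values already set; interior values are the initial guess
    f   : 2D array of the right-hand side, sampled on the same grid
    """
    for _ in range(sweeps):
        phi_new = phi.copy()
        phi_new[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
            - h**2 * f[1:-1, 1:-1]
        )
        phi = phi_new
    return phi

# Example: Laplace's equation (f = 0) on a 50x50 grid with one boundary edge held at 1
n, h = 50, 1.0 / 49
phi = np.zeros((n, n)); phi[0, :] = 1.0
f = np.zeros((n, n))
phi = relax_poisson(phi, f, h, sweeps=2000)
```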
Convergence and acceleration.
While the method converges under general conditions, it typically makes slower progress than competing methods. Nonetheless, the study of relaxation methods remains a core part of linear algebra, because the transformations of relaxation theory provide excellent preconditioners for new methods. Indeed, the choice of preconditioner is often more important than the choice of iterative method.
Multigrid methods may be used to accelerate the methods. One can first compute an approximation on a coarser grid – usually the double spacing 2"h" – and use that solution with interpolated values for the other grid points as the initial assignment. This can then also be done recursively for the coarser computation. | [
{
"math_id": 0,
"text": "\\frac{d^2\\varphi(x)}{{dx}^2} = \\frac{\\varphi(x{-}h)-2\\varphi(x)+\\varphi(x{+}h)}{h^2}\\,+\\,\\mathcal{O}(h^2)\\,."
},
{
"math_id": 1,
"text": "\\varphi(x, y) = \\tfrac{1}{4}\\left(\\varphi(x{+}h,y)+\\varphi(x,y{+}h)+\\varphi(x{-}h,y)+\\varphi(x,y{-}h)\n\\,-\\,h^2{\\nabla}^2\\varphi(x,y)\\right)\\,+\\,\\mathcal{O}(h^4)\\,."
},
{
"math_id": 2,
"text": "{\\nabla}^2 \\varphi = f\\,"
},
{
"math_id": 3,
"text": "\\varphi^*(x, y) = \\tfrac{1}{4}\\left(\\varphi(x{+}h,y)+\\varphi(x,y{+}h)+\\varphi(x{-}h,y)+\\varphi(x,y{-}h)\n\\,-\\,h^2f(x,y)\\right)\\,,"
}
] | https://en.wikipedia.org/wiki?curid=12555671 |
1255570 | Bending | Strain caused by an external load
In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element.
The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam. For example, a closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness of the structure (known as the 'wall') is considerably smaller. A large diameter, but thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending.
In the absence of a qualifier, the term "bending" is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object such as; the "bending of rods", the "bending of beams", the "bending of plates", the "bending of shells" and so on.
Quasi-static bending of beams.
A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the over-side of the beam is compressed while the material at the underside is stretched. There are two forms of internal stresses caused by lateral loads: shear stress parallel to the lateral loading, plus complementary shear stress on planes perpendicular to the load direction; and direct compressive stress in the upper region of the beam together with direct tensile stress in the lower region of the beam.
These last two forces form a couple or moment as they are equal in magnitude and opposite in direction. This bending moment resists the sagging deformation characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used.
Euler–Bernoulli bending theory.
In the Euler–Bernoulli theory of slender beams, a major assumption is that 'plane sections remain plane'. In other words, any deformation due to shear across the section is not accounted for (no shear deformation). Also, this linear distribution is only applicable if the maximum stress is less than the yield stress of the material. For stresses that exceed yield, refer to article plastic bending. At yield, the maximum stress experienced in the section (at the furthest points from the neutral axis of the beam) is defined as the flexural strength.
Consider beams where the following are true: the beam is originally straight, and any taper is slight; the material is isotropic (or orthotropic), linear elastic, and homogeneous across any cross section (but not necessarily along its length); and only small deflections are considered.
In this case, the equation describing beam deflection (formula_0) can be approximated as:
formula_1
where the second derivative of its deflected shape with respect to formula_2 is interpreted as its curvature, formula_3 is the Young's modulus, formula_4 is the area moment of inertia of the cross-section, and formula_5 is the internal bending moment in the beam.
If, in addition, the beam is homogeneous along its length as well, and not tapered (i.e. constant cross section), and deflects under an applied transverse load formula_6, it can be shown that:
formula_7
This is the Euler–Bernoulli equation for beam bending.
After a solution for the displacement of the beam has been obtained, the bending moment (formula_5) and shear force (formula_8) in the beam can be calculated using the relations
formula_9
Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. The conditions for using simple bending theory are: the beam is subject to pure bending, meaning that the shear force is zero and that no torsional or axial loads are present; the material is isotropic (or orthotropic) and homogeneous; the material obeys Hooke's law (it is linearly elastic and does not deform plastically); the beam is initially straight, with a cross section that is constant along its length; the beam has an axis of symmetry in the plane of bending; the proportions of the beam are such that it would fail by bending rather than by crushing, wrinkling, or sideways buckling; and cross-sections of the beam remain plane during bending.
Compressive and tensile forces develop in the direction of the beam axis under bending loads. These forces induce stresses on the beam. The maximum compressive stress is found at the uppermost edge of the beam while the maximum tensile stress is located at the lower edge of the beam. Since the stresses between these two opposing maxima vary linearly, there therefore exists a point on the linear path between them where there is no bending stress. The locus of these points is the neutral axis. Because of this area with no stress and the adjacent areas with low stress, using uniform cross section beams in bending is not a particularly efficient means of supporting a load as it does not use the full capacity of the beam until it is on the brink of collapse. Wide-flange beams (I-beams) and truss girders effectively address this inefficiency as they minimize the amount of material in this under-stressed region.
The classic formula for determining the bending stress in a beam under simple bending is:
formula_10
where
formula_11 is the bending stress
formula_12 is the bending moment about the neutral axis "z"
formula_13 is the perpendicular distance to the neutral axis
formula_14 is the second moment of area about the neutral axis "z"
formula_15 is the section modulus about the neutral axis "z", given by formula_16
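As a worked illustration, the following sketch evaluates this formula for a solid rectangular section carrying the midspan moment of a simply supported, uniformly loaded beam. The section properties of a rectangle ("Iz" = "bh"^3/12, "y" = "h"/2) and the midspan moment "M" = "qL"^2/8 are standard textbook results assumed here rather than derived above; all numbers are illustrative.

```python
def max_bending_stress_rect(moment, b, h):
    """Maximum bending stress sigma = M*y/I for a solid rectangular section (b wide, h deep)."""
    I_z = b * h**3 / 12.0     # second moment of area of a rectangle
    y_max = h / 2.0           # distance from neutral axis to the extreme fibre
    return moment * y_max / I_z

# Simply supported beam, uniformly distributed load q over span L: M_max = q*L**2/8
q, L = 5e3, 4.0               # 5 kN/m over 4 m
M_max = q * L**2 / 8.0        # = 10 kN*m
print(max_bending_stress_rect(M_max, b=0.10, h=0.30))   # ~6.7e6 Pa for a 100 mm x 300 mm section
```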
Extensions of Euler-Bernoulli beam bending theory.
Plastic bending.
The equation formula_17 is valid only when the stress at the extreme fiber (i.e., the portion of the beam farthest from the neutral axis) is below the yield stress of the material from which it is constructed. At higher loadings the stress distribution becomes non-linear, and ductile materials will eventually enter a "plastic hinge" state where the magnitude of the stress is equal to the yield stress everywhere in the beam, with a discontinuity at the neutral axis where the stress changes from tensile to compressive. This plastic hinge state is typically used as a limit state in the design of steel structures.
Complex or asymmetrical bending.
The equation above is only valid if the cross-section is symmetrical. For homogeneous beams with asymmetrical sections, the maximum bending stress in the beam is given by
formula_18
where formula_19 are the coordinates of a point on the cross section at which the stress is to be determined as shown to the right, formula_20 and formula_12 are the bending moments about the y and z centroid axes, formula_21 and formula_14 are the second moments of area (distinct from moments of inertia) about the y and z axes, and formula_22 is the product of moments of area. Using this equation it is possible to calculate the bending stress at any point on the beam cross section regardless of moment orientation or cross-sectional shape. Note that formula_23 do not change from one point to another on the cross section.
Large bending deformation.
For large deformations of the body, the stress in the cross-section is calculated using an extended version of this formula. First the following assumptions must be made: the considered cross section remains flat before and after deformation (it is not warped); and shear and normal stresses acting within that section, perpendicular to its normal vector, have no influence on the normal stresses parallel to the section.
Large bending considerations should be implemented when the bending radius formula_24 is smaller than ten section heights h:
formula_25
With those assumptions the stress in large bending is calculated as:
formula_26
where
formula_27 is the normal force
formula_28 is the section area
formula_5 is the bending moment
formula_24 is the local bending radius (the radius of bending at the current section)
formula_29 is the area moment of inertia along the "x"-axis, at the formula_13 place (see Steiner's theorem)
formula_13 is the position along "y"-axis on the section area in which the stress formula_30 is calculated.
When the bending radius formula_24 approaches infinity and formula_31, the original formula is recovered:
formula_32.
Timoshenko bending theory.
In 1921, Timoshenko improved upon the Euler–Bernoulli theory of beams by adding the effect of shear into the beam equation. The kinematic assumptions of the Timoshenko theory are: normals to the axis of the beam remain straight after deformation, and there is no change in beam thickness after deformation.
However, normals to the axis are not required to remain perpendicular to the axis after deformation.
The equation for the quasistatic bending of a linear elastic, isotropic, homogeneous beam of constant cross-section beam under these assumptions is
formula_33
where formula_4 is the area moment of inertia of the cross-section, formula_28 is the cross-sectional area, formula_34 is the shear modulus, formula_35 is a shear correction factor, and formula_6 is an applied transverse load. For materials with Poisson's ratios (formula_36) close to 0.3, the shear correction factor for a rectangular cross-section is approximately
formula_37
The rotation (formula_38) of the normal is described by the equation
formula_39
The bending moment (formula_5) and the shear force (formula_8) are given by
formula_40
Beams on elastic foundations.
According to Euler–Bernoulli, Timoshenko or other bending theories, the beams on elastic foundations can be explained. In some applications such as rail tracks, foundation of buildings and machines, ships on water, roots of plants etc., the beam subjected to loads is supported on continuous elastic foundations (i.e. the continuous reactions due to external loading is distributed along the length of the beam)
Dynamic bending of beams.
The dynamic bending of beams, also known as flexural vibrations of beams, was first investigated by Daniel Bernoulli in the late 18th century. Bernoulli's equation of motion of a vibrating beam tended to overestimate the natural frequencies of beams and was improved marginally by Rayleigh in 1877 by the addition of a mid-plane rotation. In 1921 Stephen Timoshenko improved the theory further by incorporating the effect of shear on the dynamic response of bending beams. This allowed the theory to be used for problems involving high frequencies of vibration where the dynamic Euler–Bernoulli theory is inadequate. The Euler-Bernoulli and Timoshenko theories for the dynamic bending of beams continue to be used widely by engineers.
Euler–Bernoulli theory.
The Euler–Bernoulli equation for the dynamic bending of slender, isotropic, homogeneous beams of constant cross-section under an applied transverse load formula_41 is
formula_42
where formula_3 is the Young's modulus, formula_4 is the area moment of inertia of the cross-section, formula_43 is the deflection of the neutral axis of the beam, and formula_44 is mass per unit length of the beam.
Free vibrations.
For the situation where there is no transverse load on the beam, the bending equation takes the form
formula_45
Free, harmonic vibrations of the beam can then be expressed as
formula_46
and the bending equation can be written as
formula_47
The general solution of the above equation is
formula_48
where formula_49 are constants and
formula_50
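For a concrete case, assume simply supported ends, for which the boundary conditions select the sine terms of the general solution and give "βL" = "n"π for mode "n"; the natural frequencies then follow from the relation between "β" and "ω" given above. A short sketch, with illustrative material and section values:

```python
import math

def simply_supported_frequencies(E, I, m, L, modes=3):
    """Natural angular frequencies of a simply supported Euler-Bernoulli beam.

    Uses beta_n * L = n*pi, so omega_n = (n*pi/L)**2 * sqrt(E*I/m),
    where m is the mass per unit length.
    """
    return [(n * math.pi / L) ** 2 * math.sqrt(E * I / m) for n in range(1, modes + 1)]

# Example: steel bar with E = 210 GPa, I = 1e-6 m^4, m = 7.85 kg/m, L = 2 m
print(simply_supported_frequencies(210e9, 1e-6, 7.85, 2.0))   # rad/s, first three modes
```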
Timoshenko–Rayleigh theory.
In 1877, Rayleigh proposed an improvement to the dynamic Euler–Bernoulli beam theory by including the effect of rotational inertia of the cross-section of the beam. Timoshenko improved upon that theory in 1922 by adding the effect of shear into the beam equation. Shear deformations of the normal to the mid-surface of the beam are allowed in the Timoshenko–Rayleigh theory.
The equation for the bending of a linear elastic, isotropic, homogeneous beam of constant cross-section under these assumptions is
formula_51
where formula_52 is the polar moment of inertia of the cross-section, formula_53 is the mass per unit length of the beam, formula_24 is the density of the beam, formula_28 is the cross-sectional area, formula_34 is the shear modulus, and formula_35 is a shear correction factor. For materials with Poisson's ratios (formula_36) close to 0.3, the shear correction factor are approximately
formula_54
Free vibrations.
For free, harmonic vibrations the Timoshenko–Rayleigh equations take the form
formula_55
This equation can be solved by noting that all the derivatives of formula_0 must have the same form to cancel out, and hence a solution of the form formula_56 may be expected. This observation leads to the characteristic equation
formula_57
The solutions of this quartic equation are
formula_58
where
formula_59
The general solution of the Timoshenko-Rayleigh beam equation for free vibrations can then be written as
formula_60
Quasistatic bending of plates.
The defining feature of beams is that one of the dimensions is much "larger" than the other two. A structure is called a plate when it is flat and one of its dimensions is much "smaller" than the other two. There are several theories that attempt to describe the deformation and stress in a plate under applied loads, two of which have been used widely. These are the Kirchhoff–Love theory of plates (also called classical plate theory) and the Mindlin–Reissner theory of plates (also called the first-order shear theory of plates).
Kirchhoff–Love theory of plates.
The assumptions of Kirchhoff–Love theory are: straight lines normal to the mid-surface remain straight after deformation; straight lines normal to the mid-surface remain normal to the mid-surface after deformation; and the thickness of the plate does not change during a deformation.
These assumptions imply that
formula_61
where formula_62 is the displacement of a point in the plate and formula_63 is the displacement of the mid-surface.
The strain-displacement relations are
formula_64
The equilibrium equations are
formula_65
where formula_6 is an applied load normal to the surface of the plate.
In terms of displacements, the equilibrium equations for an isotropic, linear elastic plate in the absence of external load can be written as
formula_66
In direct tensor notation,
formula_67
Mindlin–Reissner theory of plates.
The special assumption of this theory is that normals to the mid-surface remain straight and inextensible but not necessarily normal to the mid-surface after deformation. The displacements of the plate are given by
formula_68
where formula_69 are the rotations of the normal.
The strain-displacement relations that result from these assumptions are
formula_70
where formula_71 is a shear correction factor.
The equilibrium equations are
formula_72
where
formula_73
Dynamic bending of plates.
Dynamics of thin Kirchhoff plates.
The dynamic theory of plates determines the propagation of waves in the plates, and the study of standing waves and vibration modes. The equations that govern the dynamic bending of Kirchhoff plates are
formula_74
where, for a plate with density formula_75,
formula_76
and
formula_77
The figures below show some vibrational modes of a circular plate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "w"
},
{
"math_id": 1,
"text": "\\cfrac{\\mathrm{d}^2 w(x)}{\\mathrm{d} x^2}=\\frac{M(x)}{E(x) I(x)}"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "E"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "M"
},
{
"math_id": 6,
"text": "q(x)"
},
{
"math_id": 7,
"text": "\nEI~\\cfrac{\\mathrm{d}^4 w(x)}{\\mathrm{d} x^4} = q(x)\n"
},
{
"math_id": 8,
"text": "Q"
},
{
"math_id": 9,
"text": "\nM(x) = -EI~\\cfrac{\\mathrm{d}^2 w}{\\mathrm{d} x^2} ~;~~ Q(x) = \\cfrac{\\mathrm{d}M}{\\mathrm{d}x}.\n"
},
{
"math_id": 10,
"text": "\\sigma_x = \\frac{M_z y}{I_z} = \\frac{M_z}{W_z}"
},
{
"math_id": 11,
"text": "{\\sigma_x}"
},
{
"math_id": 12,
"text": "M_z"
},
{
"math_id": 13,
"text": "y"
},
{
"math_id": 14,
"text": "I_z"
},
{
"math_id": 15,
"text": "W_z"
},
{
"math_id": 16,
"text": "W_z = I_z / y"
},
{
"math_id": 17,
"text": "\\sigma = \\tfrac{M y}{I_x}"
},
{
"math_id": 18,
"text": "\\sigma_x(y,z) = - \\frac {M_z~I_y + M_y~I_{yz}} {I_y~I_z - I_{yz}^2}y + \\frac {M_y~I_z + M_z~I_{yz}} {I_y~I_z - I_{yz}^2}z"
},
{
"math_id": 19,
"text": "y,z"
},
{
"math_id": 20,
"text": "M_y"
},
{
"math_id": 21,
"text": "I_y"
},
{
"math_id": 22,
"text": "I_{yz}"
},
{
"math_id": 23,
"text": "M_y, M_z, I_y, I_z, I_{yz}"
},
{
"math_id": 24,
"text": "\\rho"
},
{
"math_id": 25,
"text": "\\rho < 10 h."
},
{
"math_id": 26,
"text": "\n\\sigma = \\frac {F} {A} + \\frac {M} {\\rho A} + {\\frac {M} {{I_x}'}}y{\\frac {\\rho}{\\rho +y}}\n"
},
{
"math_id": 27,
"text": "F"
},
{
"math_id": 28,
"text": "A"
},
{
"math_id": 29,
"text": "{{I_x}'}"
},
{
"math_id": 30,
"text": "\\sigma"
},
{
"math_id": 31,
"text": "y\\ll\\rho"
},
{
"math_id": 32,
"text": "\\sigma = {F \\over A} \\pm \\frac {My}{I} "
},
{
"math_id": 33,
"text": "\n EI~\\cfrac{\\mathrm{d}^4 w}{\\mathrm{d} x^4} = q(x) - \\cfrac{EI}{k A G}~\\cfrac{\\mathrm{d}^2 q}{\\mathrm{d} x^2}\n"
},
{
"math_id": 34,
"text": "G"
},
{
"math_id": 35,
"text": "k"
},
{
"math_id": 36,
"text": "\\nu"
},
{
"math_id": 37,
"text": "\n k = \\cfrac{5 + 5\\nu}{6 + 5\\nu}\n "
},
{
"math_id": 38,
"text": "\\varphi(x)"
},
{
"math_id": 39,
"text": "\n \\cfrac{\\mathrm{d}\\varphi}{\\mathrm{d}x} = -\\cfrac{\\mathrm{d}^2w}{\\mathrm{d}x^2} -\\cfrac{q(x)}{kAG}\n "
},
{
"math_id": 40,
"text": "\n M(x) = -EI~ \\cfrac{\\mathrm{d}\\varphi}{\\mathrm{d}x} ~;~~ Q(x) = kAG\\left(\\cfrac{\\mathrm{d}w}{\\mathrm{d}x}-\\varphi\\right) = -EI~\\cfrac{\\mathrm{d}^2\\varphi}{\\mathrm{d}x^2} = \\cfrac{\\mathrm{d}M}{\\mathrm{d}x}\n "
},
{
"math_id": 41,
"text": "q(x,t)"
},
{
"math_id": 42,
"text": "\n EI~\\cfrac{\\partial^4 w}{\\partial x^4} + m~\\cfrac{\\partial^2 w}{\\partial t^2} = q(x,t)\n "
},
{
"math_id": 43,
"text": "w(x,t)"
},
{
"math_id": 44,
"text": "m"
},
{
"math_id": 45,
"text": "\n EI~\\cfrac{\\partial^4 w}{\\partial x^4} + m~\\cfrac{\\partial^2 w}{\\partial t^2} = 0\n "
},
{
"math_id": 46,
"text": "\n w(x,t) = \\text{Re}[\\hat{w}(x)~e^{-i\\omega t}] \\quad \\implies \\quad \\cfrac{\\partial^2 w}{\\partial t^2} = -\\omega^2~w(x,t)\n "
},
{
"math_id": 47,
"text": "\n EI~\\cfrac{\\mathrm{d}^4 \\hat{w}}{\\mathrm{d}x^4} - m\\omega^2\\hat{w} = 0\n "
},
{
"math_id": 48,
"text": "\n \\hat{w} = A_1\\cosh(\\beta x) + A_2\\sinh(\\beta x) + A_3\\cos(\\beta x) + A_4\\sin(\\beta x)\n "
},
{
"math_id": 49,
"text": "A_1,A_2,A_3,A_4"
},
{
"math_id": 50,
"text": "\n \\beta := \\left(\\cfrac{m}{EI}~\\omega^2\\right)^{1/4}\n"
},
{
"math_id": 51,
"text": "\n\\begin{align}\n& EI~\\frac{\\partial^4 w}{\\partial x^4} + m~\\frac{\\partial^2 w}{\\partial t^2} - \\left(J + \\frac{E I m}{k A G}\\right)\\frac{\\partial^4 w}{\\partial x^2~\\partial t^2} + \\frac{J m}{k A G}~\\frac{\\partial^4 w}{\\partial t^4} \\\\[6pt]\n= {} & q(x,t) + \\frac{J}{k A G}~\\frac{\\partial^2 q}{\\partial t^2} - \\frac{EI}{k A G}~\\frac{\\partial^2 q}{\\partial x^2}\n\\end{align}\n"
},
{
"math_id": 52,
"text": "J = \\tfrac{mI}{A}"
},
{
"math_id": 53,
"text": "m = \\rho A"
},
{
"math_id": 54,
"text": "\n \\begin{align}\n k &= \\frac{5 + 5\\nu}{6 + 5\\nu} \\quad \\text{rectangular cross-section}\\\\[6pt]\n &= \\frac{6 + 12\\nu + 6\\nu^2}{7 + 12\\nu + 4\\nu^2} \\quad \\text{circular cross-section}\n \\end{align}\n "
},
{
"math_id": 55,
"text": "\n EI~\\cfrac{\\mathrm{d}^4 \\hat{w}}{\\mathrm{d} x^4} + m\\omega^2\\left(\\cfrac{J}{m} + \\cfrac{E I}{k A G}\\right)\\cfrac{\\mathrm{d}^2 \\hat{w}}{\\mathrm{d} x^2} + m\\omega^2\\left(\\cfrac{\\omega^2 J}{k A G}-1\\right)~\\hat{w} = 0\n "
},
{
"math_id": 56,
"text": "e^{kx}"
},
{
"math_id": 57,
"text": "\n \\alpha~k^4 + \\beta~k^2 + \\gamma = 0 ~;~~ \\alpha := EI ~,~~ \\beta := m\\omega^2\\left(\\cfrac{J}{m} + \\cfrac{E I}{k A G}\\right) ~,~~ \\gamma := m\\omega^2\\left(\\cfrac{\\omega^2 J}{k A G}-1\\right)\n "
},
{
"math_id": 58,
"text": "\n k_1 = +\\sqrt{z_+} ~,~~ k_2 = -\\sqrt{z_+} ~,~~ k_3 = +\\sqrt{z_-} ~,~~ k_4 = -\\sqrt{z_-}\n "
},
{
"math_id": 59,
"text": "\n z_+ := \\cfrac{-\\beta + \\sqrt{\\beta^2 - 4\\alpha\\gamma}}{2\\alpha} ~,~~\n z_-:= \\cfrac{-\\beta - \\sqrt{\\beta^2 - 4\\alpha\\gamma}}{2\\alpha}\n "
},
{
"math_id": 60,
"text": "\n \\hat{w} = A_1~e^{k_1 x} + A_2~e^{-k_1 x} + A_3~e^{k_3 x} + A_4~e^{-k_3 x}\n "
},
{
"math_id": 61,
"text": "\n \\begin{align}\n u_\\alpha(\\mathbf{x}) & = - x_3~\\frac{\\partial w^0}{\\partial x_\\alpha}\n = - x_3~w^0_{,\\alpha} ~;~~\\alpha=1,2 \\\\\n u_3(\\mathbf{x}) & = w^0(x_1, x_2)\n \\end{align}\n"
},
{
"math_id": 62,
"text": "\\mathbf{u}"
},
{
"math_id": 63,
"text": "w^0"
},
{
"math_id": 64,
"text": "\n \\begin{align}\n \\varepsilon_{\\alpha\\beta} & =\n - x_3~w^0_{,\\alpha\\beta} \\\\\n \\varepsilon_{\\alpha 3} & = 0 \\\\\n \\varepsilon_{33} & = 0\n \\end{align}\n"
},
{
"math_id": 65,
"text": "\n M_{\\alpha\\beta,\\alpha\\beta} + q(x) = 0 ~;~~ M_{\\alpha\\beta} := \\int_{-h}^h x_3~\\sigma_{\\alpha\\beta}~dx_3\n"
},
{
"math_id": 66,
"text": "\n w^0_{,1111} + 2~w^0_{,1212} + w^0_{,2222} = 0\n "
},
{
"math_id": 67,
"text": "\n \\nabla^2\\nabla^2 w = 0\n "
},
{
"math_id": 68,
"text": "\n \\begin{align}\n u_\\alpha(\\mathbf{x}) & = - x_3~\\varphi_\\alpha ~;~~\\alpha=1,2 \\\\\n u_3(\\mathbf{x}) & = w^0(x_1, x_2)\n \\end{align}\n"
},
{
"math_id": 69,
"text": "\\varphi_\\alpha"
},
{
"math_id": 70,
"text": "\n \\begin{align}\n \\varepsilon_{\\alpha\\beta} & =\n - x_3~\\varphi_{\\alpha,\\beta} \\\\\n \\varepsilon_{\\alpha 3} & = \\cfrac{1}{2}~\\kappa\\left(w^0_{,\\alpha}- \\varphi_\\alpha\\right) \\\\\n \\varepsilon_{33} & = 0\n \\end{align}\n"
},
{
"math_id": 71,
"text": "\\kappa"
},
{
"math_id": 72,
"text": "\n \\begin{align}\n & M_{\\alpha\\beta,\\beta}-Q_\\alpha = 0 \\\\\n & Q_{\\alpha,\\alpha}+q = 0\n \\end{align}\n "
},
{
"math_id": 73,
"text": "\n Q_\\alpha := \\kappa~\\int_{-h}^h \\sigma_{\\alpha 3}~dx_3\n"
},
{
"math_id": 74,
"text": "\n M_{\\alpha\\beta,\\alpha\\beta} - q(x,t) = J_1~\\ddot{w}^0 - J_3~\\ddot{w}^0_{,\\alpha\\alpha}\n"
},
{
"math_id": 75,
"text": "\\rho = \\rho(x)"
},
{
"math_id": 76,
"text": "\n J_1 := \\int_{-h}^h \\rho~dx_3 ~;~~\n J_3 := \\int_{-h}^h x_3^2~\\rho~dx_3\n"
},
{
"math_id": 77,
"text": "\n \\ddot{w}^0 = \\frac{\\partial^2 w^0}{\\partial t^2} ~;~~\n \\ddot{w}^0_{,\\alpha\\beta} = \\frac{\\partial^2 \\ddot{w}^0}{\\partial x_\\alpha\\, \\partial x_\\beta} \n"
}
] | https://en.wikipedia.org/wiki?curid=1255570 |
12557 | Gilles Deleuze | French philosopher (1925–1995)
Gilles Louis René Deleuze ( , ; 18 January 1925 – 4 November 1995) was a French philosopher who, from the early 1950s until his death in 1995, wrote on philosophy, literature, film, and fine art. His most popular works were the two volumes of "Capitalism and Schizophrenia": "Anti-Oedipus" (1972) and "A Thousand Plateaus" (1980), both co-written with psychoanalyst Félix Guattari. His metaphysical treatise "Difference and Repetition" (1968) is considered by many scholars to be his magnum opus.
An important part of Deleuze's oeuvre is devoted to the reading of other philosophers: the Stoics, Leibniz, Hume, Kant, Nietzsche, Spinoza, and Bergson. A. W. Moore, citing Bernard Williams's criteria for a great thinker, ranks Deleuze among the "greatest philosophers". Although he once characterized himself as a "pure metaphysician", his work has influenced a variety of disciplines across the humanities, including philosophy, art, and literary theory, as well as movements such as post-structuralism and postmodernism.
Life.
Early life.
Gilles Deleuze was born into a middle-class family in Paris and lived there for most of his life. His mother was Odette Camaüer and his father, Louis, was an engineer. His initial schooling was undertaken during World War II, during which time he attended the Lycée Carnot. He also spent a year in "khâgne" at the Lycée Henri IV. During the Nazi occupation of France, Deleuze's brother Georges, three years his senior, was arrested for his participation in the French Resistance and died while in transit to a concentration camp. In 1944, Deleuze went to study at the Sorbonne. His teachers there included several noted specialists in the history of philosophy, such as Georges Canguilhem, Jean Hyppolite, Ferdinand Alquié, and Maurice de Gandillac. Deleuze's lifelong interest in the canonical figures of modern philosophy owed much to these teachers.
Career.
Deleuze passed the agrégation in philosophy in 1948, and taught at various lycées (Amiens, Orléans, Louis le Grand) until 1957, when he took up a position at the University of Paris. In 1953, he published his first monograph, "Empiricism and Subjectivity", on David Hume. This monograph was based on his 1947 DES ("diplôme d'études supérieures") thesis, roughly equivalent to an M.A. thesis, which was conducted under the direction of Jean Hyppolite and Georges Canguilhem. From 1960 to 1964, he held a position at the Centre National de Recherche Scientifique. During this time he published the seminal "Nietzsche and Philosophy" (1962) and befriended Michel Foucault. From 1964 to 1969, he was a professor at the University of Lyon. In 1968, Deleuze defended his two DrE dissertations amid the ongoing May 68 demonstrations; he later published his two dissertations under the titles "Difference and Repetition" (supervised by Gandillac) and "Expressionism in Philosophy: Spinoza" (supervised by Alquié).
In 1969, he was appointed to the University of Paris VIII at Vincennes/St. Denis, an experimental school organized to implement educational reform. This new university drew a number of well-known academics, including Foucault (who suggested Deleuze's hiring) and the psychoanalyst Félix Guattari. Deleuze taught at Paris VIII until his retirement in 1987.
Personal life.
Deleuze's outlook on life was sympathetic to transcendental ideas, "nature as god" ethics, and the monist experience. Some of the important ideas he advocated for and found inspiration in include his personally coined expression pluralism = monism, as well as the concepts of Being and Univocity.
He married Denise Paul "Fanny" Grandjouan in 1956 and they had two children.
According to James Miller, Deleuze portrayed little visible interest in actually "doing" many of the risky things he so vividly conjured up in his lectures and writing. Married, with two children, he outwardly lived the life of a conventional French professor. He kept his fingernails untrimmed because, as he once explained, he lacked "normal protective fingerprints", and therefore could not "touch an object, particularly a piece of cloth, with the pads of my fingers without sharp pain".
When once asked to talk about his life, he replied: "Academics' lives are seldom interesting." Deleuze concluded his reply thus:
<templatestyles src="Template:Blockquote/styles.css" />What do you know about me, given that I believe in secrecy? ... If I stick where I am, if I don't travel around, like anyone else I make my inner journeys that I can only measure by my emotions, and express very obliquely and circuitously in what I write. ... Arguments from one's own privileged experience are bad and reactionary arguments.
Death.
Deleuze, who had suffered from respiratory ailments from a young age, developed tuberculosis in 1968 and underwent lung removal. He suffered increasingly severe respiratory symptoms for the rest of his life. In the last years of his life, simple tasks such as writing required laborious effort. Overwhelmed by his respiratory problems, he died by suicide on 4 November 1995, throwing himself from the window of his apartment.
Before his death, Deleuze had announced his intention to write a book entitled "La Grandeur de Marx" ("The Greatness of Marx"), and left behind two chapters of an unfinished project entitled "Ensembles and Multiplicities" (these chapters have been published as the essays "Immanence: A Life" and "The Actual and the Virtual"). He is buried in the cemetery of the village of Saint-Léonard-de-Noblat.
Philosophy.
Deleuze's works fall into two groups: on the one hand, monographs interpreting the work of other philosophers (Baruch Spinoza, Gottfried Wilhelm Leibniz, David Hume, Immanuel Kant, Friedrich Nietzsche, Henri Bergson, Michel Foucault) and artists (Marcel Proust, Franz Kafka, Francis Bacon); on the other, eclectic philosophical tomes organized by concept (e.g., difference, sense, events, schizophrenia, economy, cinema, desire, philosophy). However, both of these aspects are seen by his critics and analysts as often overlapping, in particular, due to his prose and the unique mapping of his books that allow for multifaceted readings.
Metaphysics.
Deleuze's main philosophical project in the works he wrote prior to his collaborations with Guattari can be summarized as an inversion of the traditional metaphysical relationship between identity and difference. Traditionally, difference is seen as derivative from identity: e.g., to say that "X is different from Y" assumes some X and Y with at least relatively stable identities (as in Plato's forms). On the contrary, Deleuze claims that all identities are effects of difference. Identities are neither logically nor metaphysically prior to difference, Deleuze argues, "given that there exist differences of nature between things of the same genus." That is, not only are no two things ever the same, the categories used to identify individuals in the first place derive from differences. Apparent identities such as "X" are composed of endless series of differences, where "X" = "the difference between x and xformula_0", and "xformula_0" = "the difference between...", and so forth. Difference, in other words, goes all the way down. To confront reality honestly, Deleuze argues, beings must be grasped exactly as they are, and concepts of identity (forms, categories, resemblances, unities of apperception, predicates, etc.) fail to attain what he calls "difference in itself." "If philosophy has a positive and direct relation to things, it is only insofar as philosophy claims to grasp the thing itself, according to what it is, in its difference from everything it is not, in other words, in its "internal difference"."
Like Kant, Deleuze considers traditional notions of space and time as unifying forms imposed by the subject. He, therefore, concludes that pure difference is non-spatiotemporal; it is an idea, what Deleuze calls "the virtual". (The coinage refers to Proust's definition of what is constant in both the past and the present: "real without being actual, ideal without being abstract.") While Deleuze's virtual ideas superficially resemble Plato's forms and Kant's ideas of pure reason, they are not originals or models, nor do they transcend possible experience; instead they are the conditions of actual experience, the internal difference in itself. "The concept they [the conditions] form is identical to its object." A Deleuzean idea or concept of difference is therefore not a wraith-like abstraction of an experienced thing, it is a real system of differential relations that creates actual spaces, times, and sensations.
Thus, Deleuze at times refers to his philosophy as a transcendental empiricism (), alluding to Kant. In Kant's transcendental idealism, experience only makes sense when organized by intuitions (namely, space and time) and concepts (such as causality). Assuming the content of these intuitions and concepts to be qualities of the world as it exists independently of human perceptual access, according to Kant, spawns seductive but senseless metaphysical beliefs (for example, extending the concept of causality beyond possible experience results in unverifiable speculation about a first cause). Deleuze inverts the Kantian arrangement: experience exceeds human concepts by presenting novelty, and this raw experience of difference actualizes an idea, unfettered by prior categories, forcing the invention of new ways of thinking (see "Epistemology").
Simultaneously, Deleuze claims that being is univocal, i.e., that all of its senses are affirmed in one voice. Deleuze borrows the doctrine of "ontological univocity" from the medieval philosopher John Duns Scotus. In medieval disputes over the nature of God, many eminent theologians and philosophers (such as Thomas Aquinas) held that when one says that "God is good", God's goodness is only analogous to human goodness. Scotus argued to the contrary that when one says that "God is good", the goodness in question is exactly the same sort of goodness that is meant when one says "Jane is good". That is, God only differs from humans in degree, and properties such as goodness, power, reason, and so forth are univocally applied, regardless of whether one is talking about God, a person, or a flea.
Deleuze adapts the doctrine of univocity to claim that being is, univocally, difference. "With univocity, however, it is not the differences which are and must be: it is being which is Difference, in the sense that it is said of difference. Moreover, it is not we who are univocal in a Being which is not; it is we and our individuality which remains equivocal in and for a univocal Being." Here Deleuze at once echoes and inverts Spinoza, who maintained that everything that exists is a modification of the one substance, God or Nature. For Deleuze, there is no one substance, only an always-differentiating process, an origami cosmos, always folding, unfolding, refolding. Deleuze summarizes this ontology in the paradoxical formula "pluralism = monism".
"Difference and Repetition" (1968) is Deleuze's most sustained and systematic attempt to work out the details of such a metaphysics, but his other works develop similar ideas. In "Nietzsche and Philosophy" (1962), for example, reality is a play of forces; in "Anti-Oedipus" (1972), a "body without organs"; in "What is Philosophy?" (1991), a "plane of immanence" or "chaosmos".
Epistemology.
Deleuze's unusual metaphysics entails an equally atypical epistemology, or what he calls a transformation of "the image of thought". According to Deleuze, the traditional image of thought, found in philosophers such as Aristotle, René Descartes, and Edmund Husserl, misconceives thinking as a mostly unproblematic business. Truth may be hard to discover—it may require a life of pure theorizing, or rigorous computation, or systematic doubt—but thinking is able, at least in principle, to correctly grasp facts, forms, ideas, etc. It may be practically impossible to attain a God's-eye, neutral point of view, but that is the ideal to approximate: a disinterested pursuit that results in a determinate, fixed truth; an orderly extension of common sense. Deleuze rejects this view as papering over the metaphysical flux, instead claiming that genuine thinking is a violent confrontation with reality, an involuntary rupture of established categories. Truth changes thought; it alters what people think is possible. By setting aside the assumption that thinking has a natural ability to recognize the truth, Deleuze says, people attain a "thought without image", a thought always determined by problems rather than solving them. "All this, however, presupposes codes or axioms which do not result by chance, but which do not have an intrinsic rationality either. It's just like theology: everything about it is quite rational if you accept sin, the immaculate conception, and the incarnation. Reason is always a region carved out of the irrational—not sheltered from the irrational at all, but traversed by it and only defined by a particular kind of relationship among irrational factors. Underneath all reason lies delirium, and drift."
"The Logic of Sense", published in 1969, is one of Deleuze's most peculiar works in the field of epistemology. Michel Foucault, in his essay "Theatrum Philosophicum" about the book, attributed this to how he begins with his metaphysics but approaches it through language and truth; the book is focused on "the simple condition that instead of denouncing metaphysics as the neglect of being, we force it to speak of extrabeing". In it, he refers to epistemological paradoxes: in the first series, as he analyzes Lewis Carroll's "Alice in Wonderland", he remarks that "the personal self requires God and the world in general. But when substantives and adjectives begin to dissolve, when the names of pause and rest are carried away by the verbs of pure becoming and slide into the language of events, all identity disappears from the self, the world, and God."
Deleuze's peculiar readings of the history of philosophy stem from this unusual epistemological perspective. To read a philosopher is no longer to aim at finding a single, correct interpretation, but is instead to present a philosopher's attempt to grapple with the problematic nature of reality. "Philosophers introduce new concepts, they explain them, but they don't tell us, not completely anyway, the problems to which those concepts are a response. [...] The history of philosophy, rather than repeating what a philosopher says, has to say what he must have taken for granted, what he didn't say but is nonetheless present in what he did say."
Likewise, rather than seeing philosophy as a timeless pursuit of truth, reason, or universals, Deleuze defines philosophy as the creation of concepts. For Deleuze, concepts are not identity conditions or propositions, but metaphysical constructions that define a range of thinking, such as Plato's ideas, Descartes's "cogito", or Kant's doctrine of the faculties. A philosophical concept "posits itself and its object at the same time as it is created." In Deleuze's view, then, philosophy more closely resembles practical or artistic production than it does an adjunct to a definitive scientific description of a pre-existing world (as in the tradition of John Locke or Willard Van Orman Quine).
In his later work (from roughly 1981 onward), Deleuze sharply distinguishes art, philosophy, and science as three distinct disciplines, each relating to reality in different ways. While philosophy creates concepts, the arts create novel qualitative combinations of sensation and feeling (what Deleuze calls "percepts" and "affects"), and the sciences create quantitative theories based on fixed points of reference such as the speed of light or absolute zero (which Deleuze calls "functives"). According to Deleuze, none of these disciplines enjoy primacy over the others: they are different ways of organizing the metaphysical flux, "separate melodic lines in constant interplay with one another." For example, Deleuze does not treat cinema as an art representing an external reality, but as an ontological practice that creates different ways of organizing movement and time. Philosophy, science, and art are equally, and essentially, creative and practical. Hence, instead of asking traditional questions of identity such as "is it true?" or "what is it?", Deleuze proposes that inquiries should be functional or practical: "what does it do?" or "how does it work?"
Values.
In ethics and politics Deleuze again echoes Spinoza, albeit in a sharply Nietzschean key. Following his rejection of any metaphysics based on identity, Deleuze criticizes the notion of an individual as an arresting or halting of differentiation (as the etymology of the word "individual" suggests). Guided by the naturalistic ethics of Spinoza and Nietzsche, Deleuze instead seeks to understand individuals and their moralities as products of the organization of pre-individual desires and powers.
In the two volumes of "Capitalism and Schizophrenia", "Anti-Oedipus" (1972) and "A Thousand Plateaus" (1980), Deleuze and Guattari describe history as a congealing and regimentation of "desiring-production" (a concept combining features of Freudian drives and Marxist labor) into the modern individual (typically neurotic and repressed), the nation-state (a society of continuous control), and capitalism (an anarchy domesticated into infantilizing commodification). Deleuze, following Karl Marx, welcomes capitalism's destruction of traditional social hierarchies as liberating but inveighs against its homogenization of all values to the aims of the market.
The first part of "Capitalism and Schizophrenia" undertakes a universal history and posits the existence of a separate socius (the social body that takes credit for production) for each mode of production: the earth for the tribe, the body of the despot for the empire, and capital for capitalism."
In his 1990 essay "Postscript on the Societies of Control" ("Post-scriptum sur les sociétés de contrôle"), Deleuze builds on Foucault's notion of the society of discipline to argue that society is undergoing a shift in structure and control. Where societies of discipline were characterized by discrete physical enclosures (such as schools, factories, prisons, office buildings, etc.), institutions and technologies introduced since World War II have dissolved the boundaries between these enclosures. As a result, social coercion and discipline have moved into the lives of individuals considered as "masses, samples, data, markets, or 'banks'." The mechanisms of modern societies of control are described as continuous, following and tracking individuals throughout their existence via transaction records, mobile location tracking, and other personally identifiable information.
But how does Deleuze square his pessimistic diagnoses with his ethical naturalism? Deleuze claims that standards of value are internal or immanent: to live well is to fully express one's power, to go to the limits of one's potential, rather than to judge what exists by non-empirical, transcendent standards. Modern society still suppresses difference and alienates people from what they can do. To affirm reality, which is a flux of change and difference, established identities must be overturned and so become all that they can become—though exactly what cannot be known in advance. The pinnacle of Deleuzean practice, then, is creativity. "Herein, perhaps, lies the secret: to bring into existence and not to judge. If it is so disgusting to judge, it is not because everything is of equal value, but on the contrary, because what has value can be made or distinguished only by defying judgment. What expert judgment, in art, could ever bear on the work to come?"
Deleuze's interpretations.
Deleuze's studies of individual philosophers and artists are purposely heterodox. Deleuze once famously described his method of interpreting philosophers as "buggery ("enculage")", as sneaking behind an author and producing an offspring which is recognizably his, yet also monstrous and different.
The various monographs thus are not attempts to present what Nietzsche or Spinoza strictly intended, but re-stagings of their ideas in different and unexpected ways. Deleuze's peculiar readings aim to enact the creativity he believes is the acme of philosophical practice. A parallel in painting Deleuze points to is Francis Bacon's "Study after Velázquez"—it is quite beside the point to say that Bacon "gets Velasquez wrong". Similar considerations apply, in Deleuze's view, to his own uses of mathematical and scientific terms, "pace" critics such as Alan Sokal: "I'm not saying that Resnais and Prigogine, or Godard and Thom, are doing the same thing. I'm pointing out, rather, that there are remarkable similarities between scientific creators of functions and cinematic creators of images. And the same goes for philosophical concepts, since there are distinct concepts of these spaces."
Similarities with Heidegger.
From the 1930s onward, the German philosopher Martin Heidegger wrote, in a series of manuscripts and books, on the concepts of Difference, Identity, Representation, and Event; notable among these is the "Beiträge zur Philosophie (Vom Ereignis)" (written 1936–38; published posthumously in 1989). None of the relevant texts had been translated into French by Deleuze's death in 1995, excluding any strong possibility of direct appropriation. However, Heidegger's early work can be traced through the mathematician Albert Lautman, who drew heavily from Heidegger's "Sein und Zeit" and "Vom Wesen des Grundes" (1928), which James Bahoh describes as having "...decisive influence on the twentieth century mathematician and philosopher [...] whose theory of dialectical Ideas Deleuze appropriated and modified for his own use." The similarities between Heidegger's later, post-turn thought (1930–1976) and Deleuze's early works of the 1960s and 1970s are generally described by the Deleuze scholar Daniel W. Smith in the following way: ""Difference and Repetition" could be read as a response to "Being and Time" (for Deleuze, Being is difference, and time is repetition)." Bahoh continues, saying that "...then "Beiträge" could be read as "Difference and Repetition"'s unknowing and anachronistic doppelgänger." Deleuze's and Heidegger's philosophies are considered to converge on the topics of Difference and the Event. Where, for Heidegger, an evental being is constituted in part by difference as "...an essential dimension of the concept of event", for Deleuze being is difference, and difference "differentiates by way of events." In contrast, however, Jussi Backman argues that, for Heidegger, being is united only insofar as it consists of and "is" difference, or rather the movement of difference, not too dissimilar to Deleuze's later claims: "...the unity and univocity of being (in the sense of being), its 'selfsameness,' paradoxically consists exclusively in difference." This mutual apprehension of a differential, evental ontology led both thinkers into an extended critique of the representation characteristic of Platonic, Aristotelian, and Cartesian thought; as Joe Hughes states: ""Difference and Repetition" is a detective novel. It tells the story of what some readers of Deleuze might consider a horrendous crime [...]: the birth of representation." Heidegger formed his critiques most decisively in the concept of the fourfold (German: "das Geviert"), a non-metaphysical grounding for the thing (as opposed to the "object") as "ungrounded, mediated, meaningful, and shared", united in an "event of appropriation" ("Ereignis"). This evental ontology continues in "Identität und Differenz", where the fundamental concept expressed in "Difference and Repetition", the dethroning of the primacy of identity, can be seen throughout the text. Even in earlier Heideggerian texts such as "Sein und Zeit", however, the critique of representation is "...cast in terms of the being of truth, or the processes of uncovering and covering (grounded in Dasein's existence) whereby beings come into and withdraw from phenomenal presence." In parallel, Deleuze's extended critique of representation (in the sense of also detailing a "genealogy" of the antiquated beliefs) is given "...in terms of being or becoming as difference and repetition, together with genetic processes of individuation whereby beings come to exist and pass out of existence."
Time and space, for both thinkers, are also constituted in nearly identical ways. Time-space in the "Beiträge" and the three syntheses in "Difference and Repetition" both apprehend time as grounded in difference, whilst the distinction between the time-space of the world ["Welt"] and time-space as the evental production of such a time-space is mirrored by Deleuze's distinction between the temporality of the actual and the temporality of the virtual, in the first and the second/third syntheses respectively.
Another parallel can be found in their use of so-called "generative paradoxes," that is, problems whose fundamental problematic element constantly eludes the categorical grasp of the formal, natural, and human sciences. For Heidegger, this is the Earth in the fourfold, which has among its traits the behaviour of "resisting articulation," what he characterizes as a "strife"; for Deleuze, a similar example can be found in the paradox of regress, or of indefinite proliferation, in the "Logic of Sense".
Reception.
In the 1960s, Deleuze's portrayal of Nietzsche as a metaphysician of difference rather than a reactionary mystic contributed greatly to the plausibility and popularity of "left-wing Nietzscheanism" as an intellectual stance. His books "Difference and Repetition" (1968) and "The Logic of Sense" (1969) led Michel Foucault to declare that "one day, perhaps, this century will be called Deleuzian." (Deleuze, for his part, said Foucault's comment was "a joke meant to make people who like us laugh, and make everyone else livid.") In the 1970s, the "Anti-Oedipus", written in a style by turns vulgar and esoteric, offering a sweeping analysis of the family, language, capitalism, and history via eclectic borrowings from Freud, Marx, Nietzsche, and dozens of other writers, was received as a theoretical embodiment of the anarchic spirit of May 1968. In 1994 and 1995, "L'Abécédaire de Gilles Deleuze", an eight-hour series of interviews between Deleuze and Claire Parnet, aired on France's Arte Channel.
In the 1980s and 1990s, almost all of Deleuze's books were translated into English. Deleuze's work is frequently cited in English-speaking academia (in 2007, e.g., he was the 11th most frequently cited author in English-speaking publications in the humanities, between Freud and Kant). In the English-speaking academy, Deleuze's work is typically classified as continental philosophy.
However, some French and Anglophone philosophers have criticized Deleuze's work.
According to Pascal Engel, Deleuze's metaphilosophical approach makes it impossible to reasonably disagree with a philosophical system, and so destroys meaning, truth, and philosophy itself. Engel summarizes Deleuze's metaphilosophy thus: "When faced with a beautiful philosophical concept you should just sit back and admire it. You should not question it."
American philosopher Stanley Rosen objects to Deleuze's interpretation of Nietzsche's eternal return.
Vincent Descombes argues that Deleuze's account of a difference that is not derived from identity (in "Nietzsche and Philosophy") is incoherent.
Slavoj Žižek states that the Deleuze of "Anti-Oedipus" ("arguably Deleuze's worst book"), the "political" Deleuze under the "'bad' influence" of Guattari, ends up, despite protestations to the contrary, as "the ideologist of late capitalism".
Allegations of idealism and negligence of material conditions.
Peter Hallward argues that Deleuze's insistence that being is necessarily creative and always-differentiating entails that his philosophy can offer no insight into, and is supremely indifferent to, the material conditions of existence. Thus Hallward claims that Deleuze's thought is literally other-worldly, aiming only at a passive contemplation of the dissolution of all identity into the theophanic self-creation of nature.
Descombes argues that his analysis of history in "Anti-Oedipus" is 'utter idealism', criticizing reality for falling short of a non-existent ideal of schizophrenic becoming.
Žižek claims that Deleuze's ontology oscillates between materialism and idealism.
Relation with monism.
Alain Badiou claims that Deleuze's metaphysics only apparently embraces plurality and diversity, remaining at bottom monist. Badiou further argues that, in practical matters, Deleuze's monism entails an ascetic, aristocratic fatalism akin to ancient Stoicism.
American philosopher Todd May argues that Deleuze's claim that difference is ontologically primary ultimately contradicts his embrace of immanence, i.e., his monism. However, May believes that Deleuze can discard the primacy-of-difference thesis, and accept a Wittgensteinian holism without significantly altering his practical philosophy.
It has more recently been argued that Deleuze's criticism of the history of philosophy as the metaphysical priority of identity over difference is a false distinction, and that Deleuze inadvertently reaches conclusions akin to such idealist philosophers of identity as Schelling.
Subjectivity and individuality.
Other European philosophers have criticized Deleuze's theory of subjectivity. For example, Manfred Frank claims that Deleuze's theory of individuation as a process of bottomless differentiation fails to explain the unity of consciousness.
Žižek also calls Deleuze to task for allegedly reducing the subject to "just another" substance and thereby failing to grasp the nothingness that, according to Lacan and Žižek, defines subjectivity. What remains worthwhile in Deleuze's oeuvre, Žižek finds, are precisely Deleuze's engagements with virtuality as the product of negativity.
Science wars.
In "Fashionable Nonsense" (1997), physicists Alan Sokal and Jean Bricmont accuse Deleuze of abusing mathematical and scientific terms, particularly by sliding between accepted technical meanings and his own idiosyncratic use of those terms in his works. Sokal and Bricmont state that they don't object to metaphorical reasoning, including with mathematical concepts, but mathematical and scientific terms are useful only insofar as they are precise. They give examples of mathematical concepts being "abused" by taking them out of their intended meaning, rendering the idea into normal language reduces it to truism or nonsense. In their opinion, Deleuze used mathematical concepts about which the typical reader might be not knowledgeable, and thus served to display erudition rather than enlightening the reader. Sokal and Bricmont state that they only deal with the "abuse" of mathematical and scientific concepts and explicitly suspend judgment about Deleuze's wider contributions.
Influence.
Other scholars in continental philosophy, feminist studies, and sexuality studies have received Deleuze's analysis of the sexual dynamics of sadism and masochism with a degree of uncritical celebration following the 1989 Zone Books translation of the 1967 booklet on Leopold von Sacher-Masoch, "Le froid et le cruel" ("Coldness and Cruelty"). As the sexuality historian Alison M. Moore notes, the value Deleuze himself placed on difference is poorly reflected in this booklet, which fails to differentiate between Masoch's own view of his desire and the view imposed upon him by the pathologizing forms of psychiatric thought prevailing in the late nineteenth century, which produced the concept of 'masochism' (a term Masoch himself emphatically rejected).
Smith, Protevi, and Voss note that "Sokal and Bricmont’s 1999 intimations" underestimated Deleuze's awareness of mathematics; they point out several "positive views of Deleuze’s use of mathematics as provocations for [...] his philosophical concepts" and argue that Deleuze's epistemology and ontology can be "brought together" with dynamical systems theory, chaos theory, biology, and geography.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "^\\prime"
}
] | https://en.wikipedia.org/wiki?curid=12557 |
12557839 | 6-cubic honeycomb | The 6-cubic honeycomb or hexeractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 6-space.
It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space.
Constructions.
There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3^4,4}. Another form has two alternating 6-cube facets (like a checkerboard) with Schläfli symbol {4,3^3,3^{1,1}}. The lowest symmetry Wythoff construction has 64 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^(6).
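The count of 64 facets around each vertex can be made concrete with the standard integer-lattice model of this honeycomb (an assumption of the sketch below, not spelled out in the article): placing the vertices at the points of Z^6, each unit 6-cube incident to the origin is selected by a choice of sign along each of the six coordinate axes, so 2^6 = 64 cells surround every vertex, and the lowest-symmetry construction may color each of them independently.

```python
from itertools import product

# Each unit 6-cube touching the origin of the Z^6 lattice is determined
# by choosing +1 or -1 along each of the six coordinate axes.
facets = list(product((-1, 1), repeat=6))

print(len(facets))   # 64 six-cube facets around every vertex
print(facets[0])     # e.g. (-1, -1, -1, -1, -1, -1)
```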
Related honeycombs.
The [4,3^4,4] Coxeter group generates 127 permutations of uniform tessellations, 71 with unique symmetry and 70 with unique geometry. The expanded 6-cubic honeycomb is geometrically identical to the 6-cubic honeycomb.
The "6-cubic honeycomb" can be alternated into the 6-demicubic honeycomb, replacing the 6-cubes with 6-demicubes, and the alternated gaps are filled by 6-orthoplex facets.
Trirectified 6-cubic honeycomb.
A trirectified 6-cubic honeycomb contains all birectified 6-orthoplex facets and is the Voronoi tessellation of the D6* lattice. Facets can be identically colored from a doubled formula_0×2, [[4,3^4,4]] symmetry, alternately colored from formula_0, [4,3^4,4] symmetry, three colors from formula_1, [4,3^3,3^{1,1}] symmetry, and 4 colors from formula_2, [3^{1,1},3,3,3^{1,1}] symmetry. | [
{
"math_id": 0,
"text": "{\\tilde{C}}_6"
},
{
"math_id": 1,
"text": "{\\tilde{B}}_6"
},
{
"math_id": 2,
"text": "{\\tilde{D}}_6"
}
] | https://en.wikipedia.org/wiki?curid=12557839 |