id (stringlengths 2-8) | title (stringlengths 1-130) | text (stringlengths 0-252k) | formulas (listlengths 1-823) | url (stringlengths 38-44)
---|---|---|---|---|
70904996 | Rio scale | Measure for extraterrestrial intelligence events
The Rio scale was proposed in 2000 as a means of quantifying the significance of a SETI detection. The scale was designed by Iván Almár and Jill Tarter to help tell policy-makers how likely, from 0 to 10, it is that an extraterrestrial radio signal has been produced by an intelligent civilization.
The scale is inspired by the Torino scale, which is used to determine the impact risk associated with near-Earth objects. Just as the Torino scale takes into account how significant an object's impact on the planet would be, the Rio scale takes into account how much a public announcement of the discovery of extraterrestrial intelligence would probably impact society.
The IAA SETI Permanent Study Group officially adopted the Rio scale as a way of bringing perspective to claims of extraterrestrial intelligence (ETI) detection, and as an acknowledgement that even false ETI detections may have disastrous consequences.
The scale was modified in 2011 to include a consideration of whether contact was achieved through an interstellar message or a physical extraterrestrial artifact, including all indications of intelligent extraterrestrial life such as technosignatures. A 2.0 version of the scale was proposed in 2018.
Calculation.
In its 2.0 version, the Rio Scale, R, of a given event is calculated as the product of two terms.
formula_0
The first term, Q, is the significance of the consequences of an event. It is determined by considering three factors: the estimated distance to the source of the signal (a value between 0 and 4), the prospects for communicating with the source (a value between 0 and 4) and how likely it is that the sender is aware of humanity (a value between -1 and 2). The value of each factor is determined by answering a question and Q is calculated by summing the three values.
The second term, δ, is the probability that the event actually occurred. Its value is determined by first calculating a term, J, based on three factors: the probability that the signal is real, the probability that it is not instrumental, and the probability that it is not natural or human-made. The values for these factors are determined by answering a questionnaire and J is calculated by summing them. δ is then calculated using the formula "δ" = 10^(-(10 - "J")/2).
The final R value, going from 0 to 10, is the likelihood that the observed event was produced by an intelligent civilization.
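The calculation can be written out in a few lines; the sketch below uses hypothetical factor scores (the actual scale assigns them through a fixed questionnaire):

```python
# A hedged sketch of the Rio 2.0 calculation described above, with hypothetical
# factor scores:  R = Q * delta,  Q = sum of the three consequence scores,
# delta = 10**(-(10 - J)/2),  J = sum of the three credibility scores.
def rio_scale(distance, communication, awareness, j_real, j_not_instrumental, j_not_natural):
    Q = distance + communication + awareness           # significance of consequences, 0..10
    J = j_real + j_not_instrumental + j_not_natural    # credibility of the detection, 0..10
    delta = 10 ** (-(10 - J) / 2)                      # equals 1 only when J = 10
    return Q * delta

# Example: a high-consequence event (Q = 8) reported with middling credibility (J = 6).
print(rio_scale(3, 3, 2, 2, 2, 2))   # 0.08 on the 0-10 Rio scale
```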
References.
| [
{
"math_id": 0,
"text": " R = Q\\delta"
}
]
| https://en.wikipedia.org/wiki?curid=70904996 |
7090506 | Dielectric loss | Amount of electromagnetic energy dissipated by a dielectric material
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle δ or the corresponding loss tangent tan("δ"). Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Electromagnetic field perspective.
For time-varying electromagnetic fields, the electromagnetic energy is typically viewed as waves propagating either through free space, in a transmission line, in a microstrip line, or through a waveguide. Dielectrics are often used in all of these environments to mechanically support electrical conductors and keep them at a fixed separation, or to provide a barrier between different gas pressures yet still transmit electromagnetic power. Maxwell’s equations are solved for the electric and magnetic field components of the propagating waves that satisfy the boundary conditions of the specific environment's geometry. In such electromagnetic analyses, the parameters permittivity ε, permeability μ, and conductivity σ represent the properties of the media through which the waves propagate. The permittivity can have real and imaginary components (the latter excluding σ effects, see below) such that
formula_0
If we assume that we have a wave function such that
formula_1
then Maxwell's curl equation for the magnetic field can be written as:
formula_2
where ε′′ is the imaginary component of permittivity attributed to "bound" charge and dipole relaxation phenomena, which gives rise to energy loss that is indistinguishable from the loss due to the "free" charge conduction that is quantified by σ. The component ε′ represents the familiar lossless permittivity given by the product of the "free space" permittivity and the "relative" real/absolute permittivity, or formula_3
Loss tangent.
The loss tangent is then defined as the ratio (or angle in a complex plane) of the lossy reaction to the electric field E in the curl equation to the lossless reaction:
formula_4
The solution for the electric field of the electromagnetic wave is
formula_5
where formula_6 is the wavenumber computed with the lossless permittivity ε′, and "λ" is the corresponding wavelength.
For dielectrics with small loss, the square root can be approximated using only the zeroth and first order terms of the binomial expansion. Also, tan "δ" ≈ "δ" for small "δ".
formula_7
Since power is electric field intensity squared, it turns out that the power decays with propagation distance z as
formula_8
where "P"o is the power at "z" = 0.
There are often other contributions to power loss for electromagnetic waves that are not included in this expression, such as due to the wall currents of the conductors of a transmission line or waveguide. Also, a similar analysis could be applied to the magnetic permeability where
formula_9
with the subsequent definition of a magnetic loss tangent
formula_10
The electric loss tangent can be similarly defined:
formula_11
upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium).
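These relations can be tied together in a brief numerical sketch; the material parameters below are illustrative assumptions, not values for any particular dielectric:

```python
# Loss tangent and power attenuation in a low-loss dielectric:
#   tan(delta) = (omega*eps'' + sigma) / (omega*eps'),  P(z) = P0*exp(-k*z*tan(delta))
import numpy as np

eps0 = 8.854e-12                 # vacuum permittivity, F/m
mu0 = 4e-7 * np.pi               # vacuum permeability, H/m
f = 2.45e9                       # frequency, Hz (assumed)
omega = 2 * np.pi * f

eps_rel_real = 2.1               # assumed relative eps'
eps_rel_imag = 2e-4              # assumed relative eps''
sigma = 1e-6                     # assumed conductivity, S/m

eps_p = eps_rel_real * eps0
eps_pp = eps_rel_imag * eps0
tan_delta = (omega * eps_pp + sigma) / (omega * eps_p)

k = omega * np.sqrt(mu0 * eps_p)               # lossless wavenumber, rad/m
z = 1.0                                        # propagation distance, m
power_fraction = np.exp(-k * z * tan_delta)    # P(z)/P0

print(tan_delta, power_fraction)
```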
Discrete circuit perspective.
A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. One lumped element model of a capacitor includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR), as shown in the figure below. The ESR represents losses in the capacitor. In a low-loss capacitor the ESR is very small, and in a lossy capacitor the ESR can be large. Note that the ESR is "not" simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity representing the loss due to both the dielectric's conduction electrons and the bound dipole relaxation phenomena mentioned above. Either the conduction electrons or the dipole relaxation typically dominates the loss for a particular dielectric and manufacturing method. For the case of the conduction electrons being the dominant loss, then
formula_12
where "C" is the lossless capacitance.
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's loss tangent is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. The loss tangent is then
formula_13 .
Since the same AC current flows through both "ESR" and "Xc", the loss tangent is also the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor. For this reason, a capacitor's loss tangent is sometimes stated as its "dissipation factor", or the reciprocal of its "quality factor Q", as follows
formula_14
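For the circuit picture, the same relations can be evaluated directly; the component values below are assumptions chosen for illustration, not data for any particular part:

```python
# Loss tangent, dissipation factor and quality factor of a capacitor from its ESR:
#   tan(delta) = ESR / |Xc| = omega * C * ESR = DF = 1/Q
import math

f = 100e3                         # operating frequency, Hz (assumed)
C = 10e-6                         # capacitance, F (assumed)
esr = 5e-3                        # equivalent series resistance, ohm (assumed)

omega = 2 * math.pi * f
Xc = 1.0 / (omega * C)            # magnitude of the capacitive reactance, ohm
tan_delta = esr / Xc              # = omega * C * esr
dissipation_factor = tan_delta
quality_factor = 1.0 / tan_delta

print(Xc, tan_delta, dissipation_factor, quality_factor)
```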
References.
| [
{
"math_id": 0,
"text": " \\varepsilon = \\varepsilon' - j \\varepsilon'' ."
},
{
"math_id": 1,
"text": " \\mathbf E = \\mathbf E_{o}e^{j \\omega t},"
},
{
"math_id": 2,
"text": " \\nabla \\times \\mathbf H = j \\omega \\varepsilon' \\mathbf E + ( \\omega \\varepsilon'' + \\sigma )\\mathbf E "
},
{
"math_id": 3,
"text": "\\varepsilon' = \\varepsilon_0 \\varepsilon'_r."
},
{
"math_id": 4,
"text": " \\tan \\delta = \\frac{\\omega \\varepsilon'' + \\sigma} {\\omega \\varepsilon'} ."
},
{
"math_id": 5,
"text": "E = E_o e^{-j k \\sqrt{1 - j \\tan \\delta} z},"
},
{
"math_id": 6,
"text": "k = \\omega \\sqrt{\\mu \\varepsilon'} = \\tfrac {2 \\pi} {\\lambda} ,"
},
{
"math_id": 7,
"text": "E = E_o e^{- j k \\left(1 - j \\frac{\\tan \\delta}{2}\\right) z} = E_o e^{-k\\frac{\\tan \\delta}{2} z} e^{-j k z},"
},
{
"math_id": 8,
"text": "P = P_o e^{-k z \\tan \\delta},"
},
{
"math_id": 9,
"text": " \\mu = \\mu' - j \\mu'' ,"
},
{
"math_id": 10,
"text": " \\tan \\delta_m = \\frac{\\mu''} {\\mu'} ."
},
{
"math_id": 11,
"text": " \\tan \\delta_e = \\frac{\\varepsilon''} {\\varepsilon'} ,"
},
{
"math_id": 12,
"text": " \\mathrm{ESR} = \\frac {\\sigma} {\\varepsilon' \\omega^2 C} "
},
{
"math_id": 13,
"text": " \\tan \\delta = \\frac {\\mathrm{ESR}} {|X_{c}|} = \\omega C \\cdot \\mathrm{ESR} = \\frac {\\sigma} {\\varepsilon' \\omega} "
},
{
"math_id": 14,
"text": " \\tan \\delta = \\mathrm{DF} = \\frac {1} {Q} ."
}
]
| https://en.wikipedia.org/wiki?curid=7090506 |
709116 | Singular value | Square roots of the eigenvalues of the self-adjoint operator
In mathematics, in particular functional analysis, the singular values of a compact operator formula_0 acting between Hilbert spaces formula_1 and formula_2 are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator formula_3 (where formula_4 denotes the adjoint of formula_5).
The singular values are non-negative real numbers, usually listed in decreasing order ("σ"1("T"), "σ"2("T"), …). The largest singular value "σ"1("T") is equal to the operator norm of "T" (see Min-max theorem).
If "T" acts on Euclidean space formula_6, there is a simple geometric interpretation for the singular values: Consider the image by formula_5 of the unit sphere; this is an ellipsoid, and the lengths of its semi-axes are the singular values of formula_5 (the figure provides an example in formula_7).
The singular values are the absolute values of the eigenvalues of a normal matrix "A", because the spectral theorem can be applied to obtain unitary diagonalization of formula_8 as formula_9. Therefore, formula_10.
Most norms on Hilbert space operators studied are defined using singular values. For example, the Ky Fan-"k"-norm is the sum of first "k" singular values, the trace norm is the sum of all singular values, and the Schatten norm is the "p"th root of the sum of the "p"th powers of the singular values. Note that each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.
In the finite-dimensional case, a matrix can always be decomposed in the form formula_11, where formula_12 and formula_13 are unitary matrices and formula_14 is a rectangular diagonal matrix with the singular values lying on the diagonal. This is the singular value decomposition.
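These statements are easy to check numerically; the sketch below uses an arbitrary random matrix for illustration:

```python
# Singular values from the SVD and three of the properties stated in this article.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(A)                       # A = U diag(s) V*

# Square roots of the eigenvalues of A*A reproduce the singular values.
print(np.allclose(np.sort(s), np.sort(np.sqrt(np.linalg.eigvalsh(A.T @ A)))))

# The largest singular value equals the operator (spectral) norm of A.
print(np.isclose(s[0], np.linalg.norm(A, 2)))

# For a full-rank square matrix, the product of the singular values is |det A|.
print(np.isclose(np.prod(s), abs(np.linalg.det(A))))
```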
Basic properties.
For formula_15 and formula_16:
Min-max theorem for singular values. Here formula_17 is a subspace of formula_18 of dimension formula_19.
formula_20
Matrix transpose and conjugate do not alter singular values.
formula_21
For any unitary formula_22
formula_23
Relation to eigenvalues:
formula_24
Relation to trace:
formula_25.
If formula_26 is full rank, the product of singular values is formula_27.
If formula_28 is full rank, the product of singular values is formula_29.
If formula_8 is full rank, the product of singular values is formula_30.
The smallest singular value.
The smallest singular value of a matrix "A" is "σ"n("A"). It has the following properties for a non-singular matrix A:
Intuitively, if "σ"n("A") is small, then the rows of A are "almost" linearly dependent. If "σ"n("A") = 0, then the rows of A are linearly dependent and A is not invertible.
Inequalities about singular values.
Singular values of sub-matrices.
For formula_31
If formula_32 denotes one of its submatrices obtained by deleting one row or one column, then formula_33
If formula_32 denotes one of its submatrices obtained by deleting one row and one column, then formula_34
If formula_32 denotes an formula_35 submatrix of formula_8 obtained by deleting "k" of its rows and "l" of its columns, then formula_36
Singular values of "A" + "B".
For formula_37
formula_38
formula_39
Singular values of "AB".
For formula_40
formula_41
formula_42
For formula_37
formula_43
Singular values and eigenvalues.
For formula_44:
formula_45
Assume that the eigenvalues of formula_8 are ordered so that formula_46. Then, for formula_47,
formula_48
and, for formula_49,
formula_50
History.
This concept was introduced by Erhard Schmidt in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the "n"th singular number:
formula_51
This formulation made it possible to extend the notion of singular values to operators in Banach space.
Note that there is a more general concept of "s-numbers", which also includes Gelfand and Kolmogorov width.
References.
| [
{
"math_id": 0,
"text": "T: X \\rightarrow Y"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "T^*T"
},
{
"math_id": 4,
"text": "T^*"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "\\Reals ^n"
},
{
"math_id": 7,
"text": "\\Reals^2"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "A = U\\Lambda U^*"
},
{
"math_id": 10,
"text": "\\sqrt{A^* A} = \\sqrt{U \\Lambda^* \\Lambda U^*} = U \\left| \\Lambda \\right| U^*"
},
{
"math_id": 11,
"text": "\\mathbf{U\\Sigma V^*}"
},
{
"math_id": 12,
"text": "\\mathbf{U}"
},
{
"math_id": 13,
"text": "\\mathbf{V^*}"
},
{
"math_id": 14,
"text": "\\mathbf{\\Sigma}"
},
{
"math_id": 15,
"text": "A \\in \\mathbb{C}^{m \\times n}"
},
{
"math_id": 16,
"text": "i = 1,2, \\ldots, \\min \\{m,n\\}"
},
{
"math_id": 17,
"text": "U: \\dim(U) = i"
},
{
"math_id": 18,
"text": "\\mathbb{C}^n"
},
{
"math_id": 19,
"text": "i"
},
{
"math_id": 20,
"text": "\\begin{align}\n \\sigma_i(A) &= \\min_{\\dim(U)=n-i+1} \\max_{\\underset{\\| x \\|_2 = 1}{x \\in U}} \\left\\| Ax \\right\\|_2. \\\\\n \\sigma_i(A) &= \\max_{\\dim(U)=i} \\min_{\\underset{\\| x \\|_2 = 1}{x \\in U}} \\left\\| Ax \\right\\|_2.\n\\end{align}"
},
{
"math_id": 21,
"text": "\\sigma_i(A) = \\sigma_i\\left(A^\\textsf{T}\\right) = \\sigma_i\\left(A^*\\right)."
},
{
"math_id": 22,
"text": "U \\in \\mathbb{C}^{m \\times m}, V \\in \\mathbb{C}^{n \\times n}."
},
{
"math_id": 23,
"text": "\\sigma_i(A) = \\sigma_i(UAV)."
},
{
"math_id": 24,
"text": "\\sigma_i^2(A) = \\lambda_i\\left(AA^*\\right) = \\lambda_i\\left(A^*A\\right)."
},
{
"math_id": 25,
"text": "\\sum_{i=1}^n \\sigma_i^2=\\text{tr}\\ A^\\ast A"
},
{
"math_id": 26,
"text": "A^\\top A"
},
{
"math_id": 27,
"text": "\\sqrt{\\det A^\\top A}"
},
{
"math_id": 28,
"text": "A A^\\top"
},
{
"math_id": 29,
"text": "\\sqrt{\\det A A^\\top}"
},
{
"math_id": 30,
"text": "|\\det A|"
},
{
"math_id": 31,
"text": "A \\in \\mathbb{C}^{m \\times n}."
},
{
"math_id": 32,
"text": "B"
},
{
"math_id": 33,
"text": "\\sigma_{i+1}(A) \\leq \\sigma_i (B) \\leq \\sigma_i(A)"
},
{
"math_id": 34,
"text": "\\sigma_{i+2}(A) \\leq \\sigma_i (B) \\leq \\sigma_i(A)"
},
{
"math_id": 35,
"text": "(m-k)\\times(n-l)"
},
{
"math_id": 36,
"text": "\\sigma_{i+k+l}(A) \\leq \\sigma_i (B) \\leq \\sigma_i(A)"
},
{
"math_id": 37,
"text": "A, B \\in \\mathbb{C}^{m \\times n}"
},
{
"math_id": 38,
"text": "\\sum_{i=1}^{k} \\sigma_i(A + B) \\leq \\sum_{i=1}^{k} (\\sigma_i(A) + \\sigma_i(B)), \\quad k=\\min \\{m,n\\}"
},
{
"math_id": 39,
"text": "\\sigma_{i+j-1}(A + B) \\leq \\sigma_i(A) + \\sigma_j(B). \\quad i,j\\in\\mathbb{N},\\ i + j - 1 \\leq \\min \\{m,n\\}"
},
{
"math_id": 40,
"text": "A, B \\in \\mathbb{C}^{n \\times n}"
},
{
"math_id": 41,
"text": "\\begin{align}\n \\prod_{i=n}^{i=n-k+1} \\sigma_i(A) \\sigma_i(B) &\\leq \\prod_{i=n}^{i=n-k+1} \\sigma_i(AB) \\\\\n \\prod_{i=1}^k \\sigma_i(AB) &\\leq \\prod_{i=1}^k \\sigma_i(A) \\sigma_i(B), \\\\\n \\sum_{i=1}^k \\sigma_i^p(AB) &\\leq \\sum_{i=1}^k \\sigma_i^p(A) \\sigma_i^p(B),\n\\end{align}"
},
{
"math_id": 42,
"text": "\\sigma_n(A) \\sigma_i(B) \\leq \\sigma_i (AB) \\leq \\sigma_1(A) \\sigma_i(B) \\quad i = 1, 2, \\ldots, n. "
},
{
"math_id": 43,
"text": "2 \\sigma_i(A B^*) \\leq \\sigma_i \\left(A^* A + B^* B\\right), \\quad i = 1, 2, \\ldots, n. "
},
{
"math_id": 44,
"text": "A \\in \\mathbb{C}^{n \\times n}"
},
{
"math_id": 45,
"text": "\\lambda_i \\left(A + A^*\\right) \\leq 2 \\sigma_i(A), \\quad i = 1, 2, \\ldots, n."
},
{
"math_id": 46,
"text": "\\left|\\lambda_1(A)\\right| \\geq \\cdots \\geq \\left|\\lambda_n(A)\\right|"
},
{
"math_id": 47,
"text": "k = 1, 2, \\ldots, n"
},
{
"math_id": 48,
"text": " \\prod_{i=1}^k \\left|\\lambda_i(A)\\right| \\leq \\prod_{i=1}^{k} \\sigma_i(A)."
},
{
"math_id": 49,
"text": "p>0"
},
{
"math_id": 50,
"text": " \\sum_{i=1}^k \\left|\\lambda_i^p(A)\\right| \\leq \\sum_{i=1}^{k} \\sigma_i^p(A)."
},
{
"math_id": 51,
"text": "\\sigma_n(T) = \\inf\\big\\{\\, \\|T-L\\| : L\\text{ is an operator of finite rank }<n \\,\\big\\}."
}
]
| https://en.wikipedia.org/wiki?curid=709116 |
70912899 | Sheth–Tormen approximation | Halo mass function
The Sheth–Tormen approximation is a halo mass function.
Background.
The Sheth–Tormen approximation extends the Press–Schechter formalism by assuming that halos are not necessarily spherical, but merely elliptical. The distribution of the density fluctuation is as follows: formula_0, where formula_1, formula_2, and formula_3. The parameters were empirically obtained from the five-year release of WMAP.
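The fitting function can be evaluated directly; the sketch below simply transcribes the expression above with the quoted parameter values (the σr values are arbitrary examples):

```python
# Sheth-Tormen multiplicity function with delta_c = 1.686, a = 0.707, A = 0.3222.
import numpy as np

def sheth_tormen_f(sigma_r, delta_c=1.686, a=0.707, A=0.3222):
    nu = delta_c / sigma_r
    return (A * np.sqrt(2.0 * a / np.pi)
            * (1.0 + (sigma_r**2 / (a * delta_c**2))**0.3)
            * nu
            * np.exp(-a * delta_c**2 / (2.0 * sigma_r**2)))

for sigma_r in (0.5, 1.0, 2.0):
    print(sigma_r, sheth_tormen_f(sigma_r))
```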
Discrepancies with simulations.
In 2010, the Bolshoi Cosmological Simulation predicted that the Sheth–Tormen approximation is inaccurate for the most distant objects. Specifically, the Sheth–Tormen approximation overpredicts the abundance of haloes by a factor of formula_4 for objects with a redshift formula_5, but is accurate at low redshifts.
References.
| [
{
"math_id": 0,
"text": "f(\\sigma_r)=A\\sqrt{\\frac{2a}{\\pi}}[1+(\\frac{\\sigma_r^2}{a\\delta_c^2})^{0.3}]\\frac{\\delta_c}{\\sigma_r}\\exp(-\\frac{a\\delta_c^2}{2\\sigma_r^2})"
},
{
"math_id": 1,
"text": "\\delta_c=1.686"
},
{
"math_id": 2,
"text": "a=0.707"
},
{
"math_id": 3,
"text": "A=0.3222"
},
{
"math_id": 4,
"text": "10"
},
{
"math_id": 5,
"text": "z>10"
}
]
| https://en.wikipedia.org/wiki?curid=70912899 |
70917667 | Exceptional point | Singularities in the parameter space
In quantum physics, exceptional points are singularities in the parameter space where two or more eigenstates (eigenvalues and eigenvectors) coalesce. These points appear in dissipative systems, which make the Hamiltonian describing the system non-Hermitian.
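As a minimal illustration (a generic two-level toy model, not a specific physical system), the non-Hermitian matrix below has eigenvalues ±√(g² − γ²) that coalesce, together with their eigenvectors, at g = γ:

```python
# Eigenvalue and eigenvector coalescence at an exceptional point of the 2x2
# non-Hermitian Hamiltonian H(g) = [[i*gamma, g], [g, -i*gamma]].
import numpy as np

gamma = 1.0
for g in (0.5, 1.0, 1.5):
    H = np.array([[1j * gamma, g], [g, -1j * gamma]])
    vals, vecs = np.linalg.eig(H)
    # An overlap |<v1|v2>| approaching 1 signals that the two eigenvectors coalesce.
    overlap = abs(np.vdot(vecs[:, 0], vecs[:, 1]))
    print(g, np.round(vals, 6), round(overlap, 3))
```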
Photonics.
Losses in photonic systems are a feature used to study non-Hermitian physics. Adding non-Hermiticity (such as dichroism) to photonic systems in which Dirac points exist transforms these degeneracy points into pairs of exceptional points. This has been demonstrated experimentally in numerous photonic systems such as microcavities and photonic crystals. The first demonstration of exceptional points was made by Woldemar Voigt in 1902 in a crystal.
Fidelity and fidelity susceptibility.
In condensed matter and many-body physics, fidelity is often used to detect quantum phase transitions in parameter space. The definition of fidelity is the inner product of the ground state wave functions of two adjacent points in parameter space, formula_0, where formula_1 is a small quantity. After series expansion, formula_2, the first-order correction term of fidelity is zero, and the coefficient of the second-order correction term is called the fidelity susceptibility. The fidelity susceptibility diverges toward positive infinity as the parameters approach the quantum phase transition point.
formula_3
For the exceptional points of non-Hermitian quantum systems, after appropriately generalizing the definition of fidelity,
formula_4
the real part of the fidelity susceptibility diverges toward negative infinity when the parameters approach the exceptional points.
formula_5
For non-Hermitian quantum systems with PT symmetry, fidelity can be used to analyze whether exceptional points are of higher order. Many numerical methods, such as the Lanczos algorithm, the density matrix renormalization group (DMRG), and other tensor network algorithms, can readily compute only the ground state and have difficulty computing the excited states. Because fidelity requires only ground-state calculations, this approach allows most numerical methods to analyze non-Hermitian systems without excited states, to find the exceptional point, and to determine whether it is a higher-order exceptional point.
References.
| [
{
"math_id": 0,
"text": "F=|\\langle\\psi_0(\\lambda)|\\psi_0(\\lambda+\\epsilon)\\rangle|^2"
},
{
"math_id": 1,
"text": "\\epsilon"
},
{
"math_id": 2,
"text": "F=1-\\chi_F\\epsilon^2+\\mathcal{O}(\\epsilon^3)"
},
{
"math_id": 3,
"text": "\\lim_{\\lambda\\to\\lambda_{\\mathrm{QCP}}}\\mathbb{Re}\\chi_F=\\infty"
},
{
"math_id": 4,
"text": "F=\\langle\\psi_0^L(\\lambda)|\\psi_0^R(\\lambda+\\epsilon)\\rangle\\langle\\psi_0^L(\\lambda+\\epsilon)|\\psi_0^R(\\lambda)\\rangle"
},
{
"math_id": 5,
"text": "\\lim_{\\lambda\\to\\lambda_{\\mathrm{EP}}}\\mathbb{Re}\\chi_F=-\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=70917667 |
70923477 | Note G | Computer algorithm
Note G is a computer algorithm written by Ada Lovelace that was designed to calculate Bernoulli numbers using the hypothetical analytical engine. Note G is generally agreed to be the first algorithm specifically for a computer, and Lovelace is considered as the first computer programmer as a result. The algorithm was the last note in a series labelled A to G, which she employed as visual aids to accompany her English translation of Luigi Menabrea's 1842 French transcription of Charles Babbage's lecture on the analytical engine at the University of Turin, "Notions sur la machine analytique de Charles Babbage" ("Elements of Charles Babbage’s Analytical Machine"). Lovelace's Note G was never tested, as the engine was never built. Her notes, along with her translation, were published in 1843.
In the modern era, thanks to more readily available computing equipment and programming resources, Lovelace's algorithm has since been tested, after being "translated" into modern programming languages. These tests have independently concluded that there was a bug in the script, due to a minor typographical error, rendering the algorithm in its original state unusable.
Origin.
In 1840, Charles Babbage was invited to give a seminar in Turin on his analytical engine, the only public explanation he ever gave on the engine. During Babbage's lecture, mathematician Luigi Menabrea wrote an account of the engine in French. A friend of Babbage's, Charles Wheatstone, suggested that in order to contribute, Lovelace should translate Menabrea's account. Babbage suggested that she augment the account with appendices, which she compiled at the end of her translation as a series of seven "notes" labelled A-G. Her translation was published in August 1843, in Taylor's Scientific Memoirs, wherein Lovelace's name was signed "A.A.L". In these notes, Lovelace described the capabilities of Babbage's analytical engine if it were to be used for computing, laying out a more ambitious plan for the engine than even Babbage himself had.
Lovelace's notes for the article were three times longer than the article itself. In the first notes, she explores beyond the numerical ambitions that Babbage had for the machine, and suggests the machine could take advantage of computation in order to deal with the realms of music, graphics, and language.
Again, it might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
She explains to readers how the analytical engine was separate from Babbage's earlier difference engine, and likens its function to the Jacquard machine, in that it used binary punch cards to denote machine language. In note C, this point is furthered by the fact that simultaneous and iterated actions can be made by the machine, ensuring that any card or collection of cards can be used several times in the solution of a single problem, essentially anticipating modern methods of control flow and looping. These ideas were brought to a head in the final note, G, where Lovelace sought to demonstrate an example of computation.
Note G made use of only the four arithmetical operations: addition, subtraction, multiplication and division, implementing Babbage's vision:
Under the impossibility of my here explaining the process through which this end is attained, we must limit ourselves to admitting that the first four operations of arithmetic, that is addition, subtraction, multiplication and division, can be performed in a direct manner through the intervention of the machine. This granted, the machine is thence capable of performing every species of numerical calculation, for all such calculations ultimately resolve themselves into the four operations we have just named.
It also uses Babbage's idea of storing information in columns of discs, each denoted by formula_0 (for variable) and a subscript number denoting which column is being referred to.
Function.
Lovelace used a recursive equation to calculate Bernoulli numbers, wherein she used the previous values in an equation to generate the next one. Her method ran thus:
formula_1
formula_2
where formula_3 is a binomial coefficient,
formula_4.
Bernoulli numbers can be calculated in many ways, but Lovelace deliberately chose an elaborate method in order to demonstrate the power of the engine. In Note G, she states: "We will terminate these Notes by following up in detail the steps through which the engine could compute the Numbers of Bernoulli, this being (in the form in which we shall deduce it) a rather complicated example of its powers." The particular algorithm used by Lovelace in Note G generates the eighth Bernoulli number (labelled as formula_5, as she started with formula_6.)
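Although the engine itself was never built, the recurrence above can be checked with a short modern sketch (in Python, using exact rational arithmetic); this is an illustration of the mathematics only, not a transcription of Lovelace's table of operations.

```python
# Bernoulli numbers from the recurrence quoted above:
#   B_n = -sum_{k=0}^{n-1} n!/((n+1-k)! k!) * B_k,
# using n!/((n+1-k)! k!) = C(n+1, k)/(n+1).  This recurrence yields B_1 = -1/2.
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    B = [Fraction(1)]                                   # B_0 = 1
    for n in range(1, n_max + 1):
        B.append(-sum(Fraction(comb(n + 1, k), n + 1) * B[k] for k in range(n)))
    return B

# The program's goal value, Lovelace's B_7, equals -1/30 (B_8 in the modern
# even-index convention).
print(bernoulli(8)[8])   # Fraction(-1, 30)
```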
Notation.
The table of the algorithm organises each command in order. Each command denotes one operation being made on two terms. The second column states only the operator being used. Variables are notated as "formula_0", where the superscript before it represents the number of different values the variable has been assigned to, and the subscript after it represents the ordinal assignment of the variable, that is, which variable it is. (For example, formula_7 refers to the second assignment of variable number 4. Any variables hitherto undefined have a superscript of 0.) The variables are numbered starting from formula_8. The third column tells the computer exactly what command is taking place, and only incorporates one operation between two terms per line (for example, on line 1, the command performed is "formula_9" - the first iteration of variable 2 multiplied by the first iteration of variable 3). Column 4 - "Variables receiving results" - takes note of where the result of the operation in column 3 should be stored. In this way, any variables in this column have their superscript number incremented by one each time. (e.g. on line 1, the result of formula_9 is assigned to variables formula_10, formula_11, and formula_12.)
Column 5 states whether either of the variables used in the operation of the command has been changed. Enclosed in curly braces, two rows per command put the original variable on the left side of an equals sign, and the new variable on the other side - that is, if the variable has been changed, its superscript is incremented by one, and if not, it remains the same. (For example, line three assigns the result of formula_13 to the second iteration of the variable formula_11, and the fifth column reflects this by noting:
formula_14
formula_11 has changed, but formula_15 hasn't.)
In column 6, "Statement of Results", the result assigned to the variable in column 4 is shown in its exact value based on the values of the two terms previously assigned. (e.g. on line 1 - formula_9 - formula_16 was set at the beginning to be formula_17, and formula_18 was set to be the variable formula_19. Therefore, formula_20, in mathematical notation.) This column is ostensibly not computed by the engine, and appears to be more to aid clarity and the reader's ability to follow the steps of the program. (For example, line 5 has a fraction being divided by two, which is notated as it being multiplied by a half, probably for coherence and to avoid the typographical complexity of a nested fraction.) It also makes use of separate variable notation outside of the program, the formula_21 and formula_22 variables, which are multiplied successively to find the final value, formula_5, thus:
formula_23
Beyond this, each successive column shows the values of a given variable over time. Each time a variable either changes, or has its value become relevant by token of its presence as one of the terms in the current command, its value is stated or restated in its respective column. Otherwise, it is marked with an ellipsis to denote its irrelevancy. This presumably mimics the computer's need for only relevant information, thereby tracking the value of a variable as the program parses.
Method.
The program sought to calculate what is known by modern convention as the eighth Bernoulli number, listed as formula_5, as Lovelace begins counting from formula_6.
Error.
In operation 4, the division supposedly taking place is "formula_24", to be stored in variable formula_25. However, the "Statement of results" says that the division should be:
formula_26
As a matter of fact, the division is the wrong way round; formula_27 is the second iteration of formula_10, as can be seen in operation 2. Likewise, formula_28 is the second iteration of formula_11, as can be seen in operation 3. Thus, operation 4 should not be formula_24, but rather formula_29. This bug means that if the engine were ever to run this algorithm in this state, it would fail to generate Bernoulli numbers correctly, and would find its final goal value (the eighth Bernoulli number, formula_30) to be formula_31.
Modern implementations.
Lovelace's program can be implemented in a modern programming language, though due to the above stated error, if transcribed exactly it would return an incorrect final value for formula_5. The original program generalised in pseudocode follows as thus:
The implementation in pseudocode highlights the fact that computer languages define variables on a stack, which obviates the need for tracking and specifying the current iteration of a variable. In addition, Lovelace's program only allowed for variables to be defined by performing addition, subtraction, multiplication or division on two terms that were previously defined variables. Modern syntax would be capable of performing each calculation more concisely. This restriction becomes apparent in a few places, for example on command 6 (formula_32). Here Lovelace defines a hitherto undefined variable (formula_33) by itself, thereby assuming that all undefined variables are automatically equal to 0, where most modern programming languages would return an error or list the variable as null. What she intended was "formula_34", but had constrained herself to only using variables as terms. Likewise, in command 8 (formula_35), the strict notation of two-term arithmetic becomes cumbersome, as in order to define formula_36 as 2, Lovelace assigns its value (0) to itself plus formula_16 (2). It is due to this restrictive notation that formula_36 is defined thus.
Notes.
References.
Sources.
| [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "B_n = -\\sum_{k=0}^{n-1} \\frac{n!}{(n+1-k)!\\cdot k!} B_k "
},
{
"math_id": 2,
"text": "B_n = -\\sum_{k=0}^{n-1} \\binom{n}{k} \\frac{B_k}{n+1-k}"
},
{
"math_id": 3,
"text": "\\binom{n}{k}"
},
{
"math_id": 4,
"text": "\\displaystyle \\binom{n}{k} = \\frac{n!}{k!(n-k)!}"
},
{
"math_id": 5,
"text": "B_7"
},
{
"math_id": 6,
"text": "B_0"
},
{
"math_id": 7,
"text": "{}^2V_4"
},
{
"math_id": 8,
"text": "V_0"
},
{
"math_id": 9,
"text": "{}^1V_2 \\times {}^1V_3"
},
{
"math_id": 10,
"text": "V_4"
},
{
"math_id": 11,
"text": "V_5"
},
{
"math_id": 12,
"text": "V_6"
},
{
"math_id": 13,
"text": "{}^1V_5 + {}^1V_1"
},
{
"math_id": 14,
"text": "\\begin{Bmatrix}{}^1V_5 = {}^2V_5 \\\\ {}^1V_1 = {}^1V_1 \\end{Bmatrix}"
},
{
"math_id": 15,
"text": "V_1"
},
{
"math_id": 16,
"text": "V_2"
},
{
"math_id": 17,
"text": "2"
},
{
"math_id": 18,
"text": "V_3"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "{}^1V_2 \\times {}^1V_3 = 2n"
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "B_7=-1(A_0+B_1A_1+B_3A_3+B_5A_5)"
},
{
"math_id": 24,
"text": "{}^2V_5 \\div {}^2V_4"
},
{
"math_id": 25,
"text": "{}^1V_{11}"
},
{
"math_id": 26,
"text": "\\frac{2n-1}{2n+1}"
},
{
"math_id": 27,
"text": "2n-1"
},
{
"math_id": 28,
"text": "2n+1"
},
{
"math_id": 29,
"text": "{}^2V_4 \\div {}^2V_5"
},
{
"math_id": 30,
"text": "-\\tfrac{1}{30}"
},
{
"math_id": 31,
"text": "-\\tfrac{25621}{630}"
},
{
"math_id": 32,
"text": "V_{13} = V_{13}-V_{11}"
},
{
"math_id": 33,
"text": "V_{13}"
},
{
"math_id": 34,
"text": "0-V_{11}"
},
{
"math_id": 35,
"text": "V_7 = V_2 + V_7"
},
{
"math_id": 36,
"text": "V_7"
}
]
| https://en.wikipedia.org/wiki?curid=70923477 |
70926536 | Mean transverse energy | Accelerator physics energy quantity
In accelerator physics, the mean transverse energy (MTE) is a quantity that describes the variance of the transverse momentum of a beam. While the quantity has a defined value for any particle beam, it is generally used in the context of photoinjectors for electron beams.
Definition.
For a beam consisting of formula_0 particles with momenta formula_1 and mass formula_2 traveling predominantly in the formula_3 direction, the mean transverse energy is given by
formula_4
where formula_5 is the component of the momentum formula_1 perpendicular to the beam axis formula_3. For a continuous, normalized distribution of particles formula_6 the MTE is
formula_7
Relation to Other Quantities.
Emittance is a common quantity in beam physics which describes the volume of a beam in phase space, and is normally conserved through typical linear beam transformations; for example, one may transition from a beam with a large spatial size and a small momentum spread to one with a small spatial size and a large momentum spread, both cases retaining the same emittance. Due to its conservation, the emittance at the species source (e.g., photocathode for electrons) is the lower limit on attainable emittance.
For a beam born with a spatial size formula_8 and a 1-D MTE the minimum 2-D (formula_9 and formula_10) emittance is
formula_11
The emittance of each dimension may be multiplied together to get the higher dimensional emittance. For a photocathode the spatial size of the beam is typically equal to the spatial size of the ionizing laser beam, and the MTE may depend on several factors involving the cathode, the laser, and the extraction field. Due to the linear independence of the laser spot size and the MTE, the beam size is often factored out, defining the 1-D thermal emittance
formula_12
Likewise, the maximum brightness, or phase space density, is given by
formula_13
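As a numerical sketch (with an assumed cathode temperature and laser spot size, not measured values), the sampled-particle definition of the MTE and the resulting thermal emittance can be evaluated as follows:

```python
# MTE of a sampled electron bunch and the corresponding 1-D thermal emittance.
import numpy as np

m_e_c2 = 0.511e6                    # electron rest energy, eV
kT = 0.025                          # assumed transverse temperature, eV

rng = np.random.default_rng(1)
N = 100_000
p_th = np.sqrt(kT * m_e_c2)         # rms transverse momentum per axis, eV/c
px, py = rng.normal(0.0, p_th, (2, N))

# MTE = <p_perp^2 / (2m)>; with momenta in eV/c this is <p^2> / (2 m c^2) in eV.
mte = np.mean((px**2 + py**2) / (2.0 * m_e_c2))

sigma_x = 1e-3                                 # rms laser spot size, m (assumed)
emittance = sigma_x * np.sqrt(mte / m_e_c2)    # minimum normalized emittance, m*rad

print(mte, emittance)               # roughly 0.025 eV and 2.2e-7 m*rad
```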
References.
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\mathbf{p_{i}}"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "\\hat{n}"
},
{
"math_id": 4,
"text": " \\text{MTE} = \\frac{1}{N} \\sum_{i} \\frac{ \\mathbf{p^{2}_{i,\\perp}} }{2m} "
},
{
"math_id": 5,
"text": " \\mathbf{p_{\\perp}} "
},
{
"math_id": 6,
"text": "f(\\mathbf{p_{\\perp}}, \\mathbf{p_\\parallel})"
},
{
"math_id": 7,
"text": " \\text{MTE} = \\int \\frac{ \\mathbf{p^{2}_{\\perp}} }{2m} f(\\mathbf{p_{\\perp}}, \\mathbf{p_\\parallel}) \\,dp_{\\parallel} \\,d^{2}p_{\\perp}"
},
{
"math_id": 8,
"text": "\\sigma_x"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "p_x"
},
{
"math_id": 11,
"text": " \\varepsilon = \\sigma_x \\sqrt{\\frac{ \\text{MTE} }{mc^2}}"
},
{
"math_id": 12,
"text": "\\varepsilon_{\\text{th}} = \\sqrt{\\frac{\\text{MTE}}{m_ec^2}}"
},
{
"math_id": 13,
"text": " B_{n,4D} = \\frac{m_0 c^2 \\epsilon_0 E_0}{2\\pi MTE} "
}
]
| https://en.wikipedia.org/wiki?curid=70926536 |
7092764 | Characteristic mode analysis | Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalize the operator relating the field and the induced sources. Under certain conditions, the set of the CM is unique and complete (at least theoretically) and thereby capable of describing the behavior of a studied object in full.
This article deals with characteristic mode decomposition in electromagnetics, a domain in which the CM theory has originally been proposed.
Background.
CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. The theory was subsequently generalized by Harrington and Mautz for antennas. Harrington, Mautz and their students also successively developed several other extensions of the theory. Even though some precursors were published back in the late 1940s, the full potential of CM remained unrecognized for an additional 40 years. The capabilities of CM were revisited in 2007 and, since then, interest in CM has dramatically increased. The subsequent boom of CM theory is reflected by the number of prominent publications and applications.
Definition.
For simplicity, only the original form of the CM – formulated for perfectly electrically conducting (PEC) bodies in free space – will be treated in this article. The electromagnetic quantities are represented solely as Fourier images in the frequency domain. The Lorenz gauge is used.
The scattering of an electromagnetic wave on a PEC body is represented via a boundary condition on the PEC body, namely
formula_1
with formula_2 representing the unit normal to the PEC surface, formula_3 representing the incident electric field intensity, and formula_4 representing the scattered electric field intensity, defined as
formula_5
with formula_6 being imaginary unit, formula_7 being angular frequency, formula_8 being vector potential
formula_9
formula_10 being vacuum permeability, formula_11 being scalar potential
formula_12
formula_13 being vacuum permittivity, formula_14 being scalar Green's function
formula_15
and formula_16 being wavenumber. The integro-differential operator formula_17 is the one to be diagonalized via characteristic modes.
The governing equation of the CM decomposition is
formula_18
with formula_19 and formula_20 being the real and imaginary parts of the impedance operator, respectively: formula_21 The operator formula_22 is defined by
formula_23
The outcome of (1) is a set of characteristic modes formula_24, formula_25, accompanied by associated characteristic numbers formula_26. Clearly, (1) is a generalized eigenvalue problem which, however, cannot be solved analytically (except for a few canonical bodies). Therefore, the numerical solution described in the following paragraph is commonly employed.
Matrix formulation.
Discretization formula_27 of the body of the scatterer formula_0 into formula_28 subdomains as formula_29 and using a set of linearly independent piece-wise continuous functions formula_30, formula_31, allows current density formula_32 to be represented as
formula_33
and, by applying the Galerkin method, the impedance operator (2) is represented by the impedance matrix
formula_34
The eigenvalue problem (1) is then recast into its matrix form
formula_35
which can easily be solved using, e.g., the generalized Schur decomposition or the implicitly restarted Arnoldi method yielding a finite set of expansion coefficients formula_36 and associated characteristic numbers formula_26. The properties of the CM decomposition are investigated below.
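As a sketch of this step (with a random stand-in matrix in place of a real method-of-moments impedance matrix), the generalized eigenvalue problem and the diagonalization property discussed below can be reproduced with a dense solver:

```python
# Characteristic-mode eigenvalue problem X I_n = lambda_n R I_n for a stand-in
# real symmetric X and symmetric positive-definite R (in practice both come from
# the method-of-moments impedance matrix Z = R + jX).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 6
X = rng.standard_normal((N, N))
X = X + X.T                          # real symmetric "reactance" part
B = rng.standard_normal((N, N))
R = B @ B.T + 1e-6 * np.eye(N)       # symmetric positive-definite "resistance" part

lam, modes = eigh(X, R)              # generalized symmetric eigenproblem

# Normalize each mode to unit radiated power, (1/2) I_n^H R I_n = 1, and verify
# the diagonalization property (1/2) I_m^H Z I_n = (1 + j*lambda_n) delta_mn.
modes = modes / np.sqrt(0.5 * np.sum(modes * (R @ modes), axis=0))
P = 0.5 * modes.conj().T @ (R + 1j * X) @ modes
print(np.allclose(P, np.diag(1.0 + 1j * lam)))
```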
Properties.
The properties of CM decomposition are demonstrated in its matrix form.
First, recall that the bilinear forms
formula_37
and
formula_38
where superscript formula_39 denotes the Hermitian transpose and where formula_40 represents an arbitrary surface current distribution, correspond to the radiated power and the reactive net power, respectively. The following properties can then be easily distilled: since both formula_41 and formula_42 are real and symmetric matrices, the characteristic numbers formula_26 and the characteristic vectors formula_50 are real-valued. The Rayleigh quotient
formula_43
then spans the range of formula_44 and indicates whether the characteristic mode is capacitive (formula_45), inductive (formula_46), or in resonance (formula_47). In reality, the Rayleigh quotient is limited by the numerical dynamics of the machine precision used and the number of correctly found modes is limited.
formula_51
This last relation demonstrates the ability of characteristic modes to diagonalize the impedance operator (2) and shows far-field orthogonality, i.e.,
formula_52
Modal quantities.
The modal currents can be used to evaluate antenna parameters in their modal form, for example: the modal far field formula_53 (radiated with polarization formula_54 in the direction formula_55), the modal directivity formula_56, the modal radiation efficiency formula_57, the modal quality factor formula_58, and the modal impedance formula_59.
These quantities can be used for analysis, feeding synthesis, radiator's shape optimization, or antenna characterization.
Applications and further development.
The number of potential applications is enormous and still growing:
The prospective topics include
Software.
CM decomposition has recently been implemented in major electromagnetic simulators, namely in FEKO, CST-MWS, and WIPL-D. Other packages are about to support it soon, for example HFSS and CEM One. In addition, there is a plethora of in-house and academic packages which are capable of evaluating CM and many associated parameters.
Alternative bases.
CM are useful for understanding a radiator's operation better. They have been used with great success for many practical purposes. However, it is important to stress that they are not perfect, and it is often better to use other formulations such as energy modes, radiation modes, stored energy modes or radiation efficiency modes.
References.
| [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": " \\boldsymbol{\\hat{n}} \\times \\boldsymbol{E}^\\mathrm{i} = -\\boldsymbol{\\hat{n}} \\times \\boldsymbol{E}^\\mathrm{s}, "
},
{
"math_id": 2,
"text": "\\boldsymbol{\\hat{n}}"
},
{
"math_id": 3,
"text": "\\boldsymbol{E}^\\mathrm{i}"
},
{
"math_id": 4,
"text": "\\boldsymbol{E}^\\mathrm{s}"
},
{
"math_id": 5,
"text": "\\boldsymbol{E}^\\mathrm{s} = -\\mathrm{j}\\omega\\boldsymbol{A} - \\nabla\\varphi, "
},
{
"math_id": 6,
"text": "\\mathrm{j}"
},
{
"math_id": 7,
"text": "\\omega"
},
{
"math_id": 8,
"text": "\\boldsymbol{A}"
},
{
"math_id": 9,
"text": " \\boldsymbol{A} \\left(\\boldsymbol{r}\\right) = \\mu_0 \\int\\limits_\\Omega \\boldsymbol{J} \\left(\\boldsymbol{r}'\\right) G \\left(\\boldsymbol{r}, \\boldsymbol{r}'\\right) \\, \\mathrm{d}S, "
},
{
"math_id": 10,
"text": "\\mu_0"
},
{
"math_id": 11,
"text": "\\varphi"
},
{
"math_id": 12,
"text": " \\varphi \\left(\\boldsymbol{r}\\right) = - \\frac{1}{\\mathrm{j}\\omega\\epsilon_0} \\int\\limits_\\Omega \\nabla\\cdot\\boldsymbol{J} \\left(\\boldsymbol{r}'\\right) G \\left(\\boldsymbol{r}, \\boldsymbol{r}'\\right) \\, \\mathrm{d}S, "
},
{
"math_id": 13,
"text": "\\epsilon_0"
},
{
"math_id": 14,
"text": "G \\left(\\boldsymbol{r},\\boldsymbol{r}'\\right)"
},
{
"math_id": 15,
"text": " G \\left(\\boldsymbol{r},\\boldsymbol{r}'\\right) = \\frac{\\mathrm{e}^{-\\mathrm{j}k\\left|\\boldsymbol{r} - \\boldsymbol{r}'\\right|}}{4\\pi \\left|\\boldsymbol{r} - \\boldsymbol{r}'\\right|} "
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "\\boldsymbol{\\hat{n}} \\times \\boldsymbol{E}^\\mathrm{s} \\left(\\boldsymbol{J} \\right)"
},
{
"math_id": 18,
"text": " \\mathcal{X} \\left(\\boldsymbol{J}_n\\right) = \\lambda_n \\mathcal{R} \\left(\\boldsymbol{J}_n\\right) \\qquad\\mathrm{(1)} "
},
{
"math_id": 19,
"text": "\\mathcal{R}"
},
{
"math_id": 20,
"text": "\\mathcal{X}"
},
{
"math_id": 21,
"text": "\\mathcal{Z}(\\cdot) = \\mathcal{R}(\\cdot) + \\mathrm{j}\\mathcal{X}(\\cdot)\\,."
},
{
"math_id": 22,
"text": "\\mathcal{Z}"
},
{
"math_id": 23,
"text": " \\mathcal{Z} \\left(\\boldsymbol{J}\\right) = \\boldsymbol{\\hat{n}} \\times \\boldsymbol{\\hat{n}} \\times \\boldsymbol{E}^\\mathrm{s} \\left(\\boldsymbol{J}\\right). \\qquad\\mathrm{(2)} "
},
{
"math_id": 24,
"text": "\\left\\{\\boldsymbol{J}_n\\right\\}"
},
{
"math_id": 25,
"text": "n\\in \\left\\{1,2,\\dots\\right\\}"
},
{
"math_id": 26,
"text": "\\left\\{\\lambda_n\\right\\}"
},
{
"math_id": 27,
"text": "\\mathcal{D}"
},
{
"math_id": 28,
"text": "M"
},
{
"math_id": 29,
"text": "\\Omega^M = \\mathcal{D}\\left(\\Omega\\right)"
},
{
"math_id": 30,
"text": "\\left\\{\\boldsymbol{\\psi}_n\\right\\}"
},
{
"math_id": 31,
"text": "n\\in\\left\\{1,\\dots,N\\right\\}"
},
{
"math_id": 32,
"text": "\\boldsymbol{J}"
},
{
"math_id": 33,
"text": " \\boldsymbol{J} \\left(\\boldsymbol{r}\\right) \\approx \\sum\\limits_{n=1}^N I_n \\boldsymbol{\\psi}_n \\left(\\boldsymbol{r}\\right) "
},
{
"math_id": 34,
"text": " \\mathbf{Z} = \\mathbf{R} + \\mathrm{j} \\mathbf{X} = \\left[Z_{uv}\\right] = \\left[\\,\\int\\limits_\\Omega \\boldsymbol{\\psi}_u^\\ast \\cdot \\mathcal{Z} \\left(\\boldsymbol{\\psi}_v\\right) \\, \\mathrm{d}S\\right]. "
},
{
"math_id": 35,
"text": " \\mathbf{X} \\mathbf{I}_n = \\lambda_n \\mathbf{R}\\mathbf{I}_n, "
},
{
"math_id": 36,
"text": "\\left\\{\\mathbf{I}_n\\right\\}"
},
{
"math_id": 37,
"text": " P_\\mathrm{r} \\approx \\frac{1}{2} \\mathbf{I}^\\mathrm{H} \\mathbf{R} \\mathbf{I} \\geq 0 "
},
{
"math_id": 38,
"text": " 2\\omega\\left(W_\\mathrm{m} - W_\\mathrm{e}\\right) \\approx \\frac{1}{2} \\mathbf{I}^\\mathrm{H} \\mathbf{X} \\mathbf{I}, "
},
{
"math_id": 39,
"text": "^\\mathrm{H}"
},
{
"math_id": 40,
"text": "\\mathbf{I}"
},
{
"math_id": 41,
"text": "\\mathbf{R}"
},
{
"math_id": 42,
"text": "\\mathbf{X}"
},
{
"math_id": 43,
"text": "\t\\lambda_n \\approx \\frac{\\mathbf{I}_n^\\mathrm{H}\\mathbf{X}\\mathbf{I}_n}{\\mathbf{I}_n^\\mathrm{H}\\mathbf{R}\\mathbf{I}_n} "
},
{
"math_id": 44,
"text": "-\\infty \\leq \\lambda_n \\leq \\infty"
},
{
"math_id": 45,
"text": "\\lambda_n < 0"
},
{
"math_id": 46,
"text": "\\lambda_n > 0"
},
{
"math_id": 47,
"text": "\\lambda_n = 0"
},
{
"math_id": 48,
"text": "\\lambda_n = \\lambda_n \\left(\\omega\\right)"
},
{
"math_id": 49,
"text": "\\lambda_n \\left(\\omega\\right)"
},
{
"math_id": 50,
"text": "\\mathbf{I}_n \\in \\mathbb{R}^{N\\times 1}"
},
{
"math_id": 51,
"text": "\t\\frac{1}{2} \\mathbf{I}_m^\\mathrm{H} \\mathbf{Z} \\mathbf{I}_n \\approx \\left(1 + \\mathrm{j} \\lambda_n\\right) \\delta_{mn}. "
},
{
"math_id": 52,
"text": "\t\\frac{1}{2} \\sqrt{\\frac{\\varepsilon_0}{\\mu_0}} \\int\\limits_0^{2\\pi} \\int\\limits_0^\\pi \\boldsymbol{F}_m^\\ast \\cdot \\boldsymbol{F}_n \\sin \\vartheta \\, \\mathrm{d} \\vartheta \\, \\mathrm{d} \\varphi = \\delta_{mn}. "
},
{
"math_id": 53,
"text": "\\boldsymbol{F}_n \\left(\\boldsymbol{\\hat{e}}, \\boldsymbol{\\hat{r}}\\right)"
},
{
"math_id": 54,
"text": "\\boldsymbol{\\hat{e}}"
},
{
"math_id": 55,
"text": "\\boldsymbol{\\hat{r}}"
},
{
"math_id": 56,
"text": "\\boldsymbol{D}_n \\left(\\boldsymbol{\\hat{e}}, \\boldsymbol{\\hat{r}}\\right)"
},
{
"math_id": 57,
"text": "\\eta_n"
},
{
"math_id": 58,
"text": "Q_n"
},
{
"math_id": 59,
"text": "Z_n"
}
]
| https://en.wikipedia.org/wiki?curid=7092764 |
709308 | Montague grammar | Approach to natural language semantics
Montague grammar is an approach to natural language semantics, named after American logician Richard Montague. The Montague grammar is based on mathematical logic, especially higher-order predicate logic and lambda calculus, and makes use of the notions of intensional logic, via Kripke models. Montague pioneered this approach in the 1960s and early 1970s.
Overview.
Montague's thesis was that natural languages (like English) and formal languages (like programming languages) can be treated in the same way:
There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians; indeed, I consider it possible to comprehend the syntax and semantics of both kinds of language within a single natural and mathematically precise theory. On this point I differ from a number of philosophers, but agree, I believe, with Chomsky and his associates. ("Universal Grammar" 1970)
Montague published what soon became known as Montague grammar in three papers:
Illustration.
Montague grammar can represent the meanings of quite complex sentences compactly. Below is a grammar presented in Eijck and Unger's textbook.
The types of the syntactic categories in the grammar are as follows, with "t" denoting a term (a reference to an entity) and "f" denoting a formula.
The meaning of a sentence produced by the rule formula_0 is obtained by applying the function for NP to the function for VP.
The types of VP and NP might appear unintuitive because of the question as to the meaning of a noun phrase that is not simply a term. This is because meanings of many noun phrases, such as "the man who whistles", are not just terms in predicate logic, but also include a predicate for the activity, like "whistles", which cannot be represented in the term (consisting of constant and function symbols but not of predicates). So we need some term, for example "x", and a formula "whistles(x)" to refer to the man who whistles. The meaning of verb phrases VP can be expressed with that term, for example stating that a particular "x" satisfies sleeps(x) formula_1 snores(x) (expressed as a function from "x" to that formula). Now the function associated with NP takes that kind of function and combines it with the formulas needed to express the meaning of the noun phrase. This particular way of stating NP and VP is not the only possible one.
The key point is that the meaning of an expression is obtained as a function of its components, either by function application (indicated by boldface parentheses enclosing function and argument) or by constructing a new function from the functions associated with the components. This compositionality makes it possible to assign meanings reliably to arbitrarily complex sentence structures, with auxiliary clauses and many other complications.
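As a toy illustration of this compositionality (a hedged sketch, not Eijck and Unger's actual fragment; the lexical items and helper names are invented), the rule formula_0 can be rendered as plain function application:

```python
# Montague-style composition: VP meanings map a term to a formula string,
# NP meanings map such a predicate to a formula, and S : NP VP is NP(VP).

def proper_name(name):
    # NP meaning of a proper name: apply the predicate to the individual.
    return lambda predicate: predicate(name)

def every(noun):
    # NP meaning of "every <noun>": a universally quantified formula.
    return lambda predicate: f"forall x ({noun}(x) -> {predicate('x')})"

def vp_and(vp1, vp2):
    # VP conjunction builds a new function from the component functions.
    return lambda term: f"({vp1(term)} & {vp2(term)})"

sleeps = lambda term: f"sleeps({term})"
snores = lambda term: f"snores({term})"

print(proper_name("atreyu")(sleeps))          # sleeps(atreyu)
print(every("man")(vp_and(sleeps, snores)))   # forall x (man(x) -> (sleeps(x) & snores(x)))
```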
The meanings of other categories of expressions are similarly either function applications or higher-order functions. The following are the rules of the grammar, with the first column indicating a non-terminal symbol, the second column one possible way of producing that non-terminal from other non-terminals and terminals, and the third column indicating the corresponding meaning.
Here are example expressions and their associated meaning, according to the above grammar, showing that the meaning of a given sentence is formed from its constituent expressions, either by forming a new higher-order function, or by applying a higher-order function for one expression to the meaning of another.
The following are other examples of sentences translated into the predicate logic by the grammar.
In popular culture.
In David Foster Wallace's novel "Infinite Jest", the protagonist Hal Incandenza has written an essay entitled "Montague Grammar and the Semantics of Physical Modality". Montague grammar is also referenced explicitly and implicitly several times throughout the book. | [
{
"math_id": 0,
"text": "S : \\mathit{NP}\\ \\mathit{VP}"
},
{
"math_id": 1,
"text": "\\wedge"
}
]
| https://en.wikipedia.org/wiki?curid=709308 |
70931046 | ZFK equation | Reaction–diffusion equation
ZFK equation, abbreviation for Zeldovich–Frank-Kamenetskii equation, is a reaction–diffusion equation that models premixed flame propagation. The equation is named after Yakov Zeldovich and David A. Frank-Kamenetskii who derived the equation in 1938 and is also known as the Nagumo equation. The equation is analogous to the KPP equation except that it contains an exponential behaviour for the reaction term, and it differs fundamentally from the KPP equation with regard to the propagation velocity of the traveling wave. In non-dimensional form, the equation reads
formula_0
with a typical form for formula_1 given by
formula_2
where formula_3 is the non-dimensional dependent variable (typically temperature) and formula_4 is the Zeldovich number. In the ZFK regime, formula_5. The equation reduces to Fisher's equation for formula_6 and thus formula_6 corresponds to KPP regime. The minimum propagation velocity formula_7 (which is usually the long time asymptotic speed) of a traveling wave in the ZFK regime is given by
formula_8
whereas in the KPP regime, it is given by
formula_9
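The two formulas can be compared numerically for the reaction term given above; the sketch below uses arbitrary example values of β:

```python
# ZFK and KPP propagation-speed estimates for
# omega(theta) = (beta^2/2) * theta * (1 - theta) * exp(-beta*(1 - theta)).
import numpy as np
from scipy.integrate import quad

def omega(theta, beta):
    return 0.5 * beta**2 * theta * (1.0 - theta) * np.exp(-beta * (1.0 - theta))

def u_zfk(beta):
    # sqrt(2 * integral_0^1 omega(theta) dtheta)
    integral, _ = quad(omega, 0.0, 1.0, args=(beta,))
    return np.sqrt(2.0 * integral)

def u_kpp(beta):
    # 2 * sqrt(d omega/d theta at theta = 0) = sqrt(2) * beta * exp(-beta/2)
    return np.sqrt(2.0) * beta * np.exp(-beta / 2.0)

for beta in (1.0, 5.0, 15.0):
    print(beta, u_zfk(beta), u_kpp(beta))
# For large beta the ZFK estimate stays of order one while the KPP value becomes
# exponentially small, reflecting the difference between the two regimes.
```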
Traveling wave solution.
Similar to Fisher's equation, a traveling wave solution can be found for this problem. Suppose the wave to be traveling from right to left with a constant velocity formula_10, then in the coordinate attached to the wave, i.e., formula_11, the problem becomes steady. The ZFK equation reduces to
formula_12
satisfying the boundary conditions formula_13 and formula_14. The boundary conditions are satisfied sufficiently smoothly so that the derivative formula_15 also vanishes as formula_16. Since the equation is translationally invariant in the formula_17 direction, an additional condition, say for example formula_18, can be used to fix the location of the wave. The speed of the wave formula_10 is obtained as part of the solution, thus constituting a nonlinear eigenvalue problem. Numerical solution of the above equation, formula_19, the eigenvalue formula_10 and the corresponding reaction term formula_1 are shown in the figure, calculated for formula_20.
Asymptotic solution.
The ZFK regime as formula_21 is formally analyzed using activation energy asymptotics. Since formula_4 is large, the term formula_22 will make the reaction term practically zero; however, that term will be non-negligible if formula_23. The reaction term will also vanish when formula_24 and formula_25. Therefore, it is clear that formula_1 is negligible everywhere except in a thin layer close to the right boundary formula_25. Thus the problem is split into three regions, an inner diffusive-reactive region flanked on either side by two outer convective-diffusive regions.
Outer region.
The problem for outer region is given by
formula_26
The solution satisfying the condition formula_13 is formula_27. This solution is also made to satisfy formula_28 (an arbitrary choice) to fix the wave location somewhere in the domain because the problem is translationally invariant in the formula_17 direction. As formula_29, the outer solution behaves like formula_30 which in turn implies formula_31
The solution satisfying the condition formula_14 is formula_25. As formula_32, the outer solution behaves like formula_25 and thus formula_33.
We can see that although formula_19 is continuous at formula_34, formula_15 has a jump at formula_34. The transition between the derivatives is described by the inner region.
Inner region.
In the inner region where formula_23, the reaction term is no longer negligible. To investigate the inner layer structure, one introduces a stretched coordinate encompassing the point formula_34, because that is where formula_19 approaches unity according to the outer solution, and a stretched dependent variable according to formula_35
formula_36
The boundary condition as formula_37 comes from the local behaviour of the outer solution obtained earlier, which, when written in terms of the inner zone coordinate, becomes formula_38 and formula_39. Similarly, as formula_40, we find formula_41. The first integral of the above equation after imposing these boundary conditions becomes
formula_42
which implies formula_43. It is clear from the first integral that the square of the wave speed, formula_44, is proportional to the integrated (with respect to formula_19) value of formula_1 (of course, in the large formula_4 limit, only the inner zone contributes to this integral). The first integral after substituting formula_43 is given by
formula_45
KPP–ZFK transition.
In the KPP regime, formula_47 For the reaction term used here, the KPP speed that is applicable for formula_6 is given by
formula_48
whereas in the ZFK regime, as we have seen above, formula_46. Numerical integration of the equation for various values of formula_4 showed that there exists a critical value formula_49 such that only for formula_50, formula_47 For formula_51, formula_7 is greater than formula_52. As formula_5, formula_7 approaches formula_46, thereby approaching the ZFK regime.
The critical value depends on the reaction model, for example we obtain
formula_53
formula_54
Clavin–Liñán model.
To predict the KPP–ZFK transition analytically, Paul Clavin and Amable Liñán proposed a simple piecewise linear model
formula_55
where formula_56 and formula_57 are constants. The KPP velocity of the model is formula_58, whereas the ZFK velocity is obtained as formula_59 in the double limit formula_60 and formula_61 that mimics a sharp increase in the reaction near formula_25.
For this model there exists a critical value formula_62 such that
formula_63
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial \\theta}{\\partial t} = \\frac{\\partial^2\\theta}{\\partial x^2} + \\omega(\\theta)"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "\\omega =\\frac{\\beta^2}{2} \\theta(1-\\theta) e^{-\\beta(1-\\theta)} "
},
{
"math_id": 3,
"text": "\\theta\\in [0,1]"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "\\beta\\gg 1"
},
{
"math_id": 6,
"text": "\\beta\\ll 1"
},
{
"math_id": 7,
"text": "U_{min}"
},
{
"math_id": 8,
"text": "U_{ZFK} \\propto \\sqrt{2\\int_0^1\\omega(\\theta) d\\theta} "
},
{
"math_id": 9,
"text": "U_{KPP} = 2 \\sqrt{\\left.\\frac{d\\omega}{d\\theta}\\right |_{\\theta=0}}."
},
{
"math_id": 10,
"text": "U"
},
{
"math_id": 11,
"text": "z=x+Ut"
},
{
"math_id": 12,
"text": "U\\frac{d\\theta}{dz} = \\frac{d^2\\theta}{dz^2} + \\frac{\\beta^2}{2} \\theta(1-\\theta)e^{-\\beta(1-\\theta)}"
},
{
"math_id": 13,
"text": "\\theta(-\\infty)=0"
},
{
"math_id": 14,
"text": "\\theta(+\\infty)=1"
},
{
"math_id": 15,
"text": "d\\theta/dz"
},
{
"math_id": 16,
"text": "z\\rightarrow \\pm\\infty"
},
{
"math_id": 17,
"text": "z"
},
{
"math_id": 18,
"text": "\\theta(0)=1/2"
},
{
"math_id": 19,
"text": "\\theta"
},
{
"math_id": 20,
"text": "\\beta=15"
},
{
"math_id": 21,
"text": "\\beta\\rightarrow\\infty"
},
{
"math_id": 22,
"text": "e^{-\\beta(1-\\theta)}"
},
{
"math_id": 23,
"text": "1-\\theta \\sim 1/\\beta"
},
{
"math_id": 24,
"text": "\\theta=0"
},
{
"math_id": 25,
"text": "\\theta=1"
},
{
"math_id": 26,
"text": "U\\frac{d\\theta}{dz} = \\frac{d^2\\theta}{dz^2}."
},
{
"math_id": 27,
"text": "\\theta=e^{Uz}"
},
{
"math_id": 28,
"text": "\\theta(0)=1"
},
{
"math_id": 29,
"text": "z\\rightarrow 0^-"
},
{
"math_id": 30,
"text": "\\theta=1+Uz + \\cdots"
},
{
"math_id": 31,
"text": "d\\theta/dz=U + \\cdots."
},
{
"math_id": 32,
"text": "z\\rightarrow 0^+"
},
{
"math_id": 33,
"text": "d\\theta/dz=0"
},
{
"math_id": 34,
"text": "z=0"
},
{
"math_id": 35,
"text": "\\eta = \\beta z, \\, \\Theta = \\beta(1-\\theta)."
},
{
"math_id": 36,
"text": "2\\frac{d^2\\Theta}{d\\eta^2} = \\Theta e^{-\\Theta}."
},
{
"math_id": 37,
"text": "\\eta\\rightarrow -\\infty"
},
{
"math_id": 38,
"text": "\\Theta \\rightarrow -U\\eta=+\\infty"
},
{
"math_id": 39,
"text": "d\\Theta/d\\eta=-U"
},
{
"math_id": 40,
"text": "\\eta\\rightarrow+\\infty"
},
{
"math_id": 41,
"text": "\\Theta=d\\Theta/d\\eta=0"
},
{
"math_id": 42,
"text": "\\begin{align}\n\\left.\\left(\\frac{d\\Theta}{d\\eta}\\right)^2\\right |_{\\Theta=\\infty} - \\left.\\left(\\frac{d\\Theta}{d\\eta}\\right)^2\\right |_{\\Theta=0} &= \\int_0^\\infty \\Theta e^{-\\Theta}d\\Theta\\\\\nU^2 &= 1\n\\end{align}"
},
{
"math_id": 43,
"text": "U=1"
},
{
"math_id": 44,
"text": "U^2"
},
{
"math_id": 45,
"text": "\\frac{d\\Theta}{d\\eta}= - \\sqrt{1-(\\Theta+1)\\exp(-\\Theta)}."
},
{
"math_id": 46,
"text": "U_{ZFK}=1"
},
{
"math_id": 47,
"text": "U_{min}=U_{KPP}."
},
{
"math_id": 48,
"text": "U_{KPP} = 2 \\sqrt{\\left.\\frac{d\\omega}{d\\theta}\\right |_{\\theta=0}}= \\sqrt 2 \\beta e^{-\\beta/2}"
},
{
"math_id": 49,
"text": "\\beta_*=1.64"
},
{
"math_id": 50,
"text": "\\beta\\leq \\beta_*"
},
{
"math_id": 51,
"text": "\\beta\\geq \\beta_*"
},
{
"math_id": 52,
"text": "U_{KPP}"
},
{
"math_id": 53,
"text": "\\beta_*=3.04 \\quad \\text{for}\\quad \\omega \\propto (1-\\theta) e^{-\\beta(1-\\theta)}"
},
{
"math_id": 54,
"text": "\\beta_*=5.11 \\quad \\text{for}\\quad \\omega \\propto (1-\\theta)^2 e^{-\\beta(1-\\theta)}."
},
{
"math_id": 55,
"text": "\\omega(\\theta)=\\begin{cases}\n\\theta \\quad \\text{if} \\quad 0\\leq \\theta\\leq 1-\\epsilon,\\\\\nh(1-\\theta)/\\epsilon^2 \\quad \\text{if} \\quad 1-\\epsilon\\leq \\theta\\leq 1\n\\end{cases}"
},
{
"math_id": 56,
"text": "h"
},
{
"math_id": 57,
"text": "\\epsilon"
},
{
"math_id": 58,
"text": "U_{KPP}=2"
},
{
"math_id": 59,
"text": "U_{ZFK}=\\sqrt h"
},
{
"math_id": 60,
"text": "\\epsilon\\rightarrow 0"
},
{
"math_id": 61,
"text": "h\\rightarrow\\infty"
},
{
"math_id": 62,
"text": "h_*=1-\\epsilon^2"
},
{
"math_id": 63,
"text": "\\begin{cases}\nh<h_*: &\\quad U_{min}=U_{KPP},\\\\\nh>h_*: &\\quad U_{min}=\\frac{h/(1-\\epsilon)+1-\\epsilon}{\\sqrt{h/(1-\\epsilon)-\\epsilon}},\\\\\nh\\gg h_*: &\\quad U_{min}\\rightarrow U_{ZFK}\n\\end{cases}"
}
]
| https://en.wikipedia.org/wiki?curid=70931046 |
70934258 | Wilson matrix | Matrix used as an example of an ill-conditioned system in numerical analysis
The Wilson matrix is the following formula_0 matrix with integer entries:
formula_1
This is the coefficient matrix of the following system of linear equations considered in a paper by J. Morris published in 1946:
formula_2
Morris ascribes the source of the set of equations to one T. S. Wilson, but no details about Wilson have been provided. The particular system of equations was used by Morris to illustrate the concept of an ill-conditioned system of equations. The matrix formula_3 has been used as an example and for test purposes in many research papers and books over the years. John Todd has referred to formula_3 as “the notorious matrix W of T. S. Wilson”.
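The ill-conditioning is easy to reproduce numerically; a short NumPy check (the numerical values in the comments are rounded):
<syntaxhighlight lang="python">
import numpy as np

W = np.array([[5,  7,  6,  5],
              [7, 10,  8,  7],
              [6,  8, 10,  9],
              [5,  7,  9, 10]], dtype=float)

eigenvalues = np.linalg.eigvalsh(W)            # W is symmetric positive definite
print(eigenvalues)                             # approx. 0.01015, 0.84311, 3.85806, 30.28869
print(eigenvalues.max() / eigenvalues.min())   # spectral condition number, approx. 2984.1
print(np.linalg.inv(W))                        # the inverse also has integer entries
</syntaxhighlight>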
Research problems spawned by Wilson matrix.
A consideration of the condition number of the Wilson matrix has spawned several interesting research problems relating to condition numbers of matrices in certain special classes of matrices having some or all the special features of the Wilson matrix. In particular, the following special classes of matrices have been studied:
An exhaustive computation of the condition numbers of the matrices in the above sets has yielded the following results:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "4\\times 4"
},
{
"math_id": 1,
"text": "W = \\begin{bmatrix}5&7&6&5 \\\\ 7&10&8&7 \\\\ 6&8&10&9 \\\\ 5&7&9&10\\end{bmatrix}"
},
{
"math_id": 2,
"text": " \n\\text{(S1)}\\quad \\begin{align} \n5x+7y+6z+5u & = 23\\\\\n7x+10y+8z+7u & = 32\\\\\n6x+8y+10z+9u&=33\\\\\n5x+7y+9z+10u&=31\n\\end{align} \n"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": " 1"
},
{
"math_id": 5,
"text": "W^{-1} = \n\\begin{bmatrix} 68 & -41 & -17 & 10\\\\ -41 & 25 & 10 & -6 \\\\ -17 & 10 & 5 &- 3 \\\\ 10 & -6 & -3 & 2 \\end{bmatrix}\n"
},
{
"math_id": 6,
"text": " \\lambda^4-35 \\lambda^3+146 \\lambda^2-100 \\lambda+1"
},
{
"math_id": 7,
"text": " \\quad 0.01015004839789187,\\quad 0.8431071498550294,\\quad 3.858057455944953,\\quad 30.28868534580213"
},
{
"math_id": 8,
"text": "\\kappa_2(W)= (\\text{max eigen value})/(\\text{min eigen value})=30.28868534580213/0.01015004839789187 = 2984.09270167549"
},
{
"math_id": 9,
"text": "(S1)"
},
{
"math_id": 10,
"text": "x=y=z=u=1"
},
{
"math_id": 11,
"text": "W= R^TR"
},
{
"math_id": 12,
"text": "R\n=\\begin{bmatrix} \\sqrt{5} & \\frac{7}{\\sqrt{5}} & \\frac{6}{\\sqrt{5}} & \\sqrt{5} \\\\ 0 & \\frac{1}{\\sqrt{5}} & -\\frac{2}{\\sqrt{5}} & 0 \\\\ 0 & 0 & \\sqrt{2} & \\frac{3}{\\sqrt{2}} \\\\ 0 & 0 & 0 & \\frac{1}{\\sqrt{2}}\\end{bmatrix}"
},
{
"math_id": 13,
"text": "W= LDL^T"
},
{
"math_id": 14,
"text": "L =\n\\begin{bmatrix} \n1 & 0 & 0 & 0 \\\\ \\frac{7}{5} & 1 & 0 & 0 \\\\ \\frac{6}{5} & -2 & 1 & 0 \\\\ 1 & 0 & \\frac{3}{2} & 1\n\\end{bmatrix}, \n\\quad \nD=\\begin{bmatrix} \n5 & 0 & 0 & 0 \\\\ 0 & \\frac{1}{5} & 0 & 0 \\\\ 0 & 0 & 2 & 0 \\\\ 0 & 0 & 0 & \\frac{1}{2}\n\\end{bmatrix} \n"
},
{
"math_id": 15,
"text": "W=Z^TZ"
},
{
"math_id": 16,
"text": "Z"
},
{
"math_id": 17,
"text": "Z=\n\\begin{bmatrix}\n2 & 3 & 2 & 2 \\\\ 1 & 1 & 2 & 1 \\\\ 0 & 0 & 1 & 2 \\\\ 0 & 0 & 1 & 1 \n\\end{bmatrix}\n"
},
{
"math_id": 18,
"text": "S="
},
{
"math_id": 19,
"text": " 4 \\times 4 "
},
{
"math_id": 20,
"text": " P = "
},
{
"math_id": 21,
"text": "S"
},
{
"math_id": 22,
"text": "7.6119\\times 10^4"
},
{
"math_id": 23,
"text": "\n\\begin{bmatrix}\n2 & 7 & 10 & 10 \\\\ 7 & 10 & 10 & 9 \\\\ 10 & 10 & 10 & 1 \\\\ 10 & 9 & 1 & 10\n\\end{bmatrix}\n"
},
{
"math_id": 24,
"text": "P"
},
{
"math_id": 25,
"text": "3.5529 \\times 10^4"
},
{
"math_id": 26,
"text": "\n\\begin{bmatrix}\n9 & 1 & 1 & 5 \\\\ 1 & 10 & 1 & 9 \\\\ 1 & 1 & 10 & 1 \\\\ 5 & 9 & 1 & 10\n\\end{bmatrix}\n"
}
]
| https://en.wikipedia.org/wiki?curid=70934258 |
7093648 | Strömgren photometric system | The Strömgren photometric system, abbreviated also as "uvbyβ" or simply "uvby", and sometimes referred as Strömgren - Crawford photometric system, is a four-colour medium-passband photometric system plus Hβ (H-beta) filters for determining magnitudes and obtaining spectral classification of stars. Its use was pioneered by the Danish astronomer Bengt Strömgren in 1956 and was extended by his colleague the American astronomer David L. Crawford in 1958.
It is often considered a powerful and successful tool for investigating the brightness and effective temperature of stars. This photometric system also has a general advantage in that it can be used to measure the effects of reddening and interstellar extinction. The system also allows the calculation of parameters from the formula_0 and formula_1 filters (formula_2) without the effects of reddening, termed formula_3 and formula_4.
Wavelength and half-width response functions.
The following table shows the characteristics of each of the filters used (represented colors are only approximate):
Indices.
There are four main, widely applied indices: formula_2; formula_3; formula_4; and formula_5.
Where:
formula_6
formula_7
formula_8
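A minimal sketch of how these indices are formed from the filter magnitudes (the numbers below are purely illustrative, not real measurements; the formula_5 index is simply the difference of the narrow- and wide-band Hβ magnitudes):
<syntaxhighlight lang="python">
def stromgren_indices(u, v, b, y):
    """Colour indices of the Strömgren uvby system from filter magnitudes."""
    by = b - y                    # the (b - y) colour index
    m1 = (v - b) - (b - y)        # metal-line index m1
    c1 = (u - v) - (v - b)        # Balmer-discontinuity index c1
    return by, m1, c1

# hypothetical magnitudes, for illustration only
print(stromgren_indices(u=6.20, v=5.10, b=4.70, y=4.40))   # approx. (0.30, 0.10, 0.70)
</syntaxhighlight>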
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "(b-y)"
},
{
"math_id": 3,
"text": "m_1"
},
{
"math_id": 4,
"text": "c_1"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "m_1 = (v-b) - (b-y)"
},
{
"math_id": 7,
"text": "c_1=(u-v) - (v-b)"
},
{
"math_id": 8,
"text": "\\beta = \\beta_{narrow} - \\beta_{wide} "
}
]
| https://en.wikipedia.org/wiki?curid=7093648 |
709367 | Mod n cryptanalysis | Attack applicable to block and stream ciphers
In cryptography, mod "n" cryptanalysis is an attack applicable to block and stream ciphers. It is a form of partitioning cryptanalysis that exploits unevenness in how the cipher operates over equivalence classes (congruence classes) modulo "n". The method was first suggested in 1999 by John Kelsey, Bruce Schneier, and David Wagner and applied to RC5P (a variant of RC5) and M6 (a family of block ciphers used in the FireWire standard). These attacks used the properties of binary addition and bit rotation modulo a Fermat prime.
Mod 3 analysis of RC5P.
For RC5P, analysis was conducted modulo 3. It was observed that the operations in the cipher (rotation and addition, both on 32-bit words) were somewhat biased over congruence classes mod 3. To illustrate the approach, consider left rotation by a single bit:
formula_0
Then, because
formula_1
it follows that
formula_2
Thus left rotation by a single bit has a simple description modulo 3. Analysis of other operations (data dependent rotation and modular addition) reveals similar, notable biases. Although there are some theoretical problems analysing the operations in combination, the bias can be detected experimentally for the entire cipher. In (Kelsey et al., 1999), experiments were conducted up to seven rounds, and based on this they conjecture that as many as 19 or 20 rounds of RC5P can be distinguished from random using this attack. There is also a corresponding method for recovering the secret key.
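The identity above is easy to check exhaustively in a few lines; a small sketch (illustrative only):
<syntaxhighlight lang="python">
import random

MASK = (1 << 32) - 1

def rotl1(x):
    """Left rotation of a 32-bit word by one bit."""
    return ((x << 1) | (x >> 31)) & MASK

# X <<< 1 is congruent to 2X (mod 3) for every 32-bit word X,
# because 2**32 is congruent to 1 (mod 3)
for _ in range(100_000):
    x = random.getrandbits(32)
    assert rotl1(x) % 3 == (2 * x) % 3
print("rotation by one bit acts as multiplication by 2 modulo 3")
</syntaxhighlight>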
Against M6 there are attacks mod 5 and mod 257 that are even more effective. | [
{
"math_id": 0,
"text": "X \\lll 1=\\left\\{\\begin{matrix} 2X, & \\mbox{if } X < 2^{31} \\\\ 2X + 1 - 2^{32}, & \\mbox{if } X \\geq 2^{31}\\end{matrix}\\right."
},
{
"math_id": 1,
"text": "2^{32} \\equiv 1\\pmod 3,\\,"
},
{
"math_id": 2,
"text": "X \\lll 1 \\equiv 2X\\pmod 3."
}
]
| https://en.wikipedia.org/wiki?curid=709367 |
70939631 | Xenon isotope geochemistry | Method of geochemical research
Xenon isotope geochemistry uses the abundance of xenon (Xe) isotopes and total xenon to investigate how Xe has been generated, transported, fractionated, and distributed in planetary systems. Xe has nine stable or very long-lived isotopes. Radiogenic 129Xe and fissiogenic 131,132,134,136Xe isotopes are of special interest in geochemical research. The radiogenic and fissiogenic properties can be used in deciphering the early chronology of Earth. Elemental Xe in the atmosphere is depleted and isotopically enriched in heavier isotopes relative to estimated solar abundances. The depletion and heavy isotopic enrichment can be explained by hydrodynamic escape to space that occurred in Earth's early atmosphere. Differences in the Xe isotope distribution between the deep mantle (from Ocean Island Basalts, or OIBs), shallower Mid-ocean Ridge Basalts (MORBs), and the atmosphere can be used to deduce Earth's history of formation and differentiation of the solid Earth into layers.
Background.
Xe is the heaviest noble gas in the Earth's atmosphere. It has seven stable isotopes (126Xe, 128Xe, 129Xe, 130Xe, 131Xe, 132Xe, 134Xe) and two isotopes (124Xe, 136Xe) with very long half-lives. Xe has four synthetic radioisotopes with very short half-lives, usually less than one month.
Xenon-129 can be used to examine the early history of the Earth. 129Xe was derived from the extinct nuclide of iodine, iodine-129 or 129I (with a half-life of 15.7 million years, or Myr), which can be used in iodine-xenon (I-Xe) dating. The production of 129Xe stopped within about 100 Myr after the start of the Solar System because 129I became extinct. In the modern atmosphere, about 6.8% of atmospheric 129Xe originated from the decay of 129I in the first ~100 Myr of the Solar System's history, i.e., during and immediately following Earth's accretion.
Fissiogenic Xe isotopes were generated mainly from the extinct nuclide, plutonium-244 or 244Pu (half-life of 80 Myr), and also the extant nuclide, uranium-238 or 238U (half-life of 4468 Myr). Spontaneous fission of 238U has generated ~5% as much fissiogenic Xe as 244Pu. Pu and U fission produce the four fissiogenic isotopes, 136Xe, 134Xe, 132Xe, and 131Xe in distinct proportions. A reservoir that remains an entirely closed system over Earth's history has a ratio of Pu- to U-derived fissiogenic Xe reaching to ~27. Accordingly, the isotopic composition of the fissiogenic Xe for a closed-system reservoir would largely resemble that produced from pure 244Pu fission. Loss of Xe from a reservoir after 244Pu becomes extinct (500 Myr) would lead to a greater contribution of 238U fission to the fissiogenic Xe.
Notation.
Differences in the abundance of isotopes among natural samples are extremely small (almost always below 0.1% or 1 per mille). Nevertheless, these very small differences can record meaningful geological processes. To compare these tiny but meaningful differences, isotope abundances in natural materials are often reported relative to isotope abundances in designated standards, with the delta (δ) notation. The absolute values of Xe isotopes are normalized to atmospheric 130Xe. Define formula_0 where i = 124, 126, 128, 129, 131, 132, 134, 136.
Applications.
The age of Earth.
Iodine-129 decays with a half-life of 15.7 Ma into 129Xe, resulting in excess 129Xe in primitive meteorites relative to primordial Xe isotopic compositions. The property of 129I can be used in radiometric chronology. However, as detailed below, the age of Earth's formation cannot be deduced directly from I-Xe dating. The major problem is the Xe closure time, or the time when the early Earth system stopped gaining substantial new material from space. When the Earth became closed for the I-Xe system, Xe isotope evolution began to obey a simple radioactive decay law as shown below and became predictable.
The principle of radiogenic chronology is, if at time t1 the quantity of a radioisotope is P1 while at some previous time this quantity was P0, the interval between t1 and t0 is given by the law of radioactive decay as
formula_1
Here formula_2 is the decay constant of the radioisotope, which is the probability of decay per nucleus per unit time. The decay constant is related to the half life t1/2, by t1/2= ln(2)/formula_2
Calculations.
The I-Xe system was first applied in 1975 to estimate the age of the Earth. For all Xe isotopes, the initial isotope composition of iodine in the Earth is given by
formula_3
where formula_4 is the isotopic ratio of iodine at the time that the Earth primarily formed, formula_5 is the isotopic ratio of iodine at the end of stellar nucleosynthesis, and formula_6 is the time interval between the end of stellar nucleosynthesis and the formation of the Earth. The estimated iodine-127 concentration in the Bulk Silicate Earth (BSE) (= crust + mantle average) ranges from 7 to 10 parts per billion (ppb) by mass. If the BSE represents Earth's chemical composition, the total 127I in the BSE ranges from 2.26×10^17 to 3.23×10^17 moles. The meteorite Bjurböle is 4.56 billion years old with an initial 129I/127I ratio of 1.1×10^−4, so an equation can be derived as
formula_7
where formula_8 is the interval between the formation of the Earth and the formation of meteorite Bjurböle. Given the half life of 129I of 15.7 Myr, and assuming that all the initial 129I has decayed to 129Xe, the following equation can be derived:
formula_9
129Xe in the modern atmosphere is 3.63×10^13 grams. The iodine content for the BSE lies between 10 and 12 ppb by mass. Consequently, formula_10 should be 108 Myr, i.e., the Xe-closure age is 108 Myr younger than the age of meteorite Bjurböle. The estimated Xe closure time was ~4.45 billion years ago, when the growing Earth started to retain Xe in its atmosphere, which is coincident with ages derived from other geochronology dating methods.
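For illustration, the decay-law arithmetic behind this estimate can be sketched in a few lines (the factor of roughly 120 used below is chosen only so that the result reproduces the quoted 108 Myr interval, and is not a measured quantity):
<syntaxhighlight lang="python">
import math

HALF_LIFE_129I = 15.7                       # Myr
lam = math.log(2) / HALF_LIFE_129I          # decay constant of 129I per Myr

def interval(p0, p1):
    """Time in Myr for the parent abundance (or ratio) to decay from p0 to p1."""
    return math.log(p0 / p1) / lam

# the 129I/127I ratio of Bjurböle and a value about 120 times smaller
print(interval(1.1e-4, 1.1e-4 / 120))       # approx. 108 Myr
</syntaxhighlight>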
Xe closure age problem.
There are some disputes about using I-Xe dating to estimate the Xe closure time. First, in the early solar system, planetesimals collided and grew into larger bodies that accreted to form the Earth. But there could be a 10^7 to 10^8 year time gap in Xe closure time between the Earth's inner and outer regions. Some research supports the view that 4.45 Ga represents the time when the last giant impactor (Martian-size) hit Earth, but some regard it as the time of core-mantle differentiation. The second problem is that the total inventory of 129Xe on Earth may be larger than that of the atmosphere, since the lower mantle had not been entirely mixed, which may underestimate 129Xe in the calculation. Last but not least, if Xe gas had been lost from the atmosphere during a long interval of early Earth's history, the chronology based on 129I-129Xe would need revising, since 129Xe and 127Xe could be greatly altered.
Loss of earth's earliest atmosphere.
Compared with solar xenon, Earth's atmospheric Xe is enriched in heavy isotopes by 3 to 4% per atomic mass unit (amu). However, the total abundance of xenon gas is depleted by one order of magnitude relative to other noble gases. The elemental depletion combined with the relative enrichment in heavy isotopes is called the "xenon paradox". A possible explanation is that some processes can specifically diminish xenon, rather than other light noble gases (e.g. krypton), and preferentially remove its lighter isotopes.
In the last two decades, two categories of models have been proposed to solve the xenon paradox. The first assumes that the Earth accreted from porous planetesimals, and that isotope fractionation happened due to gravitational separation. However, this model cannot reproduce the abundance and isotopic composition of light noble gases in the atmosphere. The second category supposes that a massive impact resulted in an aerodynamic drag on heavier gases. Both the aerodynamic drag and the downward gravitational effect lead to a mass-dependent loss of Xe gases. However, subsequent research suggested that Xe isotope mass fractionation was not a rapid, single event.
Research published since 2018 on noble gases preserved in Archean (3.5–3.0 Ga old) samples may provide a solution to the Xe paradox. Isotopically mass fractionated Xe is found in tiny inclusions of ancient seawater in Archean barite and hydrothermal quartz. The distribution of Xe isotopes lies between the primordial solar and the modern atmospheric Xe isotope patterns. The isotopic fractionation gradually increases relative to the solar distribution as Earth evolves over its first 2 billion years. This two billion-year history of evolving Xe fractionation coincides with early solar system conditions including high solar extreme ultraviolet (EUV) radiation and large impacts that could energize large rates of hydrogen escape to space that are big enough to drag out xenon. However, models of neutral xenon atoms escaping cannot resolve the problem that other lighter noble gas elements don't show the signal of depletion or mass-dependent fractionation. For example, because Kr is lighter than Xe, Kr should also have escaped in a neutral wind. Yet the isotopic distribution of atmospheric Kr on Earth is significantly less fractionated than atmospheric Xe.
A current explanation is that hydrodynamic escape can preferentially remove lighter atmospheric species and lighter isotopes of Xe in the form of charged ions instead of neutral atoms. Hydrogen is liberated from hydrogen-bearing gases (H2 or CH4) by photolysis in the early Earth atmosphere. Hydrogen is light and can be abundant at the top of the atmosphere and escape. In the polar regions where there are open magnetic field lines, hydrogen ions can drag ionized Xe out from the atmosphere to space even though neutral Xe cannot escape.
The mechanism is summarized as below.
Xe can be directly photo-ionized by UV radiation in the range formula_11
formula_12
Alternatively, Xe can be ionized by charge exchange with H2 and CO2 through
formula_13
formula_14
where H2+ and CO2+ can come from EUV ionization and dissociation. Xe+ is chemically inert in H, H2, or CO2 atmospheres. As a result, Xe+ tends to persist. These ions interact strongly with each other through the Coulomb force and are finally dragged away by the strong ancient polar wind. Isotope mass fractionation accumulates as lighter isotopes of Xe+ preferentially escape from the Earth. A preliminary model suggests that Xe can escape in the Archean if the atmosphere contains >1% H2 or >0.5% methane.
When O2 levels increased in the atmosphere, Xe+ could exchange positive charge with O2 though
formula_15
From this reaction, Xe escape stopped when the atmosphere became enriched in O2. As a result, Xe isotope fractionation may provide insights into the long history of hydrogen escape that ended with the Great Oxidation Event (GOE). Understanding Xe isotopes is promising to reconstruct hydrogen or methane escape history that irreversibly oxidized the Earth and drove biological evolution toward aerobic ecological systems. Other factors, such as the hydrogen (or methane) concentration becoming too low or EUV radiation from the aging Sun becoming too weak, can also cease the hydrodynamic escape of Xe, but are not mutually exclusive.
Organic hazes on Archean Earth could also scavenge isotopically heavy Xe. Ionized Xe can be chemically incorporated into organic materials, going through the terrestrial weathering cycle on the surface. The trapped Xe is mass fractionated by about 1% per amu in heavier isotopes, but it may be released again and recover the original unfractionated composition, so this mechanism is not sufficient to fully resolve the Xe paradox.
Comparison between Kr and Xe in the atmosphere.
Observed atmospheric Xe is depleted relative to Chondritic meteorites by a factor of 4 to 20 when compared to Kr. In contrast, the stable isotopes of Kr are barely fractionated. This mechanism is unique to Xe since Kr+ ions are quickly neutralized via
formula_16
formula_17
Therefore, Kr can be rapidly returned to neutral and wouldn't be dragged away by the charged ion wind in the polar region. Hence Kr is retained in the atmosphere.
Relation with Mass Independent Fractionation of Sulfur Isotopes (MIF-S).
The signal of mass-independent fractionation of sulfur isotopes, known as MIF-S, correlates with the end of Xe isotope fractionation. During the Great Oxidation Event (GOE), the ozone layer formed when O2 rose, accounting for the end of the MIF-S signature. The disappearance of the MIF-S signal has been regarded as changing the redox ratio of Earth's surface reservoirs. However, potential memory effects of MIF-S due to oxidative weathering can lead to large uncertainty on the process and chronology of GOE. Compared to the MIF-S signals, hydrodynamic escape of Xe is not affected by the ozone formation and may be even more sensitive to O2 availability, promising to provide more details about the oxidation history of Earth.
Xe Isotopes as mantle tracers.
Xe isotopes are also promising in tracing mantle dynamics in Earth's evolution. The first explicit recognition of non-atmospheric Xe in terrestrial samples came from the analysis of CO2-well gas in New Mexico, displaying an excess of 129I-derived or primitive source 129Xe and high contents of 131-136Xe due to the decay of 238U. At present, the excess of 129Xe and 131-136Xe has been widely observed in mid-ocean ridge basalts (MORBs) and oceanic island basalts (OIBs). Because 136Xe receives more fissiogenic contribution than other heavy Xe isotopes, 129Xe (decay of 129I) and 136Xe are usually normalized to 130Xe when discussing Xe isotope trends of different mantle sources. MORBs' 129Xe/130Xe and 136Xe/130Xe ratios lie on a trend from atmospheric ratios to higher values and are seemingly contaminated by air. Oceanic island basalt (OIB) data lie lower than those of MORBs, implying different Xe sources for OIBs and MORBs.
The deviations in 129Xe/130Xe ratio between air and MORBs show that mantle degassing occurred before 129I was extinct, otherwise 129Xe/130Xe in the air would be the same as in the mantle. The differences in the 129Xe/130Xe ratio between MORBs and OIBs may indicate that the mantle reservoirs are still not thoroughly mixed. The chemical differences between OIBs and MORBs still await discovery.
To obtain mantle Xe isotope ratios, it is necessary to remove contamination by atmospheric Xe, which could have started before 2.5 billion years ago. Theoretically, the non-radiogenic isotopic ratios (124Xe/130Xe, 126Xe/130Xe, and 128Xe/130Xe) can be used to accurately correct for atmospheric contamination if slight differences between air and mantle can be precisely measured. Still, such precision cannot be reached with current techniques.
Xe in other planets.
Mars.
On Mars, Xe isotopes in the present atmosphere are mass fractionated relative to their primordial composition, according to in situ measurements by the Curiosity rover at Gale Crater. Paleo-atmospheric Xe trapped in the Martian regolith breccia NWA 11220 is mass-dependently fractionated relative to solar Xe by ~16.2‰. The extent of fractionation is comparable for Mars and Earth, which may be compelling evidence that hydrodynamic escape also occurred in Martian history. The regolith breccia NWA7084 and the >4 Ga orthopyroxene ALH84001 Martian meteorites trap ancient Martian atmospheric gases with little if any Xe isotopic fractionation relative to modern Martian atmospheric Xe. Alternative models for Mars consider that the isotopic fractionation and escape of Martian atmospheric Xe occurred very early in the planet's history and ceased a few hundred million years after planetary formation, rather than continuing throughout its evolutionary history.
Venus.
Xe has not been detected in Venus's atmosphere. 132Xe has an upper limit of 10 parts per billion by volume. The absence of data on the abundance of Xe precludes us from evaluating whether the abundance of Xe is close to solar values or whether there is a Xe paradox on Venus. The lack also prevents us from checking whether the isotopic composition has been mass dependently fractionated, as in the case of Earth and Mars.
Jupiter.
Jupiter's atmosphere has 2.5 ± 0.5 times the solar abundance value for xenon and similarly elevated argon and krypton (2.1 ± 0.5 and 2.7 ± 0.5 times solar values, respectively). These signals of enrichment are due to these elements coming to Jupiter in very cold (T < 30 K) icy planetesimals.
{
"math_id": 0,
"text": "\\rm \\delta_{Xe}=[(^iXe/^{130}Xe)_{sample}/(^iXe/^{130}Xe)_{atm}-1]\\times 1000"
},
{
"math_id": 1,
"text": "\\rm \\Delta t = t_1-t_0=(1/\\lambda)ln(P_0/P_1)"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\rm (\\frac{^{129}I}{^{127}I})_E=(\\frac{^{129}I}{^{127}I})_0e^{-\\lambda \\Delta t_E}"
},
{
"math_id": 4,
"text": "\\rm (\\frac{^{129}I}{^{127}I})_E"
},
{
"math_id": 5,
"text": "\\rm (\\frac{^{129}I}{^{127}I})_0"
},
{
"math_id": 6,
"text": "\\rm \\Delta t _E"
},
{
"math_id": 7,
"text": "\\rm (\\frac{^{129}I}{^{127}I})_E=(\\frac{^{129}I}{^{127}I})_Be^{-\\lambda (\\Delta t_E-\\Delta t_B)}"
},
{
"math_id": 8,
"text": "\\rm \\Delta t_E-\\Delta t_B"
},
{
"math_id": 9,
"text": "\\rm \\Delta t_E -\\Delta t_B =\\frac{1}{\\lambda}ln(\\frac{(^{129}I/^{127}I)_B}{(^{129}I/^{127}I)_E})=\\frac{1}{\\lambda}ln(\\frac{(^{129}I/^{127}I)_B}{(^{129}Xe/^{127}I)_{BSE}}\n)"
},
{
"math_id": 10,
"text": "\\rm \\Delta t_E -\\Delta t_B"
},
{
"math_id": 11,
"text": "91.2\\ \\rm{nm}<\\lambda<102.3nm"
},
{
"math_id": 12,
"text": "\\rm Xe+hv\\rightarrow Xe^+ +e^-"
},
{
"math_id": 13,
"text": "\\rm CO_2^+ +Xe\\rightarrow CO_2+Xe^+"
},
{
"math_id": 14,
"text": "\\rm H_2^++Xe\\rightarrow HXe^+ +H"
},
{
"math_id": 15,
"text": "\\rm Xe^++O_2\\rightarrow Xe+O_2^+\n"
},
{
"math_id": 16,
"text": "\\rm Kr^+ +H_2\\rightarrow KrH^+ +H"
},
{
"math_id": 17,
"text": "\\rm KrH^+ +e^-\\rightarrow Kr+H"
}
]
| https://en.wikipedia.org/wiki?curid=70939631 |
7094111 | Zappa–Szép product | Mathematics concept
In mathematics, especially group theory, the Zappa–Szép product (also known as the Zappa–Rédei–Szép product, general product, knit product, exact factorization or bicrossed product) describes a way in which a group can be constructed from two subgroups. It is a generalization of the direct and semidirect products. It is named after Guido Zappa (1940) and Jenő Szép (1950) although it was independently studied by others including B.H. Neumann (1935), G.A. Miller (1935), and J.A. de Séguier (1904).
Internal Zappa–Szép products.
Let "G" be a group with identity element "e", and let "H" and "K" be subgroups of "G". The following statements are equivalent:
If either (and hence both) of these statements hold, then "G" is said to be an internal Zappa–Szép product of "H" and "K".
Examples.
Let "G" = GL("n",C), the general linear group of invertible "n × n" matrices over the complex numbers. For each matrix "A" in "G", the QR decomposition asserts that there exists a unique unitary matrix "Q" and a unique upper triangular matrix "R" with positive real entries on the main diagonal such that "A" = "QR". Thus "G" is a Zappa–Szép product of the unitary group "U"("n") and the group (say) "K" of upper triangular matrices with positive diagonal entries.
One of the most important examples of this is Philip Hall's 1937 theorem on the existence of Sylow systems for soluble groups. This shows that every soluble group is a Zappa–Szép product of a Hall "p'"-subgroup and a Sylow "p"-subgroup, and in fact that the group is a (multiple factor) Zappa–Szép product of a certain set of representatives of its Sylow subgroups.
In 1935, George Miller showed that any non-regular transitive permutation group with a regular subgroup is a Zappa–Szép product of the regular subgroup and a point stabilizer. He gives PSL(2,11) and the alternating group of degree 5 as examples, and of course every alternating group of prime degree is an example. This same paper gives a number of examples of groups which cannot be realized as Zappa–Szép products of proper subgroups, such as the quaternion group and the alternating group of degree 6.
External Zappa–Szép products.
As with the direct and semidirect products, there is an external version of the Zappa–Szép product for groups which are not known "a priori" to be subgroups of a given group. To motivate this, let "G" = "HK" be an internal Zappa–Szép product of subgroups "H" and "K" of the group "G". For each "k" in "K" and each "h" in "H", there exist α("k", "h") in "H" and β("k", "h") in "K" such that "kh" = α("k", "h") β("k", "h"). This defines mappings α : "K" × "H" → "H" and β : "K" × "H" → "K" which turn out to have the following properties:
for all "h"1, "h"2 in "H", "k"1, "k"2 in "K". From these, it follows that
More concisely, the first three properties above assert the mapping α : "K" × "H" → "H" is a left action of "K" on (the underlying set of) "H" and that β : "K" × "H" → "K" is a right action of "H" on (the underlying set of) "K". If we denote the left action by "h" → "k""h" and the right action by "k" → "k""h", then the last two properties amount to "k"("h"1"h"2) = "k""h"1 "k""h"1"h"2 and ("k"1"k"2)"h" = "k"1"k"2"h" "k"2"h".
Turning this around, suppose "H" and "K" are groups (and let "e" denote each group's identity element) and suppose there exist mappings α : "K" × "H" → "H" and β : "K" × "H" → "K" satisfying the properties above. On the cartesian product "H" × "K", define a multiplication and an inversion mapping by, respectively,
Then "H" × "K" is a group called the external Zappa–Szép product of the groups "H" and "K". The subsets "H" × {"e"} and {"e"} × "K" are subgroups isomorphic to "H" and "K", respectively, and "H" × "K" is, in fact, an internal Zappa–Szép product of "H" × {"e"} and {"e"} × "K".
Relation to semidirect and direct products.
Let "G" = "HK" be an internal Zappa–Szép product of subgroups "H" and "K". If "H" is normal in "G", then the mappings α and β are given by, respectively, α("k","h") = "k h k"− 1 and β("k", "h") = "k". This is easy to see because formula_0 and formula_1 since by normality of formula_2, formula_3. In this case, "G" is an internal semidirect product of "H" and "K".
If, in addition, "K" is normal in "G", then α("k","h") = "h". In this case, "G" is an internal direct product of "H" and "K".
See also.
Complement (group theory)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(h_1k_1)(h_2k_2) = (h_1k_1h_2k_1^{-1})(k_1k_2)"
},
{
"math_id": 1,
"text": "h_1k_1h_2k_1^{-1}\\in H"
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "k_1h_2k_1^{-1}\\in H"
}
]
| https://en.wikipedia.org/wiki?curid=7094111 |
70941480 | Round-robin voting | Voting systems using paired comparisons
Round-robin, paired comparison, Condorcet completion, or tournament voting methods are a set of ranked voting systems that choose winners by comparing every pair of candidates one-on-one, similar to a round-robin tournament. In each paired matchup, the total number of voters who prefer each candidate is recorded in a beats matrix. Then, a majority-preferred (Condorcet) candidate is elected, if one exists. Otherwise, if there is a cyclic tie, the candidate "closest" to being a Condorcet winner is elected, based on the recorded beats matrix. How "closest" is defined varies by method.
Round-robin methods are one of the four major categories of single-winner electoral methods, along with multi-stage methods (like RCV-IRV), positional methods (like plurality and Borda), and graded methods (like score and STAR voting).
Most, but not all, election methods meeting the Condorcet criterion are based on pairwise counting.
Summary.
In paired voting, each voter ranks candidates from first to last (or rates them on a scale). For each pair of candidates (as in a round-robin tournament), we count how many votes rank each candidate over the other.
Pairwise counting.
Pairwise counts are often displayed in a "pairwise comparison" or "outranking matrix" such as those below. In these matrices, each row represents each candidate as a 'runner', while each column represents each candidate as an 'opponent'. The cells at the intersection of rows and columns each show the result of a particular pairwise comparison. Cells comparing a candidate to themselves are left blank.
Imagine there is an election between four candidates: A, B, C and D. The first matrix below records the preferences expressed on a single ballot paper, in which the voter's preferences are (B, C, A, D); that is, the voter ranked B first, C second, A third, and D fourth. In the matrix a '1' indicates that the runner is preferred over the opponent, while a '0' indicates that the opponent is preferred over the runner.
In this matrix the number in each cell indicates either the number of votes for runner over opponent (runner,opponent) or the number of votes for opponent over runner (opponent, runner).
If pairwise counting is used in an election that has three candidates named A, B, and C, the following pairwise counts are produced:
If the number of voters who have no preference between two candidates is not supplied, it can be calculated using the supplied numbers. Specifically, start with the total number of voters in the election, then subtract the number of voters who prefer the first over the second, and then subtract the number of voters who prefer the second over the first.
The "pairwise comparison matrix" for these comparisons is shown below.
A candidate cannot be pairwise compared to itself (for example candidate A can't be compared to candidate A), so the cell that indicates this comparison is either empty or contains a 0.
Each ballot can be transformed into this style of matrix, and then added to all other ballot matrices using matrix addition. The resulting sum of all ballots in an election is called the sum matrix, and it summarizes all the voter preferences.
An election counting method can use the sum matrix to identify the winner of the election.
Suppose that this imaginary election has two additional voters, and their preferences are (D, A, C, B) and (A, C, B, D). Added to the first voter, these ballots yield the following sum matrix:
In the sum matrix above, formula_0 is the Condorcet winner, because they beat every other candidate one-on-one. When there is no Condorcet winner, round-robin methods such as ranked pairs use the information contained in the sum matrix to choose a winner.
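A short sketch of how the three ballots above are turned into a sum matrix and how the Condorcet winner is read off from it (an illustration of the counting only, not of any particular completion method):
<syntaxhighlight lang="python">
from itertools import combinations

candidates = "ABCD"
ballots = [("B", "C", "A", "D"),    # the three example ballots from the text
           ("D", "A", "C", "B"),
           ("A", "C", "B", "D")]

# sum matrix: wins[x][y] = number of voters ranking x above y
wins = {x: {y: 0 for y in candidates} for x in candidates}
for ballot in ballots:
    rank = {c: i for i, c in enumerate(ballot)}    # lower index = more preferred
    for x, y in combinations(candidates, 2):
        if rank[x] < rank[y]:
            wins[x][y] += 1
        else:
            wins[y][x] += 1

for x in candidates:
    print(x, [wins[x][y] for y in candidates])

# a Condorcet winner beats every other candidate in the pairwise count
print([x for x in candidates
       if all(wins[x][y] > wins[y][x] for y in candidates if y != x)])   # ['A']
</syntaxhighlight>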
The first matrix above, which represents a single ballot, is inversely symmetric: (runner,opponent) is ¬(opponent,runner). Or (runner,opponent) + (opponent,runner) = 1. The sum matrix has this property: (runner, opponent) + (opponent, runner) = N for N voters, if all runners are fully ranked by each voter.
Number of pairwise comparisons.
For formula_1 candidates, there are formula_2 pairwise matchups, assuming it is necessary to keep track of tied ranks; when working with margins, only half of these are necessary (as storing both candidates' percentages becomes redundant). For example, for 3 candidates there are 6 pairwise comparisons, for 4 candidates there are 12 pairwise comparisons, and for 5 candidates there are 20 pairwise comparisons.
Example.
Suppose that Tennessee is holding an election on the location of its capital. The population is concentrated around four major cities. All voters want the capital to be as close to them as possible. The options are:
The preferences of each region's voters are:
These ranked preferences indicate which candidates the voter prefers. For example, the voters in the first column prefer Memphis as their 1st choice, Nashville as their 2nd choice, etc. As these ballot preferences are converted into pairwise counts they can be entered into a table.
The following square-grid table displays the candidates in the same order in which they appear above.
The following tally table shows another table arrangement with the same numbers. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "N \\cdot (N-1) "
}
]
| https://en.wikipedia.org/wiki?curid=70941480 |
70946299 | Midpoint theorem (triangle) | Geometric theorem involving midpoints on a triangle
The midpoint theorem or midline theorem states that if the midpoints of two sides of a triangle are connected, then the resulting line segment will be parallel to the third side and have half of its length. The midpoint theorem generalizes to the intercept theorem, where rather than using midpoints, both sides are partitioned in the same ratio.
The converse of the theorem is true as well. That is, if a line is drawn through the midpoint of one side of a triangle parallel to another side, then it will bisect the third side of the triangle.
The triangle formed by joining the three midpoints of the sides of a triangle is called its medial triangle.
Proof.
<templatestyles src="Math_proof/styles.css" />Proof
Given: In a formula_0 the points M and N are the midpoints of the sides AB and AC respectively.
Construction: MN is extended to D where MN=DN, join C to D.
To Prove: formula_1 and formula_2
Proof:
formula_3 (N is the midpoint of AC)
formula_4 (vertically opposite angles)
formula_5 (by construction)
Hence by Side angle side.
formula_6
Therefore, the corresponding sides and angles of congruent triangles are equal:
formula_7
formula_8
Transversal AC intersects the lines AB and CD, and the alternate angles ∠MAN and ∠DCN are equal. Therefore
formula_9
Hence BCDM is a parallelogram. BC and DM are also equal and parallel, so that
formula_10
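A quick coordinate check of the statement (not a substitute for the synthetic proof above; the triangle is generated at random):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.normal(size=(3, 2))          # an arbitrary triangle in the plane

M = (A + B) / 2                            # midpoint of AB
N = (A + C) / 2                            # midpoint of AC
d1, d2 = N - M, C - B

print(np.isclose(np.linalg.norm(d1), np.linalg.norm(d2) / 2))   # |MN| = |BC| / 2
print(np.isclose(d1[0] * d2[1] - d1[1] * d2[0], 0.0))           # MN is parallel to BC
</syntaxhighlight>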
{
"math_id": 0,
"text": "\\triangle ABC "
},
{
"math_id": 1,
"text": "MN\\parallel BC"
},
{
"math_id": 2,
"text": "MN={1\\over 2}BC"
},
{
"math_id": 3,
"text": "AN=CN"
},
{
"math_id": 4,
"text": "\\angle ANM=\\angle CND"
},
{
"math_id": 5,
"text": "MN=DN"
},
{
"math_id": 6,
"text": "\\triangle AMN\\cong\\triangle CDN "
},
{
"math_id": 7,
"text": "AM=BM=CD"
},
{
"math_id": 8,
"text": "\\angle MAN=\\angle DCN"
},
{
"math_id": 9,
"text": "AM\\parallel CD\\parallel BM"
},
{
"math_id": 10,
"text": "MN={1\\over 2}MD={1\\over 2}BC"
}
]
| https://en.wikipedia.org/wiki?curid=70946299 |
70951352 | Splittance | Distance of a graph from a split graph
In graph theory, a branch of mathematics, the splittance of an undirected graph measures its distance from a split graph. A split graph is a graph whose vertices can be partitioned into an independent set (with no edges within this subset) and a clique (having all possible edges within this subset). The splittance is the smallest number of edge additions and removals that transform the given graph into a split graph.
Calculation from degree sequence.
The splittance of a graph can be calculated only from the degree sequence of the graph, without examining the detailed structure of the graph. Let G be any graph with n vertices, whose degrees in decreasing order are "d"1 ≥ "d"2 ≥ "d"3 ≥ … ≥ "dn". Let m be the largest index for which "di" ≥ "i" – 1. Then the splittance of G is
formula_0
The given graph is a split graph already if "σ"("G") = 0. Otherwise, it can be made into a split graph by calculating m, adding all missing edges between pairs of the m vertices of maximum degree, and removing all edges between pairs of the remaining vertices. As a consequence, the splittance and a sequence of edge additions and removals that realize it can be computed in linear time.
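A minimal sketch of this computation (the example degree sequences are assumptions chosen for illustration, not taken from the source):
<syntaxhighlight lang="python">
from math import comb

def splittance(degrees):
    """Splittance of a graph, computed from its degree sequence alone."""
    d = sorted(degrees, reverse=True)
    # m = largest (1-based) index with d_i >= i - 1
    m = max(i for i in range(1, len(d) + 1) if d[i - 1] >= i - 1)
    # the difference of the two degree sums is always even, so integer division is exact
    return comb(m, 2) + (sum(d[m:]) - sum(d[:m])) // 2

print(splittance([2, 2, 2, 2]))    # the 4-cycle C4 needs one edit -> 1
print(splittance([3, 3, 3, 3]))    # the complete graph K4 is already split -> 0
print(splittance([3, 1, 1, 1]))    # the star K_{1,3} is already split -> 0
</syntaxhighlight>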
Applications.
The splittance of a graph has been used in parameterized complexity as a parameter to describe the efficiency of algorithms. For instance, graph coloring is fixed-parameter tractable under this parameter: it is possible to optimally color the graphs of bounded splittance in linear time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma(G)=\\tbinom{m}{2}-\\frac12\\sum_{i=1}^m d_i +\\frac12\\sum_{i=m+1}^n d_i."
}
]
| https://en.wikipedia.org/wiki?curid=70951352 |
70951426 | Wilson action | Lattice gauge theory action
In lattice field theory, the Wilson action is a discrete formulation of the Yang–Mills action, forming the foundation of lattice gauge theory. Rather than using Lie algebra valued gauge fields as the fundamental parameters of the theory, group valued link fields are used instead, which correspond to the smallest Wilson lines on the lattice. In modern simulations of pure gauge theory, the action is usually modified by introducing higher order operators through Symanzik improvement, significantly reducing discretization errors. The action was introduced by Kenneth Wilson in his seminal 1974 paper, launching the study of lattice field theory.
Links and plaquettes.
Lattice gauge theory is formulated in terms of elements of the compact gauge group rather than in terms of the Lie algebra valued gauge fields formula_0, where formula_1 are the group generators. The Wilson line, which describes parallel transport of Lie group elements through spacetime along a path formula_2, is defined in terms of the gauge field by
formula_3
where formula_4 is the path-ordering operator. Discretizing spacetime as a lattice with points indexed by a vector formula_5, the gauge field takes on values only at these points, formula_6. To first order in the lattice spacing formula_7, the smallest possible Wilson lines, those between two adjacent points, are known as links
formula_8
where formula_9 is a unit vector in the formula_10 direction. Since to first order the path-ordering operator drops out, the link is related to the discretized gauge field by formula_11. Links are the fundamental gauge theory variables of lattice gauge theory, with the path integral measure over the links given by the Haar measure at each lattice point.
Working in some representation of the gauge group, links are matrix valued and orientated. Links of an opposite orientation are defined so that the product of the link from formula_5 to formula_12 with the link in the opposite direction is equal to the identity, which in the case of formula_13 gauge groups means that formula_14. Under a gauge transformation formula_15, the link transforms the same way as the Wilson line
formula_16
The smallest non-trivial loop of link fields on the lattice is known as a plaquette, formed from four links around a square in the formula_10-formula_17 plane
formula_18
The trace of a plaquette is a gauge invariant quantity, analogous to the Wilson loop in the continuum. Using the BCH formula and the lattice gauge field expression for the link variable, the plaquette can be written to lowest order in lattice spacing in terms of the discretized field strength tensor
formula_19
Lattice gauge action.
By rescaling the gauge field using the gauge coupling formula_20 and working in a representation with index formula_21, defined through formula_22, the Yang–Mills action in the continuum can be rewritten as
formula_23
where the field strength tensor is Lie algebra valued formula_24. Since the plaquettes relate the link variables to the discretized field strength tensor, this allows one to construct a lattice version of the Yang–Mills action using them. This is the Wilson action, given in terms of a sum over all plaquettes of one orientation on the lattice
formula_25
It reduces down to the discretized Yang–Mills action with lattice artifacts coming in at order formula_26.
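A minimal sketch of how link variables, plaquettes, and the Wilson action fit together, assuming an SU(2) gauge group, a tiny 4^4 periodic lattice, and Haar-random ("hot start") links; it is illustrative only and makes no attempt at efficiency:
<syntaxhighlight lang="python">
import numpy as np

L, N, beta = 4, 2, 2.0                    # 4^4 lattice, SU(2) links, beta = 2N/g^2
rng = np.random.default_rng(0)

def random_su2():
    # a Haar-random SU(2) matrix built from a uniformly random unit quaternion
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

# one SU(2) link variable U_mu(n) for every site n and direction mu
links = np.array([random_su2() for _ in range(L**4 * 4)]).reshape(L, L, L, L, 4, N, N)

def U(n, mu):
    return links[tuple(np.mod(n, L)) + (mu,)]          # periodic boundary conditions

def plaquette(n, mu, nu):
    # U_mu(n) U_nu(n + mu) U_mu(n + nu)^dagger U_nu(n)^dagger
    n, e = np.asarray(n), np.eye(4, dtype=int)
    return U(n, mu) @ U(n + e[mu], nu) @ U(n + e[nu], mu).conj().T @ U(n, nu).conj().T

# Wilson action: (beta / N) * sum over sites and plaquettes (mu < nu) of Re tr(1 - U_{mu nu})
S = sum((beta / N) * (N - plaquette(n, mu, nu).trace().real)
        for n in np.ndindex(L, L, L, L)
        for mu in range(4) for nu in range(mu + 1, 4))
print(S / (6 * L**4))    # action per plaquette; it would be zero for a cold (all-identity) start
</syntaxhighlight>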
This action is far from unique. A lattice gauge action can be constructed from any discretized Wilson loop. As long as the loops are suitably averaged over orientations and translations in spacetime to give rise to the correct symmetries, the action will reduce back down to the continuum result. The advantage of using plaquettes is its simplicity and that the action lends itself well to improvement programs used to reduce lattice artifacts.
Symanzik improvement.
The formula_26 errors of the Wilson action can be reduced through Symanzik improvement, whereby additional higher order operators are added to the action to cancel these lattice artifacts. There are many higher order operators that can be added to the Wilson action, corresponding to various loops of links. For formula_13 gauge theories, the Lüscher–Weisz action uses formula_27 rectangles formula_28 and parallelograms formula_29 formed from links around a cube
formula_30
where formula_31 is the inverse coupling constant and formula_32 and formula_33 are the coefficients which are tuned to minimize lattice artifacts.
The values of the two prefactors can be calculated either by using the action to simulate known results and tuning the parameters to minimize errors, or else by calculating them using tadpole improved perturbation theory. For the case of an formula_34 gauge theory, the latter method yields
formula_35
where formula_36 is the value of the mean link and formula_37 is the quantum chromodynamics fine-structure constant
formula_38
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_\\mu(x) = A^a_\\mu(x) T^a"
},
{
"math_id": 1,
"text": "T^a"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "\nW[x,y] = \\mathcal P e^{i\\int_C A_\\mu dx^\\mu},\n"
},
{
"math_id": 4,
"text": "\\mathcal P"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "A_\\mu(n)"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "\nU_\\mu(n) = W[n, n+\\hat \\mu]+\\mathcal O(a),\n"
},
{
"math_id": 9,
"text": "\\hat \\mu"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "U_\\mu(n) =e^{iaA_\\mu(n)}"
},
{
"math_id": 12,
"text": "n+\\hat \\mu"
},
{
"math_id": 13,
"text": "\\text{SU}(N)"
},
{
"math_id": 14,
"text": "U_{-\\mu}(n) = U_\\mu(n-\\hat \\mu)^\\dagger"
},
{
"math_id": 15,
"text": "\\Omega(n)"
},
{
"math_id": 16,
"text": "\nU_\\mu(n) \\rightarrow \\Omega(n) U_\\mu(n) \\Omega(n+\\hat \\mu)^\\dagger.\n"
},
{
"math_id": 17,
"text": "\\nu"
},
{
"math_id": 18,
"text": "\nU_{\\mu\\nu}(n) = U_\\mu(n)U_\\nu(n+\\hat \\mu)U_\\mu(n+\\hat \\nu)^\\dagger U_\\nu(n)^\\dagger.\n"
},
{
"math_id": 19,
"text": "\nU_{\\mu\\nu}(n) = e^{ia^2 F_{\\mu\\nu}(n)+\\mathcal O(a^3)}.\n"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "\\rho"
},
{
"math_id": 22,
"text": "\\text{tr}[T^aT^b] = \\rho\\delta^{ab}"
},
{
"math_id": 23,
"text": "\nS = \\frac{1}{2g^2\\rho}\\int d^4 x \\ \\text{tr}[F_{\\mu\\nu}F^{\\mu\\nu}],\n"
},
{
"math_id": 24,
"text": "F_{\\mu\\nu}= F^a_{\\mu\\nu}T^a"
},
{
"math_id": 25,
"text": "S = \\frac{1}{g^2 \\rho}\\sum_{n}\\sum_{\\mu<\\nu} \\text{Re} \\ \\text{tr}[1-U_{\\mu\\nu}(n)]."
},
{
"math_id": 26,
"text": "\\mathcal O(a^2)"
},
{
"math_id": 27,
"text": "2\\times 1"
},
{
"math_id": 28,
"text": "U_{rt}"
},
{
"math_id": 29,
"text": "U_{pg}"
},
{
"math_id": 30,
"text": "\nS[U] = \\frac{\\beta}{N} \\sum_{pl}\\text{Re} \\ \\text{tr}(1-U_{\\mu\\nu}) + \\frac{\\beta_{rt}}{N}\\sum_{rt}\\text{Re} \\ \\text{tr}(1-U_{rt}) + \\frac{\\beta_{pg}}{N}\\sum_{pg}\\text{Re} \\ \\text{tr}(1-U_{pg}),\n"
},
{
"math_id": 31,
"text": "\\beta = 2N/g^2"
},
{
"math_id": 32,
"text": "\\beta_{rt}"
},
{
"math_id": 33,
"text": "\\beta_{pg}"
},
{
"math_id": 34,
"text": "\\text{SU}(3)"
},
{
"math_id": 35,
"text": "\n\\beta_{rt} = - \\frac{\\beta_{pl}}{20u_0^2}(1+0.4805\\alpha_s), \\ \\ \\ \\ \\ \\ \\ \\ \\beta_{pg} = - \\frac{\\beta}{u_0^2}0.03325\\alpha_s,\n"
},
{
"math_id": 36,
"text": "u_0"
},
{
"math_id": 37,
"text": "\\alpha_s"
},
{
"math_id": 38,
"text": "\nu_0 = \\big(\\tfrac{1}{3}\\text{Re} \\ \\text{tr}\\langle U_{\\mu\\nu}\\rangle\\big)^{1/4}, \\ \\ \\ \\ \\ \\ \\ \\ \\alpha_s = - \\frac{\\ln(\\tfrac{1}{3}\\text{Re} \\ \\text{tr}\\langle U_{\\mu\\nu})}{3.06839}.\n"
}
]
| https://en.wikipedia.org/wiki?curid=70951426 |
70952312 | Saprobic system | Tool to measure water quality
The saprobic system is a tool to measure water quality, and specifically it deals with the capacity of a water body to self-regulate and degrade organic matter. The saprobic system derives from so-called saprobes — organisms that thrive through degradation of organic matter, which is called saprotrophic nutrition.
The saprobic system is based on a survey of indicator organisms. For example, the abundance of "Lymnaea stagnalis" water snails and other organisms is estimated, and using a formula, the listed saprobic and tolerance values of the organisms allow the water quality grade — the saprobic index — to be computed.
Saprobic water quality is expressed in four classes ranging from I to IV; and with three intermediate grades (I-II, II-III and III-IV). Water bodies of class I are the cleanest and of the highest quality. The inherent drawback of the saprobic systems as a water quality measure is that it only regards biodegradable organic material, and so ignores other factors like heavy metal pollution. Though the presence of certain organisms can rule out the presence of toxic substances, the incorporation of such organisms would deviate from the saprobic system's concept.
Computing the saprobic index.
This section explains how the saprobic index of a water body is computed according to the Zelinka & Marvan method; without adjusting for several confounding factors.
In a first iteration, the abundance A of each indicator species is counted and converted to categories ranging from 1 to 7. An abundance of 1 means that only one or two animals were found, while the class 7 means more than 1000 individuals during a survey. There are different abundance classes — for example, some methods use classes where the next-bigger class contains roughly double the number of individuals. The following table follows the DIN 38410-1 (2008) standard used in Germany, where the next-bigger class is about three times larger than the previous one.
The saprobic value s denotes how much organic matter must be present for an aquatic species to thrive. An animal with a saprobic value 1 can only survive in water with little organic matter present, while one with a value of 4 requires water bodies with a large amount of organic matter. The aforementioned example, the "Lymnaea stagnalis" snail, has a saprobic value of 2.0. The annelid worm "Tubifex tubifex" needs a lot of organic matter and has an s value of 3.6.
The weighting factor g has a value of either 1, 2, 4, 8 or 16, and denotes a tolerance range. If a species can survive in both unpolluted and heavily polluted water, g is very small because finding the species in a survey has little predictive value. In practice, only indicator species with a weighting factor g ≥ 4 are used. For example, a caddisfly, "Agapetus fuscipes", has a g value of 16, while the zebra mussel's value is g = 4.
The saprobic index of a water body - the water quality - is finally computed with the following formula:
formula_0
The water body's quality, in Roman numerals, is the rounded value of S.
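A minimal sketch of the Zelinka & Marvan computation; the survey data below (abundance class A, saprobic value s, weighting factor g for each taxon) are hypothetical and chosen only for illustration:
<syntaxhighlight lang="python">
def saprobic_index(survey):
    """Zelinka & Marvan saprobic index from (A, s, g) triples, one per indicator taxon."""
    numerator = sum(A * s * g for A, s, g in survey)
    denominator = sum(A * g for A, s, g in survey)
    return numerator / denominator

# hypothetical survey: (abundance class A, saprobic value s, weighting factor g)
survey = [(5, 2.0, 8),     # a moderately tolerant species, abundant
          (3, 1.0, 16),    # a clean-water species with a narrow tolerance range, common
          (1, 3.6, 8)]     # a pollution-tolerant species, rare

S = saprobic_index(survey)
print(round(S, 2), "-> water quality class", round(S))   # 1.63 -> class II (rounded to 2)
</syntaxhighlight>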
Some species and their s and g values.
The species used in Germany to measure saprobic water quality tend to group around s = 2, while other countries like Austria and the Czech Republic use a more diverse list of organisms.
Pantle & Buck method.
The earlier Pantle & Buck method (1955) uses the same saprobic values s of each species, but not the weighting factor g. The Pantle-Buck saprobity index S, ranging from 0 to 4, is thus calculated:
formula_1
where the abundance A is expressed as one of nine subjective categories, ranging from "very rare" to "mass development". It does not require the organisms to be counted – which can save a lot of time – but raises the issues of intra- and inter-rater reliability.
Confounding factors and corrections.
The saprobic index is only regarded as a valid estimate if the sum of the abundance classes is at least 20. For example, if a survey only found a total of 500 individuals of any species, the sample would still be valid if the survey found four species with 125 individuals each (abundance class 5).
Likewise, a single water body has to be surveyed several times in different months in order to account for fluctuations.
During its history, several correcting factors have been introduced. For example, they deal with the flow rate of the river (fast-flowing water bodies are inherently better oxygenated, thus speeding up organic matter degradation), water acidification, and human-made changes to the water body. Likewise, corrections must be applied for the altitude of the ecosystem (lowland rivers naturally carry more organic matter than mountainous ones, where biomass production is lower), and for the different size of catchment areas.
The saprobic system was never designed to accurately indicate water quality if only a selection of organisms is surveyed. Deviations can be sizeable if a survey only studies ciliates and members of the macrozoobenthos (benthos animals larger than 1 millimeter), as the latter's abundance can be easily influenced by oxygen levels and not by the availability of organic matter.
History.
The saprobic system has a long history in German-language countries. The idea of using saprobes to estimate water quality was foreshadowed by the works of Arthur Hill Hassall (1850) and Ferdinand Julius Cohn (1853). In a series of publications, the German botanists Richard Kolkwitz and Maximilian Marsson (1902, 1908, 1909) developed the saprobic system to judge water quality. They compiled a list of about 300 plant and 500 animal species (excluding fish), and estimated saprobic values for them.
In 1955, H. Knöpp introduced abundance classes, and the calculation of a water quality index was established during the 1950s and 1960s (Pantle & Buck, 1955; Zelinka & Marvan, 1961; Marvan, 1969).
In 2000, the Pantle & Buck technique was criticized because it requires the surveyed organisms to be identified by genus, something that freshwater ecologists are rarely trained for. Furthermore, it focuses on aquatic organisms that are prevalent in Western Europe, something that hampers water quality assays in Eastern Europe and Asia.
The procedure used in Germany to estimate the saprobic index has been standardized in DIN 38410.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = \\frac{\\sum_{i=1}^n A\\cdot s\\cdot g}{\\sum_{i=1}^n A\\cdot g}"
},
{
"math_id": 1,
"text": "S = \\frac{\\sum_{i=1}^n A\\cdot s}{\\sum_{i=1}^n A}"
}
]
| https://en.wikipedia.org/wiki?curid=70952312 |
70954606 | Tagged Deterministic Finite Automaton | In automata theory, a tagged deterministic finite automaton (TDFA) is an extension of deterministic finite automaton (DFA). In addition to solving the recognition problem for regular languages, TDFA is also capable of submatch extraction and parsing. While canonical DFA can find out if a string belongs to the language defined by a regular expression, TDFA can also extract substrings that match specific subexpressions. More generally, TDFA can identify positions in the input string that match tagged positions in a regular expression (tags are meta-symbols similar to capturing parentheses, but without the pairing requirement).
History.
TDFA were first described by Ville Laurikari in 2000.
Prior to that it was unknown whether it was possible to perform submatch extraction in one pass on a deterministic finite-state automaton,
so this paper was an important advancement.
Laurikari described TDFA construction and gave a proof that the determinization process terminates;
however, the algorithm did not handle disambiguation correctly.
In 2007 Chris Kuklewicz implemented TDFA in a Haskell library Regex-TDFA with POSIX longest-match semantics.
Kuklewicz gave an informal description of the algorithm
and answered the principal question whether TDFA are capable of POSIX longest-match disambiguation,
which was doubted by other researchers.
In 2017 Ulya Trafimovich described TDFA with one-symbol lookahead.
The use of a lookahead symbol reduces the number of registers and register operations in a TDFA,
which makes it faster and often smaller than Laurikari TDFA.
Trafimovich called TDFA variants with and without lookahead TDFA(1) and TDFA(0) by analogy with LR parsers LR(1) and LR(0).
The algorithm was implemented in the open-source lexer generator RE2C.
Trafimovich formalized Kuklewicz disambiguation algorithm.
In 2018 Angelo Borsotti worked on an experimental Java implementation of TDFA;
it was published later in 2021.
In 2019 Borsotti and Trafimovich adapted POSIX disambiguation algorithm by Okui and Suzuki to TDFA.
They gave a formal proof of correctness of the new algorithm
and showed that it is faster than Kuklewicz algorithm in practice.
In 2020 Trafimovich published an article about TDFA implementation in RE2C.
In 2022 Borsotti and Trafimovich published a paper with a detailed description of TDFA construction.
The paper incorporated their past research and presented multi-pass TDFA that are better suited to just-in-time determinization.
They also compared TDFA against other algorithms and provided benchmarks.
Formal definition.
TDFA have the same basic structure as ordinary DFA: a finite set of states linked by transitions. In addition to that, TDFA have a fixed set of registers that hold tag values, and register operations on transitions that set or copy register values.
The values may be scalar offsets, or offset lists for tags that match repeatedly (the latter can be represented efficiently using a trie structure). There is no one-to-one mapping between tags in a regular expression and registers in a TDFA: a single tag may need many registers, and the same register may hold values of different tags.
The following definition is according to Trafimovich and Borsotti. The original definition by Laurikari is slightly different.
A tagged deterministic finite automaton formula_1 is a tuple
formula_2, where:
formula_3 is a finite set of input symbols (the alphabet),
formula_4 is a finite set of tags,
formula_5 is a finite set of states, with an initial state formula_6 and a subset of final states formula_7,
formula_8 is a finite set of registers, with a subset of final registers formula_9 that hold the submatch results,
formula_10 is the transition function, which maps a state and an input symbol to the next state and a sequence of register operations,
formula_11 is the final function, which maps each final state to a sequence of register operations executed on acceptance.
Register operations from the set formula_12 have one of three forms: a set operation formula_14 that assigns register formula_13 a value formula_15 (where n denotes "nothing", i.e. an undefined offset, and p denotes the current position), a copy operation formula_17 that copies the value of register formula_16 into register formula_13, and an append operation formula_18 that copies register formula_16 and appends a history formula_19 of values over formula_20 (used for tags inside repetition).
Example.
As an example, consider a TDFA for the regular expression formula_0
with alphabet formula_21 and a set of tags formula_22
that matches strings of the form formula_23 with at least one symbol.
The TDFA has four states formula_24, three of which are final formula_25.
The set of registers is formula_26
with a subset of final registers formula_27
where register formula_28 corresponds to formula_13-th tag.
Transitions have operations defined by the formula_29 function,
and final states have operations defined by the formula_30 function (marked with wide-tipped arrow).
For example, to match string formula_31,
one starts in state 0,
matches the first formula_32 and moves to state 1 (setting registers formula_33 to undefined and formula_34 to the current position 0),
matches the second formula_32 and loops to state 1 (register values are now formula_35),
matches formula_36 and moves to state 2 (register values are now formula_37),
executes the final operations in state 2 (register values are now formula_38)
and finally exits TDFA.
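The run described above can be reproduced with a small table-driven interpreter. The sketch below is a simplification: it encodes only the transitions exercised by the string formula_31, and the register operations were chosen so that they reproduce the register values quoted in the walkthrough; the actual automaton contains further transitions and may distribute its operations differently.
```python
UNDEF = None

# (state, symbol) -> (next state, operations), applied in order:
#   ('set', r, 'p')    set register r to the current position
#   ('set', r, UNDEF)  set register r to "undefined"
#   ('copy', r, j)     copy register j into register r
delta = {
    (0, 'a'): (1, [('set', 1, UNDEF), ('set', 2, UNDEF), ('set', 3, 'p')]),
    (1, 'a'): (1, [('copy', 1, 3), ('set', 2, 'p'), ('set', 3, 'p')]),
    (1, 'b'): (2, [('copy', 1, 3), ('set', 2, 'p'), ('set', 3, 'p'), ('set', 4, 'p')]),
}
final_ops = {2: [('set', 5, 'p')]}      # operations of the final function in state 2

def run(word):
    state, reg = 0, {}
    def apply(ops, pos):
        for op in ops:
            if op[0] == 'set':
                reg[op[1]] = pos if op[2] == 'p' else UNDEF
            else:                        # 'copy'
                reg[op[1]] = reg.get(op[2])
    for pos, sym in enumerate(word):
        state, ops = delta[(state, sym)]
        apply(ops, pos)
    apply(final_ops[state], len(word))
    return reg

print(run("aab"))   # {1: 1, 2: 2, 3: 2, 4: 2, 5: 3}
```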
Complexity.
Canonical DFA solve the recognition problem in linear time.
The same holds for TDFA, since the number of registers and register operations is fixed and depends only on the regular expression, but not on the length of input.
The overhead on submatch extraction depends on tag density in a regular expression
and nondeterminism degree of each tag (the maximum number of registers needed to track all possible values of the tag in a single TDFA state).
On one extreme, if there are no tags, a TDFA is identical to a canonical DFA.
On the other extreme, if every subexpression is tagged, a TDFA effectively performs full parsing and has many operations on every transition.
In practice for real-world regular expressions with a few submatch groups the overhead is negligible compared to matching with canonical DFA.
TDFA construction.
TDFA construction is performed in a few steps.
First, a regular expression is converted to a tagged nondeterministic finite automaton (TNFA).
Second, a TNFA is converted to a TDFA using a determinization procedure;
this step also includes disambiguation that resolves conflicts between ambiguous TNFA paths.
After that, a TDFA can optionally go through a number of optimizations that reduce the number of registers and operations,
including minimization that reduces the number of states.
Algorithms for all steps of TDFA construction with pseudocode are given in the paper by Borsotti and Trafimovich.
This section explains TDFA construction on the example of a regular expression formula_39,
where formula_40 is a tag and formula_41 are alphabet symbols.
Tagged NFA.
TNFA is a nondeterministic finite automaton with tagged ε-transitions.
It was first described by Laurikari,
although similar constructions were known much earlier as Mealy machines and nondeterministic finite-state transducers.
TNFA construction is very similar to Thompson's construction:
it mirrors the structure of a regular expression.
Importantly, TNFA preserves ambiguity in a regular expression:
if it is possible to match a string in two different ways,
then TNFA for this regular expression has two different accepting paths for this string.
The TNFA definition by Borsotti and Trafimovich differs from the original one by Laurikari
in that a TNFA can have negative tags on transitions:
they are needed to make the absence of a match explicit in cases when there is a bypass for a tagged transition.
Consider the TNFA for the example regular expression formula_39.
It has three kinds of transitions:
transitions on alphabet symbols formula_42,
tagged ε-transitions (the one from state 4 to state 12 with tag formula_40 and the one from state 10 to state 11 with negative tag formula_43), and
untagged ε-transitions.
The TNFA has a single start state 0 and a single final state 11.
In order to understand TNFA determinization, it helps to understand TNFA simulation first.
Recall the canonical ε-NFA simulation: it constructs a subset of active states as an ε-closure of the start state,
and then in a loop repeatedly steps on the next input symbol and constructs ε-closure of the active state set.
Eventually the loop terminates: either the active set becomes empty (which means a failure),
or all input symbols get consumed (which means a success if the active set contains a final state, otherwise a failure).
TNFA simulation is similar, but it additionally tracks tag values.
Every time the simulation encounters a tagged transition, it updates the tag value to the current offset.
Since the simulation tracks multiple nondeterministic paths simultaneously, tag values along these paths may differ and should be tracked separately.
Another difficulty is the need for disambiguation:
unlike canonical NFA simulation, TNFA simulation needs to distinguish ambiguous paths,
as they may have different tag values.
For example, TNFA simulation on the string formula_44 tracks two ambiguous paths that assign different values to tag formula_40; choosing between them is the task of disambiguation, described next.
Disambiguation.
Ambiguity means the existence of multiple different parse trees for the same input.
It is a property of a regular expression;
ambiguity is preserved by TNFA construction
and gets resolved during TNFA simulation or determinization.
One way to resolve ambiguity is to use a disambiguation policy, the most notable examples being the leftmost-greedy
and the longest-match (POSIX) policies.
The leftmost-greedy policy is defined in terms of regular expression structure;
it admits a simple and efficient implementation.
The POSIX policy, on the other hand, is defined in terms of the structure of parse results;
it is more difficult to implement and computationally more complex than the leftmost-greedy policy.
TDFA can work with both policies, and there is no runtime overhead
on disambiguation, since it happens during determinization and gets built into TDFA structure.
For TNFA simulation, on the other hand, the time spent on disambiguation is included in the runtime.
The example regular expression formula_39 is deliberately ambiguous, as
it allows one to parse formula_44 in two different ways:
either as the left alternative formula_45, or the right one formula_44.
Depending on which alternative is preferred, tag formula_40 should either have value
formula_47 (the offset at the position between symbols formula_32 and formula_36),
or formula_46 (undefined).
Both POSIX and leftmost greedy disambiguation policies agree that the first alternative is preferable in this case.
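The following sketch makes this concrete: it simulates a TNFA for the example regular expression formula_39 on the string formula_44, tracking tag values along both ambiguous paths and resolving the conflict in favour of the earlier (leftmost-greedy) path. The state numbering and encoding are hypothetical and do not follow the figures referenced in the article.
```python
EPS = None

# state -> list of (symbol, target, tag); tag k > 0 sets tag k to the current
# offset, tag k < 0 marks tag |k| as unmatched (a negative tag).
trans = {
    0: [(EPS, 1, None), (EPS, 4, None)],   # split: left alternative a* t b* first
    1: [('a', 1, None), (EPS, 2, 1)],      # a*, then tag t (tag number 1)
    2: [('b', 2, None), (EPS, 7, None)],   # b*, then accept
    4: [('a', 5, None)],                   # right alternative: a b
    5: [('b', 6, None)],
    6: [(EPS, 7, -1)],                     # negative tag: t stays undefined here
    7: [],                                 # final state
}
FINAL = 7

def closure(configs, pos):
    """Priority-preserving epsilon-closure: configurations reached first win."""
    out, seen = [], set()
    def dfs(state, tags):
        if state in seen:
            return
        seen.add(state)
        out.append((state, tags))
        for sym, target, tag in trans[state]:
            if sym is EPS:
                new_tags = dict(tags)
                if tag is not None:
                    new_tags[abs(tag)] = pos if tag > 0 else None
                dfs(target, new_tags)
    for state, tags in configs:
        dfs(state, tags)
    return out

def simulate(word):
    configs = closure([(0, {})], 0)
    for pos, sym in enumerate(word):
        stepped = [(target, tags) for state, tags in configs
                   for s, target, _ in trans[state] if s == sym]
        configs = closure(stepped, pos + 1)
    for state, tags in configs:            # the first final configuration wins
        if state == FINAL:
            return tags
    return None

print(simulate("ab"))   # {1: 1}: tag t = offset 1, the left alternative is preferred
```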
Determinization.
TNFA determinization is based on the canonical powerset construction algorithm that converts an NFA to a DFA.
The algorithm simulates NFA on all possible strings. At each step of the simulation, the active set of NFA states forms a new DFA state. If the new state is identical to an existing DFA state, it is discarded and replaced with the existing one, and the current branch of simulation terminates. Otherwise the new state is added to the growing set of DFA states and simulation from this state continues. Eventually determinization terminates: although the set of all possible strings is infinite, the set of different DFA states is finite, and at some point all new states become identical to existing ones.
In the case of TDFA naive powerset construction faces a problem:
TDFA states contain tag information, which changes at every step of the simulation (as the offsets increase).
This prevents TDFA states from mapping: a pair of states that contain identical TNFA states but different tag offsets are not identical and cannot be mapped.
As a result, simulation continues indefinitely, the set of TDFA states grows, and determinization does not terminate.
To solve this problem, Laurikari applied the idea of indirection:
instead of storing immediate tag values in TDFA states, he suggested storing them indirectly in registers.
Tag values in registers may be different, but it doesn't matter as long as the registers are identical.
This solves the termination problem:
even if TDFA states have different registers, they can be made identical (mapped to each other)
by adding operations that copy the corresponding register values.
Indirection is not free: it requires adding runtime register operations on transitions that update register values.
To reduce the runtime overhead, Trafimovich used the lookahead optimization.
The idea is to move register operations from the incoming transition into a TDFA state
to the outgoing transitions from this state.
This way the operations get split on the lookahead symbol,
which reduces the overlap between register lifetimes and results in a faster TDFA.
To use this optimization, it is necessary to track lookahead tags in each TDFA state under construction
and take them into account when mapping TDFA states.
Applying this determinization procedure to the running example produces a TDFA whose transitions carry the necessary register operations.
Optimizations.
The goal of optimizations is to reduce TDFA size and the number of registers and operations on transitions.
This section describes a few optimizations that are used in a practical TDFA implementation.
None of these optimizations is particularly complex or vital for TDFA operation, but when applied
together and in the correct order they can make TDFA considerably faster and smaller.
Fixed-tags optimization is applied at the regular expression level (before TNFA construction).
The idea is, if a pair of tags happens to be within fixed distance from each other, there
is no need to track both of them: the value of one tag can be computed from the value of the other by adding a fixed offset.
For example, in the regular expression formula_48
tags formula_49 and formula_50 are within one symbol distance from each other,
so formula_49 can be computed as formula_51.
This optimization can reduce both TDFA construction time (as fixed tags are excluded from TNFA construction and determinization)
and matching time (as there are fewer register operations).
Register optimizations are applied after TDFA construction.
A TDFA induces a control flow graph (CFG) on registers:
operations on transitions form basic blocks,
and there is an arc between two blocks if one of them is reachable from the other one by TDFA transitions, without passing through other register operations.
CFG represents a program on registers as variables, so the usual compiler optimizations can be applied to it
(such as liveness analysis, dead code elimination and register allocation).
TDFA minimization is very similar to DFA minimization, except for one additional restriction:
register actions on TDFA transitions must be taken into account.
So, TDFA states that are identical, but have different register actions on incoming transitions on the same symbol, cannot be merged.
All the usual algorithms for DFA minimization can be adapted to TDFA minimization,
for example Moore's algorithm is used in the RE2C lexer generator.
After optimization, the TDFA for regular expression formula_39 is in fact the same as a TDFA for a semantically equivalent regular expression formula_45,
where ambiguity has been removed.
Ambiguity is resolved during determinization, and even the unoptimized TDFA is unambiguous,
but it has some built-in redundancy that can be removed by optimizations.
Multi-pass TDFA.
TDFA with registers are well suited for ahead-of-time determinization, when the time spent on TDFA construction is not included in the runtime (e.g. in lexer generators).
But for just-in-time determinization (e.g. in regular expression libraries) it is desirable to reduce TDFA construction time.
Another concern is tag density in a regular expression:
TDFA with registers are efficient if the number of tags is relatively small.
But for heavily tagged regular expressions TDFA with registers are suboptimal: transitions get cluttered with operations, making TDFA execution slow.
Register optimizations also become problematic due to the size of liveness and interference information.
Multi-pass TDFA address both issues: they reduce TDFA construction time,
and they are better suited to dense submatch extraction.
The main difference with canonical TDFA is that multi-pass TDFA have no register operations.
Instead they have multiple passes: a forward pass that matches the input
string and records a sequence of TDFA states, and one or more backward passes that iterate through the recorded
states and collect submatch information.
A single backward pass is sufficient, but an extra pass may be used e.g. to estimate the necessary amount of memory for submatch results.
In order to trace back the matching TNFA path from a sequence of TDFA states,
multi-pass TDFA use backlinks on transitions and in final states.
A backlink is a pair, where the first component is an index in backlink arrays on preceding transitions,
and the second component is a sequence of tags for a fragment of TNFA path between TDFA states.
As an example, consider a multi-pass TDFA for regular expression formula_52 matching the string formula_31 (compare it to the TDFA with registers for the same regular expression described above). The forward pass records the sequence of TDFA states visited while matching. The backward pass then reconstructs the tagged path: the backlink formula_53 in the final state yields the path fragment formula_54; the backlink formula_55 on the preceding transition (which consumed the final formula_36) extends it to formula_56; the backlink formula_57 extends it to formula_58; and the backlink formula_59 on the first transition yields the complete tagged path formula_60.
Related automata.
StaDFA described by Mohammad Imran Chowdhury
are very similar to TDFA, except that they have register operations in states, not on transitions.
DSSTs (Deterministic Streaming String Transducers) described by Grathwohl
are more distant relatives to TDFA, better suited to full parsing than submatch extraction.
DSST states contain path trees constructed by the ε-closure,
while TDFA states contain similar information in a decomposed form (register versions and lookahead tags).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1 a 2)^* 3 (a|4 b) 5 b^*"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "(\\Sigma, T, S, S_f, s_0, R, R_f, \\delta, \\varphi)"
},
{
"math_id": 3,
"text": "\\Sigma"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "s_0"
},
{
"math_id": 7,
"text": "S_f \\subseteq S"
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "R_f"
},
{
"math_id": 10,
"text": "\\delta: S \\times \\Sigma \\rightarrow S \\times O^*"
},
{
"math_id": 11,
"text": "\\varphi: S_f \\rightarrow O^*"
},
{
"math_id": 12,
"text": "O"
},
{
"math_id": 13,
"text": "i"
},
{
"math_id": 14,
"text": "i \\leftarrow v"
},
{
"math_id": 15,
"text": "v \\in \\{\\mathbf{n},\\mathbf{p}\\}"
},
{
"math_id": 16,
"text": "j"
},
{
"math_id": 17,
"text": "i \\leftarrow j"
},
{
"math_id": 18,
"text": "i \\leftarrow j \\cdot h"
},
{
"math_id": 19,
"text": "h"
},
{
"math_id": 20,
"text": "\\{\\mathbf{n},\\mathbf{p}\\}"
},
{
"math_id": 21,
"text": "\\Sigma=\\{a,b\\}"
},
{
"math_id": 22,
"text": "T=\\{1,2,3,4,5\\}"
},
{
"math_id": 23,
"text": "a \\dots a b \\dots b"
},
{
"math_id": 24,
"text": "S=\\{0,1,2,3\\}"
},
{
"math_id": 25,
"text": "S_f=\\{1,2,3\\}"
},
{
"math_id": 26,
"text": "R=\\{r_1,r_2,r_3,r_4,r_5\\}"
},
{
"math_id": 27,
"text": "R_f=\\{r_1,r_2,r_3,r_4,r_5\\}"
},
{
"math_id": 28,
"text": "r_i"
},
{
"math_id": 29,
"text": "\\delta"
},
{
"math_id": 30,
"text": "\\varphi"
},
{
"math_id": 31,
"text": "aab"
},
{
"math_id": 32,
"text": "a"
},
{
"math_id": 33,
"text": "r_1, r_2"
},
{
"math_id": 34,
"text": "r_3"
},
{
"math_id": 35,
"text": "r_1=0, r_2=r_3=1"
},
{
"math_id": 36,
"text": "b"
},
{
"math_id": 37,
"text": "r_1=1, r_2=r_3=r_4=2"
},
{
"math_id": 38,
"text": "r_1=1, r_2=r_3=r_4=2, r_5=3"
},
{
"math_id": 39,
"text": "a^* t b^* | ab"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "\\{a,b\\}"
},
{
"math_id": 42,
"text": "\\{a, b\\}"
},
{
"math_id": 43,
"text": "-t"
},
{
"math_id": 44,
"text": "ab"
},
{
"math_id": 45,
"text": "a^* t b^*"
},
{
"math_id": 46,
"text": "\\varnothing"
},
{
"math_id": 47,
"text": "1"
},
{
"math_id": 48,
"text": "t_1 (a|b) \\, t_2"
},
{
"math_id": 49,
"text": "t_1"
},
{
"math_id": 50,
"text": "t_2"
},
{
"math_id": 51,
"text": "t_2 - 1"
},
{
"math_id": 52,
"text": "(1 a 2)^* 3 (a|4 b)5b^*"
},
{
"math_id": 53,
"text": "(0,t_5)"
},
{
"math_id": 54,
"text": "t_5"
},
{
"math_id": 55,
"text": "(0,t_2 t_3 t_4)"
},
{
"math_id": 56,
"text": "t_2 t_3 t_4 b \\, t_5"
},
{
"math_id": 57,
"text": "(0,t_2 t_1)"
},
{
"math_id": 58,
"text": "t_2 t_1 a \\, t_2 t_3 t_4 b \\, t_5"
},
{
"math_id": 59,
"text": "(0,t_1)"
},
{
"math_id": 60,
"text": "t_1 a \\, t_2 t_1 a \\, t_2 t_3 t_4 b \\, t_5"
}
]
| https://en.wikipedia.org/wiki?curid=70954606 |
70956038 | Polyakov loop | Thermal Wilson loop
In quantum field theory, the Polyakov loop is the thermal analogue of the Wilson loop, acting as an order parameter for confinement in pure gauge theories at nonzero temperatures. In particular, it is a Wilson loop that winds around the compactified Euclidean temporal direction of a thermal quantum field theory. It indicates confinement because its vacuum expectation value must vanish in the confined phase due to its non-invariance under center gauge transformations. This also follows from the fact that the expectation value is related to the free energy of individual quarks, which diverges in this phase. Introduced by Alexander M. Polyakov in 1975, they can also be used to study the potential between pairs of quarks at nonzero temperatures.
Definition.
Thermal quantum field theory is formulated in Euclidean spacetime with a compactified imaginary temporal direction of length formula_0. This length corresponds to the inverse temperature of the field formula_1. Compactification leads to a special class of topologically nontrivial Wilson loops that wind around the compact direction known as Polyakov loops. In formula_2 theories a straight Polyakov loop on a spatial coordinate formula_3 is given by
formula_4
where formula_5 is the path-ordering operator and formula_6 is the Euclidean temporal component of the gauge field. In lattice field theory this operator is reformulated in terms of temporal link fields formula_7 at a spatial position formula_8 as
formula_9
The continuum limit of the lattice must be taken carefully to ensure that the compact direction has fixed extent. This is done by ensuring that the finite number of temporal lattice points formula_10 is such that formula_11 is constant as the lattice spacing formula_12 goes to zero.
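On the lattice, the Polyakov loop at a spatial site is simply the normalised trace of the ordered product of the temporal link matrices. The sketch below assumes a hypothetical data layout in which the links at one site are given as a list of SU(N) matrices.
```python
import numpy as np

def polyakov_loop(links):
    """Ordered product of the temporal links at one spatial site, traced and normalised."""
    N = links[0].shape[0]
    P = np.eye(N, dtype=complex)
    for U in links:                 # product along the compact time direction
        P = P @ U
    return np.trace(P) / N          # a complex number for SU(N)

# Trivial example: for identity links the loop is exactly 1.
N_T, N = 8, 3
links = [np.eye(N, dtype=complex) for _ in range(N_T)]
print(polyakov_loop(links))         # (1+0j)
```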
Order parameter.
Gauge fields need to satisfy the periodicity condition formula_13 in the compactified direction. Meanwhile, gauge transformations only need to satisfy this up to a group center term formula_14 as formula_15. A change of basis can always diagonalize this so that formula_16 for a complex number formula_17. The Polyakov loop is topologically nontrivial in the temporal direction so unlike other Wilson loops it transforms as formula_18 under these transformations. Since this makes the loop gauge dependent for formula_19, by Elitzur's theorem non-zero expectation values of formula_20 imply that the center group must be spontaneously broken, implying deconfinement in pure gauge theory. This makes the Polyakov loop an order parameter for confinement in thermal pure gauge theory, with a confining phase occurring when formula_21 and a deconfining phase when formula_22. For example, lattice calculations of quantum chromodynamics with infinitely heavy quarks that decouple from the theory show that the deconfinement phase transition occurs at around a temperature of formula_23 MeV. Meanwhile, in a gauge theory with quarks, these break the center group and so confinement must instead be deduced from the spectrum of asymptotic states, the color neutral hadrons.
For gauge theories that lack a nontrivial group center that could be broken in the confining phase, the Polyakov loop expectation values are nonzero even in this phase. They are however still a good indicator of confinement since they generally experience a sharp jump at the phase transition. This is the case for example in the Higgs model with the exceptional gauge group formula_24.
The Nambu–Jona-Lasinio model lacks local color symmetry and thus cannot capture the effects of confinement. However, Polyakov loops can be used to construct the Polyakov-loop-extended Nambu–Jona-Lasinio model, which treats both the chiral condensate and the Polyakov loops as classical homogeneous fields that couple to quarks according to the symmetries and symmetry breaking patterns of quantum chromodynamics.
Quark free energy.
The free energy formula_25 of formula_26 quarks and formula_27 antiquarks, subtracting out the vacuum energy, is given in terms of the correlation functions of Polyakov loops
formula_28
This free energy is another way to see that the Polyakov loop acts as an order parameter for confinement since the free energy of a single quark is given by formula_29. Confinement of quarks means that it would take an infinite amount of energy to create a configuration with a single free quark, therefore its free energy must be infinite and so the Polyakov loop expectation value must vanish in this phase, in agreement with the center symmetry breaking argument.
The formula for the free energy can also be used to calculate the potential between a pair of infinitely massive quarks spatially separated by formula_30. Here the potential formula_31 is the first term in the free energy, so that the correlation function of two Polyakov loops is
formula_32
where formula_33 is the energy difference between the potential and the first excited state. In the confining phase the potential is linear formula_34, where the constant of proportionality is known as the string tension. The string tension acquired from the Polyakov loop is always bounded from above by the string tension acquired from the Wilson loop.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "\\beta \\propto 1/T"
},
{
"math_id": 2,
"text": "\\text{SU}(N)"
},
{
"math_id": 3,
"text": "\\boldsymbol x"
},
{
"math_id": 4,
"text": "\\Phi(\\boldsymbol x) = \\frac{1}{N}\\text{tr} \\ \\mathcal P \\exp\\bigg[\\int_0^\\beta dx_4 A_4(\\boldsymbol x, x_f)\\bigg],"
},
{
"math_id": 5,
"text": "\\mathcal P"
},
{
"math_id": 6,
"text": "A_4"
},
{
"math_id": 7,
"text": "U_4(\\boldsymbol m, j)"
},
{
"math_id": 8,
"text": "\\boldsymbol m"
},
{
"math_id": 9,
"text": "\n\\Phi(\\boldsymbol m) = \\frac{1}{N}\\text{tr} \\bigg[ \\prod_{j=0}^{N_T-1} U_4(\\boldsymbol m, j)\\bigg].\n"
},
{
"math_id": 10,
"text": "N_T"
},
{
"math_id": 11,
"text": "\\beta = N_T a"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "A_\\mu(\\boldsymbol x, x_4+\\beta) = A_\\mu(\\boldsymbol x, x_4)"
},
{
"math_id": 14,
"text": "h"
},
{
"math_id": 15,
"text": "\\Omega(\\boldsymbol x, x_4+\\beta) = h \\Omega(\\boldsymbol x, x_4)"
},
{
"math_id": 16,
"text": "h=zI"
},
{
"math_id": 17,
"text": "z"
},
{
"math_id": 18,
"text": "\\Phi(\\boldsymbol x) \\rightarrow z \\Phi(\\boldsymbol x)"
},
{
"math_id": 19,
"text": "z \\neq 1"
},
{
"math_id": 20,
"text": "\\langle \\Phi\\rangle"
},
{
"math_id": 21,
"text": "\\langle \\Phi\\rangle = 0"
},
{
"math_id": 22,
"text": "\\langle \\Phi\\rangle \\neq 0"
},
{
"math_id": 23,
"text": "270"
},
{
"math_id": 24,
"text": "G_2"
},
{
"math_id": 25,
"text": "F"
},
{
"math_id": 26,
"text": "N"
},
{
"math_id": 27,
"text": "\\bar N"
},
{
"math_id": 28,
"text": "\ne^{-\\beta F} = \\langle \\Phi(\\boldsymbol x_1)\\dots \\Phi(\\boldsymbol x_{N_q})\\Phi^\\dagger(\\boldsymbol x'_1) \\cdots \\Phi^\\dagger(\\boldsymbol x_{\\bar N}')\\rangle.\n"
},
{
"math_id": 29,
"text": "e^{-\\beta \\Delta F} = \\langle \\Phi(\\boldsymbol x)\\rangle"
},
{
"math_id": 30,
"text": "r =|\\boldsymbol x_1 - \\boldsymbol x_2|"
},
{
"math_id": 31,
"text": "V(r)"
},
{
"math_id": 32,
"text": "\n\\langle \\Phi(\\boldsymbol x_1)\\Phi(\\boldsymbol x_2)\\rangle \\propto e^{-\\beta V(r)}(1+\\mathcal O(e^{-\\beta\\Delta E(r)})),\n"
},
{
"math_id": 33,
"text": "\\Delta E"
},
{
"math_id": 34,
"text": "V(r) = \\sigma r"
}
]
| https://en.wikipedia.org/wiki?curid=70956038 |
7095671 | Gysin homomorphism | Long exact sequence
In the field of mathematics known as algebraic topology, the Gysin sequence is a long exact sequence which relates the cohomology classes of the base space, the fiber and the total space of a sphere bundle. The Gysin sequence is a useful tool for calculating the cohomology rings given the Euler class of the sphere bundle and vice versa. It was introduced by Gysin (1942), and is generalized by the Serre spectral sequence.
Definition.
Consider a fiber-oriented sphere bundle with total space "E", base space "M", fiber "S""k" and projection map
formula_0:
formula_1
Any such bundle defines a degree "k" + 1 cohomology class "e" called the Euler class of the bundle.
De Rham cohomology.
Discussion of the sequence is clearest with de Rham cohomology. There, cohomology classes are represented by differential forms, so that "e" can be represented by a ("k" + 1)-form.
The projection map formula_0 induces a map in cohomology formula_2 called its pullback formula_3
formula_4
In the case of a fiber bundle, one can also define a pushforward map formula_5
formula_6
which acts by fiberwise integration of differential forms on the oriented sphere – note that this map goes "the wrong way": it is a covariant map between objects associated with a contravariant functor.
Gysin proved that the following is a long exact sequence
formula_7
where formula_8 is the wedge product of a differential form with the Euler class "e".
Integral cohomology.
The Gysin sequence is a long exact sequence not only for the de Rham cohomology of differential forms, but also for cohomology with integral coefficients. In the integral case one needs to replace the wedge product with the Euler class with the cup product, and the pushforward map no longer corresponds to integration.
Gysin homomorphism in algebraic geometry.
Let "i": "X" → "Y" be a (closed) regular embedding of codimension "d", "Y'" → "Y" a morphism and "i'": "X"' = "X" ×"Y" "Y"' → "Y"' the induced map. Let "N" be the pullback of the normal bundle of "i" to "X'". Then the refined Gysin homomorphism "i"! refers to the composition
formula_9
where
σ is the specialization homomorphism, which sends a "k"-cycle on "Y"' to the corresponding cycle on the normal cone, viewed as a "k"-cycle on "N" via the closed embedding formula_10, and
the second map is the usual Gysin homomorphism for the vector bundle "N", induced by the zero-section embedding formula_11.
The homomorphism "i"! "encodes" the intersection product in intersection theory in that one either shows the intersection product of "X" and "V" to be given by the formula formula_12 or takes this formula as a definition.
Example: Given a vector bundle "E", let "s": "X" → "E" be a section of "E". Then, when "s" is a regular section, formula_13 is the class of the zero-locus of "s", where ["X"] is the fundamental class of "X".
Notes.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "S^k \\hookrightarrow E \\stackrel{\\pi}{\\longrightarrow} M. "
},
{
"math_id": 2,
"text": "H^\\ast"
},
{
"math_id": 3,
"text": "\\pi^\\ast"
},
{
"math_id": 4,
"text": "\\pi^*:H^*(M)\\longrightarrow H^*(E). \\, "
},
{
"math_id": 5,
"text": "\\pi_\\ast"
},
{
"math_id": 6,
"text": "\\pi_*:H^*(E)\\longrightarrow H^{*-k}(M) "
},
{
"math_id": 7,
"text": "\\cdots \\longrightarrow H^n(E) \\stackrel{\\pi_*}{\\longrightarrow} H^{n-k}(M) \\stackrel{e_\\wedge}{\\longrightarrow} H^{n+1}(M) \\stackrel{\\pi^*}{\\longrightarrow} H^{n+1}(E) \\longrightarrow \\cdots"
},
{
"math_id": 8,
"text": "e_\\wedge"
},
{
"math_id": 9,
"text": "i^!: A_k(Y') \\overset{\\sigma}\\longrightarrow A_k(N) \\overset{\\text{Gysin}} \\longrightarrow A_{k-d}(X')"
},
{
"math_id": 10,
"text": "C_{X'/Y'} \\hookrightarrow N"
},
{
"math_id": 11,
"text": "X' \\hookrightarrow N"
},
{
"math_id": 12,
"text": "X \\cdot V = i^![V],"
},
{
"math_id": 13,
"text": "s^{!}[X]"
}
]
| https://en.wikipedia.org/wiki?curid=7095671 |
70958960 | Atmospheric circulation of exoplanets | Atmospheric circulation of a planet is largely specific to the planet in question, and the study of atmospheric circulation of exoplanets is a nascent field, as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large-scale atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions.
Coriolis force.
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to Coriolis force can be written as:
formula_0
where
formula_1 is the velocity vector of the flow, and
formula_2 is the planet's angular velocity vector.
This force acts perpendicular to both the flow velocity and the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotating planet.
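As a quick numerical illustration of the formula above, the sketch below evaluates the term 2Ω×u for Earth-like values; the numbers are only illustrative.
```python
import numpy as np

omega = np.array([0.0, 0.0, 7.29e-5])   # planetary angular velocity vector [rad/s]
u = np.array([10.0, 0.0, 0.0])          # flow velocity [m/s]
a_coriolis = 2.0 * np.cross(omega, u)
print(a_coriolis)                        # [0. 0.001458 0.] m/s^2, perpendicular to both vectors
```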
Mathematical models.
Navier-Stokes momentum equation.
Conservation of momentum for a flow is given by the following equation:
formula_3
where
formula_4 denotes the material derivative,
formula_5 is the pressure,
formula_6 is the density,
formula_7 is the gravitational acceleration,
formula_8 is the position vector, and
formula_9 is the frictional (viscous) force.
The term formula_10 is the centripetal acceleration due to the rotation of the planet.
Simplified model for large-scale motion.
The above equation can be simplified to a form suitable for large-scale atmospheric motion. First, the velocity vector formula_1 is split into the three components of wind:
formula_11
where
formula_12 is the zonal (east–west) component of the wind,
formula_13 is the meridional (north–south) component, and
formula_14 is the vertical component.
Next, we ignore friction and vertical wind. Thus, the equations for zonal and meridional wind simplify to:
formula_15
formula_16
and the equation in the vertical direction simplifies to the hydrostatic equilibrium equation:
formula_17
where the parameter formula_7 has absorbed the vertical component of the centripetal force. In the above equations:
formula_18
is the Coriolis parameter, formula_19 is the latitude and formula_20 is the radius of the planet.
Key drivers of circulation.
Thermodynamics.
Temperature gradients are one of the drivers of circulation, as one effect of atmospheric flow is to transport heat from places of high temperature to those of low temperature in an effort to reach thermal equilibrium. Generally, planets have stably stratified atmospheres. This means that motion due to the temperature gradient in the vertical direction is opposed by the pressure gradient in the vertical direction. In this case, it is the horizontal temperature gradients (on constant pressure surfaces) which drive circulation. Such temperature gradients are typically maintained by uneven heating/cooling throughout a planet's atmosphere. On Earth, for example, at the equator, the atmosphere absorbs more net energy from the Sun than it does at the poles.
Planetary rotation.
As noted previously, planetary rotation is important when it comes to atmospheric circulation, as the Coriolis and centripetal forces arise as a result of planetary rotation. When considering a steady version of the simplified equations for large-scale motion presented above, both Coriolis and centripetal forces work to balance out the horizontal pressure gradients. Depending on the rotation rate of the planet, one of these forces will dominate and affect the atmospheric circulation accordingly.
Geostrophic balance.
For a planet with rapid rotation, the Coriolis force is the dominant force which balances pressure gradient. In this case the equations for large-scale motion further simplify to:
formula_21
formula_22
where the formula_23 subscript denotes a constant altitude surface and the formula_7 subscript denotes geostrophic wind. Note that in this case the geostrophic wind is perpendicular to the pressure gradient. This is because the Coriolis force acts perpendicular to the wind direction: it can only balance the pressure-gradient force when the wind blows along lines of constant pressure, that is, perpendicular to the pressure gradient.
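A rough sense of the magnitudes involved can be obtained from the geostrophic relations above; the values below (mid-latitude, a pressure gradient of about 1 hPa per 100 km) are hypothetical and only illustrative.
```python
import numpy as np

Omega = 7.29e-5                      # planetary rotation rate [rad/s]
phi = np.deg2rad(45.0)               # latitude
f = 2.0 * Omega * np.sin(phi)        # Coriolis parameter
rho = 1.2                            # air density [kg/m^3]
dp_dy = 100.0 / 100e3                # meridional pressure gradient [Pa/m]
dp_dx = 0.0

u_g = -dp_dy / (f * rho)             # zonal geostrophic wind
v_g = dp_dx / (f * rho)              # meridional geostrophic wind
print(u_g, v_g)                      # about -8 m/s and 0 m/s
```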
Cyclostrophic balance.
For a planet with a low rotation rate and negligible Coriolis force, pressure gradient may instead be balanced by centripetal acceleration. In this case the equations for large-scale motion further simplify to:
formula_24
for a prevailing wind in the east-west direction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol{a} = 2\\boldsymbol{\\Omega}\\times \\boldsymbol{u}"
},
{
"math_id": 1,
"text": "\\boldsymbol{u}"
},
{
"math_id": 2,
"text": "\\boldsymbol{\\Omega}"
},
{
"math_id": 3,
"text": "\\frac{d \\boldsymbol{u}}{d t} = -\\frac{1}{\\rho}\\nabla p - (2\\boldsymbol{\\Omega}\\times \\boldsymbol{u}) - [\\boldsymbol{\\Omega}\\times(\\boldsymbol{\\Omega}\\times\\boldsymbol{r})]-g\\boldsymbol{k}+\\boldsymbol{F}_{visc} "
},
{
"math_id": 4,
"text": "\\frac{d}{dt} = \\frac{\\partial}{\\partial t} + \\boldsymbol{u}\\cdot\\nabla"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "\\rho "
},
{
"math_id": 7,
"text": "g"
},
{
"math_id": 8,
"text": "\\boldsymbol{r}"
},
{
"math_id": 9,
"text": "\\boldsymbol{F}_{visc}"
},
{
"math_id": 10,
"text": "\\boldsymbol{\\Omega}\\times(\\boldsymbol{\\Omega}\\times\\boldsymbol{r})"
},
{
"math_id": 11,
"text": "\\boldsymbol{u} = u\\boldsymbol{i} + v\\boldsymbol{j} + w\\boldsymbol{k}"
},
{
"math_id": 12,
"text": "u "
},
{
"math_id": 13,
"text": "v"
},
{
"math_id": 14,
"text": "w"
},
{
"math_id": 15,
"text": "\\frac{du}{dt} - \\left(f + \\frac{u\\tan\\phi}{a}\\right)v = -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial x}"
},
{
"math_id": 16,
"text": "\\frac{dv}{dt} + \\left(f + \\frac{u\\tan\\phi}{a}\\right)u = -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial y}"
},
{
"math_id": 17,
"text": "\\frac{\\partial p}{\\partial z} = -g\\rho"
},
{
"math_id": 18,
"text": "f = 2\\Omega\\sin\\phi"
},
{
"math_id": 19,
"text": "\\phi"
},
{
"math_id": 20,
"text": "a"
},
{
"math_id": 21,
"text": "u_g = -\\frac{1}{f\\rho}\\left(\\frac{\\partial p}{\\partial y}\\right)_{z}"
},
{
"math_id": 22,
"text": "v_g = \\frac{1}{f\\rho}\\left(\\frac{\\partial p}{\\partial x}\\right)_{z}"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "\\frac{u^2\\tan\\phi}{a} = -\\frac{1}{\\rho}\\frac{\\partial p}{\\partial y}"
}
]
| https://en.wikipedia.org/wiki?curid=70958960 |
70959577 | Tokamak Chauffage Alfvén Brésilien | Tokamak at the University of Sao Paulo, Brazil
The Tokamak Chauffage Alfvén Brésilien (TCABR) is a tokamak situated at the University of São Paulo (USP), Brazil. TCABR is the largest tokamak in the southern hemisphere and one of the magnetic-confinement devices committed to advancing scientific knowledge in fusion power.
History.
TCABR was originally designed and constructed in Switzerland, at the "École Polytechnique Fédérale de Lausanne" (EPFL), and operated there from 1980 until 1992, under the name of "Tokamak Chauffage Alfvén" (TCA). The main focus of TCA was to assess and enhance plasma heating with Alfvén waves. A couple of years later, the machine was transferred to USP, where it underwent an upgrade and "Brésilien" was added to its name. The operation of TCABR began in 1999.
Properties.
The TCABR plasma is made of hydrogen and has a circular cross-section. In general, its discharges are ohmically heated and the plasma current in TCABR reaches up to formula_0. The minor and major radii of TCABR are respectively formula_1 and formula_2, giving an aspect ratio of formula_3. The TCABR central electron temperature is around formula_4 (i.e., formula_5) and its mean electron density is formula_6, in units of formula_7. Other parameters of TCABR include the toroidal magnetic field, formula_8 the hydrogen filling pressure, formula_9, a discharge duration of formula_10, and a steady-phase duration around formula_11.
Research program.
The current purpose of the TCABR tokamak includes the study of Alfvén waves, but is not restricted to it. Other research areas are (i) the characterization of magnetohydrodynamic (MHD) instabilities, (ii) the study of high-confinement regimes induced by electrical polarization of external electrodes in the plasma edge, (iii) the investigation of edge turbulence, and (iv) the study of plasma poloidal and toroidal rotation using optical diagnostics. The TCABR team is also associated with a theoretical group focused on investigating instabilities and transport barriers in tokamaks and dynamical systems.
An upgrade in the TCABR is also being conducted. A set of 108 RMP coils will be installed to control and study edge localized modes (ELMs). New shaping coils will be added, allowing great flexibility in plasma configurations (e.g. single null, double null, snowflake, and negative triangularity configurations). The vacuum-vessel inner wall of TCABR will receive graphite tiles to decrease impurity deposition and energy loss in the plasma.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_P \\leq 100\\ \\text{kA}"
},
{
"math_id": 1,
"text": "a=18.0\\;\\text{cm}"
},
{
"math_id": 2,
"text": "R=61.5\\;\\text{cm}"
},
{
"math_id": 3,
"text": "A=R/a=3.4"
},
{
"math_id": 4,
"text": " k_{B} T_{e} \\leq 650 \\ \\text{eV} "
},
{
"math_id": 5,
"text": "T_{e}\\sim6\\times10^{6}\\;\\text{K}"
},
{
"math_id": 6,
"text": "0.9\\leq\\bar{n}_{e0}\\leq3"
},
{
"math_id": 7,
"text": "10^{19}\\;\\text{m}^{-3}"
},
{
"math_id": 8,
"text": "B_{0}\\sim1.1\\;\\text{T},"
},
{
"math_id": 9,
"text": "P_{H}\\simeq3\\times10^{-4}\\;\\text{Pa}"
},
{
"math_id": 10,
"text": "T_{D}\\simeq100\\;\\text{ms}"
},
{
"math_id": 11,
"text": "T\\leq60\\;\\text{ms}"
}
]
| https://en.wikipedia.org/wiki?curid=70959577 |
70959639 | Midpoint theorem (conics) | Collinearity of the midpoints of parallel chords in a conic
In geometry, the midpoint theorem describes a property of parallel chords in a conic. It states that the midpoints of parallel chords in a conic are located on a common line.
The common line or line segment for the midpoints is called the diameter. For a circle, ellipse or hyperbola the diameter goes through its center. For a parabola the diameter is always perpendicular to its directrix, and for a pair of intersecting lines (a degenerate conic) the diameter goes through the point of intersection.
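The property can be checked numerically; the sketch below uses an arbitrarily chosen ellipse and chord slope and verifies that the chord midpoints lie on one line through the center.
```python
import numpy as np

# Ellipse x^2/4 + y^2 = 1 and parallel chords y = m*x + c with slope m = 0.7.
a2, b2, m = 4.0, 1.0, 0.7
midpoints = []
for c in (-0.8, -0.4, 0.3, 0.6):
    # Substituting the chord into the ellipse equation gives A x^2 + B x + C = 0.
    A = 1.0 / a2 + m * m / b2
    B = 2.0 * m * c / b2
    x_mid = -B / (2.0 * A)           # mean of the two intersection abscissas
    midpoints.append((x_mid, m * x_mid + c))

print([y / x for x, y in midpoints]) # constant ratio (about -0.357): midpoints are collinear
```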
Gallery (formula_0 = eccentricity): | [
{
"math_id": 0,
"text": "e"
}
]
| https://en.wikipedia.org/wiki?curid=70959639 |
7096097 | Gas thermometer | A gas thermometer is a thermometer that measures temperature by the variation in volume or pressure of a gas.
Volume Thermometer.
This thermometer functions by Charles's Law, which states that when the temperature of a gas held at constant pressure increases, so does its volume.
Using Charles's Law, the temperature can be measured from the volume of the gas, given the volume at a known reference temperature, by using the formula written below and translating the result to the graduated scale of the device holding the gas. This works on the same principle as mercury thermometers.
formula_0
or
formula_1
formula_2 is the volume,
formula_3 is the thermodynamic temperature,
formula_4 is the constant for the system.
formula_4 is not a fixed constant across all systems and therefore needs to be found experimentally for a given system through testing with known temperature values.
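A short sketch shows how the calibration works in practice; the calibration point below is hypothetical.
```python
# Calibrate the system constant k from one known point, then read temperature from volume.
V_cal, T_cal = 1.00, 293.15          # calibration volume [L] at a known temperature [K]
k = V_cal / T_cal                    # the constant of the system

def temperature_from_volume(V):
    return V / k                     # from V / T = k

print(temperature_from_volume(1.10)) # about 322.5 K
```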
Pressure Thermometer and Absolute Zero.
The constant volume gas thermometer plays a crucial role in understanding how absolute zero could be discovered long before the advent of cryogenics. Consider a graph of pressure versus temperature made not far from standard conditions (well above absolute zero) for three different samples of any ideal gas "(a, b, c)". To the extent that the gas is ideal, the pressure depends linearly on temperature, and the extrapolation to zero pressure occurs at absolute zero. Note that data could have been collected with three different amounts of the same gas, which would have rendered this experiment easy to do in the eighteenth century. | [
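The extrapolation described above can be reproduced with a simple linear fit; the pressure readings below are idealised, hypothetical values.
```python
import numpy as np

T = np.array([0.0, 25.0, 50.0, 75.0, 100.0])        # temperature [degrees Celsius]
P = np.array([101.3, 110.6, 119.8, 129.1, 138.4])   # pressure [kPa] of a fixed-volume sample

slope, intercept = np.polyfit(T, P, 1)               # straight-line fit P = slope*T + intercept
print(-intercept / slope)                             # about -273 degrees Celsius: absolute zero
```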
{
"math_id": 0,
"text": "V \\propto T\\,"
},
{
"math_id": 1,
"text": "\\frac{V}{T}=k"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=7096097 |
70961528 | Job 3 | Job 3 is the third chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter belongs to the Dialogue section of the book, comprising Job 3:1–.
Text.
The original text is written in Hebrew language. This chapter is divided into 26 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows: the Prologue (chapters 1–2), the Dialogue (chapters 3–31), the Verdicts (32:1–42:6), and the Epilogue (42:7–17).
Within the structure, chapter 3 is grouped into the Dialogue section with the following outline: Job curses his day of birth (3:1–10) and Job's self-lament (3:11–26).
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar.
Job curses his day of birth (3:1–10).
After the prose prologue in chapters 1–2, the narrator of the Book of Job fades away until reappearing in chapter 42, so there is no interpreter to explain the conversation among the individual speakers, and the readers have to attentively follow the threads of the dialogue. When seven days had passed since the arrival of Job's three friends, Job finally released his 'pent-up emotions' by cursing the day of his birth (verses 2–10), before turning to questioning in verses 11–26. In all of his words, Job did not directly curse God as the Adversary had predicted (1:11) or his wife had suggested (2:9). Nothing in Job's "self-curse" or "self-imprecation" is inconsistent with his faith in God; Job's words are best understood as a bitter cry of pain or protest out of an existential dilemma, preserving faith in the midst of an experience of disorientation, rather than as an incantation to destroy the creation, which could not literally be fulfilled.
"After this opened Job his mouth, and cursed his day."
[Job said:] "As for that day, let it be darkness;"
"let God above not regard it;"
"and let not light shine upon it."
Job's Self-Lament (3:11–26).
Job's lament in this section has two discrete parts: verses 11–19 and verses 20–26.
Each part commences with the Hebrew word "lām-māh" ("why").
The lament complements Job's initial cry (verses 1–10) with a series of rhetorical questions: posing an argument that, because he was born (verse 10), the earliest chance he had of escaping this life of misery would have been to be stillborn (verses 11–12, 16), whereas in verses 13–19 Job regards death as 'falling into a peaceful sleep in a place where there is no trouble'. YHWH later poses His questions to Job (Job 38–41), which made Job realize that he had been ignorant of the ways of the Lord.
[Job said:] "Why did I not die at birth,"
"come out from the womb and expire?""
Verse 11.
The two halves of the verse use the prepositional phrases ("at birth", literally "from the womb", and "come out from the womb", literally, "from the belly I went out"), both in the temporal sense of “on emerging from the womb."
The 'twin images of death' in two halves of the verse ("die", "expire") contrast the 'two symbols of life' in verse 12 ("knees to receive me", "breasts to nurse").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70961528 |
7096313 | Controlling for a variable | Binning data according to measured values of the variable
In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.
When estimating the effect of explanatory variables on an outcome by regression, controlled-for variables are included as inputs in order to separate their effects from the explanatory variables.
A limitation of controlling for variables is that a causal model is needed to identify important confounders (the "backdoor criterion" is used for the identification). Without having one, a possible confounder might remain unnoticed. Another associated problem is that if a variable which is not a real confounder is controlled for, it may in fact make other variables (possibly not taken into account) become confounders while they were not confounders before. In other cases, controlling for a non-confounding variable may cause underestimation of the true causal effect of the explanatory variables on an outcome (e.g. when controlling for a mediator or its descendant). "Counterfactual reasoning" mitigates the influence of confounders without this drawback.
Experiments.
Experiments attempt to assess the effect of manipulating one or more independent variables on one or more dependent variables. To ensure the measured effect is not influenced by external factors, other variables must be held constant. The variables made to remain constant during an experiment are referred to as control variables.
For example, if an outdoor experiment were to be conducted to compare how different wing designs of a paper airplane (the independent variable) affect how far it can fly (the dependent variable), one would want to ensure that the experiment is conducted at times when the weather is the same, because one would not want weather to affect the experiment. In this case, the control variables may be wind speed, direction and precipitation. If the experiment were conducted when it was sunny with no wind, but the weather changed, one would want to postpone the completion of the experiment until the control variables (the wind and precipitation level) were the same as when the experiment began.
In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect.
Observational studies.
In an observational study, researchers have no control over the values of the independent variables, such as who receives the treatment. Instead, they must control for variables using statistics.
Observational studies are used when controlled experiments may be unethical or impractical. For instance, if a researcher wished to study the effect of unemployment (the independent variable) on health (the dependent variable), it would be considered unethical by institutional review boards to randomly assign some participants to have jobs and some not to. Instead, the researcher will have to create a sample which includes some employed people and some unemployed people. However, there could be factors that affect both whether someone is employed and how healthy he or she is. Part of any observed association between the independent variable (employment status) and the dependent variable (health) could be due to these outside, spurious factors rather than indicating a true link between them. This can be problematic even in a true random sample. By controlling for the extraneous variables, the researcher can come closer to understanding the true effect of the independent variable on the dependent variable.
In this context the extraneous variables can be controlled for by using multiple regression. The regression uses as independent variables not only the one or ones whose effects on the dependent variable are being studied, but also any potential confounding variables, thus avoiding omitted variable bias. "Confounding variables" in this context means other factors that not only influence the "dependent variable" (the outcome) but also influence the main "independent" variable.
OLS Regressions and control variables.
The simplest examples of control variables in regression analysis come from Ordinary Least Squares (OLS) estimators. The OLS framework assumes the following: (1) the relationship between the dependent variable and the explanatory variables is linear in the parameters; (2) the error terms formula_0 are independent and identically distributed, with an expected value of zero conditional on the explanatory variables; and (3) the matrix formula_1 is invertible, i.e. there is no perfect multicollinearity among the explanatory variables.
Accordingly, a control variable can be interpreted as a linear explanatory variable that affects the mean value of Y (Assumption 1), but which does not represent the primary variable of investigation, and which also satisfies the other assumptions above.
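The effect of including a control variable can be demonstrated on simulated data; the sketch below compares a naive regression of y on x with a regression that also includes the confounder z, using data generated purely for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                      # confounder
x = 0.8 * z + rng.normal(size=n)            # explanatory variable, influenced by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # outcome; the true effect of x is 2

X_naive = np.column_stack([np.ones(n), x])
X_ctrl = np.column_stack([np.ones(n), x, z])
beta_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)
beta_ctrl, *_ = np.linalg.lstsq(X_ctrl, y, rcond=None)
print(beta_naive[1])   # biased upward (around 3.5) because z is omitted
print(beta_ctrl[1])    # close to the true effect of 2 once z is controlled for
```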
Example.
Consider a study about whether getting older affects someone's life satisfaction. (Some researchers perceive a "u-shape": life satisfaction appears to decline first and then rise after middle age.) To identify the control variables needed here, one could ask what other variables determine not only someone's life satisfaction but also their age. Many other variables determine life satisfaction. But "no other variable" determines how old someone is (as long as they remain alive). (All people keep getting older, at the same rate, no matter what their other characteristics.) So, no control variables are needed here.
To determine the needed control variables, it can be useful to construct a directed acyclic graph.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n (\\epsilon_i)_{i\\in N} \n "
},
{
"math_id": 1,
"text": "\n X^{'}X \n "
}
]
| https://en.wikipedia.org/wiki?curid=7096313 |
7096466 | Structure tensor | Tensor related to gradients
In mathematics, the structure tensor, also referred to as the second-moment matrix, is a matrix derived from the gradient of a function. It describes the distribution of the gradient in a specified neighborhood around a point and makes the information invariant to the observing coordinates. The structure tensor is often used in image processing and computer vision.
The 2D structure tensor.
Continuous version.
For a function formula_0 of two variables "p" = ("x", "y"), the structure tensor is the 2×2 matrix
formula_1
where formula_2 and formula_3 are the partial derivatives of formula_0 with respect to "x" and "y"; the integrals range over the plane formula_4; and "w" is some fixed "window function" (such as a Gaussian blur), a distribution on two variables. Note that the matrix formula_5 is itself a function of "p" = ("x", "y").
The formula above can be written also as formula_6, where formula_7 is the matrix-valued function defined by
formula_8
If the gradient formula_9 of formula_0 is viewed as a 2×1 (single-column) matrix, where formula_10 denotes transpose operation, turning a row vector to a column vector, the matrix formula_7 can be written as the matrix product formula_11 or tensor or outer product formula_12. Note however that the structure tensor formula_13 cannot be factored in this way in general except if formula_14 is a Dirac delta function.
Discrete version.
In image processing and other similar applications, the function formula_0 is usually given as a discrete array of samples formula_15, where "p" is a pair of integer indices. The 2D structure tensor at a given pixel is usually taken to be the discrete sum
formula_16
Here the summation index "r" ranges over a finite set of index pairs (the "window", typically formula_17 for some "m"), and "w"["r"] is a fixed "window weight" that depends on "r", such that the sum of all weights is 1. The values formula_18 are the partial derivatives sampled at pixel "p"; which, for instance, may be estimated from by formula_0 by finite difference formulas.
The formula of the structure tensor can be written also as formula_19, where formula_7 is the matrix-valued array such that
formula_20
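As a rough illustration (not from the source), the discrete sum above can be computed with NumPy; here the partial derivatives are estimated with np.gradient (central finite differences) and the window weights "w"["r"] form a normalized, separable Gaussian. The window radius and standard deviation are arbitrary placeholders.

```python
import numpy as np

def structure_tensor_2d(image, sigma=1.0, radius=3):
    """Return the components (Sxx, Sxy, Syy) of the discrete structure tensor S_w[p]."""
    # Partial derivatives I_x, I_y estimated by central finite differences
    Iy, Ix = np.gradient(image.astype(float))      # np.gradient: d/d(row), d/d(col)

    # Per-pixel products forming S_0[p]
    Ixx, Ixy, Iyy = Ix * Ix, Ix * Iy, Iy * Iy

    # Window weights w[r]: separable Gaussian, normalized so the weights sum to 1
    r = np.arange(-radius, radius + 1)
    g = np.exp(-r**2 / (2.0 * sigma**2))
    g /= g.sum()

    def window_average(A):
        A = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, A)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, A)

    return window_average(Ixx), window_average(Ixy), window_average(Iyy)

# Example: an image varying along x only -> Sxy and Syy are zero everywhere
I = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
Sxx, Sxy, Syy = structure_tensor_2d(I)
```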
Interpretation.
The importance of the 2D structure tensor formula_5 stems from the fact that its eigenvalues formula_21 (which can be ordered so that formula_22) and the corresponding eigenvectors formula_23 summarize the distribution of the gradient formula_24 of formula_0 within the window defined by formula_14 centered at formula_25.
Namely, if formula_26, then formula_27 (or formula_28) is the direction that is maximally aligned with the gradient within the window.
In particular, if formula_29 then the gradient is always a multiple of formula_27 (positive, negative or zero); this is the case if and only if formula_0 within the window varies along the direction formula_27 but is constant along formula_30. This condition of eigenvalues is also called the linear symmetry condition because then the iso-curves of formula_0 consist of parallel lines, i.e. there exists a one-dimensional function formula_31 which can generate the two-dimensional function formula_0 as formula_32 for some constant vector formula_33 and the coordinates formula_34.
If formula_35, on the other hand, the gradient in the window has no predominant direction; which happens, for instance, when the image has rotational symmetry within that window. This condition of eigenvalues is also called the balanced body, or directional equilibrium, condition because it holds when all gradient directions in the window are equally frequent/probable.
Furthermore, the condition formula_36 happens if and only if the function formula_0 is constant (formula_37) within formula_38.
More generally, the value of formula_39, for "k"=1 or "k"=2, is the formula_14-weighted average, in the neighborhood of "p", of the square of the directional derivative of formula_0 along formula_40. The relative discrepancy between the two eigenvalues of formula_5 is an indicator of the degree of anisotropy of the gradient in the window, namely how strongly it is biased towards a particular direction (and its opposite). This attribute can be quantified by the coherence, defined as
formula_41
if formula_42. This quantity is 1 when the gradient is totally aligned, and 0 when it has no preferred direction. The formula is undefined, even in the limit, when the image is constant in the window (formula_43). Some authors define it as 0 in that case.
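A minimal sketch (not from the source) of extracting the eigenvalues, the dominant orientation and the coherence from the three tensor components at a pixel; it uses the closed-form eigendecomposition of a symmetric 2×2 matrix, and the small constant added to the denominator to avoid division by zero for constant windows is an implementation choice, not part of the definition.

```python
import numpy as np

def eigen_and_coherence(sxx, sxy, syy, eps=1e-12):
    """Eigenvalues (lam1 >= lam2), dominant orientation and coherence of a 2x2 structure tensor."""
    half_trace = 0.5 * (sxx + syy)
    root = np.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    lam1, lam2 = half_trace + root, half_trace - root

    # Angle of the eigenvector e1 belonging to lam1 (direction of maximal gradient alignment)
    theta = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)

    # Coherence: 1 for a fully aligned gradient, 0 when no direction is preferred
    coherence = ((lam1 - lam2) / (lam1 + lam2 + eps)) ** 2
    return lam1, lam2, theta, coherence

print(eigen_and_coherence(4.0, 0.0, 1.0))   # lam1=4, lam2=1, theta=0, coherence=(3/5)^2
```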
Note that the average of the gradient formula_44 inside the window is not a good indicator of anisotropy. Aligned but oppositely oriented gradient vectors would cancel out in this average, whereas in the structure tensor they are properly added together. This is one reason why formula_11 is used in the averaging of the structure tensor to optimize the direction instead of formula_44.
By expanding the effective radius of the window function formula_14 (that is, increasing its variance), one can make the structure tensor more robust in the face of noise, at the cost of diminished spatial resolution. The formal basis for this property is described in more detail below, where it is shown that a multi-scale formulation of the structure tensor, referred to as the multi-scale structure tensor, constitutes a "true multi-scale representation of directional data under variations of the spatial extent of the window function".
Complex version.
The interpretation and implementation of the 2D structure tensor becomes particularly accessible using complex numbers. The structure tensor consists of 3 real numbers
formula_45
where formula_46 , formula_47 and formula_48 in which integrals can be replaced by summations for discrete representation. Using Parseval's identity it is clear that the three real numbers are the second order moments of the power spectrum of formula_0. The following second order complex moment of the power spectrum of formula_0 can then be written as
formula_49
where formula_50, and formula_51 is the direction angle of the most significant eigenvector of the structure tensor formula_52, whereas formula_53 and formula_54 are the most and the least significant eigenvalues. From this, it follows that formula_55 contains both a certainty formula_56 and the optimal direction in double angle representation, since it is a complex number consisting of two real numbers. It follows also that if the gradient is represented as a complex number, and is remapped by squaring (i.e. the argument angles of the complex gradient are doubled), then averaging acts as an optimizer in the mapped domain, since it directly delivers both the optimal direction (in double angle representation) and the associated certainty. The complex number thus represents how much linear structure (linear symmetry) there is in image formula_0, and the complex number is obtained directly by averaging the gradient in its (complex) double angle representation without computing the eigenvalues and the eigenvectors explicitly.
Likewise the following second order complex moment of the power spectrum of formula_0, which happens to be always real because formula_0 is real,
formula_57
can be obtained, with formula_53 and formula_54 being the eigenvalues as before. Notice that this time the magnitude of the complex gradient is squared (which is always real).
However, decomposing the structure tensor in its eigenvectors yields its tensor components as
formula_58
where formula_59 is the identity matrix in 2D because the two eigenvectors are always orthogonal (and sum to unity). The first term in the last expression of the decomposition, formula_60, represents the linear symmetry component of the structure tensor containing all directional information (as a rank-1 matrix), whereas the second term represents the balanced body component of the tensor, which lacks any directional information (containing an identity matrix formula_59). To know how much directional information there is in formula_0 is then the same as checking how large formula_61 is compared to formula_54.
Evidently, formula_62 is the complex equivalent of the first term in the tensor decomposition, whereas formula_63 is the equivalent of the second term. Thus the two scalars, comprising three real numbers,
formula_64
where formula_65 is the (complex) gradient filter, and formula_66 is convolution, constitute a complex representation of the 2D Structure Tensor. As discussed here and elsewhere, formula_14 defines the local window of the image, which is usually a Gaussian (with a certain variance defining the outer scale), and formula_67 is the (inner scale) parameter determining the effective frequency range in which the orientation formula_68 is to be estimated.
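A sketch of this complex representation (not from the source), assuming a simple finite-difference gradient in place of the Gaussian derivative filter formula_65 and a separable Gaussian as the window formula_14; the filter sizes are placeholders.

```python
import numpy as np

def complex_structure_tensor(image, sigma=2.0, radius=5):
    """Return kappa20 and kappa11 for every pixel of a 2D image."""
    Iy, Ix = np.gradient(image.astype(float))
    f = Ix + 1j * Iy                       # gradient written as a complex number

    # Window w: normalized separable Gaussian (the outer scale)
    r = np.arange(-radius, radius + 1)
    g = np.exp(-r**2 / (2.0 * sigma**2))
    g /= g.sum()

    def window_average(A):
        A = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, A)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, A)

    f2 = f * f                                                        # double-angle mapping of the gradient
    kappa20 = window_average(f2.real) + 1j * window_average(f2.imag)  # (lam1 - lam2) exp(i 2 phi)
    kappa11 = window_average(np.abs(f) ** 2)                          # lam1 + lam2 (always real)
    return kappa20, kappa11

# Orientation (in double angle) and certainty follow directly:
# phi = 0.5 * np.angle(kappa20), certainty = np.abs(kappa20)
```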
The elegance of the complex representation stems from the fact that the two components of the structure tensor can be obtained as averages and independently. In turn, this means that formula_62 and formula_69 can be used in a scale space representation to describe the evidence for presence of unique orientation and the evidence for the alternative hypothesis, the presence of multiple balanced orientations, without computing the eigenvectors and eigenvalues. A functional, such as squaring the complex numbers, has to date not been shown to exist for structure tensors with dimensions higher than two. In Bigun 91, it has been put forward with due argument that this is because complex numbers are commutative algebras whereas quaternions, the possible candidates for constructing such a functional, constitute a non-commutative algebra.
The complex representation of the structure tensor is frequently used in fingerprint analysis to obtain direction maps containing certainties, which in turn are used to enhance the fingerprints, to find the locations of the global (cores and deltas) and local (minutiae) singularities, as well as to automatically evaluate the quality of the fingerprints.
The 3D structure tensor.
Definition.
The structure tensor can be defined also for a function formula_0 of three variables "p"=("x","y","z") in an entirely analogous way. Namely, in the continuous version we have formula_6, where
formula_70
where formula_71 are the three partial derivatives of formula_0, and the integral ranges over formula_72.
In the discrete version, formula_73, where
formula_74
and the sum ranges over a finite set of 3D indices, usually formula_75 for some m.
Interpretation.
As in the two-dimensional case, the eigenvalues formula_76 of formula_77, and the corresponding eigenvectors formula_78, summarize the distribution of gradient directions within the neighborhood of "p" defined by the window formula_14. This information can be visualized as an ellipsoid whose semi-axes are equal to the eigenvalues and directed along their corresponding eigenvectors.
In particular, if the ellipsoid is stretched along one axis only, like a cigar (that is, if formula_53 is much larger than both formula_54 and formula_79), it means that the gradient in the window is predominantly aligned with the direction formula_27, so that the isosurfaces of formula_0 tend to be flat and perpendicular to that vector. This situation occurs, for instance, when "p" lies on a thin plate-like feature, or on the smooth boundary between two regions with contrasting values.
If the ellipsoid is flattened in one direction only, like a pancake (that is, if formula_79 is much smaller than both formula_53 and formula_54), it means that the gradient directions are spread out but perpendicular to formula_80; so that the isosurfaces tend to be like tubes parallel to that vector. This situation occurs, for instance, when "p" lies on a thin line-like feature, or on a sharp corner of the boundary between two regions with contrasting values.
Finally, if the ellipsoid is roughly spherical (that is, if formula_81), it means that the gradient directions in the window are more or less evenly distributed, with no marked preference; so that the function formula_0 is mostly isotropic in that neighborhood. This happens, for instance, when the function has spherical symmetry in the neighborhood of "p". In particular, if the ellipsoid degenerates to a point (that is, if the three eigenvalues are zero), it means that formula_0 is constant (has zero gradient) within the window.
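For illustration only, the small sketch below classifies a 3×3 structure tensor according to the eigenvalue patterns described above; the ratio threshold used to decide "much larger" is an arbitrary placeholder.

```python
import numpy as np

def classify_3d_structure(S, ratio=10.0, eps=1e-12):
    """Classify a 3x3 structure tensor by the shape of its eigenvalue ellipsoid."""
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lam[0] >= lam[1] >= lam[2] >= 0
    l1, l2, l3 = lam
    if l1 < eps:
        return "constant"        # all eigenvalues ~ 0: zero gradient in the window
    if l1 > ratio * max(l2, eps):
        return "plate-like"      # "cigar": gradients aligned with e1, flat isosurfaces
    if min(l1, l2) > ratio * max(l3, eps):
        return "line-like"       # "pancake": gradients spread out but perpendicular to e3
    return "isotropic"           # no marked preference of gradient directions

print(classify_3d_structure(np.diag([9.0, 0.1, 0.1])))   # -> "plate-like"
```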
The multi-scale structure tensor.
The structure tensor is an important tool in scale space analysis. The multi-scale structure tensor (or multi-scale second moment matrix) of a function formula_0 is, in contrast to other one-parameter scale-space features, an image descriptor that is defined over "two" scale parameters.
One scale parameter, referred to as "local scale" formula_82, is needed for determining the amount of pre-smoothing when computing the image gradient formula_83. Another scale parameter, referred to as "integration scale" formula_84, is needed for specifying the spatial extent of the window function formula_85 that determines the weights for the region in space over which the components of the outer product of the gradient by itself formula_11 are accumulated.
More precisely, suppose that formula_0 is a real-valued signal defined over formula_86. For any local scale formula_87, let a multi-scale representation formula_88 of this signal be given by formula_89 where formula_90 represents a pre-smoothing kernel. Furthermore, let formula_83 denote the gradient of the scale space representation.
Then, the "multi-scale structure tensor/second-moment matrix" is defined by
formula_91
Conceptually, one may ask if it would be sufficient to use any self-similar families of smoothing functions formula_90 and formula_85. If, however, one were to naively apply, for example, a box filter, then undesirable artifacts could easily occur. If one wants the multi-scale structure tensor to be well-behaved over both increasing local scales formula_82 and increasing integration scales formula_84, then it can be shown that both the smoothing function and the window function "have to" be Gaussian. The conditions that specify this uniqueness are similar to the scale-space axioms that are used for deriving the uniqueness of the Gaussian kernel for a regular Gaussian scale space of image intensities.
There are different ways of handling the two-parameter scale variations in this family of image descriptors. If we keep the local scale parameter formula_82 fixed and apply increasingly broadened versions of the window function by increasing the integration scale parameter formula_84 only, then we obtain a "true formal scale space representation of the directional data computed at the given local scale" formula_82. If we couple the local scale and integration scale by a "relative integration scale" formula_92, such that formula_93 then for any fixed value of formula_94, we obtain a reduced self-similar one-parameter variation, which is frequently used to simplify computational algorithms, for example in corner detection, interest point detection, texture analysis and image matching.
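A sketch (not from the source) of the multi-scale structure tensor with the coupling formula_93, assuming SciPy's Gaussian filter is available and using the scale-space convention that the scale parameter equals the variance (sigma squared) of the Gaussian; the default parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(image, t=1.0, r=4.0):
    """mu(x; t, s) with integration scale s = r * t; scale parameter = Gaussian variance."""
    s = r * t
    # Local scale t: Gaussian pre-smoothing before differentiation
    L = gaussian_filter(image.astype(float), sigma=np.sqrt(t))
    Ly, Lx = np.gradient(L)
    # Integration scale s: Gaussian window applied to the outer-product components
    mu_xx = gaussian_filter(Lx * Lx, sigma=np.sqrt(s))
    mu_xy = gaussian_filter(Lx * Ly, sigma=np.sqrt(s))
    mu_yy = gaussian_filter(Ly * Ly, sigma=np.sqrt(s))
    return mu_xx, mu_xy, mu_yy
```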
By varying the relative integration scale formula_92 in such a self-similar scale variation, we obtain another alternative way of parameterizing the multi-scale nature of directional data obtained by increasing the integration scale.
A conceptually similar construction can be performed for discrete signals, with the convolution integral replaced by a convolution sum and with the continuous Gaussian kernel formula_95 replaced by the discrete Gaussian kernel formula_96:
formula_97
When quantizing the scale parameters formula_82 and formula_84 in an actual implementation, a finite geometric progression formula_98 is usually used, with "i" ranging from 0 to some maximum scale index "m". Thus, the discrete scale levels will bear certain similarities to an image pyramid, although spatial subsampling may not necessarily be used in order to preserve more accurate data for subsequent processing stages.
Applications.
The eigenvalues of the structure tensor play a significant role in many image processing algorithms, for problems like corner detection, interest point detection, and feature tracking. The structure tensor also plays a central role in the Lucas-Kanade optical flow algorithm, and in its extensions to estimate affine shape adaptation, where the magnitude of formula_54 is an indicator of the reliability of the computed result. The tensor has been used for scale space analysis, estimation of local surface orientation from monocular or binocular cues, non-linear fingerprint enhancement, diffusion-based image processing, and several other image processing problems. The structure tensor can also be applied in geology to filter seismic data.
Processing spatio-temporal video data with the structure tensor.
The three-dimensional structure tensor has been used to analyze three-dimensional video data (viewed as a function of "x", "y", and time "t").
If one in this context aims at image descriptors that are "invariant" under Galilean transformations, to make it possible to compare image measurements that have been obtained under variations of a priori unknown image velocities formula_99
formula_100
it is, however, from a computational viewpoint preferable to parameterize the components in the structure tensor/second-moment matrix formula_101 using the notion of "Galilean diagonalization"
formula_102
where formula_103 denotes a Galilean transformation of spacetime and formula_104 a two-dimensional rotation over the spatial domain,
compared to the abovementioned use of eigenvalues of a 3-D structure tensor, which corresponds to an eigenvalue decomposition and a (non-physical) three-dimensional rotation of spacetime
formula_105
To obtain true Galilean invariance, however, also the shape of the spatio-temporal window function needs to be adapted, corresponding to the transfer of affine shape adaptation from spatial to spatio-temporal image data.
In combination with local spatio-temporal histogram descriptors,
these concepts together allow for Galilean invariant recognition of spatio-temporal events. | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "\nS_w(p) =\n\\begin{bmatrix}\n\\int w(r) (I_x(p-r))^2\\,dr & \\int w(r) I_x(p-r)I_y(p-r)\\,dr \\\\[10pt]\n\\int w(r) I_x(p-r)I_y(p-r)\\,dr & \\int w(r) (I_y(p-r))^2\\,dr\n\\end{bmatrix}\n"
},
{
"math_id": 2,
"text": "I_x"
},
{
"math_id": 3,
"text": "I_y"
},
{
"math_id": 4,
"text": "\\mathbb{R}^2"
},
{
"math_id": 5,
"text": "S_w"
},
{
"math_id": 6,
"text": "S_w(p) = \\int w(r) S_0(p-r) \\, dr"
},
{
"math_id": 7,
"text": "S_0"
},
{
"math_id": 8,
"text": "\nS_0(p)=\n\\begin{bmatrix}\n(I_x(p))^2 & I_x(p)I_y(p) \\\\[10pt]\nI_x(p)I_y(p) & (I_y(p))^2\n\\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "\\nabla I = (I_x,I_y)^\\text{T}"
},
{
"math_id": 10,
"text": "(\\cdot)^\\text{T}"
},
{
"math_id": 11,
"text": "(\\nabla I)(\\nabla I)^\\text{T}"
},
{
"math_id": 12,
"text": "\\nabla I \\otimes \\nabla I"
},
{
"math_id": 13,
"text": "S_w(p)"
},
{
"math_id": 14,
"text": "w"
},
{
"math_id": 15,
"text": "I[p]"
},
{
"math_id": 16,
"text": "\nS_w[p] =\n\\begin{bmatrix}\n\\sum_r w[r] (I_x[p-r])^2 & \\sum_r w[r] I_x[p-r]I_y[p-r] \\\\[10pt]\n\\sum_r w[r] I_x[p-r]I_y[p-r] & \\sum_r w[r] (I_y[p-r])^2\n\\end{bmatrix}\n"
},
{
"math_id": 17,
"text": "\\{-m \\ldots +m\\}\\times\\{-m \\ldots +m\\}"
},
{
"math_id": 18,
"text": "I_x[p],I_y[p]"
},
{
"math_id": 19,
"text": "S_w[p]=\\sum_r w[r] S_0[p-r]"
},
{
"math_id": 20,
"text": "\nS_0[p] =\n\\begin{bmatrix}\n(I_x[p])^2 & I_x[p]I_y[p] \\\\[10pt]\nI_x[p]I_y[p] & (I_y[p])^2\n\\end{bmatrix}\n"
},
{
"math_id": 21,
"text": "\\lambda_1,\\lambda_2"
},
{
"math_id": 22,
"text": "\\lambda_1 \\geq \\lambda_2\\geq 0"
},
{
"math_id": 23,
"text": "e_1,e_2"
},
{
"math_id": 24,
"text": "\\nabla I = (I_x,I_y)"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": "\\lambda_1 > \\lambda_2"
},
{
"math_id": 27,
"text": "e_1"
},
{
"math_id": 28,
"text": "-e_1"
},
{
"math_id": 29,
"text": "\\lambda_1 > 0, \\lambda_2 = 0"
},
{
"math_id": 30,
"text": "e_2"
},
{
"math_id": 31,
"text": "g "
},
{
"math_id": 32,
"text": "I(x,y)=g(d^\\text{T} p)"
},
{
"math_id": 33,
"text": "d=(d_x,d_y)^T "
},
{
"math_id": 34,
"text": "p=(x,y)^T "
},
{
"math_id": 35,
"text": "\\lambda_1 = \\lambda_2"
},
{
"math_id": 36,
"text": "\\lambda_1 = \\lambda_2 = 0"
},
{
"math_id": 37,
"text": "\\nabla I = (0,0)"
},
{
"math_id": 38,
"text": "W"
},
{
"math_id": 39,
"text": "\\lambda_k "
},
{
"math_id": 40,
"text": "e_k"
},
{
"math_id": 41,
"text": "c_w=\\left(\\frac{\\lambda_1-\\lambda_2}{\\lambda_1+\\lambda_2}\\right)^2"
},
{
"math_id": 42,
"text": "\\lambda_2>0"
},
{
"math_id": 43,
"text": "\\lambda_1=\\lambda_2=0"
},
{
"math_id": 44,
"text": "\\nabla I"
},
{
"math_id": 45,
"text": "\nS_w(p) =\n\\begin{bmatrix}\n\\mu_{20} & \\mu_{11} \\\\[10pt]\n\\mu_{11} & \\mu_{02}\n\\end{bmatrix}\n"
},
{
"math_id": 46,
"text": "\n\\mu_{20} =\\int (w(r) (I_x(p-r))^2\\,dr\n"
},
{
"math_id": 47,
"text": "\n\\mu_{02} =\\int (w(r) (I_y(p-r))^2\\,dr\n"
},
{
"math_id": 48,
"text": "\n\\mu_{11} =\\int w(r) I_x(p-r)I_y(p-r)\\,dr\n"
},
{
"math_id": 49,
"text": "\n\\kappa_{20} = \\mu_{20}-\\mu_{02}+i2\\mu_{11}=\\int w(r) (I_x(p-r)+i I_y(p-r))^2 \\, dr =(\\lambda_1-\\lambda_2)\\exp(i2\\phi)\n"
},
{
"math_id": 50,
"text": "i=\\sqrt{-1}"
},
{
"math_id": 51,
"text": "\\phi"
},
{
"math_id": 52,
"text": "\\phi=\\angle{e_1}"
},
{
"math_id": 53,
"text": "\\lambda_1"
},
{
"math_id": 54,
"text": "\\lambda_2"
},
{
"math_id": 55,
"text": "\\kappa_{20} "
},
{
"math_id": 56,
"text": "|\\kappa_{20}|=\\lambda_1-\\lambda_2"
},
{
"math_id": 57,
"text": "\n\\kappa_{11} = \\mu_{20}+\\mu_{02} = \\int w(r) |I_x(p-r)+i I_y(p-r)|^2 \\, dr = \\lambda_1+\\lambda_2\n"
},
{
"math_id": 58,
"text": "\nS_w(p) =\n\\lambda_1 e_1e_1^\\text{T}+\n\\lambda_2 e_2e_2^\\text{T} =(\\lambda_1 -\\lambda_2)e_1e_1^\\text{T}+\n\\lambda_2( e_1e_1^\\text{T}+e_2e_2^\\text{T})= (\\lambda_1 -\\lambda_2)e_1e_1^\\text{T}+\n\\lambda_2 E\n"
},
{
"math_id": 59,
"text": "E"
},
{
"math_id": 60,
"text": "(\\lambda_1 -\\lambda_2)e_1e_1^\\text{T}"
},
{
"math_id": 61,
"text": "\\lambda_1-\\lambda_2 "
},
{
"math_id": 62,
"text": "\\kappa_{20}"
},
{
"math_id": 63,
"text": "\\tfrac 1 2 (|\\kappa_{20}|-\\kappa_{11}) = \\lambda_2"
},
{
"math_id": 64,
"text": "\n\\begin{align}\n\\kappa_{20} &=& (\\lambda_1-\\lambda_2)\\exp(i2\\phi) &= w*(h*I)^2 \\\\\n\\kappa_{11} &=& \\lambda_1+\\lambda_2&=w*|h*I|^2 \\\\\n\\end{align}\n"
},
{
"math_id": 65,
"text": "h(x,y)=(x+iy)\\exp(-(x^2+y^2)/(2\\sigma^2)) "
},
{
"math_id": 66,
"text": "*"
},
{
"math_id": 67,
"text": "\\sigma "
},
{
"math_id": 68,
"text": "2\\phi"
},
{
"math_id": 69,
"text": "\\kappa_{11}"
},
{
"math_id": 70,
"text": "\nS_0(p) =\n\\begin{bmatrix}\n(I_x(p))^2 & I_x(p)I_y(p) & I_x(p)I_z(p) \\\\[10pt]\nI_x(p)I_y(p) & (I_y(p))^2 & I_y(p)I_z(p) \\\\[10pt]\nI_x(p)I_z(p) & I_y(p)I_z(p) & (I_z(p))^2\n\\end{bmatrix}\n"
},
{
"math_id": 71,
"text": "I_x,I_y,I_z"
},
{
"math_id": 72,
"text": "\\mathbb{R}^3"
},
{
"math_id": 73,
"text": "S_w[p] = \\sum_r w[r] S_0[p-r]"
},
{
"math_id": 74,
"text": "\nS_0[p] =\n\\begin{bmatrix}\n(I_x[p])^2 & I_x[p]I_y[p] & I_x[p]I_z[p] \\\\[10pt]\nI_x[p]I_y[p] & (I_y[p])^2 & I_y[p]I_z[p]\\\\[10pt]\nI_x[p]I_z[p] & I_y[p]I_z[p] & (I_z[p])^2\n\\end{bmatrix}\n"
},
{
"math_id": 75,
"text": "\\{-m \\ldots +m\\} \\times \\{-m \\ldots +m\\} \\times \\{-m \\ldots +m\\}"
},
{
"math_id": 76,
"text": "\\lambda_1,\\lambda_2,\\lambda_3"
},
{
"math_id": 77,
"text": "S_w[p]"
},
{
"math_id": 78,
"text": "\\hat{e}_1,\\hat{e}_2,\\hat{e}_3"
},
{
"math_id": 79,
"text": "\\lambda_3"
},
{
"math_id": 80,
"text": "e_3"
},
{
"math_id": 81,
"text": "\\lambda_1\\approx\\lambda_2\\approx\\lambda_3"
},
{
"math_id": 82,
"text": "t"
},
{
"math_id": 83,
"text": "(\\nabla I)(x; t)"
},
{
"math_id": 84,
"text": "s"
},
{
"math_id": 85,
"text": "w(\\xi; s)"
},
{
"math_id": 86,
"text": "\\mathbb{R}^k"
},
{
"math_id": 87,
"text": "t > 0"
},
{
"math_id": 88,
"text": "I(x; t)"
},
{
"math_id": 89,
"text": "I(x; t) = h(x; t)*I(x)"
},
{
"math_id": 90,
"text": "h(x; t)"
},
{
"math_id": 91,
"text": "\n\\mu(x; t, s) =\n\\int_{\\xi \\in \\mathbb{R}^k}\n(\\nabla I)(x-\\xi; t) \\, (\\nabla I)^\\text{T}(x-\\xi; t) \\,\nw(\\xi; s) \\, d\\xi\n"
},
{
"math_id": 92,
"text": "r \\geq 1"
},
{
"math_id": 93,
"text": "s = r t"
},
{
"math_id": 94,
"text": "r"
},
{
"math_id": 95,
"text": " g(x; t)"
},
{
"math_id": 96,
"text": "T(n; t)"
},
{
"math_id": 97,
"text": "\n\\mu(x; t, s) =\n\\sum_{n \\in \\mathbb{Z}^k}\n(\\nabla I)(x-n; t) \\, (\\nabla I)^\\text{T}(x-n; t) \\,\nw(n; s)\n"
},
{
"math_id": 98,
"text": "\\alpha^i"
},
{
"math_id": 99,
"text": "v = (v_x, v_y)^\\text{T}"
},
{
"math_id": 100,
"text": " \\begin{bmatrix} x' \\\\ y' \\\\ t' \\end{bmatrix} = G \\begin{bmatrix} x \\\\ y \\\\ t \\end{bmatrix} = \\begin{bmatrix} x - v_x \\, t \\\\ y - v_y \\, t \\\\ t \\end{bmatrix} ,"
},
{
"math_id": 101,
"text": "S"
},
{
"math_id": 102,
"text": " S' = R_\\text{space}^{-\\text{T}} \\, G^{-\\text{T}} \\, S \\, G^{-1} \\, R_\\text{space}^{-1} = \\begin{bmatrix} \\nu_1 & \\, & \\, \\\\ \\, & \\nu_2 & \\, \\\\ \\, & \\, & \\nu_3 \\end{bmatrix} "
},
{
"math_id": 103,
"text": "G"
},
{
"math_id": 104,
"text": "R_\\text{space}"
},
{
"math_id": 105,
"text": " S'' = R_\\text{spacetime}^{-\\text{T}} \\, S \\, R_\\text{spacetime}^{-1} = \\begin{bmatrix} \\lambda_1 & & \\\\ & \\lambda_2 & \\\\ & & \\lambda_3 \\end{bmatrix} ."
}
]
| https://en.wikipedia.org/wiki?curid=7096466 |
70967383 | Mimetic interpolation | In mathematics, mimetic interpolation is a method for interpolating differential forms. In contrast to other interpolation methods, which estimate a field at a location given its values on neighboring points, mimetic interpolation estimates the field's formula_0-form given the field's "projection" on neighboring grid elements. The grid elements can be grid points as well as cell edges or faces, depending on formula_1.
Mimetic interpolation is particularly relevant in the context of vector and pseudo-vector fields as the method conserves line integrals and fluxes, respectively.
Interpolation of integrated forms.
Let formula_2 be a differential formula_3-form, then mimetic interpolation is the linear combination
formula_4
where formula_5 is the interpolation of formula_6, and the coefficients formula_7 represent the strengths of the field on grid element formula_8. Depending on formula_3, formula_8 can be a node (formula_9), a cell edge (formula_10), a cell face (formula_11) or a cell volume (formula_12). In the above, the formula_13 are the interpolating formula_3-forms, which are centered on formula_8 and decay away from formula_8 in a way similar to the tent functions. Examples of formula_13 are the Whitney forms for simplicial meshes in formula_14 dimensions.
An important advantage of mimetic interpolation over other interpolation methods is that the field strengths formula_7 are scalars and thus coordinate system invariant.
Interpolating forms.
In many cases it is desirable for the interpolating forms formula_13 to pick the field's strength on particular grid elements without interference from other formula_15. This allows one to assign field values to specific grid elements, which can then be interpolated in-between. A common case is linear interpolation for which the interpolating functions (formula_16-forms) are zero on all nodes except on one, where the interpolating function is one. A similar construct can be applied to mimetic interpolation
formula_17
That is, the integral of formula_13 is zero on all cell elements, except on formula_8 where the integral returns one. For formula_18 this amounts to formula_19 where formula_20 is a grid point. For formula_21 the integral is over edges and hence the integral formula_22 is zero except on edge formula_8. For formula_11 the integral is over faces and for formula_23 over cell volumes.
Conservation properties.
Mimetic interpolation respects the properties of differential forms. In particular, Stokes' theorem
formula_24
is satisfied with formula_25 denoting the interpolation of formula_26. Here, formula_27 is the exterior derivative, formula_28 is any manifold of dimensionality formula_3 and formula_29 is the boundary of formula_28. This confers on mimetic interpolation conservation properties that are not generally shared by other interpolation methods.
Commutativity between the interpolation operator and the exterior derivative.
To be mimetic, the interpolation must satisfy
formula_30
where formula_31 is the interpolation operator of a formula_3-form, i.e. formula_32. In other words, the interpolation operators and the exterior derivatives commute. Note that different interpolation methods are required for each type of form (formula_33), formula_34. The above equation is all that is needed to satisfy Stokes' theorem for the interpolated form
formula_35
Other calculus properties derive from the commutativity between interpolation and formula_27. For instance, formula_36,
formula_37
The last step gives zero since formula_38 when integrated over the boundary formula_29.
Projection.
The interpolated formula_5 is often projected onto a target, formula_3-dimensional, oriented manifold formula_39,
formula_40
For formula_18 the target is a point, for formula_10 it is a line, for formula_11 an area, etc.
Applications.
Many physical fields can be represented as formula_0-forms. When discretizing fields in numerical modeling, each type of formula_0-form often acquires its own staggering in accordance with numerical stability requirements, e.g. the need to prevent the checkerboard instability. This led to the development of the exterior finite element and discrete exterior calculus methods, both of which rely on a field discretization that is compatible with the field type.
The table below lists some examples of physical fields, their type, their corresponding form and interpolation method, as well as software that can be leveraged to interpolate, remap or regrid the field:
Example.
Consider quadrilateral cells in two dimensions with their nodes indexed formula_41 in the counterclockwise direction. Further, let formula_42 and formula_43 be the parametric coordinates of each cell (formula_44). Then
formula_45
are the bilinear interpolating forms of formula_46 in the unit square (formula_47). The corresponding formula_48 edge interpolating forms are
formula_49
where we assumed the edges to be indexed in the counterclockwise direction, with the edges pointing to the east and north. At lowest order, there is only one interpolating form for formula_50,
formula_51
where formula_52 is the wedge product.
We can verify that the above interpolating forms satisfy the mimetic conditions formula_53 and formula_54. Take for instance,
formula_55
where formula_56, formula_57, formula_58 and formula_59 are the field values evaluated at the corners of the quadrilateral in the unit square space. Likewise, we have
formula_60
with formula_61, formula_62, being the 1-form formula_63 projected onto edge formula_64. Note that formula_65 is also known as the pullback. If formula_66 is the map that parametrizes edge formula_64, formula_67, formula_68, then formula_69 where the integration is performed in formula_70 space. Consider for instance edge formula_71, then formula_72 with formula_73 and formula_74 denoting the start and end points. For a general 1-form formula_75, one gets formula_76.
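As a numerical check (not part of the original text), the following sketch integrates each of the four edge forms above along each of the four counterclockwise, east/north-oriented edges of the unit square and recovers the identity matrix, illustrating the Kronecker-delta property of the interpolating forms stated in the Interpolating forms section.

```python
import numpy as np

# Edge interpolating 1-forms: each returns the coefficients (a, b) of a*dxi1 + b*dxi2
phi_edge = [
    lambda x1, x2: (1.0 - x2, 0.0),   # phi_0^1 = (1 - xi2) dxi1
    lambda x1, x2: (0.0, x1),         # phi_1^1 = xi1 dxi2
    lambda x1, x2: (x2, 0.0),         # phi_2^1 = xi2 dxi1
    lambda x1, x2: (0.0, 1.0 - x1),   # phi_3^1 = (1 - xi1) dxi2
]

# Counterclockwise edges of the unit square, oriented to point east/north
edges = [((0.0, 0.0), (1.0, 0.0)),    # edge 0: bottom
         ((1.0, 0.0), (1.0, 1.0)),    # edge 1: right
         ((0.0, 1.0), (1.0, 1.0)),    # edge 2: top
         ((0.0, 0.0), (0.0, 1.0))]    # edge 3: left

def edge_integral(form, edge, n=201):
    """Integrate the pullback of the 1-form along a straight edge (trapezoid rule)."""
    (x1a, x2a), (x1b, x2b) = edge
    t = np.linspace(0.0, 1.0, n)
    a, b = np.vectorize(form)(x1a + t * (x1b - x1a), x2a + t * (x2b - x2a))
    integrand = a * (x1b - x1a) + b * (x2b - x2a)   # a dxi1/dt + b dxi2/dt
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) / (n - 1))

delta = np.array([[edge_integral(phi, e) for e in edges] for phi in phi_edge])
print(np.round(delta, 6))   # ~ the 4x4 identity matrix: integral of phi_i over edge j = delta_ij
```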
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k\n"
},
{
"math_id": 1,
"text": "k = 0, 1, 2, \\cdots"
},
{
"math_id": 2,
"text": "\\omega^k\n"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\bar{\\omega}^k = \\sum_i \\left(\\int_{M_i} \\omega^k \\right) \\phi_i^k\n"
},
{
"math_id": 5,
"text": "\\bar{\\omega}^k\n"
},
{
"math_id": 6,
"text": "\\omega^k"
},
{
"math_id": 7,
"text": "\\int_{M_i} \\omega^k\n"
},
{
"math_id": 8,
"text": "M_i"
},
{
"math_id": 9,
"text": "k = 0\n"
},
{
"math_id": 10,
"text": "k=1"
},
{
"math_id": 11,
"text": "k=2"
},
{
"math_id": 12,
"text": "k=3"
},
{
"math_id": 13,
"text": "\\phi_i^k"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "\\phi_j^k"
},
{
"math_id": 16,
"text": "0"
},
{
"math_id": 17,
"text": "\\int_{M_j} \\phi_i^k = \\delta_{ij}."
},
{
"math_id": 18,
"text": "k=0"
},
{
"math_id": 19,
"text": "\\phi_i^0(x_j) = \\delta_{ij}"
},
{
"math_id": 20,
"text": "x_j"
},
{
"math_id": 21,
"text": "k = 1"
},
{
"math_id": 22,
"text": "\\int_{M_j} \\phi_i^1"
},
{
"math_id": 23,
"text": "k = 3"
},
{
"math_id": 24,
"text": "\\int_M \\overline{d \\omega^k} = \\int_{\\partial M} \\bar{\\omega}^k\n"
},
{
"math_id": 25,
"text": "\\overline{d \\omega^k}\n"
},
{
"math_id": 26,
"text": "d \\omega^k\n"
},
{
"math_id": 27,
"text": "d"
},
{
"math_id": 28,
"text": "M"
},
{
"math_id": 29,
"text": "\\partial M"
},
{
"math_id": 30,
"text": "I_{k+1} ( d \\omega^k ) = d (I_k \\omega^k)"
},
{
"math_id": 31,
"text": "I_k"
},
{
"math_id": 32,
"text": "\\bar{\\omega}^k = I_k \\omega^k"
},
{
"math_id": 33,
"text": "k = 0, 1, \\cdots"
},
{
"math_id": 34,
"text": "I_{k+1} \\neq I_k"
},
{
"math_id": 35,
"text": "\\int_M \\overline{d \\omega^k } = \\int_M I_{k+1} ( d \\omega^k ) = \\int_M d (I_k \\omega^k) = \\int_{\\partial M} I_k \\omega^k = \\int_{\\partial M} \\bar{\\omega}^k. \n"
},
{
"math_id": 36,
"text": "d^2 \\omega = 0"
},
{
"math_id": 37,
"text": "\\int_M I_{k+2} (d^2 \\omega^k) = \\int_M d (I_{k+1} d \\omega^k) = \\int_{\\partial M} I_{k+1} d \\omega^k = \\int_{\\partial M} d (I_k \\omega^k) = 0. \n"
},
{
"math_id": 38,
"text": "\\int_{\\partial M} d(\\cdot) = 0\n"
},
{
"math_id": 39,
"text": "T"
},
{
"math_id": 40,
"text": "\\int_T \\bar{\\omega}^k = \\sum_i \\left( \\int_{M_i} \\omega^k \\right) \\left(\\int_T \\phi_i^k \\right)."
},
{
"math_id": 41,
"text": "i = 0, 1, 2, 3 \n"
},
{
"math_id": 42,
"text": "\\xi_1 \n"
},
{
"math_id": 43,
"text": "\\xi_2 \n"
},
{
"math_id": 44,
"text": "0 \\leq \\xi_1, \\xi_2 \\leq 1"
},
{
"math_id": 45,
"text": " \\begin{align}\n\\phi_0^0 & = (1-\\xi_1) (1-\\xi_2) \\\\[0.6ex]\n\\phi_1^0 & = \\xi_1 (1-\\xi_2) \\\\[0.6ex]\n\\phi_2^0 & = \\xi_1 \\xi_2 \\\\[0.6ex]\n\\phi_3^0 & = (1-\\xi_1) \\xi_2. \n\\end{align} "
},
{
"math_id": 46,
"text": "I_0 \n"
},
{
"math_id": 47,
"text": "\\xi_1, \\xi_2"
},
{
"math_id": 48,
"text": "I_1 \n"
},
{
"math_id": 49,
"text": " \\begin{align}\n\\phi_0^1 & = (1-\\xi_2) d\\xi_1 \\\\[0.6ex] \n\\phi_1^1 & = \\xi_1 d\\xi_2 \\\\[0.6ex] \n\\phi_2^1 & = \\xi_2 d\\xi_1 \\\\[0.6ex]\n\\phi_3^1 & = (1-\\xi_1) d\\xi_2,\n\\end{align} "
},
{
"math_id": 50,
"text": "I_2 \n"
},
{
"math_id": 51,
"text": "\\phi_0^2 = d\\xi_1 \\wedge d\\xi_2, \n"
},
{
"math_id": 52,
"text": "\\wedge \n"
},
{
"math_id": 53,
"text": "I_1 d\\omega^0 = d(I_0 \\omega^0)"
},
{
"math_id": 54,
"text": "d(I_1 \\omega^1) = I_2 d \\omega^1"
},
{
"math_id": 55,
"text": " \\begin{align} \nd(I_0 \\omega^0) & = d \\left(f_0 (1-\\xi_1) (1-\\xi_2)+f_1 \\xi_1 (1-\\xi_2) + f_2 \\xi_1 \\xi_2 + f_3 (1-\\xi_1) \\xi_2 \\right) \\\\[0.6ex]\n & = (f_1-f_0)(1-\\xi_2)d\\xi_1 + (f_2-f_1) \\xi_1 d\\xi_2 + (f_2-f_3) \\xi_2 d\\xi_1 + (f_3-f_0) (1-\\xi_1)d\\xi_2 \\\\[0.6ex]\n & = (f_1-f_0)\\phi_0^1 + (f_2-f_1)\\phi_1^1 + (f_2-f_3)\\phi_2^1 + (f_3-f_0)\\phi_3^1 \\\\[0.6ex]\n & = I_1 d\\omega^0\n\\end{align} "
},
{
"math_id": 56,
"text": "f_0 = \\omega^0(0,0)"
},
{
"math_id": 57,
"text": "f_1 = \\omega^0(1,0)"
},
{
"math_id": 58,
"text": "f_2 = \\omega^0(1,1)"
},
{
"math_id": 59,
"text": "f_3 = \\omega^0(0,1)"
},
{
"math_id": 60,
"text": " \\begin{align}\nd(I_1 \\omega^1) & = d( g_0(1-\\xi_2)d\\xi_1 + g_1 \\xi_1 d\\xi_2 + g_2 \\xi_2 d\\xi_1 + g_3 (1-\\xi_1)d\\xi_2) \\\\[0.6ex]\n & = (g_0 + g_1 - g_2 - g_3) d\\xi_1 \\wedge d\\xi_2 \\\\[0.6ex]\n & = I_2 d \\omega^1 \n\\end{align}"
},
{
"math_id": 61,
"text": "g_i = \\int_{M_i} \\omega^1 "
},
{
"math_id": 62,
"text": "i \\in \\{0, 1, 2, 3\\} "
},
{
"math_id": 63,
"text": "\\omega^1"
},
{
"math_id": 64,
"text": "i "
},
{
"math_id": 65,
"text": "g_i "
},
{
"math_id": 66,
"text": "F: \\mathbb{R} \\rightarrow \\mathbb{R}^2 "
},
{
"math_id": 67,
"text": "x = F_i(t) "
},
{
"math_id": 68,
"text": "0 \\leq t \\leq 1"
},
{
"math_id": 69,
"text": "g_i = \\int {F_i}^* \\omega^1 "
},
{
"math_id": 70,
"text": "t "
},
{
"math_id": 71,
"text": "i=2 "
},
{
"math_id": 72,
"text": "F_2(t) = x_3 + t(x_2 - x_3) "
},
{
"math_id": 73,
"text": "x_2 "
},
{
"math_id": 74,
"text": "x_3 "
},
{
"math_id": 75,
"text": "\\omega^1 = a(x) d\\xi_1 + b(x) d \\xi_2 "
},
{
"math_id": 76,
"text": "g_2 = \\int_0^1 \\left( a(t) \\frac{\\partial \\xi_1}{\\partial t} + b(t) \\frac{\\partial \\xi_2}{\\partial t} \\right) dt "
}
]
| https://en.wikipedia.org/wiki?curid=70967383 |
7096967 | Ground source heat pump | System to transfer heat to/from the ground
A ground source heat pump (also geothermal heat pump) is a heating/cooling system for buildings that uses a type of heat pump to transfer heat to or from the ground, taking advantage of the relative constancy of temperatures of the earth through the seasons. Ground-source heat pumps (GSHPs) – or geothermal heat pumps (GHP), as they are commonly termed in North America – are among the most energy-efficient technologies for providing HVAC and water heating, using far less energy than can be achieved by burning a fuel in a boiler/furnace or by use of resistive electric heaters.
Efficiency is given as a coefficient of performance (CoP) which is typically in the range 3 – 6, meaning that the devices provide 3 – 6 units of heat for each unit of electricity used. Setup costs are higher than for other heating systems, due to the requirement to install ground loops over large areas or to drill bore holes, and for this reason, ground source is often suitable when new blocks of flats are built. Otherwise air-source heat pumps are often used instead.
Thermal properties of the ground.
Ground-source heat pumps take advantage of the difference between the ambient temperature and the temperature at various depths in the ground.
The thermal properties of the ground near the surface can be described as follows:
The "penetration depth" is defined as the depth at which the temperature variation is less than 0.01 of the variation at the surface, and this depends on the type of soil:
History.
The heat pump was described by Lord Kelvin in 1853 and developed by Peter Ritter von Rittinger in 1855. Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912.
After experimenting with a freezer, Robert C. Webber built the first direct exchange ground source heat pump in the late 1940s; sources disagree, however, as to the exact timeline of his invention. The first successful commercial project was installed in the Commonwealth Building (Portland, Oregon) in 1948, and has been designated a National Historic Mechanical Engineering Landmark by ASME. Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948.
As a result of the 1973 oil crisis, ground source heat pumps became popular in Sweden and have been growing slowly in worldwide acceptance since then. Open loop systems dominated the market until the development of polybutylene pipe in 1979 made closed loop systems economically viable.
As of 2004, there were over a million units installed worldwide, providing 12 GW of thermal capacity with a growth rate of 10% per year. Each year, about 80,000 units are installed in the US (as of 2011) and 27,000 in Sweden (as of 2004). In Finland, a geothermal heat pump was the most common heating system choice for new detached houses between 2006 and 2011, with a market share exceeding 40%.
Arrangement.
Internal arrangement.
A heat pump is the central unit for the building's heating and cooling. It usually comes in two main variants:
"Liquid-to-water" heat pumps (also called "water-to-water") are hydronic systems that carry heating or cooling through the building through pipes to conventional radiators, underfloor heating, baseboard radiators and hot water tanks. These heat pumps are also preferred for pool heating. Heat pumps typically only heat water to about efficiently, whereas boilers typically operate at . The size of radiators designed for the higher temperatures achieved by boilers may be too small for use with heat pumps, requiring replacement with larger radiators when retrofitting a home from boiler to heat pump. When used for cooling, the temperature of the circulating water must normally be kept above the dew point to ensure that atmospheric humidity does not condense on the radiator.
"Liquid-to-air" heat pumps (also called "water-to-air") output forced air, and are most commonly used to replace legacy forced air furnaces and central air conditioning systems. There are variations that allow for split systems, high-velocity systems, and ductless systems. Heat pumps cannot achieve as high a fluid temperature as a conventional furnace, so they require a higher volume flow rate of air to compensate. When retrofitting a residence, the existing ductwork may have to be enlarged to reduce the noise from the higher air flow.
Ground heat exchanger.
Ground source heat pumps employ a ground heat exchanger in contact with the ground or groundwater to extract or dissipate heat. Incorrect design can result in the system freezing after a number of years or in very inefficient system performance; thus accurate system design is critical to a successful system.
Pipework for the ground loop is typically made of high-density polyethylene pipe and contains a mixture of water and anti-freeze (propylene glycol, denatured alcohol or methanol). Monopropylene glycol has the least damaging potential when it might leak into the ground, and is, therefore, the only allowed anti-freeze in ground sources in an increasing number of European countries.
A horizontal closed loop field is composed of pipes that are arrayed in a plane in the ground. A long trench, deeper than the frost line, is dug and U-shaped or slinky coils are spread out inside the same trench. Shallow horizontal heat exchangers experience seasonal temperature cycles due to solar gains and transmission losses to ambient air at ground level. These temperature cycles lag behind the seasons because of thermal inertia, so the heat exchanger will harvest heat deposited by the sun several months earlier, while being weighed down in late winter and spring, due to accumulated winter cold. Systems in wet ground or in water are generally more efficient than drier ground loops since water conducts and stores heat better than solids in sand or soil. If the ground is naturally dry, soaker hoses may be buried with the ground loop to keep it wet.
A vertical system consists of a number of deep boreholes fitted with U-shaped pipes through which a heat-carrying fluid that absorbs (or discharges) heat from (or to) the ground is circulated. Bore holes are spaced at least 5–6 m apart and the depth depends on ground and building characteristics. Alternatively, pipes may be integrated with the foundation piles used to support the building. Vertical systems rely on migration of heat from surrounding geology, unless recharged during the summer and at other times when surplus heat is available. Vertical systems are typically used where there is insufficient available land for a horizontal system.
Pipe pairs in the hole are joined with a U-shaped cross connector at the bottom of the hole, or comprise two small-diameter high-density polyethylene (HDPE) tubes thermally fused to form a U-shaped bend at the bottom. The space between the wall of the borehole and the U-shaped tubes is usually grouted completely with grouting material or, in some cases, partially filled with groundwater. For illustration, a detached house needing 10 kW (3 ton) of heating capacity might need three boreholes.
As an alternative to trenching, loops may be laid by mini horizontal directional drilling (mini-HDD). This technique can lay piping under yards, driveways, gardens or other structures without disturbing them, with a cost between those of trenching and vertical drilling. This system also differs from horizontal & vertical drilling as the loops are installed from one central chamber, further reducing the ground space needed. Radial drilling is often installed retroactively (after the property has been built) due to the small nature of the equipment used and the ability to bore beneath existing constructions.
In an open-loop system (also called a groundwater heat pump), the secondary loop pumps natural water from a well or body of water into a heat exchanger inside the heat pump. Since the water chemistry is not controlled, the appliance may need to be protected from corrosion by using different metals in the heat exchanger and pump. Limescale may foul the system over time and require periodic acid cleaning. This is much more of a problem with cooling systems than heating systems. A standing column well system is a specialized type of open-loop system where water is drawn from the bottom of a deep rock well, passed through a heat pump, and returned to the top of the well. A growing number of jurisdictions have outlawed open-loop systems that drain to the surface because these may drain aquifers or contaminate wells. This forces the use of more environmentally sound injection wells or a closed-loop system.
A closed pond loop consists of coils of pipe similar to a slinky loop attached to a frame and located at the bottom of an appropriately sized pond or water source. Artificial ponds are used as heat storage (up to 90% efficient) in some central solar heating plants, which later extract the heat (similar to ground storage) via a large heat pump to supply district heating.
The direct exchange geothermal heat pump (DX) is the oldest type of geothermal heat pump technology, in which the refrigerant itself is passed through the ground loop. Developed during the 1980s, this approach faced issues with the refrigerant and oil management system, especially after the ban of CFC refrigerants in 1989, and DX systems are now infrequently used.
Installation.
Because of the technical knowledge and equipment needed to design and size the system properly (and install the piping if heat fusion is required), a GSHP system installation requires a professional's services. Several installers have published real-time views of system performance in an online community of recent residential installations. The International Ground Source Heat Pump Association (IGSHPA), Geothermal Exchange Organization (GEO), Canadian GeoExchange Coalition and Ground Source Heat Pump Association maintain listings of qualified installers in the US, Canada and the UK. Furthermore, detailed analysis of soil thermal conductivity for horizontal systems and formation thermal conductivity for vertical systems will generally result in more accurately designed systems with a higher efficiency.
Thermal performance.
Cooling performance is typically expressed in units of BTU/hr/watt as the energy efficiency ratio (EER), while heating performance is typically reduced to dimensionless units as the coefficient of performance (COP). The conversion factor is 3.41 BTU/hr/watt. Since a heat pump moves three to five times more heat energy than the electric energy it consumes, the total energy output is much greater than the electrical input. This results in net thermal efficiencies greater than 300% as compared to radiant electric heat being 100% efficient. Traditional combustion furnaces and electric heaters can never exceed 100% efficiency. Ground source heat pumps can reduce energy consumption – and corresponding air pollution emissions – up to 72% compared to electric resistance heating with standard air-conditioning equipment.
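For reference, a trivial sketch (not from the source) of the EER/COP conversion using the 3.41 BTU/hr/watt factor quoted above; the sample value is the ISO 13256-1 closed-loop cooling minimum mentioned below.

```python
BTU_PER_HR_PER_WATT = 3.41   # conversion factor quoted above

def eer_to_cop(eer):
    """Convert a cooling EER (BTU/hr per watt) to a dimensionless COP."""
    return eer / BTU_PER_HR_PER_WATT

def cop_to_eer(cop):
    return cop * BTU_PER_HR_PER_WATT

print(round(eer_to_cop(14.1), 2))   # ISO 13256-1 closed-loop cooling minimum -> COP of about 4.1
```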
Efficient compressors, variable speed compressors and larger heat exchangers all contribute to heat pump efficiency. Residential ground source heat pumps on the market today have standard COPs ranging from 2.4 to 5.0 and EERs ranging from 10.6 to 30. To qualify for an Energy Star label, heat pumps must meet certain minimum COP and EER ratings which depend on the ground heat exchanger type. For closed-loop systems, the ISO 13256-1 heating COP must be 3.3 or greater and the cooling EER must be 14.1 or greater.
Standards ARI 210 and 240 define Seasonal Energy Efficiency Ratio (SEER) and Heating Seasonal Performance Factors (HSPF) to account for the impact of seasonal variations on air source heat pumps. These numbers are normally not applicable and should not be compared to ground source heat pump ratings. However, Natural Resources Canada has adapted this approach to calculate typical seasonally adjusted HSPFs for ground-source heat pumps in Canada. The NRC HSPFs ranged from 8.7 to 12.8 BTU/hr/watt (2.6 to 3.8 in nondimensional factors, or 255% to 375% seasonal average electricity utilization efficiency) for the most populated regions of Canada.
For the sake of comparing heat pump appliances to each other, independently from other system components, a few standard test conditions have been established by the American Refrigerant Institute (ARI) and more recently by the International Organization for Standardization. Standard ARI 330 ratings were intended for closed-loop ground-source heat pumps, and assume fixed secondary loop water temperatures for air conditioning and for heating. These temperatures are typical of installations in the northern US. Standard ARI 325 ratings were intended for open-loop ground-source heat pumps, and include two sets of ratings for two different groundwater temperatures. ARI 325 budgets more electricity for water pumping than ARI 330. Neither of these standards attempts to account for seasonal variations. Standard ARI 870 ratings are intended for direct exchange ground-source heat pumps. ASHRAE transitioned to ISO 13256–1 in 2001, which replaces ARI 320, 325 and 330. The new ISO standard produces slightly higher ratings because it no longer budgets any electricity for water pumps.
Soil without artificial heat addition or subtraction and at depths of several metres or more remains at a relatively constant temperature year round. This temperature equates roughly to the average annual air temperature of the chosen location, at typical installation depths in the northern US. Because this temperature remains more constant than the air temperature throughout the seasons, ground source heat pumps perform with far greater efficiency during extreme air temperatures than air conditioners and air-source heat pumps.
Analysis of heat transfer.
A challenge in predicting the thermal response of a ground heat exchanger (GHE) is the diversity of the time and space scales involved. Four space scales and eight time scales are involved in the heat transfer of GHEs. The first space scale having practical importance is the diameter of the borehole (~ 0.1 m) and the associated time is on the order of 1 hr, during which the effect of the heat capacity of the backfilling material is significant. The second important space dimension is the half distance between two adjacent boreholes, which is on the order of several meters. The corresponding time is on the order of a month, during which the thermal interaction between adjacent boreholes is important. The largest space scale can be tens of meters or more, such as the half-length of a borehole and the horizontal scale of a GHE cluster. The time scale involved is as long as the lifetime of a GHE (decades).
The short-term hourly temperature response of the ground is vital for analyzing the energy of ground-source heat pump systems and for their optimum control and operation. By contrast, the long-term response determines the overall feasibility of a system from the standpoint of the life cycle. Addressing the complete spectrum of time scales requires vast computational resources.
The main questions that engineers may ask in the early stages of designing a GHE are (a) what the heat transfer rate of a GHE as a function of time is, given a particular temperature difference between the circulating fluid and the ground, and (b) what the temperature difference as a function of time is, given a required heat exchange rate. In the language of heat transfer, the two questions can be expressed as
formula_0
where "T"f is the average temperature of the circulating fluid, "T"0 is the effective, undisturbed temperature of the ground, "ql" is the heat transfer rate of the GHE per unit time per unit length (W/m), and "R" is the total thermal resistance (m.K/W)."R"("t") is often an unknown variable that needs to be determined by heat transfer analysis. Despite "R"("t") being a function of time, analytical models exclusively decompose it into a time-independent part and a time-dependent part to simplify the analysis.
Various models for the time-independent and time-dependent R can be found in the references. Further, a Thermal response test is often performed to make a deterministic analysis of ground thermal conductivity to optimize the loopfield size, especially for larger commercial sites (e.g., over 10 wells).
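As an illustration only, the sketch below evaluates "T"f("t") = "T"0 + "q"l·"R"("t") for one common choice of model from the literature: a constant borehole resistance plus a time-dependent ground resistance from the infinite line-source approximation. The line-source model and all parameter values (ground conductivity, diffusivity, borehole radius, borehole resistance) are assumptions made for the example, not values given in this article.

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1

def fluid_temperature(t_seconds, q_l, T0=10.0, k=2.5, alpha=1.0e-6, r_b=0.055, R_b=0.1):
    """T_f(t) = T0 + q_l * R(t), with R(t) = R_b (constant borehole part)
    plus the infinite line-source ground resistance (one model among several).
    Sign convention: q_l > 0 rejects heat to the ground, q_l < 0 extracts heat."""
    R_ground = exp1(r_b**2 / (4.0 * alpha * np.asarray(t_seconds))) / (4.0 * np.pi * k)
    return T0 + q_l * (R_b + R_ground)

# Fluid temperature after 1 hour, 1 day, 1 month and 1 year of extracting 30 W/m
hours = np.array([1.0, 24.0, 24.0 * 30.0, 24.0 * 365.0]) * 3600.0
print(fluid_temperature(hours, q_l=-30.0))
```

The growing temperature difference over long extraction periods illustrates why borehole sizing and long-term thermal balance matter for system design.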
Seasonal thermal storage.
The efficiency of ground source heat pumps can be greatly improved by using seasonal thermal energy storage and interseasonal heat transfer. Heat captured and stored in thermal banks in the summer can be retrieved efficiently in the winter. Heat storage efficiency increases with scale, so this advantage is most significant in commercial or district heating systems.
Geosolar combisystems have been used to heat and cool a greenhouse using an aquifer for thermal storage. In summer, the greenhouse is cooled with cold ground water. This heats the water in the aquifer which can become a warm source for heating in winter. The combination of cold and heat storage with heat pumps can be combined with water/humidity regulation. These principles are used to provide renewable heat and renewable cooling to all kinds of buildings.
Also the efficiency of existing small heat pump installations can be improved by adding large, cheap, water-filled solar collectors. These may be integrated into a to-be-overhauled parking lot, or in walls or roof constructions by installing one-inch PE pipes into the outer layer.
Environmental impact.
The US Environmental Protection Agency (EPA) has called ground source heat pumps the most energy-efficient, environmentally clean, and cost-effective space conditioning systems available. Heat pumps offer significant emission reductions potential, particularly where they are used for both heating and cooling and where the electricity is produced from renewable resources.
GSHPs have unsurpassed thermal efficiencies and produce zero emissions locally, but their electricity supply includes components with high greenhouse gas emissions unless the owner has opted for a 100% renewable energy supply. Their environmental impact, therefore, depends on the characteristics of the electricity supply and the available alternatives.
The GHG emissions savings from a heat pump over a conventional furnace can be calculated based on the following formula:
formula_1
Ground-source heat pumps always produce fewer greenhouse gases than air conditioners, oil furnaces, and electric heating, but natural gas furnaces may be competitive depending on the greenhouse gas intensity of the local electricity supply. In countries like Canada and Russia with low emitting electricity infrastructure, a residential heat pump may save 5 tons of carbon dioxide per year relative to an oil furnace, or about as much as taking an average passenger car off the road. But in cities like Beijing or Pittsburgh that are highly reliant on coal for electricity production, a heat pump may result in 1 or 2 tons more carbon dioxide emissions than a natural gas furnace. For areas not served by utility natural gas infrastructure, however, no better alternative exists.
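A small sketch of the savings formula above; the unit conventions (HL in GJ/year, FI in kg CO2 per GJ, EI in tonnes CO2 per GWh) are inferred from the conversion factors in the formula and should be checked against the cited source, and the example inputs are hypothetical.

```python
def ghg_savings_tons_per_year(HL, FI, AFUE, EI, COP):
    """GHG savings of a heat pump relative to a furnace, following the formula above.

    Assumed units (inferred from the conversion factors, to be checked against the source):
      HL   - seasonal heat load, GJ/year
      FI   - emissions intensity of the displaced fuel, kg CO2 per GJ
      AFUE - furnace efficiency, as a fraction (e.g. 0.95)
      EI   - emissions intensity of the electricity supply, tonnes CO2 per GWh
      COP  - seasonally averaged coefficient of performance of the heat pump
    Returns tonnes of CO2 avoided per year (negative means the heat pump emits more)."""
    furnace_emissions = HL * FI / (AFUE * 1000.0)      # 1000 kg per tonne
    heat_pump_emissions = HL * EI / (COP * 3600.0)     # 3600 GJ per GWh
    return furnace_emissions - heat_pump_emissions

# Hypothetical example: gas furnace vs. heat pump on a moderately clean grid
print(round(ghg_savings_tons_per_year(HL=80.0, FI=50.0, AFUE=0.95, EI=200.0, COP=3.2), 2))
```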
The fluids used in closed loops may be designed to be biodegradable and non-toxic, but the refrigerant used in the heat pump cabinet and in direct exchange loops was, until recently, chlorodifluoromethane, which is an ozone-depleting substance. Although harmless while contained, leaks and improper end-of-life disposal contribute to enlarging the ozone hole. For new construction, this refrigerant is being phased out in favor of the ozone-friendly but potent greenhouse gas R410A. Open-loop systems (i.e. those that draw ground water as opposed to closed-loop systems using a borehole heat exchanger) need to be balanced by reinjecting the spent water. This prevents aquifer depletion and the contamination of soil or surface water with brine or other compounds from underground.
Before drilling, the underground geology needs to be understood, and drillers need to be prepared to seal the borehole, including preventing penetration of water between strata. An unfortunate example is a geothermal heating project in Staufen im Breisgau, Germany, which seems to have caused considerable damage to historical buildings there. In 2008, the city centre was reported to have risen 12 cm, after initially sinking a few millimeters. The boring tapped a naturally pressurized aquifer, and via the borehole this water entered a layer of anhydrite, which expands when wet as it forms gypsum. The swelling will stop when the anhydrite is fully reacted, and reconstruction of the city center "is not expedient until the uplift ceases". By 2010 sealing of the borehole had not been accomplished. By 2010, some sections of town had risen by 30 cm.
Economics.
Ground source heat pumps are characterized by high capital costs and low operational costs compared to other HVAC systems. Their overall economic benefit depends primarily on the relative costs of electricity and fuels, which are highly variable over time and across the world. Based on recent prices, ground-source heat pumps currently have lower operational costs than any other conventional heating source almost everywhere in the world. Natural gas is the only fuel with competitive operational costs, and only in a handful of countries where it is exceptionally cheap, or where electricity is exceptionally expensive. In general, a homeowner may save anywhere from 20% to 60% annually on utilities by switching from an ordinary system to a ground-source system.
Capital costs and system lifespan have received much less study until recently, and the return on investment is highly variable. The rapid escalation in system price has been accompanied by rapid improvements in efficiency and reliability. Capital costs are known to benefit from economies of scale, particularly for open-loop systems, so they are more cost-effective for larger commercial buildings and harsher climates. The initial cost can be two to five times that of a conventional heating system in most residential applications, new construction or existing. In retrofits, the cost of installation is affected by the size of the living area, the home's age, insulation characteristics, the geology of the area, and the location of the property. Proper duct system design and mechanical air exchange should be considered in the initial system cost.
Capital costs may be offset by government subsidies; for example, Ontario offered $7000 for residential systems installed in the 2009 fiscal year. Some electric companies offer special rates to customers who install a ground-source heat pump for heating or cooling their building. Where electrical plants have larger loads during summer months and idle capacity in the winter, this increases electrical sales during the winter months. Heat pumps also lower the load peak during the summer due to the increased efficiency of heat pumps, thereby avoiding the costly construction of new power plants. For the same reasons, other utility companies have started to pay for the installation of ground-source heat pumps at customer residences. They lease the systems to their customers for a monthly fee, at a net overall saving to the customer.
The lifespan of the system is longer than conventional heating and cooling systems. Good data on system lifespan is not yet available because the technology is too recent, but many early systems are still operational today after 25–30 years with routine maintenance. Most loop fields have warranties for 25 to 50 years and are expected to last at least 50 to 200 years. Ground-source heat pumps use electricity for heating the house. The higher investment above conventional oil, propane or electric systems may be returned in energy savings in 2–10 years for residential systems in the US. The payback period for larger commercial systems in the US is 1–5 years, even when compared to natural gas. Additionally, because geothermal heat pumps usually have no outdoor compressors or cooling towers, the risk of vandalism is reduced or eliminated, potentially extending a system's lifespan.
Ground source heat pumps are recognized as one of the most efficient heating and cooling systems on the market. They are often the second-most cost-effective solution in extreme climates (after co-generation), despite reductions in thermal efficiency due to ground temperature. (The ground source is warmer in climates that need strong air conditioning, and cooler in climates that need strong heating.) The financial viability of these systems depends on the adequate sizing of ground heat exchangers (GHEs), which generally contribute the most to the overall capital costs of GSHP systems.
Commercial systems maintenance costs in the US have historically been between $0.11 to $0.22 per m2 per year in 1996 dollars, much less than the average $0.54 per m2 per year for conventional HVAC systems.
Governments that promote renewable energy will likely offer incentives for the consumer (residential), or industrial markets. For example, in the United States, incentives are offered both on the state and federal levels of government.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "q_l = [T_f(t) - T_0]/R(t)"
},
{
"math_id": 1,
"text": "\\text{GHG Savings}=\\mathrm{HL} \\left( \\frac\\mathrm{FI}{\\mathrm{AFUE} \\times 1000\\frac\\mathrm{kg}\\mathrm{ton}}-\\frac\\mathrm{EI}{\\mathrm{COP} \\times 3600\\frac\\mathrm{sec}\\mathrm{hr}}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=7096967 |
70971264 | ProbLog | Probabilistic logic programming language
ProbLog is a probabilistic logic programming language that extends Prolog with probabilities. It minimally extends Prolog by adding the notion of a probabilistic fact, which combines the idea of logical atoms and random variables. Similarly to Prolog, ProbLog can query an atom. While Prolog returns the truth value of the queried atom, ProbLog returns the probability of it being true.
Semantics.
A probabilistic fact is a pair formula_0 with formula_1 a ground atom and formula_2 the probability of formula_1 being true. A rule is defined by an atom formula_3, called the head, and a finite set of formula_4 literals formula_5, called the body.
ProbLog programs consist of a set of probabilistic facts formula_6 and a set of rules formula_7. Using the distribution semantics, a probability distribution is defined over the two-valued well-founded models of the atoms in the program. The probability of a model is defined as formula_8 where the product runs over all the literals in the model formula_9. For a query atom formula_10 the distribution semantics defines a probability for the query
formula_11
in which the sum runs over all the models where formula_10 is true.
ProbLog supports multiple tasks. These include probabilistic inference, which computes the probability formula_12 of a query being true, and finding the most probable explanation, which computes the probability formula_13 of the most likely model in which the query holds.
Example.
ProbLog can for example be used to calculate the probability of getting wet given the probabilities for rain and the probabilities that someone brings an umbrella as follows:
0.4 :: rain(weekday).
0.9 :: rain(weekend).
0.8 :: umbrella_if_rainy(Day).
0.2 :: umbrella_if_dry(Day).
umbrella(Day) :- rain(Day), umbrella_if_rainy(Day).
umbrella(Day) :- \+rain(Day), umbrella_if_dry(Day).
wet(Day) :- rain(Day), \+umbrella(Day).
query(\+wet(weekend)).
The last rule before the query states that someone gets wet if it rains and no umbrella was brought. When ProbLog is asked to solve the "probabilistic inference" task, the query asks for the probability to stay dry on a weekend day. When solving the "most probable explanation" task, ProbLog will return the most likely reason for staying dry, i.e. because it is not raining or because the person has an umbrella.
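The distribution semantics of this example can be reproduced by brute force. The following Python sketch is an illustration rather than the ProbLog implementation: it enumerates the eight possible worlds of the three probabilistic facts relevant to the weekend day and sums the probabilities of those worlds in which the query \+wet(weekend) holds, printing 0.82.
from itertools import product
facts = {"rain": 0.9, "umbrella_if_rainy": 0.8, "umbrella_if_dry": 0.2}
p_dry = 0.0
for values in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, values))
    p = 1.0
    for name, prob in facts.items():
        p *= prob if world[name] else 1.0 - prob   # probability of this possible world
    umbrella = (world["rain"] and world["umbrella_if_rainy"]) or (not world["rain"] and world["umbrella_if_dry"])
    wet = world["rain"] and not umbrella           # evaluate the rules of the program in this world
    if not wet:
        p_dry += p                                 # sum over the models where the query is true
print(round(p_dry, 4))                             # 0.82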
Implementations.
The ProbLog language has been implemented as a YAP Prolog library (ProbLog 1) and as a stand-alone Python framework (ProbLog 2).
The source code of ProbLog 2 is licensed under Apache License, Version 2.0 and available on GitHub. The ProbLog language has also been implemented as part of the cplint probabilistic logic programming package for SWI-Prolog, YAP and XSB.
ProbLog variants.
ProbLog has been extended or used as inspiration for several different variants, including:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(p, a)"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "p \\in [0, 1]"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\{ b_1, b_2, . . ., b_n \\}"
},
{
"math_id": 6,
"text": "\\mathcal{F}"
},
{
"math_id": 7,
"text": "\\mathcal{R}"
},
{
"math_id": 8,
"text": "P(M) = \\prod_{l \\in M} P(l)"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "P(q) = \\sum_{M \\models q} P(M) = \\sum_{M \\models q} \\prod_{l \\in M} P(l)"
},
{
"math_id": 12,
"text": "P(q)"
},
{
"math_id": 13,
"text": "\\max_{M \\models q} P(M)"
}
]
| https://en.wikipedia.org/wiki?curid=70971264 |
70985189 | QARMA | QARMA (from Qualcomm ARM Authenticator) is a lightweight tweakable block cipher primarily known for its use in the ARMv8 architecture, where it serves as a cryptographic hash for computing the Pointer Authentication Code that protects software. The cipher was proposed by Roberto Avanzi in 2016. Two versions of QARMA are defined: QARMA-64 (64-bit block size with a 128-bit encryption key) and QARMA-128 (128-bit block size with a 256-bit key). The design of QARMA was influenced by PRINCE and MANTIS. The cipher is intended for fully unrolled hardware implementations requiring low latency, such as memory encryption. Unlike the XTS mode, the address can be used directly as a tweak and does not need to be whitened with the block encryption first.
Architecture.
QARMA is an Even–Mansour cipher built from three stages: a forward function formula_0, a central construction, and a backward function formula_1, with whitening keys "w0" and "w1" XORed in between.
All keys are derived from the "master" encryption key K using "specialisation":
The data is split into 16 "cells" (4-bit nibbles for QARMA-64, 8-bit bytes for QARMA-128). The internal state also contains 16 cells, arranged in a 4x4 matrix, and is initialized by the plaintext (XORed with w0). In each round of formula_0, the state is transformed via the operations formula_2: a permutation of the cells formula_3 (ShuffleCells), a multiplication of the state by a matrix formula_4 (MixColumns), and a substitution layer formula_5 (SubCells) that applies an S-box to each cell.
The tweak for each round is updated using formula_6: a cell permutation formula_7 applied to the tweak, followed by an LFSR formula_8 applied to some of the tweak cells.
The rounds of formula_1 consist of inverse operations formula_9.
Central rounds, in addition to a forward round (formula_2) and a backward round (formula_10), include multiplication of the state by an involutory matrix "Q".
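The overall construction can be summarised in code. The following Python sketch shows only the three-stage skeleton described above and is not an implementation of QARMA: the round operations, the central matrix Q, the tweak schedule and the key specialisation are replaced by placeholder callables supplied by the caller.
def reflection_cipher(plaintext, w0, w1, forward_round, central, backward_round, rounds):
    state = plaintext ^ w0                    # whitening with w0
    for r in range(rounds):                   # forward function
        state = forward_round(state, r)
    state = central(state)                    # central construction (involutory matrix Q)
    for r in reversed(range(rounds)):         # backward function: inverse operations
        state = backward_round(state, r)
    return state ^ w1                         # whitening with w1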
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\digamma"
},
{
"math_id": 1,
"text": "\\overline \\digamma"
},
{
"math_id": 2,
"text": "\\tau, M, S"
},
{
"math_id": 3,
"text": "\\tau"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "h, \\omega"
},
{
"math_id": 7,
"text": "h"
},
{
"math_id": 8,
"text": "\\omega"
},
{
"math_id": 9,
"text": "\\overline \\tau, \\overline M, \\overline S, \\overline h, \\overline \\omega"
},
{
"math_id": 10,
"text": "\\overline \\tau, \\overline M, \\overline S"
}
]
| https://en.wikipedia.org/wiki?curid=70985189 |
70988990 | NM-method | Statistical NM-method
The NM-method or Naszodi–Mendonca method is an operation that can be applied in statistics, econometrics, economics, sociology, and demography to construct counterfactual contingency tables. The method finds the matrix formula_0 (formula_1) which is "closest" to matrix formula_2 (formula_3, called the seed table) in the sense of being ranked the same but with the row and column totals of a target matrix formula_4 formula_5. While the row totals and column totals of formula_4 are known, matrix formula_4 itself may not be known.
Since the solution for matrix formula_0 is unique, the NM-method is a function: formula_6, where formula_7 is a row vector of ones of size formula_8, while formula_9 is a column vector of ones of size formula_10.
The NM-method was developed by Naszodi and Mendonca (2021) and first applied by Naszodi and Mendonca (2019) to solve for matrix formula_0 in problems where matrix formula_11 is not a sample from the population characterized by the row totals and column totals of matrix formula_4, but represents another population.
Their application aimed at quantifying intergenerational changes in the strength of educational homophily and thus measuring the historical change in social inequality between different educational groups in the US between 1980 and 2010. The trend in inequality was found to be U-shaped, supporting the view that with appropriate social and economic policies inequality can be reduced.
Definition of matrix ranking.
The closeness between two matrices of the same size can be defined in several ways. The Euclidean distance and the Kullback-Leibler divergence are two well-known examples.
The NM-method is consistent with a definition relying on the ordinal Liu-Lu index, which is a slightly modified version of the Coleman index defined by Eq. (15) in Coleman (1958). According to this definition, matrix formula_0 is "closest" to matrix formula_2 if their Liu-Lu values are the same, in other words, if they are ranked the same by the ordinal Liu-Lu index.
If formula_2 is a 2×2 matrix, its scalar-valued Liu-Lu index is defined as
formula_12, where
formula_13;
formula_14;
formula_15;
formula_16;
formula_17.
Following Coleman (1958), this index is interpreted as the “actual minus expected over maximum minus minimum”, where formula_18 is the actual value of the formula_19 entry of the seed matrix formula_20; formula_21 is its expected (integer) value under the counterfactual assumptions that the corresponding row total and column total of formula_20 are predetermined, while its interior is random. Also, formula_21 is its minimum value if the association between the row variable and the column variable of formula_20 is non-negative. Finally, formula_22 is the maximum value of formula_18 (formula_3) for given row total formula_23 and column total formula_24.
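For a 2×2 table the index can be computed directly from the four cell counts. The following Python sketch is a minimal illustration of the definition above; the function name is illustrative and not taken from the cited papers.
def liu_lu(Z):
    z11 = Z[0][0]
    row1 = Z[0][0] + Z[0][1]                       # first row total
    col1 = Z[0][0] + Z[1][0]                       # first column total
    total = Z[0][0] + Z[0][1] + Z[1][0] + Z[1][1]  # grand total of the table
    q_minus = int(row1 * col1 / total)             # integer part of the expected (1,1) cell
    return (z11 - q_minus) / (min(row1, col1) - q_minus)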
For a matrix formula_2 of size n×m (formula_25, formula_26), the Liu-Lu index was generalized by Naszodi and Mendonca (2021) to a matrix-valued index. One of the preconditions for the generalization is that the row variable and the column variable of matrix formula_2 have to be ordered. Equating the generalized, matrix-valued Liu-Lu index of formula_2 with that of matrix formula_0 is equivalent to dichotomizing their ordered row variable and ordered column variable in formula_27 ways by exploiting the ordered nature of the row and column variables, and then equating the original, scalar-valued Liu-Lu indices of the 2×2 matrices obtained with the dichotomizations. That is, for any pair formula_28 (formula_29, and formula_30) the restriction formula_31 is imposed, where formula_32 is the formula_33 matrix formula_34 with its first block of size formula_35 and its second block of size formula_36. Similarly, formula_37 is the formula_38 matrix given by the transpose of formula_39, with its first block of size formula_40 and its second block of size formula_41.
Constraints on the row totals and column totals.
Matrix formula_0 should satisfy not only formula_42 but also the pair of constraints on its row totals and column totals: formula_43 and formula_44.
Solution.
Assuming that formula_45 for all pairs of formula_28 (where formula_29, and formula_30), the solution for formula_0 is unique, deterministic, and given by a closed-form formula.
For matrices formula_4 and formula_2 of size formula_46, the solution is
formula_47.
The other three cells of formula_48 are uniquely determined by the row totals and column totals. This is how the NM-method works for 2×2 seed tables.
For formula_49 and formula_50 matrices of size formula_51 (formula_25, formula_52), the solution is obtained by dichotomizing their ordered row variable and ordered column variable in all possible meaningful ways before solving formula_53 problems of 2×2 form. Each problem is defined for an formula_54 pair (formula_55 and formula_56) with formula_42, and the target row totals and column totals: formula_57, and formula_58, respectively. Each problem is solved separately by the formula for formula_59. The set of solutions determines formula_60 entries of matrix formula_61. Its remaining formula_62 elements are uniquely determined by the target row totals and column totals.
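The procedure just described can be sketched in code. The following Python implementation, assuming numpy and assuming the preconditions of the method hold (ordered categories and a non-negative Liu-Lu index for every dichotomization), is an illustration of the construction; the function names are illustrative and not taken from the cited papers.
import numpy as np
def nm_2x2_cell(z11, z1dot, zdot1, ztot, y1dot, ydot1, ytot):
    qz = int(z1dot * zdot1 / ztot)            # integer part of the expected seed cell
    qy = int(y1dot * ydot1 / ytot)            # integer part of the expected target cell
    return (z11 - qz) * (min(y1dot, ydot1) - qy) / (min(z1dot, zdot1) - qz) + qy
def nm_method(Z, row_targets, col_targets):
    Z = np.asarray(Z, dtype=float)
    n, m = Z.shape
    r = np.asarray(row_targets, dtype=float)
    c = np.asarray(col_targets, dtype=float)
    C = np.zeros((n + 1, m + 1))              # C[i, j] = sum of X over its top-left i-by-j block
    C[n, 1:] = np.cumsum(c)                   # fixed by the target column totals
    C[1:, m] = np.cumsum(r)                   # fixed by the target row totals
    for i in range(1, n):                     # one 2x2 problem per dichotomization (i, j)
        for j in range(1, m):
            C[i, j] = nm_2x2_cell(Z[:i, :j].sum(), Z[:i, :].sum(), Z[:, :j].sum(), Z.sum(),
                                  r[:i].sum(), c[:j].sum(), r.sum())
    # recover the individual cells of X from the cumulative block sums
    return C[1:, 1:] - C[:-1, 1:] - C[1:, :-1] + C[:-1, :-1]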
Next, consider how the NM-method works if matrix formula_63 is such that the second precondition, formula_64 for formula_65, is not met.
If formula_66 for all pairs formula_67, the solution for formula_0 is also unique, deterministic, and given by a closed-form formula. However, the corresponding concept of matrix ranking is slightly different from the one discussed above. Liu and Lu (2006) define it as formula_68, where formula_69; formula_70 is the smallest integer larger than or equal to formula_71.
Finally, neither the NM-method nor formula_72 is defined if formula_73 such that formula_74, while for another pair formula_75, formula_76.
A numerical example.
Consider the following matrix formula_77 complemented with its row totals and column totals and the targets, i.e., the row totals and column totals of matrix formula_78:
As a first step of the NM-method, matrix formula_79 is multiplied by the formula_80, and formula_81 matrices for each pair of formula_28 (formula_82, and formula_83). It yields the following 9 matrices of size 2×2 with their target row totals and column totals:
The next step is to calculate the generalized matrix-valued Liu-Lu index formula_84, (where formula_85) by applying the formula of the original scalar-valued Liu-Lu index to each of the 9 matrices:
Apparently, matrix formula_86 is positive. Therefore, the NM-method is defined. Solving each of the 9 problems of the 2×2 form yields 9 entries of the formula_0 matrix. Its other 7 entries are uniquely determined by the target row totals and column totals. The solution for formula_87 is:
Another numerical example taken from Abbott et al.(2019).
Consider the following matrix formula_77 complemented with its row totals and column totals and the targets, i.e., the row totals and column totals of matrix formula_78:
As a first step of the NM-method, matrix formula_79 is multiplied by the formula_80, and formula_81 matrices for each pair of formula_28 (formula_88, and formula_89). It yields the following 4 matrices of size 2×2 with their target row totals and column totals:
The next step is to calculate the generalized matrix-valued Liu-Lu index formula_84, (where formula_85) by applying the formula of the original scalar-valued Liu-Lu index to each of the 4 matrices:
Apparently, matrix formula_86 is positive. Therefore, the NM-method is defined. Solving each of the 4 problems of the 2×2 form yields 4 entries of the formula_0 matrix. Its other 5 entries are uniquely determined by the target row totals and column totals. The solution for formula_87 is:
Implementation.
The NM-method is implemented in Excel, Visual Basic, R, and also in Stata.
Applications.
The NM-method can be applied to study various phenomena including assortative mating, intergenerational mobility as a type of social mobility, residential segregation, recruitment and talent management.
In all of these applications, matrices formula_48, formula_90, and formula_50 represent joint distributions of one-to-one matched entities (e.g. husbands and wives, or first-born children and mothers, or dwellings and main tenants, or CEOs and companies, or chess instructors and their most talented students) characterized either by a dichotomous categorical variable (e.g. taking values vegetarian/non-vegetarian, Grandmaster or not), or an ordered multinomial categorical variable (e.g. level of final educational attainment, skiers' ability level, income bracket, category of rental fee, credit rating, FIDE titles). Although the NM-method has a wide range of applicability, all the examples presented next are about assortative mating along the education level. In these applications, the two preconditions (an ordered trait variable, and positive assortative mating in all educational groups) are uncontroversially met.
Assume that matrix formula_50 characterizes the joint educational distribution of husbands and wives in Zimbabwe, while matrix formula_90 characterizes the same in Yemen. Matrix formula_48 to be constructed with the NM-method tells us what would be the joint educational distribution of couples in Zimbabwe, if the educational distributions of husbands and wives were the same as in Yemen, while the overall desire for homogamy (also called as aggregate marital preferences in economics, or marital matching social norms/social barriers in sociology) were unchanged.
In a second application, matrices formula_50 and formula_90 characterize the same country in two different years. Matrix formula_50 is the joint educational distribution of American newlyweds in 2040, where the husbands are from Generation Z and are young adults when observed. Matrix formula_90 is the same but for Generation Y, observed in year 2024. By constructing matrix formula_48, one can study in the future what would be the educational distribution among just-married American young couples if they sorted into marriages the same way as the males in Generation Z and their partners do, while the education levels were the same as among the males in Generation Y and their partners.
In a third application, matrices formula_50 and formula_90 characterize again the same country in two different years. In this application, matrix formula_50 is the joint educational distribution of Portuguese young couples (where the male partners' age is between 30 and 34 years) in 2011. And formula_90 is the same but it is observed in year 1981. One may aim to construct matrix formula_48 in order to study what would have been the educational distribution of Portuguese young couples if they had sorted into marriages like their peers did in 2011, while their gender-specific educational distributions were the same as in 1981.
In each of the first two applications, matrix formula_48 represents a counterfactual joint distribution. It can be used to quantify certain ceteris paribus effects. More precisely, to quantify on a cardinal scale the difference between the directly unobservable degree of marital sorting in Zimbabwe and Yemen, or in Generation Z and Generation Y with a counterfactual decomposition. For the decomposition, the counterfactual table formula_48 is used to calculate the contribution of each of the driving forces (i.e., the observed structural availability of potential partners with various education levels determining the opportunities at the population level; and the unobservable non-structural drivers, e.g., aggregate matching preferences, desires, norms, barriers) and that of their interaction (i.e., the effect of changes in aggregate preferences/desires/norms/barriers due to changes in structural availability) to an observable cardinal scaled statistics (e.g. the share of educationally homogamous couples).
The third application was used by Naszodi and Mendonca (2021) as an example of a nonsensical counterfactual: the education level has changed so drastically in Portugal over the three decades studied that this counterfactual cannot be obtained.
Some features of the NM-method.
First, the NM-method does not yield a meaningful solution if it reaches the limit of its applicability. For instance, in the third application, the NM-method signals with a negative entry in matrix formula_48 that the counterfactual is impossible (see: AlternativeMethod_US_1980s_2010s_age3035_main.xls Sheet PT_A1981_P2011_Not_meaningful). In this respect, the NM-method is similar to the linear probability model, which signals the same with a predicted probability outside the unit interval formula_91.
Second, the NM-method commutes with merging neighboring categories of the row variable and that of the column variable: formula_92, where formula_93 is the row merging matrix of size formula_94; and formula_95, where formula_96 is the column merging matrix of size formula_97.
Third, the NM-method works even if there are zero entries in matrix formula_2.
Comparison with the IPF.
The iterative proportional fitting procedure (IPF) is also a function: formula_98. It is the operation of finding the fitted matrix formula_99 (formula_100) which fulfills a set of conditions similar to those met by matrix formula_0 constructed with the NM-method. E.g., matrix formula_101 is the closest to matrix formula_11 but with the row and column totals of the target matrix formula_102.
However, there are differences between the IPF and the NM-method. The IPF defines closeness of matrices of the same size by the cross-entropy, or the Kullback-Leibler divergence. Accordingly, the IPF-compatible concept of distance between the 2×2 matrices formula_101 and formula_2 is zero if their cross-product ratios (also known as odds ratios) are the same: formula_103. To recall, the NM-method's condition for equal ranking of matrices formula_0 and formula_2 is formula_104.
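For comparison, the IPF itself can be sketched in a few lines. The following Python function, assuming numpy, alternately rescales the rows and the columns of the seed table until the target marginals are matched; this rescaling preserves the odds ratios of the seed table rather than its Liu-Lu index.
import numpy as np
def ipf(Z, row_targets, col_targets, iterations=100):
    F = np.asarray(Z, dtype=float).copy()
    for _ in range(iterations):
        F *= (np.asarray(row_targets, dtype=float) / F.sum(axis=1))[:, None]  # fit the row totals
        F *= (np.asarray(col_targets, dtype=float) / F.sum(axis=0))[None, :]  # fit the column totals
    return F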
The following numerical example highlights that the IPF and the NM-method are not identical: formula_105. Consider the matrix formula_106 with its targets:
The NM-method yields the following matrix formula_0:
Whereas the solution for matrix formula_101 obtained with the IPF is:
The IPF is equivalent to the maximum likelihood estimator of a joint population distribution, where matrix formula_101 (the estimate for the joint population distribution) is calculated from matrix formula_2, the observed joint distribution in a random sample taken from the population characterized by the row totals and column totals of matrix formula_4. In contrast to the problem solved by the IPF, matrix formula_2 is not sampled from this population in the problem that the NM-method was developed to solve. In fact, in the NM-problem, matrices formula_2 and formula_4 characterize two different populations (either observed simultaneously like in the application for Zimbabwe and Yemen, or observed in two different points in time like in its application for the populations of Generation Z and Generation Y). This difference facilitates the choice between the NM-method and the IPF in empirical applications.
Deming and Stephan (1940), the inventors of the IPF, illustrated the application of their method on a classic maximum likelihood estimation problem, where matrix formula_2 was sampled from the population characterized by the row totals and column totals of matrix formula_4. They were aware of the fact that in general, the IPF is not suitable for counterfactual predictions: they explicitly warned that their algorithm is “not by itself useful for prediction” (see Stephan and Deming 1940 p. 444).
In addition, the domains are different for which the IPF and the NM-method yield solutions. First, unlike the NM-method, the IPF does not provide a solution for all seed tables formula_107 with zero entries (Csiszár (1975) found necessary and sufficient conditions for applying the IPF with general tables having zero entries). Second, unlike the IPF, the NM-method does not provide a meaningful solution for pairs of matrices formula_107 and formula_108 defining impossible counterfactuals. Third, the precondition of the NM-method (of either formula_109 or formula_110) is not a precondition for the applicability of the IPF.
Finally, unlike the NM, the IPF does not commute with the operation of merging neighboring categories of the row variable and that of the column variable, as illustrated with a numerical example in Naszodi (2023) (see page 10).
For this reason, the transformed table obtained with the IPF can be sensitive to the choice of the number of trait categories.
Kenneth Macdonald (2023) is at ease with the conclusion by Naszodi (2023) that the IPF is suitable for sampling correction tasks, but not for generation of counterfactuals. Similarly to Naszodi, Macdonald also questions whether the row and column proportional transformations of the IPF preserve the structure of association within a contingency table that allows us to study social mobility.
Comparison with the Minimum Euclidean Distance Approach.
The Minimum Euclidean Distance Approach (MEDA) (defined by Abbott et al., 2019 following Fernández and Rogerson, 2001) is also a function:
formula_111.
First, MEDA assigns a scalar to matrix formula_50: it is the weight used for constructing the convex combination of two extreme cases (random and perfectly assortative matching with the pair of marginals formula_112) by minimizing the Euclidean distance to formula_2. For example, this scalar is formula_113 in the numerical example taken from Abbott et al. (2019).
Second, for any pair of counterfactual marginal distributions (formula_114) the MEDA constructs the convex combination of the two extreme cases (random and perfectly assortative matches with the pair of marginals (formula_114)).
Differences between the NM and the MEDA: while the NM holds the assortativeness unchanged by keeping the generalized matrix-valued Liu-Lu index formula_84 fixed, the MEDA does the same by keeping the scalar formula_115 fixed. For formula_49 and formula_50 matrices of size formula_116, the two methods produce the same transformed table provided formula_115 ranks the contingency tables the same as the scalar-valued Liu-Lu index does. However, for formula_107 matrices larger than 2×2, the generalized Liu-Lu index is matrix-valued, so it is different from the scalar-valued formula_117. Therefore, the NM-transformed table is also different from the MEDA-transformed table.
For instance, in the numerical example taken from Abbott et al. (2019), the counterfactual table constructed by MEDA is the matrix formula_101:
The difference between matrix formula_101 and matrix formula_0 is not negligible. For example, the share of homogamous couples is 2 percentage points smaller in the MEDA-constructed counterfactual matrix formula_101 than in the observed matrix formula_2, whereas it is 3.4 percentage points smaller in the NM-constructed counterfactual matrix formula_0 relative to formula_2. Because Abbott's example is not fictional but is based on the empirical educational distribution of American couples, the difference between 2 and 3.4 percentage points means that the MEDA quantifies the change in inequality from one generation to the next as significantly smaller than the NM does.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": " X \\in \\mathbb{R}^{n \\times m } "
},
{
"math_id": 2,
"text": "Z"
},
{
"math_id": 3,
"text": " Z \\in \\mathbb{N}^{n \\times m } "
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "( Y \\in \\mathbb{N}^{n \\times m }) "
},
{
"math_id": 6,
"text": " X=\\text{NM}(Z, Y e^T_m, e_nY): \\mathbb{N}^{n \\times m} \\times \\mathbb{N}^{n} \\times \\mathbb{N}^{m} \\mapsto \\mathbb{R}^{n \\times m}"
},
{
"math_id": 7,
"text": "e_n"
},
{
"math_id": 8,
"text": "1\\times n"
},
{
"math_id": 9,
"text": "e^T_m"
},
{
"math_id": 10,
"text": "m\\times 1"
},
{
"math_id": 11,
"text": "\\boldsymbol{Z}"
},
{
"math_id": 12,
"text": " \\text{LL}(Z)=\\frac{Z_{1,1}-Q^-(Z_{1,1})}{ \\text{min}(Z_{1,.} , Z_{.,1}) -Q^-(Z_{1,1})} "
},
{
"math_id": 13,
"text": " Z_{1,.}= Z_{1,1}+ Z_{1,2} "
},
{
"math_id": 14,
"text": "Z_{.,1}= Z_{1,1}+ Z_{2,1} "
},
{
"math_id": 15,
"text": "Z_{.,.}=Z_{.,1}+Z_{1,.}"
},
{
"math_id": 16,
"text": "Q(Z_{1,1})={Z_{1,.} Z_{.,1}}/{Z_{.,.}} "
},
{
"math_id": 17,
"text": "Q^-(Z_{1,1})=int[Q(Z_{1,1})] "
},
{
"math_id": 18,
"text": "Z_{1,1}"
},
{
"math_id": 19,
"text": "1,1 "
},
{
"math_id": 20,
"text": " Z"
},
{
"math_id": 21,
"text": " Q^-"
},
{
"math_id": 22,
"text": " \\text{min}(Z_{1,.} , Z_{.,1}) "
},
{
"math_id": 23,
"text": "Z_{1,.}"
},
{
"math_id": 24,
"text": "Z_{.,1}"
},
{
"math_id": 25,
"text": " n \\geq 2"
},
{
"math_id": 26,
"text": " m \\geq 2"
},
{
"math_id": 27,
"text": " (n-1) \\times (m-1)"
},
{
"math_id": 28,
"text": "i,j"
},
{
"math_id": 29,
"text": " i \\in \\{1, \\ldots, n-1 \\} "
},
{
"math_id": 30,
"text": "j \\in \\{1, \\ldots, m-1 \\}"
},
{
"math_id": 31,
"text": "\\text{LL}(V_i X W^T_j) = \\text{LL}(V_i Z W^T_j) "
},
{
"math_id": 32,
"text": "V_i "
},
{
"math_id": 33,
"text": "2 \\times n "
},
{
"math_id": 34,
"text": " V_i=\\begin{bmatrix} \\color{red}1 & \\color{red}\\cdots & \\color{red}1 & \\color{blue}0 & \\color{blue}\\cdots & \\color{blue}0 \\\\ \\color{red}0 & \\color{red}\\cdots & \\color{red}0 & \\color{blue}1 & \\color{blue}\\cdots & \\color{blue}1 \\end{bmatrix} "
},
{
"math_id": 35,
"text": "2 \\times i "
},
{
"math_id": 36,
"text": "2 \\times (n-i) "
},
{
"math_id": 37,
"text": " W^T_j "
},
{
"math_id": 38,
"text": "m \\times 2 "
},
{
"math_id": 39,
"text": " W_j=\\begin{bmatrix} \\color{red}1 & \\color{red}\\cdots & \\color{red}1 & \\color{blue}0 & \\color{blue}\\cdots & \\color{blue}0 \\\\ \\color{red}0 & \\color{red}\\cdots & \\color{red}0 & \\color{blue}1 & \\color{blue}\\cdots & \\color{blue}1 \\end{bmatrix} "
},
{
"math_id": 40,
"text": "2 \\times j "
},
{
"math_id": 41,
"text": "2 \\times (m-j) "
},
{
"math_id": 42,
"text": " \\text{LL}(V_i X W^T_j)= \\text{LL}(V_i Z W^T_j) "
},
{
"math_id": 43,
"text": "Xe^T_m=Ye^T_m "
},
{
"math_id": 44,
"text": "e_n X=e_n Y "
},
{
"math_id": 45,
"text": " \\text{LL}(V_i Z W^T_j) \\geq 0 "
},
{
"math_id": 46,
"text": "\\boldsymbol{2\\times 2}"
},
{
"math_id": 47,
"text": "X_{1,1} = \\frac{\\left[ Z_{1,1} - \\text{int}\\left({Z_{1,\\cdot} Z_{\\cdot,1}}/{Z_{\\cdot,\\cdot}}\\right)\\right] \\left[{\\text{min}\\left(Y_{1,\\cdot} , Y_{\\cdot,1} \\right)- \\text{int}\\left({Y_{1,\\cdot}Y_{\\cdot,1}}/ {Y_{\\cdot,\\cdot}} \\right) }\\right] }{\\text{min}\\left(Z_{1,\\cdot}, Z_{\\cdot,1} \\right)- \\text{int}\\left({Z_{1,\\cdot}Z_{\\cdot,1}}/{Z_{\\cdot,\\cdot}} \\right) } +\\text{int}\\left({Y_{1,\\cdot} Y_{\\cdot,1}}/{Y_{\\cdot,\\cdot}}\\right) "
},
{
"math_id": 48,
"text": " X "
},
{
"math_id": 49,
"text": "Y "
},
{
"math_id": 50,
"text": " Z "
},
{
"math_id": 51,
"text": "\\boldsymbol{n\\times m}"
},
{
"math_id": 52,
"text": " m \\geq 2 "
},
{
"math_id": 53,
"text": "(n-1)(m-1) "
},
{
"math_id": 54,
"text": " i,j "
},
{
"math_id": 55,
"text": " i \\in \\{1,..., n-1\\} "
},
{
"math_id": 56,
"text": " j \\in \\{1,..., m-1\\} "
},
{
"math_id": 57,
"text": " V_i X e^T_{m}= V_i Y e^T_{m} "
},
{
"math_id": 58,
"text": " e_{n} X W^T_j = e_{n} Y W^T_j "
},
{
"math_id": 59,
"text": "X_{1,1} "
},
{
"math_id": 60,
"text": "(n-1) (m-1) "
},
{
"math_id": 61,
"text": "X "
},
{
"math_id": 62,
"text": " m+n-1"
},
{
"math_id": 63,
"text": "Z "
},
{
"math_id": 64,
"text": "\\boldsymbol{\\text{LL}(V_i Z W^T_j) \\geq 0}"
},
{
"math_id": 65,
"text": " \\boldsymbol{\\forall i,j}"
},
{
"math_id": 66,
"text": "\\boldsymbol{\\text{LL}(V_i Z W^T_j) \\leq 0 } "
},
{
"math_id": 67,
"text": " \\boldsymbol{i,j}"
},
{
"math_id": 68,
"text": "\\text{LL}^-(Z)=\\frac{Z_{1,1}-Q^+(Z_{1,1})}{ Q^+(Z_{1,1})- max(0; Z_{1,.}-Z_{.,2}) } "
},
{
"math_id": 69,
"text": "Z_{.,2}= Z_{1,2}+ Z_{2,2} "
},
{
"math_id": 70,
"text": "Q^+(Z_{1,1})"
},
{
"math_id": 71,
"text": "Q"
},
{
"math_id": 72,
"text": " \\boldsymbol{\\text{LL}(Z)}"
},
{
"math_id": 73,
"text": " \\exist (i,j)"
},
{
"math_id": 74,
"text": "\\boldsymbol{\\text{LL}(V_i Z W^T_j) > 0 } "
},
{
"math_id": 75,
"text": "k,l (\\neq i,j) "
},
{
"math_id": 76,
"text": "\\boldsymbol{\\text{ LL}(V_k Z W^T_l) < 0} "
},
{
"math_id": 77,
"text": "\\color{green}Z "
},
{
"math_id": 78,
"text": " \\color{orange}Y "
},
{
"math_id": 79,
"text": "\\color{green} Z "
},
{
"math_id": 80,
"text": "\\boldsymbol{V_i}"
},
{
"math_id": 81,
"text": "\\boldsymbol{W^T_j} "
},
{
"math_id": 82,
"text": " i \\in \\{1, 2, 3 \\} "
},
{
"math_id": 83,
"text": "j \\in \\{1, 2, 3 \\} "
},
{
"math_id": 84,
"text": "\\text{LL}({Z})"
},
{
"math_id": 85,
"text": "\\text{LL}({Z})_{i,j}=\\text{LL}(V_i Z W^T_j)"
},
{
"math_id": 86,
"text": "\\text{LL}(Z)"
},
{
"math_id": 87,
"text": "\\boldsymbol{X}"
},
{
"math_id": 88,
"text": " i \\in \\{1, 2 \\} "
},
{
"math_id": 89,
"text": "j \\in \\{1, 2 \\} "
},
{
"math_id": 90,
"text": " Y "
},
{
"math_id": 91,
"text": " [0,1] "
},
{
"math_id": 92,
"text": " \\text{NM}(M_r Z, M_r Y e^T_m, M_r e_nY)=M_r \\text{NM}(Z, Y e^T_m, e_nY)"
},
{
"math_id": 93,
"text": "M_r"
},
{
"math_id": 94,
"text": "(n-1) \\times n"
},
{
"math_id": 95,
"text": " \\text{NM}(Z M_c, Y e^T_m M_c, e_n Y M_c)=\\text{NM}(Z, Y e^T_m, e_nY) M_c"
},
{
"math_id": 96,
"text": "M_c"
},
{
"math_id": 97,
"text": "m \\times (m-1)"
},
{
"math_id": 98,
"text": " \\text{IPF}(Z, Y e^T_m, e_nY): \\mathbb{R}^{n \\times m} \\times \\mathbb{R}^{n} \\times \\mathbb{R}^{m} \\mapsto \\mathbb{R}^{n \\times m}"
},
{
"math_id": 99,
"text": "\\boldsymbol{F}"
},
{
"math_id": 100,
"text": " F \\in \\mathbb{R}^{n \\times m}"
},
{
"math_id": 101,
"text": "F"
},
{
"math_id": 102,
"text": "\\boldsymbol{Y}"
},
{
"math_id": 103,
"text": " { F_{1,1} F_{2,2} }/ { F_{1,2} F_{2,1} } ={ Z_{1,1} Z_{2,2} }/ { Z_{1,2} Z_{2,1} } "
},
{
"math_id": 104,
"text": " \\text{LL}(X)=\\frac{X_{1,1}-int[{X_{1,.} X_{.,1}}/{X_{.,.}}]}{ \\text{min}(X_{1,.} , X_{.,1}) -int[{X_{1,.} X_{.,1}}/{X_{.,.}}]} =\\frac{Z_{1,1}-int[{Z_{1,.} Z_{.,1}}/{Z_{.,.}}]}{ \\text{min}(Z_{1,.} , Z_{.,1}) -int[{Z_{1,.} Z_{.,1}}/{Z_{.,.}}]} =\\text{LL}(Z)"
},
{
"math_id": 105,
"text": " \\text{IPF}(Z, Y e^T_m, e_nY) \\neq \\text{NM}(Z, Y e^T_m, e_nY) "
},
{
"math_id": 106,
"text": "\\color{Green}Z"
},
{
"math_id": 107,
"text": "{Z}"
},
{
"math_id": 108,
"text": "{Y}"
},
{
"math_id": 109,
"text": "\\boldsymbol{\\text{LL}(Z)\\geq 0}"
},
{
"math_id": 110,
"text": "\\boldsymbol{\\text{LL}(Z)\\leq 0}"
},
{
"math_id": 111,
"text": " \\text{MEDA}(Z, Y e^T_m, e_nY): \\mathbb{R}^{n \\times m} \\times \\mathbb{R}^{n} \\times \\mathbb{R}^{m} \\mapsto \\mathbb{R}^{n \\times m}"
},
{
"math_id": 112,
"text": "(Z e^T_m, e_nZ)"
},
{
"math_id": 113,
"text": "v=0.265"
},
{
"math_id": 114,
"text": "Y e^T_m, e_nY"
},
{
"math_id": 115,
"text": "v"
},
{
"math_id": 116,
"text": "2\\times 2"
},
{
"math_id": 117,
"text": "v({Z})"
}
]
| https://en.wikipedia.org/wiki?curid=70988990 |
70989 | Molar volume | Volume occupied by a given amount of particles of a substance
In chemistry and related fields, the molar volume, symbol "V"m, or formula_0 of a substance is the ratio of the volume ("V") occupied by a substance to the amount of substance ("n"), usually at a given temperature and pressure. It is also equal to the molar mass ("M") divided by the mass density ("ρ"):
formula_1
The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids.
Definition.
The molar volume of a substance "i" is defined as its molar mass divided by its density "ρ""i"0:
formula_2
For an ideal mixture containing "N" components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. For a real mixture the molar volume cannot be calculated without knowing the density:
formula_3
There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property.
Relation to specific volume.
Molar volume is related to specific volume by the product with molar mass. This follows from above where the specific volume is the reciprocal of the density of a substance:
formula_4
Ideal gases.
For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure.
The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas:
formula_5
Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: "R" = 8.31446261815324 m3·Pa/(K·mol), or about 8.206×10−5 m3·atm/(K·mol).
The molar volume of an ideal gas at 100 kPa (1 bar) is approximately 22.711 litres per mole at 0 °C and 24.790 litres per mole at 25 °C.
The molar volume of an ideal gas at 1 atmosphere of pressure is approximately 22.414 litres per mole at 0 °C and 24.466 litres per mole at 25 °C.
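These values follow directly from the rearranged ideal gas equation. The following Python snippet reproduces them from the SI gas constant and the two reference temperatures and pressures.
R = 8.31446261815324                           # gas constant, J/(mol K)
for label, P in [("100 kPa", 100000.0), ("1 atm", 101325.0)]:
    for T in (273.15, 298.15):                 # 0 degrees C and 25 degrees C
        Vm = R * T / P                         # molar volume in m^3/mol
        print(label, T, "K:", round(Vm * 1000, 3), "L/mol")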
Crystalline solids.
For crystalline solids, the molar volume can be measured by X-ray crystallography.
The unit cell volume ("V"cell) may be calculated from the unit cell parameters, whose determination is the first step in an X-ray crystallography experiment (the calculation is performed automatically by the structure determination software). This is related to the molar volume by
formula_6
where "N"A is the Avogadro constant and "Z" is the number of formula units in the unit cell. The result is normally reported as the "crystallographic density".
Molar volume of silicon.
Ultra-pure silicon is routinely made for the electronics industry, and the measurement of the molar volume of silicon, both by X-ray crystallography and by the ratio of molar mass to mass density, has attracted much attention since the pioneering work at NIST in 1974. The interest stems from the fact that accurate measurements of the unit cell volume, atomic weight and mass density of a pure crystalline solid provide a direct determination of the Avogadro constant.
The CODATA recommended value for the molar volume of silicon is , with a relative standard uncertainty of .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tilde V"
},
{
"math_id": 1,
"text": "V_{\\text{m}} = \\frac{V}{n} = \\frac{M}{\\rho}"
},
{
"math_id": 2,
"text": "V_{\\rm m,i} = {M_i\\over\\rho_i^0}"
},
{
"math_id": 3,
"text": "V_{\\rm m} = \\frac{\\displaystyle\\sum_{i=1}^{N} x_i M_i}{\\rho_{\\mathrm{mixture}}}"
},
{
"math_id": 4,
"text": "V_{\\rm m,i} = {M_i \\over \\rho_i^0} = M_i v_i"
},
{
"math_id": 5,
"text": "V_{\\rm m} = \\frac{V}{n} = \\frac{RT}{P}"
},
{
"math_id": 6,
"text": "V_{\\rm m} = {{N_{\\rm A}V_{\\rm cell}}\\over{Z}}"
}
]
| https://en.wikipedia.org/wiki?curid=70989 |
70996932 | Newton's sine-square law of air resistance | Isaac Newton's sine-squared law of air resistance is a formula that implies the force on a flat plate immersed in a moving fluid is proportional to the "square" of the sine of the angle of attack. Although Newton did not analyze the force on a flat plate himself, the techniques he used for spheres, cylinders, and conical bodies were later applied to a flat plate to arrive at this formula. In 1687, Newton devoted the second volume of his Principia Mathematica to fluid mechanics.
The analysis assumes that the fluid particles are moving at a uniform speed prior to impacting the plate and then follow the surface of the plate after contact. Particles passing above and below the plate are assumed to be unaffected and any particle-to-particle interaction is ignored. This leads to the following formula:
formula_0
where F is the force on the plate (oriented perpendicular to the plate), formula_1 is the density of the fluid, v is the velocity of the fluid, S is the surface area of the plate, and formula_2 is the angle of attack.
More sophisticated analysis and experimental evidence have shown that this formula is inaccurate; although Newton's analysis correctly predicted that the force was proportional to the density, the surface area of the plate, and the square of the velocity, the proportionality to the "square" of the sine of the angle of attack is incorrect. The force is "directly" proportional to the sine of the angle of attack, or for small values of formula_3 itself.
The assumed variation with the "square" of the sine predicted that the lift component would be much smaller than it actually is. This was frequently cited by detractors of heavier-than-air flight to "prove" it was impossible or impractical.
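The size of the discrepancy at small angles is easy to see numerically. The following Python snippet compares the sine-squared factor of Newton's law with a factor simply proportional to the sine, for a few illustrative angles of attack; only the angular dependence is compared, not actual lift coefficients.
import math
for deg in (2, 5, 10, 20):
    a = math.radians(deg)
    # the sine-squared factor understates a linear-in-sine force by an extra factor of sin(a)
    print(deg, "deg:", "sin =", round(math.sin(a), 4), "sin^2 =", round(math.sin(a) ** 2, 5))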
Ironically, the sine squared formula has had a rebirth in modern aerodynamics; the assumptions of rectilinear flow and non-interactions between particles are applicable at hypersonic speeds and the sine-squared formula leads to reasonable predictions.
In 1744, 17 years after Newton's death, the French mathematician Jean le Rond d'Alembert attempted to use the mathematical methods of the day to describe and quantify the forces acting on a body moving relative to a fluid. It proved impossible, and d'Alembert was forced to conclude that he could not devise a mathematical method to describe the force on a body, even though practical experience showed such a force always exists. This has become known as D'Alembert's paradox.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F = \\rho v^2 S \\sin^2(\\alpha) "
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\alpha, \\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=70996932 |
709999 | Clifford module | In mathematics, a Clifford module is a representation of a Clifford algebra. In general a Clifford algebra "C" is a central simple algebra over some field extension "L" of the field "K" over which the quadratic form "Q" defining "C" is defined.
The abstract theory of Clifford modules was founded by a paper of M. F. Atiyah, R. Bott and Arnold S. Shapiro. A fundamental result on Clifford modules is that the Morita equivalence class of a Clifford algebra (the equivalence class of the category of Clifford modules over it) depends only on the signature "p" − "q" (mod 8). This is an algebraic form of Bott periodicity.
Matrix representations of real Clifford algebras.
We will need to study "anticommuting" matrices ("AB" = −"BA") because in Clifford algebras orthogonal vectors anticommute
formula_0
For the real Clifford algebra formula_1, we need "p" + "q" mutually anticommuting matrices, of which "p" have +1 as square and "q" have −1 as square.
formula_2
Such a basis of gamma matrices is not unique. One can always obtain another set of gamma matrices satisfying the same Clifford algebra by means of a similarity transformation.
formula_3
where "S" is a non-singular matrix. The sets "γ""a"′ and "γ""a" belong to the same equivalence class.
Real Clifford algebra R3,1.
Developed by Ettore Majorana, this Clifford module enables the construction of a Dirac-like equation without complex numbers, and its elements are called Majorana spinors.
The four basis vectors are the three Pauli matrices and a fourth antihermitian matrix. The signature is (+++−). For the signatures (+−−−) and (−−−+) often used in physics, 4×4 complex matrices or 8×8 real matrices are needed. | [
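The anticommutation relations can be checked numerically. The following Python sketch, assuming numpy, verifies them for one possible choice of real 4×4 matrices with three squares equal to +1 and one equal to −1; this particular set, built from tensor products of 2×2 blocks, is an illustrative assumption and not necessarily the basis described above.
import numpy as np
I2 = np.eye(2)
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])     # sigma_1
s3 = np.array([[1.0, 0.0], [0.0, -1.0]])    # sigma_3
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # real antisymmetric matrix, eps @ eps = -I
gammas = [np.kron(s1, I2), np.kron(s3, I2), np.kron(eps, eps), np.kron(eps, s1)]
squares = [1, 1, 1, -1]                     # required squares for signature (+, +, +, -)
for a, (ga, sq) in enumerate(zip(gammas, squares)):
    assert np.allclose(ga @ ga, sq * np.eye(4))                  # correct square
    for gb in gammas[a + 1:]:
        assert np.allclose(ga @ gb + gb @ ga, np.zeros((4, 4)))  # mutual anticommutation
print("Clifford relations for signature (3,1) verified")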
{
"math_id": 0,
"text": " A \\cdot B = \\frac{1}{2}( AB + BA ) = 0."
},
{
"math_id": 1,
"text": "\\mathbb{R}_{p,q}"
},
{
"math_id": 2,
"text": " \\begin{matrix}\n\\gamma_a^2 &=& +1 &\\mbox{if} &1 \\le a \\le p \\\\\n\\gamma_a^2 &=& -1 &\\mbox{if} &p+1 \\le a \\le p+q\\\\\n\\gamma_a \\gamma_b &=& -\\gamma_b \\gamma_a &\\mbox{if} &a \\ne b. \\ \\\\\n\\end{matrix}"
},
{
"math_id": 3,
"text": "\\gamma_{a'} = S \\gamma_{a} S^{-1} ,"
}
]
| https://en.wikipedia.org/wiki?curid=709999 |
71004765 | Omnigeneity | A concept in stellarator physics
Omnigeneity (sometimes also called omnigenity) is a property of a magnetic field inside a magnetic confinement fusion reactor. Such a magnetic field is called omnigenous if the path a single particle takes does not drift radially inwards or outwards on average. A particle is then confined to stay on a flux surface. All tokamaks are exactly omnigenous by virtue of their axisymmetry, and conversely an unoptimized stellarator is generally "not" omnigenous.
Because an exactly omnigenous reactor has no neoclassical transport (in the collisionless limit), stellarators are usually optimized in a way such that this criterion is met. One way to achieve this is by making the magnetic field quasi-symmetric, and the Helically Symmetric eXperiment takes this approach. One can also achieve this property without quasi-symmetry, and Wendelstein 7-X is an example of a device which is close to omnigeneity without being quasi-symmetric.
Theory.
The drifting of particles across flux surfaces is generally only a problem for trapped particles, which are trapped in a magnetic mirror. Untrapped (or passing) particles, which can circulate freely around the flux surface, are automatically confined to stay on a flux surface. For trapped particles, omnigeneity relates closely to the second adiabatic invariant formula_0 (often called the parallel or longitudinal invariant).
One can show that the radial drift a particle experiences after one full bounce motion is simply related to a derivative of formula_0, formula_1 where formula_2 is the charge of the particle, formula_3 is the magnetic field line label, and formula_4 is the total radial drift expressed as a difference in toroidal flux. With this relation, omnigeneity can be expressed as the criterion that the second adiabatic invariant should be the same for all the magnetic field lines on a flux surface, formula_5 This criterion is exactly met in axisymmetric systems, as the derivative with respect to formula_3 can be expressed as a derivative with respect to the toroidal angle (under which the system is invariant).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\cal{J}"
},
{
"math_id": 1,
"text": "\\frac{\\partial \\cal{J}}{\\partial \\alpha} = q \\Delta \\psi"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "\\Delta \\psi"
},
{
"math_id": 5,
"text": "\\frac{\\partial \\cal{J}}{\\partial \\alpha} = 0"
}
]
| https://en.wikipedia.org/wiki?curid=71004765 |
7100728 | Quantum vortex | Quantized flux circulation of some physical quantity
In physics, a quantum vortex represents a quantized flux circulation of some physical quantity. In most cases, quantum vortices are a type of topological defect exhibited in superfluids and superconductors. The existence of quantum vortices was first predicted by Lars Onsager in 1949 in connection with superfluid helium. Onsager reasoned that quantisation of vorticity is a direct consequence of the existence of a superfluid order parameter as a spatially continuous wavefunction. Onsager also pointed out that quantum vortices describe the circulation of superfluid and conjectured that their excitations are responsible for superfluid phase transitions. These ideas of Onsager were further developed by Richard Feynman in 1955 and in 1957 were applied to describe the magnetic phase diagram of type-II superconductors by Alexei Alexeyevich Abrikosov. In 1935 Fritz London published a very closely related work on magnetic flux quantization in superconductors. London's fluxoid can also be viewed as a quantum vortex.
Quantum vortices are observed experimentally in type-II superconductors (the Abrikosov vortex), liquid helium, and atomic gases (see Bose–Einstein condensate), as well as in photon fields (optical vortex) and exciton-polariton superfluids.
In a superfluid, a quantum vortex "carries" quantized orbital angular momentum, thus allowing the superfluid to rotate; in a superconductor, the vortex carries quantized magnetic flux.
The term "quantum vortex" is also used in the study of few-body problems. Under the de Broglie–Bohm theory, it is possible to derive a "velocity field" from the wave function. In this context, quantum vortices are zeros of the wave function, around which this velocity field has a solenoidal shape, similar to that of an irrotational vortex in potential flows of traditional fluid dynamics.
Vortex-quantisation in a superfluid.
In a superfluid, a quantum vortex is a hole with the superfluid circulating around the vortex axis; the inside of the vortex may contain excited particles, air, vacuum, etc. The thickness of the vortex depends on a variety of factors; in liquid helium, the thickness is of the order of a few Angstroms.
A superfluid has the special property of having a phase, given by the wavefunction, and the velocity of the superfluid is proportional to the gradient of the phase (in the parabolic mass approximation). The circulation around any closed loop in the superfluid is zero if the region enclosed is simply connected. The superfluid is deemed irrotational; however, if the enclosed region actually contains a smaller region with an absence of superfluid, for example a rod through the superfluid or a vortex, then the circulation is:
formula_0
where formula_1 is the Planck constant divided by formula_2, m is the mass of the superfluid particle, and formula_3 is the total phase difference around the vortex. Because the wave-function must return to its same value after an integer number of turns around the vortex (similar to what is described in the Bohr model), then formula_4, where n is an integer. Thus, the circulation is quantized:
formula_5
London's flux quantization in a superconductor.
A principal property of superconductors is that they expel magnetic fields; this is called the Meissner effect. If the magnetic field becomes sufficiently strong it will, in some cases, “quench” the superconductive state by inducing a phase transition. In other cases, however, it will be energetically favorable for the superconductor to form a lattice of quantum vortices, which carry quantized magnetic flux through the superconductor. A superconductor that is capable of supporting vortex lattices is called a type-II superconductor; vortex quantization in superconductors is general.
Over some enclosed area S, the magnetic flux is
formula_6 where formula_7 is the vector potential of the magnetic induction formula_8
Substituting a result of London's equation: formula_9, we find (with formula_10):
formula_11
where "ns", "m", and "es" are, respectively, number density, mass, and charge of the Cooper pairs.
If the region, "S", is large enough so that formula_12 along formula_13, then
formula_14
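The sizes of the two quanta discussed above are easy to evaluate. The following Python snippet computes the circulation quantum h/m for superfluid helium-4 and the magnetic flux quantum h/(2e) carried by a superconducting vortex, using CODATA values for the constants and the Cooper-pair charge 2e.
h = 6.62607015e-34         # Planck constant, J s
m_he4 = 6.6464731e-27      # mass of a helium-4 atom, kg
e = 1.602176634e-19        # elementary charge, C
print("circulation quantum h/m for He-4:", h / m_he4, "m^2/s")   # about 9.97e-8 m^2/s
print("flux quantum h/(2e):", h / (2 * e), "Wb")                 # about 2.07e-15 Wb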
The flow of current can cause vortices in a superconductor to move, causing the electric field due to the phenomenon of electromagnetic induction. This leads to energy dissipation and causes the material to display a small amount of electrical resistance while in the superconducting state.
Constrained vortices in ferromagnets and antiferromagnets.
The vortex states in ferromagnetic or antiferromagnetic material are also important, mainly for information technology. They are exceptional, since in contrast to superfluids or superconducting material one has a more subtle mathematics: instead of the usual equation of the type formula_15, where formula_16 is the vorticity at the spatial and temporal coordinates, and where formula_17 is the Dirac function, one has:
where now at any point and at any time there is the constraint formula_18. Here formula_19 is constant, the "constant magnitude" of the non-constant magnetization vector formula_20. As a consequence the vector formula_21 in eqn. (*) has been modified to a more complex entity formula_22. This leads, among other points, to the following fact:
In ferromagnetic or antiferromagnetic material a vortex can be moved to generate bits for information storage and recognition, corresponding, e.g., to changes of the quantum number "n". But although the magnetization has the usual azimuthal direction, and although one has vorticity quantization as in superfluids, as long as the circular integration lines surround the central axis at far enough perpendicular distance, this apparent vortex magnetization will change with the distance from an azimuthal direction to an upward or downward one, as soon as the vortex center is approached.
Thus, for each directional element formula_23 there are now not two, but four bits to be stored by a change of vorticity: The first two bits concern the sense of rotation, clockwise or counterclockwise; the remaining bits three and four concern the polarization of the central singular line, which may be polarized up- or downwards. The change of rotation and/or polarization involves subtle topology.
Statistical mechanics of vortex lines.
As first discussed by Onsager and Feynman, if the temperature in a superfluid or a superconductor is raised, the vortex loops undergo a second-order phase transition. This happens when the configurational entropy overcomes the Boltzmann factor, which suppresses the thermal or heat generation of vortex lines. The lines form a condensate. Since the centre of the lines, the vortex cores, are normal liquid or normal conductors, respectively, the condensation transforms the superfluid or superconductor into the normal state. The ensembles of vortex lines and their phase transitions can be described efficiently by a gauge theory.
Statistical mechanics of point vortices.
In 1949 Onsager analysed a toy model consisting of a neutral system of point vortices confined to a finite area. He was able to show that, due to the properties of two-dimensional point vortices, the bounded area (and consequently, bounded phase space) allows the system to exhibit negative temperatures. Onsager provided the first prediction that some isolated systems can exhibit negative Boltzmann temperature. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose-Einstein condensate in 2019.
Pair-interactions of quantum vortices.
In a nonlinear quantum fluid, the dynamics and configurations of the vortex cores can be studied in terms of effective vortex–vortex pair interactions. The effective intervortex potential is predicted to affect quantum phase transitions and to give rise to different few-vortex molecules and many-body vortex patterns.
Preliminary experiments in the specific system of exciton-polariton fluids showed an effective attractive–repulsive intervortex dynamics between two cowinding vortices, whose attractive component can be modulated by the amount of nonlinearity in the fluid.
Spontaneous vortices.
Quantum vortices can form via the Kibble–Zurek mechanism. As a condensate forms by quench cooling, separate protocondensates form with independent phases. As these phase domains merge, quantum vortices can be trapped in the emerging condensate order parameter. Spontaneous quantum vortices were observed in atomic Bose–Einstein condensates in 2008.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\oint_{C} \\mathbf{v}\\cdot\\,d\\mathbf{l} = \\frac{\\hbar}{m}\\oint_{C}\\nabla\\phi_v\\cdot\\,d\\mathbf{l} = \\frac{\\hbar}{m}\\Delta^\\text{tot}\\phi_v,"
},
{
"math_id": 1,
"text": "\\hbar"
},
{
"math_id": 2,
"text": "2\\pi"
},
{
"math_id": 3,
"text": "\\Delta^\\text{tot}\\phi_v"
},
{
"math_id": 4,
"text": "\\Delta^\\text{tot}\\phi_v= 2\\pi n"
},
{
"math_id": 5,
"text": "\\oint_{C} \\mathbf{v}\\cdot\\,d\\mathbf{l} \\equiv \\frac{2\\pi\\hbar}{m} n \\,."
},
{
"math_id": 6,
"text": "\\Phi = \\iint_S\\mathbf{B}\\cdot\\mathbf{\\hat{n}}\\,d^2x = \\oint_{\\partial S}\\mathbf{A}\\cdot d\\mathbf{l}, "
},
{
"math_id": 7,
"text": "\\mathbf A"
},
{
"math_id": 8,
"text": "\\mathbf B."
},
{
"math_id": 9,
"text": "\\mathbf{j}_s = -\\frac{n_se_s^2}{m} \\mathbf{A} + \\frac{n_se_s\\hbar}{m} \\boldsymbol{\\nabla}\\phi"
},
{
"math_id": 10,
"text": "\\mathbf B=\\mathrm{curl}\\,\\, \\mathbf A"
},
{
"math_id": 11,
"text": "\\Phi =-\\frac{m}{n_s e_s^2}\\oint_{\\partial S}\\mathbf{j}_s\\cdot d\\mathbf{l} +\\frac{\\hbar}{e_s} \\oint_{\\partial S}\\boldsymbol{\\nabla}\\phi\\cdot d\\mathbf{l},"
},
{
"math_id": 12,
"text": "\\mathbf{j}_s = 0"
},
{
"math_id": 13,
"text": "\\partial S"
},
{
"math_id": 14,
"text": "\\Phi = \\frac{\\hbar}{e_s} \\oint_{\\partial S}\\boldsymbol{\\nabla}\\phi\\cdot d\\mathbf{l} = \\frac{\\hbar}{e_s} \\Delta^\\text{tot}\\phi = \\frac{2\\pi\\hbar}{e_s}n. "
},
{
"math_id": 15,
"text": "\\operatorname{curl} \\ \\vec v (x,y,z,t)\\propto\\vec \\Omega (\\mathrm r,t)\\cdot\\delta (x,y),"
},
{
"math_id": 16,
"text": "\\vec \\Omega (\\mathrm r,t)"
},
{
"math_id": 17,
"text": "\\delta (x,y)"
},
{
"math_id": 18,
"text": "m_x^2(\\mathrm r, t)+m_y^2(\\mathrm r,t)+m_z^2(\\mathrm r,t)\\equiv M_0^2"
},
{
"math_id": 19,
"text": "M_0"
},
{
"math_id": 20,
"text": "\\vec m(x,y,z,t)"
},
{
"math_id": 21,
"text": "\\vec m"
},
{
"math_id": 22,
"text": "\\vec m_\\mathrm{eff}"
},
{
"math_id": 23,
"text": "\\mathrm d\\varphi \\,\\mathrm d\\vartheta"
}
]
| https://en.wikipedia.org/wiki?curid=7100728 |
71008175 | Job 5 | One of the chapters in Hebrew Bible
Job 5 is the fifth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around the 6th century BCE. This chapter records the speech of Eliphaz the Temanite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –.
Text.
The original text is written in the Hebrew language. This chapter is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 5 is grouped into the Dialogue section with the following outline:
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. The first speech of Eliphaz in chapters 4 and 5 can be broken down into three main sections:
Eliphaz: The experience of the fool (5:1–7).
In this section Eliphaz responds directly to Job regarding Job's request for someone to answer him. Eliphaz compares Job's current experiences with those of persons who would be the opposite of the 'wise' (implying that Job is a fool), as these calamities are generally regarded as the fate of the wicked, according to classical retribution theology.
Eliphaz appeals to the obvious insights encapsulated in proverbial sayings (4:8, 'those who plough iniquity and sow trouble reap the same'; 5:2, 'Surely vexation kills the fool, and jealousy slays the simple').
[Eliphaz said:] "Call out now;"
"Is there anyone who will answer you?"
"And to which of the holy ones will you turn?"
Verse 1.
Job later responds that he desires 'such a mediator to present his case before God' (Job 9:33; 16:19–21).
Eliphaz suggests to commit one's cause to God who will reward the righteous (5:8–27).
This section can be divided into parts comprising verses 8–16, verses 17–26 and verse 27 as the conclusion of Eliphaz's first speech. Verse 8 starts a new topic with a 'strong adversative' "As for me" (or "But I") to commend a solution that Job "put his matter" (or "commit his cause") to God. Ironically, in the whole book only Job talks to God, whereas all the others restrict themselves to 'pontificating about God'. The last statement for this first part (verse 16) emphasizes again the negative aspect of retributive justice with 'a declaration that injustice has shut its mouth'. The next part (verses 17–26) deals with God acting in 'reproving' and 'disciplining', emphasizing the positive aspect of the doctrine of retribution, that the righteous will be rewarded. Eliphaz suggests that 'Job's only task was to apply the traditional teachings to himself, not to persist in his protest' (verse 27); it is the climax to Eliphaz's first speech.
[Eliphaz said:] "“Behold, blessed is the one whom God reproves;"
"therefore despise not the discipline of the Almighty."
Verse 17.
Despite his vision in , Eliphaz presents a faulty opinion in 5:17 about the disciplinary view of suffering (which will later be corrected by Elihu), because Job's suffering is not due to God's discipline.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71008175 |
710164 | Latin hypercube sampling | Statistical sampling technique
Latin hypercube sampling (LHS) is a statistical method for generating a near-random sample of parameter values from a multidimensional distribution. The sampling method is often used to construct computer experiments or for Monte Carlo integration.
LHS was described by Michael McKay of Los Alamos National Laboratory in 1979. An equivalent technique was independently proposed in 1977. It was further elaborated by Ronald L. Iman and coauthors in 1981. Detailed computer codes and manuals were later published.
In the context of statistical sampling, a square grid containing sample positions is a Latin square if (and only if) there is only one sample in each row and each column. A Latin hypercube is the generalisation of this concept to an arbitrary number of dimensions, whereby each sample is the only one in each axis-aligned hyperplane containing it.
When sampling a function of formula_0 variables, the range of each variable is divided into formula_1 equally probable intervals. formula_1 sample points are then placed to satisfy the Latin hypercube requirements; this forces the number of divisions, formula_1, to be equal for each variable. The scheme does not require more samples for more dimensions (variables); this independence is one of its main advantages. Another advantage is that random samples can be taken one at a time, remembering which samples have been taken so far.
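To make the scheme concrete, the following minimal Python sketch (an illustration, not part of the published method; the function name, sample count, dimension count and seed are arbitrary choices) places exactly one sample in each of the equally probable intervals of every variable:

```python
import numpy as np

def latin_hypercube(m_samples, n_dims, seed=None):
    """Draw m_samples points in [0, 1)^n_dims with one point per stratum along each axis."""
    rng = np.random.default_rng(seed)
    samples = np.empty((m_samples, n_dims))
    for j in range(n_dims):
        strata = rng.permutation(m_samples)      # shuffle the m strata for this variable
        jitter = rng.random(m_samples)           # uniform position inside each stratum
        samples[:, j] = (strata + jitter) / m_samples
    return samples

points = latin_hypercube(m_samples=10, n_dims=3, seed=0)
# Each column of `points` contains exactly one value in each interval [k/10, (k+1)/10).
```

Each coordinate lies in [0, 1) and can then be mapped through the inverse cumulative distribution function of the corresponding variable to obtain samples from an arbitrary marginal distribution.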
In two dimensions the difference between random sampling, Latin hypercube sampling, and orthogonal sampling can be explained as follows:
Thus, orthogonal sampling ensures that the set of random numbers is a very good representative of the real variability, LHS ensures that the set of random numbers is representative of the real variability, whereas traditional random sampling (sometimes called brute force) is just a set of random numbers without any guarantees. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "M"
}
]
| https://en.wikipedia.org/wiki?curid=710164 |
710174 | Bridge circuit | Type of electrical circuit
A bridge circuit is a topology of electrical circuitry in which two circuit branches (usually in parallel with each other) are "bridged" by a third branch connected between the first two branches at some intermediate point along them. The bridge was originally developed for laboratory measurement purposes and one of the intermediate bridging points is often adjustable when so used. Bridge circuits now find many applications, both linear and non-linear, including in instrumentation, filtering and power conversion.
The best-known bridge circuit, the Wheatstone bridge, was invented by Samuel Hunter Christie and popularized by Charles Wheatstone, and is used for measuring resistance. It is constructed from four resistors, two of known values "R"1 and "R"3 (see diagram), one whose resistance is to be determined "R"x, and one which is variable and calibrated "R"2. Two opposite vertices are connected to a source of electric current, such as a battery, and a galvanometer is connected across the other two vertices. The variable resistor is adjusted until the galvanometer reads zero. It is then known that the ratio between the variable resistor and its neighbour R1 is equal to the ratio between the unknown resistor and its neighbour R3, which enables the value of the unknown resistor to be calculated.
The Wheatstone bridge has also been generalised to measure impedance in AC circuits, and to measure resistance, inductance, capacitance, and dissipation factor separately. Variants are known as the Wien bridge, Maxwell bridge, and Heaviside bridge (used to measure the effect of mutual inductance). All are based on the same principle, which is to compare the output of two potential dividers sharing a common source.
In power supply design, a bridge circuit or bridge rectifier is an arrangement of diodes or similar devices used to rectify an electric current, i.e. to convert it from an unknown or alternating polarity to a direct current of known polarity.
In some motor controllers, an H-bridge is used to control the direction the motor turns.
Bridge current equation.
From the figure to the right, the bridge current is represented as "I"5.
By Thévenin's theorem, the Thévenin equivalent circuit seen by the bridge load "R"5 can be found; using the assumed direction of the current flow "I"5, we have:
Thevenin Source ("V"th) is given by the formula:
formula_0
and the Thevenin resistance ("R"th):
formula_1
Therefore, the current flow ("I"5) through the bridge is given by Ohm's law:
formula_2
and the voltage ("V"5) across the load ("R"5) is given by the voltage divider formula:
formula_3
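As a worked illustration, the snippet below simply evaluates the expressions above for one set of assumed component values (the resistances and source voltage are arbitrary examples, not figures from the article):

```python
# Assumed example values: resistances in ohms, source voltage U in volts
R1, R2, R3, R4, R5 = 100.0, 120.0, 100.0, 100.0, 50.0
U = 10.0

# Thevenin source voltage and resistance, as given above
V_th = (R2 / (R1 + R2) - R4 / (R3 + R4)) * U
R_th = (R1 + R3) * (R2 + R4) / (R1 + R3 + R2 + R4)

# Bridge current through R5 and the voltage across it
I5 = V_th / (R_th + R5)
V5 = R5 / (R_th + R5) * V_th

print(f"V_th = {V_th:.3f} V, R_th = {R_th:.1f} ohm, I5 = {I5 * 1e3:.2f} mA, V5 = {V5 * 1e3:.1f} mV")
```

When the two dividers are balanced (R2/R1 = R4/R3), V_th and therefore the bridge current vanish, which is the null condition exploited by the Wheatstone bridge to determine an unknown resistance.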
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{th}=\\left(\\frac{R_2}{R_1+R_2}-\\frac{R_4}{R_3+R_4}\\right)\\times U"
},
{
"math_id": 1,
"text": "R_{th}=\\frac{(R_1 + R_3) \\times (R_2 + R_4)}{R_1 + R_3 + R_2 + R_4}"
},
{
"math_id": 2,
"text": "I_5=\\frac{V_{th}}{R_{th}+R_5}"
},
{
"math_id": 3,
"text": "V_5=\\frac{R_5}{R_{th} + R_5} \\times V_{th}"
}
]
| https://en.wikipedia.org/wiki?curid=710174 |
71020 | Ultraviolet–visible spectroscopy | Range of spectroscopic analysis
Ultraviolet (UV) spectroscopy or ultraviolet–visible (UV–VIS) spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV-Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) or transmittance (%T) or reflectance (%R), and its change with time.
A UV-vis spectrophotometer is an analytical instrument that measures the amount of ultraviolet (UV) and visible light that is absorbed by a sample. It is a widely used technique in chemistry, biochemistry, and other fields, to identify and quantify compounds in a variety of samples.
UV-vis spectrophotometers work by passing a beam of light through the sample and measuring the amount of light that is absorbed at each wavelength. The amount of light absorbed is proportional to the concentration of the absorbing compound in the sample.
Optical transitions.
Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to higher energy molecular orbitals, giving rise to an excited state. For organic chromophores, four possible types of transitions are assumed: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals.
Applications.
UV/Vis can be used to monitor structural changes in DNA.
UV/Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of diverse analytes or samples, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions, but solids and gases may also be studied.
The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve.
A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.
The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward–Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.
The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer–Lambert law:
formula_0,
where "A" is the measured absorbance (formally dimensionless but generally reported in absorbance units (AU)), formula_1 is the intensity of the incident light at a given wavelength, formula_2 is the transmitted intensity, "L" the path length through the sample, and "c" the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of formula_3.
The absorbance and extinction "ε" are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.
The Beer–Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A 2nd order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (xylenol orange or neutral red, for example).
UV–Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV–Vis spectrometers are used to measure the reflectance of light, and can be analyzed via the Forouhi–Bloomer dispersion equations to determine the index of refraction (formula_4) and the extinction coefficient (formula_5) of a given film across the measured spectral range.
Practical considerations.
The Beer–Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Worldwide, pharmacopoeias such as the American (USP) and European (Ph. Eur.) pharmacopeias demand that spectrophotometers perform according to strict regulatory requirements encompassing factors such as stray light and wavelength accuracy.
Spectral bandwidth.
The spectral bandwidth of a spectrophotometer is the range of wavelengths that the instrument transmits through a sample at a given time. It is determined by the light source, the monochromator (its physical slit width and optical dispersion), and the detector of the spectrophotometer. The spectral bandwidth affects the resolution and accuracy of the measurement. A narrower spectral bandwidth provides higher resolution and accuracy, but also requires more time and energy to scan the entire spectrum. A wider spectral bandwidth allows for faster and easier scanning, but may result in lower resolution and accuracy, especially for samples with overlapping absorption peaks. Therefore, choosing an appropriate spectral bandwidth is important for obtaining reliable and precise results.
It is important to have a monochromatic source of radiation for the light incident on the sample cell to enhance the linearity of the response. The closer the transmitted band is to truly monochromatic light (a single unit of wavelength), the more linear the response will be. The spectral bandwidth is measured as the number of wavelengths transmitted at half the maximum intensity of the light leaving the monochromator.
The best spectral bandwidth achievable is a specification of the UV spectrophotometer, and it characterizes how monochromatic the incident light can be. If this bandwidth is comparable to (or more than) the width of the absorption peak of the sample component, then the measured extinction coefficient will not be accurate. In reference measurements, the instrument bandwidth (bandwidth of the incident light) is kept below the width of the spectral peaks. When a test material is being measured, the bandwidth of the incident light should also be sufficiently narrow. Reducing the spectral bandwidth reduces the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal to noise ratio.
Wavelength error.
The extinction coefficient of an analyte in solution changes gradually with wavelength. A peak (a wavelength where the absorbance reaches a maximum) in the absorbance curve vs wavelength, i.e. the UV-VIS spectrum, is where the rate of change of absorbance with wavelength is the lowest. Therefore, quantitative measurements of a solute are usually conducted, using a wavelength around the absorbance peak, to minimize inaccuracies produced by errors in wavelength, due to the change of extinction coefficient with wavelength.
Stray light.
Stray light in a UV spectrophotometer is any light that reaches its detector that is not of the wavelength selected by the monochromator. This can be caused, for instance, by scattering of light within the instrument, or by reflections from optical surfaces.
Stray light can cause significant errors in absorbance measurements, especially at high absorbances, because the stray light will be added to the signal detected by the detector, even though it is not part of the actually selected wavelength. The result is that the measured and reported absorbance will be lower than the actual absorbance of the sample.
The stray light is an important factor, as it determines the "purity" of the light used for the analysis. The most important factor affecting it is the "stray light" level of the monochromator.
Typically a detector used in a UV-VIS spectrophotometer is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear.
As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.
Deviations from the Beer–Lambert law.
At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer–Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring.
Solutions that are not homogeneous can show deviations from the Beer–Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles. The deviations will be most noticeable under conditions of low concentration and high absorbance. The last reference describes a way to correct for this deviation.
Some solutions, like copper(II) chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride it means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law.
Measurement uncertainty sources.
The above factors contribute to the measurement uncertainty of the results obtained with UV/Vis spectrophotometry. If UV/Vis spectrophotometry is used in quantitative chemical analysis then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the color of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution.
Ultraviolet–visible spectrophotometer.
The instrument used in ultraviolet–visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light after passing through a sample (formula_2), and compares it to the intensity of light before it passes through the sample (formula_6). The ratio formula_7 is called the "transmittance", and is usually expressed as a percentage (%T). The absorbance, formula_8, is based on the transmittance:
formula_9
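For example, a hypothetical reading of 25 %T corresponds, by this relation, to an absorbance of roughly 0.6:

```python
import math

def absorbance_from_transmittance(percent_t):
    """A = -log10(%T / 100%), the relation quoted above."""
    return -math.log10(percent_t / 100.0)

print(absorbance_from_transmittance(25.0))   # ~0.602
```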
The UV–visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample (formula_2), and compares it to the intensity of light reflected from a reference material (formula_6) (such as a white tile). The ratio formula_7 is called the "reflectance", and is usually expressed as a percentage (%R).
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or a prism as a monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm), a xenon arc lamp, which is continuous from 160 to 2,000 nm; or more recently, light emitting diodes (LED) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step-through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one or two dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.
A spectrophotometer can be either "single beam" or "double beam". In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. formula_6 must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% Transmission (or 0 Absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.
In a single-beam instrument, the cuvette containing only a solvent has to be measured first. Mettler Toledo developed a single beam array spectrophotometer that allows fast and accurate measurements over the UV/VIS range. The light source consists of a Xenon flash lamp for the ultraviolet (UV) as well as for the visible (VIS) and near-infrared wavelength regions covering a spectral range from 190 up to 1100 nm. The lamp flashes are focused on a glass fiber which drives the beam of light onto a cuvette containing the sample solution. The beam passes through the sample and specific wavelengths are absorbed by the sample components. The remaining light is collected after the cuvette by a glass fiber and driven into a spectrograph. The spectrograph consists of a diffraction grating that separates the light into the different wavelengths, and a CCD sensor to record the data, respectively. The whole spectrum is thus simultaneously measured, allowing for fast recording.
Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, formula_10, in the Beer–Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.
Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV–visible microspectrophotometers consist of a UV–visible microscope integrated with a UV–visible spectrophotometer.
A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.
Microspectrophotometry.
UV–visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV–visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single beam instruments. Modern instruments are capable of measuring UV–visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantage of using such instruments is that they are able to measure microscopic samples but are also able to measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry are microscopic. A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet–visible spectrophotometry can be used to determine the thickness, along with the refractive index and extinction coefficient of thin films. A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.
Additional applications.
UV/Vis can be applied to characterize the rate of a chemical reaction. Illustrative is the conversion of the yellow-orange and blue isomers of mercury dithizonate. This method of analysis relies on the fact that absorbance is linearly proportional to concentration. The same approach allows determination of equilibria between chromophores.
From the spectrum of burning gases, it is possible to determine a chemical composition of a fuel, temperature of gases, and air-fuel ratio.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A=\\log_{10}(I_0/I)=\\varepsilon c L"
},
{
"math_id": 1,
"text": "I_0"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "1/M*cm"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "I_o"
},
{
"math_id": 7,
"text": "I/I_o"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "A=-\\log(\\%T/100\\%)"
},
{
"math_id": 10,
"text": "L"
}
]
| https://en.wikipedia.org/wiki?curid=71020 |
710251 | Wind wave | Surface waves generated by wind on open water
In fluid dynamics, a wind wave, or wind-generated water wave, is a surface wave that occurs on the free surface of bodies of water as a result of the wind blowing over the water's surface. The contact distance in the direction of the wind is known as the "fetch". Waves in the oceans can travel thousands of kilometers before reaching land. Wind waves on Earth range in size from small ripples to waves over high, being limited by wind speed, duration, fetch, and water depth.
When directly generated and affected by local wind, a wind wave system is called a wind sea. Wind waves will travel in a great circle route after being generated – curving slightly left in the southern hemisphere and slightly right in the northern hemisphere. After moving out of the area of fetch and no longer being affected by the local wind, wind waves are called "swells" and can travel thousands of kilometers. A noteworthy example of this is waves generated south of Tasmania during heavy winds that will travel across the Pacific to southern California, producing desirable surfing conditions. Wind waves in the ocean are also called ocean surface waves and are mainly "gravity waves", where gravity is the main equilibrium force.
Wind waves have a certain amount of randomness: subsequent waves differ in height, duration, and shape with limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth, propagation, and decay – as well as governing the interdependence between flow quantities such as the water surface movements, flow velocities, and water pressure. The key statistics of wind waves (both seas and swells) in evolving sea states can be predicted with wind wave models.
Although waves are usually considered in the water seas of Earth, the hydrocarbon seas of Titan may also have wind-driven waves. Waves in bodies of water may also be generated by other causes, both at the surface and underwater (such as watercraft, animals, waterfalls, landslides, earthquakes, bubbles, and impact events).
Formation.
The great majority of large breakers seen at a beach result from distant winds. Five factors influence the formation of the flow structures in wind waves:
All of these factors work together to determine the size of the water waves and the structure of the flow within them.
The main dimensions associated with wave propagation are:
A fully developed sea has the maximum wave size theoretically possible for a wind of specific strength, duration, and fetch. Further exposure to that specific wind could only cause a dissipation of energy due to the breaking of wave tops and formation of "whitecaps". Waves in a given area typically have a range of heights. For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as "significant wave height". This figure represents an average height of the highest one-third of the waves in a given time period (usually chosen somewhere in the range from 20 minutes to twelve hours), or in a specific wave or storm system. The significant wave height is also the value a "trained observer" (e.g. from a ship's crew) would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm.
Wave formation on an initially flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water, which generates waves. It is usually assumed for the purpose of theoretical analysis that:
The second mechanism involves wind shear forces on the water surface. John W. Miles suggested a surface wave generation mechanism that is initiated by turbulent wind shear flows based on the inviscid Orr–Sommerfeld equation in 1957. He found the energy transfer from the wind to the water surface is proportional to the curvature of the velocity profile of the wind at the point where the mean wind speed is equal to the wave speed. Since the wind speed profile is logarithmic to the water surface, the curvature has a negative sign at this point. This relation shows the wind flow transferring its kinetic energy to the water surface at their interface.
Assumptions:
Generally, these wave formation mechanisms occur together on the water surface and eventually produce fully developed waves.
For example, if we assume a flat sea surface (Beaufort state 0), and a sudden wind flow blows steadily across the sea surface, the physical wave generation process follows the sequence:
Types.
Three different types of wind waves develop over time:
Ripples appear on smooth water when the wind blows, but will die quickly if the wind stops. The restoring force that allows them to propagate is surface tension. Sea waves are larger-scale, often irregular motions that form under sustained winds. These waves tend to last much longer, even after the wind has died, and the restoring force that allows them to propagate is gravity. As waves propagate away from their area of origin, they naturally separate into groups of common direction and wavelength. The sets of waves formed in this manner are known as swells. The Pacific Ocean is 19,800 km from Indonesia to the coast of Colombia and, based on an average wavelength of 76.5m, would have ~258,824 swells over that width.
It is sometimes alleged that out of a set of waves, the seventh wave in a set is always the largest; while this isn't the case, the waves in the middle of a given set tend to be larger than those before and after them.
Individual "rogue waves" (also called "freak waves", "monster waves", "killer waves", and "king waves") much higher than the other waves in the sea state can occur. In the case of the Draupner wave, its height was 2.2 times the significant wave height. Such waves are distinct from tides, caused by the Moon and Sun's gravitational pull, tsunamis that are caused by underwater earthquakes or landslides, and waves generated by underwater explosions or the fall of meteorites—all having far longer wavelengths than wind waves.
The largest ever recorded wind waves are not rogue waves, but standard waves in extreme sea states. For example, high waves were recorded on the RRS Discovery in a sea with significant wave height, so the highest wave was only 1.6 times the significant wave height.
The biggest recorded by a buoy (as of 2011) was high during the 2007 typhoon Krosa near Taiwan.
Spectrum.
Ocean waves can be classified based on: the disturbing force that creates them; the extent to which the disturbing force continues to influence them after formation; the extent to which the restoring force weakens or flattens them; and their wavelength or period. Seismic sea waves have a period of about 20 minutes, and speeds of . Wind waves (deep-water waves) have a period up to about 20 seconds.
The speed of all ocean waves is controlled by gravity, wavelength, and water depth. Most characteristics of ocean waves depend on the relationship between their wavelength and water depth. Wavelength determines the size of the orbits of water molecules within a wave, but water depth determines the shape of the orbits. The paths of water molecules in a wind wave are circular only when the wave is traveling in deep water. A wave cannot "feel" the bottom when it moves through water deeper than half its wavelength because too little wave energy is contained in the water movement below that depth. Waves moving through water deeper than half their wavelength are known as deep-water waves. On the other hand, the orbits of water molecules in waves moving through shallow water are flattened by the proximity of the sea bottom surface. Waves in water shallower than 1/20 their original wavelength are known as shallow-water waves. Transitional waves travel through water deeper than 1/20 their original wavelength but shallower than half their original wavelength.
In general, the longer the wavelength, the faster the wave energy will move through the water. The relationship between the wavelength, period and velocity of any wave is:
formula_0
where C is speed (celerity), L is the wavelength, and T is the period (in seconds). Thus the speed of the wave derives from the functional dependence formula_1 of the wavelength on the period (the dispersion relation).
The speed of a deep-water wave may also be approximated by:
formula_2
where g is the acceleration due to gravity (approximately 9.8 meters per second squared at the Earth's surface). Because g and π (3.14) are constants, the equation can be reduced to:
formula_3
when C is measured in meters per second and L in meters. In both formulas the wave speed is proportional to the square root of the wavelength.
The speed of shallow-water waves is described by a different equation that may be written as:
formula_4
where C is speed (in meters per second), g is the acceleration due to gravity, and d is the depth of the water (in meters). The period of a wave remains unchanged regardless of the depth of water through which it is moving. As deep-water waves enter the shallows and feel the bottom, however, their speed is reduced, and their crests "bunch up", so their wavelength shortens.
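A short numerical sketch of the relations above (the wavelength and depth used are arbitrary example values):

```python
import math

g = 9.8  # m/s^2, acceleration due to gravity

def deep_water_speed(wavelength):
    """C = sqrt(g * L / (2 * pi)) ~ 1.25 * sqrt(L); valid for depth > L/2."""
    return math.sqrt(g * wavelength / (2 * math.pi))

def shallow_water_speed(depth):
    """C = sqrt(g * d) ~ 3.1 * sqrt(d); valid for depth < L/20."""
    return math.sqrt(g * depth)

# A 100 m deep-water swell travels at about 12.5 m/s;
# a shallow-water wave over a 5 m deep bottom travels at about 7 m/s.
print(deep_water_speed(100.0), shallow_water_speed(5.0))
```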
Spectral models.
Sea state can be described by the sea wave spectrum or just wave spectrum formula_5. It is composed of a wave height spectrum (WHS) formula_6 and a wave direction spectrum (WDS) formula_7. Many interesting properties about the sea state can be found from the wave spectra.
WHS describes the spectral density of wave height variance ("power") versus wave frequency, with dimension formula_8.
The relationship between the spectrum formula_9 and the wave amplitude formula_10 for a wave component formula_11 is:
formula_12
Some WHS models are listed below.
formula_13
formula_14
where
formula_15
formula_16
(The latter model has since been improved, based on the work of Phillips and Kitaigorodskii, to better model the wave height spectrum for high wavenumbers.)
As for WDS, an example model of formula_7 might be:
formula_17
Thus the sea state is fully determined and can be recreated by the following function where formula_18 is the wave elevation, formula_19 is uniformly distributed between 0 and formula_20, and formula_21 is randomly drawn from the directional distribution function formula_22
formula_23
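A minimal sketch of this reconstruction for a single propagation direction at a fixed point (so the sum above reduces to a sum over frequency components) could look as follows; the significant wave height, mean period, frequency band and seed are assumed example values:

```python
import numpy as np

def wave_height_spectrum(omega, h13, t1):
    """S(omega) from the first spectrum model quoted above."""
    x = omega * t1 / (2.0 * np.pi)
    return h13**2 * t1 * (0.11 / (2.0 * np.pi)) * x**-5 * np.exp(-0.44 * x**-4)

def surface_elevation(t, h13=2.0, t1=8.0, n=400, seed=0):
    """Sea-surface elevation at one point, summing n components with random phases."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(0.3, 3.0, n)              # rad/s, assumed frequency band
    d_omega = omega[1] - omega[0]
    amplitude = np.sqrt(2.0 * wave_height_spectrum(omega, h13, t1) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, n)      # random phases epsilon_j
    return float(np.sum(amplitude * np.sin(omega * t + phase)))

trace = [surface_elevation(t) for t in np.arange(0.0, 60.0, 0.5)]
```

Using the same seed for every time step keeps the random phases fixed, so the list represents one realization of the sea surface sampled over a minute.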
Shoaling and refraction.
As waves travel from deep to shallow water, their shape changes (wave height increases, speed decreases, and length decreases as wave orbits become asymmetrical). This process is called shoaling.
Wave refraction is the process that occurs when waves interact with the sea bed to slow the velocity of propagation as a function of wavelength and period. As the waves slow down in shoaling water, the crests tend to realign at a decreasing angle to the depth contours. Varying depths along a wave crest cause the crest to travel at different phase speeds, with those parts of the wave in deeper water moving faster than those in shallow water. This process continues while the depth decreases, and reverses if it increases again, but the wave leaving the shoal area may have changed direction considerably. Rays—lines normal to wave crests between which a fixed amount of energy flux is contained—converge on local shallows and shoals. Therefore, the wave energy between rays is concentrated as they converge, with a resulting increase in wave height.
Because these effects are related to a spatial variation in the phase speed, and because the phase speed also changes with the ambient current—due to the Doppler shift—the same effects of refraction and altering wave height also occur due to current variations. In the case of meeting an adverse current the wave "steepens", i.e. its wave height increases while the wavelength decreases, similar to the shoaling when the water depth decreases.
Breaking.
Some waves undergo a phenomenon called "breaking". A breaking wave is one whose base can no longer support its top, causing it to collapse. A wave breaks when it runs into shallow water, or when two wave systems oppose and combine forces. When the slope, or steepness ratio, of a wave is too great, breaking is inevitable.
Individual waves in deep water break when the wave steepness—the ratio of the wave height "H" to the wavelength "λ"—exceeds about 0.17, so for "H" > 0.17 "λ". In shallow water, with the water depth small compared to the wavelength, the individual waves break when their wave height "H" is larger than 0.8 times the water depth "h", that is "H" > 0.8 "h". Waves can also break if the wind grows strong enough to blow the crest off the base of the wave.
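The two criteria can be written as a small check (a sketch using the approximate thresholds quoted above; the example wave is an arbitrary illustration):

```python
def breaks_in_deep_water(height, wavelength):
    """Deep water: a wave breaks when its steepness H / lambda exceeds about 0.17."""
    return height > 0.17 * wavelength

def breaks_in_shallow_water(height, depth):
    """Shallow water: a wave breaks when H exceeds about 0.8 times the water depth."""
    return height > 0.8 * depth

# A 2 m high wave starts to break once the water becomes shallower than about 2.5 m.
print(breaks_in_shallow_water(height=2.0, depth=2.4))   # True
```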
In shallow water, the base of the wave is decelerated by drag on the seabed. As a result, the upper parts will propagate at a higher velocity than the base and the leading face of the crest will become steeper and the trailing face flatter. This may be exaggerated to the extent that the leading face forms a barrel profile, with the crest falling forward and down as it extends over the air ahead of the wave.
Three main types of breaking waves are identified by surfers or surf lifesavers. Their varying characteristics make them more or less suitable for surfing and present different dangers.
When the shoreline is near vertical, waves do not break but are reflected. Most of the energy is retained in the wave as it returns to seaward. Interference patterns are caused by superposition of the incident and reflected waves, and the superposition may cause localized instability when peaks cross, and these peaks may break due to instability. (see also clapotic waves)
Physics of waves.
Wind waves are mechanical waves that propagate along the interface between water and air; the restoring force is provided by gravity, and so they are often referred to as surface gravity waves. As the wind blows, pressure and friction perturb the equilibrium of the water surface and transfer energy from the air to the water, forming waves. The initial formation of waves by the wind is described in the theory of Phillips from 1957, and the subsequent growth of the small waves has been modeled by Miles, also in 1957.
In linear plane waves of one wavelength in deep water, parcels near the surface move not plainly up and down but in circular orbits: forward above and backward below (compared to the wave propagation direction). As a result, the surface of the water forms not an exact sine wave, but more a trochoid with the sharper curves upwards—as modeled in trochoidal wave theory. Wind waves are thus a combination of transversal and longitudinal waves.
When waves propagate in shallow water, (where the depth is less than half the wavelength) the particle trajectories are compressed into ellipses.
In reality, for finite values of the wave amplitude (height), the particle paths do not form closed orbits; rather, after the passage of each crest, particles are displaced slightly from their previous positions, a phenomenon known as Stokes drift.
As the depth below the free surface increases, the radius of the circular motion decreases. At a depth equal to half the wavelength λ, the orbital movement has decayed to less than 5% of its value at the surface. The phase speed (also called the celerity) of a surface gravity wave is—for pure periodic wave motion of small-amplitude waves—well approximated by
formula_24
where
"c" = phase speed;
"λ" = wavelength;
"d" = water depth;
"g" = acceleration due to gravity at the Earth's surface.
In deep water, where formula_25, so formula_26 and the hyperbolic tangent approaches formula_27, the speed formula_28 approximates
formula_29
In SI units, with formula_30 in m/s, formula_31, when formula_32 is measured in metres.
This expression tells us that waves of different wavelengths travel at different speeds. The fastest waves in a storm are the ones with the longest wavelength. As a result, after a storm, the first waves to arrive on the coast are the long-wavelength swells.
For intermediate and shallow water, the Boussinesq equations are applicable, combining frequency dispersion and nonlinear effects. And in very shallow water, the shallow water equations can be used.
If the wavelength is very long compared to the water depth, the phase speed (by taking the limit of c when the wavelength approaches infinity) can be approximated by
formula_33
On the other hand, for very short wavelengths, surface tension plays an important role and the phase speed of these gravity-capillary waves can (in deep water) be approximated by
formula_34
where
"S" = surface tension of the air-water interface;
formula_35 = density of the water.
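The phase-speed expressions above can be combined into a small helper; the surface tension and water density below are typical assumed values for an air–water interface, not figures taken from the text:

```python
import math

g = 9.8        # m/s^2
S = 0.073      # N/m, approximate air-water surface tension (assumed)
rho = 1025.0   # kg/m^3, sea water (assumed)

def gravity_wave_phase_speed(wavelength, depth):
    """c = sqrt(g * lambda / (2 pi) * tanh(2 pi d / lambda))."""
    return math.sqrt(g * wavelength / (2.0 * math.pi)
                     * math.tanh(2.0 * math.pi * depth / wavelength))

def gravity_capillary_phase_speed(wavelength):
    """Deep-water phase speed including surface tension."""
    return math.sqrt(g * wavelength / (2.0 * math.pi)
                     + 2.0 * math.pi * S / (rho * wavelength))
```

With these assumed constants the gravity–capillary phase speed has a minimum of roughly 0.23 m/s near a wavelength of about 1.7 cm, below which surface tension rather than gravity dominates the restoring force.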
When several wave trains are present, as is always the case in nature, the waves form groups. In deep water, the groups travel at a group velocity which is half of the phase speed. Following a single wave in a group one can see the wave appearing at the back of the group, growing, and finally disappearing at the front of the group.
As the water depth formula_36 decreases towards the coast, this will have an effect: wave height changes due to wave shoaling and refraction. As the wave height increases, the wave may become unstable when the crest of the wave moves faster than the trough. This causes "surf", a breaking of the waves.
The movement of wind waves can be captured by wave energy devices. The energy density (per unit area) of regular sinusoidal waves depends on the water density formula_35, gravity acceleration formula_37 and the wave height formula_38 (which, for regular waves, is equal to twice the amplitude, formula_39):
formula_40
The velocity of propagation of this energy is the group velocity.
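A one-line evaluation of this energy density (the sea-water density and the 2 m wave height are assumed example values):

```python
def wave_energy_density(height, rho=1025.0, g=9.8):
    """E = (1/8) * rho * g * H^2, energy per unit area of sea surface in J/m^2."""
    return rho * g * height**2 / 8.0

print(wave_energy_density(2.0))   # about 5.0 kJ per square metre for a 2 m regular wave
```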
Models.
Surfers are very interested in the wave forecasts. There are many websites that provide predictions of the surf quality for the upcoming days and weeks. Wind wave models are driven by more general weather models that predict the winds and pressures over the oceans, seas, and lakes.
Wind wave models are also an important part of examining the impact of shore protection and beach nourishment proposals. For many beach areas there is only patchy information about the wave climate, therefore estimating the effect of wind waves is important for managing littoral environments.
A wind-generated wave can be predicted based on two parameters: wind speed at 10 m above sea level and wind duration; the wind must blow over a long period of time for the sea to be considered fully developed. The significant wave height and peak frequency can then be predicted for a certain fetch length.
Seismic signals.
Ocean water waves generate seismic waves that are globally visible on seismographs. There are two principal constituents of the ocean wave-generated seismic microseism. The strongest of these is the secondary microseism, which is created by ocean floor pressures generated by interfering ocean waves and has a spectrum that is generally between approximately 6–12 s periods, or at approximately half of the period of the responsible interfering waves. The theory for microseism generation by standing waves was provided by Michael Longuet-Higgins in 1950, after Pierre Bernard had suggested this relation with standing waves in 1941 on the basis of observations. The weaker primary microseism, also globally visible, is generated by dynamic seafloor pressures of propagating waves above shallower (less than several hundred meters depth) regions of the global ocean. Microseisms were first reported in about 1900, and seismic records provide long-term proxy measurements of seasonal and climate-related large-scale wave intensity in Earth's oceans, including those associated with anthropogenic global warming.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C = {L}/{T} "
},
{
"math_id": 1,
"text": " L(T) "
},
{
"math_id": 2,
"text": " C = \\sqrt{{gL}/{2\\pi}} "
},
{
"math_id": 3,
"text": " C = 1.251\\sqrt{L} "
},
{
"math_id": 4,
"text": " C = \\sqrt{gd} = 3.1\\sqrt{d} "
},
{
"math_id": 5,
"text": "S(\\omega, \\Theta)"
},
{
"math_id": 6,
"text": "S(\\omega)"
},
{
"math_id": 7,
"text": "f(\\Theta)"
},
{
"math_id": 8,
"text": "\\{S(\\omega)\\} = \\{{\\text{length}}^2\\cdot\\text{time}\\}"
},
{
"math_id": 9,
"text": "S(\\omega_j)"
},
{
"math_id": 10,
"text": "A_j"
},
{
"math_id": 11,
"text": "j"
},
{
"math_id": 12,
"text": "\\frac{1}{2} A_j^2 = S(\\omega_j)\\, \\Delta \\omega"
},
{
"math_id": 13,
"text": "\n\\frac{S(\\omega)}{H_{1/3}^2 T_1} = \\frac{0.11}{2\\pi} \\left(\\frac{\\omega T_1}{2\\pi}\\right)^{-5} \\mathrm{exp} \\left[-0.44 \\left(\\frac{\\omega T_1}{2\\pi}\\right)^{-4} \\right]\n"
},
{
"math_id": 14,
"text": "\nS(\\omega) = 155 \\frac{H_{1/3}^2}{T_1^4 \\omega^5} \\mathrm{exp} \\left(\\frac{-944}{T_1^4 \\omega^4}\\right)(3.3)^Y,\n"
},
{
"math_id": 15,
"text": "Y = \\exp \\left[-\\left(\\frac{0.191 \\omega T_1 -1}{2^{1/2}\\sigma}\\right)^2\\right]"
},
{
"math_id": 16,
"text": "\\sigma =\n\\begin{cases}\n0.07 & \\text{if }\\omega \\le 5.24 / T_1, \\\\\n0.09 & \\text{if }\\omega > 5.24 / T_1.\n\\end{cases}\n"
},
{
"math_id": 17,
"text": "f(\\Theta) = \\frac{2}{\\pi}\\cos^2\\Theta, \\qquad -\\pi/2 \\le \\Theta \\le \\pi/2"
},
{
"math_id": 18,
"text": "\\zeta"
},
{
"math_id": 19,
"text": "\\epsilon_{j}"
},
{
"math_id": 20,
"text": "2\\pi"
},
{
"math_id": 21,
"text": "\\Theta_j"
},
{
"math_id": 22,
"text": "\\sqrt{f(\\Theta)}:"
},
{
"math_id": 23,
"text": "\\zeta = \\sum_{j=1}^N \\sqrt{2 S(\\omega_j) \\Delta \\omega_j}\\; \\sin(\\omega_j t - k_j x \\cos \\Theta_j - k_j y \\sin \\Theta_j + \\epsilon_{j})."
},
{
"math_id": 24,
"text": "c=\\sqrt{\\frac{g \\lambda}{2\\pi} \\tanh \\left(\\frac{2\\pi d}{\\lambda}\\right)}"
},
{
"math_id": 25,
"text": "d \\ge \\frac{1}{2}\\lambda"
},
{
"math_id": 26,
"text": "\\frac{2\\pi d}{\\lambda} \\ge \\pi"
},
{
"math_id": 27,
"text": "1"
},
{
"math_id": 28,
"text": "c"
},
{
"math_id": 29,
"text": "c_\\text{deep}=\\sqrt{\\frac{g\\lambda}{2\\pi}}."
},
{
"math_id": 30,
"text": "c_\\text{deep}"
},
{
"math_id": 31,
"text": "c_\\text{deep} \\approx 1.25\\sqrt\\lambda"
},
{
"math_id": 32,
"text": "\\lambda"
},
{
"math_id": 33,
"text": "c_\\text{shallow} = \\lim_{\\lambda\\rightarrow\\infty} c = \\sqrt{gd}."
},
{
"math_id": 34,
"text": "c_\\text{gravity-capillary}=\\sqrt{\\frac{g \\lambda}{2\\pi} + \\frac{2\\pi S}{\\rho\\lambda}}"
},
{
"math_id": 35,
"text": "\\rho"
},
{
"math_id": 36,
"text": "d"
},
{
"math_id": 37,
"text": "g"
},
{
"math_id": 38,
"text": "H"
},
{
"math_id": 39,
"text": "a"
},
{
"math_id": 40,
"text": "E=\\frac{1}{8}\\rho g H^2=\\frac{1}{2}\\rho g a^2."
}
]
| https://en.wikipedia.org/wiki?curid=710251 |
7102599 | Lutz–Kelker bias | The Lutz–Kelker bias is a supposed systematic bias that results from the assumption that the probability of a star being at distance formula_0 increases with the square of the distance which is equivalent to the assumption that the distribution of stars in space is uniform. In particular, it causes measured parallaxes to stars to be larger than their actual values. The bias towards measuring larger parallaxes in turn results in an underestimate of distance and therefore an underestimate on the object's luminosity.
For a given parallax measurement with an accompanying uncertainty, both stars closer and farther may, because of uncertainty in measurement, appear at the given parallax. Assuming uniform stellar distribution in space, the probability density of the true parallax per unit range of parallax will be proportional to formula_1 (where formula_2 is the true parallax), and therefore, there will be more stars in the volume shells at farther distance. As a result of this dependence, more stars will have their true parallax smaller than the observed parallax. Thus, the measured parallax will be systematically biased towards a value larger than the true parallax. This causes inferred luminosities and distances to be too small, which poses an apparent problem to astronomers trying to measure distance. The existence (or otherwise) of this bias and the necessity of correcting for it has become relevant in astronomy with the precision parallax measurements made by the Hipparcos satellite and more recently with the high-precision data releases of the Gaia mission.
The correction method due to Lutz and Kelker placed a bound on the true parallax of stars. This is not valid because true parallax (as distinct from measured parallax) cannot be known. Integrating over all true parallaxes (all space) assumes that stars are equally visible at all distances, and leads to divergent integrals yielding an invalid calculation. Consequently, the Lutz-Kelker correction should not be used. In general, other corrections for systematic bias are required, depending on the selection criteria of the stars under consideration.
The scope of the bias's effects has also been discussed in the context of current higher-precision measurements and of stellar samples for which the original stellar distribution assumptions are not valid. These differences mean that the originally discussed effects are largely overestimated and highly dependent on the choice of stellar sample. It also remains possible that other forms of statistical bias, such as the Malmquist bias, may have a counter-effect on the Lutz–Kelker bias for at least some samples.
Mathematical Description.
Original Description.
The Distribution Function.
Mathematically, the Lutz-Kelker Bias originates from the dependence of the number density on the observed parallax that is translated into the conditional probability of parallax measurements. Assuming a Gaussian distribution of the observed parallax about the true parallax due to errors in measurement, we can write the conditional probability distribution function of measuring a parallax of formula_3 given that the true parallax is formula_2 as
formula_4
Since the estimation is of the true parallax based on the measured parallax, the conditional probability of the true parallax being formula_2, given that the observed parallax is formula_3, is of interest. In the original treatment of the phenomenon by Lutz & Kelker, this probability, using Bayes' theorem, is given as
formula_5
where formula_6 and formula_7 are the prior probabilities of the true and observed parallaxes respectively.
Dependence on Distance.
The probability density of finding a star with apparent magnitude formula_8 at a distance formula_0 can be similarly written as
formula_9
where formula_10 is the probability density of finding a star with apparent magnitude formula_8 at a given distance formula_11. Here, formula_10 depends on the luminosity function of the star, which depends on its absolute magnitude. formula_12 is the probability density function of the apparent magnitude independent of distance. The probability of a star being at distance formula_11 will be proportional to formula_13, such that
formula_14
Assuming a uniform distribution of stars in space, the number density formula_15 becomes a constant and we can write
formula_16, where formula_17.
Since we deal with the probability distribution of the true parallax based on a fixed observed parallax, the probability density formula_18 becomes irrelevant and we can conclude that the distribution will have the proportionality
formula_19 and thus,
formula_20
Normalization.
The conditional probability of the true parallax based on the observed parallax is divergent around zero for the true parallax. Therefore, it is not possible to normalize this probability. Following the original description of the bias, we can define a normalization by including the observed parallax as
formula_21
The inclusion of formula_22 does not affect proportionality since it is a fixed constant. Moreover, in this defined "normalization", we will get a probability of 1 when the true parallax is equal to the observed parallax, regardless of the errors in measurement. Therefore, we can define a dimensionless parallax formula_23 and get the dimensionless distribution of the true parallax as
formula_24
Here, formula_25 represents the point where the measurement in parallax is equal to its true value, where the probability distribution should be centered. However, this distribution, due to the formula_26 factor, will deviate from the point formula_25 towards smaller values. This presents the systematic Lutz-Kelker Bias. The value of this bias will be based on the value of formula_27, the marginal uncertainty in parallax measurement.
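The shift can be illustrated numerically. The sketch below (Python; the grid range and the values of formula_27 are illustrative choices, not taken from the original paper) evaluates the unnormalized distribution formula_24 and locates its local mode near formula_25, which moves to smaller values of the dimensionless parallax as the fractional measurement error grows.

```python
import numpy as np

def G(Z, sigma_over_p):
    """Unnormalized Lutz-Kelker distribution: Z**-4 * exp(-(Z - 1)**2 / (2 (sigma/p_o)**2))."""
    return Z**-4.0 * np.exp(-(Z - 1.0)**2 / (2.0 * sigma_over_p**2))

# The full distribution diverges as Z -> 0, so the grid is restricted to the
# neighbourhood of the local mode near Z = 1.
Z = np.linspace(0.5, 2.0, 100001)
for sigma_over_p in (0.05, 0.10, 0.15):
    mode = Z[np.argmax(G(Z, sigma_over_p))]
    print(f"sigma/p_o = {sigma_over_p:.2f}  ->  mode of G(Z) at Z = {mode:.3f}")
```

For these illustrative values the mode falls roughly 1–10% below formula_25, consistent with the systematic shift described above.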
Scope of Effects.
Original Treatment.
In the original treatment of the Lutz–Kelker bias, as it was first proposed, the uncertainty in parallax measurement is considered to be the sole source of bias. As a result of the parallax dependence of stellar distributions, a smaller uncertainty in the observed parallax will result in only a slight bias from the true parallax value. Larger uncertainties, in contrast, would yield higher systematic deviations of the observed parallax from its true value. Large errors in parallax measurement become apparent in luminosity calculations and are therefore easy to detect. Consequently, the original treatment of the phenomenon considered the bias to be effective when the uncertainty in the observed parallax, formula_28, is close to about 15% of the measured value, formula_3. This was a very strong statement, indicating that if the uncertainty in the parallax is about 15–20%, the bias is so effective that most of the parallax and distance information is lost. Several subsequent works on the phenomenon refuted this argument and showed that the scope is actually highly sample-dependent and may depend on other sources of bias. Therefore, more recently it has been argued that the scope for most stellar samples is not as drastic as first proposed.
Subsequent Discussions.
Following the original statement, the scope of the effects of the bias, as well as its existence and related methods of correction, have been discussed in many works in the recent literature, including subsequent work by Lutz himself. Several subsequent works state that the assumption of a uniform stellar distribution may not be applicable, depending on the choice of stellar sample. Moreover, the effects of different distributions of stars in space, as well as those of measurement errors, would yield different forms of bias. This suggests the bias is largely dependent on the specific choice of sample and measurement error distributions, although the term Lutz–Kelker bias is commonly used generically for the phenomenon on all stellar samples. It is also questioned whether other sources of error and bias, such as the Malmquist bias, actually counteract or even cancel the Lutz–Kelker bias, so that the effects are not as drastic as initially described by Lutz and Kelker. Overall, such differences mean that the effects of the bias were largely overestimated in the original treatment.
More recently, the effects of the Lutz–Kelker bias became relevant in the context of the high-precision measurements of Gaia mission. The scope of effects of Lutz–Kelker bias on certain samples is discussed in the recent Gaia data releases, including the original assumptions and the possibility of different distributions. It remains important to take bias effects with caution regarding sample selection as stellar distribution is expected to be non-uniform at large distance scales. As a result, it is questioned whether correction methods, including the Lutz-Kelker correction proposed in the original work, are applicable for a given stellar sample, since effects are expected to depend on the stellar distribution. Moreover, following the original description and the dependence of the bias on the measurement errors, the effects are expected to be lower due to the higher precision of current instruments such as Gaia.
History.
The original description of the phenomenon was presented in a paper by Thomas E. Lutz and Douglas H. Kelker in the Publications of the Astronomical Society of the Pacific, Vol. 85, No. 507, p. 573, entitled "On the Use of Trigonometric Parallaxes for the Calibration of Luminosity Systems: Theory", although the effect was known following the work of Trumpler & Weaver in 1953. Discussion of statistical bias in astronomical measurements dates back to as early as Eddington in 1913.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s\n"
},
{
"math_id": 1,
"text": "1/p^4"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "p_o"
},
{
"math_id": 4,
"text": "g(p_o|p) = \\dfrac{1}{\\sqrt{2 \\pi \\sigma}} \\exp{\\Big(\\dfrac{-(p_o-p)^2}{2\\sigma^2}\\Big)}"
},
{
"math_id": 5,
"text": "g(p|p_o) = \\dfrac{g(p_o|p) \\ g(p)}{g(p_o)}"
},
{
"math_id": 6,
"text": "g(p) dp"
},
{
"math_id": 7,
"text": "g(p_o) dp_o"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "h(s|m) = \\dfrac{h(m|s)\\ h(s)}{h(m)}"
},
{
"math_id": 10,
"text": "h(m|s)"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "h(m)"
},
{
"math_id": 13,
"text": "s^2"
},
{
"math_id": 14,
"text": "h(s) \\propto n(s) \\ s^2"
},
{
"math_id": 15,
"text": "n(s)"
},
{
"math_id": 16,
"text": "g(p) \\ dp = h(s|m) \\ \\Bigg | \\dfrac{\\partial s}{\\partial p} \\Bigg |_m \\ dp \\propto s^2 \\Bigg | \\dfrac{\\partial s}{\\partial p} \\Bigg |_m dp"
},
{
"math_id": 17,
"text": "s = 1/p"
},
{
"math_id": 18,
"text": "g(p_o)"
},
{
"math_id": 19,
"text": "g(p|p_o) \\propto g(p_o|p) \\ p^{-4}"
},
{
"math_id": 20,
"text": "g(p|p_o) \\propto \\dfrac{1}{p^4} \\exp\\Big({-\\dfrac{(p-p_o)^2}{2 \\ \\sigma^2}}\\Big)"
},
{
"math_id": 21,
"text": "g(p|p_o) \\propto \\Big(\\dfrac{p_o}{p}\\Big)^4 \\ \\exp\\Big({-\\dfrac{(p-p_o)^2}{2 \\ \\sigma^2}}\\Big) "
},
{
"math_id": 22,
"text": "p_o "
},
{
"math_id": 23,
"text": "Z := p/p_o "
},
{
"math_id": 24,
"text": "G(Z) \\propto Z^{-4} \\ \\exp\\Big({-\\dfrac{(Z-1)^2}{2 \\ (\\sigma/ p_o)^2}}\\Big) "
},
{
"math_id": 25,
"text": "Z=1 "
},
{
"math_id": 26,
"text": "Z^{-4} "
},
{
"math_id": 27,
"text": "\\sigma / p_o"
},
{
"math_id": 28,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=7102599 |
7102909 | Moving-cluster method | In astrometry, the moving-cluster method and the closely related convergent point method are means, primarily of historical interest, for determining the distance to star clusters. They were used on several nearby clusters in the first half of the 1900s to determine distance. The moving-cluster method is now largely superseded by other, usually more accurate distance measures.
Introduction.
The moving-cluster method relies on observing the proper motions and Doppler shift of each member of a group of stars known to form a cluster. The idea is that since all the stars share a common space velocity, they will appear to move towards a point of common convergence ("vanishing point") on the sky. This is essentially a perspective effect.
Using the moving-cluster method, the distance to a given star cluster (in parsecs) can be determined using the following equation:
formula_0
where "θ" is the angle between the star and the cluster's apparent convergence point, "μ" is the proper motion of the cluster (in arcsec/year), and "v" is the star's radial velocity (in AU/year).
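As a rough numerical sketch (Python; the input values below are made-up illustrative numbers, not measurements of any real cluster), the relation can be evaluated directly once the radial velocity is converted from km/s to AU/year, dividing by approximately 4.74.

```python
import math

def moving_cluster_distance(theta_deg, proper_motion_arcsec_yr, radial_velocity_km_s):
    """Distance in parsecs from the convergent-point relation d = tan(theta) * v / mu,
    with v converted from km/s to AU/yr (1 AU/yr is about 4.74 km/s)."""
    v_au_per_yr = radial_velocity_km_s / 4.74
    return math.tan(math.radians(theta_deg)) * v_au_per_yr / proper_motion_arcsec_yr

# Illustrative (not measured) values for a hypothetical nearby cluster member:
print(moving_cluster_distance(theta_deg=30.0,
                              proper_motion_arcsec_yr=0.11,
                              radial_velocity_km_s=39.0))   # about 43 parsecs
```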
Usage.
The method has only ever been used for a small number of clusters. This is because for the method to work, the cluster must be quite close to Earth (within a few hundred parsecs), and also be fairly tightly bound so it can be made out on the sky. Also, the method is quite difficult to work with compared with more straightforward methods like trigonometric parallax. Finally, the uncertainty in the final distance values is in general fairly large compared with that obtained from precision measurements like those from Hipparcos.
Of the clusters it has been used with, certainly the most famous are the Hyades and the Pleiades. The moving-cluster method was in fact the only way astronomers had to measure the distance to these clusters with any precision for some time in the early 20th century.
Because of the problems outlined above, this method has not been used practically for stars for several decades in astronomical research.
However, recently it has been used to estimate the distance between the brown dwarf 2M1207 and its observed exoplanet 2M1207b. In December 2005, American astronomer Eric Mamajek reported a distance (53 ± 6 parsecs) to 2M1207b using the moving-cluster method.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{distance} = \\mathrm{tan}(\\theta) \\frac {\\mathrm{v}} {\\mathrm{\\mu}}"
}
]
| https://en.wikipedia.org/wiki?curid=7102909 |
71029712 | Job 39 | Job 39 is the 39th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of God to Job, which belongs to the "Verdicts" section of the book, comprising –.
Text.
The original text is written in Hebrew language. This chapter is divided into 30 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 39 is grouped into the Verdict section with the following outline:
God's speeches in chapters 38–41 can be split in two parts, both starting with almost identical phrases and having a similar structure:
The revelation of the Lord to Job is the culmination of the book of Job, in that the Lord speaks directly to Job and displays his sovereign power and glory. Job has lived through the suffering without cursing God, holding on to his integrity, and nowhere regretting it, but he was unaware of the real reason for his suffering, so God intervenes to resolve the spiritual issues that surfaced. Job was not punished for sin, and Job's suffering had not cut him off from God; now, at the end, Job sees that he cannot have the knowledge to make the assessments he made, so it is wiser to bow in submission and adoration of God than to try to judge him.
Chapter 39 completes the survey of animals that began at Job 38:39 (feeding of the lions and the ravens) with the habits and instincts of the "wild goat", the "wild donkey", and the "wild ox" (verses 1–12); then a transition to the most remarkable of birds, the ostrich (verses 13–18), followed by the horse in a passage of extraordinary fire and brilliancy (verses 19–25), closed by the depiction of remarkable birds, the hawk and eagle (verses 26–30).
[YHWH said:] "Do you know when the mountain goats give birth?"
"Do you observe the calving of the does?"
Verse 1.
The ibex can only be observed from a distance in the En Gedi area of Israel as the animals resist domestication by humans, but manage to survive with the instinct that God has given.
[YHWH said:] "Will the wild ox be willing to serve you"
"or spend the night by your manger?"
Verse 9.
Depictions of aurochs in art exist from as early as the Paleolithic period (such as the cave paintings in Lascaux) and also appear in Egyptian, Ugaritic and Mesopotamian paintings, reliefs and literature (including in hunting scenes).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71029712 |
71029746 | Job 40 | Chapter of the Book of Job in the Bible
Job 40 is the 40th chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of God to Job, which belongs to the "Verdicts" section of the book, comprising –.
Context.
Job 40 appears towards the end of the Book of Job. Traditionally placed in the Ketuvim section of the Hebrew Bible between Psalms and Proverbs, in modern Jewish Bibles the book is placed after the two other poetic books. Job is also one of the poetic books in the Christian Old Testament, usually following the Book of Esther. The book is structured with a prologue and narrative introduction in the first two chapters; the majority of the book is then a poetic debate between Job and a number of other people, which runs until chapter 37.
The chapter is part of the response of God to Job which runs from chapters 38 to 41. The chapter is traditionally divided into three sections. The first two verses are joined with the preceding two chapters from verse 38:1 in God's first speech, Verses 3 to 5 of the chapter are considered a short intermission in God's monologue and cover Job's response to this first speech. The remainder of the chapter from verse 6 to the end of chapter 41 are considered to be God's second speech.
Text.
The original text is written in Hebrew language. This chapter is divided into 24 verses in English Bibles, but counted as 32 verses in the Hebrew Bible using a different verse numbering (see below).
Verse numbering.
There are some differences in verse numbering of this chapter in English Bibles and Hebrew texts:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 40 is grouped into the Verdict section with the following outline:
God's speeches in chapters 38–41 can be split in two parts, both starting with almost identical phrases and having a similar structure:
The revelation of the Lord to Job is the culmination of the book of Job, in that the Lord speaks directly to Job and displays his sovereign power and glory. Job has lived through the suffering without cursing God, holding on to his integrity, and nowhere regretting it, but he was unaware of the real reason for his suffering, so God intervenes to resolve the spiritual issues that surfaced. Job was not punished for sin, and Job's suffering had not cut him off from God; now, at the end, Job sees that he cannot have the knowledge to make the assessments he made, so it is wiser to bow in submission and adoration of God than to try to judge him.
Chapter 40 opens with a short dialogue between YHWH and Job (verses 1–5) interposed between the first and second speeches of YHWH. It is followed by God's second speech which focuses mainly on two figures: Behemoth (Job 40) and Leviathan (Job 41).
Dialogue between God and Job (40:1–5).
The inclusion of legal terms ("contend… argue… answer") from the litigation motif suggests that YHWH does not intend to present evidence for the defense, but rather to show Job why the process is flawed, because Job wishes to see God in court based on a very narrow view of retributive justice in the world. YHWH is not just a judge, but also the king who actively exercises his sovereign rule, governing the universe in all its complexity. YHWH's summation (verse 2) shows Job the futility of his pursuit and the implied way forward for Job to acknowledge it.
[YHWH said:] "Shall the one who contends with the Almighty correct Him?"
"He who rebukes God, let him answer it."
[Job said:] "Behold, I am vile;"
"What shall I answer You?"
"I lay my hand over my mouth."
Verse 4.
Job's acknowledgement that he is "small" ("vile"; rather than that he has sinned) marks the turning point from arguing against YHWH to accepting what YHWH has done in Job's life. This answer of Job is still tentative, so YHWH proceeds with a second round of questions and observations (Job 40:6–41:34) to finally prompt Job to give his ultimate response (Job 42:1–6).
God speaks of Behemoth (40:6–24).
God's second speech begins with a challenge to announce the theme (40:6–14), before proceeding with the description of Behemoth (40:15–24) and Leviathan (41:1–34). These two creatures are described as big in size and uncontrollable by humans, but YHWH totally control them all in his orderly world.
"Then the LORD answered Job out of the whirlwind, and said:"
[YHWH said:] "Look now at the behemoth"
"which I made along with you;"
"he eats grass like an ox."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71029746 |
710331 | Force-directed graph drawing | Physical simulation to visualize graphs
Force-directed graph drawing algorithms are a class of algorithms for drawing graphs in an aesthetically-pleasing way. Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions, and then using these forces either to simulate the motion of the edges and nodes or to minimize their energy.
While graph drawing can be a difficult problem, force-directed algorithms, being physical simulations, usually require no special knowledge about graph theory such as planarity.
Forces.
Force-directed graph drawing algorithms assign forces among the set of edges and the set of nodes of a graph drawing. Typically, spring-like attractive forces based on Hooke's law are used to attract pairs of endpoints of the graph's edges towards each other, while simultaneously repulsive forces like those of electrically charged particles based on Coulomb's law are used to separate all pairs of nodes. In equilibrium states for this system of forces, the edges tend to have uniform length (because of the spring forces), and nodes that are not connected by an edge tend to be drawn further apart (because of the electrical repulsion). Edge attraction and vertex repulsion forces may be defined using functions that are not based on the physical behavior of springs and particles; for instance, some force-directed systems use springs whose attractive force is logarithmic rather than linear.
An alternative model considers a spring-like force for every pair of nodes formula_0 where the ideal length formula_1 of each spring is proportional to the graph-theoretic distance between nodes "i" and "j", without using a separate repulsive force. Minimizing the difference (usually the squared difference) between Euclidean and ideal distances between nodes is then equivalent to a metric multidimensional scaling problem.
A force-directed graph can involve forces other than mechanical springs and electrical repulsion. A force analogous to gravity may be used to pull vertices towards a fixed point of the drawing space; this may be used to pull together different connected components of a disconnected graph, which would otherwise tend to fly apart from each other because of the repulsive forces, and to draw nodes with greater centrality to more central positions in the drawing; it may also affect the vertex spacing within a single component. Analogues of magnetic fields may be used for directed graphs. Repulsive forces may be placed on edges as well as on nodes in order to avoid overlap or near-overlap in the final drawing. In drawings with curved edges such as circular arcs or spline curves, forces may also be placed on the control points of these curves, for instance to improve their angular resolution.
Methods.
Once the forces on the nodes and edges of a graph have been defined, the behavior of the entire graph under these forces may then be simulated as if it were a physical system. In such a simulation, the forces are applied to the nodes, pulling them closer together or pushing them further apart. This is repeated iteratively until the system comes to a mechanical equilibrium state; i.e., the nodes' relative positions no longer change from one iteration to the next. The positions of the nodes in this equilibrium are used to generate a drawing of the graph.
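A minimal sketch of such a simulation is given below (Python). The force laws and constants are illustrative assumptions rather than those of any particular published algorithm: edges act as linear springs with a fixed rest length, every pair of nodes repels with an inverse-square force, and each node moves a small, capped distance along its net force in every iteration.

```python
import math
import random

def force_directed_layout(nodes, edges, iterations=1000, k_spring=1.0,
                          rest_length=1.0, k_repulse=1.0, step=0.05, max_move=0.05):
    """Simple force-directed layout: linear springs on edges, inverse-square
    repulsion between all node pairs, small capped displacement per iteration."""
    pos = {v: [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)] for v in nodes}
    for _ in range(iterations):
        force = {v: [0.0, 0.0] for v in nodes}
        # Coulomb-like repulsion between every pair of nodes.
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k_repulse / d ** 2
                force[u][0] += f * dx / d; force[u][1] += f * dy / d
                force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        # Hooke-like spring attraction along every edge.
        for u, v in edges:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k_spring * (d - rest_length)
            force[u][0] += f * dx / d; force[u][1] += f * dy / d
            force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        # Move each node a small, capped distance along its net force.
        for v in nodes:
            fx, fy = force[v]
            mag = math.hypot(fx, fy)
            if mag > 0.0:
                move = min(step * mag, max_move)
                pos[v][0] += move * fx / mag
                pos[v][1] += move * fy / mag
    return pos

# Example: lay out a 4-cycle and print the resulting coordinates.
print(force_directed_layout(["a", "b", "c", "d"],
                            [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```

The returned coordinates can then be used to draw the graph; capping the per-iteration displacement plays the role of the "temperature" used in some published force-directed methods to keep the simulation from overshooting.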
For forces defined from springs whose ideal length is proportional to the graph-theoretic distance, stress majorization gives a very well-behaved (i.e., monotonically convergent) and mathematically elegant way to minimize these differences and, hence, find a good layout for the graph.
It is also possible to employ mechanisms that search more directly for energy minima, either instead of or in conjunction with physical simulation. Such mechanisms, which are examples of general global optimization methods, include simulated annealing and genetic algorithms.
Advantages.
The following are among the most important advantages of force-directed algorithms:
Disadvantages.
The main disadvantages of force-directed algorithms include the following:
History.
Force-directed methods in graph drawing date back to the work of Tutte, who showed that polyhedral graphs may be drawn in the plane with all faces convex by fixing the vertices of the outer face of a planar embedding of the graph into convex position, placing a spring-like attractive force on each edge, and letting the system settle into an equilibrium. Because of the simple nature of the forces in this case, the system cannot get stuck in local minima, but rather converges to a unique global optimum configuration. Because of this work, embeddings of planar graphs with convex faces are sometimes called Tutte embeddings.
The combination of attractive forces on adjacent vertices, and repulsive forces on all vertices, was first used by ; additional pioneering work on this type of force-directed layout was done by . The idea of using only spring forces between all pairs of vertices, with ideal spring lengths equal to the vertices' graph-theoretic distance, is from .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(i,j)"
},
{
"math_id": 1,
"text": "\\delta_{ij}"
},
{
"math_id": 2,
"text": "O(n^3)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "O(n)"
},
{
"math_id": 5,
"text": "n\\log(n)"
},
{
"math_id": 6,
"text": "n^2"
}
]
| https://en.wikipedia.org/wiki?curid=710331 |
71038072 | Hyperfinite equivalence relation | In descriptive set theory and related areas of mathematics, a hyperfinite equivalence relation on a standard Borel space X is a Borel equivalence relation "E" with countable classes, that can, in a certain sense, be approximated by Borel equivalence relations that have finite classes.
Definitions.
Definition 1. Let "X" be a standard Borel space, that is, a measurable space which arises by equipping a Polish space "X" with its σ-algebra of Borel subsets (and forgetting the topology). Let "E" be an equivalence relation on "X". We will say that "E" is Borel if "E" is a Borel subset of the cartesian product of "X" with itself, when equipped with the product σ-algebra. We will say that "E" is finite (respectively, countable) if "E" has finite (respectively, countable) classes.
The above names might be misleading, since if "X" is an uncountable standard Borel space, the equivalence relation will be uncountable when considered as a set of ordered pairs from "X".
Definition 2. Let "E" be a countable Borel equivalence relation on a standard Borel space "X". We will say that "E" is hyperfinite if formula_0, where formula_1 is an increasing sequence of finite Borel equivalence relations on "X".
Intuitively, this means that there is a sequence of finite equivalence relations on "X", each contained in the next, approximating "E" arbitrarily well.
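The standard example from the literature (an assumption here, not stated in this article) is eventual equality of infinite binary sequences, often denoted E_0, which is hyperfinite via the finite relations "agreement from the n-th coordinate onward". The toy sketch below (Python) only imitates this on strings of a fixed finite length, so it illustrates the combinatorics of the increasing finite approximations rather than the Borel-theoretic content.

```python
from itertools import product

L = 6                                    # length-L binary strings as a finite stand-in
X = list(product((0, 1), repeat=L))

def related(x, y, n):
    """x F_n y  iff  x and y agree on every coordinate >= n."""
    return x[n:] == y[n:]

# F_0 is equality, F_n is contained in F_(n+1), and every F_n-class is finite:
# it has 2**n elements, determined by the free choice of the first n coordinates.
zero = (0,) * L
for n in range(L + 1):
    size = sum(1 for x in X if related(x, zero, n))
    print(f"F_{n}: class of the all-zero string has {size} elements")
```

In the actual definition the underlying space is the uncountable product space and each approximating relation must in addition be Borel.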
Discussion.
A major area of research in descriptive set theory is the classification of Borel equivalence relations, and in particular those which are countable. Among these, finite equivalence relations are considered to be the simplest (for instance, they admit Borel transversals). Therefore, it is natural to ask whether certain equivalence relations, which are not necessarily finite, can be approximated by finite equivalence relations. This turns out to be a notion which is both rich enough to encapsulate many natural equivalence relations appearing in mathematics, yet restrictive enough to allow deep theorems to develop.
It is also worthwhile to note that any countable equivalence relation "E" can be written down as an increasing union of finite equivalence relations. This can be done, for instance, by taking a partition of every class into classes of size two, then joining two classes in the new equivalence relation which are within the same "E"-class to form a partition with classes of size four, and so forth. The key observation is that this process requires the axiom of choice in general, and therefore it is not clear that this process generates Borel approximations. Indeed, there are countable Borel equivalence relations that are not hyperfinite, and so in particular the process described above will fail to generate Borel equivalence relations approximating the larger equivalence relation.
Open problems.
Weiss's conjecture.
The above examples seem to indicate that Borel actions of "tame" countable groups induce hyperfinite equivalence relations. Weiss conjectured that any Borel action of a countable amenable group on a standard Borel space induces a hyperfinite orbit equivalence relation. While this is still an open problem, some partial results are known.
The union problem.
Another open problem in the area is whether a countable increasing union of hyperfinite equivalence relations is hyperfinite. This is often referred to as the union problem.
Under certain conditions, it is known that a countable increasing union of hyperfinite equivalence relations is hyperfinite. For example, if the union of the equivalence relations has a property known as "Borel-boundedness" (which roughly means that any Borel assignment of functions formula_10 to points on the space can be "eventually bounded" by such a Borel assignment which is constant on equivalence classes), then it is hyperfinite. However, it is unknown whether every such union satisfies this property.
Measure-theoretic results.
Under the assumption that the underlying space "X" is equipped with a Borel probability measure "μ" and that one is willing to remove sets of measure zero, the theory is much better understood. For instance, if the equivalence relation is generated by a Borel action of a countable amenable group, the resulting orbit equivalence relation is ""μ"-hyperfinite", meaning that it is hyperfinite on a subset of the space of full measure (it is worthwhile to note that the action need not be measure-preserving, or even quasi-measure preserving). Since every countable Borel equivalence relation "E" on a standard non-atomic Borel probability space "(X,formula_11)" that admits a Borel transversal is a finite equivalence relation on a subset of full measure (this is essentially Feldman-Moore, together with Vitali's argument in his classical proof of the non-existence of a nontrivial invariant measure on the formula_12-algebra of all subsets of the real line), the above shows us that unlike equivalence relations which admit transversals, many examples of group actions which appear naturally in ergodic theory give rise to hyperfinite orbit equivalence relations (in particular, whenever the underlying space is a standard Borel space and the group is countable and amenable).
Similarly, a countable increasing union of hyperfinite equivalence relations on such a space is "μ"-hyperfinite as well.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "E=\\bigcup_{n \\in \\mathbb{N}}F_{n}"
},
{
"math_id": 1,
"text": "F_{n}"
},
{
"math_id": 2,
"text": "F_0\\subseteq F_1\\subseteq F_2\\subseteq ...\\subseteq G"
},
{
"math_id": 3,
"text": "E' \\subset E"
},
{
"math_id": 4,
"text": "a:G \\times X \\rightarrow X"
},
{
"math_id": 5,
"text": "f:X\\rightarrow Y"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "\\forall x,y\\in X, f(x)Ef(y)\\iff xEy"
},
{
"math_id": 8,
"text": "F_{2}"
},
{
"math_id": 9,
"text": "2^{F_{2}}"
},
{
"math_id": 10,
"text": "f:\\mathbb{N}\\rightarrow\\mathbb{N}"
},
{
"math_id": 11,
"text": "\\mu"
},
{
"math_id": 12,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=71038072 |
710483 | Scott domain | In the mathematical fields of order and domain theory, a Scott domain is an algebraic, bounded-complete and directed-complete partial order (dcpo). They are named in honour of Dana S. Scott, who was the first to study these structures at the advent of domain theory. Scott domains are very closely related to algebraic lattices, being different only in possibly lacking a greatest element. They are also closely related to Scott information systems, which constitute a "syntactic" representation of Scott domains.
While the term "Scott domain" is widely used with the above definition, the term "domain" does not have such a generally accepted meaning and different authors will use different definitions; Scott himself used "domain" for the structures now called "Scott domains". Additionally, Scott domains appear with other names like "algebraic semilattice" in some publications.
Originally, Dana Scott demanded a complete lattice, and the Russian mathematician Yuri Yershov constructed the isomorphic structure of a dcpo. But this was not recognized until scientific communication improved after the fall of the Iron Curtain. In honour of their work, a number of mathematical papers now dub this fundamental construction a "Scott–Ershov" domain.
Definition.
Formally, a non-empty partially ordered set formula_0 is called a "Scott domain" if the following hold:
Properties.
Since the empty set certainly has some upper bound, we can conclude the existence of a least element formula_1 (the supremum of the empty set) from bounded completeness.
The property of being bounded-complete is equivalent to the existence of infima of all "non-empty" subsets of D. It is well known that the existence of "all" infima implies the existence of all suprema and thus makes a partially ordered set into a complete lattice. Thus, when a top element (the infimum of the empty set) is adjoined to a Scott domain, one can conclude that:
Consequently, Scott domains are in a sense "almost" algebraic lattices. However, removing the top element from a complete lattice does not always produce a Scott domain. (Consider the complete lattice formula_2. The finite subsets of formula_3 form a directed set, but have no upper bound in formula_4.)
Scott domains become topological spaces by introducing the Scott topology.
Explanation.
Scott domains are intended to represent "partial algebraic data", ordered by information content. An element formula_5 is a piece of data that might not be fully defined. The statement formula_6 means "formula_7 contains all the information that formula_8 does". The bottom element is the element containing no information at all. Compact elements are the elements representing a finite amount of information.
With this interpretation we can see that the supremum formula_9 of a subset formula_10 is the element that contains all the information that "any" element of formula_11 contains, but "no more". Obviously such a supremum only exists (i.e., makes sense) provided formula_11 does not contain inconsistent information; hence the domain is directed and bounded complete, but not "all" suprema necessarily exist. The algebraicity axiom essentially ensures that all elements get all their information from (non-strictly) lower down in the ordering; in particular, the jump from compact or "finite" to non-compact or "infinite" elements does not covertly introduce any extra information that cannot be reached at some finite stage.
On the other hand, the infimum formula_12 is the element that contains all the information that is shared by "all" elements of formula_11, and "no less". If formula_11 contains no consistent information, then its elements have no information in common and so its infimum is formula_1. In this way all non-empty infima exist, but not all infima are necessarily interesting.
This definition in terms of partial data allows an algebra to be defined as the limit of a sequence of increasingly more defined partial algebras—in other words a fixed point of an operator that adds progressively more information to the algebra. For more information, see Domain theory.
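A small sketch of this idea is given below (Python; representing partial functions as finite dictionaries is an illustrative encoding, not part of the formal theory). An information-adding operator is iterated starting from the bottom element, producing a chain of increasingly defined partial functions whose limit, here the factorial function, is a fixed point of the operator.

```python
def F(partial):
    """One step of an information-adding operator: extend a partial factorial.
    Partial functions on the naturals are encoded as dicts; missing keys mean 'undefined'."""
    extended = {0: 1}
    for n, value in partial.items():
        extended[n + 1] = (n + 1) * value
    return extended

approx = {}          # the bottom element: the everywhere-undefined partial function
for _ in range(6):   # the chain  bottom <= F(bottom) <= F(F(bottom)) <= ...
    approx = F(approx)
    print(sorted(approx.items()))
# Each step is more defined than the last; the supremum of the chain is the
# total factorial function, which is a fixed point of F.
```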
Literature.
"See the literature given for domain theory." | [
{
"math_id": 0,
"text": "(D, \\leq)"
},
{
"math_id": 1,
"text": "\\bot"
},
{
"math_id": 2,
"text": "\\mathcal{P}(\\mathbb N)"
},
{
"math_id": 3,
"text": "\\mathbb N"
},
{
"math_id": 4,
"text": "\\mathcal{P}(\\mathbb N)\\setminus \\{\\mathbb N\\}"
},
{
"math_id": 5,
"text": "x \\in D"
},
{
"math_id": 6,
"text": "x \\leq y"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "\\bigvee X"
},
{
"math_id": 10,
"text": "X \\subseteq D"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "\\bigwedge X"
},
{
"math_id": 13,
"text": "w v' = v"
},
{
"math_id": 14,
"text": "101 \\leq 10110"
}
]
| https://en.wikipedia.org/wiki?curid=710483 |
71051733 | Job 4 | 4th chapter of the Book of Job in Bible
Job 4 is the fourth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Eliphaz the Temanite (one of Job's friends), which belongs to the Dialogue section of the book, comprising –.
Text.
The original text is written in Hebrew language. This chapter is divided into 21 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 4 is grouped into the Dialogue section with the following outline:
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. The first speech of Eliphaz in chapters 4 and 5 can be broken down into three main sections:
Eliphaz's summary outline of retribution (4:1–11).
This section can be divided into two parts: an introduction (verses 1–6) followed by an outline of retribution by Eliphaz (verses 7–11). Twice at the beginning of his speech Eliphaz starts off in a respectful way to Job (verse 2a; verses 3–4) before using "but" to say what he really wants to say: that Job should apply the advice he himself had given to others and use a godly manner to gain consolation. Eliphaz sets forth the arguments that will be explored in the debate, such as:
Eliphaz appeals to consensus (4:7), that he expects Job to 'concur in the common dogma of retribution', as well as appeals to individual experience (4:8, 'As I have seen'), to special revelation (4:12-21), to collective experience (5:27a, 'See, we have searched this out; it is true'), and to the obvious insights encapsulated in proverbial sayings (4:8, 'those who plough iniquity and sow trouble reap the same'; 5:2, 'Surely vexation kills the fool, and jealousy slays the simple'). Convinced that a principle of reward and punishment governed the universe, Eliphaz is oblivious to the pain resulting from this dogma (4:7–9, where a divine wind brings destruction like the tempest that killed Job's children).
The poem contains rich vocabulary, such as the use five different words for lion in 4:10–11 (cf. Joel 1:4 for similar richness), which metaphorically might allude to the death of Job's children.
"Then Eliphaz the Temanite answered and said,"
[Eliphaz said:] 10"The roaring of the lion, and the voice of the fierce lion,"
"and the teeth of the young lions are broken."
11"The old lion perishes for lack of prey,"
"and the cubs of the lioness are scattered"
Verse 10–11.
D. J. A. Clines thinks that it is 'probably impossible to distinguish' the meaning of these words.
The Greek Septuagint renders verse 10 as “the strength of the lion, and the voice of the lioness and the exulting cry of serpents are quenched.”
Eliphaz's vision (4:12–21).
In this section Eliphaz shares the divine visitation he received while in a deep sleep ("tardēmâ"; cf. Abraham), when he felt a wind ("rûah") glide past his face, but could not make out the exact appearance of the deity and could only 'grasp the brief word that follows an eerie silence': 'Can a mortal be more righteous than God ("Eloah")?' (verses 12–17). Eliphaz then draws the implications of this in 'a series of reflection on human condition', implicitly on 'Job and his situation' (verses 18–21).
[Eliphaz heard a voice saying:] "Shall mortal man be more just than God?"
"shall a man be more pure than his maker?"
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71051733 |
71062619 | Kaniadakis exponential distribution | The Kaniadakis exponential distribution (or "κ"-exponential distribution) is a probability distribution arising from the maximization of the Kaniadakis entropy under appropriate constraints. It is one example of a Kaniadakis distribution. The "κ"-exponential is a generalization of the exponential distribution in the same way that Kaniadakis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy. The "κ"-exponential distribution of Type I is a particular case of the "κ"-Gamma distribution, whilst the "κ"-exponential distribution of Type II is a particular case of the "κ"-Weibull distribution.
Type I.
Probability density function.
The Kaniadakis "κ"-exponential distribution of Type I is part of a class of statistical distributions emerging from the Kaniadakis κ-statistics which exhibit power-law tails. This distribution has the following probability density function:
formula_0
valid for formula_1, where formula_2 is the entropic index associated with the Kaniadakis entropy and formula_3 is known as the rate parameter. The exponential distribution is recovered as formula_4
Cumulative distribution function.
The cumulative distribution function of "κ"-exponential distribution of Type I is given by
formula_5
for formula_1. The cumulative exponential distribution is recovered in the classical limit formula_6.
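The two expressions can be checked numerically. The sketch below (Python) assumes the usual Kaniadakis exponential function, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which is not restated in this article, and verifies that the numerical derivative of the cumulative distribution reproduces the probability density for an illustrative choice of parameters.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k x)**(1/k)."""
    return (np.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

def pdf_type1(x, kappa, beta):
    return (1.0 - kappa ** 2) * beta * exp_kappa(-beta * x, kappa)

def cdf_type1(x, kappa, beta):
    return 1.0 - (np.sqrt(1.0 + (kappa * beta * x) ** 2)
                  + kappa ** 2 * beta * x) * exp_kappa(-beta * x, kappa)

# The numerical derivative of the CDF should reproduce the PDF, up to
# finite-difference error; as kappa -> 0 both reduce to the exponential case.
kappa, beta = 0.3, 2.0
x = np.linspace(0.01, 5.0, 2001)
numeric_pdf = np.gradient(cdf_type1(x, kappa, beta), x)
error = np.max(np.abs(numeric_pdf - pdf_type1(x, kappa, beta))[5:-5])
print(error)  # small, limited only by the discretization
```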
Properties.
Moments, expectation value and variance.
The "κ"-exponential distribution of type I has moment of order formula_7 given by
formula_8
where the moment of formula_9 is finite if formula_10.
The expectation is defined as:
formula_11
and the variance is:
formula_12
Kurtosis.
The kurtosis of the "κ"-exponential distribution of type I may be computed through:
formula_13
Thus, the kurtosis of the "κ"-exponential distribution of type I is given by: formula_14 or formula_15. The kurtosis of the ordinary exponential distribution is recovered in the limit formula_6.
Skewness.
The skewness of the "κ"-exponential distribution of type I may be computed through:
formula_16
Thus, the skewness of the "κ"-exponential distribution of type I is given by: formula_17. The skewness of the ordinary exponential distribution is recovered in the limit formula_6.
Type II.
Probability density function.
The Kaniadakis "κ"-exponential distribution of Type II is also part of a class of statistical distributions emerging from the Kaniadakis κ-statistics which exhibit power-law tails, but with different constraints. This distribution is the particular case of the Kaniadakis "κ"-Weibull distribution with formula_18, and its probability density function is:
formula_19
valid for formula_1, where formula_2 is the entropic index associated with the Kaniadakis entropy and formula_3 is known as the rate parameter.
The exponential distribution is recovered as formula_4
Cumulative distribution function.
The cumulative distribution function of "κ"-exponential distribution of Type II is given by
formula_20
for formula_1. The cumulative exponential distribution is recovered in the classical limit formula_6.
Properties.
Moments, expectation value and variance.
The "κ"-exponential distribution of type II has moment of order formula_21 given by
formula_22
The expectation value and the variance are:
formula_23
formula_24
The mode is given by:
formula_25
Kurtosis.
The kurtosis of the "κ"-exponential distribution of type II may be computed through:
formula_26
Thus, the kurtosis of the "κ"-exponential distribution of type II is given by:
formula_27
or
formula_28
Skewness.
The skewness of the "κ"-exponential distribution of type II may be computed through:
formula_29
Thus, the skewness of the "κ"-exponential distribution of type II is given by: formula_30 or formula_31. The skewness of the ordinary exponential distribution is recovered in the limit formula_6.
Quantiles.
The quantiles are given by the following expression formula_32 with formula_33, in which the median is the case: formula_34
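These expressions can be evaluated with the Kaniadakis logarithm, the inverse of the κ-exponential; the sketch below (Python) assumes its usual form, ln_κ(x) = (x^κ − x^(−κ))/(2κ), which is not restated in this article, and checks that the quantile formula inverts the Type II cumulative distribution for an illustrative choice of parameters.

```python
import numpy as np

def exp_kappa(x, kappa):
    return (np.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

def ln_kappa(x, kappa):
    """Kaniadakis kappa-logarithm, the inverse of exp_kappa."""
    return (x ** kappa - x ** (-kappa)) / (2.0 * kappa)

def cdf_type2(x, kappa, beta):
    return 1.0 - exp_kappa(-beta * x, kappa)

def quantile_type2(F, kappa, beta):
    return ln_kappa(1.0 / (1.0 - F), kappa) / beta

kappa, beta = 0.25, 1.5          # illustrative parameter values
median = ln_kappa(2.0, kappa) / beta
print(cdf_type2(median, kappa, beta))                              # 0.5
print(cdf_type2(quantile_type2(0.9, kappa, beta), kappa, beta))    # 0.9
```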
Lorenz curve.
The Lorenz curve associated with the "κ"-exponential distribution of type II is given by:
formula_35
The Gini coefficient is formula_36
Asymptotic behavior.
The "κ"-exponential distribution of type II behaves asymptotically as follows:
formula_37
formula_38
Applications.
The "κ"-exponential distribution has been applied in several areas, such as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \nf_{_{\\kappa}}(x) = (1 - \\kappa^2) \\beta \\exp_\\kappa(-\\beta x)\n"
},
{
"math_id": 1,
"text": "x \\ge 0"
},
{
"math_id": 2,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 3,
"text": "\\beta > 0"
},
{
"math_id": 4,
"text": "\\kappa \\rightarrow 0."
},
{
"math_id": 5,
"text": "F_\\kappa(x) = 1-\\Big(\\sqrt{1+\\kappa^2\\beta^2 x^2} + \\kappa^2 \\beta x \\Big)\\exp_k({-\\beta x)} "
},
{
"math_id": 6,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 7,
"text": "m \\in \\mathbb{N}"
},
{
"math_id": 8,
"text": "\\operatorname{E}[X^m] = \\frac{1 - \\kappa^2}{\\prod_{n=0}^{m+1} [1-(2n-m-1) \\kappa ]} \\frac{m!}{\\beta^m}"
},
{
"math_id": 9,
"text": "f_\\kappa(x)"
},
{
"math_id": 10,
"text": "0 < m + 1 < 1/\\kappa"
},
{
"math_id": 11,
"text": "\\operatorname{E}[X] = \\frac{1}{\\beta} \\frac{1 - \\kappa^2}{1 - 4\\kappa^2} "
},
{
"math_id": 12,
"text": "\\operatorname{Var}[X] = \\sigma_\\kappa^2 = \\frac{1}{\\beta^2} \\frac{2(1-4\\kappa^2)^2 - (1 - \\kappa^2)^2(1-9\\kappa^2)}{(1-4\\kappa^2)^2(1-9\\kappa^2)} "
},
{
"math_id": 13,
"text": "\\operatorname{Kurt}[X] = \\operatorname{E}\\left[\\frac{\\left[ X - \\frac{1}{\\beta} \\frac{1 - \\kappa^2}{1 - 4\\kappa^2}\\right]^4}{\\sigma_\\kappa^4}\\right] "
},
{
"math_id": 14,
"text": "\\operatorname{Kurt}[X] = \\frac{ 9(1-\\kappa^2)(1200\\kappa^{14} - 6123\\kappa^{12} + 562\\kappa^{10} +1539 \\kappa^8 - 544 \\kappa^6 + 143 \\kappa^4 -18\\kappa^2 + 1 )}{ \\beta^4 \\sigma_\\kappa^4 (1 - 4\\kappa^2)^4 (3600\\kappa^8 -4369\\kappa^6 + 819\\kappa^4 - 51\\kappa^2 + 1) } \\quad \\text{for} \\quad 0 \\leq \\kappa < 1/5 "
},
{
"math_id": 15,
"text": "\\operatorname{Kurt}[X] = \\frac{ 9(9\\kappa^2-1)^2(\\kappa^2-1)(1200\\kappa^{14} - 6123\\kappa^{12} + 562\\kappa^{10} +1539 \\kappa^8 - 544 \\kappa^6 + 143 \\kappa^4 -18\\kappa^2 + 1 )}{ \\beta^2 (1 - 4\\kappa^2)^2(9\\kappa^6 + 13\\kappa^4 - 5\\kappa^2 +1)(3600\\kappa^8 -4369\\kappa^6 + 819\\kappa^4 - 51\\kappa^2 + 1) } \\quad \\text{for} \\quad 0 \\leq \\kappa < 1/5 "
},
{
"math_id": 16,
"text": "\\operatorname{Skew}[X] = \\operatorname{E}\\left[\\frac{\\left[ X - \\frac{1}{\\beta} \\frac{1 - \\kappa^2}{1 - 4\\kappa^2}\\right]^3}{\\sigma_\\kappa^3}\\right] "
},
{
"math_id": 17,
"text": "\\operatorname{Shew}[X] = \\frac{ 2 (1-\\kappa^2) (144 \\kappa^8+23 \\kappa^6+27 \\kappa^4-6 \\kappa^2+1) }{ \\beta^3 \\sigma^3_\\kappa (4 \\kappa^2-1)^3 (144 \\kappa^4-25 \\kappa^2+1) } \\quad \\text{for} \\quad 0 \\leq \\kappa < 1/4 "
},
{
"math_id": 18,
"text": "\\alpha = 1"
},
{
"math_id": 19,
"text": " \nf_{_{\\kappa}}(x) = \n\\frac{\\beta}{\\sqrt{1+\\kappa^2 \\beta^2 x^2}}\n\\exp_\\kappa(-\\beta x)\n"
},
{
"math_id": 20,
"text": "F_\\kappa(x) = \n1-\\exp_k({-\\beta x)}"
},
{
"math_id": 21,
"text": "m < 1/\\kappa"
},
{
"math_id": 22,
"text": "\\operatorname{E}[X^m] = \\frac{\\beta^{-m} m!}{\\prod_{n=0}^{m} [1-(2n- m) \\kappa ]}"
},
{
"math_id": 23,
"text": "\\operatorname{E}[X] = \\frac{1}{\\beta} \\frac{1}{1 - \\kappa^2} "
},
{
"math_id": 24,
"text": "\\operatorname{Var}[X] = \\sigma_\\kappa^2 = \\frac{1}{\\beta^2} \\frac{1+2 \\kappa^4}{(1-4\\kappa^2)(1-\\kappa^2)^2} "
},
{
"math_id": 25,
"text": "x_{\\textrm{mode}} = \\frac{1}{\\kappa \\beta\\sqrt{2(1-\\kappa^2)}} "
},
{
"math_id": 26,
"text": "\\operatorname{Kurt}[X] = \\operatorname{E}\\left[\\left(\\frac{X - \\frac{1}{\\beta} \\frac{1}{1 - \\kappa^2} }{\\sigma_\\kappa} \\right)^4 \\right] "
},
{
"math_id": 27,
"text": "\\operatorname{Kurt}[X] = \\frac{3 (72 \\kappa^{10} - 360 \\kappa^8 - 44 \\kappa^6-32 \\kappa^4+7 \\kappa^2-3) }{ \\beta^4 \\sigma_\\kappa^4 (\\kappa^2 - 1)^4 (576 \\kappa^6 - 244 \\kappa^4 + 29 \\kappa^2 - 1) } \\quad \\text{ for } \\quad 0 \\leq \\kappa < 1/4 "
},
{
"math_id": 28,
"text": "\\operatorname{Kurt}[X] = \\frac{3 (72 \\kappa^{10} - 360 \\kappa^8 - 44 \\kappa^6-32 \\kappa^4+7 \\kappa^2-3) }{ (4\\kappa^2-1)^{-1} (2 \\kappa^4+1)^2 (144 \\kappa^4-25 \\kappa^2+1) } \\quad \\text{ for } \\quad 0 \\leq \\kappa < 1/4 "
},
{
"math_id": 29,
"text": "\\operatorname{Skew}[X] = \\operatorname{E}\\left[\\frac{\\left[ X - \\frac{1}{\\beta} \\frac{1}{1 - \\kappa^2}\\right]^3}{\\sigma_\\kappa^3}\\right] "
},
{
"math_id": 30,
"text": "\\operatorname{Skew}[X] = -\\frac{ 2 (15 \\kappa^6+6 \\kappa^4+2 \\kappa^2+1) }{ \\beta^3 \\sigma_\\kappa^3 (\\kappa^2 - 1)^3 (36 \\kappa^4 - 13 \\kappa^2 + 1) } \\quad \\text{for} \\quad 0 \\leq \\kappa < 1/3 "
},
{
"math_id": 31,
"text": "\\operatorname{Skew}[X] = \\frac{ 2 (15 \\kappa^6+6 \\kappa^4+2 \\kappa^2+1) }{ (1 - 9\\kappa^2)(2 \\kappa^4 + 1) } \\sqrt{ \\frac{1 - 4\\kappa^2 }{ 1 + 2\\kappa^4 } } \\quad \\text{for} \\quad 0 \\leq \\kappa < 1/3 "
},
{
"math_id": 32,
"text": "x_{\\textrm{quantile}} (F_\\kappa) = \\beta^{-1} \\ln_\\kappa \\Bigg(\\frac{1}{1 - F_\\kappa} \\Bigg) "
},
{
"math_id": 33,
"text": "0 \\leq F_\\kappa \\leq 1"
},
{
"math_id": 34,
"text": "x_{\\textrm{median}} (F_\\kappa) = \\beta^{-1} \\ln_\\kappa (2) "
},
{
"math_id": 35,
"text": "\\mathcal{L}_\\kappa(F_\\kappa) = 1 + \\frac{1 - \\kappa}{2 \\kappa}(1 - F_\\kappa)^{1 + \\kappa} - \\frac{1 + \\kappa}{2 \\kappa}(1 - F_\\kappa)^{1 - \\kappa}"
},
{
"math_id": 36,
"text": "\\operatorname{G}_\\kappa = \\frac{2 + \\kappa^2}{4 - \\kappa^2}"
},
{
"math_id": 37,
"text": "\\lim_{x \\to +\\infty} f_\\kappa (x) \\sim \\kappa^{-1} (2 \\kappa \\beta)^{-1/\\kappa} x^{(-1 - \\kappa)/\\kappa}"
},
{
"math_id": 38,
"text": "\\lim_{x \\to 0^+} f_\\kappa (x) = \\beta"
}
]
| https://en.wikipedia.org/wiki?curid=71062619 |
71074489 | Heteroclinic channels | Robotic control method
Heteroclinic channels are ensembles of trajectories that can connect saddle equilibrium points in phase space. Dynamical systems and their associated phase spaces can be used to describe natural phenomena in mathematical terms; heteroclinic channels, and the cycles (or orbits) that they produce, are features in phase space that can be designed to occupy specific locations in that space. Heteroclinic channels move trajectories from one equilibrium point to another. More formally, a heteroclinic channel is a region in phase space in which nearby trajectories are drawn closer and closer to one unique limiting trajectory, the heteroclinic orbit. Equilibria connected by heteroclinic trajectories form heteroclinic cycles and cycles can be connected to form heteroclinic networks. Heteroclinic cycles and networks naturally appear in a number of applications, such as fluid dynamics, population dynamics, and neural dynamics. In addition, dynamical systems are often used as methods for robotic control. In particular, for robotic control, the equilibrium points can correspond to robotic states, and the heteroclinic channels can provide smooth methods for switching from state to state.
Overview.
Heteroclinic channels (or heteroclinic orbits) are building blocks for a subset of dynamical systems that are built around connected saddle equilibrium points. Homoclinic channels/orbits join a single equilibrium point to itself, whereas heteroclinic channels join two different saddle equilibrium points in phase space. The connection is formed from the unstable manifold of the first saddle (“pushing away” from that point) to the stable manifold of the next saddle point (“pulling towards” this point). Combining at least three saddle equilibria in this way produces a heteroclinic cycle, and multiple heteroclinic cycles can be connected into heteroclinic networks.
Heteroclinic channels have both spatial and temporal features in phase space. Spatial because they affect trajectories within a certain region around themselves, and temporal because the parameters of a heteroclinic channel affect how much time a trajectory spends along that channel (or more specifically, how much time it spends around one of the saddle points). The transient nature of heteroclinic channels is important for describing their “switching” nature. That is, some neighborhood around each equilibrium point can be defined as a separate state, and the heteroclinic channel itself presents a method of switching sequentially between these states.
Heteroclinic "switching" is an important descriptor for natural phenomena, especially in neural dynamics. It has also been used as an approach for designing robotic control methods which cycle between states, whether those states are pre-defined behaviors or transient states that lead to larger behaviors.
History.
The mathematical image described above – a series of states with a functional mechanism for switching between them – also describes a phenomenon known as winnerless competition (WLC). Winnerless competition describes the switching phenomenon between two competitive states and was identified by Busse & Heikes in 1980 when they were investigating the change of phases in a convection cycle. However, the transient dynamics of WLC are widely agreed to first have been presented by Alfred J. Lotka, who first developed the concept to describe autocatalytic chemical reactions in 1910 and then developed an extended version in 1925 to describe ecological predator-prey relationships. In 1926, Vito Volterra independently published the same set of equations with a focus on mathematical biology, especially multi-species interactions. These equations, now known as the Lotka-Volterra equations, are widely used as a mathematical model to describe transient heteroclinic switching dynamics.
Heteroclinic cycles, which describe the transition between at least three states, were first described by May & Leonard in 1975, who identified a special case of the Lotka-Volterra equations for population dynamics. The re-emergence of heteroclinic cycles and the increased ability to do numerical computations, as compared to the period of Lotka and Volterra, prompted a resurgence of interest in heteroclinic channels, cycles, and networks as mathematical models for transient sequential dynamics.
Heteroclinic channels have become models for neural dynamics. An example is Laurent "et al." (2001), who described the neural responses of fish and insects to olfactory stimuli as a WLC system, where each stimulus and its response could be identified as a separate state within the space. The responses could be modeled in this way because of their spatial and temporal properties, which aligned with the spatiotemporal nature of WLC. Rabinovich "et al." (2001) & Afraimovich "et al." (2004) used WLC networks (via the Fitzhugh-Nagumo & Lotka-Volterra models, respectively) to connect the mathematical concept of stable heteroclinic channels (SHCs) to transient neural dynamics more generally, particularly other sensory processes and more abstract neural connections. Rabinovich "et al." (2008) expanded this idea to larger cognitive dynamic systems, and large-scale brain networks. Stable heteroclinic channels have also been used to model neuromechanical systems. The feeding structures and associated feeding processes (stages of swallowing) of marine mollusks have been analyzed using heteroclinic channels.
Biological models have always been a source of inspiration for roboticists, especially those interested in robotic control. Since robotic control requires defining and sequencing the physical actions of the robot, models of neural dynamics can be very useful. An example of this can be found in central pattern generators, which are widely used for rhythmic robotic motion. Heteroclinic channels have been used to replicate central pattern generators for robot control. Similarly, dynamic movement primitives, another common robotic motion control system, have been adapted and made more flexible by using heteroclinic channels. In more practical applications, stable heteroclinic channels have been directly used in the control of several biologically-inspired robots.
Concepts.
Dynamical systems.
A dynamical system is a rule or set of rules that describe the evolution of a state (or a system of states) in time. The set of all possible states is called the state space. The phase space is the state space of a continuous system. Dynamical systems describe the state over time with mathematical equations, often ordinary differential equations. The current state at a particular time can be plotted as a point in phase space. The set of points over time can be plotted as a trajectory.
Stability.
A heteroclinic channel itself can be asymptotically stable. That is, any point in the vicinity of the channel is attracted to the heteroclinic cycle at the core of the channel. Both heteroclinic channels and cycles can be robust (or structurally stable) if, within a given parameter range, they maintain a given behavior; however, this is not required.
Stochasticity.
Noise is one input that can move a heteroclinic system from one equilibrium to the next: noise (or some other stochasticity) disturbs the system enough to move it into the vicinity of the next saddle equilibrium point in the sequence. The amount of noise required is inversely proportional to the “attractiveness” of the saddle points; the more attractive the stable part of the saddle is to the system state, the longer the trajectory will linger in its vicinity, and the more noise will be required to move the system’s state off of that attractive equilibrium point. There are also other ways of moving between the equilibrium points, including parametric changes or the use of sensory feedback.
Control theory.
Control theory, in robotics, deals with the use of dynamical systems to control robotic systems. The goal of robotic control is to perform precise, coordinated actions using physical actuators in response to sensor input. Dynamical systems can be used to drive the robot to a desired state (or set of states) using sensor input to minimize actuator errors.
Mathematical definition.
An equilibrium point in a dynamical system is a solution to the system of differential equations describing a trajectory that does not change with time. Equilibrium points can be described by their stability, which is often determined by the eigenvalues of the system’s Jacobian matrix. In general, the eigenvalues of a saddle point have non-zero real parts, at least one of the real parts is positive, and at least one of the real parts is negative. Any eigenvalue with a negative real part indicates a stable manifold of the saddle which attracts trajectories, whereas any eigenvalue with a positive real part indicates the unstable manifold of the saddle which repels trajectories.
Phase space definition (from Heteroclinic orbit).
Let formula_0 be the ordinary differential equation describing a continuous dynamical system. If there are equilibria at formula_1 and formula_2, then a solution formula_3 is a heteroclinic connection from formula_4 to formula_5 if
formula_6 as formula_7
and
formula_8 as formula_9
This implies that the connection is contained in the stable manifold of formula_5 and the unstable manifold of formula_4.
Neural dynamics examples.
Neural dynamics are the non-linear dynamics that describe neural processes, from single neurons to cognitive processes and large-scale neural systems.
Lotka-Volterra model.
This model was first presented independently by Alfred J. Lotka for autocatalytic chemical reactions and then again for biological species in competition by Vito Volterra from a mathematical biology perspective. Originally, this model was only considered for two species: the two chemical species in the reaction, or a predator-prey situation in a shared environment.
The original equations were based on the logistic population equation, which is popularly used in ecology.
formula_10
where formula_11 is the size or concentration of a species at a given time, formula_12 is the growth rate and formula_13 is the carrying capacity of that species.
Lotka incorporated a term for the interaction between species and, with some generalization, the series of equations can be written as follows:
formula_14
In this definition, formula_15 is the size or concentration of the formula_16-th species and formula_17 is the total number of species. The interaction between each species is described by the matrix formula_18.
May-Leonard expansion.
May and Leonard expanded the Lotka-Volterra equations by investigating the system in which three species interact with each other (i.e., formula_19). They found that for a system in which each equilibrium point is a saddle with an formula_20-dimensional stable manifold, and the unstable manifold connects the points sequentially, the equation above can be re-written as follows:
formula_21
Explicitly for formula_22, this becomes
<templatestyles src="Block indent/styles.css"/>formula_23
where the coupling matrix, formula_24, is given by
formula_25
In this model, the stability of the saddle equilibria can be easily determined. The stability requirements for the formation of a stable heteroclinic cycle are formula_26 with either formula_27 or formula_28.
It was noted in this work that the system never asymptotically reaches any of the equilibrium points, but the amount of time the trajectory spends at each equilibrium point increases with time. In ecological terms, this suggests that a single population would eventually “beat out” the other two. May & Leonard noted that this is not a realistic outcome in biology.
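As an illustration of this switching behavior, the following Python sketch integrates the formula_22 May-Leonard system with a simple forward-Euler scheme. The parameter values, step size, run length, initial condition and small additive noise are illustrative choices only; the noise keeps the state slightly away from the invariant axes, so the trajectory keeps cycling among the three saddles with long residence times near each one instead of settling on a single population.

import random

# Forward-Euler sketch of the three-species May-Leonard system.
# alpha and beta are illustrative values satisfying alpha > 1, beta < 1
# and alpha + beta > 2; dt, the run length, the noise level and the
# initial condition are likewise arbitrary choices.
alpha, beta = 1.3, 0.8
dt, steps = 0.01, 60000
noise = 1e-6                     # keeps the state off the invariant axes
x = [0.8, 0.1, 0.1]

random.seed(0)
for n in range(steps):
    x1, x2, x3 = x
    dx = [x1 * (1 - x1 - alpha * x2 - beta * x3),
          x2 * (1 - beta * x1 - x2 - alpha * x3),
          x3 * (1 - alpha * x1 - beta * x2 - x3)]
    x = [max(xi + dt * dxi + noise * random.random(), 0.0)
         for xi, dxi in zip(x, dx)]
    if n % 5000 == 0:
        print(f"t = {n * dt:6.1f}   x = ({x[0]:.4f}, {x[1]:.4f}, {x[2]:.4f})")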
Winnerless Competition framework.
The “Winnerless Competition” framework (suggested by Laurent et al.) allowed a single neuron and/or a collection of synchronized neurons to encode stimuli by switching between "on" and "off". Laurent et al. investigated olfaction in fish and insects, particularly olfactory reception, and some of the postsynaptic structures in the odor sensory system. They found that the processing (or encoding) of perceived odors occurred over at least three timescales: fast, intermediate, and slow. They posited that an odor encoding system should be reproducible, which requires it to be insensitive to (or rapidly forget) any initial state. This is only possible if the dynamical system is strongly dissipative, that is, it settles on a state quickly and is insensitive to internal noise. Conversely, a useful odor encoding system should be sensitive to small variations in input, which requires the system to be active. An active system uses external sources to allow small variations in initial states to grow with time. The winnerless competition framework allowed a single neuron (or node) to encode a stimulus (the “fast” timescale), or many stimuli could be encoded via stimulus-specific trajectories (the “slow” timescale).
The winnerless competition system was described by
formula_29
where formula_15 and formula_30 characterize the activities of stimulus-specific groups formula_16 and formula_31, respectively, formula_17 is the number of neurons being simulated, formula_32 characterizes the strength of inhibition by formula_16 and formula_31 (i.e., their interactions with each other), and formula_33 is the current input by a stimulus formula_34 to formula_16.
Winnerless competition required that the inhibitory connections in the formula_24 matrix were asymmetrical and cyclic. For example, for formula_22, if formula_35 then formula_36, and formula_37.
Overall, this description produces a heteroclinic channel composed of several heteroclinic orbits (trajectories).
Transient neural dynamics.
Sensory encoding via heteroclinic orbits (which are facilitated by heteroclinic channels) as described by Laurent et al. was extrapolated beyond the olfactory system. Rabinovich et al. explored winnerless competition as a spatiotemporal dynamical system corresponding to the activity of specific neurons or groups of neurons. They identified the added stimulus as the factor that would drive a trajectory from one node along the channel to the next. Without it, the system would reduce to a steady state in which one neuron (or neuronal group) was active whereas the others were quiescent.
Afraimovich et al. also developed winnerless competition using connected saddle points in phase space as a model for transient, sequential neural activity. They outlined how the saddle points should be defined, the conditions for heteroclinic connections between them and the conditions for heteroclinic sequence stability. They performed numerical simulations of the dynamics of a network with "N" = 50 neurons and used Gaussian noise as the external input. They found that the movement of a trajectory along each connection was initiated by the noise, and the speed of switching from one saddle to the next depended on the noise level.
Cognitive dynamics.
The sequential switching property of stable heteroclinic channels has been expanded to describe higher-level transient cognitive dynamics, particularly sequential decision making. Rabinovich et al. first introduced this idea by applying the sequential switching that characterizes stable heteroclinic channels to the sequential decision making process seen in a fixed time game. The player takes sequential actions in a changing environment to maximize some reward. For a fixed time game, in order to maximize the reward, the player must encounter as many decision states as possible. This means that within a fixed amount of time, the trajectory must pass in the vicinity of as many saddle points, or nodes, as possible. When the trajectory reached the vicinity of a saddle point, a decision-making function was applied.
The reward was maximized by choosing appropriate system parameters. One of these was a decision-making rule that corresponded to the fastest motion away from the saddle, which was the shortest time to reach the next saddle. Additionally, there was an optimal level of additive noise; the noise was high enough that the trajectory could move away from each saddle quickly, but not so high that the trajectory would be directed off the cycle entirely.
A major point of this work was that, without significant external stimulus, the player was likely to find one of two extremes: ending decision-making quickly or reaching a cycle that runs through the entire allotted time. Behaviorally, this cycle translates to habit formation (on a cognitive level) and is sensitive to external stimuli that can change the trajectory’s direction at any time.
Rabinovich & Varona described sequential memory in a similar way. They also introduced “chunking”, which describes how the brain groups sequential information items into chunks at different hierarchical levels. They used stable heteroclinic channels as a framework for building these chunks into high level heteroclinic networks.
Neuromechanical models.
Heteroclinic channels have also been used as a model for neuromechanical systems in animals, particularly the feeding structures in marine mollusks. Shaw et al. (2015) investigated potential models for the feeding behavior of "Aplysia californica". They found that heteroclinic channels could more accurately match features of actual experimental data than other models such as limit cycles. Lyttle et al. (2017) showed that both the heteroclinic model and the limit cycle model of the "Aplysia californica"’s feeding system grant different advantages and disadvantages, such as robustness to perturbations and flexibility to inputs. They also showed that a reasonable model of the animal’s behavior could be made by switching between these modes, heteroclinic and limit cycle, using external sensory input, providing a dynamical basis for understanding both robustness and flexibility in motor systems.
Robotic control examples.
Mathematical expansions of the framework are required for robotic control applications.
For higher dimensional systems, the connection/inhibition matrix formula_24 can be generalized as:
formula_38
or formulations similar to this.
Appropriate saddle values must be assigned to make the system dissipative. The strength of the saddle can be characterized by its two largest eigenvalues: the single unstable eigenvalue, formula_39, and the weakest stable eigenvalue, formula_40. The saddle value of the formula_16"-"th node can be defined as
formula_41
If formula_42, the formula_16-th node is dissipative and stable, and if formula_43 the entire cycle will be stable.
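A minimal numerical sketch of this stability check in Python is shown below; the eigenvalue pairs are hypothetical placeholders rather than values taken from any particular model.

from math import prod

# Check the dissipativity of each node and the stability of the whole cycle
# from the saddle values v_i = Re(lambda_i^s) / lambda_i^u.  Each pair below
# is (magnitude of the weakest stable eigenvalue, unstable eigenvalue); the
# numbers are hypothetical placeholders.
saddles = [(1.4, 1.0), (1.1, 0.9), (1.3, 1.2)]

v = [lam_s / lam_u for lam_s, lam_u in saddles]
print("saddle values:", [round(vi, 3) for vi in v])
print("every node dissipative:", all(vi > 1 for vi in v))
print("cycle stable (product > 1):", prod(v) > 1)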
SHCs for biologically inspired robotic control.
SHCs have been used directly to control robots, particularly biologically-inspired robotic systems. SHCs have also been used to adapt existing robot control frameworks. In both instances, the special properties of SHCs were used to improve the associated control tasks. Some examples include integrated contact sensing to modulate the additive SHC noise, a combined Gaussian Mixture Model to inform SHC "switching", a central pattern generator which was adapted to be temporally sensitive, and a modified control framework which has an intuitive visualization property.
Contact sensing for additive noise modulation.
SHCs make it possible to use sensory feedback for rapid choices in a high degree-of-freedom robotic system. For example, Daltorio et al. used SHCs as a controller for the simulated locomotion of a worm-like robot in a pipe. The robot's structure consisted of 12 actuated body segments, each with one degree of freedom: segment length. Each segment coupled its height to the length such that as the length decreased, the height increased. This structure was used to simulate peristaltic locomotion, as the segments’ actuation was coordinated to form a peristaltic wave down the robot, with each segment contracting one after the other down the robot body.
For this system, each body segment was associated with a saddle point in the SHC system. The multi-dimensional connection matrix was constructed so that each point inhibited its neighbors except the point immediately after it. This asymmetry caused the active SHC node to move “backwards” down the robot structure, while the body moved forward.
The controller was tested in multiple pipe-shaped paths where contact sensors on the robot could provide information on the environment. Contact sensing information was used to modulate the noise added to the system, which in turn allowed the activation sequence to be altered. This was key for highly coordinated movement across all segments.
Gaussian mixture model to inform "switching".
SHCs can be used to inform the switching among complex configurations. Petrič et al. used a combined Gaussian Mixture Model (GMM) and SHC system to control a spinal exoskeleton. The exoskeleton was designed as a quasi-passive system that physically supports the user to different degrees depending on the current pose or movement of the user. Different functional poses/movements were identified as the nodes within the SHC system. GMMs were used to indicate what the additive inputs for each SHC node should be, which would drive the system from one pose to the next.
Temporally sensitive central pattern generator.
SHCs have been used as an alternative to central pattern generators for robotic control. Horchler et al. used SHCs to produce an oscillator whose behavior near each node could be manipulated using system parameters: additive noise and saddle values. This produced a cyclic controller that could spend more time at a particular node when needed. The controller's responsiveness to external input was demonstrated by pausing and resetting the cycle using additive noise.
Intuitive visualization property.
Rouse & Daltorio replaced the underlying attractor points of dynamic movement primitives, another biologically-inspired robotic control method, with the saddle points of SHCs. This adaptive framework maintained the stability of the system. Additionally, it provided a visualization property which allowed the user to intuitively place saddle points in phase space to match a desired trajectory in the task space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{x} = f(x)"
},
{
"math_id": 1,
"text": "x = x_0"
},
{
"math_id": 2,
"text": "x = x_1"
},
{
"math_id": 3,
"text": "\\phi(t)"
},
{
"math_id": 4,
"text": "x_0"
},
{
"math_id": 5,
"text": "x_1"
},
{
"math_id": 6,
"text": "\\phi(t) \\rightarrow x_0"
},
{
"math_id": 7,
"text": "t \\rightarrow -\\infin"
},
{
"math_id": 8,
"text": "\\phi(t) \\rightarrow x_1"
},
{
"math_id": 9,
"text": "t \\rightarrow +\\infin"
},
{
"math_id": 10,
"text": "{dx \\over dt} = rx\\Bigl(1 - {x \\over K}\\Bigr)"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "K"
},
{
"math_id": 14,
"text": "{dx_i(t) \\over dt} = rx_i(t)\\Biggl[1 - \\sum_{j=1}^N \\alpha_{ij} x_j(t) \\Biggr]"
},
{
"math_id": 15,
"text": "x_i(t)"
},
{
"math_id": 16,
"text": "i"
},
{
"math_id": 17,
"text": "N"
},
{
"math_id": 18,
"text": "\\alpha"
},
{
"math_id": 19,
"text": "N = 3"
},
{
"math_id": 20,
"text": "N-1"
},
{
"math_id": 21,
"text": "{dx_i(t) \\over dt} = x_i(t)\\Biggl[1 - \\sum_{j=1}^N \\rho_{ij} x_j(t) \\Biggr]"
},
{
"math_id": 22,
"text": "N=3"
},
{
"math_id": 23,
"text": "\n \\begin{align}\n {dx_1 \\over dt} &= x_1[1 - x_1 - \\alpha x_2 - \\beta x_3],\\\\\n {dx_2 \\over dt} &= x_2[1 - \\beta x_1 - x_2 - \\alpha x_3],\\\\\n {dx_3 \\over dt} &= x_3[1 - \\alpha x_1 - \\beta x_2 - x_3],\n \\end{align}\n"
},
{
"math_id": 24,
"text": "\\rho"
},
{
"math_id": 25,
"text": "\\rho = \\begin{bmatrix} 1 & \\alpha & \\beta \\\\ \\beta & 1 & \\alpha \\\\ \\alpha & \\beta & 1 \\end{bmatrix}."
},
{
"math_id": 26,
"text": "\\alpha + \\beta \\geq 2"
},
{
"math_id": 27,
"text": "\\alpha > 1"
},
{
"math_id": 28,
"text": "\\beta > 1"
},
{
"math_id": 29,
"text": "{dx_i(t) \\over dt} = x_i(t)\\Biggl[1 - \\sum_{j=1}^N \\rho_{ij} x_j(t) \\Biggr] + S_i^s"
},
{
"math_id": 30,
"text": "x_j(t)"
},
{
"math_id": 31,
"text": "j"
},
{
"math_id": 32,
"text": "\\rho_{ij} > 0"
},
{
"math_id": 33,
"text": "S_i^s(t)"
},
{
"math_id": 34,
"text": "s"
},
{
"math_id": 35,
"text": "\\rho_{11},\\rho_{22},\\rho_{33}=1"
},
{
"math_id": 36,
"text": "\\rho_{12},\\rho_{23},\\rho_{31}>1"
},
{
"math_id": 37,
"text": "\\rho_{21},\\rho_{32},\\rho_{13}<1"
},
{
"math_id": 38,
"text": "\\rho = \\begin{bmatrix}\n 1 & \\alpha & \\gamma & \\gamma & \\cdots & \\gamma & \\beta \\\\ \n \\beta & 1 & \\alpha & \\gamma & \\gamma & \\cdots & \\gamma \\\\\n \\gamma & \\beta & 1 & \\alpha & \\gamma & \\cdots & \\gamma \\\\\n \\gamma & \\gamma & \\beta & 1 & \\alpha & \\cdots & \\gamma \\\\\n \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots \\\\\n \\gamma & \\cdots & & \\gamma & \\beta & 1 & \\alpha \\\\\n \\alpha & \\gamma & \\cdots & \\gamma & \\gamma & \\beta & 1\n \\end{bmatrix}"
},
{
"math_id": 39,
"text": "\\lambda^u"
},
{
"math_id": 40,
"text": "-\\lambda^s"
},
{
"math_id": 41,
"text": "v_i = {Re \\lambda_i^s \\over \\lambda_i^u}"
},
{
"math_id": 42,
"text": "v_i>1"
},
{
"math_id": 43,
"text": "\\textstyle \\prod_{i=1}^N \\displaystyle v_i>1"
}
]
| https://en.wikipedia.org/wiki?curid=71074489 |
71079573 | Markov chain tree theorem | In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms for the rooted spanning trees of the Markov chain, with a positive combination for each tree. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived. It was first stated by , for certain Markov chains arising in thermodynamics, and proved in full generality by , motivated by an application in limited-memory estimation of the probability of a biased coin.
A finite Markov chain consists of a finite set of states, and a transition probability formula_0 for changing from state formula_1 to state formula_2, such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being on a given state after many steps, regardless of the initial choice of state.
The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain. If a transition from state formula_1 to state formula_2 has transition probability formula_0, then a tree formula_3 with edge set formula_4 is defined to have weight equal to the product of its transition probabilities:
formula_5
Let formula_6 denote the set of all spanning trees having state formula_1 at their root. Then, according to the Markov chain tree theorem, the stationary probability formula_7 for state formula_1 is proportional to the sum of the weights of the trees rooted at formula_1. That is,
formula_8
where the normalizing constant formula_9 is the sum of formula_10 over all spanning trees.
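The theorem can be checked directly on small examples. The following Python sketch enumerates, by brute force, all spanning trees rooted at each state of a three-state chain with an arbitrary (made-up) transition matrix, and compares the resulting probabilities with the stationary distribution obtained by power iteration.

from itertools import product

# Brute-force illustration of the Markov chain tree theorem on a small
# 3-state chain with an arbitrary (made-up) transition matrix P.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4]]
n = len(P)

def tree_weight_sum(root):
    """Sum of weights of spanning trees directed toward `root`."""
    total = 0.0
    others = [i for i in range(n) if i != root]
    # choose a parent for every non-root state
    for parents in product(range(n), repeat=len(others)):
        par = dict(zip(others, parents))
        if any(par[i] == i for i in others):
            continue
        # every non-root state must reach the root by following parents
        ok = True
        for i in others:
            seen, j = set(), i
            while j != root:
                if j in seen:
                    ok = False
                    break
                seen.add(j)
                j = par[j]
            if not ok:
                break
        if ok:
            w = 1.0
            for i in others:
                w *= P[i][par[i]]
            total += w
    return total

weights = [tree_weight_sum(r) for r in range(n)]
Z = sum(weights)
tree_pi = [w / Z for w in weights]

# stationary distribution by power iteration, for comparison
pi = [1.0 / n] * n
for _ in range(10000):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

print("tree theorem :", [round(p, 6) for p in tree_pi])
print("power iter   :", [round(p, 6) for p in pi])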
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_{i,j}"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "j"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "E(T)"
},
{
"math_id": 5,
"text": "w(T)=\\prod_{(i,j)\\in E(T)} p_{i,j}."
},
{
"math_id": 6,
"text": "\\mathcal{T}_i"
},
{
"math_id": 7,
"text": "\\pi_i"
},
{
"math_id": 8,
"text": "\\pi_i=\\frac{1}{Z}\\sum_{T\\in\\mathcal{T}_i} w(T),"
},
{
"math_id": 9,
"text": "Z"
},
{
"math_id": 10,
"text": "w(T)"
}
]
| https://en.wikipedia.org/wiki?curid=71079573 |
71083305 | Quantum boomerang effect | Quantum mechanical effect in disordered media
The quantum boomerang effect is a quantum mechanical phenomenon whereby wavepackets launched through disordered media return, on average, to their starting points, as a consequence of Anderson localization and the inherent symmetries of the system. At early times, the initial parity asymmetry of the nonzero momentum leads to asymmetric behavior: nonzero displacement of the wavepackets from their origin. At long times, inherent time-reversal symmetry and the confining effects of Anderson localization lead to correspondingly symmetric behavior: both zero final velocity and zero final displacement.
History.
In 1958, Philip W. Anderson introduced the eponymous model of disordered lattices which exhibits localization, the confinement of the electrons' probability distributions within some small volume. In other words, if a wavepacket were dropped into a disordered medium, it would spread out initially but then approach some maximum range. On the macroscopic scale, the transport properties of the lattice are reduced as a result of localization, turning what might have been a conductor into an insulator. Modern condensed matter models continue to study disorder as an important feature of real, imperfect materials.
In 2019, theorists considered the behavior of a wavepacket not merely dropped, but actively launched through a disordered medium with some initial nonzero momentum, predicting that the wavepacket's center of mass would asymptotically return to the origin at long times — the quantum boomerang effect. Shortly after, quantum simulation experiments in cold atom settings confirmed this prediction by simulating the quantum kicked rotor, a model that maps to the Anderson model of disordered lattices.
Description.
Consider a wavepacket formula_1 with initial momentum formula_2 which evolves in the general Hamiltonian of a Gaussian, uncorrelated, disordered medium:
formula_3
where formula_4 and formula_5, and the overbar notation indicates an average over all possible realizations of the disorder.
The classical Boltzmann equation predicts that this wavepacket should slow down and localize at some new point — namely, the terminus of its mean free path. However, when accounting for the quantum mechanical effects of localization and time-reversal symmetry (or some other unitary or antiunitary symmetry), the probability density distribution formula_6 exhibits off-diagonal, oscillatory elements in its eigenbasis expansion that decay at long times, leaving behind only diagonal elements independent of the sign of the initial momentum. Since the direction of the launch does not matter at long times, the wavepacket "must" return to the origin.
The same destructive interference argument used to justify Anderson localization applies to the quantum boomerang. The Ehrenfest theorem states that the variance ("i.e." the spread) of the wavepacket evolves thus:
formula_7
where the use of the Wigner function allows the final approximation of the particle distribution into two populations formula_8 of positive and negative velocities, with centers of mass denoted
formula_9
A path contributing to formula_10 at some time must have negative momentum formula_11 by definition; since every part of the wavepacket originated at the same positive momentum formula_2 behavior, this path from the origin to formula_12 and from initial formula_2 momentum to final formula_11 momentum can be time-reversed and translated to create another path from formula_12 back to the origin with the same initial and final momenta. This second, time-reversed path is equally weighted in the calculation of formula_13 and ultimately results in formula_14. The same logic does not apply to formula_15 because there is no initial population in the momentum state formula_11. Thus, the wavepacket variance only has the first term:
formula_16
This yields long-time behavior
formula_17
where formula_0 and formula_18 are the scattering mean free path and scattering mean free time, respectively. The exact form of the boomerang can be approximated using the diagonal Padé approximants formula_19 extracted from a series expansion derived with the Berezinskii diagrammatic technique.
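As a simple numerical illustration of this asymptotic expression, the following Python sketch evaluates it in dimensionless units (setting formula_0 and formula_18 to one), showing the average displacement decaying back toward the origin at long times; the sample times are arbitrary choices.

import math

# Evaluate <x(t)> = 64 * l * (tau/t)^2 * log(1 + t/tau) in units where the
# mean free path l and mean free time tau are both 1.  The listed times are
# arbitrary sample points in the long-time regime.
l, tau = 1.0, 1.0
for t in [10, 100, 1000, 10000, 100000]:
    x = 64 * l * (tau / t) ** 2 * math.log(1 + t / tau)
    print(f"t/tau = {t:7d}   <x>/l = {x:.6f}")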
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ell"
},
{
"math_id": 1,
"text": "\\Psi(x,t)\\propto\\exp\\left[-x^2/(2\\sigma)^2+ik_0x\\right]"
},
{
"math_id": 2,
"text": "\\hbar k_0"
},
{
"math_id": 3,
"text": "\\hat{H}=\\frac{\\hat{p}^2}{2m}+V(\\hat{x}),"
},
{
"math_id": 4,
"text": "\\overline{V(x)}=0"
},
{
"math_id": 5,
"text": "\\overline{V(x)V(x')}=\\gamma\\delta(x-x')"
},
{
"math_id": 6,
"text": "|\\Psi|^2"
},
{
"math_id": 7,
"text": "\\partial_t\\langle\\hat{x}^2\\rangle=\\frac{1}{2i\\hbar m}\\left\\langle\\left[\\hat{x}^2,\\hat{p}^2\\right]\\right\\rangle=\\frac{1}{m}\\left\\langle\\hat{x}\\hat{p}+\\hat{p}\\hat{x}\\right\\rangle\\approx 2v_0\\langle\\hat{x}\\rangle_+-2v_0\\langle\\hat{x}\\rangle_-,"
},
{
"math_id": 8,
"text": "n_\\pm"
},
{
"math_id": 9,
"text": "\\langle x\\rangle_\\pm\\equiv\\int\\limits_{-\\infty}^\\infty x n_\\pm(x,t)\\mathrm{d}x."
},
{
"math_id": 10,
"text": "\\langle\\hat{x}\\rangle_-"
},
{
"math_id": 11,
"text": "-\\hbar k_0"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "n_-(x,t)"
},
{
"math_id": 14,
"text": "\\langle\\hat{x}\\rangle_-=0"
},
{
"math_id": 15,
"text": "\\langle\\hat{x}\\rangle_+"
},
{
"math_id": 16,
"text": "\\partial_t\\langle\\hat{x}^2\\rangle=2v_0\\langle\\hat{x}\\rangle."
},
{
"math_id": 17,
"text": "\\langle\\hat{x}(t)\\rangle=64\\ell\\left(\\frac{\\tau}{t}\\right)^2\\log\\left(1+\\frac{t}{\\tau}\\right),"
},
{
"math_id": 18,
"text": "\\tau"
},
{
"math_id": 19,
"text": "R_{[n/n]}"
}
]
| https://en.wikipedia.org/wiki?curid=71083305 |
71085 | Shannon–Hartley theorem | Theorem that tells the maximum rate at which information can be transmitted
In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.
Statement of the theorem.
The Shannon–Hartley theorem states the channel capacity formula_0, meaning the theoretical tightest upper bound on the information rate of data that can be communicated at an arbitrarily low error rate using an average received signal power formula_1 through an analog communication channel subject to additive white Gaussian noise (AWGN) of power formula_2:
formula_3
where
formula_0 is the channel capacity in bits per second, a theoretical upper bound on the net bit rate (information rate, sometimes denoted formula_4) excluding error-correction codes;
formula_5 is the bandwidth of the channel in hertz (passband bandwidth in case of a bandpass signal);
formula_1 is the average received signal power over the bandwidth, measured in watts (or volts squared);
formula_2 is the average power of the noise and interference over the bandwidth, measured in watts (or volts squared); and
formula_6 is the signal-to-noise ratio (SNR) of the communication signal to the noise and interference at the receiver (expressed as a linear power ratio, not as logarithmic decibels).
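As a minimal numerical illustration of the formula, the following Python sketch computes the capacity for an arbitrarily chosen bandwidth and signal-to-noise ratio; the values are illustrative and do not describe any particular channel.

import math

# Capacity C = B * log2(1 + S/N) for illustrative numbers: a 3 kHz channel
# (roughly a telephone line) at 20 dB signal-to-noise ratio.
B = 3000.0                      # bandwidth in hertz
snr_db = 20.0                   # signal-to-noise ratio in decibels
snr = 10 ** (snr_db / 10)       # convert dB to a linear power ratio
C = B * math.log2(1 + snr)
print(f"linear S/N = {snr:.1f}, capacity = {C:.0f} bit/s")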
Historical development.
During the late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental ideas related to the transmission of information, particularly in the context of the telegraph as a communications system. At the time, these concepts were powerful breakthroughs individually, but they were not part of a comprehensive theory. In the 1940s, Claude Shannon developed the concept of channel capacity, based in part on the ideas of Nyquist and Hartley, and then formulated a complete theory of information and its transmission.
Nyquist rate.
In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbolic notation,
formula_7
where formula_8 is the pulse frequency (in pulses per second) and formula_5 is the bandwidth (in hertz). The quantity formula_9 later came to be called the "Nyquist rate", and transmitting at the limiting pulse rate of formula_9 pulses per second as "signalling at the Nyquist rate". Nyquist published his results in 1928 as part of his paper "Certain topics in Telegraph Transmission Theory".
Hartley's law.
During 1928, Hartley formulated a way to quantify information and its line rate (also known as data signalling rate "R" bits per second). This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity.
Hartley argued that the maximum number of distinguishable pulse levels that can be transmitted and received reliably over a communications channel is limited by the dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Specifically, if the amplitude of the transmitted signal is restricted to the range of [−"A" ... +"A"] volts, and the precision of the receiver is ±Δ"V" volts, then the maximum number of distinct pulses "M" is given by
formula_10.
By taking information per pulse in bit/pulse to be the base-2-logarithm of the number of distinct messages "M" that could be sent, Hartley constructed a measure of the line rate "R" as:
formula_11
where formula_8 is the pulse rate, also known as the symbol rate, in symbols/second or baud.
Hartley then combined the above quantification with Nyquist's observation that the number of independent pulses that could be put through a channel of bandwidth formula_5 hertz was formula_9 pulses per second, to arrive at his quantitative measure for achievable line rate.
Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, formula_5, in Hertz and what today is called the digital bandwidth, formula_12, in bit/s.
Other times it is quoted in this more quantitative form, as an achievable line rate of formula_12 bits per second:
formula_13
Hartley did not work out exactly how the number "M" should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to "M" levels; with Gaussian noise statistics, system designers had to choose a very conservative value of formula_14 to achieve a low error rate.
The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations.
Hartley's rate result can be viewed as the capacity of an errorless "M"-ary channel of formula_9 symbols per second. Some authors refer to it as a capacity. But such an errorless channel is an idealization, and if M is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth formula_5, which is the Hartley–Shannon result that followed later.
Noisy channel coding theorem and capacity.
Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The proof of the theorem shows that a randomly constructed error-correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.
Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity formula_0 and information transmitted at a line rate formula_12, then if
formula_15
there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that theoretically, it is possible to transmit information nearly without error up to nearly a limit of formula_0 bits per second.
The converse is also important. If
formula_16
the probability of error at the receiver increases without bound as the rate is increased. So no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the "M" in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.
If there were such a thing as a noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time (Note that an infinite-bandwidth analog channel could not transmit unlimited amounts of error-free data absent infinite signal power). Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.
Bandwidth and noise affect the rate at which information can be transmitted over an analog channel. Bandwidth limitations alone do not impose a cap on the maximum information rate because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. Taking into account both noise and bandwidth limitations, however, there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when sophisticated multi-level encoding techniques are used.
In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.
Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent.
Implications of the theorem.
Comparison of Shannon's capacity to Hartley's law.
Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels "M":
formula_17
formula_18
The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of signal RMS amplitude to noise standard deviation.
This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that formula_14 pulse levels can be literally sent without any confusion. More levels are needed to allow for redundant coding and error correction, but the net data rate that can be approached with coding is equivalent to using that formula_14 in Hartley's law.
Frequency-dependent (colored noise) case.
In the simple version above, the signal and noise are fully uncorrelated, in which case formula_19 is the total power of the received signal and noise together. A generalization of the above equation for the case where the additive noise is not white (or the "S"/"N" is not constant with frequency over the bandwidth) is obtained by treating the channel as many narrow, independent Gaussian channels in parallel:
formula_20
where formula_21 is the signal power spectrum, formula_22 is the noise power spectrum, and formula_23 is frequency in hertz.
Note: the theorem only applies to Gaussian stationary process noise. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider a noise process consisting of adding a random wave whose amplitude is 1 or −1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such a noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise was a sum of independent noises in each frequency band.
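A numerical sketch of this frequency-dependent capacity is given below, approximating the integral with a simple Riemann sum; the spectra formula_21 and formula_22 are made-up illustrative functions, not models of any real channel.

import math

# Approximate C = integral_0^B log2(1 + S(f)/N(f)) df with a Riemann sum.
# The spectra below are arbitrary illustrative functions.
B = 1.0e4                                 # bandwidth in hertz

def S(f):                                 # signal power spectral density
    return 1e-6 * math.exp(-f / 5e3)

def N(f):                                 # noise power spectral density
    return 1e-8 * (1 + f / 1e4)

steps = 100000
df = B / steps
C = sum(math.log2(1 + S(i * df) / N(i * df)) * df for i in range(steps))
print(f"capacity ~ {C:.0f} bit/s")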
Approximations.
For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:
Bandwidth-limited case.
When the SNR is large ("S"/"N" ≫ 1), the logarithm is approximated by
formula_24
in which case the capacity is logarithmic in power and approximately linear in bandwidth (not quite linear, since N increases with bandwidth, imparting a logarithmic effect). This is called the bandwidth-limited regime.
formula_25
where
formula_26
Power-limited case.
Similarly, when the SNR is small (if "S"/"N" ≪ 1), applying the approximation to the logarithm:
formula_27
then the capacity is linear in power. This is called the power-limited regime.
formula_28
In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density formula_29 watts per hertz, in which case the total noise power is formula_30.
formula_31
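The quality of the two approximations can be checked numerically. The following Python sketch compares the exact capacity with the bandwidth-limited and power-limited forms for one high-SNR and one low-SNR case; the bandwidth and SNR values are illustrative.

import math

# Compare the exact capacity with the two approximations quoted above.
B = 1.0e6                                  # bandwidth in hertz
for snr in [1000.0, 0.01]:                 # a high-SNR and a low-SNR case
    exact = B * math.log2(1 + snr)
    high = 0.332 * B * 10 * math.log10(snr)   # bandwidth-limited form
    low = 1.44 * B * snr                      # power-limited form
    print(f"S/N = {snr:8g}:  exact = {exact:.4g}  "
          f"high-SNR approx = {high:.4g}  low-SNR approx = {low:.4g} bit/s")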
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "C = B \\log_2 \\left( 1+\\frac{S}{N} \\right) "
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "S/N"
},
{
"math_id": 7,
"text": "f_p \\le 2B "
},
{
"math_id": 8,
"text": "f_p"
},
{
"math_id": 9,
"text": "2B"
},
{
"math_id": 10,
"text": "M = 1 + { A \\over \\Delta V } "
},
{
"math_id": 11,
"text": " R = f_p \\log_2(M), "
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": " R \\le 2B \\log_2(M). "
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": " R < C "
},
{
"math_id": 16,
"text": " R > C "
},
{
"math_id": 17,
"text": "2B \\log_2(M) = B \\log_2 \\left( 1+\\frac{S}{N} \\right) "
},
{
"math_id": 18,
"text": "M = \\sqrt{1+\\frac{S}{N}}."
},
{
"math_id": 19,
"text": "S+N"
},
{
"math_id": 20,
"text": " C = \\int_{0}^B \\log_2 \\left( 1+\\frac{S(f)}{N(f)} \\right) df "
},
{
"math_id": 21,
"text": "S(f)"
},
{
"math_id": 22,
"text": "N(f)"
},
{
"math_id": 23,
"text": "f"
},
{
"math_id": 24,
"text": "\\log_2 \\left( 1+\\frac{S}{N} \\right) \\approx \\log_2 \\frac{S}{N} = \\frac{\\ln 10}{\\ln 2} \\cdot \\log_{10} \\frac{S}{N} \\approx 3.32 \\cdot \\log_{10} \\frac{S}{N} ,"
},
{
"math_id": 25,
"text": " C \\approx 0.332 \\cdot B \\cdot \\mathrm{SNR\\ (in\\ dB)} "
},
{
"math_id": 26,
"text": "\\mathrm{SNR\\ (in \\ dB)} = 10\\log_{10}{S \\over N}. "
},
{
"math_id": 27,
"text": "\\log_2 \\left( 1+\\frac{S}{N} \\right) = \\frac{1}{\\ln 2} \\cdot \\ln \\left( 1+\\frac{S}{N} \\right) \\approx \\frac{1}{\\ln 2} \\cdot \\frac{S}{N} \\approx 1.44 \\cdot {S \\over N};"
},
{
"math_id": 28,
"text": " C \\approx 1.44 \\cdot B \\cdot {S \\over N}."
},
{
"math_id": 29,
"text": "N_0"
},
{
"math_id": 30,
"text": "N = B \\cdot N_0"
},
{
"math_id": 31,
"text": " C \\approx 1.44 \\cdot {S \\over N_0}"
}
]
| https://en.wikipedia.org/wiki?curid=71085 |
71098827 | List of largest known primes and probable primes | The table below lists the largest currently known prime numbers and probable primes (PRPs) as tracked by the PrimePages and by Henri & Renaud Lifchitz's PRP Records. Numbers with more than 2,000,000 digits are shown.
Largest known primes.
These numbers have been proved prime by computer with a primality test for their form, for example the Lucas–Lehmer primality test for Mersenne numbers. formula_0 is the third cyclotomic polynomial, defined as formula_1.
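As an illustration, the following Python sketch implements the Lucas–Lehmer test in its simplest form. It is only practical for small exponents; the record Mersenne primes in the table require the same test carried out with heavily optimized large-integer arithmetic.

# Lucas-Lehmer test for Mersenne numbers M_p = 2^p - 1, with p an odd prime:
# M_p is prime exactly when the residue s ends at 0.
def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in [3, 5, 7, 11, 13, 17, 19, 23, 31]:
    print(f"2^{p} - 1 is", "prime" if lucas_lehmer(p) else "composite")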
Largest known probable primes (PRPs).
These are probable primes. Primality has not been proven because it is too hard for general numbers of this size but they are expected to be primes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Phi_3(x)"
},
{
"math_id": 1,
"text": "x^2 + x + 1"
}
]
| https://en.wikipedia.org/wiki?curid=71098827 |
71099027 | Chialvo map | The Chialvo map is a two-dimensional map proposed by Dante R. Chialvo in 1995 to describe the generic dynamics of excitable systems. The model is inspired by Kunihiko Kaneko's Coupled map lattice numerical approach which considers time and space as discrete variables but state as a continuous one. Later on Rulkov popularized a similar approach. By using only three parameters the model is able to efficiently mimic generic neuronal dynamics in computational simulations, as single elements or as parts of inter-connected networks.
The model.
The model is an iterative map where at each time step, the behavior of one neuron is updated as the following equations:
formula_0
in which, formula_1 is called activation or action potential variable, and formula_2 is the recovery variable. The model has four parameters, formula_3 is a time-dependent additive perturbation or a constant bias, formula_4 is the time constant of recovery formula_5, formula_6 is the activation-dependence of the recovery process formula_7 and formula_8 is an offset constant. The model has a rich dynamics, presenting from oscillatory to chaotic behavior, as well as non trivial responses to small stochastic fluctuations.
Analysis.
Bursting and chaos.
The map is able to capture the aperiodic solutions and the bursting behavior that are remarkable in the context of neural systems. For example, for the values formula_9, formula_10 and formula_11, changing b from formula_12 to formula_13 makes the system pass from oscillations to aperiodic bursting solutions.
Fixed points.
In the case where formula_14 and formula_15, the model mimics the lack of ‘voltage-dependent inactivation’ seen in real neurons, and the evolution of the recovery variable is fixed at formula_16. Therefore, the dynamics of the activation variable is described by iterating the following equations
formula_17
in which formula_18 as a function of formula_19 has a period-doubling bifurcation structure.
Examples.
Example 1.
A practical implementation is the combination of formula_20 neurons over a lattice; for that, a coupling constant formula_21 can be defined for coupling the neurons. For neurons in a single row, the evolution of the action potential in time can be defined through the diffusive coupling of the local activation variable formula_1:
formula_22
where formula_23 is the time step and formula_24 is the index of each neuron. For the values formula_9, formula_25, formula_26 and formula_27, in the absence of perturbations the neurons remain at the resting state. If a stimulus is introduced at cell 1, it induces two propagating waves circulating in opposite directions that eventually collapse and die out in the middle of the ring.
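A minimal Python sketch of this ring is given below, using the quoted parameter values. The coupling constant d, the number of neurons, the stimulus amplitude and the approximate resting initial condition are illustrative choices; the sketch also assumes that each neuron's coupling term uses its own recovery variable, which evolves with the single-map rule.

import math

# Ring of diffusively coupled Chialvo neurons with a = 0.89, b = 0.6,
# c = 0.28, k = 0.02.  The coupling d, ring size, stimulus amplitude and
# the approximate resting initial condition are illustrative choices.
a, b, c, k = 0.89, 0.6, 0.28, 0.02
d, N = 0.3, 50

x = [0.0] * N
y = [c / (1 - a)] * N            # rough approximation of the resting state
x[0] = 2.0                       # stimulus applied to the first cell

def f(xi, yi):
    return xi * xi * math.exp(yi - xi) + k

for n in range(300):
    fx = [f(x[i], y[i]) for i in range(N)]
    x_new = [(1 - d) * fx[i] + (d / 2) * (fx[(i + 1) % N] + fx[(i - 1) % N])
             for i in range(N)]
    y = [a * y[i] - b * x[i] + c for i in range(N)]   # uses the old x
    x = x_new
    if n % 50 == 0:
        active = sum(1 for xi in x if xi > 0.5)       # arbitrary threshold
        print(f"step {n:3d}: {active} neurons above 0.5")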
Example 2.
Analogously to the previous example, it is possible to create a set of coupled neurons over a 2-D lattice; in this case the evolution of action potentials is given by:
formula_28
where formula_24, formula_29 represent the indices of each neuron in a square lattice of size formula_30, formula_31. With this example, spiral waves can be obtained for specific parameter values. In order to visualize the spirals, the initial condition is set in a specific configuration formula_32 and the recovery as formula_33.
The map can also present chaotic dynamics for certain parameter values. For example, chaotic behavior of the variable formula_1 can be observed on a square network of formula_34 for the parameters formula_9, formula_35, formula_26 and formula_36.
The map can be used to simulate a nonquenched disordered lattice, where each map connects with its four nearest neighbors on a square lattice and, in addition, each map has a probability formula_37 of connecting to another one chosen randomly. In this case, multiple coexisting circular excitation waves emerge at the beginning of the simulation until spirals take over.
Chaotic and periodic behavior for a neuron.
For a neuron, in the limit of formula_38, the map becomes 1D, since formula_2 converges to a constant. If the parameter formula_6 is scanned over a range, different orbits are seen, some periodic, others chaotic, appearing between two fixed points, one at formula_39; formula_40 and the other close to the value of formula_3 (which corresponds to the excitable regime).
{
"math_id": 0,
"text": "\\begin{align} x_{n+1} = & f(x_n,y_n) = x_n^2 \\exp{(y_n-x_n)}+k \\\\ y_{n+1} =& g(x_n,y_n) = ay_n-bx_n+c \\\\ \\end{align}"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "(a<1)"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "(b<1)"
},
{
"math_id": 8,
"text": "c"
},
{
"math_id": 9,
"text": "a=0.89"
},
{
"math_id": 10,
"text": "c=0.28 "
},
{
"math_id": 11,
"text": "k=0.025"
},
{
"math_id": 12,
"text": "0.6"
},
{
"math_id": 13,
"text": "0.18"
},
{
"math_id": 14,
"text": "k=0"
},
{
"math_id": 15,
"text": "b<<a"
},
{
"math_id": 16,
"text": "y_{f0}"
},
{
"math_id": 17,
"text": "\\begin{align} x_{n+1} = & f(x_n,y_{f0}) = x^2_n \\exp(r - x_n)\n \\\\ r = & y_{f0} = c/(1-a) \\\\ \\end{align}"
},
{
"math_id": 18,
"text": "f(x_n,y_{f0})"
},
{
"math_id": 19,
"text": "r"
},
{
"math_id": 20,
"text": "N"
},
{
"math_id": 21,
"text": "0<d<1"
},
{
"math_id": 22,
"text": "x_{n+1}^i = (1-d)f(x_n^i) + (d/2)[f(x_n^{i+1}) + f(x_n^{i-1})]"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "i"
},
{
"math_id": 25,
"text": "b=0.6"
},
{
"math_id": 26,
"text": "c=0.28"
},
{
"math_id": 27,
"text": "k=0.02"
},
{
"math_id": 28,
"text": "x_{n+1}^{i,j} = (1-d)f(x_n^{i,j}) + (d/4)[f(x_n^{i+1,j}) + f(x_n^{i-1,j})+f(x_n^{i,j+1}) + f(x_n^{i,j-1})]"
},
{
"math_id": 29,
"text": "j"
},
{
"math_id": 30,
"text": "I"
},
{
"math_id": 31,
"text": "J"
},
{
"math_id": 32,
"text": "x^{ij}=i*0.0033"
},
{
"math_id": 33,
"text": "y^{ij}=y_f-(j * 0.0066)"
},
{
"math_id": 34,
"text": "500\\times500"
},
{
"math_id": 35,
"text": "b=0.18"
},
{
"math_id": 36,
"text": "k=0.026"
},
{
"math_id": 37,
"text": "p"
},
{
"math_id": 38,
"text": "b=0"
},
{
"math_id": 39,
"text": "x=1"
},
{
"math_id": 40,
"text": "y=1"
}
]
| https://en.wikipedia.org/wiki?curid=71099027 |
7110556 | Raku rules | Raku rules are the regular expression, string matching and general-purpose parsing facility of the Raku programming language, and are a core part of the language. Since Perl's pattern-matching constructs have exceeded the capabilities of formal regular expressions for some time, Raku documentation refers to them exclusively as "regexes", distancing the term from the formal definition.
Raku provides a superset of Perl 5 features with respect to regexes, folding them into a larger framework called "rules", which provide the capabilities of a parsing expression grammar, as well as acting as a closure with respect to their lexical scope. Rules are introduced with the codice_0 keyword, which has a usage quite similar to subroutine definitions. Anonymous rules can be introduced with the codice_1 (or codice_2) keyword, or simply be used inline as regexes were in Perl 5 via the codice_3 (matching) or codice_4 (substitution) operators.
History.
In "Apocalypse 5", a document outlining the preliminary design decisions for Raku pattern matching, Larry Wall enumerated 20 problems with the "current regex culture". Among these were that Perl's regexes were "too compact and 'cute'", had "too much reliance on too few metacharacters", "little support for named captures", "little support for grammars", and "poor integration with 'real' language".
Between late 2004 and mid-2005, a compiler for Raku style rules was developed for the Parrot virtual machine called Parrot Grammar Engine (PGE), which was later renamed to the more generic Parser Grammar Engine. PGE is a combination of runtime and compiler for Raku style grammars that allows any parrot-based compiler to use these tools for parsing, and also to provide rules to their runtimes.
Among other Raku features, support for named captures was added to Perl 5.10 in 2007.
In May 2012, the reference implementation of Raku, Rakudo, shipped its Rakudo Star monthly snapshot with a working JSON parser built entirely in Raku rules.
Changes from Perl 5.
There are only six unchanged features from Perl 5's regexes:
A few of the most powerful additions include:
The following changes greatly improve the readability of regexes:
Implicit changes.
Some of the features of Perl 5 regular expressions are more powerful in Raku because of their ability to encapsulate the expanded features of Raku rules. For example, in Perl 5, there were positive and negative lookahead operators codice_22 and codice_23. In Raku these same features exist, but are called codice_24 and codice_25.
However, because codice_26 can encapsulate arbitrary rules, it can be used to express lookahead as a syntactic predicate for a grammar. For example, the following parsing expression grammar describes the classic non-context-free language formula_0:
S ← &(A !b) a+ B
A ← a A? b
B ← b B? c
In Raku rules that would be:
Of course, given the ability to mix rules and regular code, that can be simplified even further:
However, this makes use of assertions, which is a subtly different concept in Raku rules, but more substantially different in parsing theory, making this a semantic rather than syntactic predicate. The most important difference in practice is performance. There is no way for the rule engine to know what conditions the assertion may match, so no optimization of this process can be made.
Integration with Perl.
In many languages, regular expressions are entered as strings, which are then passed to library routines that parse and compile them into an internal state. In Perl 5, regular expressions shared some of the lexical analysis with Perl's scanner. This simplified many aspects of regular expression usage, though it added a great deal of complexity to the scanner. In Raku, rules are part of the grammar of the language. No separate parser exists for rules, as it did in Perl 5. This means that code, embedded in rules, is parsed at the same time as the rule itself and its surrounding code. For example, it is possible to nest rules and code without re-invoking the parser:
rule ab {
(a.) # match "a" followed by any character
# Then check to see if that character was "b"
# If so, print a message.
The above is a single block of Raku code that contains an outer rule definition, an inner block of assertion code, and inside of that a regex that contains one more level of assertion.
Implementation.
Keywords.
There are several keywords used in conjunction with Raku rules:
Here is an example of typical use:
if $string ~~ / <phrase> \n / {
Modifiers.
Modifiers may be placed after any of the regex keywords, and before the delimiter. If a regex is named, the modifier comes after the name. Modifiers control the way regexes are parsed and how they behave. They are always introduced with a leading codice_35 character.
Some of the more important modifiers include:
For example:
Grammars.
A grammar may be defined using the codice_46 operator. A grammar is essentially just a namespace for rules:
grammar Str::SprintfFormat {
This is the grammar used to define Perl's codice_47 string formatting notation.
Outside of this namespace, you could use these rules like so:
A rule used in this way is actually identical to the invocation of a subroutine with the extra semantics and side-effects of pattern matching (e.g., rule invocations can be backtracked).
Examples.
Here are some example rules in Raku:
That last is identical to: | [
{
"math_id": 0,
"text": " \\{ a^n b^n c^n : n \\ge 1 \\} "
}
]
| https://en.wikipedia.org/wiki?curid=7110556 |
71107101 | Partition algebra | The partition algebra is an associative algebra with a basis of set-partition diagrams and multiplication given by diagram concatenation. Its subalgebras include diagram algebras such as the Brauer algebra, the Temperley–Lieb algebra, or the group algebra of the symmetric group. Representations of the partition algebra are built from sets of diagrams and from representations of the symmetric group.
Definition.
Diagrams.
A partition of formula_0 elements labelled formula_1 is represented as a diagram, with lines connecting elements in the same subset. In the following example, the subset formula_2 gives rise to the lines formula_3, and could equivalently be represented by the lines formula_4 (for instance).
For formula_5 and formula_6, the partition algebra formula_7 is defined by a formula_8-basis made of partitions, and a multiplication given by diagram concatenation. The concatenated diagram comes with a factor formula_9, where formula_10 is the number of connected components that are disconnected from the top and bottom elements.
Generators and relations.
The partition algebra formula_7 is generated by formula_11 elements of the type
These generators obey relations that include
formula_12
Other elements that are useful for generating subalgebras include
In terms of the original generators, these elements are
formula_13
Properties.
The partition algebra formula_7 is an associative algebra. It has a multiplicative identity: the diagram whose lines connect each bottom element to the element directly above it.
The partition algebra formula_7 is semisimple for formula_14. For any two formula_15 in this set, the algebras formula_7 and formula_16 are isomorphic.
The partition algebra is finite-dimensional, with formula_17 (a Bell number).
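The dimension counts the set partitions of the 2k diagram vertices, so it grows as a Bell number. A short Python sketch computing it with the Bell triangle:

# Dimension of the partition algebra: the Bell number B_{2k}, i.e. the number
# of set partitions of the 2k diagram vertices, computed via the Bell triangle.
def bell(n):
    row = [1]
    for _ in range(n - 1):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[-1]

for k in range(1, 6):
    print(f"k = {k}:  dimension = B_{2 * k} = {bell(2 * k)}")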
Subalgebras.
Eight subalgebras.
Subalgebras of the partition algebra can be defined by the following properties:
Combining these properties gives rise to 8 nontrivial subalgebras, in addition to the partition algebra itself:
The symmetric group algebra formula_23 is the group ring of the symmetric group formula_24 over formula_8. The Motzkin algebra is sometimes called the dilute Temperley–Lieb algebra in the physics literature.
Properties.
The listed subalgebras are semisimple for formula_14.
Inclusions of planar into non-planar algebras:
formula_25
Inclusions from constraints on subset size:
formula_26
Inclusions from allowing top-top and bottom-bottom lines:
formula_27
We have the isomorphism:
formula_28
More subalgebras.
In addition to the eight subalgebras described above, other subalgebras have been defined:
An algebra with a half-integer index formula_36 is defined from partitions of formula_37 elements by requiring that formula_38 and formula_39 are in the same subset. For example, formula_40 is generated by formula_41 so that formula_42, and formula_43.
Periodic subalgebras are generated by diagrams that can be drawn on an annulus without line crossings. Such subalgebras include a translation element formula_44 such that formula_45. The translation element and its powers are the only combinations of formula_46 that belong to periodic subalgebras.
Representations.
Structure.
For an integer formula_47, let formula_48 be the set of partitions of formula_49 elements formula_50 (bottom) and formula_51 (top), such that no two top elements are in the same subset, and no top element is alone. Such partitions are represented by diagrams with no top-top lines, with at least one line for each top element. For example, in the case formula_52:
Partition diagrams act on formula_48 from the bottom, while the symmetric group formula_53 acts from the top. For any Specht module formula_54 of formula_53 (so that formula_55), we define the representation of formula_7
formula_56
The dimension of this representation is
formula_57
where formula_58 is a Stirling number of the second kind, formula_59 is a binomial coefficient, and formula_60 is given by the hook length formula.
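Continuing the sketch above (and reusing its stirling2 helper), this sum can be evaluated once f_λ is known; in the snippet below f_λ is supplied as a number rather than computed from the hook length formula:

from math import comb

def dim_P_lambda(k, size_lambda, f_lambda):
    # dim P_lambda = f_lambda * sum_{l = |lambda|}^{k} S(k, l) * C(l, |lambda|)
    return f_lambda * sum(stirling2(k, l) * comb(l, size_lambda)
                          for l in range(size_lambda, k + 1))

# For k = 2 the labels are the partitions with 0 <= |lambda| <= 2, all with f_lambda = 1.
dims = [dim_P_lambda(2, 0, 1), dim_P_lambda(2, 1, 1),
        dim_P_lambda(2, 2, 1), dim_P_lambda(2, 2, 1)]   # 2, 3, 1, 1
print(sum(d * d for d in dims))   # 15 = B_4, consistent with semisimplicity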
A basis of formula_61 can be described combinatorially in terms of set-partition tableaux: Young tableaux whose boxes are filled with the blocks of a set partition.
Assuming that formula_7 is semisimple, the representation formula_61 is irreducible, and the set of irreducible finite-dimensional representations of the partition algebra is
formula_62
Representations of subalgebras.
Representations of non-planar subalgebras have similar structures as representations of the partition algebra. For example, the Brauer-Specht modules of the Brauer algebra are built from Specht modules, and certain sets of partitions.
In the case of the planar subalgebras, planarity prevents nontrivial permutations, and Specht modules do not appear. For example, a standard module of the Temperley–Lieb algebra is parametrized by an integer formula_63 with formula_64, and a basis is simply given by a set of partitions.
The following table lists the irreducible representations of the partition algebra and eight subalgebras.
The irreducible representations of formula_29 are indexed by partitions such that formula_66 and their dimensions are formula_67. The irreducible representations of formula_68 are indexed by partitions such that formula_69. The irreducible representations of formula_34 are indexed by sequences of partitions.
Schur-Weyl duality.
Assume formula_70.
For formula_71 a formula_21-dimensional vector space with basis formula_72, there is a natural action of the partition algebra formula_7 on the vector space formula_73. This action is defined by the matrix elements of a partition formula_74 in the basis formula_75:
formula_76
This matrix element is one if all indices corresponding to any given partition subset coincide, and zero otherwise. For example, the action of a Temperley–Lieb generator is
formula_77
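As an illustration added here (using numpy and an arbitrary choice n = 3), the matrix of this action can be built explicitly and checked against the Temperley–Lieb relation e² = n·e:

import numpy as np

n = 3                                    # dimension of V, an arbitrary illustrative choice
E = np.zeros((n * n, n * n))             # matrix of e on V ⊗ V, basis v_{j1} ⊗ v_{j2} at index j1*n + j2
for j1 in range(n):
    for j2 in range(n):
        if j1 == j2:
            for j in range(n):
                # e(v_{j1} ⊗ v_{j2}) = delta_{j1 j2} * sum_j v_j ⊗ v_j
                E[j * n + j, j1 * n + j2] = 1.0

assert np.allclose(E @ E, n * E)         # Temperley–Lieb relation e^2 = n e
print("relation verified for n =", n)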
Duality between the partition algebra and the symmetric group.
Let formula_78 be an integer.
Let us take formula_71 to be the natural permutation representation of the symmetric group formula_79. This formula_21-dimensional representation is a sum of two irreducible representations: the standard and trivial representations, formula_80.
Then the partition algebra formula_7 is the centralizer of the action of formula_79 on the tensor product space formula_73,
formula_81
Moreover, as a bimodule over formula_82, the tensor product space decomposes into irreducible representations as
formula_83
where formula_84 is a Young diagram of size formula_21 built by adding a first row to formula_65, and formula_85 is the corresponding Specht module of formula_79.
Dualities involving subalgebras.
The duality between the symmetric group and the partition algebra generalizes the original Schur-Weyl duality between the general linear group and the symmetric group. There are other generalizations. In the relevant tensor product spaces, we write formula_86 for an irreducible formula_21-dimensional representation of the first group or algebra:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2k"
},
{
"math_id": 1,
"text": "1,\\bar 1, 2,\\bar 2,\\dots, k,\\bar k"
},
{
"math_id": 2,
"text": "\\{\\bar 1, \\bar 4,\\bar 5, 6\\}"
},
{
"math_id": 3,
"text": "\\bar 1 - \\bar 4, \\bar 4 -\\bar 5, \\bar 5 - 6"
},
{
"math_id": 4,
"text": " \\bar 1- 6, \\bar 4 - 6, \\bar 5 - 6, \\bar 1 - \\bar 5"
},
{
"math_id": 5,
"text": "n\\in \\mathbb{C}"
},
{
"math_id": 6,
"text": "k\\in \\mathbb{N}^*"
},
{
"math_id": 7,
"text": "P_k(n)"
},
{
"math_id": 8,
"text": "\\mathbb{C}"
},
{
"math_id": 9,
"text": "n^D"
},
{
"math_id": 10,
"text": "D"
},
{
"math_id": 11,
"text": "3k-2"
},
{
"math_id": 12,
"text": "\ns_i^2 = 1 \\quad , \\quad s_is_{i+1}s_i = s_{i+1}s_is_{i+1} \\quad, \\quad p_i^2 = np_i \\quad , \\quad b_i^2= b_i \\quad , \\quad p_i b_i p_i = p_i\n"
},
{
"math_id": 13,
"text": " e_i = b_ip_ip_{i+1}b_i \\quad , \\quad l_i = s_ip_i \\quad , \\quad r_i=p_is_i "
},
{
"math_id": 14,
"text": " n\\in\\mathbb{C} - \\{0,1,\\dots, 2k-2\\}"
},
{
"math_id": 15,
"text": "n,n'"
},
{
"math_id": 16,
"text": " P_k(n')"
},
{
"math_id": 17,
"text": "\\dim P_k(n) = B_{2k}"
},
{
"math_id": 18,
"text": "1,2,\\dots,2k"
},
{
"math_id": 19,
"text": "1,2"
},
{
"math_id": 20,
"text": "2"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "p_i\\to \\frac{1}{n}p_i"
},
{
"math_id": 23,
"text": " \\mathbb{C} S_k "
},
{
"math_id": 24,
"text": " S_k"
},
{
"math_id": 25,
"text": " \nPP_k(n) \\subset P_k(n) \\quad , \\quad M_k(n) \\subset RB_k(n) \\quad ,\\quad TL_k(n)\\subset B_k(n) \\quad, \\quad PR_k \\subset R_k \n"
},
{
"math_id": 26,
"text": "\nB_k(n) \\subset RB_k(n) \\subset P_k(n) \\quad ,\\quad TL_k(n) \\subset M_k(n) \\subset PP_k(n) \\quad , \\quad \n\\mathbb{C}S_k \\subset R_k \n"
},
{
"math_id": 27,
"text": "\nR_k \\subset RB_k(n) \\quad , \\quad PR_k\\subset M_k(n) \\quad ,\\quad \\mathbb{C}S_k \\subset B_k(n)\n"
},
{
"math_id": 28,
"text": " \nPP_k(n^2) \\cong TL_{2k}(n) \\quad , \\quad \\left\\{\\begin{array}{l} p_i \\mapsto n e_{2i-1} \\\\ b_i \\mapsto \\frac{1}{n} e_{2i} \\end{array}\\right.\n"
},
{
"math_id": 29,
"text": "\\text{prop}P_k"
},
{
"math_id": 30,
"text": "s_i, b_ip_{i+1}b_{i+1}"
},
{
"math_id": 31,
"text": "QP_k(n)"
},
{
"math_id": 32,
"text": "s_i,b_i,e_i"
},
{
"math_id": 33,
"text": "1+\\sum_{j=1}^{2k} (-1)^{j-1} B_{2k-j}"
},
{
"math_id": 34,
"text": "U_k"
},
{
"math_id": 35,
"text": "s_i, b_i"
},
{
"math_id": 36,
"text": "k+\\frac12"
},
{
"math_id": 37,
"text": "2k+2"
},
{
"math_id": 38,
"text": "k+1"
},
{
"math_id": 39,
"text": "\\overline{k+1}"
},
{
"math_id": 40,
"text": "P_{k+\\frac12}"
},
{
"math_id": 41,
"text": "s_{i\\leq k-1},b_{i\\leq k},p_{i\\leq k}"
},
{
"math_id": 42,
"text": "P_k\\subset P_{k+\\frac12}\\subset P_{k+1}"
},
{
"math_id": 43,
"text": "\\dim P_{k+\\frac12} =B_{2k+1}"
},
{
"math_id": 44,
"text": "u="
},
{
"math_id": 45,
"text": "u^k=1"
},
{
"math_id": 46,
"text": "s_i"
},
{
"math_id": 47,
"text": "0\\leq \\ell \\leq k"
},
{
"math_id": 48,
"text": "D_\\ell"
},
{
"math_id": 49,
"text": "k+\\ell"
},
{
"math_id": 50,
"text": " 1,2,\\dots, k"
},
{
"math_id": 51,
"text": "\\bar 1,\\bar 2,\\dots,\\bar \\ell"
},
{
"math_id": 52,
"text": "k=12, \\ell = 5"
},
{
"math_id": 53,
"text": "S_\\ell"
},
{
"math_id": 54,
"text": "V_\\lambda"
},
{
"math_id": 55,
"text": "|\\lambda|=\\ell"
},
{
"math_id": 56,
"text": "\n\\mathcal{P}_\\lambda = \\mathbb{C} D_{|\\lambda|}\\otimes_{\\mathbb{C} S_{|\\lambda|}} V_\\lambda\\ .\n"
},
{
"math_id": 57,
"text": "\n\\dim\\mathcal{P}_\\lambda = f_\\lambda \\sum_{\\ell = |\\lambda|}^k \\left\\{ {k\\atop \\ell} \\right\\} \\binom{\\ell}{|\\lambda|} \\ ,\n"
},
{
"math_id": 58,
"text": " \\left\\{ {k\\atop \\ell} \\right\\} "
},
{
"math_id": 59,
"text": " \\binom{\\ell}{|\\lambda|}"
},
{
"math_id": 60,
"text": "f_\\lambda = \\dim S_\\lambda"
},
{
"math_id": 61,
"text": "\\mathcal{P}_\\lambda"
},
{
"math_id": 62,
"text": "\n\\text{Irrep}\\left(P_k(n)\\right) = \\left\\{ \\mathcal{P}_\\lambda \\right\\}_{0\\leq |\\lambda|\\leq k}\\ .\n"
},
{
"math_id": 63,
"text": "0\\leq \\ell\\leq k "
},
{
"math_id": 64,
"text": "\\ell\\equiv k\\bmod 2"
},
{
"math_id": 65,
"text": "\\lambda"
},
{
"math_id": 66,
"text": "0<|\\lambda|\\leq k"
},
{
"math_id": 67,
"text": " f_\\lambda \\left\\{ {k\\atop |\\lambda|} \\right\\}"
},
{
"math_id": 68,
"text": "QP_k"
},
{
"math_id": 69,
"text": "0\\leq|\\lambda|\\leq k"
},
{
"math_id": 70,
"text": "n\\in \\mathbb{N}^*"
},
{
"math_id": 71,
"text": "V"
},
{
"math_id": 72,
"text": " v_1,\\dots, v_n"
},
{
"math_id": 73,
"text": " V^{\\otimes k}"
},
{
"math_id": 74,
"text": "\\{1,\\bar 1, 2,\\bar 2,\\dots, k,\\bar k\\}=\\sqcup_h E_h"
},
{
"math_id": 75,
"text": "(v_{j_1}\\otimes \\cdots \\otimes v_{j_k})"
},
{
"math_id": 76,
"text": "\n\\left(\\sqcup_h E_h\\right)_{j_1,j_2,\\dots ,j_k}^{j_{\\bar 1}, j_{\\bar 2},\\dots ,j_{\\bar k}} = \\mathbf{1}_{r,s\\in E_h\\implies j_r=j_s} \\ . \n"
},
{
"math_id": 77,
"text": "\ne_i \\left(v_{j_1}\\otimes \\cdots \\otimes v_{j_i}\\otimes v_{j_{i+1}}\\otimes \\cdots \\otimes v_{j_k}\\right)\n= \\delta_{j_i,j_{i+1}}\\sum_{j=1}^n v_{j_1}\\otimes \\cdots \\otimes v_{j}\\otimes v_{j}\\otimes \\cdots \\otimes v_{j_k}\\ .\n"
},
{
"math_id": 78,
"text": "n\\geq 2k"
},
{
"math_id": 79,
"text": "S_n"
},
{
"math_id": 80,
"text": "V=[n-1,1]\\oplus [n]"
},
{
"math_id": 81,
"text": "\nP_k(n) \\cong \\text{End}_{S_n}\\left(V^{\\otimes k}\\right)\\ . \n"
},
{
"math_id": 82,
"text": "P_k(n)\\times S_n"
},
{
"math_id": 83,
"text": "\nV^{\\otimes k} = \\bigoplus_{0\\leq |\\lambda|\\leq k} \\mathcal{P}_\\lambda \\otimes V_{[n-|\\lambda|,\\lambda]}\\ ,\n"
},
{
"math_id": 84,
"text": "[n-|\\lambda|,\\lambda]"
},
{
"math_id": 85,
"text": " V_{[n-|\\lambda|,\\lambda]}"
},
{
"math_id": 86,
"text": "V_n"
}
]
| https://en.wikipedia.org/wiki?curid=71107101 |
71107103 | Stability of matter | Statistical mechanics
In physics, stability of matter refers to the problem of showing rigorously that a large number of charged quantum particles can coexist and form macroscopic objects, like ordinary matter. The first proof was provided by Freeman Dyson and Andrew Lenard in 1967–1968, but a shorter and more conceptual proof was found later by Elliott Lieb and Walter Thirring in 1975.
Background and history.
In statistical mechanics, the existence of macroscopic objects is usually explained in terms of the behavior of the energy or the free energy with respect to the total number formula_0 of particles. More precisely, it should behave linearly in formula_0 for large values of formula_0.
In fact, if the free energy behaves like formula_1 for some formula_2, then pouring two glasses of water would provide an energy proportional to formula_3, which is enormous for large formula_0. A system is called "stable of the second kind" or "thermodynamically stable" when the (free) energy is bounded from below by a linear function of formula_0. Upper bounds are usually easy to show in applications, and this is why people have worked more on proving lower bounds.
Neglecting other forces, it is reasonable to assume that ordinary matter is composed of negative and positive non-relativistic charges (electrons and nuclei), interacting solely via the Coulomb force. A finite number of such particles always collapses in classical mechanics, due to the infinite depth of the electron-nucleus attraction, but it can exist in quantum mechanics thanks to Heisenberg's uncertainty principle. Proving that such a system is thermodynamically stable is called the stability of matter problem and it is very difficult due to the long range of the Coulomb potential. Stability should be a consequence of screening effects, but those are hard to quantify.
Let us denote by
formula_4
the quantum Hamiltonian of formula_0 electrons and formula_5 nuclei of charges formula_6 and masses formula_7 in atomic units. Here formula_8 denotes the Laplacian, which is the quantum kinetic energy operator. At zero temperature, the question is whether the ground state energy (the minimum of the spectrum of formula_9) is bounded from below by a constant times the total number of particles:
The constant formula_10 can depend on the largest number of spin states for each particle as well as the largest value of the charges formula_11. It should ideally not depend on the masses formula_7 so as to be able to consider the infinite mass limit, that is, classical nuclei.
Dyson showed in 1967 that if all the particles are bosons, then the inequality (1) cannot be true and the system is thermodynamically unstable. It was in fact later proved that in this case the energy goes like formula_12 instead of being linear in formula_0.
It is therefore important that either the positive or negative charges are fermions. In other words, stability of matter is a consequence of the Pauli exclusion principle. In real life electrons are indeed fermions, but finding the right way to use Pauli's principle and prove stability turned out to be remarkably difficult. Michael Fisher and David Ruelle formalized the conjecture in 1966 and offered a bottle of Champagne to anybody who could prove it. Dyson and Lenard found the proof of (1) a year later and therefore got the bottle.
As was mentioned before, stability is a necessary condition for the existence of macroscopic objects, but it does not immediately imply the existence of thermodynamic functions. One should show that the energy indeed behaves linearly in the number of particles. Based on the Dyson-Lenard result, this was solved in an ingenious way by Elliott Lieb and Joel Lebowitz in 1972.
The Dyson-Lenard proof is "extraordinarily complicated and difficult" and relies on deep and tedious analytical bounds. The obtained constant formula_10 in (1) was also very large. In 1975, Elliott Lieb and Walter Thirring found a simpler and more conceptual proof, based on a spectral inequality, now called the Lieb-Thirring inequality.
They got a constant formula_10 which was by several orders of magnitude smaller than the Dyson-Lenard constant and had a realistic value.
They arrived at the final inequality
where formula_13 is the largest nuclear charge and formula_14 is the number of electronic spin states which is 2. Since formula_15, this yields the desired linear lower bound (1).
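(The last bound is elementary: by the weighted arithmetic–geometric mean inequality, N^{1/3} K^{2/3} ≤ N/3 + 2K/3 ≤ N + K.)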
The idea of Lieb-Thirring was to bound the quantum energy from below in terms of the Thomas-Fermi energy. The latter is always stable due to a theorem of Edward Teller which states that atoms can never bind in Thomas-Fermi theory.
The new Lieb-Thirring inequality was used to bound the quantum kinetic energy of the electrons in terms of the Thomas-Fermi kinetic energy formula_16. Teller's "No-Binding Theorem" was in fact also used to bound from below the total Coulomb interaction in terms of the simpler Hartree energy appearing in Thomas-Fermi theory. Speaking about the Lieb-Thirring proof, Freeman Dyson wrote later
"″Lenard and I found a proof of the stability of matter in 1967. Our proof was so complicated and so unilluminating that it stimulated Lieb and Thirring to find the first decent proof. (...) Why was our proof so bad and why was theirs so good? The reason is simple. Lenard and I began with mathematical tricks and hacked our way through a forest of inequalities without any physical understanding. Lieb and Thirring began with physical understanding and went on to find the appropriate mathematical language to make their understanding rigorous. Our proof was a dead end. Theirs was a gateway to the new world of ideas.″"
The Lieb-Thirring approach has generated many subsequent works and extensions.
(Pseudo-)relativistic systems, magnetic fields, quantized fields, and two-dimensional fractional statistics (anyons) have, for instance, been studied since the Lieb-Thirring paper.
The form of the bound (1) has also been improved over the years. For example, one can obtain a constant independent of the number formula_5 of nuclei.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "N^a"
},
{
"math_id": 2,
"text": "a\\neq1"
},
{
"math_id": 3,
"text": "(2N)^a-2N^a=(2^a-2)N^a"
},
{
"math_id": 4,
"text": "\nH_{N,K}=-\\sum_{i=1}^N\\frac{\\Delta_{x_i}}{2}-\\sum_{k=1}^K\\frac{\\Delta_{R_k}}{2M_k}-\\sum_{i=1}^N\\sum_{k=1}^K\\frac{z_k}{|x_i-R_k|}+\\sum_{1\\leq i<j\\leq N}\\frac{1}{|x_i-x_j|}+\\sum_{1\\leq k<m\\leq K}\\frac{z_kz_m}{|R_k-R_m|}\n"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "z_1,...,z_K"
},
{
"math_id": 7,
"text": "M_1,...,M_K"
},
{
"math_id": 8,
"text": "\\Delta=\\nabla^2=\\sum_{j=1}^3\\partial_{jj}"
},
{
"math_id": 9,
"text": "H_{N,K}"
},
{
"math_id": 10,
"text": "C"
},
{
"math_id": 11,
"text": "z_k"
},
{
"math_id": 12,
"text": "N^{7/5}"
},
{
"math_id": 13,
"text": "Z=\\max(z_k)"
},
{
"math_id": 14,
"text": "q"
},
{
"math_id": 15,
"text": " N^{1/3}K^{2/3}\\leq N+K"
},
{
"math_id": 16,
"text": "\\int_{\\mathbb{R}^3}\\rho(x)^{\\frac53}d^3x"
}
]
| https://en.wikipedia.org/wiki?curid=71107103 |
711139 | Nonsense mutation | Type of mutation in a DNA sequence
In genetics, a nonsense mutation is a point mutation in a sequence of DNA that results in a "nonsense codon", or a premature stop codon in the transcribed mRNA, and leads to a truncated, incomplete, and possibly nonfunctional protein product. Nonsense mutations are not always harmful; the functional effect of a nonsense mutation depends on many aspects, such as the location of the stop codon within the coding DNA. For example, the effect of a nonsense mutation depends on the proximity of the nonsense mutation to the original stop codon, and the degree to which functional subdomains of the protein are affected. Because nonsense mutations lead to premature termination of polypeptide chains, they are also called chain termination mutations.
Missense mutations differ from nonsense mutations since they are point mutations that exhibit a single nucleotide change to cause substitution of a different amino acid. A nonsense mutation also differs from a nonstop mutation, which is a point mutation that removes a stop codon. About 10% of patients with genetic diseases carry a nonsense mutation. Some of the diseases that these mutations can cause are Duchenne muscular dystrophy (DMD), cystic fibrosis (CF), spinal muscular atrophy (SMA), cancers, metabolic diseases, and neurologic disorders. The rate of nonsense mutations varies from gene to gene and tissue to tissue, but gene silencing occurs in every patient with a nonsense mutation.
Simple example.
DNA: 5' - ATG ACT CAC CGA GCG CGA AGC TGA - 3'
3' - TAC TGA GTG GCT CGC GCT TCG ACT - 5'
mRNA: 5' - AUG ACU CAC CGA GCG CGA AGC UGA - 3'
Protein: Met Thr His Arg Ala Arg Ser Stop
The example above begins with a 5' DNA sequence of 24 nucleotides (8 triplet codons), with its complementary strand shown below it. The next row highlights the 5' mRNA strand, which is generated through transcription. Lastly, the final row shows the amino acids translated from each respective codon, with the eighth and final codon representing the stop codon. The codons corresponding to the fourth amino acid, arginine, are highlighted because they will undergo a nonsense mutation in the following figure of this example.
DNA: 5' - ATG ACT CAC TGA GCG CGA AGC TGA - 3'
3' - TAC TGA GTG ACT CGC GCT TCG ACT - 5'
mRNA: 5' - AUG ACU CAC UGA GCG CGA AGC UGA - 3'
Protein: Met Thr His Stop
Now, suppose that a nonsense mutation is introduced at the fourth codon of the 5' DNA sequence (CGA), causing the cytosine to be replaced with thymine, yielding TGA in the 5' DNA sequence and ACT in the complementary strand. Because ACT is transcribed as UGA, it is translated as a stop codon. This prevents the remaining codons of the mRNA from being translated into protein, because the stop codon is prematurely reached during translation. This can yield a truncated (i.e., abbreviated) protein product, which quite often lacks the functionality of the normal, non-mutant protein.
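The truncation can be reproduced with a short Python sketch (an illustration added here; the toy codon table deliberately covers only the codons appearing in this example):

# Minimal sketch: translate an mRNA reading frame until a stop codon is reached.
codon_table = {"AUG": "Met", "ACU": "Thr", "CAC": "His", "CGA": "Arg",
               "GCG": "Ala", "AGC": "Ser", "UGA": "Stop", "UAG": "Stop", "UAA": "Stop"}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = codon_table[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGACUCACCGAGCGCGAAGCUGA"))  # wild type: Met ... Ser (7 residues)
print(translate("AUGACUCACUGAGCGCGAAGCUGA"))  # nonsense mutant: only Met, Thr, His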
Possible outcomes.
Deleterious.
Deleterious outcomes represent the majority of nonsense mutations and are the most common outcome that is observed naturally. Deleterious nonsense mutations decrease the overall fitness and reproductive success of the organism. For example, a nonsense mutation occurring in a gene encoding a protein can cause structural or functional defects in the protein that disrupt cellular biology. Depending on the significance of the functions of this protein, this disruption could be detrimental to the fitness and survival of that organism.
Neutral.
When a nonsense mutation is neutral, it does not provide benefits or harm. These occur when the effects of the mutation are unnoticed. In other words, this means that the mutation does not positively or negatively affect the organism. As this effect is unnoticed, there is a lack of papers describing such mutations. An example of this type of nonsense mutation is one that occurs directly before the original stop codon for that given protein. Because this mutation occurred in such close proximity to the end of the protein chain, the impact of this change might not be as significant. This would suggest that this amino acid that was mutated did not have a large impact on the overall structure or function of the protein or the organism as a whole. This scenario is rare, but possible.
Beneficial.
Beneficial nonsense mutations are considered the rarest of possible nonsense mutation outcomes. Beneficial nonsense mutations increase the overall fitness and reproductive success of an organism, opposite to the effects of a deleterious mutation. Because a nonsense mutation introduces a premature stop codon within a sequence of DNA, it is extremely unlikely that this scenario can actually benefit the organism. An example would be a nonsense mutation that affects a dysfunctional protein that releases toxins. The premature stop codon introduced by this mutation would prevent the dysfunctional protein from properly carrying out its function. Stopping this protein from performing at full strength causes less toxin to be released, and the fitness of the organism is improved. These types of situations occur far less frequently than the deleterious outcomes.
Suppressing nonsense mutations.
Nonsense-mediated mRNA decay
Despite an expected tendency for premature termination codons to yield shortened polypeptide products, in fact the formation of truncated proteins does not occur often "in vivo". Many organisms—including humans and lower species, such as yeast—employ a nonsense-mediated mRNA decay pathway, which degrades mRNAs containing nonsense mutations before they are able to be translated into nonfunctional polypeptides.
tRNA Suppression
Because nonsense mutations result in altered mRNA with a premature stop codon, one way of suppressing the damage done to the final protein's function is to alter the tRNA that reads the mRNA. These tRNAs are termed suppressor tRNAs. If the stop codon is UAG, any other amino acid tRNA could be altered from its original anticodon to AUC so it will recognize the UAG codon instead. This will result in the protein not being truncated, but it may still have an altered amino acid. These suppressor tRNA mutations are only possible if the cell has more than one tRNA that reads a particular codon; otherwise the mutation would kill the cell. The only stop codons are UAG, UAA, and UGA. UAG and UAA suppressors read their respective stop codons instead of their original codon, but UAA suppressors also read UAG due to wobble base pairing. UGA suppressors are very rare. Another hurdle to pass in this technique is the fact that stop codons are also recognized by release factors, so the tRNA still needs to compete with the release factors to keep the translation going. Because of this, suppression is usually only 10–40% successful. These suppressor tRNA mutations also target stop codons that are not mutations, causing some proteins to be much longer than they should be. Only bacteria and lower eukaryotes can survive with these mutations; mammalian and insect cells die as a result of a suppressor mutation.
For historical reasons the three stop codons were given names (see Stop codons): UAG is called the amber codon, UAA is called the ochre codon, and UGA is called the opal codon.
Common disease-associated nonsense mutations.
Nonsense mutations comprise around 20% of single nucleotide substitutions within protein coding sequences that result in human disease. Nonsense mutation-mediated pathology is often attributed to reduced amounts of full-length protein, because only 5-25% of transcripts possessing nonsense mutations do not undergo nonsense-mediated decay (NMD). Translation of the remaining nonsense-bearing mRNA may generate abbreviated protein variants with toxic effects.
Twenty-three different single-point nucleotide substitutions are capable of converting a non-stop codon into a stop codon, with the mutations CGAformula_0TGA and CAGformula_0TAG being the most common disease-related substitutions characterized in the Human Gene Mutation Database (HGMD). As a result of different substitution frequencies for each nucleotide, the proportions of the three stop codons generated by disease-inducing nonsense mutations differ from the stop codon distributions in non-diseased gene variants. Notably, the codon TAG is overrepresented, while the TGA and TAA codons are underrepresented in disease-related nonsense mutations.
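As an illustrative check (our own sketch, not part of the HGMD analysis), a few lines of Python can enumerate the sense-to-stop single-nucleotide substitutions and confirm the count of twenty-three:

from itertools import product

stop_codons = {"TAA", "TAG", "TGA"}
bases = "ACGT"
substitutions = set()
for codon in ("".join(triplet) for triplet in product(bases, repeat=3)):
    if codon in stop_codons:
        continue                                  # start from a sense (non-stop) codon
    for position, new_base in product(range(3), bases):
        mutant = codon[:position] + new_base + codon[position + 1:]
        if mutant != codon and mutant in stop_codons:
            substitutions.add((codon, mutant))
print(len(substitutions))   # 23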
Translation termination efficiency is influenced by the specific stop codon sequence on the mRNA, with the UAA sequence yielding the highest termination efficiency. Sequences surrounding the stop codon also impact termination efficiency. Consequently, the underlying pathology of diseases caused by nonsense mutations is ultimately dependent on the identity of the mutated gene and the specific location of the mutation.
Examples of diseases induced by nonsense mutations include:
Nonsense mutations in other genes may also drive dysfunction of several tissue or organ systems:
SMAD8
SMAD8 is the eighth homolog of the ENDOGLIN gene family and is involved in TGF-β/BMP signaling. Novel nonsense mutations in SMAD8 have been identified in association with pulmonary arterial hypertension. The pulmonary system relies on SMAD1, SMAD5, and SMAD8 to regulate pulmonary vascular function. Downregulation and loss of signals that are normally operated by SMAD8 contributed to pathogenesis in pulmonary arterial hypertension. The ALK1 gene, a part of the TGF-β signaling family, was found to be mutated while also down-regulating the SMAD8 gene in patients with pulmonary arterial hypertension. SMAD8 mutants were not phosphorylated by ALK1, disrupting interactions with SMAD4 that would normally allow for signaling in wild-type organisms.
LGR4.
LGR4 binds R-spondins to activate the Wnt signaling pathway. Wnt signaling regulates bone mass and osteoblast differentiation and is important for the development of bone, heart, and muscle. An LGR4 nonsense mutation in an otherwise healthy population has been linked to low bone mass density and symptoms of osteoporosis. LGR4 mutant mice showed that the observed low bone mass is not due to age-related bone loss. Mutations in LGR4 have been associated with family lineages with medical histories of rare bone disorders. Mice lacking LGR4 also displayed delayed osteoblast differentiation during development, showcasing the important role of LGR4 in bone mass regulation and development.
Therapeutics targeting nonsense mutation diseases.
Therapeutics for diseases caused by nonsense mutations attempt to recapitulate wild-type function by decreasing the efficacy of NMD, facilitating readthrough of the premature stop codon during translation, or editing the genomic nonsense mutation.
Antisense oligonucleotides to suppress the expression of NMD and translation termination proteins are being explored in animal models of nonsense mutation-induced disease. Other RNA therapeutics under investigation include synthetic suppressor tRNAs that enable ribosomes to insert an amino acid, instead of initiating chain termination, upon encountering premature stop codons.
CRISPR-Cas9 based single nucleotide substitutions have been used to generate amino acid codons from stop codons, achieving an editing success rate of 10% in cell cultures.
Read-through has been achieved using small molecule drugs such as aminoglycosides and negamycin. An oxadiazole, ataluren (previously PTC124), facilitates the selective read-through of aberrant stop codons, rendering it a potential therapeutic against nonsense mutation-induced disease. Ataluren, sold under the tradename Translarna, is currently an approved treatment for Duchenne muscular dystrophy in the European Economic area and Brazil. However, phase III trials of Ataluren as a cystic fibrosis therapeutic have failed to meet their primary endpoints. | [
{
"math_id": 0,
"text": "\\longrightarrow"
}
]
| https://en.wikipedia.org/wiki?curid=711139 |
71119 | Wigner's friend | Thought experiment in theoretical quantum physics
Wigner's friend is a thought experiment in theoretical quantum physics, first published by the Hungarian-American physicist Eugene Wigner in 1961, and further developed by David Deutsch in 1985. The scenario involves an indirect observation of a quantum measurement: An observer formula_0 observes another observer formula_1 who performs a quantum measurement on a physical system. The two observers then formulate a statement about the physical system's state after the measurement according to the laws of quantum theory. In the Copenhagen interpretation, the resulting statements of the two observers contradict each other. This reflects a seeming incompatibility of two laws in the Copenhagen interpretation: the deterministic and continuous time evolution of the state of a closed system and the nondeterministic, discontinuous collapse of the state of a system upon measurement. Wigner's friend is therefore directly linked to the measurement problem in quantum mechanics with its famous Schrödinger's cat paradox.
Generalizations and extensions of Wigner's friend have been proposed. Two such scenarios involving multiple friends have been implemented in a laboratory, using photons to stand in for the friends.
Original paradox.
Wigner introduced the thought experiment in a 1961 article "Remarks on the Mind-Body Question". He begins by noting that most physicists in the then-recent past had been thoroughgoing materialists who would insist that "mind" or "soul" are illusory, and that nature is fundamentally deterministic. He argues that quantum physics has changed this situation:
All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called "apperceptions") of the consciousness, and even though the dividing line between the observer, whose consciousness is being affected, and the observed physical object can be shifted towards the one or the other to a considerable degree, it cannot be eliminated.
Nature of the wave function.
Going into more detail, Wigner says:
Given any object, all the possible knowledge concerning that object can be given as its wave function. This is a mathematical concept the exact nature of which need not concern us here—it is composed of a (countable) infinity of numbers. If one knows these numbers, one can foresee the behavior of the object as far as it can be foreseen. More precisely, the wave function permits one to foretell with what probabilities the object will make one or another impression on us if we let it interact with us either directly, or indirectly. [...] In fact, the wave function is only a suitable language for describing the body of knowledge—gained by observations—which is relevant for predicting the future behaviour of the system. For this reason, the interactions which may create one or another sensation in us are also called observations, or measurements. One realises that "all" the information which the laws of physics provide consists of probability connections between subsequent impressions that a system makes on one if one interacts with it repeatedly, i.e., if one makes repeated measurements on it. The wave function is a convenient summary of that part of the past impressions which remains relevant for the probabilities of receiving the different possible impressions when interacting with the system at later times.
The wave function of an object "exists" (Wigner's quotation marks) because observers can share it:
The information given by the wave function is communicable. If someone else somehow determines the wave function of a system, he can tell me about it and, according to the theory, the probabilities for the possible different impressions (or "sensations") will be equally large, no matter whether he or I interact with the system in a given fashion.
Observing a system causes its wave functions to change indeterministically, because "the entering of an impression into our consciousness" implies a revision of "the probabilities for different impressions which we expect to receive in the future".
The observer observed.
Wigner presents two arguments for the thesis that the mind influences the body, i.e., that a human body can "deviate from the laws of physics" as deduced from experimenting upon inanimate objects. The argument that he personally finds less persuasive is the one that has become known as "Wigner's friend". In this thought experiment, Wigner posits that his friend is in a laboratory, and Wigner lets the friend perform a quantum measurement on a physical system (this could be a spin system). This system is assumed to be in a superposition of two distinct states, say, state 0 and state 1 (or formula_2 and formula_3 in Dirac notation). When Wigner's friend measures the system in the {0,1}-basis, according to quantum mechanics, they will get one of the two possible outcomes (0 or 1) and the system will collapse into the corresponding state.
Now Wigner himself models the scenario from outside the laboratory, knowing that inside, his friend will at some point perform the 0/1-measurement on the physical system. According to the linearity of the quantum mechanical equations, Wigner will assign a superposition state to the whole laboratory (i.e. the joint system of the physical system together with the friend): The superposition state of the lab is then a linear combination of "system is in state 0 — friend has measured 0" and "system is in state 1 — friend has measured 1".
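The following numerical sketch is an illustration added here, not part of Wigner's argument: the friend's memory is idealized as a single two-level system prepared in a "ready" state, and the measurement interaction is modelled as a controlled-NOT gate, a common textbook idealization. Applying the linear (unitary) evolution then produces exactly the entangled laboratory state described above.

import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # illustrative amplitudes of the spin state
spin = np.array([alpha, beta])                  # alpha|0> + beta|1>
friend = np.array([1.0, 0.0])                   # friend's memory prepared in a "ready" state |0>

# Unitary (linear) model of the friend's measurement: copy the spin value into the memory.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

lab_state = cnot @ np.kron(spin, friend)
print(lab_state)   # amplitude alpha on |0,0> and beta on |1,1>:
                   # "system is in state 0 — friend has measured 0" plus "system is in state 1 — friend has measured 1"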
Let Wigner now ask his friend for the result of the measurement. Whichever answer the friend gives (0 or 1), Wigner would then assign the state "system is in state 0 — friend has measured 0" or "system is in state 1 — friend has measured 1" to the laboratory. Therefore, it is only at the time when he learns about his friend's result that the superposition state of the laboratory collapses.
However, unless Wigner is considered in a "privileged position as ultimate observer", the friend's point of view must be regarded as equally valid, and this is where an apparent paradox comes into play: From the point of view of the friend, the measurement result was determined long before Wigner had asked about it, and the state of the physical system has already collapsed. When exactly did the collapse occur? Was it when the friend had finished their measurement, or when the information of its result entered Wigner's consciousness? As Wigner says, he could ask his friend, "What did you feel about the [measurement result] before I asked you?" The question of what result the friend has seen is surely "already decided in his mind", Wigner writes, which implies that the friend–system joint state must already be one of the collapsed options, not a superposition of them. Wigner concludes that the linear time evolution of quantum states according to the Schrödinger equation cannot apply when the physical entity involved is a conscious being.
Wigner presents his second argument, which he finds more persuasive, much more briefly:
The second argument to support the existence of an influence of the consciousness on the physical world is based on the observation that we do not know of any phenomenon in which one subject is influenced by another without exerting an influence thereupon. This appears convincing to this writer.
As a "reductio ad absurdum".
According to physicist Leslie Ballentine, by 1987 Wigner had decided that consciousness does not cause a physical collapse of the wavefunction, although he still believed that his chain of inferences leading up to that conclusion was correct. As Ballentine recounts, Wigner regarded his 1961 argument as a "reductio ad absurdum", indicating that the postulates of quantum mechanics need to be revised in some way.
Responses in different interpretations of quantum mechanics.
Many-worlds interpretations.
The various versions of the many worlds interpretation avoid the need to postulate that consciousness causes collapse – indeed, that collapse occurs at all.
Hugh Everett III's doctoral thesis "'Relative state' formulation of quantum mechanics" serves as the foundation for today's many versions of many-worlds interpretations. In the introductory part of his work, Everett discusses the "amusing, but extremely hypothetical drama" of the Wigner's friend paradox. Note that there is evidence of a drawing of the scenario in an early draft of Everett's thesis. It was therefore Everett who provided the first written discussion of the problem, four or five years before it was discussed in "Remarks on the mind-body question" by Wigner, from whom it thereafter received its name and fame. However, since Everett was a student of Wigner's, it is clear that they must have discussed it together at some point.
In contrast to his teacher Wigner, who held the consciousness of an observer to be responsible for a collapse, Everett understands the Wigner's friend scenario in a different way: Insisting that quantum state assignments should be objective and nonperspectival, Everett derives a straightforward logical contradiction when letting formula_1 and formula_0 reason about the laboratory's state of formula_4 together with formula_1. Then, the Wigner's friend scenario demonstrates to Everett an incompatibility of the collapse postulate for describing measurements with the deterministic evolution of closed systems. In the context of his new theory, Everett claims to solve the Wigner's friend paradox by only allowing a continuous unitary time evolution of the wave function of the universe. However, there is no evidence of any written argument of Everett's on the topic.
In many-worlds interpretations, measurements are modelled as interactions between subsystems of the universe and manifest themselves as a branching of the universal state. The different branches account for the different possible measurement outcomes and are seen to exist as subjective experiences of the corresponding observers. In this view, the friend's measurement of the spin results in a branching of the world into two parallel worlds, one, in which the friend has measured the spin to be 1, and another, in which the friend has received the measurement outcome 0. If then Wigner measures at a later time the combined system of friend and spin system, the world again splits into two parallel parts.
Objective-collapse theories.
According to objective-collapse theories, wave-function collapse occurs when a superposed system reaches a certain objective threshold of size or complexity. Objective-collapse proponents would expect a system as macroscopic as a cat to have collapsed before the box was opened, so the question of observation-of-observers does not arise for them. If the measured system were much simpler (such as a single spin state), then once the observation was made, the system would be expected to collapse, since the larger system of the scientist, equipment, and room would be considered far too complex to become entangled in the superposition.
Relational quantum mechanics.
Relational quantum mechanics (RQM) was developed in 1996 by Carlo Rovelli and is one of the more recent interpretations of quantum mechanics. In RQM, any physical system can play the role of an observing system, to which any other system may display "facts" about physical variables. This inherent relativity of facts in RQM provides a straightforward "solution" to the seemingly paradoxical situation in Wigner's friend scenario: The state that the friend assigns to the spin is a state relative to himself as friend, whereas the state that Wigner assigns to the combined system of friend and spin is a state relative to himself as Wigner. By construction of the theory, these two descriptions do not have to match, because both are correct assignments of states relative to their respective system.
If the physical variable that is measured of the spin system is denoted by "z", where "z" takes the possible outcome values 0 or 1, the above Wigner's friend situation is modelled in the RQM context as follows: formula_1 models the situation as the before-after-transition
formula_5
of the state of formula_4 relative to him (here it was assumed that formula_1 received the outcome "z" = 1 in his measurement of formula_4).
In RQM language, the fact "z" = 1 for the spin of formula_4 actualized itself relative to formula_1 during the interaction of the two systems.
A different way to model the same situation is again an outside (Wigner's) perspective. From that viewpoint, a measurement by one system (formula_1) of another (formula_4) results in a correlation of the two systems. The state displaying such a correlation is equally valid for modelling the measurement process. However, the system with respect to which this correlated state is valid changes. Assuming that Wigner (formula_0) has the information that the physical variable "z" of formula_4 is being measured by formula_1, but not knowing what formula_1 received as a result, formula_0 must model the situation as
formula_6
where formula_7 is considered the state of formula_1 before the measurement, and formula_8 and formula_9 are the states corresponding to formula_1's state when he has measured 1 or 0 respectively. This model depicts the situation as relative to formula_0, so the assigned states are relative states with respect to the Wigner system. In contrast, there is no value for the "z" outcome that actualizes with respect to formula_0, as he is not involved in the measurement.
In this sense, two accounts of the same situation (process of the measurement of the physical variable "z" on the system formula_4 by formula_1) are accepted within RQM to exist side by side. Only when deciding for a reference system, a statement for the "correct" account of the situation can be made.
QBism and Bayesian interpretations.
In the interpretation known as QBism, advocated by N. David Mermin among others, the Wigner's-friend situation does not lead to a paradox, because there is never a uniquely correct wavefunction for any system. Instead, a wavefunction is a statement of personalist Bayesian probabilities, and moreover, the probabilities that wavefunctions encode are probabilities for experiences that are also personal to the agent who experiences them. Jaynes expresses this as follows: "There is a paradox only if we suppose that a density matrix (i.e. a probability distribution) is something 'physically real' and 'absolute'. But now the dilemma disappears when we recognize the 'relativity principle' for probabilities. A density matrix (or, in classical physics, a probability distribution over coordinates and momenta) represents, not a physical situation, but only a certain "state of knowledge" about a range of possible physical situations". And as von Baeyer puts it, "Wavefunctions are not tethered to electrons and carried along like haloes hovering over the heads of saints—they are assigned by an agent and depend on the total information available to the agent." Consequently, there is nothing wrong in principle with Wigner and his friend assigning different wavefunctions to the same system. A similar position is taken by Brukner, who uses an elaboration of the Wigner's-friend scenario to argue for it.
De Broglie–Bohm theory.
The De Broglie-Bohm theory, also known as Bohmian mechanics or pilot wave theory, postulates, in addition to the wave function, an actual configuration of particles that exists even when unobserved. This particle configuration evolves in time according to a deterministic law, with the wave function guiding the motion of the particles. The particle configuration determines the actual measurement outcome —e.g., whether Schrödinger's cat is dead or alive or whether Wigner's friend has measured 0 or 1— even if the wave function is a superposition. Indeed, according to the De Broglie-Bohm theory, the wave function never collapses on the fundamental level. There is, however, a concept of "effective collapse", based on the fact that, in many situations, "empty branches" of the wave function, which do not guide the actual particle configuration, can be ignored for all practical purposes.
The De Broglie-Bohm theory does not assign any special status to conscious observers. In the Wigner's-friend situation, the first measurement would lead to an effective collapse. But even if Wigner describes the state of his friend as a superposition, there is no contradiction with this friend having observed a definite measurement outcome as described by the particle configuration. Thus, according to the De Broglie-Bohm theory, there is no paradox because the wave function alone is not a complete description of the physical state.
An extension of the Wigner's friend experiment.
In 2016, Frauchiger and Renner used an elaboration of the Wigner's-friend scenario to argue that quantum theory cannot be used to model physical systems that are themselves agents who use quantum theory. They provide an information-theoretic analysis of two specifically connected pairs of "Wigner's friend" experiments, where the human observers are modelled within quantum theory. By then letting the four different agents reason about each other's measurement results (using the laws of quantum mechanics), contradictory statements are derived.
The resulting theorem highlights an incompatibility of a number of assumptions that are usually taken for granted when modelling measurements in quantum mechanics.
In the title of their published version of September 2018, the authors' interpretation of their result is apparent: Quantum theory as given by the textbook and used in the numerous laboratory experiments to date "cannot consistently describe the use of itself" in any given (hypothetical) scenario. The implications of the result are currently subject to many debates among physicists of both theoretical and experimental quantum mechanics. In particular, the various proponents of the different interpretations of quantum mechanics have challenged the validity of the Frauchiger–Renner argument.
The experiment was designed using a combination of arguments by Wigner (Wigner's friend), Deutsch and Hardy (see Hardy's paradox). The setup involves a number of macroscopic agents (observers) performing predefined quantum measurements in a given time order. Those agents are assumed to all be aware of the whole experiment and to be able to use quantum theory to make statements about other people's measurement results. The design of the thought experiment is such that the different agents' observations along with their logical conclusions drawn from a quantum-theoretical analysis yields inconsistent statements.
The scenario corresponds roughly to two parallel pairs of "Wigners" and friends: formula_10 with formula_11 and formula_12 with formula_13. The friends each measure a specific spin system, and each Wigner measures "his" friend's laboratory (which includes the friend). The individual agents make logical conclusions that are based on their measurement result, aiming at predictions about other agents' measurements within the protocol. Frauchiger and Renner argue that an inconsistency occurs if three assumptions are taken to be simultaneously valid. Roughly speaking, those assumptions are
(Q): Quantum theory is correct.
(C): Agent's predictions are information-theoretically consistent.
(S): A measurement yields only one single outcome.
More precisely, assumption (Q) involves the probability predictions within quantum theory given by the Born rule. This means that an agent is allowed to trust this rule being correct in assigning probabilities to other outcomes conditioned on his own measurement result. It is, however, sufficient for the extended Wigner's friend experiment to assume the validity of the Born rule for probability-1 cases, i.e., if the prediction can be made with certainty.
Assumption (C) invokes a consistency among different agents' statements in the following manner: The statement "I know (by the theory) that they know (by the same theory) that x" is equivalent to "I know that x".
Assumption (S) specifies that once an agent has arrived at a probability-1 assignment of a certain outcome for a given measurement, they could never agree to a different outcome for the same measurement.
Assumptions (Q) and (S) are used by the agents when reasoning about measurement outcomes of other agents, and assumption (C) comes in when an agent combines other agent's statements with their own. The result is contradictory, and therefore, assumptions (Q), (C) and (S) cannot all be valid, hence the no-go theorem.
Reflection.
The meaning and implications of the Frauchiger–Renner thought experiment are highly debated. A number of assumptions taken in the argument are very foundational in content and therefore cannot be given up easily. However, the question remains whether there are "hidden" assumptions that do not explicitly appear in the argument. The authors themselves conclude that "quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner". On the other hand, one presentation of the experiment as a quantum circuit models the agents as single qubits and their reasoning as simple conditional operations.
QBism, relational quantum mechanics and the De Broglie–Bohm theory have been argued to avoid the contradiction suggested by the extended Wigner's-friend scenario of Frauchiger and Renner.
In fiction.
Stephen Baxter's novel "Timelike Infinity" (1992) discusses a variation of Wigner's friend thought experiment through a refugee group of humans self-named "The Friends of Wigner". They believe that an ultimate observer at the end of time may collapse all possible entangled wave-functions generated since the beginning of the universe, hence choosing a reality without oppression. | [
{
"math_id": 0,
"text": "W"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "|0\\rangle"
},
{
"math_id": 3,
"text": "|1\\rangle"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "\\alpha|0\\rangle_S + \\beta|1\\rangle_S \\to |1\\rangle_S"
},
{
"math_id": 6,
"text": "\\big(\\alpha|0\\rangle_S + \\beta|1\\rangle_S\\big) |\\bot\\rangle_F \\to \\alpha\\big(|0\\rangle_S \\otimes |0\\rangle_F\\big) + \\beta\\big(|1\\rangle_S \\otimes |1\\rangle_F\\big),"
},
{
"math_id": 7,
"text": "|\\bot\\rangle_F"
},
{
"math_id": 8,
"text": "|1\\rangle_F"
},
{
"math_id": 9,
"text": "|0\\rangle_F"
},
{
"math_id": 10,
"text": "F_1"
},
{
"math_id": 11,
"text": "W_1"
},
{
"math_id": 12,
"text": "F_2"
},
{
"math_id": 13,
"text": "W_2"
}
]
| https://en.wikipedia.org/wiki?curid=71119 |
711288 | Viscoelasticity | Property of materials with both viscous and elastic characteristics under deformation
In materials science and continuum mechanics, viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like water, resist both shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed.
Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material.
Background.
In the nineteenth century, physicists such as James Clerk Maxwell, Ludwig Boltzmann, and Lord Kelvin researched and experimented with creep and recovery of glasses, metals, and rubbers. Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, "η". The inverse of "η" is also known as fluidity, "φ". The value of either can be derived as a function of temperature or as a given value (e.g. for a dashpot).
Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material. In this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid. There is also an interesting case where the viscosity decreases over time even though the shear/strain rate remains constant; a material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the thermodynamic theory of polymer elasticity.
Some examples of viscoelastic materials are amorphous polymers, semicrystalline polymers, biopolymers, metals at very high temperatures, and bitumen materials. Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends on both the rate of the change of their length and the force applied.
A viscoelastic material has the following properties:
Elastic versus viscoelastic behavior.
Unlike purely elastic substances, a viscoelastic substance has an elastic component and a viscous component. The viscosity of a viscoelastic substance gives the substance a strain rate dependence on time. Purely elastic materials do not dissipate energy (heat) when a load is applied, then removed. However, a viscoelastic substance dissipates energy when a load is applied, then removed. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle.
Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change positions. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging in order to accommodate the stress, and as this occurs, it creates a back stress in the material. When the back stress is the same magnitude as the applied stress, the material no longer creeps. When the original stress is taken away, the accumulated back stresses will cause the polymer to return to its original form. The material creeps, which gives the prefix visco-, and the material fully recovers, which gives the suffix -elasticity.
Linear viscoelasticity and nonlinear viscoelasticity.
Linear viscoelasticity is when the function is separable in both creep response and load. All linear viscoelastic models can be represented by a Volterra equation connecting stress and strain:
formula_3
or
formula_4
where
Linear viscoelasticity is usually applicable only for small deformations.
Nonlinear viscoelasticity is when the function is not separable. It usually happens when the deformations are large or if the material changes its properties under deformations. Nonlinear viscoelasticity also elucidates observed phenomena such as normal stresses, shear thinning, and extensional thickening in viscoelastic fluids.
An anelastic material is a special case of a viscoelastic material: an anelastic material will fully recover to its original state on the removal of load.
When distinguishing between elastic, viscous, and forms of viscoelastic behavior, it is helpful to reference the time scale of the measurement relative to the relaxation times of the material being observed, known as the Deborah number (De) where:
formula_9
where formula_10 is the relaxation time of the material and formula_11 is the time scale of the observation or measurement.
Dynamic modulus.
Viscoelasticity is studied using dynamic mechanical analysis, applying a small oscillatory stress and measuring the resulting strain.
A complex dynamic modulus G can be used to represent the relations between the oscillating stress and strain:
formula_12
where formula_13; formula_14 is the "storage modulus" and formula_15 is the "loss modulus":
formula_16
formula_17
where formula_18 and formula_19 are the amplitudes of stress and strain respectively, and formula_20 is the phase shift between them.
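As a minimal numerical sketch of these relations (the amplitude and phase values below are arbitrary assumptions, not measured data):

```python
import math

# Illustrative oscillatory-test values (assumed, not measured)
sigma_0 = 2.0e5             # stress amplitude, Pa
eps_0 = 0.01                # strain amplitude, dimensionless
delta = math.radians(15.0)  # phase shift between stress and strain

G_storage = (sigma_0 / eps_0) * math.cos(delta)  # G', in-phase (elastic) part
G_loss = (sigma_0 / eps_0) * math.sin(delta)     # G'', out-of-phase (viscous) part
tan_delta = G_loss / G_storage                   # loss tangent

print(f"G'  = {G_storage:.3e} Pa")
print(f"G'' = {G_loss:.3e} Pa")
print(f"tan(delta) = {tan_delta:.3f}")
```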
Constitutive models of linear viscoelasticity.
Viscoelastic materials, such as amorphous polymers, semicrystalline polymers, biopolymers and even the living tissue and cells, can be modeled in order to determine their stress and strain or force and displacement interactions as well as their temporal dependencies. These models, which include the Maxwell model, the Kelvin–Voigt model, the standard linear solid model, and the Burgers model, are used to predict a material's response under different loading conditions.
Viscoelastic behavior has elastic and viscous components modeled as linear combinations of springs and dashpots, respectively. Each model differs in the arrangement of these elements, and all of these viscoelastic models can be equivalently modeled as electrical circuits.
In an equivalent electrical circuit, stress is represented by current, and strain rate by voltage. The elastic modulus of a spring is analogous to the inverse of a circuit's "inductance" (it stores energy) and the viscosity of a dashpot to a circuit's "resistance" (it dissipates energy).
The elastic components, as previously mentioned, can be modeled as springs of elastic constant E, given the formula:
formula_21
where σ is the stress, E is the elastic modulus of the material, and ε is the strain that occurs under the given stress, similar to Hooke's law.
The viscous components can be modeled as dashpots such that the stress–strain rate relationship can be given as,
formula_22
where σ is the stress, η is the viscosity of the material, and dε/dt is the time derivative of strain.
The relationship between stress and strain can be simplified for specific stress or strain rates. For high stress or strain rates/short time periods, the time derivative components of the stress–strain relationship dominate. In these conditions the dashpot can be approximated as a rigid rod capable of sustaining high loads without deforming. Hence, the dashpot can be considered to be a "short-circuit".
Conversely, for low stress states/longer time periods, the time derivative components are negligible and the dashpot can be effectively removed from the system – an "open" circuit. As a result, only the spring connected in parallel to the dashpot will contribute to the total strain in the system.
Maxwell model.
The Maxwell model can be represented by a purely viscous damper and a purely elastic spring connected in series, as shown in the diagram. The model can be represented by the following equation:
formula_23
Under this model, if the material is put under a constant strain, the stresses gradually relax. When a material is put under a constant stress, the strain has two components. First, an elastic component occurs instantaneously, corresponding to the spring, and relaxes immediately upon release of the stress. The second is a viscous component that grows with time as long as the stress is applied. The Maxwell model predicts that stress decays exponentially with time, which is accurate for most polymers. One limitation of this model is that it does not predict creep accurately. The Maxwell model for creep or constant-stress conditions postulates that strain will increase linearly with time. However, polymers for the most part show the strain rate to be decreasing with time.
This model can be applied to soft solids: thermoplastic polymers in the vicinity of their melting temperature, fresh concrete (neglecting its aging), and numerous metals at a temperature close to their melting point.
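The closed-form responses described above can be sketched in a few lines of Python; the modulus, viscosity, stress and strain values below are illustrative assumptions rather than data for any real material, and NumPy is assumed to be available.

```python
import numpy as np

E, eta = 1.0e9, 1.0e11       # assumed modulus (Pa) and viscosity (Pa*s)
tau = eta / E                # relaxation time, s
t = np.linspace(0.0, 5.0 * tau, 6)

# Stress relaxation under a constant strain eps0: exponential decay
eps0 = 0.01
sigma_relax = E * eps0 * np.exp(-t / tau)

# Creep under a constant stress sigma0: strain grows linearly in time
# (the model's known limitation for most polymers)
sigma0 = 1.0e6
eps_creep = sigma0 * (1.0 / E + t / eta)

print("t/tau:              ", t / tau)
print("sigma(t)/sigma(0):  ", np.round(sigma_relax / sigma_relax[0], 4))
print("creep strain eps(t):", eps_creep)
```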
The equation introduced here, however, lacks a consistent derivation from a more microscopic model and is not observer-independent. The upper-convected Maxwell model is its sound formulation in terms of the Cauchy stress tensor and constitutes the simplest tensorial constitutive model for viscoelasticity.
Kelvin–Voigt model.
The Kelvin–Voigt model, also known as the Voigt model, consists of a Newtonian damper and Hookean elastic spring connected in parallel, as shown in the picture. It is used to explain the creep behaviour of polymers.
The constitutive relation is expressed as a linear first-order differential equation:
formula_24
This model represents a solid undergoing reversible, viscoelastic strain. Upon application of a constant stress, the material deforms at a decreasing rate, asymptotically approaching the steady-state strain. When the stress is released, the material gradually relaxes to its undeformed state. At constant stress (creep), the model is quite realistic as it predicts strain to tend to σ/E as time continues to infinity. Similar to the Maxwell model, the Kelvin–Voigt model also has limitations. It is very good at modelling creep in materials, but with regard to relaxation it is much less accurate.
This model can be applied to organic polymers, rubber, and wood when the load is not too high.
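A corresponding sketch of the Kelvin–Voigt creep response under a constant stress, again with made-up parameters and assuming NumPy is available.

```python
import numpy as np

E, eta = 2.0e9, 4.0e10   # assumed modulus (Pa) and viscosity (Pa*s)
tau = eta / E            # retardation time, s
sigma0 = 1.0e6           # constant applied stress, Pa
t = np.linspace(0.0, 5.0 * tau, 6)

# Creep response: strain approaches sigma0/E asymptotically
eps = (sigma0 / E) * (1.0 - np.exp(-t / tau))
print("t/tau:             ", t / tau)
print("eps(t)/(sigma0/E): ", np.round(eps / (sigma0 / E), 4))
```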
Standard linear solid model.
The standard linear solid model, also known as the Zener model, consists of two springs and a dashpot. It is the simplest model that describes both the creep and stress relaxation behaviors of a viscoelastic material properly. For this model, the governing constitutive relations are:
Under a constant stress, the modeled material will instantaneously deform to some strain, which is the instantaneous elastic portion of the strain. After that it will continue to deform and asymptotically approach a steady-state strain, which is the retarded elastic portion of the strain. Although the standard linear solid model is more accurate than the Maxwell and Kelvin–Voigt models in predicting material responses, mathematically it returns inaccurate results for strain under specific loading conditions.
Jeffreys model.
The Jeffreys model, like the Zener model, is a three-element model. It consists of two dashpots and a spring.
It was proposed in 1929 by Harold Jeffreys to study Earth's mantle.
Burgers model.
The Burgers model consists of either two Maxwell components in parallel or a Kelvin–Voigt component, a spring and a dashpot in series. For this model, the governing constitutive relations are:
This model incorporates viscous flow into the standard linear solid model, giving a linearly increasing asymptote for strain under fixed loading conditions.
Generalized Maxwell model.
The generalized Maxwell model, also known as the Wiechert model, is the most general form of the linear model for viscoelasticity. It takes into account that the relaxation does not occur at a single time, but at a distribution of times. Because molecular segments of different lengths relax on different time scales, with shorter ones contributing less than longer ones, there is a distribution of relaxation times. The Wiechert model captures this by having as many spring–dashpot Maxwell elements as are necessary to accurately represent the distribution. The figure on the right shows the generalised Wiechert model.
Applications: metals and alloys at temperatures lower than one quarter of their absolute melting temperature (expressed in K).
Constitutive models for nonlinear viscoelasticity.
Non-linear viscoelastic constitutive equations are needed to quantitatively account for phenomena in fluids like differences in normal stresses, shear thinning, and extensional thickening. Necessarily, the history experienced by the material is needed to account for time-dependent behavior, and is typically included in models as a history kernel K.
Second-order fluid.
The second-order fluid is typically considered the simplest nonlinear viscoelastic model, and it applies in a narrow region of material behavior, at high strain amplitudes and at Deborah numbers in the transition between Newtonian fluids and other, more complicated nonlinear viscoelastic fluids. The second-order fluid constitutive equation is given by:
formula_25
where formula_26 is the identity tensor, formula_27 is the rate-of-deformation tensor, formula_28 are material constants, and formula_29 denotes a convected time derivative of the rate-of-deformation tensor, defined as formula_30 with formula_31.
Upper-convected Maxwell model.
The upper-convected Maxwell model incorporates nonlinear time behavior into the viscoelastic Maxwell model, given by:
formula_32
where formula_33 denotes the stress tensor.
Oldroyd-B model.
The Oldroyd-B model is an extension of the Upper Convected Maxwell model and is interpreted as a solvent filled with elastic bead and spring dumbbells.
The model is named after its creator James G. Oldroyd.
The model can be written as:
formula_34
where formula_35 is the stress tensor; formula_36 is the relaxation time; formula_37 is the retardation time, formula_38; formula_39 is the upper-convected time derivative of the stress tensor, formula_40 formula_41 is the fluid velocity; formula_42 is the total viscosity, composed of solvent and polymer components, formula_43; and formula_44 is the rate-of-deformation tensor, formula_45.
Whilst the model gives good approximations of viscoelastic fluids in shear flow, it has an unphysical singularity in extensional flow, where the dumbbells are infinitely stretched. This is, however, specific to idealised flow; in the case of a cross-slot geometry the extensional flow is not ideal, so although the stress becomes singular it remains integrable, the stress being infinite only in a correspondingly infinitesimally small region.
If the solvent viscosity is zero, the Oldroyd-B becomes the upper convected Maxwell model.
Wagner model.
The Wagner model may be considered a simplified practical form of the Bernstein–Kearsley–Zapas model. It was developed by German rheologist Manfred Wagner.
For the isothermal conditions the model can be written as:
formula_46
where formula_47 is the Cauchy stress tensor as a function of time; "p" is the pressure; formula_48 is the unity tensor; "M" is the memory function, usually expressed as a sum of exponential terms, formula_49 where formula_50 and formula_51 are material parameters; formula_52 is the strain damping function; and formula_53 is the Finger tensor.
The "strain damping function" is usually written as:
formula_54
If the value of the strain damping function is equal to one, then the deformation is small; if it approaches zero, then the deformations are large.
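A small numerical sketch of the damping function above; the values chosen for m*, n1 and n2 are illustrative assumptions.

```python
import math

def damping(I1, I2, m_star=0.5, n1=0.2, n2=0.05):
    """Wagner strain-damping function h(I1, I2) with assumed constants."""
    return (m_star * math.exp(-n1 * math.sqrt(I1 - 3.0))
            + (1.0 - m_star) * math.exp(-n2 * math.sqrt(I2 - 3.0)))

print(damping(3.0, 3.0))    # undeformed state (I1 = I2 = 3): h = 1
print(damping(12.0, 15.0))  # larger deformation: h drops below 1
```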
Prony series.
In a one-dimensional relaxation test, the material is subjected to a sudden strain that is kept constant over the duration of the test, and the stress is measured over time. The initial stress is due to the elastic response of the material. Then, the stress relaxes over time due to the viscous effects in the material. Typically, either a tensile, compressive, bulk compression, or shear strain is applied. The resulting stress vs. time data can be fitted with a number of equations, called models. Only the notation changes depending on the type of strain applied: tensile-compressive relaxation is denoted formula_55, shear is denoted formula_56, bulk is denoted formula_57. The Prony series for the shear relaxation is
formula_58
where formula_59 is the long-term modulus once the material is totally relaxed, formula_60 are the relaxation times (not to be confused with formula_60 in the diagram); the higher their values, the longer it takes for the stress to relax. The data is fitted with the equation by using a minimization algorithm that adjusts the parameters (formula_61) to minimize the error between the predicted values and the data.
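A minimal sketch of such a fit, assuming NumPy and SciPy are available; the two-term series, its parameter values and the noise level are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic shear-relaxation "data" from an assumed two-term Prony series
t = np.logspace(-2, 3, 60)                                   # time, s
true = dict(G_inf=0.5, G1=1.0, tau1=0.1, G2=0.8, tau2=50.0)  # MPa and s, made up
G_data = true["G_inf"] + true["G1"] * np.exp(-t / true["tau1"]) \
                       + true["G2"] * np.exp(-t / true["tau2"])
G_data *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(t.size)  # noise

def prony2(t, G_inf, G1, tau1, G2, tau2):
    """Two-term Prony series G(t) = G_inf + sum_i G_i exp(-t/tau_i)."""
    return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

p0 = [0.4, 1.0, 1.0, 1.0, 10.0]   # initial guess for the minimisation
params, _ = curve_fit(prony2, t, G_data, p0=p0, maxfev=20000)
print("fitted (G_inf, G1, tau1, G2, tau2):", np.round(params, 3))
```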
An alternative form is obtained noting that the elastic modulus is related to the long term modulus by
formula_62
Therefore,
formula_63
This form is convenient when the elastic shear modulus formula_64 is obtained from data independent from the relaxation data, and/or for computer implementation, when it is desired to specify the elastic properties separately from the viscous properties, as in Simulia (2010).
A creep experiment is usually easier to perform than a relaxation one, so most data is available as (creep) compliance vs. time. Unfortunately, there is no known closed form for the (creep) compliance in terms of the coefficients of the Prony series. So, if one has creep data, it is not easy to get the coefficients of the (relaxation) Prony series, which are needed, for example, in the computer implementations mentioned above. An expedient way to obtain these coefficients is the following. First, fit the creep data with a model that has closed-form solutions in both compliance and relaxation; for example the Maxwell-Kelvin model (eq. 7.18-7.19) in Barbero (2007) or the Standard Solid Model (eq. 7.20-7.21) in Barbero (2007) (section 7.1.3). Once the parameters of the creep model are known, produce relaxation pseudo-data with the conjugate relaxation model for the same times as the original data. Finally, fit the pseudo-data with the Prony series.
Effect of temperature.
The secondary bonds of a polymer constantly break and reform due to thermal motion. Application of a stress favors some conformations over others, so the molecules of the polymer will gradually "flow" into the favored conformations over time. Because thermal motion is one factor contributing to the deformation of polymers, viscoelastic properties change with increasing or decreasing temperature. In most cases, the creep modulus, defined as the ratio of applied stress to the time-dependent strain, decreases with increasing temperature. Generally speaking, an increase in temperature correlates to a logarithmic decrease in the time required to impart equal strain under a constant stress. In other words, it takes less work to stretch a viscoelastic material an equal distance at a higher temperature than it does at a lower temperature.
More detailed effect of temperature on the viscoelastic behavior of polymer can be plotted as shown.
There are mainly five regions (some texts denote four, combining regions IV and V) in the typical viscoelastic response of polymers.
Extreme cold temperatures can cause viscoelastic materials to change to the glass phase and become brittle. For example, exposure of pressure sensitive adhesives to extreme cold (dry ice, freeze spray, etc.) causes them to lose their tack, resulting in debonding.
Viscoelastic creep.
When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.
At time formula_65, a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails, if it is a viscoelastic liquid. If, on the other hand, it is a viscoelastic solid, it may or may not fail depending on the applied stress versus the material's ultimate resistance. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time formula_66, after which the strain immediately decreases (discontinuity) then gradually decreases at times formula_67 to a residual strain.
Viscoelastic creep data can be presented by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. Below its critical stress, the viscoelastic creep modulus is independent of stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value.
Viscoelastic creep is important when considering long-term structural design. Given loading and temperature conditions, designers can choose materials that best suit component lifetimes.
Measurement.
Shear rheometry.
Shear rheometers are based on the idea of putting the material to be measured between two plates, one or both of which move in a shear direction to induce stresses and strains in the material. The testing can be done at constant strain rate, stress, or in an oscillatory fashion (a form of dynamic mechanical analysis). Shear rheometers are typically limited by edge effects where the material may leak out from between the two plates and slipping at the material/plate interface.
Extensional rheometry.
Extensional rheometers, also known as extensiometers, measure viscoelastic properties by pulling a viscoelastic fluid, typically uniaxially. Because this typically makes use of capillary forces and confines the fluid to a narrow geometry, the technique is often limited to fluids with relatively low viscosity like dilute polymer solutions or some molten polymers. Extensional rheometers are also limited by edge effects at the ends of the extensiometer and pressure differences between inside and outside the capillary.
Despite the apparent limitations mentioned above, extensional rheometry can also be performed on high viscosity fluids. Although this requires the use of different instruments, these techniques and apparatuses allow for the study of the extensional viscoelastic properties of materials such as polymer melts. Three of the most common extensional rheometry instruments developed within the last 50 years are the Meissner-type rheometer, the filament stretching rheometer (FiSER), and the Sentmanat Extensional Rheometer (SER).
The Meissner-type rheometer, developed by Meissner and Hostettler in 1996, uses two sets of counter-rotating rollers to strain a sample uniaxially. This method uses a constant sample length throughout the experiment, and supports the sample in between the rollers via an air cushion to eliminate sample sagging effects. It does suffer from a few issues – for one, the fluid may slip at the belts which leads to lower strain rates than one would expect. Additionally, this equipment is challenging to operate and costly to purchase and maintain.
The FiSER rheometer simply contains fluid in between two plates. During an experiment, the top plate is held steady and a force is applied to the bottom plate, moving it away from the top one. The strain rate is measured by the rate of change of the sample radius at its middle. It is calculated using the following equation:
formula_68
where formula_69 is the mid-radius value and formula_70 is the strain rate. The viscosity of the sample is then calculated using the following equation:
formula_71
where formula_72 is the sample viscosity, and formula_73 is the force applied to the sample to pull it apart.
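A small worked example of these two relations; the radius, thinning rate and force values are invented for illustration.

```python
import math

# Assumed instantaneous FiSER-style readings
R = 1.5e-3        # mid-filament radius, m
dR_dt = -2.0e-4   # rate of change of that radius, m/s (negative: thinning)
F = 0.12          # tensile force on the filament, N

strain_rate = -2.0 / R * dR_dt                 # from the mid-radius evolution
eta_ext = F / (math.pi * R**2 * strain_rate)   # extensional viscosity

print(f"strain rate           = {strain_rate:.3f} 1/s")
print(f"extensional viscosity = {eta_ext:.0f} Pa*s")
```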
Much like the Meissner-type rheometer, the SER rheometer uses a set of two rollers to strain a sample at a given rate. It then calculates the sample viscosity using the well-known equation:
formula_74
where formula_0 is the stress, formula_72 is the viscosity and formula_70 is the strain rate. The stress in this case is determined via torque transducers present in the instrument. The small size of this instrument makes it easy to use and eliminates sample sagging between the rollers. A schematic detailing the operation of the SER extensional rheometer can be found on the right.
Other methods.
Though there are many instruments that test the mechanical and viscoelastic response of materials, broadband viscoelastic spectroscopy (BVS) and resonant ultrasound spectroscopy (RUS) are more commonly used to test viscoelastic behavior because they can be used above and below ambient temperatures and are more specific to testing viscoelasticity. These two instruments employ a damping mechanism at various frequencies and time ranges with no appeal to time–temperature superposition. Using BVS and RUS to study the mechanical properties of materials is important to understanding how a material exhibiting viscoelasticity will perform.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma"
},
{
"math_id": 1,
"text": "\\dot {\\varepsilon} "
},
{
"math_id": 2,
"text": "\\dot {\\sigma}"
},
{
"math_id": 3,
"text": "\\varepsilon(t) = \\frac { \\sigma(t) }{ E_\\text{inst,creep} }+ \\int_0^t K(t - t') \\dot{\\sigma}(t') dt'"
},
{
"math_id": 4,
"text": "\\sigma(t)= E_\\text{inst,relax}\\varepsilon(t)+ \\int_0^t F(t - t') \\dot{\\varepsilon}(t') dt'"
},
{
"math_id": 5,
"text": "\\sigma (t)"
},
{
"math_id": 6,
"text": "\\varepsilon (t)"
},
{
"math_id": 7,
"text": "E_\\text{inst,creep}"
},
{
"math_id": 8,
"text": "E_\\text{inst,relax}"
},
{
"math_id": 9,
"text": "De = \\lambda/t"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "G = G' + iG''"
},
{
"math_id": 13,
"text": "i^2 = -1"
},
{
"math_id": 14,
"text": "G'"
},
{
"math_id": 15,
"text": "G''"
},
{
"math_id": 16,
"text": " G' = \\frac {\\sigma_0} {\\varepsilon_0} \\cos \\delta "
},
{
"math_id": 17,
"text": " G'' = \\frac {\\sigma_0} {\\varepsilon_0} \\sin \\delta "
},
{
"math_id": 18,
"text": "\\sigma_0"
},
{
"math_id": 19,
"text": "\\varepsilon_0"
},
{
"math_id": 20,
"text": "\\delta"
},
{
"math_id": 21,
"text": "\\sigma = E \\varepsilon"
},
{
"math_id": 22,
"text": "\\sigma = \\eta \\frac{d\\varepsilon}{dt}"
},
{
"math_id": 23,
"text": "\\sigma + \\frac {\\eta} {E} \\dot {\\sigma} = \\eta \\dot {\\varepsilon}"
},
{
"math_id": 24,
"text": "\\sigma = E \\varepsilon + \\eta \\dot {\\varepsilon}"
},
{
"math_id": 25,
"text": "\\mathbf T = -p\\mathbf I + 2 \\eta_0\\mathbf D - \\psi_1 \\mathbf D^\\triangledown + 4\\psi _2 \\mathbf D \\bullet\\mathbf D"
},
{
"math_id": 26,
"text": "\\mathbf I"
},
{
"math_id": 27,
"text": "\\mathbf D"
},
{
"math_id": 28,
"text": "\\eta_0 , \\psi_1 , \\psi_2"
},
{
"math_id": 29,
"text": "\\mathbf D ^\\triangledown"
},
{
"math_id": 30,
"text": "\\mathbf D ^\\triangledown \\equiv \\dot \\mathbf D - (\\bigtriangledown\\mathbf v) ^\\mathbf T \\bullet \\mathbf D - \\mathbf D \\bullet \\bigtriangledown \\mathbf v"
},
{
"math_id": 31,
"text": "\\dot \\mathbf D \\equiv \\frac {\\partial}{ \\partial t} \\mathbf D + \\mathbf v \\bullet \\bigtriangledown \\mathbf D"
},
{
"math_id": 32,
"text": "\\mathbf \\tau + \\lambda \\mathbf \\tau ^\\triangledown = 2 \\eta _0 \\mathbf D"
},
{
"math_id": 33,
"text": "\\mathbf \\tau"
},
{
"math_id": 34,
"text": " \\mathbf{T} + \\lambda_1 \\stackrel{\\nabla}{\\mathbf{T}} = 2\\eta_0 (\\mathbf{D} + \\lambda_2 \\stackrel{\\nabla}{\\mathbf{D}}) "
},
{
"math_id": 35,
"text": "\\mathbf{T}"
},
{
"math_id": 36,
"text": "\\lambda_1"
},
{
"math_id": 37,
"text": "\\lambda_2"
},
{
"math_id": 38,
"text": " \\frac{\\eta_s}{\\eta_0}\\lambda_1 "
},
{
"math_id": 39,
"text": " \\stackrel{\\nabla}{\\mathbf{T}} "
},
{
"math_id": 40,
"text": " \\stackrel{\\nabla}{\\mathbf{T}} = \\frac{\\partial}{\\partial t} \\mathbf{T} + \\mathbf{v} \\cdot \\nabla \\mathbf{T} -( (\\nabla \\mathbf{v})^T \\cdot \\mathbf{T} + \\mathbf{T} \\cdot (\\nabla \\mathbf{v})); "
},
{
"math_id": 41,
"text": "\\mathbf{v}"
},
{
"math_id": 42,
"text": "\\eta_0"
},
{
"math_id": 43,
"text": " \\eta_0 = \\eta_s + \\eta_p "
},
{
"math_id": 44,
"text": "\\mathbf {D}"
},
{
"math_id": 45,
"text": "\\mathbf{D} = \\frac{1}{2} \\left[\\boldsymbol\\nabla \\mathbf{v} + (\\boldsymbol\\nabla \\mathbf{v})^T\\right]"
},
{
"math_id": 46,
"text": "\\mathbf{\\sigma}(t) = -p \\mathbf{I} + \\int_{-\\infty}^{t} M(t-t')h(I_1,I_2)\\mathbf{B}(t')\\, dt'"
},
{
"math_id": 47,
"text": "\\mathbf{\\sigma}(t)"
},
{
"math_id": 48,
"text": "\\mathbf{I}"
},
{
"math_id": 49,
"text": "M(x)=\\sum_{k=1}^m \\frac{g_i}{\\theta_i}\\exp \\left(\\frac{-x}{\\theta_i}\\right),"
},
{
"math_id": 50,
"text": "g_i"
},
{
"math_id": 51,
"text": "\\theta_i"
},
{
"math_id": 52,
"text": "h(I_1,I_2)"
},
{
"math_id": 53,
"text": "\\mathbf{B}"
},
{
"math_id": 54,
"text": "h(I_1,I_2)=m^*\\exp(-n_1 \\sqrt{I_1-3})+(1-m^*)\\exp(-n_2 \\sqrt{I_2-3})"
},
{
"math_id": 55,
"text": "E"
},
{
"math_id": 56,
"text": "G"
},
{
"math_id": 57,
"text": "K"
},
{
"math_id": 58,
"text": "\nG(t) = G_\\infty + \\sum_{i=1}^{N} G_i \\exp(-t/\\tau_i)\n"
},
{
"math_id": 59,
"text": "G_\\infty"
},
{
"math_id": 60,
"text": "\\tau_i"
},
{
"math_id": 61,
"text": "G_\\infty, G_i, \\tau_i"
},
{
"math_id": 62,
"text": "\nG(t=0) = G_0 = G_\\infty+\\sum_{i=1}^{N} G_i\n"
},
{
"math_id": 63,
"text": "\nG(t) = G_0 - \\sum_{i=1}^{N} G_i \\left[1-e^{-t / \\tau_i}\\right]\n"
},
{
"math_id": 64,
"text": "G_0"
},
{
"math_id": 65,
"text": "t_0"
},
{
"math_id": 66,
"text": "t_1"
},
{
"math_id": 67,
"text": "t > t_1"
},
{
"math_id": 68,
"text": "\\dot{\\epsilon} = -\\frac{2}{R}{dR \\over dt}"
},
{
"math_id": 69,
"text": "R"
},
{
"math_id": 70,
"text": "\\dot{\\epsilon}"
},
{
"math_id": 71,
"text": "\\eta = \\frac{F}{\\pi R^2 \\dot{\\epsilon}}"
},
{
"math_id": 72,
"text": "\\eta"
},
{
"math_id": 73,
"text": "F"
},
{
"math_id": 74,
"text": "\\sigma = \\eta \\dot{\\epsilon}"
}
]
| https://en.wikipedia.org/wiki?curid=711288 |
71138 | Wave function collapse | Process by which a quantum system takes on a definitive state
In quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an "observation", and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.
Calculations of quantum decoherence show that when a quantum system interacts with the environment, the superpositions "apparently" reduce to mixtures of classical alternatives. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation throughout this "apparent" collapse. More importantly, this is not enough to explain "actual" wave function collapse, as decoherence does not reduce it to a single eigenstate.
Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.
Mathematical description.
In quantum mechanics each measurable physical quantity of a quantum system is called an observable which, for example, could be the position formula_0 and the momentum formula_1 but also energy formula_2, formula_3 components of spin (formula_4), and so on. The observable acts as a linear operator on the states of the system; its eigenvectors correspond to the quantum states (i.e. eigenstates) and its eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing formula_5 for an eigenstate and formula_6 for the corresponding expansion coefficient, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation:
formula_7
The kets formula_8 specify the different available quantum "alternatives", i.e., particular quantum states.
The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable though the converse is not necessarily true.
Collapse.
To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single component eigenstate of the observable:
formula_9
where the arrow represents a measurement of the observable corresponding to the formula_10 basis.
For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.
Meaning of the expansion coefficients.
The complex coefficients formula_11 in the expansion of a quantum state in terms of eigenstates formula_8,
formula_7
can be written as an (complex) overlap of the corresponding eigenstate and the quantum state:
formula_12
They are called the probability amplitudes. The square modulus formula_13 is the probability that a measurement of the observable yields the eigenstate formula_14. The sum of the probability over all possible outcomes must be one:
formula_15
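A short numerical sketch of these rules, assuming NumPy is available; the three-component state and its coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary state expanded over three eigenstates |phi_i>
c = np.array([0.6, 0.3 + 0.4j, 0.1j])
c = c / np.linalg.norm(c)        # normalise so the probabilities sum to one

p = np.abs(c) ** 2               # Born rule: probability of each outcome is |c_i|^2
print("probabilities:", np.round(p, 3), " sum =", p.sum())

# Each simulated "measurement" selects one eigenstate at random with these
# probabilities; after it, the state vector is replaced by that eigenstate.
outcomes = rng.choice(len(c), size=10, p=p)
print("ten simulated measurement outcomes (eigenstate indices):", outcomes)
```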
As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the final distribution shows equal numbers of events in each area.
This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function and measurements of its wave function can only give statistical information.
Terminology.
The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable.
The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse".
The measurement problem.
The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.
Physical approaches to collapse.
Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".
Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes for example the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.
The significance ascribed to the wave function varies from interpretation to interpretation, and varies even within an interpretation (such as the Copenhagen Interpretation). If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.
Quantum decoherence.
Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.
History.
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise "Mathematische Grundlagen der Quantenmechanik". Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr also repeatedly cautioned that we must give up a "pictorial representation", and perhaps also interpreted collapse as a formal, not physical, process.
The "Copenhagen" model espoused by Heisenberg and Bohr separated the quantum system from the classical measurement apparatus. In 1932
von Neumann took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change: the discontinuous, probabilistic change brought about by measurement and observation, and the continuous, deterministic time evolution according to the Schrödinger equation.
In 1957 Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical, and what causes them to resolve with the observed probabilities of the Born rule.
Beginning in 1970 H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.
By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the "necessity" of such a collapse. Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular Compton scattering was paradigmatic). Later work discussed so-called measurements of the "second kind", that is to say measurements that will not give the same value when immediately repeated as opposed to the more easily discussed measurements of the "first kind", which will.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "s_{z}"
},
{
"math_id": 5,
"text": "\\phi_i"
},
{
"math_id": 6,
"text": "c_i"
},
{
"math_id": 7,
"text": " | \\psi \\rangle = \\sum_i c_i | \\phi_i \\rangle."
},
{
"math_id": 8,
"text": "\\{| \\phi_i \\rangle\\}"
},
{
"math_id": 9,
"text": " | \\psi \\rangle = \\sum_i c_i | \\phi_i \\rangle \\rightarrow |\\psi'\\rangle = |\\phi_i\\rangle."
},
{
"math_id": 10,
"text": "\\phi"
},
{
"math_id": 11,
"text": "\\{c_{i}\\}"
},
{
"math_id": 12,
"text": " c_i = \\langle \\phi_i | \\psi \\rangle ."
},
{
"math_id": 13,
"text": "|c_{i}|^{2}"
},
{
"math_id": 14,
"text": "| \\phi_i \\rangle"
},
{
"math_id": 15,
"text": "\\langle \\psi|\\psi \\rangle = \\sum_i |c_i|^2 = 1."
}
]
| https://en.wikipedia.org/wiki?curid=71138 |
71138035 | William Spence (mathematician) | Scottish mathematician (1777–1815)
William Spence (born 31 July 1777 in Greenock, Scotland – died 22 May 1815 in Glasgow, Scotland) was a Scottish mathematician who published works on the fields of logarithmic functions, algebraic equations and their relation to integral and differential calculus respectively.
Early life, family, and personal life.
Spence was the second son to Ninian Spence and his wife Sarah Townsend. Ninian Spence ran a coppersmith business, and the Spence family were a prominent family in Greenock at the time.
From an early age, Spence was characterised as having a docile and reasonable nature, being mature for his age. At school he formed a life-long friendship with John Galt, who documented much of his life and his works posthumously. Despite having received a formal education until he was a teenager, Spence never attended university; instead he moved to Glasgow, where he lodged with a friend of his father's, learning the skills of a manufacturer.
Two years after his father's death in 1795, Spence returned to Greenock in 1797. With the support of Galt and others, he established a small literary society, wherein once a month they read a range of essays on varying subjects; this society met frequently until 1804. After this, Spence visited many places in England and lived in London for a few months, where, in 1809, he published his first work. In 1814, he published his second work and married in the same year. Spence intended to live in London and began his journey back, but became ill on the way; having travelled as far as Glasgow, he died there in his sleep.
Spence held an interest in musical composition, and played the flute.
Published works.
Spence published "An Essay on the Theory of the Various Orders of Logarithmic Transcendents: With an Inquiry Into Their Applications to the Integral Calculus and the Summation of Series in 1809." Throughout his work, he displayed a familiarity with the work of Lagrange and Arbogast, which is notable since at the time very few were familiar with their works. In his preface he derived the binomial theorem and mainly focused on the properties and analytic applications of the series:
formula_0
which he denoted with formula_1. He went on further to derive nine general properties of this function in a table.
Spence goes on to calculate the values of:
formula_2
(the dilogarithm) to nine decimal places, in a table, for all integer values of formula_3 from 1 to 100, the first ever of its kind. These functions became known as the polylogarithm functions, with this particular case often called Spence's function after Spence.
Later on he also created a similar table for formula_4.
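As a quick numerical check of Spence's tabulated function, the dilogarithm can be evaluated both from its power series and from the integral definition above; SciPy is assumed for the quadrature and the sample arguments are arbitrary.

```python
import math
from scipy.integrate import quad

def li2_series(x, terms=2000):
    """Dilogarithm as the series sum_{k>=1} x^k / k^2, valid for |x| <= 1."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def li2_integral(x):
    """Dilogarithm as -integral from 0 to x of ln(1 - t)/t dt."""
    integrand = lambda t: 1.0 if t == 0.0 else -math.log(1.0 - t) / t
    value, _ = quad(integrand, 0.0, x)
    return value

for x in (0.25, 0.5, 0.9):
    print(x, round(li2_series(x), 9), round(li2_integral(x), 9))
```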
Spence published his last work, "Outlines of a theory of Algebraical Equations, deduced from the principles of Harriott, and extended to the fluxional or differential calculus", in 1814. In it he took a systematic approach to solving equations up to the fourth degree using symmetrical functions of the roots.
After Spence's death, John Herschel edited "Mathematical Essays by the late William Spence", which was published in 1819, with John Galt writing a biography on Spence.
Legacy.
Spence's work was noted to be remarkable at the time. John Herschel, his acquaintance and one of Britain's leading mathematicians at the time, referenced it in one of his later publications, "Consideration of various points of analysis", which prompted Herschel to edit Spence's manuscripts. Spence was held in such high regard by Galt, and later Herschel, that they published a collection of his individual essays in 1819. Posthumously, his work was met with appreciation from his contemporaries, with a review in the ninety-fourth number of the Quarterly Review (reproduced in Galt's "The Literary and Miscellanies of John Galt, Volume 1") that described his first work of 1809 as:
"" [The] first formal essay in our language on any distinct and considerable branch of the integral calculus, which has appeared since… Hellinsʼs papers on the ‘Rectification of the Conic Sections"." | [
{
"math_id": 0,
"text": "\\pm x/1^n - x^2/2^n \\pm x^3/3^n - ..."
},
{
"math_id": 1,
"text": "L_n(1\\pm x)"
},
{
"math_id": 2,
"text": "L_2(x) = -\\int^x_0\\frac{\\ln (1-t)}{t} \\operatorname{d}\\!t"
},
{
"math_id": 3,
"text": "1 + x"
},
{
"math_id": 4,
"text": "\\tan^{-1}x"
}
]
| https://en.wikipedia.org/wiki?curid=71138035 |
7113944 | Augmented cognition | Interdisciplinary area of psychology and engineering
Augmented cognition is an interdisciplinary area of psychology and engineering, attracting researchers from the more traditional fields of human-computer interaction, psychology, ergonomics and neuroscience. Augmented cognition research generally focuses on tasks and environments where human–computer interaction and interfaces already exist. Developers, leveraging the tools and findings of neuroscience, aim to develop applications which capture the human user's cognitive state in order to drive real-time computer systems. In doing so, these systems are able to provide operational data specifically targeted for the user in a given context. Three major areas of research in the field are: Cognitive State Assessment (CSA), Mitigation Strategies (MS), and Robust Controllers (RC). A subfield of the science, Augmented Social Cognition, endeavours to enhance the "ability of a group of people to remember, think, and reason."
History.
In 1962 Douglas C. Engelbart released the report "Augmenting Human Intellect: A Conceptual Framework" which introduced, and laid the groundwork for, augmented cognition. In this paper, Engelbart defines "augmenting human intellect" as "increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems."
Modern augmented cognition began to emerge in the early 2000s. Advances in cognitive, behavioral, and neurological sciences during the 1990s set the stage for the emerging field of augmented cognition – this period has been termed the "Decade of the Brain." Major advancements in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have been pivotal in the emergence of augmented cognition technologies which seek to monitor the user's cognitive abilities. As these tools were primarily used in controlled environments, their further development was essential to pragmatic augmented cognition applications.
Research.
DARPA's Augmented Cognition Program.
The Defense Advanced Research Projects Agency (DARPA) has been one of the primary funding agencies for augmented cognition investigators. A major focus of DARPA's augmented cognition program (AugCog) has been developing more robust tools for monitoring cognitive state and integrating them with computer systems. The program envisions "order of magnitude increases in available, net thinking power resulting from linked human-machine dyads [that] will provide such clear informational superiority that few rational individuals or organizations would challenge under the consequences of mortality."
The program began in 2001, and has since been renamed the Improving Warfighter Information Intake Under Stress Program. By leveraging such tools, the program seeks to provide warfighters with enhanced cognitive abilities, especially under complex or stressful war conditions. As of 2002, the program vision is divided into four phases:
Proof of concept was carried out in two phases: near real time monitoring of the user's cognitive activity, and subsequent manipulation of the user's cognitive state.
Augmented Cognition International (ACI) Society.
The Augmented Cognition International (ACI) Society held its first conference in July 2005. At the society's first conference, attendees from a diverse background including academia, government, and industry came together to create an agenda for future research. The agenda focused on near-, medium-, and long-term research and development goals in key augmented cognition science and technology areas. The International Conference on Human Computer Interaction, where the society first established itself, continues to host the society's activities.
Translation engines.
Thad Starner, and the American Sign Language (ASL) Research Group at Georgia Tech, have been researching systems for the recognition of ASL. Telesign, a one-way translation system from ASL to English, was shown to have a 94% accuracy rate on a vocabulary with 141 signs.
Augmentation Factor.
Ron Fulbright proposed the "augmentation factor" (A+) as a measure of the degree to which a human is cognitively enhanced by working in collaborative partnership with an artificial cognitive system (cog). If WH is the cognitive work performed by the human in a human-machine dyad, and WC is the cognitive work done by the cog, then A+ = WC/WH. In situations where a human is working alone without assistance, WC = 0, resulting in A+ = 0, meaning the human is not cognitively augmented at all. In situations where the human does more cognitive work than the cog, A+ < 1. In situations where the cog does more cognitive work than the human, A+ > 1. As cognitive systems continue to advance, A+ will increase. In situations where a cog performs all cognitive work without the assistance of a human, WH = 0 and A+ is undefined (division by zero), meaning that attempting to calculate the augmentation factor is nonsensical since there is no human involved to be augmented.
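A trivial sketch of the calculation, with made-up work values in arbitrary units.

```python
def augmentation_factor(work_human, work_cog):
    """A+ = WC / WH; undefined when the human performs no cognitive work."""
    if work_human == 0:
        raise ValueError("A+ is undefined: no human work to augment")
    return work_cog / work_human

print(augmentation_factor(10.0, 2.0))   # cog does less work than the human: A+ < 1
print(augmentation_factor(10.0, 30.0))  # cog does more work than the human: A+ > 1
```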
Human/Cog Ensembles.
Whereas DARPA's AugCog program focuses on human/machine dyads, it is possible for there to be more than one human and more than one artificial element involved. "Human/Cog Ensembles" involve one or more humans working with one or more cognitive systems (cogs). In a human/cog ensemble, the total amount of cognitive work performed by the ensemble, W*, is the sum of the cognitive work performed by each of the N humans in the ensemble plus the sum of the cognitive work performed by each of the M cognitive systems in the ensemble:
W* = formula_0WkH + formula_1WkC
Controversy.
Privacy concerns.
The increasing sophistication of brain-reading technologies has led many to investigate their potential applications for lie detection. Legally required brain scans arguably violate “the guarantee against self-incrimination” because they differ from acceptable forms of bodily evidence, such as fingerprints or blood samples, in an important way: they are not simply physical, hard evidence, but evidence that is intimately linked to the defendant's mind. Under US law, brain-scanning technologies might also raise implications for the Fourth Amendment, calling into question whether they constitute an unreasonable search and seizure.
Human augmentation.
Many of the same arguments in the debate around human enhancement can be analogized to augmented cognition. Economic inequality, for instance, may serve to exacerbate societal advantages and disadvantages due to the limited availability of such technologies.
Fearing the potential applications of devices like Google Glass, certain gambling establishments (such as Caesar's Palace in Las Vegas) banned its use even before it was commercially available.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{k=1}^N"
},
{
"math_id": 1,
"text": "\\sum_{k=1}^M"
}
]
| https://en.wikipedia.org/wiki?curid=7113944 |
71141390 | Champion Trees of South Africa | Trees of significance in South Africa
Champion Trees in South Africa are individual trees or groves that have been identified as having special significance, and therefore protected under Section 12(1) of the National Forests Act of 1998 by the Department of Forestry, Fisheries and the Environment.
History.
In 2003, the Department of Agriculture, Forestry and Fisheries initiated the project to identify and grant special status to indigenous and non-indigenous trees in South Africa that meet certain set criteria. From May to July 2003, workshops were held in Gauteng, KwaZulu-Natal and the Western Cape to gain consensus from experts to assist in the identification process of exceptional trees (Champion Trees) that are worthy of special protection throughout South Africa.
The Department of Agriculture, Forestry and Fisheries initiated the Champion Trees Project with the purpose of identifying exceptional trees and regulating for their special protection using the National Forests Act of 1998 (NFA). Section 12 of the National Forests Act states that the Minister of Forestry, Fisheries and the Environment can declare certain tree species and individual trees or groups of trees as protected. Under Section 15(1)(a) of the National Forests Act, such protected trees may not be "...cut, disturbed or damaged and their products may not be possessed, sold or transported without a licence...". In the case of individual trees, the protection is absolute, with no potential for permission for removal except if life or property is threatened (e.g. by dying or leaning trees).
One of the outcomes of the Department's Champion Trees Project is to gazette a list of Champion trees as part of the National Forests Act.
Criteria for selection of a Champion Tree.
Any person can nominate a tree for selection. Individual trees or groups of trees proposed for Champion status should have the following attributes:
Additional criteria that define a tree's eligibility are biological attributes, the age of the tree, and heritage or historical significance.
Biological attributes.
Champion trees can be designated on a range of singular biological attributes:
The Dendrological Society of South Africa, which maintains the National Register of Big Trees in South Africa, combines the three biological attributes to obtain the Size Index (SI): formula_0 This formula has been implemented to determine a tree's Champion status.
Tree age.
The National Forests Act recommends that trees considered for Champion Tree status on the basis of age should be at least 120 years old.
Heritage Significance.
This criterion should take into account the particular value associated with the tree, and be graded on a scale of 1–10 (a score greater than 6 indicates a potential candidate for Champion Tree status):
Designated Champion Trees.
As of 2018, 93 trees have been designated Champion Trees.
<templatestyles src="Legend/styles.css" /> indicates a de-listed tree.
See also.
<templatestyles src="Stack/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SI = \\sqrt[](d) \\cdot h \\cdot \\sqrt[](2r)"
}
]
| https://en.wikipedia.org/wiki?curid=71141390 |
711539 | Files-11 | Filesystem of OpenVMS and RSX-11 operating systems
Files-11 is the file system used in the RSX-11 and OpenVMS operating systems from Digital Equipment Corporation. It supports record-oriented I/O, remote network access, and file versioning. The original ODS-1 layer is a flat file system; the ODS-2 version is a hierarchical file system, with support for access control lists.
Files-11 is similar to, but significantly more advanced than, the file systems used in previous Digital Equipment Corporation operating systems such as TOPS-20 and RSTS/E.
History.
The native OpenVMS file system is descended from older DEC operating systems and is similar in many ways, both having been designed by Dave Cutler. A major difference is the layout of directories. These file systems all provided some form of rudimentary non-hierarchical directory structure, typically based on assigning one directory per user account. Under RSTS/E, each user account was represented by two numbers, a codice_0 pair, and had one associated directory. Special system files, such as program executables and the OS itself, were stored in the directory of a reserved system account.
While this was suitable for PDP-11 systems, which possessed limited permanent storage capacity, VAX systems with much larger hard drives required a more flexible method of file storage: hierarchical directory layout in particular, the most notable improvement in ODS-2.
Overview.
"Files-11" is the general term for five separate file systems, known as on-disk structure (ODS) levels 1 through 5.
ODS-1 is the flat file system used by the RSX-11 OS, supported by older VMS systems for RSX compatibility, but never used to support VMS itself; it has been largely superseded by ODS-2 and ODS-5.
ODS-2 is the original VMS file system. Compared with ODS-1, it is a hierarchical file system.
Although seldom referred to by their ODS level designations, ODS-3 and ODS-4 are the Files-11 support for the CD-ROM ISO 9660 and High Sierra Format file systems, respectively.
ODS-5 is an extended version of ODS-2 available on Alpha, IA-64 and x86-64 platforms which adds support for case-preserving filenames with non-ASCII characters and improvements to the hierarchical directory support. It was originally intended for file serving to Microsoft Windows or other non-VMS systems as part of the "NT Affinity" project, but is also used on user disks and Internet servers.
Directory layout.
All files and directories in a Files-11 file system are contained inside one or more "parent directories", and eventually under the root directory, the "master file directory" (see below). The file system is therefore organised in a directed acyclic graph (DAG) structure.
In this example ("see right"), File 2 has a directory entry under both Dir 2 and Dir 3; it is "in" both directories simultaneously. Even if removed from one, it would still exist in the other directory until removed from there also. This is similar to the concept of hard links in UNIX, although care must be taken that the file is not actually deleted on disks that are not set up for hard links (only available on ODS-5 disks, and then only if the disk has hard links enabled).
Disk organization and naming.
An operational VMS system has access to one or more online disks, each of which contains a complete, independent file system. These are either local storage or, in the case of a cluster, storage shared with remote systems.
In an OpenVMS cluster configuration, non-private disks are shared between all nodes in the cluster "(see figure 1)". In this configuration, the two system disks are accessible to both nodes via the network, but the private disk is not shared: it is mounted for use only by a particular user or process on that machine. Access to files across a cluster is managed by the OpenVMS Distributed Lock Manager, an integral part of the file system.
Multiple disks can be combined to form a single large logical disk, or "volume set". Disks can also be automatically replicated into "shadow sets" for data security or faster read performance.
A disk is identified by either its physical name or (more often) by a user-defined logical name. For example, the boot device (system disk) may have the physical name $3$DKA100, but it is generally referred to by the logical name SYS$SYSDEVICE.
File systems on each disk (with the exception of ODS-1) are hierarchical. A fully specified filename consists of a nodename, a username and password, a device name, directory, filename, file type, and a version number, in the format:
NODE"accountname password"::device:[directory.subdirectory]filename.type;ver
For example, [DIR1.DIR2.DIR3]FILE.EXT refers to the latest version of FILE.EXT, on the current default disk, in directory [DIR1.DIR2.DIR3].
DIR1 is a subdirectory of the master file directory (MFD), or "root directory", and DIR2 is a subdirectory of DIR1. A disk's MFD is identified by [000000].
Most parts of the filename can be omitted, in which case they are taken from the current "default file specification". The default file specification replaces the concept of "current directory" in other operating systems by providing a set of defaults for node, device name and directory. All processes have a default file specification which includes disk name and directory, and most VMS file system routines accept a default file specification which can also include the file type; the TYPE command, for example, defaults to ".LIS" as the file type, so the command TYPE F, with no extension, attempts to open the file F.LIS.
Every file has a version number, which defaults to 1 if no other versions of the same filename are present (otherwise one higher than the greatest version). Every time a file is saved, rather than overwriting the existing version, a new file with the same name but an incremented version number is created. Old versions can be deleted explicitly, with the DELETE or the PURGE command, or optionally, older versions of a file can be deleted automatically when the file's "version limit" is reached (set by SET FILE/VERSION_LIMIT). Old versions are thus not overwritten, but are kept on disk and may be retrieved at any time. The architectural limit on version numbers is 32767. The versioning behavior is easily overridden if it is unwanted. In particular, files which are directly updated, such as databases, do not create new versions unless explicitly programmed.
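As a rough illustration of this behaviour (a toy model only, not how the file system is actually implemented), the version bookkeeping can be sketched in a few lines of Python:

files = {}   # file name -> list of existing version numbers

def save(name):
    # a new version is one higher than the greatest existing version (or 1)
    versions = files.setdefault(name, [])
    versions.append(max(versions, default=0) + 1)
    return versions[-1]

def purge(name, keep=1):
    # like the PURGE command: discard all but the newest 'keep' versions
    files[name] = files[name][-keep:]

save("LOGIN.COM"); save("LOGIN.COM"); save("LOGIN.COM")
print(files["LOGIN.COM"])   # [1, 2, 3]
purge("LOGIN.COM")
print(files["LOGIN.COM"])   # [3]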
ODS-2 is limited to eight levels of subdirectories, and only uppercase, alphanumeric names (plus the underscore, dash, and dollar sign) up to 39.39 characters (39 for the filename and another 39 for the extension). ODS-5 expands the character set to lowercase letters and most other printable ASCII characters, as well as ISO Latin-1 and Unicode characters, increases the maximum filename length and allows unlimited levels of subdirectories. When constructing a pathname for an ODS-5 file which uses characters not allowed under ODS-2, a special "^" syntax is used to preserve backwards compatibility; the file "file.tar.gz;1" on an ODS-5 disk, for example, would be referred to as "file^.tar.gz"—the file's name is "file.tar", and the extension is ".gz".
File security: protection and ACLs.
VMS file security is defined by two mechanisms, UIC-based access control and ACL-based access control. UIC access control is based on the owner of the file and the UIC, or user, accessing the file. Access is determined by four groups of permissions (System, Owner, Group and World) and four permission bits (Read, Write, Execute and Delete).
The "system" access applies to any user whose UIC group code is less than or equal to the SYSGEN parameter MAXSYSGROUP (typically 8, or 10 octal) (for example the SYSTEM user); "owner" and "group" apply to the owner of the file and that user's user group, and "world" applies to any other user. There is also a fifth permission bit, "Control", which is used to determine access to change file metadata such as protection. This group cannot be set explicitly; it is always set for System and Owner, and never for Group or World.
UIC-based access control is also affected by four system privileges, which allow users holding them to override access controls:
ACLs allow additional privileges to be assigned on a user– or group–specific basis; for example, a web server's UIC could be granted read access to all files in a particular directory. ACLs can be marked as "inherited", where a directory file's ACL applies to all files underneath it. ACLs are modified using the EDIT/ACL command, and take the form of identifier/access pairs. For example, the ACL entry
(IDENTIFIER=HTTP$SERVER,ACCESS=READ+EXECUTE)
would allow the user HTTP$SERVER to read and execute the file.
Logical names.
A logical name is a system variable which may reference a disk, directory or file, or contain other program-specific information. For example, the logical SYS$SYSDEVICE contains the system's boot device. A logical name normally refers to a single directory or disk, "e.g." SYS$LOGIN: which is the user's login (home) directory (or directories); these logicals cannot be used as true disk names—SYS$LOGIN:[DIR]FILE is not a valid file specification. However, "concealed" logical names, defined by DEFINE/TRANSLATION=CONCEALED, can be used in that way; these "rooted" directories are defined with a trailing "." on the directory specification, hence
$ DEFINE/TRANS=CONCEAL HOME DISK$USERS:["username".]
would allow HOME:[DIR]FILE to be used. More common are simple logicals which point to specific directories associated with some application software, which may be located on any disk or in any directory. Hence the logical ABC_EXE may point to a directory of executable programs for application ABC, and ABC_TEMP may point to a directory of temporary files for the same application; this directory may be on the same disk and in the same directory tree as ABC_EXE, or somewhere on another disk (and in a different directory tree).
In a manner similar to Unix, VMS defines several standard input and output channels which are accessed through the logical names SYS$INPUT, SYS$OUTPUT, SYS$ERROR and SYS$COMMAND.
Logical names do not have a close equivalent in POSIX operating systems. They resemble Unix environment variables, except they are expanded by the file system, instead of the command shell or application program. They must be defined before use, so it is common for many logical names to be defined in the system startup command file, as well as user login command files.
In VMS, logical names may reference other logical names (up to a predefined nesting limit of 10), and may contain lists of names to search for an existing filename. Some frequently referenced logical names are:
The closest non-DEC operating system to support the concept of logical names is AmigaOS, through the ASSIGN command. AmigaOS's disk operating system, AmigaDOS, which is a port of TRIPOS, bears some resemblance to DEC operating systems. For example, physical device names follow a pattern like DF0: for the first floppy disk, CDROM2: for the 3rd CD-ROM drive, etc. However, since the system can boot from any attached drive, the operating system creates the SYS: assignment to automatically reference the boot device used. Other assignments, LIBS:, PREFS:, C:, S:, et al. are also made, themselves referenced off SYS:. Users are, of course, allowed to create and destroy their own assignments too.
Record-oriented I/O: Record Management Services.
Record Management Services is the structured I/O layer of the VMS operating system. RMS provides comprehensive program support for managing "structured" files, such as record-based and indexed database files. The VMS file system, in conjunction with RMS, extends file access beyond simple byte-streams and allows OS-level support for a variety of rich file types. Each file in the VMS file system may be thought of as a database, containing a series of records, each of which has one or more individual fields. A text file, for example, is a list of records (lines) separated by a newline character. RMS is an example of a record-oriented filesystem.
There are four record formats defined by RMS:
There are four record access methods, or methods to retrieve extant records from files:
Physical layout: the On-Disk Structure.
At the disk level, ODS represents the file system as an array of "blocks", a block being 512 contiguous bytes on one physical disk ("volume"). Disk blocks are assigned in "clusters" (originally 3 contiguous blocks but later increased with larger disk sizes). A file on the disk will ideally be entirely contiguous, i.e. the blocks which contain the file will be sequential, but disk fragmentation will sometimes require the file to be located in discontiguous clusters in which case the fragments are called extents. Disks may be combined with other disks to form a "volume set" and files stored anywhere across that set of disks, but larger disk sizes have reduced the use of volume sets because management of a single physical disk is simpler.
Every file on a Files-11 disk (or volume set) has a unique "file identification" (FID), composed of three numbers: the "file number" (NUM), the "file sequence number" (SEQ), and the "relative volume number" (RVN). The NUM indicates where in the INDEXF.SYS file (see below) the metadata for the file is located; the SEQ is a generation number which is incremented when the file is deleted and another file is created reusing the same INDEXF.SYS entry (so any dangling references to the old file do not accidentally point to the new one); and the RVN indicates the volume number on which the file is stored when using a volume set.
Directories.
The structural support of an ODS volume is provided by a "directory file"—a special file containing a list of file names, file version numbers and their associated FIDs, similar to VSAM catalogs on MVS and directories on Unix file systems and NTFS. At the root of the directory structure is the "master file directory" (MFD), the root directory which contains (directly or indirectly) every file on the volume.
The Master File Directory.
At the top level of an ODS file system is the "master file directory" (MFD), which contains all top-level directory files (including itself), and several system files used to store file system information. On ODS-1 volumes, a two-level directory structure is used: each "user identification code" (UIC) has an associated "user file directory" (UFD), of the form [GROUP.USER]. On ODS-2 and later volumes, the layout of directories under the MFD is free-form, subject to a limit on the nesting of directories (8 levels on ODS-2 and unlimited on ODS-5). On multi-volume sets, the MFD is always stored on the first volume, and contains the subdirectories of all volumes.
The following system files are present in the ODS MFD:
Note that the file system implementation itself does not refer to these files by name, but by their file IDs, which always have the same values. Thus, INDEXF.SYS is always the file with NUM = 1 and SEQ = 1.
Index file: INDEXF.SYS.
The index file contains the most basic information about a Files-11 volume set.
There are two organizations of INDEXF.SYS: the traditional organization, and the organization used on disks with GPT.SYS, which carry GUID Partition Table (GPT) structures.
With the traditional organization, block 1 is the "boot block", which contains the location of the "primary bootstrap image", used to load the VMS operating system. This is always located at logical block 0 on the disk, so that the hardware firmware can read it. This block is always present, even on non-system (non-bootable) volumes.
After the boot block is the "primary home block". This contains the "volume name", the location of the extents comprising the remainder of the index file, the volume owner's UIC, and the "volume protection" information. There are normally several additional copies of the home block, known as the "secondary home blocks", to allow recovery of the volume if it is lost or damaged.
On disks with GPT.SYS, GPT.SYS contains the equivalent of the boot block (known as the Master Boot Record (MBR)), and there is no primary home block. All home blocks present on a GPT-based disk are alternate home blocks. These structures are not included in INDEXF.SYS, and the corresponding blocks of the INDEXF.SYS file are left unused.
The rest of the index file is composed of "file headers", which describe the extents allocated to the files residing on the volume, and file metadata such as the owner UIC, ACLs and protection information. Each file is described by one or more file headers—more than one can be required when a file has a large number of extents. The file header is a fixed-length block, but contains both fixed– and variable–length sections:
If possible, the map and ACL sections of the header are contained completely in the "primary header". However, if the ACL is too long, or the file contains too many extents, there will not be enough space in the primary header to store them. In this case, an "extension header" is allocated to store the overflow information.
The file header begins with 4 offsets (IDOFFSET, MPOFFSET, ACOFFSET and ROFFSET). Since the size of the areas after the fixed-length header may vary (such as the map and ACL areas), the offsets are required to locate these additional areas. Each offset is the number of 16-bit words from the beginning of the file header to the beginning of that area.
If the file requires multiple headers, the "extension segment number" (SEGNUM) contains the sequence number of this header, beginning at 0 in the first entry in INDEXF.SYS.
STRUCLEV contains the current structure level (in the high byte) and version (in the low byte) of the file system; ODS-2 being structure level 2. An increase in the version number indicates a backwards-compatible change that older software may ignore; changes in the structure level itself are incompatible.
W_FID (containing three values: FID_NUM, FID_SEQ and FID_RVN, corresponding to the file, sequence, and relative volume number) contains the ID of this file; EXT_FID (again composed of three values) holds the location of the next extension header, if any. In both of these values, the RVN is specified as 0 to represent the "current" volume (0 is not normally a valid RVN).
FILECHAR contains several flags which affect how the file is handled or organised:
ACCMODE describes the "privilege level" at which a process must be running in order to access the file. VMS defines four privilege levels: user, supervisor, exec, and kernel. Each type of access - read, write, execute and delete - is encoded as a 2-bit integer.
FILEPROT contains the discretionary access control information for the file. It is divided into 4 groups of 4 bits each: system, owner, group and world. Bit 0 corresponds to read access, 1 to write, 2 to execute and 3 to delete. Setting a bit denies a particular access to a group; clearing it allows it.
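For illustration, a decoder for such a protection word might look as follows in Python; the nibble ordering (System in the lowest four bits, then Owner, Group and World) is an assumption made for the example, since the text above does not fix it:

def decode_fileprot(word):
    groups = ("System", "Owner", "Group", "World")
    bits = ("R", "W", "E", "D")   # bit 0 = Read, 1 = Write, 2 = Execute, 3 = Delete
    access = {}
    for g, name in enumerate(groups):
        nibble = (word >> (4 * g)) & 0xF
        # a SET bit denies the access, a CLEAR bit allows it
        access[name] = "".join(b for i, b in enumerate(bits) if not (nibble >> i) & 1)
    return access

print(decode_fileprot(0xF000))
# {'System': 'RWED', 'Owner': 'RWED', 'Group': 'RWED', 'World': ''}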
If the file header is an extension header, BACKLINK contains the file ID of the primary header; otherwise, it contains the file ID of the directory file containing the primary entry for the file.
The bitmap file is responsible for storing information regarding used and available space on a volume. It contains the "storage control block" (SCB), which holds summary information about the volume, and the bitmap, an array of bits indicating whether a cluster of blocks on the disk is free or allocated. In early versions of VMS the cluster comprised 3 blocks, but as disk sizes have increased, so has the cluster size.
The bad block file contains all of the known bad blocks on the physical volume. The purpose is to prevent the system from allocating them to files. This file was used more in the early days when disks were typically manufactured with more bad patches on the surface.
The volume set list is located on volume 1 of a volume set, and contains a list of labels of all volumes in the set, and the set's volume name.
When a file on a multi-volume set crosses the boundary of two constituent volumes, the continuation file is used as its extension header and describes the volume where the rest of the file can be found.
The quota file contains information of each UIC's disk space usage on a volume. It contains a record for each UIC with space allocated to it on a volume, along with information on how much space is being used by that UIC. "NOTE: The DISK QUOTA feature is optional and the file will only exist if the feature was ever enabled."
The volume security profile contains the volume's owner UIC, the volume protection mask, and its access control list.
This file overlays and protects the MBR (Master Boot Record) and GPT (GUID Partition Table) disk structures used by Extensible Firmware Interface-compliant firmware. This file is created by default during OpenVMS I64 disk initialization, and is optionally created (with INITIALIZE/GPT) on OpenVMS Alpha.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "i+k"
},
{
"math_id": 2,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=711539 |
7116190 | Wick product | In probability theory, the Wick product is a particular way of defining an adjusted product of a set of random variables. In the lowest order product the adjustment corresponds to subtracting off the mean value, to leave a result whose mean is zero. For the higher order products the adjustment involves subtracting off lower order (ordinary) products of the random variables, in a symmetric way, again leaving a result whose mean is zero. The Wick product is a polynomial function of the random variables, their expected values, and expected values of their products.
The definition of the Wick product immediately leads to the Wick power of a single random variable and this allows analogues of other functions of random variables to be defined on the basis of replacing the ordinary powers in a power-series expansions by the Wick powers. The Wick powers of commonly-seen random variables can be expressed in terms of special functions such as Bernoulli polynomials or Hermite polynomials.
The Wick product is named after physicist Gian-Carlo Wick, cf. Wick's theorem.
Definition.
Assume that "X"1, ..., "X""k" are random variables with finite moments. The Wick product
formula_0
is a sort of product defined recursively as follows:
formula_1
(i.e. the empty product—the product of no random variables at all—is 1). For "k" ≥ 1, we impose the requirement
formula_2
where formula_3 means that "X""i" is absent, together with the constraint that the average is zero,
formula_4
Equivalently, the Wick product can be defined by writing the monomial formula_5 as a "Wick polynomial":
formula_6,
where formula_7 denotes the Wick product formula_8 if formula_9. This is easily seen to satisfy the inductive definition.
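The inductive definition translates directly into a short computation. The following SymPy sketch (an illustration, not part of the original exposition) computes Wick products symbolically for independent standard normal variables, an assumed choice of distribution made only so that the expectation operator E below is easy to write down:

import itertools
import sympy as sp

X, Y = sp.symbols('X Y')
vars_ = (X, Y)

def product(factors):
    out = sp.Integer(1)
    for f in factors:
        out *= f
    return out

def moment(n):
    # E[N(0,1)^n]: 1 for n = 0, 0 for odd n, (n-1)!! for even n
    n = int(n)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0
    return int(sp.factorial2(n - 1))

def E(expr):
    # expectation of a polynomial in the independent variables X, Y
    total = 0
    for term in sp.expand(expr).as_ordered_terms():
        coeff, value = term, 1
        for v in vars_:
            d = sp.degree(term, v)
            coeff = coeff / v**d
            value *= moment(d)
        total += sp.simplify(coeff) * value
    return sp.simplify(total)

def wick(rvs):
    # <X_1,...,X_k> = X_1...X_k minus the contributions of all proper subsets
    rvs = tuple(rvs)
    if not rvs:
        return sp.Integer(1)
    rest = 0
    for r in range(len(rvs)):
        for S in itertools.combinations(range(len(rvs)), r):
            comp = [rvs[i] for i in range(len(rvs)) if i not in S]
            rest += E(product(comp)) * wick([rvs[i] for i in S])
    return sp.expand(product(rvs) - rest)

print(wick([X]))      # X, since E[X] = 0
print(wick([X, X]))   # X**2 - 1, the probabilists' Hermite polynomial of degree 2
print(wick([X, Y]))   # X*Y, because the variables are independent and centered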
Examples.
It follows that
formula_10
formula_11
formula_12
Another notational convention.
In the notation conventional among physicists, the Wick product is often denoted thus:
formula_13
and the angle-bracket notation
formula_14
is used to denote the expected value of the random variable "X".
Wick powers.
The "n"th Wick power of a random variable "X" is the Wick product
formula_15
with "n" factors.
The sequence of polynomials "P""n" such that
formula_16
form an Appell sequence, i.e. they satisfy the identity
formula_17
for "n" = 0, 1, 2, ... and "P"0("x") is a nonzero constant.
For example, it can be shown that if "X" is uniformly distributed on the interval [0, 1], then
formula_18
where "B""n" is the "n"th-degree Bernoulli polynomial. Similarly, if "X" is normally distributed with variance 1, then
formula_19
where "H""n" is the "n"th Hermite polynomial.
The Wick powers satisfy a binomial-type identity,
formula_20
and the "Wick exponential" can be defined as the formal power series
formula_21
{
"math_id": 0,
"text": "\\langle X_1,\\dots,X_k \\rangle\\,"
},
{
"math_id": 1,
"text": "\\langle \\rangle = 1\\,"
},
{
"math_id": 2,
"text": "{\\partial\\langle X_1,\\dots,X_k\\rangle \\over \\partial X_i}\n= \\langle X_1,\\dots,X_{i-1}, \\widehat{X}_i, X_{i+1},\\dots,X_k \\rangle,"
},
{
"math_id": 3,
"text": "\\widehat{X}_i"
},
{
"math_id": 4,
"text": " \\operatorname{E} \\langle X_1,\\dots,X_k\\rangle = 0. \\,"
},
{
"math_id": 5,
"text": "X_1\\dots X_k"
},
{
"math_id": 6,
"text": " X_1\\dots X_k = \\sum_{S\\subseteq\\left\\{1,\\dots,k\\right\\}} \\operatorname{E}\\left(\\textstyle\\prod_{i\\notin S} X_i\\right) \\cdot \\langle X_i : i \\in S \\rangle \\,"
},
{
"math_id": 7,
"text": "\\langle X_i : i \\in S \\rangle"
},
{
"math_id": 8,
"text": "\\langle X_{i_1},\\dots,X_{i_m} \\rangle"
},
{
"math_id": 9,
"text": "S = \\left\\{i_1,\\dots,i_m\\right\\}"
},
{
"math_id": 10,
"text": "\\langle X \\rangle = X - \\operatorname{E}X,\\,"
},
{
"math_id": 11,
"text": "\\langle X, Y \\rangle = X Y - \\operatorname{E}Y\\cdot X - \\operatorname{E}X\\cdot Y+ 2(\\operatorname{E}X)(\\operatorname{E}Y) - \\operatorname{E}(X Y),\\,"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n\\langle X,Y,Z\\rangle\n=&XYZ\\\\\n&-\\operatorname{E}Y\\cdot XZ\\\\\n&-\\operatorname{E}Z\\cdot XY\\\\\n&-\\operatorname{E}X\\cdot YZ\\\\\n&+2(\\operatorname{E}Y)(\\operatorname{E}Z)\\cdot X\\\\\n&+2(\\operatorname{E}X)(\\operatorname{E}Z)\\cdot Y\\\\\n&+2(\\operatorname{E}X)(\\operatorname{E}Y)\\cdot Z\\\\\n&-\\operatorname{E}(XZ)\\cdot Y\\\\\n&-\\operatorname{E}(XY)\\cdot Z\\\\\n&-\\operatorname{E}(YZ)\\cdot X\\\\\n&-\\operatorname{E}(XYZ)\\\\\n&+2\\operatorname{E}(XY)\\operatorname{E}Z+2\\operatorname{E}(XZ)\\operatorname{E}Y+2\\operatorname{E}(YZ)\\operatorname{E}X\\\\\n&-6(\\operatorname{E}X)(\\operatorname{E}Y)(\\operatorname{E}Z).\n\\end{align}"
},
{
"math_id": 13,
"text": ": X_1, \\dots, X_k:\\,"
},
{
"math_id": 14,
"text": "\\langle X \\rangle\\,"
},
{
"math_id": 15,
"text": "X'^n = \\langle X,\\dots,X \\rangle\\,"
},
{
"math_id": 16,
"text": "P_n(X) = \\langle X,\\dots,X \\rangle = X'^n\\,"
},
{
"math_id": 17,
"text": "P_n'(x) = nP_{n-1}(x),\\,"
},
{
"math_id": 18,
"text": " X'^n = B_n(X)\\, "
},
{
"math_id": 19,
"text": " X'^n = H_n(X)\\, "
},
{
"math_id": 20,
"text": " (aX+bY)^{'n} = \\sum_{i=0}^n {n\\choose i}a^ib^{n-i} X^{'i} Y^{'{n-i}}"
},
{
"math_id": 21,
"text": "\\langle \\operatorname{exp}(aX)\\rangle \\ \\stackrel{\\mathrm{def}}{=} \\ \\sum_{i=0}^\\infty\\frac{a^i}{i!} X^{'i}"
}
]
| https://en.wikipedia.org/wiki?curid=7116190 |
7116196 | Feige–Fiat–Shamir identification scheme | In cryptography, the Feige–Fiat–Shamir identification scheme is a type of parallel zero-knowledge proof developed by Uriel Feige, Amos Fiat, and Adi Shamir in 1988. Like all zero-knowledge proofs, it allows one party, the Prover, to prove to another party, the Verifier, that they possess secret information without revealing to Verifier what that secret information is. The Feige–Fiat–Shamir identification scheme, however, uses modular arithmetic and a parallel verification process that limits the number of communications between Prover and Verifier.
Setup.
Following a common convention, call the prover Peggy and the verifier Victor.
Choose two large prime integers "p" and "q" and compute the product "n = pq". Create secret numbers formula_0 coprime to "n". Compute formula_1. Peggy and Victor both receive formula_2 while formula_3 and formula_4 are kept secret. Peggy is then sent the numbers formula_5. These are her secret login numbers. Victor is sent the numbers formula_6 by Peggy when she wishes to identify herself to Victor. Victor is unable to recover Peggy's formula_5 numbers from his formula_6 numbers due to the difficulty in determining a modular square root when the modulus' factorization is unknown.
Procedure.
In each round, Peggy chooses a random integer formula_7 and a random sign formula_8, and computes formula_10 with formula_9; she sends formula_10 to Victor. Victor then chooses numbers formula_11, where each formula_12 is 0 or 1, and sends them to Peggy. Peggy computes formula_13 and sends this number to Victor. Finally, Victor checks that formula_14 and that formula_15 This procedure is repeated with different formula_7 and formula_12 values until Victor is satisfied that Peggy does indeed possess the modular square roots (formula_5) of his formula_6 numbers.
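As an illustration of the protocol flow (a toy sketch; the primes and the values of k and t below are assumptions chosen only for readability and are far too small for real security), one run can be written in Python as:

import math, random

p, q = 2003, 2087            # toy primes (assumption); only n = p*q is public
n = p * q
k, t = 5, 4                  # number of secrets and of protocol rounds

# Setup: Peggy's secrets s_i (coprime to n) and the public v_i = s_i^2 mod n
secrets = []
while len(secrets) < k:
    s = random.randrange(2, n)
    if math.gcd(s, n) == 1:
        secrets.append(s)
public_v = [pow(s, 2, n) for s in secrets]

def protocol_round():
    r = random.randrange(2, n)                     # Peggy's commitment
    sign = random.choice([-1, 1])
    x = (sign * pow(r, 2, n)) % n
    a = [random.randint(0, 1) for _ in range(k)]   # Victor's challenge bits
    y = r                                          # Peggy's response
    for s_i, a_i in zip(secrets, a):
        if a_i:
            y = (y * s_i) % n
    rhs = x                                        # Victor's check
    for v_i, a_i in zip(public_v, a):
        if a_i:
            rhs = (rhs * v_i) % n
    lhs = pow(y, 2, n)
    return x != 0 and (lhs == rhs or lhs == (-rhs) % n)

print(all(protocol_round() for _ in range(t)))     # True: Peggy passes every round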
Security.
In the procedure, Peggy does not give any useful information to Victor. She merely proves to Victor that she has the secret numbers without revealing what those numbers are. Anyone who intercepts the communication between Peggy and Victor would only learn the same information. The eavesdropper would not learn anything useful about Peggy's secret numbers.
Suppose Eve has intercepted Victor's formula_6 numbers but does not know what Peggy's formula_5 numbers are. If Eve wants to try to convince Victor that she is Peggy, she would have to correctly guess what Victor's formula_12 numbers will be. She then picks a random formula_16, calculates formula_17 and sends formula_10 to Victor. When Victor sends formula_12, Eve simply returns her formula_16. Victor is satisfied and concludes that Eve has the secret numbers. However, the probability of Eve correctly guessing what Victor's formula_12 will be is 1 in formula_18. By repeating the procedure formula_19 times, the probability drops to 1 in formula_20 . For formula_21 and formula_22 the probability of successfully posing as Peggy is less than 1 in 1 million. | [
{
"math_id": 0,
"text": "s_1, \\cdots, s_k"
},
{
"math_id": 1,
"text": "v_i \\equiv s_i^{2} \\pmod{n}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "s_i"
},
{
"math_id": 6,
"text": "v_i"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "s\\in\\{-1,1\\}"
},
{
"math_id": 9,
"text": "s \\cdot x \\equiv r^2 \\pmod{n}"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "a_1, \\cdots, a_k"
},
{
"math_id": 12,
"text": "a_i"
},
{
"math_id": 13,
"text": "y \\equiv rs_1^{a_1}s_2^{a_2} \\cdots s_k^{a_k}\\pmod{n}"
},
{
"math_id": 14,
"text": "y^2 \\pmod{n} \\equiv \\pm\\, x v_1^{a_1}v_2^{a_2} \\cdots v_k^{a_k}\\pmod{n}"
},
{
"math_id": 15,
"text": "x \\neq 0 ."
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "x \\equiv y^2 v_1^{-a_1}v_2^{-a_2} \\cdots v_k^{-a_k}\\pmod{n}"
},
{
"math_id": 18,
"text": "2^k"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "2^{k t}"
},
{
"math_id": 21,
"text": "k = 5"
},
{
"math_id": 22,
"text": "t = 4"
}
]
| https://en.wikipedia.org/wiki?curid=7116196 |
71165287 | Non-relativistic gravitational fields | Within general relativity (GR), Einstein's relativistic gravity, the gravitational field is described by the 10-component metric tensor. However, in Newtonian gravity, which is a limit of GR, the gravitational field is described by a single component Newtonian gravitational potential. This raises the question to identify the Newtonian potential within the metric, and to identify the physical interpretation of the remaining 9 fields.
The definition of the non-relativistic gravitational fields provides the answer to this question, and thereby describes the image of the metric tensor in Newtonian physics. These fields are not strictly non-relativistic. Rather, they apply to the non-relativistic (or post-Newtonian) limit of GR.
A reader who is familiar with electromagnetism (EM) will benefit from the following analogy. In EM, one is familiar with the electrostatic potential formula_0 and the magnetic vector potential formula_1. Together, they combine into the 4-vector potential formula_2, which is compatible with relativity. This relation can be thought to represent the non-relativistic decomposition of the electromagnetic 4-vector potential. Indeed, a system of point-particle charges moving slowly with respect to the speed of light may be studied in an expansion in formula_3, where formula_4 is a typical velocity and formula_5 is the speed of light. This expansion is known as the post-Coulombic expansion. Within this expansion, formula_0 contributes to the two-body potential already at 0th order, while formula_6 contributes only from the 1st order and onward, since it couples to electric currents and hence the associated potential is proportional to formula_3.
Definition.
In the non-relativistic limit of weak gravity and non-relativistic velocities, general relativity reduces to Newtonian gravity. Going beyond the strict limit, corrections can be organized into a perturbation theory known as the post-Newtonian expansion. As part of that, the metric gravitational field formula_7 is redefined and decomposed into the "non-relativistic gravitational (NRG) fields" formula_8: formula_9 is the Newtonian potential, formula_10 is known as the gravito-magnetic vector potential, and finally formula_11 is a 3d symmetric tensor known as the spatial metric perturbation. The field redefinition is given by
formula_12
In components, this is equivalent to
formula_13
where formula_14.
Counting components, formula_15 has 10, while formula_9 has 1, formula_10 has 3 and finally formula_11 has 6. Hence, in terms of components, the decomposition reads formula_16.
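A quick symbolic check of this component form (an illustration, not part of the original presentation) can be done with SymPy by expanding the line element for two spatial dimensions and reading off the coefficients; the symbol names below are assumptions of the example:

import sympy as sp

phi = sp.Symbol('phi')
A = sp.Matrix(sp.symbols('A1 A2'))
sigma = sp.Matrix(2, 2, sp.symbols('s11 s12 s12 s22'))   # symmetric perturbation
dx = sp.Matrix(sp.symbols('dx1 dx2'))
dt = sp.Symbol('dt')

ds2 = sp.expand(sp.exp(2*phi)*(dt - 2*A.dot(dx))**2
                - sp.exp(-2*phi)*((sp.eye(2) + sigma)*dx).dot(dx))

g00 = ds2.coeff(dt, 2)
g01 = ds2.coeff(dt, 1).coeff(dx[0], 1) / 2   # factor 2: the dt dx^i term appears twice
g11 = ds2.coeff(dx[0], 2)

print(sp.simplify(g00 - sp.exp(2*phi)))                                               # 0
print(sp.simplify(g01 + 2*sp.exp(2*phi)*A[0]))                                        # 0
print(sp.simplify(g11 + sp.exp(-2*phi)*(1 + sigma[0, 0]) - 4*sp.exp(2*phi)*A[0]**2))  # 0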
Motivation for definition.
In the post-Newtonian limit, bodies move slowly compared with the speed of light, and hence the gravitational field is also slowly changing. Approximating the fields to be time independent, the Kaluza-Klein reduction (KK) was adapted to apply to the time direction. Recall that in its original context, the KK reduction applies to fields which are independent of a compact spatial fourth direction. In short, the NRG decomposition is a Kaluza-Klein reduction over time.
The definition was essentially introduced in, interpreted in the context of the post-Newtonian expansion in, and finally the normalization of formula_10 was changed in to improve the analogy between a spinning object and a magnetic dipole.
Relation with standard approximations.
By definition, the post-Newtonian expansion assumes a weak field approximation. Within the first order perturbation to the metric formula_17, where formula_18 is the Minkowski metric, we find the standard weak field decomposition into a scalar, vector and tensor formula_19, which is similar to the non-relativistic gravitational (NRG) fields. The importance of the NRG fields is that they provide a "non-linear extension", thereby facilitating computation at higher orders in the weak field / post-Newtonian expansion. Summarizing, the NRG fields are adapted for higher order post-Newtonian expansion.
Physical interpretation.
The scalar field formula_9 is interpreted as the Newtonian gravitational potential.
The vector field formula_10 is interpreted as the gravito-magnetic vector potential. It is magnetic-like, or analogous to the magnetic vector potential in electromagnetism (EM). In particular, it is sourced by massive currents (the analogue of charge currents in EM), namely by momentum.
As a result, the gravito-magnetic vector potential is responsible for current-current interaction, which appears at the 1st post-Newtonian order. In particular, it generates a "repulsive" contribution to the force between parallel massive currents. However, this repulsion is overturned by the standard Newtonian gravitational attraction, since in gravity a current "wire" must always be massive (charged) -- unlike EM.
A spinning object is the analogue of an electromagnetic current loop, which forms as magnetic dipole, and as such it creates a magnetic-like dipole field in formula_10.
The symmetric tensor formula_11 is known as the spatial metric perturbation. From the 2nd post-Newtonian order and onward, it must be accounted for. If one restricts to the 1st post-Newtonian order, formula_11 can be ignored, and relativistic gravity is described by the formula_9, formula_10 fields. Hence it becomes a strong analogue of electromagnetism, an analogy known as gravitoelectromagnetism.
Applications and generalizations.
The two body problem in general relativity holds both intrinsic interest and observational, astrophysical interest. In particular, it is used to describe the motion of binary compact objects, which are the sources for gravitational waves. As such, the study of this problem is essential for both detection and interpretation of gravitational waves.
Within this two body problem, the effects of GR are captured by the two body effective potential, which is expanded within the post-Newtonian approximation. Non-relativistic gravitational fields were found to economize the determination of this two body effective potential.
Generalizations.
In higher dimensions, with an arbitrary spacetime dimension formula_20, the definition of non-relativistic gravitational fields generalizes into
formula_21Substituting formula_22 reproduces the standard 4d definition above. | [
{
"math_id": 0,
"text": "\\phi^\\text{EM}"
},
{
"math_id": 1,
"text": "\\vec{A}{}^\\text{EM}"
},
{
"math_id": 2,
"text": "A_\\mu^\\text{EM} \\leftrightarrow (\\phi^\\text{EM}, \\vec{A}{}^\\text{EM})"
},
{
"math_id": 3,
"text": "v^2/c^2"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "\\vec{A}^\\text{EM}"
},
{
"math_id": 7,
"text": "g_{\\mu\\nu},\\ \\mu, \\nu = 0, 1, 2, 3"
},
{
"math_id": 8,
"text": "g_{\\mu\\nu} \\leftrightarrow \\big(\\phi, \\vec{A}, \\sigma_{ij}\\big)"
},
{
"math_id": 9,
"text": "\\phi"
},
{
"math_id": 10,
"text": "\\vec{A}"
},
{
"math_id": 11,
"text": "\\sigma_{ij}"
},
{
"math_id": 12,
"text": "ds^2\\equiv g_{\\mu \\nu}dx^\\mu dx^\\nu = e^{2 \\phi}(dt-2\\, \\vec{A} \\cdot d\\vec{x})^2-e^{-2 \\phi}(\\delta_{ij} + \\sigma_{ij})\\, dx^i\\, dx^j."
},
{
"math_id": 13,
"text": "\\begin{align}\n g_{00} &= e^{2 \\phi}, \\\\\n g_{0i} &= -2\\, e^{2 \\phi} \\, A_i, \\\\\n g_{ij} &= -e^{-2 \\phi}\\, (\\delta_{ij} + \\sigma_{ij}) + 4 \\, e^{2 \\phi} \\,A_i \\, A_j,\n\\end{align}"
},
{
"math_id": 14,
"text": "i, j = 1, 2, 3"
},
{
"math_id": 15,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 16,
"text": "10 = 1 + 3 + 6"
},
{
"math_id": 17,
"text": "g_{\\mu \\nu} = \\eta_{\\mu \\nu} + h_{\\mu \\nu}"
},
{
"math_id": 18,
"text": "\\eta_{\\mu \\nu}\n"
},
{
"math_id": 19,
"text": "h_{\\mu\\nu} \\to \\left( h_{00}, h_{0i}, h_{ij} \\right)"
},
{
"math_id": 20,
"text": "d"
},
{
"math_id": 21,
"text": "ds^2 = e^{2 \\phi}(dt-2\\, \\vec{A} \\cdot d\\vec{x})^2-e^{-2 \\phi/(d-3)}(\\delta_{ij} + \\sigma_{ij}) dx^i dx^j"
},
{
"math_id": 22,
"text": "d=4"
}
]
| https://en.wikipedia.org/wiki?curid=71165287 |
71165324 | Poly(trimethylene carbonate) | Polycarbonate
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Poly(trimethylene carbonate) (PTMC) is an aliphatic polycarbonate synthesized from the 6-membered cyclic carbonate, trimethylene carbonate (1,3-propylene carbonate or 1,3-dioxan-2-one). Trimethylene carbonate (TMC) is a colorless crystalline solid with a melting point between 45 °C and 48 °C and a boiling point of 255 °C (at 760 mmHg). TMC was originally synthesized from 1,3-propanediol with phosgene or carbon monoxide, which are highly poisonous gases. Another route is the transesterification of 1,3-propanediol with dialkyl carbonates. This route is considered "greener" than the former, since the precursors can be obtained from renewable resources and carbon dioxide.
Synthesis.
In contrast to five-membered cyclic carbonates, six-membered ones like trimethylene carbonate are thermodynamically less stable than their polymers, so they undergo ring-opening polymerization with retention of COformula_0 in the polymer structure, generating an aliphatic polycarbonate.
Ring-opening polymerization (ROP) is the most common method used to synthesize poly(trimethylene carbonate) and its copolymers, since this synthetic route allows mild reaction conditions.
Several ROP catalysts/initiators have been used to synthesize the polymer, among them metal-catalyzed polymerization using oxides, salts and complexes of Al, K, Ti, Zn, Zr, Sn and rare earths metals; enzyme-catalyzed polymerization; and alcohol-initiated polymerization.
Physical properties.
PTMC is a predominantly amorphous polymer in the relaxed state but it can present some crystallinity, particularly when the chains are stretched.
The polymer has a glass transition temperature (formula_1) between −30 and −15 °C and a melting temperature (formula_2) ranging from 38 to 41 °C.
Low molecular weight PTMC is a rubbery polymer with poor dimensional stability, tackiness, and inadequate mechanical properties. Nevertheless, high molecular weight amorphous PTMC (over 100,000) is very flexible and tough, with a relatively low elastic modulus (5–7 MPa) at room temperature, and it presents excellent ultimate mechanical properties. The mechanical properties of the rubber can also be improved by cross-linking upon gamma-irradiation.
Compared with most aliphatic polyesters, PTMC is resistant to non-enzymatic hydrolysis, but it is biodegradable "in vivo" by enzymes. It is a resorbable material, since the ester bonds can be enzymatically broken, producing COformula_0 and water. "In vivo" it therefore degrades by surface erosion, and its decomposition products contain no organic acids, preventing potential inflammatory responses.
Applications.
Due to its predominantly amorphous nature, PTMC is a flexible polymer with rubbery behavior. In addition, the biodegradability and biocompatibility of PTMC make it highly applicable in biomedical applications such as scaffolds for tissue regeneration and drug delivery devices.
PTMC has been used in scaffolds for tissue engineering, particularly for some types of soft tissue in which the maintenance of mechanical properties is important for tissue reconstruction. PTMC-based membranes have also been evaluated as barriers for guided regeneration of hard tissue such as bone. The performance of these membranes is comparable with commercial collagen and e-PTFE membranes, showing good suitability for use in guided bone regeneration.
Because PTMC is rubbery and hydrophobic, PTMC-based copolymers produced from ROP of TMC with lactone-based comonomers have been synthesized to modify these characteristics and broaden its applications. Thus, their use in resorbable medical devices, in which control of rigidity and biodegradation time is desired, has been proposed. The main examples of these copolymers are poly(L-lactide-"co"-trimethylene carbonate), poly(glycolide-"co"-trimethylene carbonate) and poly(caprolactone-"co"-trimethylene carbonate).
Poly(L-lactide-co-trimethylene carbonate) has been proposed for application as small diameter vascular grafts. Poly(glycolide-"co"-trimethylene carbonate) is a commercial monofilament used for suture with slow biodegradation rate which allows maintenance of high mechanical strength compatible with the surgical recovery. Poly(caprolactone-"co"-trimethylene carbonate) has been proposed as biomaterial for conduits in the regeneration of central nervous system. | [
{
"math_id": 0,
"text": "_2"
},
{
"math_id": 1,
"text": "T_g"
},
{
"math_id": 2,
"text": "T_m"
}
]
| https://en.wikipedia.org/wiki?curid=71165324 |
711716 | Geodetic Reference System 1980 | Collection of data on Earth's gravity and shape
The Geodetic Reference System 1980 (GRS80) consists of a global reference ellipsoid and a normal gravity model. The GRS80 gravity model has been followed by the newer, more accurate Earth Gravitational Models, but the GRS80 reference ellipsoid is still the most accurate in use for coordinate reference systems, e.g. for the international ITRS, the European ETRS89 and (with a 0.1 mm rounding error) for WGS 84 used for the American Global Navigation Satellite System (GPS).
Background.
Geodesy is the scientific discipline that deals with the measurement and representation of the earth, its gravitational field and geodynamic phenomena (polar motion, earth tides, and crustal motion) in three-dimensional, time-varying space.
The geoid is essentially the figure of the Earth abstracted from its topographic features. It is an idealized equilibrium surface of sea water, the mean sea level surface in the absence of currents, air pressure variations etc. and continued under the continental masses. The geoid, unlike the ellipsoid, is irregular and too complicated to serve as the computational surface on which to solve geometrical problems like point positioning. The geometrical separation between it and the reference ellipsoid is called the geoidal undulation, or more usually the geoid-ellipsoid separation, "N". It varies globally between .
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) "a" and flattening "f". The quantity "f" = ("a"−"b")/"a", where "b" is the semi-minor axis (polar radius), is a purely geometrical one. The mechanical ellipticity of the earth (dynamical flattening, symbol "J"2) is determined to high precision by observation of satellite orbit perturbations. Its relationship with the geometric flattening is indirect. The relationship depends on the internal density distribution.
The 1980 Geodetic Reference System (GRS 80) posited a 6 378 137 m semi-major axis and a 1⁄298.257222101 flattening. This system was adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG) in Canberra, Australia, in 1979.
The GRS 80 reference system was originally used by the World Geodetic System 1984 (WGS 84). The reference ellipsoid of WGS 84 now differs slightly due to later refinements.
The numerous other systems which have been used by diverse countries for their maps and charts are gradually dropping out of use as more and more countries move to global, geocentric reference systems using the GRS80 reference ellipsoid.
Definition.
The reference ellipsoid is usually defined by its semi-major axis (equatorial radius) formula_0 and either its semi-minor axis (polar radius) formula_1, aspect ratio formula_2 or flattening formula_3, but GRS80 is an exception: "four" independent constants are required for a complete definition. GRS80 chooses as these formula_0, formula_4, formula_5 and formula_6, making the geometrical constant formula_3 a derived quantity.
Semi-major axis = Equatorial Radius = formula_7;
Geocentric gravitational constant, determined from the gravitational constant and the mass of the Earth including its atmosphere: formula_8;
Dynamical form factor formula_9;
Angular velocity of rotation formula_10;
Flattening = formula_3 = 0.003 352 810 681 183 637 418;
Reciprocal of flattening = formula_11 = 298.257 222 100 882 711 243;
Semi-minor axis = Polar Radius = formula_1 = 6 356 752.314 140 347 m;
Aspect ratio = formula_12 = 0.996 647 189 318 816 363;
Mean radius as defined by the International Union of Geodesy and Geophysics (IUGG): formula_13 = 6 371 008.7714 m;
Authalic mean radius = formula_14 = 6 371 007.1809 m;
Radius of a sphere of the same volume = formula_15 = 6 371 000.7900 m;
Linear eccentricity = formula_16 = 521 854.0097 m;
Eccentricity of elliptical section through poles = formula_17 = 0.081 819 191 0428;
Polar radius of curvature = formula_18 = 6 399 593.6259 m;
Equatorial radius of curvature for a meridian = formula_19 = 6 335 439.3271 m;
Meridian quadrant = 10 001 965.7292 m;
Period of rotation (sidereal day) = formula_20 = 86 164.100 637 s
Derived quantities.
The formula giving the eccentricity of the GRS80 spheroid is:
formula_21
where
formula_22
and formula_23 (so formula_24). The equation is solved iteratively to give
formula_25
which gives
formula_26
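A short numerical illustration (not part of the official definition) shows how the derived flattening follows from the four defining constants by straightforward fixed-point iteration of the relation above:

import math

a     = 6378137.0        # m, semi-major axis
GM    = 3.986005e14      # m^3/s^2, geocentric gravitational constant
J2    = 1.08263e-3       # dynamical form factor
omega = 7.292115e-5      # rad/s, angular velocity

e2 = 3.0 * J2            # starting guess for e^2
for _ in range(50):
    e  = math.sqrt(e2)
    ep = e / math.sqrt(1.0 - e2)                          # e'
    two_q0 = (1.0 + 3.0 / ep**2) * math.atan(ep) - 3.0 / ep
    e2 = 3.0 * J2 + (4.0 / 15.0) * (omega**2 * a**3 / GM) * (e**3 / two_q0)

b = a * math.sqrt(1.0 - e2)
f = 1.0 - math.sqrt(1.0 - e2)
print(e2)        # ~0.0066943800229034...
print(1.0 / f)   # ~298.2572221008827...
print(b)         # ~6356752.314140 m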
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "(b/a)"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "GM"
},
{
"math_id": 5,
"text": "J_2"
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "a = 6\\,378\\,137\\,\\mathrm{m}"
},
{
"math_id": 8,
"text": "GM = 3986005\\times10^8\\, \\mathrm{m^3/s^2}"
},
{
"math_id": 9,
"text": "J_2 = 108\\,263\\times10^{-8}"
},
{
"math_id": 10,
"text": "\\omega = 7\\,292\\,115\\times10^{-11}\\, \\mathrm{s^{-1}}"
},
{
"math_id": 11,
"text": "1/f"
},
{
"math_id": 12,
"text": "b/a"
},
{
"math_id": 13,
"text": "R_1 = (2a+b)/3"
},
{
"math_id": 14,
"text": "R_2"
},
{
"math_id": 15,
"text": "R_3 = (a^2b)^{1/3}"
},
{
"math_id": 16,
"text": "c = \\sqrt{a^2-b^2}"
},
{
"math_id": 17,
"text": "e = \\frac{\\sqrt{a^2-b^2}}{a}"
},
{
"math_id": 18,
"text": "a^2/b"
},
{
"math_id": 19,
"text": "b^2/a"
},
{
"math_id": 20,
"text": "2\\pi/\\omega"
},
{
"math_id": 21,
"text": "e^2 = \\frac {a^2 - b^2}{a^2} = 3J_2 + \\frac4{15} \\frac{\\omega^2 a^3}{GM} \\frac{e^3}{2q_0},"
},
{
"math_id": 22,
"text": " 2q_0 = \\left(1 + \\frac3{e'^2}\\right) \\arctan e' - \\frac3{e'}"
},
{
"math_id": 23,
"text": "e' = \\frac{e}{\\sqrt{1 - e^2}} "
},
{
"math_id": 24,
"text": "\\arctan e' = \\arcsin e"
},
{
"math_id": 25,
"text": "e^2 = 0.00669\\,43800\\,22903\\,41574\\,95749\\,48586\\,28930\\,62124\\,43890\\,\\ldots"
},
{
"math_id": 26,
"text": "f = 1/298.25722\\,21008\\,82711\\,24316\\,28366\\,\\ldots."
}
]
| https://en.wikipedia.org/wiki?curid=711716 |
71174908 | Weihrauch reducibility | Notion from Computability
In computable analysis, Weihrauch reducibility is a notion of reducibility between multi-valued functions on represented spaces that roughly captures the uniform computational strength of computational problems. It was originally introduced by Klaus Weihrauch in an unpublished 1992 technical report.
Definition.
A "represented space" is a pair formula_0 of a set formula_1 and a surjective partial function formula_2.
Let formula_3 be represented spaces and let formula_4 be a partial multi-valued function. A "realizer" for formula_5 is a (possibly partial) function formula_6 such that, for every formula_7, formula_8. Intuitively, a realizer formula_9 for formula_5 behaves "just like formula_5" but it works on names. If formula_9 is a realizer for formula_5 we write formula_10.
Let formula_11 be represented spaces and let formula_12 be partial multi-valued functions. We say that formula_5 is "Weihrauch reducible" to formula_13, and write formula_14, if there are computable partial functions formula_15 such that formula_16 where formula_17 and formula_18 denotes the join in the Baire space. Very often, in the literature we find formula_19 written as a binary function, so as to avoid the use of the join. In other words, formula_20 if there are two computable maps formula_21 such that the function formula_22 is a realizer for formula_5 whenever formula_23 is a solution for formula_24. The maps formula_21 are often called the "forward" and "backward" functionals, respectively.
We say that formula_5 is "strongly Weihrauch reducible" to formula_13, and write formula_25, if the backward functional formula_26 does not have access to the original input. In symbols:formula_27
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X,\\delta)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\delta:\\subset \\mathbb{N}^{\\mathbb{N}}\\rightarrow X"
},
{
"math_id": 3,
"text": "(X,\\delta_X),(Y,\\delta_Y)"
},
{
"math_id": 4,
"text": " f:\\subset X \\rightrightarrows Y "
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "F:\\subset \\mathbb{N}^\\mathbb{N} \\to \\mathbb{N}^\\mathbb{N}"
},
{
"math_id": 7,
"text": "p \\in \\mathrm{dom} f \\circ \\delta_X"
},
{
"math_id": 8,
"text": " \\delta_Y \\circ F (p) = f\\circ \\delta_X(p)"
},
{
"math_id": 9,
"text": "F"
},
{
"math_id": 10,
"text": " F \\vdash f"
},
{
"math_id": 11,
"text": "X,Y,Z,W"
},
{
"math_id": 12,
"text": " f:\\subset X \\rightrightarrows Y, g:\\subset Z \\rightrightarrows W"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "f\\le_{\\mathrm{W}} g"
},
{
"math_id": 15,
"text": "\\Phi,\\Psi:\\subset \\mathbb{N}^\\mathbb{N} \\to \\mathbb{N}^\\mathbb{N}"
},
{
"math_id": 16,
"text": " (\\forall G \\vdash g )( \\Psi \\langle \\mathrm{id}, G\\Phi \\rangle \\vdash f ),"
},
{
"math_id": 17,
"text": "\\Psi \\langle \\mathrm{id}, G\\Phi \\rangle:= \\langle p,q\\rangle \\mapsto \\Psi(\\langle p, G\\Phi(q) \\rangle) "
},
{
"math_id": 18,
"text": " \\langle \\cdot \\rangle"
},
{
"math_id": 19,
"text": " \\Psi "
},
{
"math_id": 20,
"text": "f \\le_\\mathrm{W} g"
},
{
"math_id": 21,
"text": "\\Phi, \\Psi"
},
{
"math_id": 22,
"text": "p \\mapsto \\Psi(p, q) "
},
{
"math_id": 23,
"text": "q"
},
{
"math_id": 24,
"text": "g(\\Phi(p))"
},
{
"math_id": 25,
"text": " f\\le_{\\mathrm{sW}} g"
},
{
"math_id": 26,
"text": "\\Psi"
},
{
"math_id": 27,
"text": " (\\forall G \\vdash g )( \\Psi G\\Phi \\vdash f )."
}
]
| https://en.wikipedia.org/wiki?curid=71174908 |
71176219 | Convex cap | A convex cap, also known as a convex floating body or just floating body, is a well defined structure in mathematics commonly used in convex analysis for approximating convex shapes. In general it can be thought of as the intersection of a convex Polytope with a half-space.
Definition.
A cap formula_0 can be defined as the intersection of a half-space formula_1 with a convex set formula_2. Note that a cap can be defined in a space of any dimension. Given a formula_3, formula_4 can be defined as the cap containing formula_0 that corresponds to a half-space parallel to formula_1 whose width is formula_5 times that of the original.
The definition of a cap can also be extended to define a cap of a point formula_6 where the cap formula_0 can be defined as the intersection of a convex set formula_2 with a half-space formula_1 containing formula_6. The minimal cap of a point is a cap of formula_6 with formula_7.
Floating Bodies and Caps.
We can define the floating body of a convex shape formula_1 using the following process. Note the floating body is also convex. In the case of a 2-dimensional convex compact shape formula_2, given some formula_8 where formula_9 is small. The floating body of this 2-dimensional shape is given by removing all the 2 dimensional caps of area formula_9 from the original body. The resulting shape will be our convex floating body formula_10. We generalize this definition to n dimensions by starting with an n dimensional convex shape and removing caps in the corresponding dimension.
Relation to affine surface area.
As formula_11, the floating body more closely approximates formula_2. This information can tell us about the affine surface area formula_12 of formula_2 which measures how the boundary behaves in this situation. If we take the convex floating body of a shape, we notice that the distance from the boundary of the floating body to the boundary of the convex shape is related to the convex shape's curvature.
Specifically, convex shapes with higher curvature have a greater distance between the two boundaries. Consider the difference between the areas of the original body and the floating body as formula_11; using the relation between curvature and distance, we can deduce that formula_13 also depends on the curvature. Thus,
formula_14.
In this formula, formula_15 is the curvature of formula_16 at formula_6 and formula_17 is the length of the curve.
We can generalize distance, area and volume to n dimensions using the Hausdorff measure; this definition then works for all formula_18. Moreover, the power of formula_15 is related to the inverse of formula_19, where formula_20 is the number of dimensions. So, the affine surface area for an n-dimensional convex shape is
formula_21
where formula_22 is the formula_23-dimensional Hausdorff measure.
Wet part of a convex body.
The wet part of a convex body can be defined as formula_24 where formula_25 is any real number describing the maximum volume of the wet part and formula_26.
We can see that using a non-degenerate linear transformation (one whose matrix is invertible) preserves any properties of formula_27. So, we can say that formula_28 is equivariant under these types of transformations. Using this notation, formula_29. Note that
formula_30
is also equivariant under non-degenerate linear transformations.
Caps for approximation.
Assume formula_31 and choose formula_32 randomly, independently and according to the uniform distribution from formula_2. Then, formula_33 is a random polytope. Intuitively, it is clear that as formula_34, formula_35 approaches formula_2. We can determine how well formula_35 approximates formula_2 in various measures of approximation, but we mainly focus on the volume. So, we define formula_36, where formula_37 refers to the expected value. We use formula_38 as the wet part of formula_2 and formula_39 as the floating body of formula_2. The following theorem states the general principle that formula_40 is of the same order of magnitude as the volume of the wet part with formula_41.
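As a quick Monte Carlo illustration of this quantity (an assumption-laden sketch, not part of the cited results, taking formula_2 to be the unit square so that its volume is 1):

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def missed_volume(n, trials=200):
    # estimate E(K, n) = E[vol K - vol K_n] for K the unit square
    vals = []
    for _ in range(trials):
        points = rng.random((n, 2))                   # n uniform points in the square
        vals.append(1.0 - ConvexHull(points).volume)  # in 2D, .volume is the area
    return float(np.mean(vals))

for n in (10, 100, 1000):
    print(n, missed_volume(n))   # decays on the order of (log n)/n for a polygon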
Theorem.
For formula_42 and formula_43, formula_44. The proof of this theorem is based on the technique of M-regions and cap coverings. We can use the minimal cap which is a cap formula_45 containing formula_6 and satisfying formula_46. Although the minimal cap is not unique, this doesn't have an effect on the proof of the theorem.
Lemma.
If formula_47 and formula_48, then formula_49 for every minimal cap formula_45.
Since formula_50, this lemma establishes the equivalence of the M-regions formula_51 and a minimal cap formula_45: a blown up copy of formula_51 contains formula_45 and a blown up copy of formula_45 contains formula_51. Thus, M-regions and minimal caps can be interchanged freely, without losing more than a constant factor in estimates.
Economic cap covering.
A cap covering can be defined as the set of caps that completely cover some boundary formula_52. By minimizing the size of each cap, we can minimize the size of the set of caps and create a new set. This set of caps with minimal volume is called an economic cap covering and can be explicitly defined as the set of caps formula_0 covering some boundary formula_52 where each formula_53 has some minimal width formula_54 and the total volume of this covering is ≪ formula_54 ⋅ formula_55.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "\\lambda \\geq 1"
},
{
"math_id": 4,
"text": "C^\\lambda"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "\\operatorname{vol}(C(x)) = \\operatorname{min}\\{\\operatorname{vol}(K \\cap H) : x \\in H\\}"
},
{
"math_id": 8,
"text": "\\delta > 0"
},
{
"math_id": 9,
"text": "\\delta"
},
{
"math_id": 10,
"text": "K\\delta"
},
{
"math_id": 11,
"text": "\\delta \\rightarrow 0"
},
{
"math_id": 12,
"text": "\\Omega(K)"
},
{
"math_id": 13,
"text": "A(K) - A(K\\delta)"
},
{
"math_id": 14,
"text": "\\Omega(K) = \\int_{\\operatorname{bd} K} {\\kappa(x)^{\\frac{1}{3}}dl} \\approx \\frac{A(K) - A(K\\delta)}{\\delta^{\\frac{2}{3}}}"
},
{
"math_id": 15,
"text": "\\kappa(x)"
},
{
"math_id": 16,
"text": "\\operatorname{bd} K"
},
{
"math_id": 17,
"text": "l"
},
{
"math_id": 18,
"text": "n \\geq 0"
},
{
"math_id": 19,
"text": "n + 1"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "\\int_{\\operatorname{bd} K} {\\kappa(x)^{\\frac{1}{n + 1}}d\\sigma(x)} \\approx \\frac{A(K) - A(K\\delta)}{\\delta^{\\frac{2}{n + 1}}}"
},
{
"math_id": 22,
"text": "\\sigma(x)"
},
{
"math_id": 23,
"text": "(n - 1)"
},
{
"math_id": 24,
"text": "K(t) = K(v \\leq t) = \\{x \\in K : v(x)\\leq t \\}"
},
{
"math_id": 25,
"text": "t"
},
{
"math_id": 26,
"text": "v(x) = \\operatorname{min}\\{\\operatorname{vol}(K \\cap H): x \\in H\\}"
},
{
"math_id": 27,
"text": "K(t)"
},
{
"math_id": 28,
"text": "v: K \\rightarrow \\mathbb{R}"
},
{
"math_id": 29,
"text": "v_{AK}(Ax) = |\\mathrm{det} A|v_K(x)"
},
{
"math_id": 30,
"text": "\\frac{\\mathrm{vol}(K(v \\leq t\\mathrm{vol}(K)))}{\\mathrm{vol}(K)}"
},
{
"math_id": 31,
"text": "K \\in \\mathcal{K_1} = \\{K: \\mathrm{vol}(K) = 1\\}"
},
{
"math_id": 32,
"text": "x_1, ... , x_n"
},
{
"math_id": 33,
"text": "K_n = \\text{conv}\\{x_1, ..., x_n\\}"
},
{
"math_id": 34,
"text": "n \\rightarrow \\infty"
},
{
"math_id": 35,
"text": "K_n"
},
{
"math_id": 36,
"text": "E(K, n) = E(\\mathrm{vol}K - \\mathrm{vol}K_n)"
},
{
"math_id": 37,
"text": "E"
},
{
"math_id": 38,
"text": "K(t) = K(v \\leq t)"
},
{
"math_id": 39,
"text": "K(v \\geq t)"
},
{
"math_id": 40,
"text": "E(K, n)"
},
{
"math_id": 41,
"text": "t = \\frac{1}{n}"
},
{
"math_id": 42,
"text": "K \\in \\mathcal{K}_1"
},
{
"math_id": 43,
"text": "n \\geq d + 1"
},
{
"math_id": 44,
"text": "\\mathrm{vol}K(\\frac{1}{n}) \\ll E(K, n) \\ll \\mathrm{vol}K(\\frac{1}{n})"
},
{
"math_id": 45,
"text": "C(x)"
},
{
"math_id": 46,
"text": "\\mathrm{vol}C(x) = v(x)"
},
{
"math_id": 47,
"text": "x \\in K"
},
{
"math_id": 48,
"text": "v(x) < \\varepsilon_0"
},
{
"math_id": 49,
"text": "C(x) \\subset M(x, 3d)"
},
{
"math_id": 50,
"text": "M(x) \\subset C^2(x)"
},
{
"math_id": 51,
"text": "M(x)"
},
{
"math_id": 52,
"text": "D"
},
{
"math_id": 53,
"text": "C_i"
},
{
"math_id": 54,
"text": "\\varepsilon"
},
{
"math_id": 55,
"text": "vol(D)"
}
]
| https://en.wikipedia.org/wiki?curid=71176219 |
71185170 | Job 7 | Job 7 is the seventh chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. The "dialogue" section of the book, comprises –. This chapter records one of the speeches of Job, the central character in the book.
Text.
The original text is written in Hebrew language. This chapter is divided into 21 verses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 7 is grouped into the Dialogue section with the following outline:
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapters 6 and 7 record Job's response after the first speech of Eliphaz (in chapters 4 and 5), which can be divided into two main sections:
The pattern of first speaking to the friends and then turning to God is typical of Job throughout the dialogue.
Chapter 7 is 'a balanced poem' comprising 3 parts, each bracketed by an opening statement about the human condition and a closing cry to God:
The change of the focus of Job's speech is made explicit in verses 7–8, so the "you" in verses 12, 14, 16 and 21 is clearly referring to YHWH.
The hardship of human life (7:1–8).
In this part, Job speaks of human misery and hardship in human existence. Job mentions the brevity of life (the focus later, in chapter 14) and the lack of hope (verse 6) before addressing God directly (verse 7) calling God to act toward him ("remember") according to God's prior commitments to "the afflicted" (cf. Genesis 8:1; Exodus 2:24). In rejecting Eliphaz's optimistic view that hope remains for him (cf. Job 6:20), Job utilizes a pun on the Hebrew words for "hope" and "thread" ("tiqwah") as he thinks of himself as fragile and precarious as the useless 'small ends of the thread that are snapped off a loom after the weaving is completed' (cf. Joshua 2:18. 21).
[Job said:] "Remember that my life is a breath;"
"my eye will never again see good."
The short-lived nature of human life (7:9–16).
Job's second axiom of human life focuses on 'the ephemeral nature of human beings'. In weighing up death and life (verses 15–16) Job does not embrace 'death as something positive', but he only dismisses the 'possibility of living forever'.
[Job said:] "Am I the sea, or a sea monster,"
"that You set a guard over me?"
Questions of "why?" and "how long?" (7:17–21).
The third part contains a barrage of questions: "why?" (verses 17–18) and then "how long?" (verse 19), which are characteristic of laments. Job does not deny that he sins (verses 20–21) but he cannot understand why he has not been forgiven after showing penitence and making necessary sacrifices (cf. Job 1:13). At the end, there is a tension between Job desiring God's presence and God's absence in his life.
[Job said:] "And why do You not pardon my transgression"
"and take away my iniquity?"
"For now I will lie down in the dust;"
"and You will seek me diligently, but I will not be."
Verse 21.
The last word of Job's speech (7:21; "’ê-nen-nî", "I [will] no longer [be]") shares the same root as the last word in Bildad's speech in the following chapter with different pronominal suffix (8:22; "’ê-nen-nū", "will come to nothing").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71185170 |
711862 | Reissner–Nordström metric | Spherically symmetric metric with electric charge
In physics and astronomy, the Reissner–Nordström metric is a static solution to the Einstein–Maxwell field equations, which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body of mass "M". The analogous solution for a charged, rotating body is given by the Kerr–Newman metric.
The metric was discovered between 1916 and 1921 by Hans Reissner, Hermann Weyl, Gunnar Nordström and George Barker Jeffery independently.
The metric.
In spherical coordinates formula_0, the Reissner–Nordström metric (i.e. the line element) is
formula_1
where formula_2 is the speed of light, formula_3 is the proper time, formula_4 is the time coordinate, formula_5 is the radial coordinate, formula_6 are the spherical angles, formula_7 is the Schwarzschild radius of the body, given by
formula_8
and formula_9 is a characteristic length scale, given by
formula_10
Here formula_11 is the electric constant (the permittivity of free space).
The total mass of the central body and its irreducible mass are related by
formula_12
The difference between formula_13 and formula_14 is due to the equivalence of mass and energy, which makes the electric field energy also contribute to the total mass.
In the limit that the charge formula_15 (or equivalently, the length scale formula_9) goes to zero, one recovers the Schwarzschild metric. The classical Newtonian theory of gravity may then be recovered in the limit as the ratio formula_16 goes to zero. In the limit that both formula_17 and formula_16 go to zero, the metric becomes the Minkowski metric for special relativity.
In practice, the ratio formula_16 is often extremely small. For example, the Schwarzschild radius of the Earth is roughly 9 mm (3/8 inch), whereas a satellite in a geosynchronous orbit has an orbital radius formula_5 that is roughly four billion times larger, at 42,164 km (26,200 miles). Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The ratio only becomes large close to black holes and other ultra-dense objects such as neutron stars.
Charged black holes.
Although charged black holes with "rQ" ≪ "r"s are similar to the Schwarzschild black hole, they have two horizons: the event horizon and an internal Cauchy horizon. As with the Schwarzschild metric, the event horizons for the spacetime are located where the metric component formula_18 diverges; that is, where
formula_19
This equation has two solutions:
formula_20
These concentric event horizons become degenerate for 2"rQ" = "r"s, which corresponds to an extremal black hole. Black holes with 2"rQ" > "r"s cannot exist in nature because if the charge is greater than the mass there can be no physical event horizon (the term under the square root becomes negative). Objects with a charge greater than their mass can exist in nature, but they can not collapse down to a black hole, and if they could, they would display a naked singularity. Theories with supersymmetry usually guarantee that such "superextremal" black holes cannot exist.
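The two horizon radii can be evaluated numerically from the expression above. The following Python sketch assumes that the Schwarzschild radius and the charge length scale are given in the same (arbitrary) length units; the function name is illustrative only.
import math

def horizon_radii(r_s, r_q):
    # r_pm = (r_s +/- sqrt(r_s^2 - 4 r_q^2)) / 2; returns None when 2*r_q > r_s,
    # i.e. when there are no real horizons (a super-extremal object).
    disc = r_s**2 - 4.0 * r_q**2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return 0.5 * (r_s + root), 0.5 * (r_s - root)

print(horizon_radii(2.0, 1.0))   # extremal case 2*r_q = r_s: both horizons coincide at r = 1.0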
The electromagnetic potential is
formula_21
If magnetic monopoles are included in the theory, then a generalization to include magnetic charge "P" is obtained by replacing "Q"2 by "Q"2 + "P"2 in the metric and including the term "P" cos "θ" "dφ" in the electromagnetic potential.
Gravitational time dilation.
The gravitational time dilation in the vicinity of the central body is given by
formula_22
which relates to the local radial escape velocity of a neutral particle
formula_23
Christoffel symbols.
The Christoffel symbols
formula_24
with the indices
formula_25
give the nonvanishing expressions
formula_26
Given the Christoffel symbols, one can compute the geodesics of a test-particle.
Tetrad form.
Instead of working in the holonomic basis, one can perform efficient calculations with a tetrad. Let formula_27 be a set of one-forms with internal Minkowski index formula_28, such that formula_29. The Reissner–Nordström metric can be described by the tetrad
formula_30,
formula_31,
formula_32
formula_33
where formula_34. The parallel transport of the tetrad is captured by the connection one-forms formula_35. These have only 24 independent components compared to the 40 components of formula_36. The connections can be solved for by inspection from Cartan's equation formula_37, where the left hand side is the exterior derivative of the tetrad, and the right hand side is a wedge product.
formula_38
formula_39
formula_40
formula_41
formula_42
The Riemann tensor formula_43 can be constructed as a collection of two-forms by the second Cartan equation formula_44 which again makes use of the exterior derivative and wedge product. This approach is significantly faster than the traditional computation with formula_36; note that there are only four nonzero formula_45 compared with nine nonzero components of formula_46.
Equations of motion.
Because of the spherical symmetry of the metric, the coordinate system can always be aligned in a way that the motion of a test-particle is confined to a plane, so for brevity and without restriction of generality we use "θ" instead of "φ". In dimensionless natural units of "G" = "M" = "c" = "K" = 1 the motion of an electrically charged particle with the charge "q" is given by
formula_47
which yields
formula_48
formula_49
formula_50
All total derivatives are with respect to proper time formula_51.
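The three equations of motion above can be integrated numerically. The following Python sketch is a direct transcription of them in the same natural units (with "M" = 1); the charge values and initial conditions are arbitrary illustrative choices, not results from the literature.
from scipy.integrate import solve_ivp

Q, q = 0.3, 0.0   # black-hole charge and test-particle charge (illustrative values)

def rhs(tau, y):
    t, r, th, td, rd, thd = y
    D = r**2 - 2.0*r + Q**2                      # r^2 - 2Mr + Q^2 with M = 1
    tdd = 2.0*(Q**2 - r)/(r*D)*rd*td + q*Q/D*rd
    rdd = (D*(Q**2 - r)/r**5)*td**2 + ((r - Q**2)/(r*D))*rd**2 + (D/r)*thd**2 + q*Q*D/r**4*td
    thdd = -2.0*thd*rd/r
    return [td, rd, thd, tdd, rdd, thdd]

y0 = [0.0, 10.0, 0.0, 1.1, 0.0, 0.035]           # start at r = 10 with some angular motion
sol = solve_ivp(rhs, (0.0, 500.0), y0, max_step=0.5)
print(sol.y[1, -1])                              # radial coordinate at the end of the run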
Constants of the motion are provided by solutions formula_52 to the partial differential equation
formula_53
after substitution of the second derivatives given above. The metric itself is a solution when written as a differential equation
formula_54
The separable equation
formula_55
immediately yields the constant relativistic specific angular momentum
formula_56
a third constant obtained from
formula_57
is the specific energy (energy per unit rest mass)
formula_58
Substituting formula_59 and formula_60 into formula_61 yields the radial equation
formula_62
Multiplying under the integral sign by formula_59 yields the orbital equation
formula_63
The total time dilation between the test-particle and an observer at infinity is
formula_64
The first derivatives formula_65 and the contravariant components of the local 3-velocity formula_66 are related by
formula_67
which gives the initial conditions
formula_68
formula_69
The specific orbital energy
formula_70
and the specific relative angular momentum
formula_71
of the test-particle are conserved quantities of motion. formula_72 and formula_73 are the radial and transverse components of the local velocity-vector. The local velocity is therefore
formula_74
Alternative formulation of metric.
The metric can be expressed in Kerr–Schild form like this:
formula_75
Notice that k is a unit vector. Here "M" is the constant mass of the object, "Q" is the constant charge of the object, and "η" is the Minkowski tensor.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(t, r, \\theta, \\varphi)"
},
{
"math_id": 1,
"text": "\nds^2=c^2\\, d\\tau^2 = \n\\left( 1 - \\frac{r_\\text{s}}{r} + \\frac{r_{\\rm Q}^2}{r^2} \\right) c^2\\, dt^2 -\\left( 1 - \\frac{r_\\text{s}}{r} + \\frac{r_Q^2}{r^2} \\right)^{-1} \\, dr^2 - r^2 \\, d\\theta^2 - r^2\\sin^2\\theta \\, d\\varphi^2,"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\tau"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "(\\theta, \\varphi)"
},
{
"math_id": 7,
"text": "r_\\text{s}"
},
{
"math_id": 8,
"text": "r_\\text{s} = \\frac{2GM}{c^2},"
},
{
"math_id": 9,
"text": "r_Q"
},
{
"math_id": 10,
"text": "r_Q^2 = \\frac{Q^2 G}{4\\pi\\varepsilon_0 c^4}."
},
{
"math_id": 11,
"text": "\\varepsilon_0"
},
{
"math_id": 12,
"text": "M_{\\rm irr}= \\frac{c^2}{G} \\sqrt{\\frac{r_+^2}{2}} \\ \\to \\ M=\\frac{Q ^2}{16\\pi\\varepsilon_0 G M_{\\rm irr}} + M_{\\rm irr}."
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": "M_{\\rm irr}"
},
{
"math_id": 15,
"text": "Q"
},
{
"math_id": 16,
"text": "r_\\text{s}/r"
},
{
"math_id": 17,
"text": "r_Q/r"
},
{
"math_id": 18,
"text": "g_{rr}"
},
{
"math_id": 19,
"text": " 1 - \\frac{r_{\\rm s}}{r} + \\frac{r_{\\rm Q}^2}{r^2} = -\\frac{1}{g_{rr}} = 0."
},
{
"math_id": 20,
"text": "r_\\pm = \\frac{1}{2}\\left(r_{\\rm s} \\pm \\sqrt{r_{\\rm s}^2 - 4r_{\\rm Q}^2}\\right)."
},
{
"math_id": 21,
"text": "A_\\alpha = (Q/r, 0, 0, 0)."
},
{
"math_id": 22,
"text": "\\gamma = \\sqrt{|g^{t t}|} = \\sqrt{\\frac{r^2}{Q^2+(r-2 M) r}} ,"
},
{
"math_id": 23,
"text": "v_{\\rm esc}=\\frac{\\sqrt{\\gamma^2-1}}{\\gamma}. "
},
{
"math_id": 24,
"text": " \\Gamma^i_{j k} = \\sum_{s=0}^3 \\ \\frac{g^{is}}{2} \\left(\\frac{\\partial g_{js}}{\\partial x^k}+\\frac{\\partial g_{sk}}{\\partial x^j}-\\frac{\\partial g_{jk}}{\\partial x^s}\\right)"
},
{
"math_id": 25,
"text": "\\{ 0, \\ 1, \\ 2, \\ 3 \\} \\to \\{ t, \\ r, \\ \\theta, \\ \\varphi \\}"
},
{
"math_id": 26,
"text": "\n\\begin{align}\n\\Gamma^t_{t r} & = \\frac{M r-Q^2}{r ( Q^2 + r^2 - 2 M r ) } \\\\[6pt]\n\\Gamma^r_{t t} & = \\frac{(M r-Q^2) \\left(r^2-2Mr+Q^2\\right)}{r^5} \\\\[6pt]\n\\Gamma^r_{r r} & = \\frac{Q^2-M r}{r (Q^2 -2 M r+r^2)} \\\\[6pt]\n\\Gamma^r_{\\theta \\theta} & = -\\frac{r^2-2Mr+Q^2}{r} \\\\[6pt]\n\\Gamma^r_{\\varphi \\varphi} & = -\\frac{\\sin ^2 \\theta \\left(r^2-2Mr+Q^2\\right)}{r} \\\\[6pt]\n\\Gamma^\\theta_{\\theta r} & = \\frac{1}{r} \\\\[6pt]\n\\Gamma^\\theta_{\\varphi \\varphi} & = - \\sin \\theta \\cos \\theta \\\\[6pt]\n\\Gamma^\\varphi_{\\varphi r} & = \\frac{1}{r} \\\\[6pt]\n\\Gamma^\\varphi_{\\varphi \\theta} & = \\cot \\theta\n\\end{align}\n"
},
{
"math_id": 27,
"text": " {\\bf e}_I = e_{\\mu I} "
},
{
"math_id": 28,
"text": " I \\in\\{0,1,2,3\\} "
},
{
"math_id": 29,
"text": " \\eta^{IJ} e_{\\mu I} e_{\\nu J} = g_{\\mu\\nu}"
},
{
"math_id": 30,
"text": " {\\bf e}_0 = G^{1/2} \\, dt "
},
{
"math_id": 31,
"text": " {\\bf e}_1 = G^{-1/2} \\, dr "
},
{
"math_id": 32,
"text": " {\\bf e}_2 = r \\, d\\theta "
},
{
"math_id": 33,
"text": " {\\bf e}_3 = r \\sin \\theta \\, d\\varphi "
},
{
"math_id": 34,
"text": " G(r) = 1 - r_sr^{-1} + r_Q^2r^{-2} "
},
{
"math_id": 35,
"text": " \\boldsymbol \\omega_{IJ} = - \\boldsymbol \\omega_{JI} = \\omega_{\\mu IJ} = e_{I}^\\nu \\nabla_\\mu e_{J\\nu} "
},
{
"math_id": 36,
"text": " \\Gamma_{\\mu\\nu}^\\lambda "
},
{
"math_id": 37,
"text": " d{\\bf e}_I = {\\bf e}^J \\wedge \\boldsymbol \\omega_{IJ} "
},
{
"math_id": 38,
"text": " \\boldsymbol \\omega_{10} = \\frac12 \\partial_r G \\, dt"
},
{
"math_id": 39,
"text": " \\boldsymbol \\omega_{20} = \\boldsymbol \\omega_{30} = 0"
},
{
"math_id": 40,
"text": " \\boldsymbol \\omega_{21} = - G^{1/2} \\, d\\theta"
},
{
"math_id": 41,
"text": " \\boldsymbol \\omega_{31} = - \\sin \\theta G^{1/2} d \\varphi"
},
{
"math_id": 42,
"text": " \\boldsymbol \\omega_{32} = - \\cos \\theta \\, d\\varphi"
},
{
"math_id": 43,
"text": " {\\bf R}_{IJ} = R_{\\mu\\nu IJ} "
},
{
"math_id": 44,
"text": "{\\bf R}_{IJ} = d \\boldsymbol \\omega_{IJ} + \\boldsymbol \\omega_{IK} \\wedge \\boldsymbol \\omega^K{}_J,"
},
{
"math_id": 45,
"text": "\\boldsymbol \\omega_{IJ} "
},
{
"math_id": 46,
"text": "\\Gamma_{\\mu\\nu}^\\lambda"
},
{
"math_id": 47,
"text": " \\ddot x^i = - \\sum_{j=0}^3 \\ \\sum_{k=0}^3 \\ \\Gamma^i_{j k} \\ {\\dot x^j} \\ {\\dot x^k} + q \\ {F^{i k}} \\ {\\dot x_k} "
},
{
"math_id": 48,
"text": "\\ddot t = \\frac{ \\ 2 (Q^2-Mr) }{r(r^2 -2Mr +Q ^2)}\\dot{r}\\dot{t}+\\frac{qQ}{(r^2-2mr+Q^2)} \\ \\dot{r}"
},
{
"math_id": 49,
"text": "\\ddot r = \\frac{(r^2-2Mr+Q^2)(Q^2-Mr) \\ \\dot{t}^2}{r^5}+\\frac{(Mr-Q^2) \\dot{r}^2}{r(r^2-2Mr+Q^2)}+\\frac{(r^2-2Mr+Q^2) \\ \\dot{\\theta}^2}{r} + \\frac{qQ(r^2-2mr+Q^2)}{r^4} \\ \\dot{t}"
},
{
"math_id": 50,
"text": "\\ddot \\theta = -\\frac{2 \\ \\dot\\theta \\ \\dot{r}}{r} ."
},
{
"math_id": 51,
"text": "\\dot a=\\frac{da}{d\\tau}"
},
{
"math_id": 52,
"text": "S (t,\\dot t,r,\\dot r,\\theta,\\dot\\theta,\\varphi,\\dot\\varphi) "
},
{
"math_id": 53,
"text": " 0=\\dot t\\dfrac{\\partial S}{\\partial t}+\\dot r\\frac{\\partial S}{\\partial r}+\\dot\\theta\\frac{\\partial S}{\\partial\\theta}+\\ddot t \\frac{\\partial S}{\\partial \\dot t} +\\ddot r \\frac{\\partial S}{\\partial \\dot r} + \\ddot\\theta \\frac{\\partial S}{\\partial \\dot\\theta} "
},
{
"math_id": 54,
"text": "\n S_1=1 = \n\\left( 1 - \\frac{r_s}{r} + \\frac{r_{\\rm Q}^2}{r^2} \\right) c^2\\, {\\dot t}^2 -\\left( 1 - \\frac{r_s}{r} + \\frac{r_Q^2}{r^2} \\right)^{-1} \\, {\\dot r}^2 - r^2 \\, {\\dot \\theta}^2 ."
},
{
"math_id": 55,
"text": " \\frac{\\partial S}{\\partial r}-\\frac{2}{r}\\dot\\theta\\frac{\\partial S}{\\partial \\dot\\theta}=0 "
},
{
"math_id": 56,
"text": " S_2=L=r^2\\dot\\theta; "
},
{
"math_id": 57,
"text": "\n\\frac{\\partial S}{\\partial r}-\\frac{2(Mr-Q^2)}{r(r^2-2Mr+Q^2)}\\dot t\\frac{\\partial S}{\\partial \\dot t}=0\n"
},
{
"math_id": 58,
"text": "S_3=E=\\frac{\\dot t(r^2-2Mr+Q^2)}{r^2} + \\frac{qQ}{r} ."
},
{
"math_id": 59,
"text": "S_2"
},
{
"math_id": 60,
"text": "S_3"
},
{
"math_id": 61,
"text": "S_1"
},
{
"math_id": 62,
"text": "c\\int d\\,\\tau =\\int \\frac{r^2\\,dr}{ \\sqrt{r^4(E-1)+2Mr^3-(Q^2+L^2)r^2+2ML^2r-Q^2L^2 } } ."
},
{
"math_id": 63,
"text": "c\\int Lr^2\\,d\\theta =\\int \\frac{L\\,dr}{ \\sqrt{r^4(E-1)+2Mr^3-(Q^2+L^2)r^2+2ML^2r-Q^2L^2 } }.\n"
},
{
"math_id": 64,
"text": "\\gamma= \\frac{q \\ Q \\ r^3 + E \\ r^4}{r^2 \\ (r^2-2 r+Q^2)} ."
},
{
"math_id": 65,
"text": "\\dot x^i"
},
{
"math_id": 66,
"text": "v^i"
},
{
"math_id": 67,
"text": "\\dot x^i = \\frac{v^i}{\\sqrt{(1-v^2) \\ |g_{i i}|}},"
},
{
"math_id": 68,
"text": "\\dot r = \\frac{v_\\parallel \\sqrt{r^2-2M+Q^2}}{r \\sqrt{(1-v^2)}}"
},
{
"math_id": 69,
"text": "\\dot \\theta = \\frac{v_\\perp}{r \\sqrt{(1-v^2)}} ."
},
{
"math_id": 70,
"text": "E=\\frac{\\sqrt{Q^2-2rM+r^2}}{r \\sqrt{1-v^2}}+\\frac{qQ}{r}"
},
{
"math_id": 71,
"text": "L=\\frac{v_\\perp \\ r}{\\sqrt{1-v^2}}"
},
{
"math_id": 72,
"text": "v_{\\parallel}"
},
{
"math_id": 73,
"text": "v_{\\perp}"
},
{
"math_id": 74,
"text": "v = \\sqrt{v_\\perp^2+v_\\parallel^2} = \\sqrt{\\frac{(E^2-1)r^2-Q^2-r^2+2rM}{E^2 r^2}}."
},
{
"math_id": 75,
"text": "\n\\begin{align}\ng_{\\mu \\nu} & = \\eta_{\\mu \\nu} + fk_\\mu k_\\nu \\\\[5pt]\nf & = \\frac{G}{r^2}\\left[2Mr - Q^2 \\right] \\\\[5pt]\n\\mathbf{k} & = ( k_x ,k_y ,k_z ) = \\left( \\frac{x}{r} , \\frac{y}{r}, \\frac{z}{r} \\right) \\\\[5pt]\nk_0 & = 1.\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=711862 |
71188265 | Ecological Orbits | 2003 book on population ecology
Ecological Orbits: How Planets Move and Populations Grow is a book on population ecology by ecologist Lev R. Ginzburg and philosopher of science Mark Colyvan that argues for an inertial model of population dynamics.
Summary.
The book is divided into eight chapters, each of which advances Ginzburg and Colyvan's argument for an inertial model of population dynamics. It begins with an explanation of planetary orbits and population growth, and argues that an analogy between the two might be fruitful for theoretical ecology. Chapter 2 engages in the debate over whether or not ecology can have laws of nature akin to those of physics by comparing ecological allometries such as Kleiber's law with Kepler's laws of planetary motion. Ginzburg and Colyvan argue that ecological allometries cannot be discounted as laws of nature on the grounds that they are not always exceptionless, predictive, falsifiable, or distinguishable from accidental regularities, as such concerns would also rule out physical laws. For example, they argue that physical laws are not always exceptionless because many are only valid under ideal conditions and they are never perfectly falsifiable because supplementary adjustments can always be made to prevent a complete rejection of the law.
In chapter 3, Ginzburg and Colyvan outline how their view of population dynamics differs from conventional accounts which assume exponential growth or decline. While traditional accounts are based on rules governing rates of birth and death, Ginzburg and Colyvan's model starts by considering the energy consumption of individuals which underlies their ability to survive and reproduce and therefore determines population birth and death rates. This leads to a theory that is different in mathematical form to the conventional account, being a second-order differential equation rather than a first-order equation predicting exponential population growth or decline. According to this theory, population decline should exhibit accelerated death and population dynamics more generally should only be able to be predicted if both the growth rate and acceleration are known, analogously to classical mechanics which requires the speed and acceleration of an object to be known to predict its subsequent motion. Ginzburg and Colyvan present results from experiments by Larry Slobodkin on the starvation of hydras which showed parabolic population decline as evidence for their theory.
Chapter 4 details what Ginzburg and Colyvan call the "maternal effect hypothesis" which provides the mechanism for the inertial aspect of their theory of population dynamics. According to this hypothesis, healthier mothers produce offspring who are themselves healthier and better provided for than less healthy mothers, making the offspring better able to survive and reproduce. As a result, the growth rate of a population depends not just on its current circumstances but also the circumstances of the parent generation. This means that in general populations should respond to changes in circumstances with a time lag; the health of previous generations can for a time help to counteract the effect of worsening conditions. Ginzburg and Colyvan explain how the maternal effect produces second-order changes in population growth which can account for population cycles. Population cycles are generally explained by predator-prey models but Ginzburg and Colyvan argue in chapter 5 that such models are overfitted and cannot explain what they call the case of missing periods without ad hoc assumptions. The case of missing periods is the absence of observed population cycles with periods between 2 and 6 generations, a fact that is predicted by the maternal effect hypothesis alongside population age structuring and cohort effects. They also argue that the maternal effect hypothesis can be tested directly via laboratory experiments using model species. Specifically, they argue that ratio-dependent predation in which there is joint exponential growth of predator and prey populations is possible under their model but not traditional models.
In chapter 6, Ginzburg and Colyvan further develop the ideas of accelerated death and maternal effects into a general theory of inertial population dynamics. They suggest an equation for the time derivative of the growth rate (i.e. the acceleration in population change) which depends on just three parameters "α," "β", and "rmax":
formula_0
where "r" is the growth rate, "t" is time, "n" is the natural logarithm of the population size and "n*" is the equilibrium value of "n". They show that this equation can account for a range of population dynamics, including exponential growth, monotonic approach to equilibrium, overshooting equilibrium, damped oscillations, oscillations with constant or increasing amplitudes, and asymmetric cycles.
Chapter 7 explores some of the practical consequences of Ginzburg and Colyvan's model, including for the management of fisheries and conservation of endangered species. For example, they argue that their inertial model predicts that relatively small increases in the number of fish caught by fisheries can result in a rapid decline in fish population and even extinction. They think that current difficulties in environmental management can be traced back to the fact that currently used tools do not take into account inertial effects. Chapter 8 details the problems involved in scientific theory choice and argues that such problems support the inertial model over traditional models of population dynamics. They argue that the inertial model is relatively simple whilst successfully accommodating the empirical data. The chapter also summarises the arguments and positions presented throughout the book.
Reception.
Günter Wagner reviewed the book in "Science", describing it as "an exciting read on many levels" due to Ginzburg and Colyvan's explanations throughout the book. He said that the book placing individual energy consumption at the centre of its theory was "novel" and that if Ginzburg and Colyvan were right about the practical consequences of their theory, ""Ecological Orbits" ought to become an instant classic, one to be read by every professional and aspiring ecologist and environmental biologist." He also said that it was possible that ""Ecological Orbits" may well turn out to mark such a transition from what was considered unthinkable—namely a rigorous and nontrivial theory of population dynamics akin to a law of nature—to a real scientific achievement." In "The Quarterly Review of Biology", Charles J. Krebs said that the book "explores the analogy between planetary motion and population growth in a novel way that provides some exciting insights into the fundamental structure of theoretical population biology." He described Ginzburg and Colyvan's argument as "persuasive" and says his "only complaint" is that the term "maternal effects" was "used too loosely to mean all delayed effects by which one generation affects the biology of the following generations, rather than being restricted to the biological mechanisms".
In a review of the book published in the "International Society for Behavioral Ecology Newsletter", Scott M. Ramsay questioned how novel the idea of time lags and maternal effects were in population ecology, saying that time lags are often included in ecology textbooks and that the maternal effect has been known about since the 1950s. He said that he was left wondering whether "it [is] fair that this book be criticized for being merely derivative, or should the authors be applauded for bringing attention to ideas that have resisted incorporation in population modeling?" He also felt that the analogy between population dynamics and classical mechanics was less strong than Ginzburg and Colyvan argue, saying that the data they present may be able to be explained by non-inertial models. Nonetheless, he thought the writing style "was generally quite good" and that the book "should be of obvious interest to theoretical ecologists" whilst also being accessible to graduate students and advanced undergraduates.
In "Ecology", Robert P. Freckleton criticised the book for presenting a completely deterministic model of population dynamics that ignores stochastic effects such as the effects of weather, saying that their model "must be regarded to be incomplete" as a result. Freckleton says that this is an important omission because stochasticity can provide an alternative mechanism for accelerated deaths in populations. Freckleton also argues that the book has a zoological bias, focusing on the population dynamics of animals to the exclusion of the population dynamics of plants. He says that deterministic models have been successfully applied to plant populations but that population cycles are generally not observed contrary to what the maternal effect hypothesis predicts. He also felt that the argument in the book may have been more suited to a more succinct presentation in a journal article. He concluded by saying "[h]aving said all this, I enjoyed reading the book and I got food for thought."
John M. Drake reviewed the book in "The American Midland Naturalist". He said that it presents "a fresh and stimulating perspective" that "challenges one to take seriously the problem that the conceptual foundations of our discipline are still to be questioned, interpreted, challenged and modified or approved." He thought that it was "a pleasure to read" and had a tone that was "disarming and engaging", making it accessible for undergraduates and interested non-scientists. However, he felt that the philosophical portion of the book focusing on whether ecology has laws was engaging in a "misguided" dispute and was "a bit tiresome". Serge Luryi reviewed the book in "Physics Today", saying "I recommend it highly as a true pleasure to read." He described Ginzburg as "an eager and capable revolutionary" for his work in ecology and says that the philosophy in the book "should not scare away the prospective reader as it is presented in a very lighthearted way."
John Matthewson reviewed the book in the "Australasian Journal of Philosophy", describing it as brief but dense and deep in its exposition of scientific and philosophical issues. Nonetheless, he thought that it could have been made longer so that some of the mathematical points could be explained more accessibly for lay readers. He also thought that this would allow some of the philosophical arguments, which he thought some readers may find "unsatisfying", to be developed in more depth. For example, he argues that the book's argument that laws of nature in physics and ecology are equally open to question could lead us to conclude that neither have genuine laws rather than that both do as Ginzburg and Colyvan argue. Furthermore, he says that there are differences between ecological and physical laws not considered in the book, such as physical laws being universal. Overall, he characterises these problems as "minor points given all that there is to enjoy in this book" and says that "Ecological Orbits" is "a fantastic example of what can result when scientists and philosophers collaborate."
References.
<templatestyles src="Reflist/styles.css" />
External links.
"Ecological Orbits" at Oxford Scholarship Online | [
{
"math_id": 0,
"text": "\\frac{dr}{dt} \\approx \\left(1 - \\frac{r}{r_{max}} \\right)(-\\alpha(n-n^*) + \\beta r)"
}
]
| https://en.wikipedia.org/wiki?curid=71188265 |
711898 | Proper length | Length of an object in the object's rest frame
Proper length or rest length is the length of an object in the object's rest frame.
The measurement of lengths is more complicated in the theory of relativity than in classical mechanics. In classical mechanics, lengths are measured based on the assumption that the locations of all points involved are measured simultaneously. But in the theory of relativity, the notion of simultaneity is dependent on the observer.
A different term, proper distance, provides an invariant measure whose value is the same for all observers.
"Proper distance" is analogous to proper time. The difference is that the proper distance is defined between two spacelike-separated events (or along a spacelike path), while the proper time is defined between two timelike-separated events (or along a timelike path).
Proper length or rest length.
The "proper length" or "rest length" of an object is the length of the object measured by an observer which is at rest relative to it, by applying standard measuring rods on the object. The measurement of the object's endpoints doesn't have to be simultaneous, since the endpoints are constantly at rest at the same positions in the object's rest frame, so it is independent of Δ"t". This length is thus given by:
formula_0
However, in relatively moving frames the object's endpoints have to be measured simultaneously, since they are constantly changing their position. The resulting length is shorter than the rest length, and is given by the formula for length contraction (with "γ" being the Lorentz factor):
formula_1
In comparison, the invariant proper distance between two arbitrary events happening at the endpoints of the same object is given by:
formula_2
So Δ"σ" depends on Δ"t", whereas (as explained above) the object's rest length "L"0 can be measured independently of Δ"t". It follows that Δ"σ" and "L"0, measured at the endpoints of the same object, only agree with each other when the measurement events were simultaneous in the object's rest frame so that Δ"t" is zero. As explained by Fayngold:
p. 407: "Note that the "proper distance" between two events is generally "not" the same as the "proper length" of an object whose end points happen to be respectively coincident with these events. Consider a solid rod of constant proper length "l"0. If you are in the rest frame "K"0 of the rod, and you want to measure its length, you can do it by first marking its endpoints. And it is not necessary that you mark them simultaneously in "K"0. You can mark one end now (at a moment "t"1) and the other end later (at a moment "t"2) in "K"0, and then quietly measure the distance between the marks. We can even consider such measurement as a possible operational definition of proper length. From the viewpoint of the experimental physics, the requirement that the marks be made simultaneously is redundant for a stationary object with constant shape and size, and can in this case be dropped from such definition. Since the rod is stationary in "K"0, the distance between the marks is the "proper length" of the rod regardless of the time lapse between the two markings. On the other hand, it is not the "proper distance" between the marking events if the marks are not made simultaneously in "K"0."
Proper distance between two events in flat space.
In special relativity, the proper distance between two spacelike-separated events is the distance between the two events, as measured in an inertial frame of reference in which the events are simultaneous. In such a specific frame, the distance is given by
formula_3
where Δ"x", Δ"y", Δ"z" are the differences in the three spatial coordinates of the two events in that frame.
The definition can be given equivalently with respect to any inertial frame of reference (without requiring the events to be simultaneous in that frame) by
formula_4
where Δ"t" is the difference in the temporal coordinates of the two events, Δ"x", Δ"y", Δ"z" are the differences in their spatial coordinates in that frame, and "c" is the speed of light.
The two formulae are equivalent because of the invariance of spacetime intervals, and since Δ"t" = 0 exactly when the events are simultaneous in the given frame.
Two events are spacelike-separated if and only if the above formula gives a real, non-zero value for Δ"σ".
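As a small numerical illustration with arbitrary coordinate values (in units where "c" = 1), the formula can be evaluated in one inertial frame and then again after a Lorentz boost of the coordinate differences, giving the same proper distance:
import math

def proper_distance(dt, dx, dy, dz):
    s2 = dx*dx + dy*dy + dz*dz - dt*dt
    return math.sqrt(s2) if s2 > 0 else None       # None: the events are not spacelike-separated

def boost_x(dt, dx, v):
    g = 1.0 / math.sqrt(1.0 - v*v)                 # Lorentz factor
    return g*(dt - v*dx), g*(dx - v*dt)            # boosted (dt, dx) along the x-axis

dt, dx, dy, dz = 1.0, 3.0, 0.0, 0.0                # spacelike separation, since |dx| > |dt|
dt_b, dx_b = boost_x(dt, dx, 0.6)
print(proper_distance(dt, dx, dy, dz))             # sqrt(8), approximately 2.83
print(proper_distance(dt_b, dx_b, dy, dz))         # same value in the boosted frame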
Proper distance along a path.
The above formula for the proper distance between two events assumes that the spacetime in which the two events occur is flat. Hence, the above formula cannot in general be used in general relativity, in which curved spacetimes are considered. It is, however, possible to define the proper distance along a path in any spacetime, curved or flat. In a flat spacetime, the proper distance between two events is the proper distance along a straight path between the two events. In a curved spacetime, there may be more than one straight path (geodesic) between two events, so the proper distance along a straight path between two events would not uniquely define the proper distance between the two events.
Along an arbitrary spacelike path "P", the proper distance is given in tensor syntax by the line integral
formula_5
where "g"μν is the metric tensor and "dx"μ are the coordinate differentials along the path "P".
In the equation above, the metric tensor is assumed to use the (+ − − −) metric signature, and is assumed to be normalized to return a time instead of a distance. The − sign in the equation should be dropped with a metric tensor that instead uses the (− + + +) metric signature. Also, the formula_6 should be dropped with a metric tensor that is normalized to use a distance, or that uses geometrized units.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L_{0} = \\Delta x."
},
{
"math_id": 1,
"text": "L = \\frac{L_0}{\\gamma}."
},
{
"math_id": 2,
"text": "\\Delta\\sigma = \\sqrt{\\Delta x^2 - c^2 \\Delta t^2}."
},
{
"math_id": 3,
"text": "\\Delta\\sigma=\\sqrt{\\Delta x^2 + \\Delta y^2 + \\Delta z^2} ,"
},
{
"math_id": 4,
"text": "\\Delta\\sigma = \\sqrt{\\Delta x^2 + \\Delta y^2 + \\Delta z^2 - c^2 \\Delta t^2},"
},
{
"math_id": 5,
"text": "L = c \\int_P \\sqrt{-g_{\\mu\\nu} dx^\\mu dx^\\nu} ,"
},
{
"math_id": 6,
"text": "c"
}
]
| https://en.wikipedia.org/wiki?curid=711898 |
71193979 | Quasiconvexity (calculus of variations) | Generalisation of convexity
In the calculus of variations, a subfield of mathematics, quasiconvexity is a generalisation of the notion of convexity. It is used to characterise the integrand of a functional and is related to the existence of minimisers. Under some natural conditions, quasiconvexity of the integrand is a necessary and sufficient condition for a functional
formula_0
to be lower semi-continuous in the weak topology, for a sufficiently regular domain formula_1. By compactness arguments (Banach–Alaoglu theorem) the existence of minimisers of weakly lower semicontinuous functionals may then follow from the direct method.
This concept was introduced by Morrey in 1952. It should not be confused with the distinct notion of a quasiconvex function, which shares the same name.
Definition.
A locally bounded Borel-measurable function formula_2 is called quasiconvex if
formula_3
for all formula_4 and all formula_5 , where "B"(0,1) is the unit ball and formula_6 is the Sobolev space of essentially bounded functions with essentially bounded derivative and vanishing trace.
Relations to other notions of convexity.
Quasiconvexity is a generalisation of convexity for functions defined on matrices. To see this, let formula_4 and formula_8 with
formula_9. The Riesz-Markov-Kakutani representation theorem states that the dual space of formula_10 can be identified with the space of signed, finite Radon measures on it. We define a Radon measure formula_11 by
formula_12
for formula_13. It can be verified that formula_11 is a probability measure and its barycenter is given by
formula_14
If h is a convex function, then Jensen's inequality gives
formula_15
This holds in particular if "V"("x") is the derivative of formula_16, by the generalised Stokes' theorem.
The determinant formula_17 is an example of a quasiconvex function that is not convex. To see that the determinant is not convex, consider
formula_18
It then holds that formula_19, but for formula_20 we have
formula_21. This shows that the determinant is not a quasiconvex function in the sense used in optimisation and game theory; the quasiconvexity of the calculus of variations is thus a distinct notion of convexity.
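The computation in this example can be checked numerically; the following short Python/NumPy verification confirms that det "A" = det "B" = 0 while det("λA" + (1 − "λ")"B") = "λ"(1 − "λ") > 0:
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

for lam in (0.25, 0.5, 0.75):
    M = lam*A + (1.0 - lam)*B
    # det(lam*A + (1-lam)*B) equals lam*(1-lam), strictly above max(det A, det B) = 0
    print(lam, np.linalg.det(M), lam*(1.0 - lam))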
In the vectorial case of the Calculus of Variations there are other notions of convexity. For a function formula_22 it holds that
formula_23
These notions are all equivalent if formula_24 or formula_25. Already in 1952, Morrey conjectured that rank-1-convexity does not imply quasiconvexity. This was a major unsolved problem in the calculus of variations until Šverák gave a counterexample in 1993 for the case formula_26 and formula_27.
The case formula_28 or formula_29 is still an open problem, known as Morrey's conjecture.
Relation to weak lower semi-continuity.
Under certain growth conditions on the integrand, the sequential weak lower semi-continuity (swlsc) of an integral functional in an appropriate Sobolev space is equivalent to the quasiconvexity of the integrand. Acerbi and Fusco proved the following theorem:
Theorem: If formula_30 is a Carathéodory function and formula_31 holds, then the functional
formula_32
is swlsc in the Sobolev space formula_33 with formula_34 if and only if formula_35 is quasiconvex. Here formula_36 is a positive constant and formula_37 an integrable function.
Other authors use different growth conditions and different proof conditions. The first proof of it was due to Morrey in his paper, but he required additional assumptions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}: W^{1,p}(\\Omega, \\mathbb{R}^m) \\rightarrow \\R \\qquad u \\mapsto \\int_\\Omega f(x, u(x), \\nabla u(x)) dx"
},
{
"math_id": 1,
"text": " \\Omega \\subset \\mathbb{R}^d "
},
{
"math_id": 2,
"text": " f:\\mathbb{R}^{m\\times d} \\rightarrow \\mathbb{R} "
},
{
"math_id": 3,
"text": " \n\\int_{B(0,1)} \\bigl(f(A + \\nabla \\psi(x)) - f(A)\\bigr)dx \\geq 0\n"
},
{
"math_id": 4,
"text": " A \\in \\mathbb{R}^{m\\times d} "
},
{
"math_id": 5,
"text": " \\psi \\in W_0^{1,\\infty}(B(0,1), \\mathbb{R}^m) "
},
{
"math_id": 6,
"text": " W_0^{1,\\infty} "
},
{
"math_id": 7,
"text": " W_0^{1,\\infty} "
},
{
"math_id": 8,
"text": " V \\in L^1(B(0,1), \\mathbb{R}^m) "
},
{
"math_id": 9,
"text": " \\int_{B(0,1)} V(x)dx = 0 "
},
{
"math_id": 10,
"text": " C_0(\\mathbb{R}^{m\\times d}) "
},
{
"math_id": 11,
"text": " \\mu "
},
{
"math_id": 12,
"text": "\n\\langle h, \\mu\\rangle = \\frac{1}{|B(0,1)|} \\int_{B(0,1)} h(A + V(x)) dx\n"
},
{
"math_id": 13,
"text": "h \\in C_0(\\mathbb{R}^{m\\times d}) "
},
{
"math_id": 14,
"text": "\n[\\mu] = \\langle \\operatorname{id}, \\mu \\rangle = A + \\int_{B(0,1)} V(x) dx = A.\n"
},
{
"math_id": 15,
"text": "\nh(A) = h([\\mu]) \\leq \\langle h, \\mu \\rangle = \\frac{1}{|B(0,1)|} \\int_{B(0,1)} h(A + V(x)) dx.\n"
},
{
"math_id": 16,
"text": " \\psi \\in W_0^{1,\\infty}(B(0,1), \\mathbb{R}^{m\\times d}) "
},
{
"math_id": 17,
"text": " \\det \\mathbb{R}^{d\\times d} \\rightarrow \\mathbb{R} "
},
{
"math_id": 18,
"text": "\nA = \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\\\ \\end{pmatrix} \n\\quad \\text{and} \\quad\nB = \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\\\ \\end{pmatrix}.\n"
},
{
"math_id": 19,
"text": " \\det A = \\det B = 0 "
},
{
"math_id": 20,
"text": " \\lambda \\in (0,1) "
},
{
"math_id": 21,
"text": " \\det (\\lambda A + (1-\\lambda)B) = \\lambda(1-\\lambda) > 0 = \\max(\\det A, \\det B) "
},
{
"math_id": 22,
"text": " f: \\mathbb{R}^{m\\times d} \\rightarrow \\mathbb{R} "
},
{
"math_id": 23,
"text": "\nf \\text{ convex} \\Rightarrow f \\text{ polyconvex} \\Rightarrow f \\text{ quasiconvex} \\Rightarrow\nf \\text{ rank-1-convex}.\n"
},
{
"math_id": 24,
"text": " d = 1 "
},
{
"math_id": 25,
"text": " m=1 "
},
{
"math_id": 26,
"text": " d \\geq 2 "
},
{
"math_id": 27,
"text": " m \\geq 3 "
},
{
"math_id": 28,
"text": " d = 2"
},
{
"math_id": 29,
"text": " m = 2 "
},
{
"math_id": 30,
"text": " f: \\mathbb{R}^d \\times \\mathbb{R}^m \\times \\mathbb{R}^{d\\times m} \\rightarrow \\mathbb{R}, (x,v,A) \\mapsto f(x,v,A) "
},
{
"math_id": 31,
"text": " 0\\leq f(x,v,A) \\leq a(x) + C(|v|^p + |A|^p) "
},
{
"math_id": 32,
"text": "\n\\mathcal{F}[u] = \\int_\\Omega f(x, u(x),\\nabla u(x)) dx\n"
},
{
"math_id": 33,
"text": " W^{1,p}(\\Omega, \\mathbb{R}^m) "
},
{
"math_id": 34,
"text": " p > 1 "
},
{
"math_id": 35,
"text": " f "
},
{
"math_id": 36,
"text": " C"
},
{
"math_id": 37,
"text": " a(x) "
}
]
| https://en.wikipedia.org/wiki?curid=71193979 |
7119900 | Hall's conjecture | Unsolved problem in mathematics
In mathematics, Hall's conjecture is an open question on the differences between perfect squares and perfect cubes. It asserts that a perfect square "y"2 and a perfect cube "x"3 that are not equal must lie a substantial distance apart. This question arose from consideration of the Mordell equation in the theory of integer points on elliptic curves.
The original version of Hall's conjecture, formulated by Marshall Hall, Jr. in 1970, says that there is a positive constant "C" such that for any integers "x" and "y" for which "y"2 ≠ "x"3,
formula_0
Hall suggested that perhaps "C" could be taken as 1/5, which was consistent with all the data known at the time the conjecture was proposed. Danilov showed in 1982 that the exponent 1/2 on the right side (that is, the use of |"x"|1/2) cannot be replaced by any higher power: for no δ > 0 is there a constant "C" such that |"y"2 - "x"3| > C|"x"|1/2 + δ whenever "y"2 ≠ "x"3.
In 1965, Davenport proved an analogue of the above conjecture in the case of polynomials:
if "f"("t") and "g"("t") are nonzero polynomials over C such that
"g"("t")3 ≠ "f"("t")2 in C["t"], then
formula_1
The "weak" form of Hall's conjecture, stated by Stark and Trotter around 1980, replaces the square root on the right side of the inequality by any exponent "less" than 1/2: for any "ε" > 0, there is some constant "c"(ε) depending on ε such that for any integers "x" and "y" for which "y"2 ≠ "x"3,
formula_2
The original, "strong", form of the conjecture with exponent 1/2 has never been disproved, although it is no longer believed to be true and the term "Hall's conjecture" now generally means the version with the ε in it. For example, in 1998, Noam Elkies found the example
447884928428402042307918^2 - 5853886516781223^3 = -1641843,
for which compatibility with Hall's conjecture would require "C" to be less than .0214 ≈ 1/50, so roughly 10 times smaller than the original choice of 1/5 that Hall suggested.
The weak form of Hall's conjecture would follow from the ABC conjecture. A generalization to other perfect powers is Pillai's conjecture, though it is also known that Pillai's conjecture would be true if Hall's conjecture held for any specific 0 < "ε" < 1/2.
The table below displays the known cases with formula_3. Note that "y" can be computed as the nearest integer to "x"^(3/2).
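The ratio "r" can be evaluated with exact integer arithmetic; for instance, the following Python sketch computes it for the Elkies example quoted above, using the values of "x" and "y" appearing there.
import math

x = 5853886516781223
y = 447884928428402042307918        # the integer nearest to x^(3/2)

diff = y*y - x*x*x                  # exact integer arithmetic: -1641843
print(diff)
print(math.sqrt(x) / abs(diff))     # the ratio r, approximately 46.6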
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " |y^2 - x^3| > C\\sqrt{|x|}."
},
{
"math_id": 1,
"text": " \\deg(g(t)^2 - f(t)^3) \\geq \\frac{1}{2}\\deg f(t) + 1."
},
{
"math_id": 2,
"text": " |y^2 - x^3| > c(\\varepsilon) x^{1/2-\\varepsilon}."
},
{
"math_id": 3,
"text": " r = \\sqrt{x}/|y^2-x^3| > 1"
}
]
| https://en.wikipedia.org/wiki?curid=7119900 |
71200312 | Vadalog | Type of Knowledge Graph Management System
Vadalog is a system for performing complex logic reasoning tasks over knowledge graphs. Its language is based on an extension of the rule-based language Datalog, "Warded Datalog±".
Vadalog was developed by researchers at the University of Oxford and Technische Universität Wien as well as employees at the Bank of Italy.
Knowledge graph management systems (KGMS).
A knowledge graph management system (KGMS) has to manage knowledge graphs, which incorporate large amounts of data in the form of "facts" and "relationships." In general, it can be seen as the union of three components:
From a more technical standpoint, some additional requirements can be identified for defining a proper KGMS:
Other requirements may include more typical DBMS functions and services, as the ones proposed by Codd.
Vadalog system.
Vadalog offers a platform that fulfills all the requirements of a KGMS listed above. It is able to perform "rule-based" reasoning tasks on top of knowledge graphs and it also supports the data science workflow, such as data visualization and machine learning.
Reasoning task and recursion.
A rule is an expression of the form "a"0 :- "a"1, ..., "a"n, where "a"0 is the head atom of the rule and "a"1, ..., "a"n are the atoms of its body.
A "rule" makes it possible to infer new knowledge starting from the variables that are in the body: when all the variables in the body of a rule are successfully assigned, the rule is activated and it results in the derivation of the head predicate: given a database D and a set of rules Σ, a "reasoning task" aims at inferring new knowledge, applying the rules of the set Σ to the database D (the "extensional knowledge").
The most widespread form of knowledge that has been adopted over the last decades has been in the form of rules, be it in rule-based systems, ontology-based systems or other forms and it can be typically captured in knowledge graphs. The nature of knowledge graphs also makes the presence of recursion in these rules a particularly important aspect. Recursion means that the same rules might be called multiple times before obtaining the final answer of the reasoning task and it is particularly powerful as it allows an inference based on previously inferred results. This implies that the system must provide a strategy that guarantees termination. More technically, a program is recursive if the dependency graph built with the application of the rules is cyclical. The simplest form of recursion is that in which the head of a rule also appears in the body ("self-recursive rules").
The query language.
The Vadalog language allows to answer reasoning queries that also include recursion. It is based on Warded Datalog±, which belongs to the Datalog± family of languages that extends Datalog with existential quantifiers in rule heads and at the same time restricts its syntax in order to achieve decidability and tractability. Existential rules are also known as tuple-generating dependencies ("tgds").
An "existential rule" has the following form:
formula_0
or, alternatively, in Datalog syntax, it can be written as follows:
p(X,Z) :- r(X).
Variables in Vadalog are like variables in first-order logic and a variable is local to the rule in which it occurs. This means that occurrences of the same variable name in different rules refer to different variables.
Warded Datalog±.
Consider a set of rules formula_1 consisting of the following:
r(X,Y) :- p(X).
p(Z) :- r(X,Z).
the variable Z in the second rule is said to be "dangerous", since the first rule will generate a "null" in the second term of the atom "r" and this will be injected into the second rule to derive the atom "p", leading to a propagation of "nulls" when trying to find an answer to the program. If arbitrary propagation is allowed, reasoning is undecidable and the program will be infinite. Warded Datalog± overcomes this issue by requiring that, in every rule of a set formula_1, all the dangerous variables of the rule body appear together in a single body atom, called the "ward". The concept of wardedness restricts the way dangerous variables can be used inside a program. Although this is a limit in terms of expressive power, with this requirement and thanks to its architecture and termination algorithms, Warded Datalog± is able to find answers to a program in a finite number of steps. It also exhibits a good trade-off between computational complexity and expressive power, capturing PTIME data complexity while allowing ontological reasoning and the possibility of running programs with recursion.
Vadalog extension.
Vadalog replicates in its entirety Warded Datalog± and extends it with the inclusion in the language of:
In addition, the system provides a highly engineered architecture to allow efficient computation. This is done in the following two ways.
The Vadalog system is therefore able to perform ontological reasoning tasks, as it belongs to the Datalog family. Reasoning with the logical core of Vadalog captures OWL 2 QL and SPARQL (through the use of existential quantifiers), and graph analytics (through support for recursion and aggregation). The declarative nature of the language makes the code easy-to-read and manageable.
Example of ontological reasoning task.
Consider the following set of Vadalog rules:
ancestor(Y,X) :- person(X).
ancestor(Y,Z) :- ancestor(Y,X), parent(X,Z).
The first rule states that for each person formula_2 there exists an ancestor formula_3. The second rule states that, if formula_2 is a parent of formula_4, then formula_3 is an ancestor of formula_4 too. Note the existential quantification in the first position of the ancestor predicate in the first rule, which will generate a null νi in the chase procedure. Such a null is then propagated to the head of the second rule. Consider a database codice_0 with the extensional facts and the query of finding all the entailed ancestor facts as the reasoning task.
By performing the chase procedure, the fact codice_1 is generated by triggering the first rule on codice_2. Then, codice_3 is created by activating the second rule on codice_1 and codice_5. Finally, the first rule could be triggered on codice_6, but the resulting fact codice_7 is isomorphic with codice_3, thus this fact is not generated and the corresponding portion of the chase graph is not explored.
In conclusion, the answer to the query is the set of facts codice_9.
Additional Features.
The integration of Vadalog with data science tools is achieved by means of data binding primitives and functions.
The system also provides an integration with the JupyterLab platform, where Vadalog programs can be written and run and the output can be read, exploiting the functionalities of the platform. It also gives the possibility to evaluate the correctness of the program, run it and analyse the derivation process of output facts by means of tools such as "syntax highlighting", "code analysis" (checking whether the code is correct or there are errors) and "explanations of results" (how the result has been obtained): all these functionalities are embedded in the notebook and help in writing and analyzing Vadalog code.
Use Cases.
The Vadalog system can be employed to address many real-world use cases from distinct research and industry fields. Among the latter, this section presents two relevant and accessible cases belonging to the financial domain.
Company Control.
A company ownership graph shows entities as nodes and shares as edges. When an entity holds a certain amount of shares in another one (commonly the absolute majority), it is able to exert decision power over that entity; this configures company control and, more generally, a group structure. Searching for all "control" relationships requires investigating different scenarios and very complex group structures, namely direct and indirect control. This can be translated into the following rules:
These rules can be written in a Vadalog program that will derive all "control" edges like the following:
control(X,X) :- company(X).
control(X,Z) :- control(X,Y), own(Y,Z,W), V = sum(W,<Y>), V > 0.5.
The first rule states that each company controls itself. The second rule defines control of X over Z by summing the shares of Z held by companies Y, over all companies Y controlled by X.
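The intended semantics of these two rules can be imitated with an ordinary fixpoint computation. The following Python sketch is only an illustration of that semantics, not of the Vadalog system's actual evaluation strategy; the ownership data are made up.
own = {("A", "B"): 0.6, ("B", "C"): 0.3, ("A", "C"): 0.3}   # own[(Y, Z)]: fraction of Z held by Y
companies = {c for edge in own for c in edge}

control = {(c, c) for c in companies}        # first rule: every company controls itself
changed = True
while changed:                               # apply the second rule until a fixpoint is reached
    changed = False
    for x in companies:
        for z in companies:
            if (x, z) in control:
                continue
            # sum the shares of z held by companies y that x already controls
            total = sum(w for (y, z2), w in own.items() if z2 == z and (x, y) in control)
            if total > 0.5:
                control.add((x, z))
                changed = True

print(sorted(control))   # A controls B directly, and C through B's stake combined with its own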
Close Link.
This scenario consists in determining whether there exists a link between two entities in a company ownership graph. Determining the existence of such links is relevant, for instance, in banking supervision and credit worthiness evaluation, as a company cannot act as guarantor for loans to another company if the two share such a relationship. Formally, two companies X and Y are involved in a "close link" if X owns, directly or indirectly, at least 20% of the equity of Y (or vice versa), or if a third company owns, directly or indirectly, at least 20% of the equity of both X and Y.
These rules can be written in a Vadalog program that will derive all "close link" edges like the following:
mcl(X,Y,S) :- own(X,Y,S).
mcl(X,Z,S1 * S2) :- mcl(X,Y,S1), own(Y,Z,S2).
cl1(X,Y) :- mcl(X,Y,S), TS = sum(S), TS > 0.2.
cl2(X,Y) :- cl1(Z,X), cl1(Z,Y), X != Y.
closelink(X,Y) :- cl1(X,Y).
closelink(X,Y) :- cl2(X,Y).
The first rule states that two companies X and Y connected by an ownership edge are possible close links. The second rule states that, if X and Y are possible close links with a share S1 and there exists an ownership edge from Y to a company Z with a share S2, then also X and Z are possible close links with a share S1 * S2. The third rule states that, if the sum of all the partial shares S of Y owned directly or indirectly by X is greater than or equal to 20% of the equity of Y, then they are close links according to the first definition. The fourth rule models the second definition of close links, i.e., the third-party case.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi(x) \\Rightarrow \\exist z \\Psi(x,z)"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "Z"
}
]
| https://en.wikipedia.org/wiki?curid=71200312 |
71200397 | High pressure jet | A high pressure jet is a stream of pressurized fluid that is released from an environment at a significantly higher pressure than ambient pressure from a nozzle or orifice, due to operational or accidental release. In the field of safety engineering, the release of toxic and flammable gases has been the subject of many R&D studies because of the major risk that they pose to the health and safety of workers, equipment and environment. Intentional or accidental release may occur in an industrial settings like natural gas processing plants, oil refineries and hydrogen storage facilities.
A main focus during a risk assessment process is the estimation of the gas cloud extension and dissipation, important parameters that allow one to evaluate and establish safety limits that must be respected in order to minimize the possible damage after a high pressure release.
Mechanism and structure of a gaseous jet.
Subsonic and sonic flow.
When a pressurized gas is released, the velocity of the flow will heavily depend on the pressure difference between stagnant pressure and downstream pressure. By assuming an isentropic expansion of an ideal gas from its stagnant conditions (P0 , meaning the velocity of the gas is zero) to downstream conditions (P1, positioned at the exit plane of the nozzle or orifice), the subsonic flow rate of the source term is given by Ramskill's formulation:
formula_0
As the ratio between downstream condition pressure and stagnant condition pressure decreases, the flow rate of the ideal gas will increase. This behavior will continue until a critical value is reached (in air, P1/P0 is roughly 0.528, dependent on the heat capacity ratio, γ), changing the condition of the jet from a non-choked flow to a choked flow. This leads to a newly defined expression for the aforementioned pressure ratio and, subsequently, for the flow rate equation.
The critical value for the pressure ratio is defined as:
formula_1
This newly defined ratio can then be used to determine the flow rate for a sonic choked flow:
formula_2
The flow rate equation for a choked flow will have a fixed velocity, which is the speed of sound of the medium, where the Mach number is equal to 1:
formula_3
It is important to note that if P1 keeps on decreasing, no flow rate change will occur if the ratio is already below the critical value, unless P0 also changes (also assuming that the orifice/nozzle exit area and upstream temperature stay the same).
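As a numerical illustration of the two regimes, the following Python sketch implements the standard isentropic ideal-gas orifice relations, written in a slightly different but equivalent form to the expressions above; the discharge coefficient, orifice area and gas properties used in the example are illustrative assumptions.
import math

def mass_flow(P0, T0, P_amb, gamma, M_mol, A, Cd=1.0):
    # Ideal-gas discharge through an orifice, switching between the subsonic
    # (non-choked) and sonic (choked) expressions at the critical pressure ratio.
    R = 8.314462618 / M_mol                                    # specific gas constant, J/(kg K)
    r_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))  # about 0.528 for air (gamma = 1.4)
    r = P_amb / P0
    if r <= r_crit:                                            # choked: flow fixed at sonic conditions
        flux = math.sqrt(gamma) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    else:                                                      # non-choked (subsonic) discharge
        flux = math.sqrt(2.0 * gamma / (gamma - 1.0) * (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma)))
    return Cd * A * P0 / math.sqrt(R * T0) * flux              # mass flow rate, kg/s

# Methane (gamma ~ 1.32, molar mass ~ 0.016 kg/mol) released from 10 bar to atmosphere: choked flow.
print(mass_flow(P0=10e5, T0=288.0, P_amb=1.013e5, gamma=1.32, M_mol=0.016, A=1e-4))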
Under-expanded jet structure.
An under-expanded jet is one that manifests when the pressure at downstream conditions (at the end of a nozzle or orifice) is greater than the pressure of the environment where the gas is being released. It is said to be under-expanded since the gas will expand, trying to reach the same pressure as its surroundings. When under-expanded, the jet will have characteristics of a compressible flow, a condition in which pressure variations are significant enough to have a strong effect on the velocity (which can exceed the speed of sound of the gas), density and temperature. It is important to note that as the jet expands and incorporates gases from the surrounding medium, the jet will behave more and more like an incompressible fluid. A general description of the jet structure therefore distinguishes a nearfield zone close to the exit plane, dominated by compressibility effects and by the shock structures described below; an intermediate (transition) zone in which the jet progressively mixes with the surrounding fluid; and a farfield zone in which the jet is fully developed and behaves essentially as an incompressible, momentum-driven free jet.
Under-expanded jet classification.
Further classification of the jet can be related to how the nearfield zone develops due to the compressible effects that govern it. When the jet first exits the orifice or nozzle, it will expand very quickly, resulting in an over-expansion of the flow (which will also reduce the temperature and density of the flow as quickly as it depressurizes). Gases that have expanded to a pressure lower than that of the surrounding fluid will be compressed inwards, causing an increase in the pressure of the flow. If this re-compression leads to the fluid having a higher pressure than the surrounding fluid, another expansion will happen.
This process will repeat until the pressure difference between ambient pressure and jet pressure is null (or close to null).
Compression and expansion are accomplished through a series of shock waves, formed as a result of Prandtl–Meyer expansion and compression waves.
Development of the aforementioned shock waves will be related to the difference in pressure between the stagnant conditions or downstream conditions and the ambient conditions (η0 = P0/Pamb and ηe = P1/Pamb, respectively), as well as the Mach number (Ma = V/Vc, where V is the velocity of the flow and Vc is the speed of sound of the medium). With increasing pressure ratio, under-expanded jets are commonly classified as moderately under-expanded jets, in which the expansion and compression waves form a repeating pattern of shock cells ("shock diamonds"), and as highly or extremely under-expanded jets, in which the compression waves coalesce into a normal shock known as a Mach disk.
Natural gas release.
Amongst incidental scenarios, natural gas releases have become particularly relevant within the process industry. With an overall composition of about 94.7% methane, it is important to consider how this gas can cause damage when released. Methane is a non-toxic, flammable gas that, at higher concentrations, can act as an asphyxiant by displacing oxygen from the lungs. The main concern with methane is its flammability and the potential damage to the surroundings if the high pressure jet were to ignite into a jet fire.
Three parameters that must be considered when dealing with flammable gases are the flash point (FP), the upper flammability limit (UFL) and the lower flammability limit (LFL), as they are fixed values for any compound at a given pressure and temperature. In the fire triangle model, three components are needed to induce a combustion reaction: a fuel, an oxidizing agent and heat.
When the release happens into an environment filled with air, the oxidizing agent is oxygen (present in air at a roughly constant concentration of 21% at standard conditions). A few centimeters from the exit plane, where the jet is almost pure, the concentration of natural gas is too high and that of oxygen too low to support any combustion reaction; but as the high pressure jet develops, its components dilute as air entrainment increases, enriching the jet in oxygen. Assuming a constant oxygen concentration in the entrained air, the jet must dilute enough to enter its flammability range, i.e. fall below its UFL. Within this range a flammable mixture is formed and any source of heat can start the reaction.
To properly judge the damage and potential risk that a jet fire can generate, several studies have examined the maximum distance that the cloud generated by the jet can reach. As dilution of the jet continues due to air entrainment in the farfield, dropping below its UFL, the maximum distance that the flammable mixture can reach is the point at which the concentration of the cloud equals the LFL of the gas, since this is the lowest concentration that still permits the formation of a flammable mixture of air and natural gas at standard conditions (the LFL for natural gas is 4%).
For a free jet at sub-critical pressure (beyond the nearfield zone), the decay of the mean volume-fraction concentration along the axis of any gas released in air can be written as follows:
formula_4
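The decay relation above can be inverted for the axial distance at which the mean concentration drops to the LFL, as in the short Python sketch below; the decay constant k and the virtual-origin offset a are empirical quantities, and the values used here (k = 5, a = 0, methane in air, 10 mm orifice) are illustrative assumptions only.
<syntaxhighlight lang="python">
import math

def axial_volume_fraction(z, d, rho_a, rho_g, k=5.0, a=0.0):
    """Mean volume-fraction decay along the jet axis (farfield, sub-critical jet)."""
    return k * d / (z + a) * math.sqrt(rho_a / rho_g)

def distance_to_lfl(lfl, d, rho_a, rho_g, k=5.0, a=0.0):
    """Axial distance at which the mean concentration falls to the LFL."""
    return k * d * math.sqrt(rho_a / rho_g) / lfl - a

# illustrative: methane (rho_g ~ 0.68 kg/m3) released in air (rho_a ~ 1.2 kg/m3)
print(distance_to_lfl(lfl=0.04, d=0.01, rho_a=1.2, rho_g=0.68))  # metres to the 4% contour
</syntaxhighlight>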
Computational Fluid Dynamics.
Experimental data on high pressure jets are necessarily limited in the size and complexity of the scenario because of the inherent dangers and expense of the experiments themselves. Alternative methods of gathering data, such as representative models, can be used to predict the maximum extent of the gas cloud at its LFL concentration. Simpler models such as a Gaussian gas dispersion model (e.g., SCREEN3) or an integral model (e.g., PHAST) can be useful for a quick, qualitative overview of how the jet may extend. Unfortunately, their inability to properly simulate jet-obstacle interactions makes them unsuitable beyond preliminary calculations. This is why Computational Fluid Dynamics (CFD) simulations are generally preferred for more complex scenarios.
Although several approaches to CFD simulation exist, a common one is the finite volume method, which discretizes the volume into smaller cells of varying shapes. Each cell represents a fluid-filled volume to which the scenario parameters are applied. In every cell a set of conservation equations for mass, momentum and energy is solved, along with the continuity equation. Fluid-obstacle interaction is then modeled with different algorithms depending on the turbulence closure model used.
The larger the total number of cells within the volume, the better the quality of the simulation, but the longer the simulation time. Convergence problems can arise in the simulation where large momentum, mass and energy gradients appear in the volume. Regions where such problems are expected (such as the nearfield zone of the jet) need a higher number of cells to achieve gradual changes from one cell to the next. Ideally, CFD simulations can be used to derive a simpler model which, for a specific set of scenarios, gives results with accuracy and precision similar to the CFD simulation itself.
Birch's Approach.
Through a set of small scale experiments at varying pressures, Birch "et al." formulated an equation for estimating a virtual surface source, based on the conservation of mass between the exit plane of the orifice and the virtual surface. This approach allows a compressible, under-expanded jet to be simulated as an incompressible, fully expanded jet. As a consequence, a simpler CFD model can be built by using the following diameter (named the "pseudo-diameter") as the new exit plane:
formula_5
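A hedged Python sketch of the pseudo-diameter calculation follows; here T2 and P2 are taken to be the temperature and pressure of the fully expanded (ambient) state, and the example numbers (a 10 mm orifice, 60 bar methane released to atmosphere) are illustrative assumptions rather than values from Birch's experiments.
<syntaxhighlight lang="python">
import math

def birch_pseudo_diameter(d, Cd, T2, T0, P0, P2, gamma):
    """Pseudo-diameter used as the exit plane of an equivalent fully expanded jet."""
    return d * math.sqrt(
        Cd * math.sqrt(T2 / T0) * (P0 / P2)
        * (2.0 / (gamma + 1.0)) ** ((1.0 + gamma) / (2.0 * gamma - 2.0)))

# illustrative: 10 mm orifice, 60 bar methane release into 1 atm, ambient temperature
print(birch_pseudo_diameter(d=0.01, Cd=0.85, T2=288.0, T0=288.0,
                            P0=60e5, P2=1.013e5, gamma=1.32))  # ~0.05 m
</syntaxhighlight>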
Ground and obstacle interaction.
In the process industry there is a variety of cases in which a high pressure jet release incident can occur. Leakage from LNG storage facilities or NG pipeline systems can degenerate into a jet fire and, through a domino effect, cause heavy damage to the workforce, equipment and surrounding environment. For the different scenarios that may occur, safety protocols have to be engineered that set minimum distances between equipment and the workforce, along with preventive systems that reduce the danger of the potential incident. The following are some of the most common scenarios that may be encountered in an industrial environment:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q \\;=\\; C_D\\;A_o\\;\\rho_1\\;\\sqrt{\\;2\\;\\frac{P_0}{ \\rho_0\\ }\\;\\left[\\frac{ \\gamma\\ }{ \\gamma\\ - 1}\\right]\\left[ 1 -\\;\\,\\left(\\frac{\\;P_1}{P_0}\\right)^\\frac{ \\gamma\\ - 1}{ \\gamma\\ }\\;\\right]}"
},
{
"math_id": 1,
"text": "\\frac{\\;P_1}{P_0}\\;=\\; \\left[\\frac{2}{ \\gamma\\ + 1 }\\right]^\\left(\\frac{ \\gamma\\ }{ \\gamma\\ - 1 }\\right)"
},
{
"math_id": 2,
"text": "Q \\;=\\; C_D\\;A_o\\;\\rho_1\\;V_c"
},
{
"math_id": 3,
"text": "V_c\\;=\\;\\sqrt{\\;\\frac{T_1\\; \\gamma\\ R}{ M }}"
},
{
"math_id": 4,
"text": " \\bar{ \\eta\\ } \\;=\\;\\frac{ k\\; d}{z + a}\\sqrt{\\;\\frac{ \\rho_a\\ }{ \\rho_g\\ }}"
},
{
"math_id": 5,
"text": "d_{ps}\\;=\\; d\\;\\sqrt{\\;C_D\\;\\left(\\frac{T_2}{T_0}\\right)^{0.5}\\left(\\frac{P_0}{P_2}\\right)\\left(\\frac{2}{ \\gamma\\ + 1 }\\right)^\\left(\\frac{1+ \\gamma\\ }{ 2\\gamma\\ - 2 }\\right)}"
}
]
| https://en.wikipedia.org/wiki?curid=71200397 |
71200412 | Critical embankment velocity | Critical embankment velocity or critical speed, in transportation engineering, is the velocity of a vehicle moving on an embankment at which severe vibration of the embankment and the nearby ground is induced. The concept and prediction methods were put forward by scholars in the civil engineering community before 1980, were emphasized and studied exhaustively by Krylov in 1994 on the basis of the Green function method, and have since been predicted more accurately with other methods. When vehicles such as high-speed trains or airplanes move at or beyond this critical velocity (first taken to be the Rayleigh wave speed and later obtained by more sophisticated calculation or tests), the vibration magnitudes of the vehicles and nearby ground increase rapidly, possibly harming the passengers and the neighboring residents. This unexpected phenomenon has been called the ground vibration boom since 1997, when it was first observed in Sweden.
This critical velocity is similar to the speed of sound, which when exceeded produces a sonic boom. However, there are differences in the transmitting medium. The critical velocity of sound varies only within a small range, although air quality and the interaction between a jet aircraft and the atmosphere affect it. The embankment, by contrast, including the fill layers and the ground soil beneath the surface, is a typically random medium. Such a complex soil-structure coupled vibration system may have several critical velocity values. The critical embankment velocity is therefore a general concept whose value is not constant and which, in current practice, must be obtained by calculation or experiment for the particular engineering project.
Mechanism.
The wave superposition.
Under ideal assumptions, moving loads imposed on the surface of the embankment induce sub-waves which propagate inside the embankment and along its surface. If the velocity of the moving loads is less than the speed of the propagating waves, which may be body or surface waves, the vehicles move more slowly than the propagating waves and the wave crests on the embankment surface do not intersect. Therefore, no superposition of the waves takes place, and the vibration of the embankment and vehicles varies only within a small range at this stage.
Conversely, when the operating velocity of the vehicles exceeds the critical velocity, the vehicles must first accelerate through, and momentarily run at, the critical velocity. At that moment, all the crests of the propagating waves coincide at the position where the loads are imposed or where the wheels contact the structure, which leads to severe vibration around the vehicles because of wave superposition (the phenomenon is shown in the schematic figure on the right).
From this perspective, the critical embankment velocity equals the dominant velocity of the propagating waves.
The structure resonance.
In reality and in practical applications, the speed of the propagating sub-waves also depends on their frequency content. The total vibration consists of infinitely many wave components of different frequencies, the magnitude of each sub-wave varies with its wave speed, and different vehicle velocities cause different parts of the sub-wave spectrum to oscillate maximally.
Obtaining the critical velocity of the embankment is similar to finding the resonant frequencies of a multi-degree-of-freedom system. There are many orders of frequencies, and the first few can make the structure vibrate severely. When the vehicle moves at the critical velocity with respect to the embankment structure, the excitation frequency lies very close to the resonant frequencies of most of the propagating waves in the vehicle-embankment coupled structure, and the vehicle velocity coincides with that of most sub-waves. The detailed determination is carried out by dispersion analysis of the whole structure, as illustrated in the following sections.
The dominant frequencies of the vibration induced at the critical embankment velocity are determined by the specific configuration of the engineering structure. For instance, the loads of passing trains transfer from the wheels through the welded rails and sleepers to the embankment; the discrete sleepers under the moving wheels cause the cyclic loads imposed on the embankment to propagate at low frequencies.
Impact.
Abnormal vibration.
Compared with relatively low-speed scenarios, the magnitude and extent of the vibration of a high-speed line increase. The track or pavement structure and the embankment deteriorate faster as the cyclic loads act repeatedly over the operation period. More importantly, it is commonly neglected that the vibration of the vehicle itself is magnified at the critical velocity, especially around the area where the wheels interact with the rail. Such high local vibration levels can also evidently increase the risk of the whole vehicle derailing. That is the real reason why the critical embankment velocity is so important.
Low-frequency noise.
Apart from the vibration, the low-frequency noise radiated by a vehicle moving at the critical velocity travels very long distances into residential districts. Residents who live near the line may endure this low-frequency noise for millions of cycles, which can make people annoyed, nervous and insomniac and can even cause resonance of human organs. These impacts on humans are still ignored in the engineering design process and motivate research on low-frequency noise damage.
Prediction.
Calculating an accurate critical embankment velocity for a new line is still difficult and should be verified by experiments in practical applications. However, analytical or numerical modeling, even with simple models, gives many insights into the qualitative behavior of typical lines, such as exposing potential issues in the design of the embankment and track structure or ways to relieve the impact of the critical velocity. With the rapid development of high-performance computing, it is gradually becoming feasible to predict the critical embankment velocity through numerical methods before a line is constructed.
Elastic foundation beam model.
In the low-frequency range (below 100 Hz, less than the dominant frequencies induced in a typical embankment), it is reasonable to obtain the critical embankment velocity from the theory of a beam on an elastic foundation. Based on elastic theory, the dynamic governing equation of an Euler beam on an elastic foundation under a moving point load with velocity formula_0 describes the vertical deflection formula_1 of the track and sleepers:
formula_2
Herein, formula_3, formula_4, formula_5 and formula_6 represent the material properties of the track structure and foundation respectively, and formula_7 is the Dirac delta function determining the location of the point load formula_8. The solution of the above equation is
formula_9
where formula_10 is a ratio representing the mechanical difference between the track and the foundation, and formula_11 and formula_12 are dimensionless parameters associated with the minimal velocity of bending waves formula_13 of the Euler beam, written as
formula_14
When the velocity of moving vehicles formula_0 approaches the minimal velocity formula_15
formula_16
Therefore, the minimal phase velocity of bending waves is regarded as the critical embankment velocity in the elastic foundation beam model. Nevertheless, this model is only justified when the stiffness of the vehicles and track structure is much greater than that of the embankment. Soil-structure interaction and three-dimensional effects are the key factors in the general case.
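The minimal bend-wave velocity formula_13 is straightforward to evaluate numerically; the following Python sketch uses assumed, purely illustrative track and foundation parameters, not values tied to any particular line.
<syntaxhighlight lang="python">
def critical_velocity_winkler(E, I, m, K_f):
    """Minimal phase velocity of bending waves of an Euler beam on a Winkler
    foundation, i.e. the critical velocity in this simplified track model."""
    return (4.0 * K_f * E * I / m**2) ** 0.25

# assumed illustrative values: steel track (E = 2.1e11 Pa, I = 6.1e-5 m^4),
# mass per unit length m = 60 kg/m, foundation stiffness K_f = 5e6 N/m^2
print(critical_velocity_winkler(E=2.1e11, I=6.1e-5, m=60.0, K_f=5e6))  # ~500 m/s
</syntaxhighlight>
Since the result scales as formula_6 to the one-quarter power, softer foundations lower this value sharply, which is why critical-velocity problems appear mainly on soft-soil sites.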
Elastic half-space beam model.
If no beam rests on top of the half-space, its critical velocity is the Rayleigh wave speed according to elastic theory, which is smaller than the speeds of the two types of body waves. Taking the overlying beam and its soil-structure interaction (SSI) into consideration increases the number of factors related to the critical velocity. The dynamic governing equations of the elastic half-space and of the beam are, respectively,
formula_17
formula_18
wherein formula_19 and formula_20 are the Lamé constants and formula_21 represents the contact force between the half-space and the beam. The boundary conditions assume that the contact surface is ideally smooth:
formula_22
formula_23
Based on the decomposition of the elastic potentials and the integral transform, the vertical displacement response of the half-space surface can be obtained:
formula_24
wherein formula_25 is the wavenumber in the corresponding direction, formula_26 represents the width of the beam, and formula_27 is the partially transformed vertical displacement of the half-space surface:
formula_28
Therefore, substituting the expression for the vertical displacement into the above, the integral expression in the frequency-wavenumber domain is
formula_29
The equation above describes the vibration of the beam and of the half-space. Rewriting it in simplified form:
formula_30
The first term formula_31 in the equation above is the dispersion relation of the beam, which in this model takes the simple form formula_32. The second term formula_33 represents the corresponding relation of the half-space.
In order to analyze the critical velocity of this coupled structure, its equivalent stiffness with respect to the conventional Winkler foundation in the Fourier domain is needed. For a Winkler foundation, the last equation above has the form
formula_34
Thus, the equivalent Winkler-foundation stiffness of the half-space in this SSI model is formula_35, which is written as
formula_36
The equation above has a very complex form, and an approximate form is usually used in its place in practical applications. The critical velocity is determined by solving it simultaneously with the beam model:
formula_37
The approximate equation for the critical velocity formula_38, valid for Poisson's ratios from 0.2 to 0.38, is
formula_39
According to this equation, two critical velocity values exist in this kind of model: one less than the Rayleigh wave speed and one equal to it. Further research shows that if the periodic supports are taken into consideration, the elastic half-space has a series of critical velocity values.
Multi-layered elastic half-space beam model.
The upper part of the embankment consists of multiple layers, such as the track, ballast or slab, and foundation, with different material properties. Therefore, a more sophisticated critical velocity analysis of the multi-layered or inhomogeneous structure is needed in practical applications. The critical velocity can be determined from the dispersion relation of each part. The radial and vertical surface stresses and displacements of a layered half-space in the wavenumber-frequency domain, obtained by the Thomson–Haskell method, are
formula_40
Herein, formula_41 represents the stiffness matrix of the whole model. According to Cramer's rule, for non-trivial displacements to exist in the frequency domain, the determinant of formula_41 must be equal to zero:
formula_42
Solving it, the surface dispersion curves of the elastic layered foundation take the form below:
formula_43
The first equation describes the horizontal transverse displacement produced by the SH waves; the second is related to the P-SV waves. Studies show that the dispersive SH and P-SV wave curves lie between the curves of the surface Rayleigh wave and the shear wave of the half-space, which are non-dispersive.
Considering the dispersion relation of the track structure as well gives more accurate results. For instance, the dispersion equation of a typical slab track is written as a function of wavenumber formula_44 and angular frequency formula_45:
formula_46
The intersection points of the dispersion curves of the structural components are related to the critical velocity of the embankment. The velocity values can be obtained from the definition of the wavenumber:
formula_47
wherein formula_48 represents the excitation frequency of the moving loads and formula_49 denotes the two possible moving directions.
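As a rough numerical illustration of this procedure, the sketch below intersects a simple track dispersion curve (an Euler beam on springs) with a non-dispersive ground line and converts the intersection to a velocity via the relation above, taking the load excitation frequency formula_48 as zero. All parameters are assumed for illustration; with a non-dispersive ground line the result simply reduces to the ground wave speed, whereas a layered, dispersive ground curve would give a non-trivial value.
<syntaxhighlight lang="python">
import numpy as np

# assumed illustrative parameters
EI, m_t = 1.3e7, 60.0   # track bending stiffness [N m^2] and mass per length [kg/m]
k_f = 1.0e6             # foundation stiffness per unit length [N/m^2]
c_s = 400.0             # wave speed of the supporting ground [m/s]

k = np.linspace(1e-3, 5.0, 200_000)              # wavenumbers [rad/m]
omega_track = np.sqrt((EI * k**4 + k_f) / m_t)   # track dispersion curve [rad/s]
omega_ground = c_s * k                           # stand-in for the ground dispersion curve

i = np.argmin(np.abs(omega_track - omega_ground))  # an intersection of the two curves
V_crit = omega_track[i] / k[i]                     # V = omega/k = 2*pi*f/k (f0 = 0)
print(k[i], V_crit)
</syntaxhighlight>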
Mitigation.
For engineering design, raising the critical embankment velocity well above the operating speed is a conservative way to protect passenger safety. Because issues related to the critical embankment velocity often appear only after a line has operated for many years, mitigation measures play an imperative role for both refurbished and new lines carrying high-speed vehicles. For ease of construction, mitigation measures focus on the embankment itself for new lines and on the nearby area for renovated lines. The former, active measures are more efficient than the latter, passive ones.
Measures towards the embankment.
The propagation speed of waves inside a medium depends mainly on its stiffness, namely formula_50. Therefore, the critical embankment velocity can be improved markedly through ground strengthening methods such as pile foundations, grouting, and dry deep mixing. The well-known Swedish railway line running X2 trains was initially designed using ordinary construction methods; however, because of the softness of the top clay layer, the vibration level induced by the X2 trains was several times higher than that of conventional trains. The mitigation measure adopted by the operator Banverket was the dry deep mixing method. After the installation, over a period of 2 weeks, of a total of 12 trial columns made with a special binder, each about 8 meters long, the vibration level was reduced to an acceptable value.
Apart from measures inside the embankment, engineers usually install damped supports under the rail pads to isolate the vibration transferred downwards from the wheels. Another common method of weakening the transmission of vibration is to construct an isolating trench, with or without a filling of porous material such as EPS concrete.
Measures towards the nearby area.
The vibration transferred to a distant area belongs to the low-frequency ones. For the sensitive architectures like museums, laboratory etc., damper supports are installed under the building foundations to decrease the extra vibration. Since the magnitude of this kind of vibration cannot be easily reduced, the mitigating measures are mainly adopted to decrease the noise level. The most common way is installing the noise isolation wall near the borders of lines, which could change the direction of the sonic wave because of the reflection effect. | [
{
"math_id": 0,
"text": "v_0"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "EI{\\partial^4w\\over \\partial x^4}+m{\\partial^2w\\over \\partial t^2}+K_fw=P\\delta(x-v_0t)"
},
{
"math_id": 3,
"text": "E"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "K_f"
},
{
"math_id": 7,
"text": "\\delta(.)"
},
{
"math_id": 8,
"text": "P"
},
{
"math_id": 9,
"text": "w(x,t)={P\\over8EI\\beta^3\\xi}\\exp^{-\\beta\\xi|x-v_0t|}[\\cos({-\\beta\\psi|x-v_0t|})+{\\xi\\over\\psi}\\sin({-\\beta\\psi|x-v_0t|})]"
},
{
"math_id": 10,
"text": "\\beta"
},
{
"math_id": 11,
"text": "\\xi"
},
{
"math_id": 12,
"text": "\\psi"
},
{
"math_id": 13,
"text": "v_b=\\sqrt[4]{4K_fEI/m^2}"
},
{
"math_id": 14,
"text": "\\begin{cases}\\xi=\\sqrt{1-(v_0/v_b)^2}\\\\ \\psi=\\sqrt{1+(v_0/v_b)^2} \\end{cases}"
},
{
"math_id": 15,
"text": "v_b"
},
{
"math_id": 16,
"text": "\\lim_{v_0 \\to v_b}w(x,t)=\\lim_{\\xi \\to 0}{w_1(x,t)\\over\\xi}=\\infty "
},
{
"math_id": 17,
"text": "\\mu\\nabla^2\\vec{u}+(\\lambda+\\mu)\\nabla(\\nabla\\vec{u})=\\rho{\\partial^2\\vec{u}\\over \\partial t^2}"
},
{
"math_id": 18,
"text": "m{\\partial^2W\\over \\partial t^2}+EI{\\partial^4W\\over \\partial x^4}=F(x,t,W(x,t))"
},
{
"math_id": 19,
"text": "\\lambda"
},
{
"math_id": 20,
"text": "\\mu"
},
{
"math_id": 21,
"text": "F"
},
{
"math_id": 22,
"text": "\\tau_{xz}(.,t)=\\tau_{zx}(.,t)=0"
},
{
"math_id": 23,
"text": "S_{rec}\\sigma_{zz}(.,t)=F(.,t)S_{load}"
},
{
"math_id": 24,
"text": "w(k.,0,\\omega)={\\omega^2R_l \\over \\mu c_t^2}{h(\\omega,k.)D(\\omega,k.) \\over \n\\Delta(\\omega, k.)} {\\sin(ak.)\\over ak.}"
},
{
"math_id": 25,
"text": "k."
},
{
"math_id": 26,
"text": "a"
},
{
"math_id": 27,
"text": "h(\\omega,k.)"
},
{
"math_id": 28,
"text": "h(\\omega,k.)={1 \\over 2\\pi}\\int_{-\\infty} ^{+\\infty}w(k.,0,\\omega)\\operatorname{d}\\!k.\n=\\int_{-\\infty} ^{+\\infty}\\int_{-\\infty} ^{+\\infty}\\int_{-\\infty} ^{+\\infty}\\int_{-\\infty} ^{+\\infty}W(.,t)\\exp(\\omega,t,k.,x.)\\operatorname{d}\\!{t}\\operatorname{d}\\!{x}\\operatorname{d}\\!{y}\\operatorname{d}\\!k."
},
{
"math_id": 29,
"text": "h(\\omega,k.)\\left [1-{\\omega D(.) \\over 2\\pi\\mu c_t^2}\\int_{-\\infty}^{+\\infty}\n{R_l \\over \\Delta} {\\sin(ak.)\\over ak. } \\operatorname{d}\\!k. \\right]=0"
},
{
"math_id": 30,
"text": "h(\\omega,k.)=[D(.)+\\chi(.)]=0"
},
{
"math_id": 31,
"text": "D(.)"
},
{
"math_id": 32,
"text": "D(\\omega,k.)=-m\\omega^2+EIk.^4"
},
{
"math_id": 33,
"text": "\\chi(.)"
},
{
"math_id": 34,
"text": "h(\\omega,k.)=[D(.)+\\chi_0]=0"
},
{
"math_id": 35,
"text": "\\chi"
},
{
"math_id": 36,
"text": "\\chi(\\omega,k.)=\\chi(kV,k.)=-{2\\pi\\mu c_t^2 \\over \\omega^2}\\left (\\int_{-\\infty}^{+\\infty}{R_l \\over \\Delta} {\\sin(ak.)\\over ak. } \\operatorname{d}\\!k. \\right)\n^{-1}"
},
{
"math_id": 37,
"text": "EIk.^4-mk_1^2V^2+\\chi(k.V,k)=0"
},
{
"math_id": 38,
"text": "V"
},
{
"math_id": 39,
"text": "EIk.^4-mk_1^2V^2-{2\\pi\\mu \\over (1-c_s^2/c_p^2)\\ln{a|k.|+{(-2.21+2.2\\nu)(1+0.2V^2/c_s^2-0.38V^4/c_s^4)\\over \\sqrt{1-V^2/c_R^2}}}}=0"
},
{
"math_id": 40,
"text": "\\boldsymbol{K}\\cdot\\binom{\\bar{u_r}(k_r,0),\\omega)}{\\bar{u_z}(k_r,0,\\omega)}=\\binom{\\bar{\\sigma}_{zz}(k_r,0,\\omega)}{\\bar{\\tau}_{zr}(k_r,0,\\omega)}"
},
{
"math_id": 41,
"text": "\\boldsymbol{K}"
},
{
"math_id": 42,
"text": "\\det\\boldsymbol{K}=\\begin{vmatrix} k_{11}(k_r,\\omega) & 0 & 0\\\\ 0 & k_{22}(k_r,\\omega) & k_{23}(k_r,\\omega)\\\\ 0 & k_{32}(k_r,\\omega) & k_{33}(k_r,\\omega)\\end{vmatrix}=0"
},
{
"math_id": 43,
"text": "\\begin{cases} |k_{11}(k_r,\\omega)|&=&0 \\\\ |k_{22}(k_r,\\omega)k_{33}(k_r,\\omega)-k_{23}(k_r,\\omega)k_{32}(k_r,\\omega)|&=&0 \\end{cases}\n"
},
{
"math_id": 44,
"text": "k"
},
{
"math_id": 45,
"text": "\\omega"
},
{
"math_id": 46,
"text": "\\begin{vmatrix} EIk^4+k_p-\\omega^2m_r & -k_p \\\\ -k_p & EIk^4+k_p-\\omega^2 m_s \\end{vmatrix}=0\n"
},
{
"math_id": 47,
"text": "V=\\pm2\\pi(f-f_0)/k"
},
{
"math_id": 48,
"text": "f_0"
},
{
"math_id": 49,
"text": "\\pm"
},
{
"math_id": 50,
"text": "v\\propto f(E,...)"
}
]
| https://en.wikipedia.org/wiki?curid=71200412 |
71200813 | Multiresolution Fourier transform | Multiresolution Fourier Transform is an integral Fourier transform that represents a specific wavelet-like transform with a fully scalable modulated window, but not all possible translations.
Comparison of Fourier transform and wavelet transform.
The Fourier transform is one of the most common approaches in digital signal processing and signal analysis. It represents a signal through sine and cosine functions, thus transforming the time domain into the frequency domain. A disadvantage of the Fourier transform is that both the sine and cosine functions are defined over the whole time axis, meaning that there is no time resolution. Certain variants of the Fourier transform, such as the short-time Fourier transform (STFT), utilize a window for sampling, but the window length is fixed, so the results are satisfactory only for either low or high frequency components. The fast Fourier transform (FFT) is often used because of its computational speed, but shows better results for stationary signals.
On the other hand, the wavelet transform can remedy all the aforementioned downsides. It preserves both time and frequency information and uses a window of variable length, so that both low and high frequency components are resolved with higher accuracy than with the Fourier transform. The wavelet transform also performs better in transient states. The multiresolution Fourier transform leverages these advantageous properties of the wavelet transform and applies them to the Fourier transform.
Definition.
Let formula_0 be a function that has its Fourier transform defined as
formula_1 (Eq.1)
The time axis can be split into intervals of length π/ω with centers at integer multiples of π/ω
formula_2 (Eq.2)
Then, new transforms of function formula_0 can be introduced
formula_3 (Eq.3)
formula_4 (Eq.4)
and
formula_5 (Eq.5)
formula_6 (Eq.6)
where formula_7, when n is an integer.
Functions formula_8 and formula_9 can be used in order to define the complex Fourier transform
formula_10 (Eq.7)
Then, a set of points in the frequency-time plane is defined for the computation of the introduced transforms
formula_11 (Eq.8)
where formula_12, and formula_13 is infinite in general, or a finite number if the function formula_0 has finite support. The representation of formula_0 by the functions formula_8 and formula_9 defined on this set is called the B-wavelet transform, and is used to define the integral Fourier transform.
The cosine and sine B-wavelet transforms are:
formula_14 (Eq.9)
formula_15 (Eq.10)
The B-wavelet transform does not need to be calculated across the whole range of frequency-time points, but only at the points of the set B. The integral Fourier transform can then be recovered from the B-wavelet transform.
The Fourier transform can now be represented via two integral wavelet transforms sampled only in the translation parameter:
formula_16 (Eq.11)
formula_17 (Eq.12)
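The structure of the sampling set B and of the alternating sums can be checked numerically. In the short Python sketch below the window functions are assumed to be the cosine and sine re-centered at the interval centers b_n and restricted to the intervals of Eq.2 (an assumption, since their exact definition is given in the original papers), and the signal is a test Gaussian; with this choice the alternating sums reproduce the cosine and sine integrals of Eq.1.
<syntaxhighlight lang="python">
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

w = 3.0                              # frequency omega
f = lambda t: np.exp(-t**2)          # test signal with (numerically) finite support
T = 20.0
N = int(np.ceil(T * w / np.pi))      # number of intervals needed to cover [-T, T]

cos_sum, sin_sum = 0.0, 0.0
for n in range(-N, N + 1):
    b_n = n * np.pi / w              # interval centre, Eq.2
    t = np.linspace(b_n - np.pi / (2 * w), b_n + np.pi / (2 * w), 2001)
    F_psi = trapz(f(t) * np.cos(w * (t - b_n)), t)   # assumed realization of Eq.3
    F_phi = trapz(f(t) * np.sin(w * (t - b_n)), t)   # assumed realization of Eq.5
    cos_sum += (-1) ** n * F_psi
    sin_sum += (-1) ** n * F_phi

t = np.linspace(-T, T, 200_001)
print(cos_sum, trapz(f(t) * np.cos(w * t), t))   # both ~ 0.187 (cosine integral of Eq.1)
print(sin_sum, trapz(f(t) * np.sin(w * t), t))   # both ~ 0 (sine integral of Eq.1)
</syntaxhighlight>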
Applications.
Multiresolution Fourier Transform is applied in fields such as image and audio signal analysis, curve and corner extraction, and edge detection. | [
{
"math_id": 0,
"text": "f(t)"
},
{
"math_id": 1,
"text": "F(\\omega)=\\int_{-\\infty}^{\\infty} f(t) \\cos (\\omega t) d t-j \\int_{-\\infty}^{\\infty} f(t) \\sin (\\omega t) d t"
},
{
"math_id": 2,
"text": "I_{n}=I_{n}(\\omega)=\\left[\\frac{(2 n-1) \\pi}{2 \\omega}, \\frac{(2 n+1) \\pi}{2 \\omega}\\right), n=0, \\pm 1, \\pm 2, \\ldots"
},
{
"math_id": 3,
"text": "F_{\\Psi}\\left(\\omega, b_{n}\\right)=\\int_{-\\infty}^{\\infty} f(t) \\Psi_{\\omega, b_{n}} d t"
},
{
"math_id": 4,
"text": "F_{\\Psi}(0,0)=\\int_{-\\infty}^{\\infty} f(t) d t"
},
{
"math_id": 5,
"text": "F_{\\varphi}\\left(\\omega, b_{n}\\right)=\\int_{-\\infty}^{\\infty} f(t) \\varphi_{\\omega, b_{n}} d t"
},
{
"math_id": 6,
"text": "F_{\\varphi}(0,0)=0"
},
{
"math_id": 7,
"text": "b_{n}=b_{n}(\\omega)=\\frac{\\pi}{\\omega} n"
},
{
"math_id": 8,
"text": "F_{\\Psi}"
},
{
"math_id": 9,
"text": "F_{\\varphi}"
},
{
"math_id": 10,
"text": "F(\\omega)=\\sum_{n=-\\infty}^{\\infty}(-1)^{n} F_{\\Psi}\\left(\\omega, b_{n}\\right)-\\sum_{n=-\\infty}^{\\infty}(-1)^{n} F_{\\varphi}\\left(\\omega, b_{n}\\right)"
},
{
"math_id": 11,
"text": "B=\\left\\{\\left(\\omega, b_{n}\\right) ; \\omega \\in(-\\infty, \\infty), b_{n}=n \\frac{\\pi}{\\omega}, n=0, \\pm 1, \\pm 2, \\ldots, \\pm \\mathrm{N}(\\omega)\\right\\}"
},
{
"math_id": 12,
"text": "N(0)=0"
},
{
"math_id": 13,
"text": "N(\\omega)"
},
{
"math_id": 14,
"text": "f(t) \\rightarrow\\left\\{F_{\\psi}\\left(\\omega, b_{n}\\right),\\left(\\omega, b_{n}\\right) \\in B\\right\\}"
},
{
"math_id": 15,
"text": "f(t) \\rightarrow\\left\\{F_{\\varphi}\\left(\\omega, b_{n}\\right),\\left(\\omega, b_{n}\\right) \\in B\\right\\}"
},
{
"math_id": 16,
"text": "T_{\\Psi}(\\omega, \\mathrm{b})=\\int_{-\\infty}^{\\infty} f(t) \\Psi_{\\omega, \\mathrm{b}} d t"
},
{
"math_id": 17,
"text": "T_{\\varphi}(\\omega, \\mathrm{b})=\\int_{-\\infty}^{\\infty} f(t) \\varphi_{\\omega, \\mathrm{b}} d t"
}
]
| https://en.wikipedia.org/wiki?curid=71200813 |
712013 | Feature structure | In phrase structure grammars, such as generalised phrase structure grammar, head-driven phrase structure grammar and lexical functional grammar, a feature structure is essentially a set of attribute–value pairs. For example, the attribute named "number" might have the value "singular". The value of an attribute may be either atomic, e.g. the symbol "singular", or complex (most commonly a feature structure, but also a list or a set).
A feature structure can be represented as a directed acyclic graph (DAG), with the nodes corresponding to the variable values and the paths to the variable names. Operations defined on feature structures, e.g. unification, are used extensively in phrase structure grammars. In most theories (e.g. HPSG), operations are strictly speaking defined over equations describing feature structures and not over feature structures themselves, though feature structures are usually used in informal exposition.
Often, feature structures are written like this:
formula_0
Here there are the two features "category" and "agreement". "Category" has the value "noun phrase" whereas the value of "agreement" is indicated by another feature structure with the features "number" and "person" being "singular" and "third".
This particular notation is called "attribute value matrix" (AVM).
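The AVM above can be mirrored directly as nested key-value structures in code. The following Python sketch represents feature structures as nested dictionaries and implements a deliberately simplified recursive unification (no reentrancy or variable sharing), purely for illustration rather than as the algorithm of any particular formalism.
<syntaxhighlight lang="python">
def unify(a, b):
    """Return the unification of two feature structures, or None on a feature clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                merged = unify(result[key], value)
                if merged is None:
                    return None          # clash on this feature: unification fails
                result[key] = merged
            else:
                result[key] = value
        return result
    return a if a == b else None         # atomic values must match exactly

np_structure = {"category": "noun phrase",
                "agreement": {"number": "singular", "person": "third"}}
constraint = {"agreement": {"number": "singular"}}

print(unify(np_structure, constraint))                           # merged structure
print(unify(np_structure, {"agreement": {"number": "plural"}}))  # None (clash)
</syntaxhighlight>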
The matrix has two columns, one for the feature names and the other for the values. In this sense a feature structure is a list of key-value pairs. The value might be atomic or another feature structure. This leads to another notation for feature structures: the use of trees. In fact, some systems (such as PATR-II) use S-expressions to represent feature structures. | [
{
"math_id": 0,
"text": "\\begin{bmatrix} \\mbox{category} & noun\\ phrase\\\\ \\mbox{agreement} & \\begin{bmatrix} \\mbox{number} & singular \\\\ \\mbox{person} & third \\end{bmatrix} \\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=712013 |
71208595 | Molybdenum in biology | Use of molybdenum by organisms
Molybdenum is an essential element in most organisms. It is most notably present in nitrogenase which is an essential part of nitrogen fixation.
Mo-containing enzymes.
Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals).
At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria. Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C.
In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon. In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth.
Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen:
formula_0
The biosynthesis of the FeMoco active site is highly complex.
Molybdate is transported in the body as MoO42−.
Human metabolism and deficiency.
Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxidase, and mitochondrial mitochondrial amidoxime reductase. People severely deficient in molybdenum have poorly functioning sulfite oxidase and are prone to toxic reactions to sulfites in foods. The human body contains about 0.07 mg of molybdenum per kilogram of body weight, with higher concentrations in the liver and kidneys and lower in the vertebrae. Molybdenum is also present within human tooth enamel and may help prevent its decay.
Acute toxicity has not been seen in humans, and the toxicity depends strongly on the chemical state. Studies on rats show a median lethal dose (LD50) as low as 180 mg/kg for some Mo compounds. Although human toxicity data is unavailable, animal studies have shown that chronic ingestion of more than 10 mg/day of molybdenum can cause diarrhea, growth retardation, infertility, low birth weight, and gout; it can also affect the lungs, kidneys, and liver. Sodium tungstate is a competitive inhibitor of molybdenum. Dietary tungsten reduces the concentration of molybdenum in tissues.
Low soil concentration of molybdenum in a geographical band from northern China to Iran results in a general dietary molybdenum deficiency, and is associated with increased rates of esophageal cancer. Compared to the United States, which has a greater supply of molybdenum in the soil, people living in those areas have about 16 times greater risk for esophageal squamous cell carcinoma.
Molybdenum deficiency has also been reported as a consequence of non-molybdenum supplemented total parenteral nutrition (complete intravenous feeding) for long periods of time. It results in high blood levels of sulfite and urate, in much the same way as molybdenum cofactor deficiency. Since pure molybdenum deficiency from this cause occurs primarily in adults, the neurological consequences are not as marked as in cases of congenital cofactor deficiency.
A congenital molybdenum cofactor deficiency disease, seen in infants, is an inability to synthesize molybdenum cofactor, the heterocyclic molecule discussed above that binds molybdenum at the active site in all known human enzymes that use molybdenum. The resulting deficiency results in high levels of sulfite and urate, and neurological damage.
Excretion.
Most molybdenum is excreted from the human body as molybdate in the urine. Furthermore, urinary excretion of molybdenum increases as dietary molybdenum intake increases. Small amounts of molybdenum are excreted from the body in the feces by way of the bile; small amounts also can be lost in sweat and in hair.
Excess and copper antagonism.
High levels of molybdenum can interfere with the body's uptake of copper, producing copper deficiency. Molybdenum prevents plasma proteins from binding to copper, and it also increases the amount of copper that is excreted in urine. Ruminants that consume high levels of molybdenum suffer from diarrhea, stunted growth, anemia, and achromotrichia (loss of fur pigment). These symptoms can be alleviated by copper supplements, either dietary or injection. The effective copper deficiency can be aggravated by excess sulfur.
Copper reduction or deficiency can also be deliberately induced for therapeutic purposes by the compound ammonium tetrathiomolybdate, in which the bright red anion tetrathiomolybdate is the copper-chelating agent. Tetrathiomolybdate was first used therapeutically in the treatment of copper toxicosis in animals. It was then introduced as a treatment in Wilson's disease, a hereditary copper metabolism disorder in humans; it acts both by competing with copper absorption in the bowel and by increasing excretion. It has also been found to have an inhibitory effect on angiogenesis, potentially by inhibiting the membrane translocation process that is dependent on copper ions. This is a promising avenue for investigation of treatments for cancer, age-related macular degeneration, and other diseases that involve a pathologic proliferation of blood vessels.
In some grazing livestock, most strongly in cattle, molybdenum excess in the soil of pasturage can produce scours (diarrhea) if the pH of the soil is neutral to alkaline; see teartness.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{N_2 + 8 \\ H^+ + 8 \\ e^- + 16 \\ ATP + 16 \\ H_2O \\longrightarrow 2 \\ NH_3 + H_2 + 16 \\ ADP + 16 \\ P_i} "
}
]
| https://en.wikipedia.org/wiki?curid=71208595 |
71215326 | Thom's first isotopy lemma | Theorem
In mathematics, especially in differential topology, Thom's first isotopy lemma states: given a smooth map formula_0 between smooth manifolds and formula_1 a closed Whitney stratified subset, if formula_2 is proper and formula_3 is a submersion for each stratum formula_4 of formula_5, then formula_2 is a locally trivial fibration. The lemma was originally introduced by René Thom who considered the case when formula_6. In that case, the lemma constructs an isotopy from the fiber formula_7 to formula_8; whence the name "isotopy lemma".
The local trivializations that the lemma provide preserve the strata. However, they are generally not smooth (not even formula_9). On the other hand, it is possible that local trivializations are semialgebraic if the input data is semialgebraic.
The lemma is also valid for a more general stratified space such as a stratified space in the sense of Mather but still with the Whitney conditions (or some other conditions). The lemma is also valid for the stratification that satisfies Bekka's condition (C), which is weaker than Whitney's condition (B). (The significance of this is that the consequences of the first isotopy lemma cannot imply Whitney’s condition (B).)
Thom's second isotopy lemma is a family version of the first isotopy lemma.
Proof.
The proof is based on the notion of a controlled vector field. Let formula_10 be a system of tubular neighborhoods formula_11 in formula_12 of strata formula_4 in formula_5 where formula_13 is the associated projection and formula_14 given by the square norm on each fiber of formula_15. (The construction of such a system relies on the Whitney conditions or something weaker.) By definition, a controlled vector field is a family of vector fields (smooth of some class) formula_16 on the strata formula_4 such that: for each stratum "A", there exists a neighborhood formula_17 of formula_4 in formula_11 such that for any formula_18,
formula_19
formula_20
on formula_21.
Assume the system formula_11 is compatible with the map formula_0 (such a system exists). Then there are two key results due to Thom:
The lemma now follows in a straightforward fashion. Since the statement is local, assume formula_25 and formula_26 the coordinate vector fields on formula_27. Then, by the lifting result, we find controlled vector fields formula_28 on formula_5 such that formula_29. Let formula_30 be the flows associated to them. Then define
formula_31
by
formula_32
It is a map over formula_27 and is a homeomorphism since formula_33 is the inverse. Since the flows formula_34 preserve the strata, formula_35 also preserves the strata. formula_36
Note.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f : M \\to N"
},
{
"math_id": 1,
"text": "S \\subset M"
},
{
"math_id": 2,
"text": "f|_S"
},
{
"math_id": 3,
"text": "f|_A"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "N = \\mathbb{R}"
},
{
"math_id": 7,
"text": "f^{-1}(a)"
},
{
"math_id": 8,
"text": "f^{-1}(b)"
},
{
"math_id": 9,
"text": "C^1"
},
{
"math_id": 10,
"text": "\\{ (T_A, \\pi_A, \\rho_A) \\mid A \\}"
},
{
"math_id": 11,
"text": "T_A"
},
{
"math_id": 12,
"text": "M"
},
{
"math_id": 13,
"text": "\\pi_A : T_A \\to A"
},
{
"math_id": 14,
"text": "\\rho_A : T_A \\to [0, \\infty)"
},
{
"math_id": 15,
"text": "\\pi_A"
},
{
"math_id": 16,
"text": "\\eta_A"
},
{
"math_id": 17,
"text": "T'_A"
},
{
"math_id": 18,
"text": "B > A"
},
{
"math_id": 19,
"text": "\\eta_B \\circ \\rho_A = 0,"
},
{
"math_id": 20,
"text": "(\\pi_A)_* \\eta_B = \\eta_A \\circ \\pi_A"
},
{
"math_id": 21,
"text": "T_A' \\cap B"
},
{
"math_id": 22,
"text": "\\zeta"
},
{
"math_id": 23,
"text": "\\eta"
},
{
"math_id": 24,
"text": "f_* (\\eta) = \\zeta \\circ f"
},
{
"math_id": 25,
"text": "N = \\mathbb{R}^n"
},
{
"math_id": 26,
"text": "\\partial_i"
},
{
"math_id": 27,
"text": "\\mathbb{R}^n"
},
{
"math_id": 28,
"text": "\\widetilde{\\partial_i}"
},
{
"math_id": 29,
"text": "f_*(\\widetilde{\\partial_i}) = \\partial_i \\circ f"
},
{
"math_id": 30,
"text": "\\varphi_i : \\mathbb{R} \\times S \\to S"
},
{
"math_id": 31,
"text": "H : f|_S^{-1}(0) \\times \\mathbb{R}^n \\to S"
},
{
"math_id": 32,
"text": "H(y, t) = \\varphi_n(t_n, \\phi_{n-1}(t_{n-1}, \\cdots, \\varphi_1(t_1, y) \\cdots))."
},
{
"math_id": 33,
"text": "G(x) = (\\varphi_1(-t_1, \\cdots, \\varphi_n(-t_n, x) \\cdots), t), t = f(x)"
},
{
"math_id": 34,
"text": "\\varphi_i"
},
{
"math_id": 35,
"text": "H"
},
{
"math_id": 36,
"text": "\\square"
}
]
| https://en.wikipedia.org/wiki?curid=71215326 |
712166 | Wagstaff prime | In number theory, a Wagstaff prime is a prime number of the form
formula_0
where "p" is an odd prime. Wagstaff primes are named after the mathematician Samuel S. Wagstaff Jr.; the Prime Pages credit François Morain for naming them in a lecture at the Eurocrypt 1990 conference. Wagstaff primes appear in the New Mersenne conjecture and have applications in cryptography.
Examples.
The first three Wagstaff primes are 3, 11, and 43 because
formula_1
Known Wagstaff primes.
The first few Wagstaff primes are:
3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, ... (sequence in the OEIS)
Exponents which produce Wagstaff primes or probable primes are:
3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, ... (sequence in the OEIS)
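These exponents can be reproduced with a few lines of Python using SymPy; for large exponents SymPy's isprime relies on strong probabilistic tests, so the largest entries are strictly probable primes.
<syntaxhighlight lang="python">
from sympy import isprime, primerange

def wagstaff_exponents(limit):
    """Yield the odd primes p < limit for which (2**p + 1)/3 is (probably) prime."""
    for p in primerange(3, limit):
        if isprime((2**p + 1) // 3):
            yield p

print(list(wagstaff_exponents(200)))
# [3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199]
</syntaxhighlight>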
Generalizations.
It is natural to consider more generally numbers of the form
formula_2
where the base formula_3. Since for formula_4 odd we have
formula_5
these numbers are called "Wagstaff numbers base formula_6", and sometimes considered a case of the repunit numbers with negative base formula_7.
For some specific values of formula_6, all formula_8 (with a possible exception for very small formula_4) are composite because of an "algebraic" factorization. Specifically, if formula_6 has the form of a perfect power with odd exponent (like 8, 27, 32, 64, 125, 128, 216, 243, 343, 512, 729, 1000, etc. (sequence in the OEIS)), then the fact that formula_9, with formula_10 odd, is divisible by formula_11 shows that formula_12 is divisible by formula_13 in these special cases. Another case is formula_14, with "k" a positive integer (like 4, 64, 324, 1024, 2500, 5184, etc. (sequence in the OEIS)), where we have the aurifeuillean factorization.
However, when formula_6 does not admit an algebraic factorization, it is conjectured that infinitely many values of formula_4 make formula_8 prime; note that all such formula_4 are odd primes.
For formula_15, the primes themselves have the following appearance: 9091, 909091, 909090909090909091, 909090909090909090909090909091, … (sequence in the OEIS), and these "n"s are: 5, 7, 19, 31, 53, 67, 293, 641, 2137, 3011, 268207, ... (sequence in the OEIS).
See Repunit#Repunit primes for the list of the generalized Wagstaff primes base formula_6. (Generalized Wagstaff primes base formula_6 are generalized repunit primes base formula_7 with odd formula_4)
The least primes "p" such that formula_16 is prime are (starts with "n" = 2, 0 if no such "p" exists)
3, 3, 3, 5, 3, 3, 0, 3, 5, 5, 5, 3, 7, 3, 3, 7, 3, 17, 5, 3, 3, 11, 7, 3, 11, 0, 3, 7, 139, 109, 0, 5, 3, 11, 31, 5, 5, 3, 53, 17, 3, 5, 7, 103, 7, 5, 5, 7, 1153, 3, 7, 21943, 7, 3, 37, 53, 3, 17, 3, 7, 11, 3, 0, 19, 7, 3, 757, 11, 3, 5, 3, ... (sequence in the OEIS)
The least bases "b" such that formula_17 is prime are (starts with "n" = 2)
2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ... (sequence in the OEIS)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{{2^p+1}\\over 3}"
},
{
"math_id": 1,
"text": "\\begin{align}\n3 & = {2^3+1 \\over 3}, \\\\[5pt]\n11 & = {2^5+1 \\over 3}, \\\\[5pt]\n43 & = {2^7+1 \\over 3}.\n\\end{align}"
},
{
"math_id": 2,
"text": "Q(b,n)=\\frac{b^n+1}{b+1}"
},
{
"math_id": 3,
"text": "b \\ge 2"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\frac{b^n+1}{b+1}=\\frac{(-b)^n-1}{(-b)-1}=R_n(-b)"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "-b"
},
{
"math_id": 8,
"text": "Q(b,n)"
},
{
"math_id": 9,
"text": "x^m+1"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "x+1"
},
{
"math_id": 12,
"text": "Q(a^m, n)"
},
{
"math_id": 13,
"text": "a^n+1"
},
{
"math_id": 14,
"text": "b=4k^4"
},
{
"math_id": 15,
"text": "b=10"
},
{
"math_id": 16,
"text": "Q(n, p)"
},
{
"math_id": 17,
"text": "Q(b, prime(n))"
}
]
| https://en.wikipedia.org/wiki?curid=712166 |
71219978 | McVittie metric | Solution of Einstein field equations
In the general theory of relativity, the McVittie metric is the exact solution of Einstein's field equations describing a black hole or massive object immersed in an expanding cosmological spacetime. The solution was first fully obtained by George McVittie in the 1930s, while investigating the effect of the then recently discovered expansion of the Universe on a mass particle.
The simplest case of a spherically symmetric solution to the field equations of General Relativity with a cosmological constant term, the Schwarzschild-De Sitter spacetime, arises as a specific case of the McVittie metric, with positive 3-space scalar curvature formula_0 and constant Hubble parameter formula_1.
Metric.
In isotropic coordinates, the McVittie metric is given by
formula_2
where formula_3 is the usual line element for the euclidean sphere, M is identified as the mass of the massive object, formula_4 is the usual scale factor found in the FLRW metric, which accounts for the expansion of the space-time; and formula_5 is a curvature parameter related to the scalar curvature formula_6 of the 3-space as
formula_7
which is related to the curvature of the 3-space exactly as in the FLRW spacetime. It is generally assumed that formula_8, otherwise the Universe is undergoing a contraction.
One can define the time-dependent mass parameter formula_9, which accounts for the mass density inside the expanding, comoving radius formula_10 at time formula_11, to write the metric in a more succinct way
formula_12
Causal structure and singularities.
From here on, it is useful to define formula_13. For McVittie metrics with the properties of general expanding FLRW solutions, formula_14 and formula_15, the spacetime contains at least two singularities. One is a cosmological, null-like naked singularity at the smallest positive root formula_16 of the equation formula_17. In the case formula_18 this surface is instead interpreted as the black hole event horizon: there is an event horizon at formula_19 but no singularity, the latter being extinguished by the existence of an asymptotic Schwarzschild-De Sitter phase of the spacetime.
The second singularity lies at the causal past of all events in the space-time, and is a space-time singularity at formula_20, which, due to its causal past nature, is interpreted as the usual Big-Bang like singularity.
There are also at least two event horizons: one at the largest solution of formula_17, which is space-like and protects the Big-Bang singularity at finite past time; and one at formula_21, the smallest root of the equation, also at finite time. The second event horizon becomes a black hole horizon in the formula_22 case.
Schwarzschild and FLRW limits.
One can obtain the Schwarzschild and Robertson-Walker metrics from the McVittie metric in the exact limits of formula_23 and formula_24, respectively.
In trying to describe the behavior of a mass particle in an expanding Universe, the original paper of McVittie describes a black hole spacetime whose Schwarzschild radius formula_25 decreases as the surrounding cosmological spacetime expands. However, in the limit of a small mass parameter formula_26, one can also interpret the solution as a perturbed FLRW spacetime, with formula_27 the Newtonian perturbation. Below we describe how to derive these analogies with the Schwarzschild and FLRW spacetimes from the McVittie metric.
Schwarzschild.
In the case of a flat 3-space, with scalar curvature constant formula_28, the metric (1) becomes
formula_29
which, for each instant of cosmic time formula_30, is the metric of the region outside of a Schwarzschild black hole in isotropic coordinates, with Schwarzschild radius formula_31.
To make this equivalence more explicit, one can make the change of radial variables
formula_32
to obtain the metric in Schwarzschild coordinates:
formula_33
The interesting feature of this form of the metric is that one can clearly see that the Schwarzschild radius, which dictates at which distance from the center of the massive body the event horizon is formed, "shrinks" as the Universe expands. For a comoving observer, who accompanies the Hubble flow, this effect is not perceptible, as their radial coordinate is given by formula_34; for the comoving observer formula_35 is therefore constant, and the event horizon remains static.
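The change of radial variable can be verified symbolically. The following SymPy sketch checks that, with μ = M/(2ar) and the substitution r′ = r(1 + μ)² introduced above, the isotropic lapse factor ((1 − μ)/(1 + μ))² equals 1 − 2M/(a r′); geometrized units with G = c = 1 are assumed.
<syntaxhighlight lang="python">
import sympy as sp

M, a, r = sp.symbols('M a r', positive=True)
mu = M / (2 * a * r)                 # mass parameter mu at fixed cosmic time
r_new = r * (1 + mu) ** 2            # Schwarzschild-type radial coordinate r'

lapse_isotropic = ((1 - mu) / (1 + mu)) ** 2
lapse_schwarzschild = 1 - 2 * M / (a * r_new)
print(sp.simplify(lapse_isotropic - lapse_schwarzschild))   # 0: the two forms agree
</syntaxhighlight>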
FLRW.
In the case of a vanishing mass parameter formula_36, the McVittie metric becomes exactly the FLRW metric in spherical coordinates
formula_37
which leads to the exact Friedmann equations for the evolution of the scale factor formula_38.
If one takes the limit of the mass parameter formula_39, the metric (1) becomes
formula_40
which can be mapped to a perturbed FLRW spacetime in Newtonian gauge, with perturbation potential formula_41; that is, one can understand the small mass of the central object as the perturbation in the FLRW metric.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa = +1 "
},
{
"math_id": 1,
"text": "H(t) = H_0 "
},
{
"math_id": 2,
"text": "\n\nds^2 =-\\left(\\frac{1-\\frac{GM}{2c^2a(t)r}K^{1/2}(r)}{1+\\frac{GM}{2c^2a(t)r}K^{1/2}(r)} \\right)^2 c^2dt^2 + \\frac{\\left(1 + \\frac{GM}{2c^2a(t)r}K^{1/2}(r) \\right)^4}{K^2(r)} a^2(t)(dr^2 + r^2d\\Omega^2),\n\n"
},
{
"math_id": 3,
"text": " d\\Omega^2 "
},
{
"math_id": 4,
"text": "a(t)"
},
{
"math_id": 5,
"text": " K(r) "
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "\n\nK(r) = 1 + \\kappa r^2 = 1 + \\frac{r^2}{4R^2}, \\qquad \\kappa \\in \\{+1, 0, -1\\},\n\n"
},
{
"math_id": 8,
"text": " \\dot{a}(t)> 0 "
},
{
"math_id": 9,
"text": " \\mu(t) \\equiv GM/2c^2a(t)r "
},
{
"math_id": 10,
"text": " a(t)r "
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\n\nds^2 = -\\left(\\frac{1-\\mu(t)K^{1/2}(r)}{1+\\mu(t)K^{1/2}(r)} \\right)^2 c^2dt^2 + \\frac{\\left(1 +\\mu(t)K^{1/2}(r) \\right)^4}{K^2(r)} a^2(t)(dr^2 + r^2d\\Omega^2),\n\n"
},
{
"math_id": 13,
"text": " m = GM/c^2 "
},
{
"math_id": 14,
"text": "H(t) = \\dot{a}(t)/a(t) > 0"
},
{
"math_id": 15,
"text": "\\lim_{t\\rightarrow \\infty} H(t) = H_0 = 0 "
},
{
"math_id": 16,
"text": " r_- "
},
{
"math_id": 17,
"text": "1-2m/r - H_0^2r^2 =0 "
},
{
"math_id": 18,
"text": "H_0 > 0 "
},
{
"math_id": 19,
"text": " r = r_- "
},
{
"math_id": 20,
"text": " r = 2m, \\mu(t) = 1 "
},
{
"math_id": 21,
"text": "r = r_-"
},
{
"math_id": 22,
"text": " H_0 > 0 "
},
{
"math_id": 23,
"text": " k = 0, \\dot{a}(t) = 0 "
},
{
"math_id": 24,
"text": "\\mu(t) = 0"
},
{
"math_id": 25,
"text": " r_s "
},
{
"math_id": 26,
"text": " \\mu(t) "
},
{
"math_id": 27,
"text": " \\mu "
},
{
"math_id": 28,
"text": " k=0 "
},
{
"math_id": 29,
"text": "\nds^2 = -\\left(\\frac{1-\\frac{M}{2a(t)r}}{1+\\frac{M}{2a(t)r}} \\right)^2 c^2dt^2 + \\left(1 + \\frac{M}{2a(t)r}\\right)^4 a^2(t)(dr^2 + r^2d\\Omega^2),\n"
},
{
"math_id": 30,
"text": "t_0"
},
{
"math_id": 31,
"text": "r_S = \\dfrac{2GM}{a(t_0)c^2}"
},
{
"math_id": 32,
"text": " r' = r\\left(1+ \\frac{M}{2a(t)r}\\right)^2,"
},
{
"math_id": 33,
"text": "ds^2 = -\\left(1-\\frac{2M}{a(t)r'}\\right)c^2dt^2 + \\left(1-\\frac{2M}{a(t)r'}\\right)dr'^2 + r'^2d\\Omega^2. "
},
{
"math_id": 34,
"text": " r'_{(\\text{comov})} = a(t)r' "
},
{
"math_id": 35,
"text": " r_S = 2M/r'_{(\\text{comov})} "
},
{
"math_id": 36,
"text": " \\mu(t) = 0 "
},
{
"math_id": 37,
"text": "ds^2 = -c^2dt^2 + \\frac{a^2(t)}{\\left( 1 - \\frac{r^2}{4R^2} \\right)^2}(dr^2 + r^2d\\Omega^2), "
},
{
"math_id": 38,
"text": " a(t)"
},
{
"math_id": 39,
"text": " \\mu(t) = M/2a(t)r \\ll 1 "
},
{
"math_id": 40,
"text": "\nds^2 = -\\left(1-4\\mu(t)K(r)\\right)^2 c^2dt^2 + \\frac{\\left(1+4\\mu(t)K(r)\\right)}{K^2(r)} a^2(t)(dr^2 + r^2d\\Omega^2),\n"
},
{
"math_id": 41,
"text": "\\Phi = 2\\mu(t) "
}
]
| https://en.wikipedia.org/wiki?curid=71219978 |
71220531 | Maximum inner-product search | Search problem
Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning.
Formally, for a database of vectors formula_0 defined over a set of labels formula_1 in an inner product space with an inner product formula_2 defined on it, MIPS search can be defined as the problem of determining
formula_3
for a given query formula_4.
Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS search.
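The obvious linear-time baseline is a single pass over the database computing every inner product; the NumPy sketch below illustrates it on random data (the dataset size and dimension are arbitrary placeholders).
<syntaxhighlight lang="python">
import numpy as np

def mips_linear(X, q):
    """Exact maximum inner-product search by brute force: O(|S| * d) per query."""
    scores = X @ q                  # inner products <x_i, q> for every database vector
    return int(np.argmax(scores))   # index (label) of the maximizer

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))   # database of 10,000 vectors in R^64
q = rng.normal(size=64)             # query vector
print(mips_linear(X, q))
</syntaxhighlight>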
If all vectors in the set have equal norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem. Like other forms of NNS, MIPS algorithms may be approximate or exact.
MIPS search is used as part of DeepMind's RETRO algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_i"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": " \\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 3,
"text": "\\underset{i \\in S}{\\operatorname{arg\\,max}}\\ \\langle x_i, q \\rangle"
},
{
"math_id": 4,
"text": "q"
}
]
| https://en.wikipedia.org/wiki?curid=71220531 |
71224482 | Tony O'Farrell | Irish mathematician
Tony O'Farrell (born Anthony G. O'Farrell in 1947 in Dublin) is an Irish mathematician who is Professor Emeritus at Maynooth University. He has been in the Mathematics and Statistics Department there since 1975.
Early life.
He was born in Dublin and grew up both there and in Tipperary.
Education and career.
He attended University College Dublin (UCD) earning a BSc in mathematical science (1967). After a year working for the
Irish Meteorological Service, he returned to UCD for his MSc (1969). He then moved to the USA, to Brown University from which he earned a PhD in 1973, for a thesis on "Capacities in Uniform Approximation" done under Brian Cole. After two years at the University of California, Los Angeles (UCLA), during which he published extensively, in 1975 he returned to Ireland as Professor of Mathematics at St. Patrick's College, Maynooth (later Maynooth University), outside Dublin. This appointment was notable for two reasons: he was only 28, and, while Maynooth had lay lecturers and senior lecturers, he was the first layman appointed to a chair at this traditionally pontifical institution.
O'Farrell has long been active in the Irish Mathematical Society, serving as president in 1983 and 1984, and as editor of the "Bulletin of the IMS" since 2011.
In 1981 he was elected to the Royal Irish Academy.
From 1992-1995, he also served as head of the Computer Science Department at Maynooth.
In 2002, O'Farrell established Logic Press which publishes mathematics books at various levels in both English and Irish. These range from the Irish Mathematical Olympiad Manual to undergraduate and postgraduate level texts and research monographs.
In 2012, he formally retired from Maynooth, though he remains very active in many arenas.
Hamilton Walk.
In 1990 O’Farrell established the annual Hamilton Walk, which commemorates the 16 October 1843 discovery of quaternions by William Rowan Hamilton. It starts at Dunsink Observatory in County Dublin, just west of the city, and follows the Royal Canal east to Broom Bridge. Over the decades, this has grown in popularity and stature, attracting Nobel laureates and Fields Medallists. O'Farrell's younger colleague Fiacre Ó Cairbre took over the organisation of the walk at the end of the 1990s, but O'Farrell always gives a speech at Broom Bridge. In 2018, O’Farrell and Ó Cairbre received the 2018 Maths Week Ireland Award, for "outstanding work in raising public awareness of mathematics" resulting from the founding and nurturing the Hamilton Walk.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C^\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=71224482 |
712245 | Orchestrated objective reduction | Theory of a quantum origin of consciousness
Orchestrated objective reduction (Orch OR) is a highly controversial theory postulating that consciousness originates at the quantum level inside neurons (rather than being a product of neural connections). The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. It is proposed that the theory may answer the hard problem of consciousness and provide a mechanism for free will. The hypothesis was first put forward in the early 1990s by Nobel laureate for physics Roger Penrose, and anaesthesiologist Stuart Hameroff. The hypothesis combines approaches from molecular biology, neuroscience, pharmacology, philosophy, quantum information theory, and quantum gravity.
While more generally accepted theories assert that consciousness emerges as the complexity of the computations performed by cerebral neurons increases, Orch OR posits that consciousness is based on non-computable quantum processing performed by qubits formed collectively on cellular microtubules, a process significantly amplified in the neurons. The qubits are based on oscillating dipoles forming superposed resonance rings in helical pathways throughout lattices of microtubules. The oscillations are either electric, due to charge separation from London forces, or magnetic, due to electron spin—and possibly also due to nuclear spins (that can remain isolated for longer periods) that occur in gigahertz, megahertz and kilohertz frequency ranges. Orchestration refers to the hypothetical process by which connective proteins, such as microtubule-associated proteins (MAPs), influence or orchestrate qubit state reduction by modifying the spacetime-separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics, which postulates the existence of an objective threshold governing the collapse of quantum states, related to the difference of the spacetime curvature of these states in the universe's fine-scale structure.
Orchestrated objective reduction has been criticized from its inception by mathematicians, philosophers, and scientists. The criticism concentrated on three issues: Penrose's interpretation of Gödel's theorem; Penrose's abductive reasoning linking non-computability to quantum events; and the brain's unsuitability to host the quantum phenomena required by the theory, since it is considered too "warm, wet and noisy" to avoid decoherence.
Background.
In 1931, mathematician and logician Kurt Gödel proved that any effectively generated theory capable of proving basic arithmetic cannot be both consistent and complete. In other words, a mathematically sound theory lacks the means to prove itself. In his first book concerning consciousness, "The Emperor's New Mind" (1989), Roger Penrose argued that equivalent statements to "Gödel-type propositions" had recently been put forward.
Partially in response to Gödel's argument, the Penrose–Lucas argument leaves the question of the physical basis of non-computable behaviour open. Most physical laws are computable, and thus algorithmic. However, Penrose determined that wave function collapse was a prime candidate for a non-computable process. In quantum mechanics, particles are treated differently from the objects of classical mechanics. Particles are described by wave functions that evolve according to the Schrödinger equation. Non-stationary wave functions are linear combinations of the eigenstates of the system, a phenomenon described by the superposition principle. When a quantum system interacts with a classical system—i.e. when an observable is measured—the system appears to collapse to a random eigenstate of that observable from a classical vantage point.
If collapse is truly random, then no process or algorithm can deterministically predict its outcome. This provided Penrose with a candidate for the physical basis of the non-computable process that he hypothesized to exist in the brain. However, he disliked the random nature of environmentally induced collapse, as randomness was not a promising basis for mathematical understanding. Penrose proposed that isolated systems may still undergo a new form of wave function collapse, which he called objective reduction (OR).
Penrose sought to reconcile general relativity and quantum theory using his own ideas about the possible structure of spacetime. He suggested that at the Planck scale curved spacetime is not continuous, but discrete. He further postulated that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale of formula_0 and collapse to just one of the possible states. The rough threshold for OR is given by Penrose's indeterminacy principle:
formula_1
where:
* formula_2 is the time until OR occurs,
* formula_3 is the gravitational self-energy or the degree of spacetime separation given by the superpositioned mass, and
* formula_4 is the reduced Planck constant.
Thus, the greater the mass–energy of the object, the faster it will undergo OR and vice versa. Mesoscopic objects could collapse on a timescale relevant to neural processing.
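As a rough numerical illustration of the indeterminacy relation τ ≈ ħ/E_G quoted above (the gravitational self-energies below are arbitrary placeholder values chosen only to show the scaling, not figures taken from the theory):

```python
hbar = 1.054571817e-34  # reduced Planck constant in J s

def collapse_time(E_G):
    """Penrose's estimate of the objective-reduction time, tau ~ hbar / E_G (E_G in joules)."""
    return hbar / E_G

# The larger the gravitational self-energy of the superposition,
# the faster the estimated objective reduction.
for E_G in (1e-34, 1e-32, 1e-30):
    print(f"E_G = {E_G:.0e} J  ->  tau ~ {collapse_time(E_G):.2e} s")
```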
An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly (as are choices following wave function collapse) nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry. Penrose claimed that such information is Platonic, representing pure mathematical truths, which relates to Penrose's ideas concerning the three worlds: the physical, the mental, and the Platonic mathematical world. In "Shadows of the Mind" (1994), Penrose briefly indicates that this Platonic world could also include aesthetic and ethical values, but he does not commit to this further hypothesis.
The Penrose–Lucas argument was criticized by mathematicians, computer scientists, and philosophers, and the consensus among experts in these fields is that the argument fails, with different authors attacking different aspects of the argument. Minsky argued that because humans can believe false ideas to be true, human mathematical understanding need not be consistent and consciousness may easily have a deterministic basis. Feferman argued that mathematicians do not progress by mechanistic search through proofs, but by trial-and-error reasoning, insight and inspiration, and that machines do not share this approach with humans.
Orch OR.
Penrose outlined a predecessor to Orch OR in "The Emperor's New Mind", coming to the problem from a mathematical viewpoint and in particular Gödel's theorem, but lacked a detailed proposal for how quantum processes could be implemented in the brain. Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose's book and suggested to him that microtubules within neurons were suitable candidate sites for quantum processing, and ultimately for consciousness. Throughout the 1990s, the two collaborated on the Orch OR theory, which Penrose published in "Shadows of the Mind" (1994).
Hameroff's contribution to the theory derived from his study of the neural cytoskeleton, and particularly on microtubules. As neuroscience has progressed, the role of the cytoskeleton and microtubules has assumed greater importance. In addition to providing structural support, microtubule functions include axoplasmic transport and control of the cell's movement, growth and shape.
Orch OR combines the Penrose–Lucas argument with Hameroff's hypothesis on quantum processing in microtubules. It proposes that when condensates in the brain undergo an objective wave function reduction, their collapse connects noncomputational decision-making to experiences embedded in spacetime's fundamental geometry. The theory further proposes that the microtubules both influence and are influenced by the conventional activity at the synapses between neurons.
Microtubule computation.
Hameroff proposed that microtubules were suitable candidates for quantum processing. Microtubules are made up of tubulin protein subunits. The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalized π electrons. Tubulin has other, smaller non-polar regions, for example 8 tryptophans per tubulin, which contain π electron-rich indole rings distributed throughout tubulin with separations of roughly 2 nm. Hameroff claims that this is close enough for the tubulin π electrons to become quantum entangled. During entanglement, particle states become inseparably correlated.
Hameroff originally suggested in the fringe "Journal of Cosmology" that the tubulin-subunit electrons would form a Bose–Einstein condensate. He then proposed a Frohlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was rejected by Reimers's group. Hameroff then responded to Reimers. "Reimers et al have most definitely NOT shown that strong or coherent Frohlich condensation in microtubules is unfeasible. The model microtubule on which they base their Hamiltonian is not a microtubule structure, but a simple linear chain of oscillators." Hameroff reasoned that such condensate behavior would magnify nanoscopic quantum effects to have large scale influences in the brain.
Hameroff then proposed that condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses. Hameroff proposed that the gap between the cells is sufficiently small that quantum objects can tunnel across it, allowing them to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the much less controversial theory that gap junctions are related to the gamma oscillation.
Related experimental results.
Superradiance.
In a study Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics hasten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility (in a later study, superradiance was confirmed to occur in networks of tryptophans, which are found in microtubules). “We’re not at the level of interpreting this physiologically, saying 'Yeah, this is where consciousness begins,' but it may," Jack Tuszyński told "New Scientist".
In 2024, a study called Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures, whose result was published in The Journal of Physical Chemistry, confirmed the quantum effect called superradiance in large networks of tryptophans, which are found in microtubules. Large networks of tryptophans are a warm and noisy environment, in which quantum effects typically aren't expected to take place. The results of the study were first theoretically predicted and then experimentally confirmed by the researchers. Majed Chergui, who led the experimental team said "It's a beautiful result. It took very precise and careful application of standard protein spectroscopy methods, but guided by the theoretical predictions of our collaborators, we were able to confirm a stunning signature of superradiance in a micron-scale biological system". Marlan Scully, a physicist well-known for his work in the field of theoretical quantum optics, said "We will certainly be examining closely the implications for quantum effects in living systems for years to come". The study states that "by analyzing the coupling with the electromagnetic field of mega-networks of tryptophans present in these biologically relevant architectures, we find the emergence of collective quantum optical effects, namely, superradiant and subradiant eigenmodes (...) our work demonstrates that collective and cooperative UV excitations in mega-networks of tryptophans support robust quantum states in protein aggregates, with observed consequences even under thermal equilibrium conditions".
Microtubule quantum vibration theory of anesthetic action.
In an experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules farther than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid filled extracellular space.
At high concentrations (~5 MAC) the anesthetic gas halothane causes reversible depolymerization of microtubules. This cannot be the mechanism of anesthetic action, however, because human anesthesia is performed at 1 MAC. (It is important to note that neither Penrose nor Hameroff ever claimed that depolymerization is the mechanism of action for Orch OR.)
At ~1 MAC halothane, reported minor changes in tubulin protein expression (~1.3-fold) in primary cortical neurons after exposure to halothane and isoflurane are not evidence that tubulin directly interacts with general anesthetics, but rather shows that the proteins controlling tubulin production are possible anesthetic targets.
Further proteomic study reports 0.5 mM [14C]halothane binding to tubulin monomers alongside three dozens of other proteins.
In addition, modulation of microtubule stability has been reported during anthracene general anesthesia of tadpoles. The study called Direct Modulation of Microtubule Stability Contributes to Anthracene General Anesthesia claims to provide "strong evidence that destabilization of neuronal microtubules provides a path to achieving general anesthesia".
What might anesthetics do to microtubules to cause loss of consciousness? A highly disputed theory put forth in the mid-1990s by Stuart Hameroff and Sir Roger Penrose posits that consciousness is based on quantum vibrations in tubulin/microtubules inside brain neurons. Computer modeling of tubulin's atomic structure found that anesthetic gas molecules bind adjacent to amino acid aromatic rings of non-polar π-electrons and that collective quantum dipole oscillations among all π-electron resonance rings in each tubulin showed a spectrum with a common mode peak at 613 THz. Simulated presence of 8 different anesthetic gases abolished the 613 THz peak, whereas the presence of 2 different nonanesthetic gases did not affect the 613 THz peak, from which it was speculated that this 613 THz peak in microtubules could be related to consciousness and anesthetic action.
Another study that Stuart Hameroff was a part of claims to show "anesthetic molecules can impair π-resonance energy transfer and exciton hopping in 'quantum channels' of tryptophan rings in tubulin, and thus account for selective action of anesthetics on consciousness and memory".
Criticism.
Orch OR has been criticized both by physicists and neuroscientists who consider it to be a poor model of brain physiology. Orch OR has also been criticized for lacking explanatory power; the philosopher Patricia Churchland wrote, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."
David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.
Decoherence in living organisms.
In 2000 Max Tegmark claimed that any quantum coherent system in the brain would undergo effective wave function collapse due to environmental interaction long before it could influence neural processes (the "warm, wet and noisy" argument, as it later came to be known). He determined the decoherence timescale of microtubule entanglement at brain temperatures to be on the order of femtoseconds, far too brief for neural processing. Christof Koch and Klaus Hepp also agreed that quantum coherence does not play, or does not need to play any major role in neurophysiology. Koch and Hepp concluded that "The empirical demonstration of slowly decoherent and controllable quantum bits in neurons connected by electrical or chemical synapses, or the discovery of an efficient quantum algorithm for computations performed by the brain, would do much to bring these speculations from the 'far-out' to the mere 'very unlikely'."
In response to Tegmark's claims, Hagan, Tuszynski and Hameroff claimed that Tegmark did not address the Orch OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, although still far below 25 ms. Hameroff's group also suggested that the Debye layer of counterions could screen thermal fluctuations, and that the surrounding actin gel might enhance the ordering of water, further screening noise. They also suggested that incoherent metabolic energy could further order water, and finally that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of resisting quantum decoherence.
In 2009, Reimers "et al." and McKemmish "et al." published critical assessments. Earlier versions of the theory had required tubulin electrons to form either Bose–Einstein or Frohlich condensates, and the Reimers group noted the lack of empirical evidence that such could occur. Additionally, they calculated that microtubules could only support weak 8 MHz coherence. McKemmish "et al." argued that aromatic molecules cannot switch states because they are delocalised, and that changes in tubulin protein conformation driven by GTP conversion would result in a prohibitive energy requirement.
In 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.
Neuroscience.
Hameroff frequently writes: "A typical brain neuron has roughly 10^7 tubulins (Yu and Baas, 1994)", yet this is Hameroff's own invention, which should not be attributed to Yu and Baas. Hameroff apparently misunderstood that Yu and Baas actually "reconstructed the microtubule (MT) arrays of a 56 μm axon from a cell that had undergone axon differentiation" and this reconstructed axon "contained 1430 MTs ... and the total MT length was 5750 μm." A direct calculation shows that 10^7 tubulins (to be precise 9.3 × 10^6 tubulins) correspond to this MT length of 5750 μm inside the 56 μm axon.
Hameroff's 1998 hypothesis required that cortical dendrites contain primarily 'A' lattice microtubules, but in 1994 Kikkawa "et al." showed that all "in vivo" microtubules have a 'B' lattice and a seam.
Orch OR also required gap junctions between neurons and glial cells, yet Binmöller "et al." proved in 1992 that these do not exist in the adult brain. In vitro research with primary neuronal cultures shows evidence for electrotonic (gap junction) coupling between "immature" neurons and astrocytes obtained from rat embryos extracted prematurely through Cesarean section; however, the Orch OR claim is that "mature" neurons are electrotonically coupled to astrocytes in the adult brain. Therefore, Orch OR contradicts the well-documented "electrotonic decoupling" of neurons from astrocytes in the process of neuronal maturation, which is stated by Fróes "et al." as follows: "junctional communication may provide metabolic and electrotonic interconnections between neuronal and astrocytic networks at early stages of neural development and such interactions are weakened as differentiation progresses."
Other biology-based criticisms have been offered, including a lack of explanation for the probabilistic release of neurotransmitter from presynaptic axon terminals and an error in the calculated number of the tubulin dimers per cortical neuron.
In 2014, Penrose and Hameroff published responses to some criticisms and revisions to many of the theory's peripheral assumptions, while retaining the core hypothesis.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^{-35} \\text{m}"
},
{
"math_id": 1,
"text": "\\tau \\approx \\hbar/E_G"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "E_G"
},
{
"math_id": 4,
"text": "\\hbar"
}
]
| https://en.wikipedia.org/wiki?curid=712245 |
71231546 | Kaniadakis Weibull distribution | Continuous probability distribution
The Kaniadakis Weibull distribution (or "κ"-Weibull distribution) is a probability distribution arising as a generalization of the Weibull distribution. It is one example of a Kaniadakis "κ"-distribution. The κ-Weibull distribution has been adopted successfully for describing a wide variety of complex systems in seismology, economics, and epidemiology, among many others.
Definitions.
Probability density function.
The Kaniadakis "κ"-Weibull distribution exhibits power-law right tails, and it has the following probability density function:
formula_0
valid for formula_1, where formula_2 is the entropic index associated with the Kaniadakis entropy, formula_3 is the scale parameter, and formula_4 is the shape parameter or Weibull modulus.
The Weibull distribution is recovered as formula_5
Cumulative distribution function.
The cumulative distribution function of the "κ"-Weibull distribution is given by formula_6, valid for formula_1. The cumulative Weibull distribution is recovered in the classical limit formula_7.
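A minimal numerical sketch of the density and distribution functions is given below. It assumes the standard form of the Kaniadakis exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which is not restated in this article, and uses arbitrary illustrative parameter values:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def kweibull_pdf(x, alpha, beta, kappa):
    x = np.asarray(x, dtype=float)
    return (alpha * beta * x**(alpha - 1.0)
            / np.sqrt(1.0 + kappa**2 * beta**2 * x**(2.0 * alpha))
            * exp_kappa(-beta * x**alpha, kappa))

def kweibull_cdf(x, alpha, beta, kappa):
    x = np.asarray(x, dtype=float)
    return 1.0 - exp_kappa(-beta * x**alpha, kappa)

# Sanity check: the density integrates to approximately one.
xs = np.linspace(1e-9, 50.0, 200_000)
print(np.trapz(kweibull_pdf(xs, alpha=1.2, beta=1.0, kappa=0.3), xs))
```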
Survival distribution and hazard functions.
The survival distribution function of "κ"-Weibull distribution is given by
formula_8
valid for formula_1. The survival Weibull distribution is recovered in the classical limit formula_7.
The hazard function of the "κ"-Weibull distribution is obtained through the solution of the "κ"-rate equation: formula_9 with formula_10, where formula_11 is the hazard function:
formula_12
The cumulative "κ"-Weibull distribution is related to the "κ"-hazard function by the following expression:
formula_13
where
formula_14
formula_15
is the cumulative "κ"-hazard function. The cumulative hazard function of the Weibull distribution is recovered in the classical limit formula_7: formula_16 .
Properties.
Moments, median and mode.
The "κ"-Weibull distribution has moments of order formula_17 given by
formula_18
The median and the mode are:
formula_19
formula_20
Quantiles.
The quantiles are given by the following expression formula_21 with formula_22.
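Because the quantile function is available in closed form, inverse-transform sampling is straightforward. The sketch below assumes the standard Kaniadakis logarithm, ln_κ(x) = (x^κ − x^(−κ))/(2κ), which is the inverse of the κ-exponential, and checks the empirical median of the samples against the closed-form median given above:

```python
import numpy as np

def ln_kappa(x, kappa):
    """Kaniadakis kappa-logarithm; reduces to log(x) as kappa -> 0."""
    if kappa == 0:
        return np.log(x)
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def kweibull_quantile(F, alpha, beta, kappa):
    """x(F) = beta^(-1/alpha) * [ln_kappa(1/(1-F))]^(1/alpha)."""
    F = np.asarray(F, dtype=float)
    return beta ** (-1.0 / alpha) * ln_kappa(1.0 / (1.0 - F), kappa) ** (1.0 / alpha)

# Inverse-transform sampling: push uniform variates through the quantile function.
rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)
samples = kweibull_quantile(u, alpha=2.0, beta=1.0, kappa=0.2)

# Empirical median versus the closed-form median beta^(-1/alpha) * ln_kappa(2)^(1/alpha).
print(np.median(samples), ln_kappa(2.0, 0.2) ** 0.5)
```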
Gini coefficient.
The Gini coefficient is: formula_23
Asymptotic behavior.
The "κ"-Weibull distribution behaves asymptotically as follows:
formula_24
formula_25
Applications.
The "κ"-Weibull distribution has been applied in several areas, such as seismology, economics, and epidemiology.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \nf_{_{\\kappa}}(x) = \n\\frac{\\alpha \\beta x^{\\alpha-1}}{\\sqrt{1+\\kappa^2 \\beta^2 x^{2\\alpha} }} \\exp_\\kappa(-\\beta x^\\alpha)\n"
},
{
"math_id": 1,
"text": "x \\geq 0"
},
{
"math_id": 2,
"text": "|\\kappa| < 1"
},
{
"math_id": 3,
"text": "\\beta > 0"
},
{
"math_id": 4,
"text": "\\alpha > 0"
},
{
"math_id": 5,
"text": "\\kappa \\rightarrow 0."
},
{
"math_id": 6,
"text": "F_\\kappa(x) = \n1 - \\exp_\\kappa(-\\beta x^\\alpha) "
},
{
"math_id": 7,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 8,
"text": "S_\\kappa(x) = \\exp_\\kappa(-\\beta x^\\alpha)"
},
{
"math_id": 9,
"text": "\\frac{ dS_\\kappa(x) }{ dx } = -h_\\kappa S_\\kappa(x) "
},
{
"math_id": 10,
"text": "S_\\kappa(0) = 1"
},
{
"math_id": 11,
"text": "h_\\kappa"
},
{
"math_id": 12,
"text": "h_\\kappa = \\frac{\\alpha \\beta x^{\\alpha-1}}{\\sqrt{1+\\kappa^2 \\beta^2 x^{2\\alpha} }} "
},
{
"math_id": 13,
"text": "S_\\kappa = e^{-H_\\kappa(x)} "
},
{
"math_id": 14,
"text": "H_\\kappa (x) = \\int_0^x h_\\kappa(z) dz "
},
{
"math_id": 15,
"text": "H_\\kappa (x) = \\frac{1}{\\kappa} \\textrm{arcsinh}\\left(\\kappa \\beta x^\\alpha \\right) "
},
{
"math_id": 16,
"text": "H(x) = \\beta x^\\alpha"
},
{
"math_id": 17,
"text": "m \\in \\mathbb{N}"
},
{
"math_id": 18,
"text": "\\operatorname{E}[X^m] = \\frac{|2\\kappa \\beta|^{-m/\\alpha}}{1+\\kappa \\frac{m}{\\alpha}} \\frac{\\Gamma\\Big(\\frac{1}{2\\kappa}-\\frac{m}{2\\alpha}\\Big)}{\\Gamma\\Big(\\frac{1}{2\\kappa}+\\frac{m}{2\\alpha}\\Big)} \\Gamma\\Big(1+\\frac{m}{\\alpha}\\Big)"
},
{
"math_id": 19,
"text": "x_{\\textrm{median}} (F_\\kappa) = \\beta^{-1/\\alpha} \\Bigg(\\ln_\\kappa (2)\\Bigg)^{1/\\alpha} "
},
{
"math_id": 20,
"text": "x_{\\textrm{mode}} = \\beta^{ -1 / \\alpha } \\Bigg( \\frac{ \\alpha^2 + 2 \\kappa^2 (\\alpha - 1 )}{ 2 \\kappa^2 ( \\alpha^2 - \\kappa^2)}\\Bigg)^{1/2 \\alpha} \\Bigg( \\sqrt{1 + \\frac{4 \\kappa^2 (\\alpha^2 - \\kappa^2 )( \\alpha - 1)^2}{ [ \\alpha^2 + 2 \\kappa^2 (\\alpha - 1) ]^2} } - 1 \\Bigg)^{1/2 \\alpha} \\quad (\\alpha > 1) "
},
{
"math_id": 21,
"text": "x_{\\textrm{quantile}} (F_\\kappa) = \\beta^{-1 / \\alpha } \\Bigg[ \\ln_\\kappa \\Bigg(\\frac{1}{1 - F_\\kappa} \\Bigg) \\Bigg]^{1/ \\alpha} "
},
{
"math_id": 22,
"text": "0 \\leq F_\\kappa \\leq 1"
},
{
"math_id": 23,
"text": "\\operatorname{G}_\\kappa = 1 - \\frac{\\alpha + \\kappa}{ \\alpha + \\frac{1}{2}\\kappa } \\frac{\\Gamma\\Big( \\frac{1}{\\kappa} - \\frac{1}{2 \\alpha}\\Big)}{\\Gamma\\Big( \\frac{1}{\\kappa} + \\frac{1}{2 \\alpha}\\Big)} \\frac{\\Gamma\\Big( \\frac{1}{2 \\kappa} + \\frac{1}{2 \\alpha}\\Big)}{\\Gamma\\Big( \\frac{1}{ 2\\kappa} - \\frac{1}{2 \\alpha}\\Big)}"
},
{
"math_id": 24,
"text": "\\lim_{x \\to +\\infty} f_\\kappa (x) \\sim \\frac{\\alpha}{\\kappa} (2 \\kappa \\beta)^{-1/\\kappa} x^{-1 - \\alpha/\\kappa}"
},
{
"math_id": 25,
"text": "\\lim_{x \\to 0^+} f_\\kappa (x) = \\alpha \\beta x^{\\alpha - 1}"
},
{
"math_id": 26,
"text": "\\alpha = 1"
},
{
"math_id": 27,
"text": "\\kappa = 0"
},
{
"math_id": 28,
"text": "\\alpha = 2"
}
]
| https://en.wikipedia.org/wiki?curid=71231546 |
71241698 | Regge–Wheeler–Zerilli equations | In general relativity, Regge–Wheeler–Zerilli equations are a pair of equations that describes gravitational perturbations of a Schwarzschild black hole, named after Tullio Regge, John Archibald Wheeler and Frank J. Zerilli. The perturbations of a Schwarzchild metric is classified into two types, namely, "axial" and "polar" perturbations, a terminology introduced by Subrahmanyan Chandrasekhar. Axial perturbations induce frame dragging by imparting rotations to the black hole and change sign when azimuthal direction is reversed, whereas polar perturbations do not impart rotations and do not change sign under the reversal of azimuthal direction. The equation for axial perturbations is called Regge–Wheeler equation and the equation governing polar perturbations is called Zerilli equation.
The equations take the same form as the one-dimensional Schrödinger equation. The equations read as
formula_0
where formula_1 characterizes the polar perturbations and formula_2 the axial perturbations. Here formula_3 is the tortoise coordinate (we set formula_4), formula_5 is the radial coordinate of the Schwarzschild coordinates formula_6, formula_7 is the Schwarzschild radius, and formula_8 is the time frequency of the perturbations, which appear in the form formula_9. The Regge–Wheeler potential and the Zerilli potential are respectively given by
formula_10
formula_11
where formula_12, and formula_13 characterizes the eigenmode for the formula_14 coordinate. For gravitational perturbations, the modes formula_15 are irrelevant because they do not evolve with time. Physically, gravitational perturbations in the formula_16 (monopole) mode represent a change in the black hole mass, whereas the formula_17 (dipole) mode corresponds to a shift in the location and value of the black hole's angular momentum. The shapes of the above potentials are exhibited in the figure.
Remember that in the tortoise coordinate, formula_18 denotes the event horizon and formula_19 is equivalent to formula_20, i.e., to distances far away from the black hole. The potentials are short-ranged, as they decay faster than formula_21; as formula_22 we have formula_23, and as formula_24 we have formula_25 Consequently, the asymptotic behaviour of the solutions for formula_26 is formula_27
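A small numerical sketch of the two potentials and of the tortoise coordinate for the quadrupole mode is given below (the mass, mode number and sampling range are arbitrary illustrative choices):

```python
import numpy as np

M = 1.0                          # black-hole mass in geometric units (G = c = 1)
l = 2                            # quadrupole mode
n = 0.5 * (l - 1) * (l + 2)      # 2n = (l - 1)(l + 2)

def tortoise(r):
    """Tortoise coordinate r* = r + 2M ln(r/2M - 1), defined for r > 2M."""
    return r + 2.0 * M * np.log(r / (2.0 * M) - 1.0)

def V_minus(r):
    """Regge-Wheeler (axial) potential."""
    return 2.0 * (r**2 - 2.0 * M * r) / r**5 * ((n + 1.0) * r - 3.0 * M)

def V_plus(r):
    """Zerilli (polar) potential."""
    return (2.0 * (r**2 - 2.0 * M * r) / (r**5 * (n * r + 3.0 * M)**2)
            * (n**2 * (n + 1.0) * r**3 + 3.0 * M * n**2 * r**2
               + 9.0 * M**2 * n * r + 9.0 * M**3))

# Both potentials vanish at the horizon (r* -> -infinity) and decay like 2(n+1)/r^2 far away.
r = np.linspace(2.001 * M, 60.0 * M, 2000)
print(tortoise(r[0]), tortoise(r[-1]))
print(V_minus(r).max(), V_plus(r).max())
```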
Relations between the two problems.
In 1975, Subrahmanyan Chandrasekhar and Steven Detweiler discovered a one-to-one mapping between the two equations, with the consequence that the spectra corresponding to the two potentials are identical. The two potentials can also be written as
formula_28
The relations between formula_1 and formula_2 are given by
formula_29
Reflection and transmission coefficients.
Here formula_30 is always positive, and the problem is one of reflection and transmission of waves incident from formula_22 towards formula_24. It is essentially the same as the problem of reflection and transmission by a potential barrier in quantum mechanics. Let the incident wave with unit amplitude be formula_31; then the asymptotic behaviours of the solution are given by
formula_32
formula_33
where formula_34 and formula_35 are, respectively, the reflection and transmission amplitudes. In the second equation, we have imposed the physical requirement that no waves emerge from the event horizon.
The "reflection" and "transmission coefficients" are thus defined as
formula_36
subjected to the condition formula_37 Because of the inherent connection between the two equations as outlined in the previous section, it turns out
formula_38
and thus consequently, since formula_39 and formula_40 differ only in their phases, we get
formula_41
It is clear from the figure for the reflection coefficient that low-frequency perturbations are readily reflected by the black hole, whereas high-frequency ones are absorbed by it.
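The reflection and transmission coefficients can be estimated numerically by integrating the Regge–Wheeler equation from just outside the horizon outwards and decomposing the solution at large distance into the two asymptotic exponentials, following the conventions above. The sketch below is only a rough illustration: the starting radius, outer radius and tolerances are arbitrary choices, and the decomposition at the outer radius neglects the slowly decaying 1/r² tail of the potential. It should nevertheless reproduce R + T ≈ 1 and the trend that low frequencies are reflected while high frequencies are transmitted:

```python
import numpy as np
from scipy.integrate import solve_ivp

M, l = 1.0, 2
n = 0.5 * (l - 1) * (l + 2)

def f(r):                        # Schwarzschild factor 1 - 2M/r
    return 1.0 - 2.0 * M / r

def rstar(r):                    # tortoise coordinate
    return r + 2.0 * M * np.log(r / (2.0 * M) - 1.0)

def V_minus(r):                  # Regge-Wheeler potential
    return 2.0 * (r**2 - 2.0 * M * r) / r**5 * ((n + 1.0) * r - 3.0 * M)

def reflection_transmission(sigma, r_min=2.0 * M + 1e-5, r_max=2000.0 * M):
    # Integrate Z'' + (sigma^2 - V) Z = 0 in r (chain rule through r*(r)),
    # starting from the purely transmitted wave T e^{i sigma r*} near the horizon.
    def rhs(r, y):
        Z, W = y                 # W = dZ/dr*
        return [W / f(r), (V_minus(r) - sigma**2) * Z / f(r)]

    rs0 = rstar(r_min)
    y0 = [np.exp(1j * sigma * rs0), 1j * sigma * np.exp(1j * sigma * rs0)]
    sol = solve_ivp(rhs, (r_min, r_max), y0, rtol=1e-9, atol=1e-9)

    Z, W = sol.y[0, -1], sol.y[1, -1]
    rs1 = rstar(r_max)
    # Decompose Z = A e^{i sigma r*} + B e^{-i sigma r*} at the outer radius.
    A = 0.5 * (Z + W / (1j * sigma)) * np.exp(-1j * sigma * rs1)
    B = 0.5 * (Z - W / (1j * sigma)) * np.exp(+1j * sigma * rs1)
    return abs(B / A)**2, abs(1.0 / A)**2    # (reflection, transmission)

for sigma in (0.2, 0.4, 0.8):
    R, T = reflection_transmission(sigma)
    print(f"sigma = {sigma}: R = {R:.3f}, T = {T:.3f}, R + T = {R + T:.3f}")
```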
Quasi-normal modes.
Quasi-normal modes correspond to the pure tones of the black hole. They describe arbitrary, but small, perturbations such as an object falling into the black hole, accretion of matter surrounding it, or the last stage of a slightly aspherical collapse. Unlike the reflection and transmission coefficient problem, quasi-normal modes are characterised by complex-valued formula_8's with the convention formula_42. The required boundary conditions are
formula_43
formula_44
indicating that we have purely outgoing waves with amplitude formula_45 and purely ingoing waves at the horizon.
The problem becomes an eigenvalue problem. The quasi-normal modes are damped in time, although these waves diverge in space as formula_46 (this is due to the implicit assumption that the perturbation in quasi-normal modes is 'infinite' in the remote past). Again, because of the relation mentioned between the two problems, the spectra of formula_1 and formula_2 are identical, and thus it is enough to consider the spectrum of formula_47 The problem is simplified by introducing
formula_48
The nonlinear eigenvalue problem is given by
formula_49
The solution is found to exist only for a discrete set of values of formula_50 This equation also implies the identity
formula_51
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\frac{d^2}{dr_*^2} + \\sigma^2\\right) Z^{\\pm} = V^{\\pm} Z^{\\pm}"
},
{
"math_id": 1,
"text": "Z^+"
},
{
"math_id": 2,
"text": "Z^-"
},
{
"math_id": 3,
"text": "r_*=r+2M\\ln(r/2M-1)"
},
{
"math_id": 4,
"text": "G=c=1"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "(t,r,\\theta,\\varphi)"
},
{
"math_id": 7,
"text": "2M"
},
{
"math_id": 8,
"text": "\\sigma"
},
{
"math_id": 9,
"text": "e^{i\\sigma t}"
},
{
"math_id": 10,
"text": "V^{-}= \\frac{2(r^2-2Mr)}{r^5}[(n+1)r-3M]"
},
{
"math_id": 11,
"text": "V^{+}= \\frac{2(r^2-2Mr)}{r^5(nr+3M)^2}[n^2(n+1)r^3+3Mn^2r^2+9M^2nr + 9M^3]"
},
{
"math_id": 12,
"text": "2n=(l-1)(l+2)"
},
{
"math_id": 13,
"text": "l=2,3,4,\\dots"
},
{
"math_id": 14,
"text": "\\theta"
},
{
"math_id": 15,
"text": "l=0,\\,1"
},
{
"math_id": 16,
"text": "l=0"
},
{
"math_id": 17,
"text": "l=1"
},
{
"math_id": 18,
"text": "r_*\n\\rightarrow -\\infty"
},
{
"math_id": 19,
"text": "r_* \\rightarrow \\infty"
},
{
"math_id": 20,
"text": "r\\rightarrow \\infty"
},
{
"math_id": 21,
"text": "1/r_*"
},
{
"math_id": 22,
"text": "r_*\\rightarrow \\infty"
},
{
"math_id": 23,
"text": "V^{\\pm}\\rightarrow 2(n+1)/r^2"
},
{
"math_id": 24,
"text": "r_*\\rightarrow -\\infty"
},
{
"math_id": 25,
"text": "V^{\\pm}\\sim e^{r_*/2M}."
},
{
"math_id": 26,
"text": "r_*\\rightarrow \\pm\\infty"
},
{
"math_id": 27,
"text": "e^{\\pm i\\sigma r_*}."
},
{
"math_id": 28,
"text": "V^{\\pm} = \\pm 6M \\frac{df}{dr_*} + (6Mf)^2 + 4n(n+1) f, \\quad f = \\frac{r^2-2Mr}{2r^3(nr+3M)}."
},
{
"math_id": 29,
"text": "[4n(n+1)\\pm 12 i \\sigma M] Z^{\\pm} = \\left[4n(n+1) + \\frac{72M^2(r^2-2Mr)}{r^3(2nr+6M)}\\right] Z^{\\mp} \\pm 12M \\frac{dZ^{\\mp}}{dr_*}."
},
{
"math_id": 30,
"text": "V^{\\pm}"
},
{
"math_id": 31,
"text": "e^{+i\\sigma r_*}"
},
{
"math_id": 32,
"text": "Z^{\\pm} = e^{+i\\sigma r_*} + R^{\\pm} e^{-i\\sigma r_*} \\quad \\text{as} \\quad r_*\\rightarrow +\\infty"
},
{
"math_id": 33,
"text": "Z^{\\pm} = T^{\\pm} e^{i\\sigma r_*} \\quad \\text{as} \\quad r_*\\rightarrow -\\infty"
},
{
"math_id": 34,
"text": "R=R(\\sigma)"
},
{
"math_id": 35,
"text": "T=T(\\sigma)"
},
{
"math_id": 36,
"text": "\\mathcal{R}^{\\pm}=|R^{\\pm}|^2, \\quad \\mathcal{T}^{\\pm}=|T^{\\pm}|^2"
},
{
"math_id": 37,
"text": "\\mathcal{R}^{\\pm}+ \\mathcal{T}^{\\pm}=1."
},
{
"math_id": 38,
"text": "T^+=T^-,\\quad R^+ = e^{i\\delta}R^-, \\quad e^{i\\delta}=\\frac{n(n+1)-3i\\sigma M}{n(n+1)+3 i\\sigma M}"
},
{
"math_id": 39,
"text": "R^+"
},
{
"math_id": 40,
"text": "R^-"
},
{
"math_id": 41,
"text": "\\mathcal{T}\\equiv \\mathcal{T}^{+}=\\mathcal{T}^{-}, \\quad \\mathcal{R}\\equiv\\mathcal{R}^{+}=\\mathcal{R}^{-}."
},
{
"math_id": 42,
"text": "\\mathrm{Re}\\{\\sigma\\}>0"
},
{
"math_id": 43,
"text": "Z^{\\pm} = A^{\\pm} e^{-i\\sigma r_*} \\quad \\text{as} \\quad r_*\\rightarrow +\\infty"
},
{
"math_id": 44,
"text": "Z^{\\pm} = e^{i\\sigma r_*} \\quad \\text{as} \\quad r_*\\rightarrow -\\infty"
},
{
"math_id": 45,
"text": "A^{\\pm}"
},
{
"math_id": 46,
"text": "r^*\\to\\pm\\infty"
},
{
"math_id": 47,
"text": "Z^-."
},
{
"math_id": 48,
"text": "Z^-=\\exp\\left(i\\int^{r_*} \\phi \\,dr_*\\right)."
},
{
"math_id": 49,
"text": "i \\frac{d\\phi}{dr_*} + \\sigma^2 - \\phi^2 - V^-=0, \\quad \\phi(-\\infty)=+\\sigma, \\quad \\phi(+\\infty) =-\\sigma."
},
{
"math_id": 50,
"text": "\\sigma."
},
{
"math_id": 51,
"text": "-2i\\sigma + \\int_{-\\infty}^{+\\infty} (\\sigma^2-\\phi^2) dr_* = \\int_{-\\infty}^{+\\infty} V^- dr_* = \\frac{4n+1}{4M}."
}
]
| https://en.wikipedia.org/wiki?curid=71241698 |
712430 | Kronecker product | Mathematical operation on matrices
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a specialization of the tensor product (which is denoted by the same symbol) from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.
The Kronecker product is named after the German mathematician Leopold Kronecker (1823–1891), even though there is little evidence that he was the first to define and use it. The Kronecker product has also been called the "Zehfuss matrix", and the "Zehfuss product", after Johann Georg Zehfuss, who in 1858 described this matrix operation, but Kronecker product is currently the most widely used term. The misattribution to Kronecker rather than Zehfuss was due to Kurt Hensel.
Definition.
If A is an "m" × "n" matrix and B is a "p" × "q" matrix, then the Kronecker product A ⊗ B is the "pm" × "qn" block matrix:
formula_0
more explicitly:
formula_1
Using formula_2 and formula_3 to denote truncating integer division and remainder, respectively, and numbering the matrix elements starting from 0, one obtains
formula_4
formula_5
For the usual numbering starting from 1, one obtains
formula_6
formula_7
If A and B represent linear transformations V1 → W1 and V2 → W2, respectively, then the tensor product of the two maps, V1 ⊗ V2 → W1 ⊗ W2, is represented by A ⊗ B.
Examples.
formula_8
Similarly:
formula_9
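The block definition translates directly into code. NumPy already provides this operation as numpy.kron; the explicit loop below is only meant to make the block structure visible, and it reproduces the first worked example above:

```python
import numpy as np

def kron(A, B):
    """Kronecker product built from the block definition: block (i, j) is A[i, j] * B."""
    m, n = A.shape
    p, q = B.shape
    out = np.empty((m * p, n * q), dtype=np.result_type(A, B))
    for i in range(m):
        for j in range(n):
            out[i * p:(i + 1) * p, j * q:(j + 1) * q] = A[i, j] * B
    return out

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
print(kron(A, B))                                  # matches the 4 x 4 example above
assert np.array_equal(kron(A, B), np.kron(A, B))   # agrees with NumPy's implementation
```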
Matrix equations.
The Kronecker product can be used to get a convenient representation for some matrix equations. Consider for instance the equation AXB = C, where A, B and C are given matrices and the matrix X is the unknown. We can use the "vec trick" to rewrite this equation as
formula_10
Here, vec(X) denotes the vectorization of the matrix X, formed by stacking the columns of X into a single column vector.
It now follows from the properties of the Kronecker product that the equation AXB = C has a unique solution if and only if A and B are invertible.
If X and C are row-ordered into the column vectors u and v, respectively, then
formula_11
The reason is that
formula_12
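A small numerical check of the vec trick (NumPy assumed; vec is implemented as column-major flattening, which matches the column-stacking definition of vectorization):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(3, 3))
X = rng.normal(size=(4, 3))
C = A @ X @ B

vec = lambda M: M.flatten(order="F")                 # stack the columns of M
unvec = lambda v, shape: v.reshape(shape, order="F")

# vec(AXB) = (B^T kron A) vec(X)
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(C))

# Solving AXB = C for X via the vec trick (A and B are invertible with probability 1 here).
X_recovered = unvec(np.linalg.solve(np.kron(B.T, A), vec(C)), X.shape)
assert np.allclose(X_recovered, X)
```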
Applications.
For an example of the application of this formula, see the article on the Lyapunov equation. This formula also comes in handy in showing that the matrix normal distribution is a special case of the multivariate normal distribution. This formula is also useful for representing 2D image processing operations in matrix-vector form.
Another example arises when a matrix can be factored as a Kronecker product: matrix multiplication can then be performed faster by using the above formula. This can be applied recursively, as done in the radix-2 FFT and the Fast Walsh–Hadamard transform. Splitting a known matrix into the Kronecker product of two smaller matrices is known as the "nearest Kronecker product" problem, and it can be solved exactly by using the SVD. Splitting a matrix into the Kronecker product of more than two matrices, in an optimal fashion, is a difficult problem and the subject of ongoing research; some authors cast it as a tensor decomposition problem.
In conjunction with the least squares method, the Kronecker product can be used as an accurate solution to the hand–eye calibration problem.
Related matrix operations.
Two related matrix operations are the Tracy–Singh and Khatri–Rao products, which operate on partitioned matrices. Let the "m" × "n" matrix A be partitioned into the "m""i" × "n""j" blocks A"ij" and "p" × "q" matrix B into the "pk" × "qℓ" blocks B"kl", with of course Σ"i mi" = "m", Σ"j nj" = "n", Σ"k pk" = "p" and Σ"ℓ qℓ" = "q".
Tracy–Singh product.
The Tracy–Singh product is defined as
formula_13
which means that the ("ij")-th subblock of the "mp" × "nq" product A formula_14 B is the "mi p" × "nj q" matrix A"ij" formula_14 B, of which the ("kℓ")-th subblock equals the "mi pk" × "nj qℓ" matrix A"ij" ⊗ B"kℓ". Essentially the Tracy–Singh product is the pairwise Kronecker product for each pair of partitions in the two matrices.
For example, if A and B both are 2 × 2 partitioned matrices e.g.:
formula_15
we get:
formula_16
Face-splitting product.
Mixed-products properties
formula_17
where formula_18 denotes the Face-splitting product.
formula_19
Similarly:
formula_20
formula_21
where formula_22 and formula_23 are vectors,
formula_24
where formula_22 and formula_23 are vectors, and formula_14 denotes the Hadamard product.
Similarly:
formula_25
formula_26,
where formula_27 is vector convolution and formula_28 is the Fourier transform matrix (this result is an evolving of count sketch properties),
formula_29
where formula_30 denotes the Column-wise Khatri–Rao product.
Similarly:
formula_31
formula_32
where formula_22 and formula_23 are vectors.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{A}\\otimes\\mathbf{B} = \\begin{bmatrix}\n a_{11} \\mathbf{B} & \\cdots & a_{1n}\\mathbf{B} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n a_{m1} \\mathbf{B} & \\cdots & a_{mn} \\mathbf{B}\n\\end{bmatrix}, "
},
{
"math_id": 1,
"text": "{\\mathbf{A}\\otimes\\mathbf{B}} = \\begin{bmatrix}\n a_{11} b_{11} & a_{11} b_{12} & \\cdots & a_{11} b_{1q} &\n \\cdots & \\cdots & a_{1n} b_{11} & a_{1n} b_{12} & \\cdots & a_{1n} b_{1q} \\\\\n a_{11} b_{21} & a_{11} b_{22} & \\cdots & a_{11} b_{2q} &\n \\cdots & \\cdots & a_{1n} b_{21} & a_{1n} b_{22} & \\cdots & a_{1n} b_{2q} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots & & & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{11} b_{p1} & a_{11} b_{p2} & \\cdots & a_{11} b_{pq} &\n \\cdots & \\cdots & a_{1n} b_{p1} & a_{1n} b_{p2} & \\cdots & a_{1n} b_{pq} \\\\\n \\vdots & \\vdots & & \\vdots & \\ddots & & \\vdots & \\vdots & & \\vdots \\\\\n \\vdots & \\vdots & & \\vdots & & \\ddots & \\vdots & \\vdots & & \\vdots \\\\\n a_{m1} b_{11} & a_{m1} b_{12} & \\cdots & a_{m1} b_{1q} &\n \\cdots & \\cdots & a_{mn} b_{11} & a_{mn} b_{12} & \\cdots & a_{mn} b_{1q} \\\\\n a_{m1} b_{21} & a_{m1} b_{22} & \\cdots & a_{m1} b_{2q} &\n \\cdots & \\cdots & a_{mn} b_{21} & a_{mn} b_{22} & \\cdots & a_{mn} b_{2q} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots & & & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} b_{p1} & a_{m1} b_{p2} & \\cdots & a_{m1} b_{pq} &\n \\cdots & \\cdots & a_{mn} b_{p1} & a_{mn} b_{p2} & \\cdots & a_{mn} b_{pq}\n\\end{bmatrix}. "
},
{
"math_id": 2,
"text": "/\\!/"
},
{
"math_id": 3,
"text": "\\%"
},
{
"math_id": 4,
"text": "(A\\otimes B)_{pr+v, qs+w} = a_{rs} b_{vw}"
},
{
"math_id": 5,
"text": "(A\\otimes B)_{i, j} = a_{i /\\!/ p, j /\\!/ q} b_{i \\%p, j \\% q}."
},
{
"math_id": 6,
"text": "\n(A\\otimes B)_{p(r-1)+v, q(s-1)+w} = a_{rs} b_{vw}\n"
},
{
"math_id": 7,
"text": "\n(A\\otimes B)_{i, j} = a_{\\lceil i/p \\rceil,\\lceil j/q \\rceil} b_{(i-1)\\%p +1, (j-1)\\%q + 1}.\n"
},
{
"math_id": 8,
"text": "\n \\begin{bmatrix}\n 1 & 2 \\\\\n 3 & 4 \\\\\n \\end{bmatrix} \\otimes\n \\begin{bmatrix}\n 0 & 5 \\\\\n 6 & 7 \\\\\n \\end{bmatrix} =\n \\begin{bmatrix}\n 1 \\begin{bmatrix}\n 0 & 5 \\\\\n 6 & 7 \\\\\n \\end{bmatrix} & \n 2 \\begin{bmatrix}\n 0 & 5 \\\\\n 6 & 7 \\\\\n \\end{bmatrix} \\\\\n\n 3 \\begin{bmatrix}\n 0 & 5 \\\\\n 6 & 7 \\\\\n \\end{bmatrix} & \n 4 \\begin{bmatrix}\n 0 & 5 \\\\\n 6 & 7 \\\\\n \\end{bmatrix} \\\\\n \\end{bmatrix} =\n\n \\left[\\begin{array}{cc|cc}\n 1\\times 0 & 1\\times 5 & 2\\times 0 & 2\\times 5 \\\\\n 1\\times 6 & 1\\times 7 & 2\\times 6 & 2\\times 7 \\\\ \\hline\n 3\\times 0 & 3\\times 5 & 4\\times 0 & 4\\times 5 \\\\\n 3\\times 6 & 3\\times 7 & 4\\times 6 & 4\\times 7 \\\\\n \\end{array}\\right] =\n\n \\left[\\begin{array}{cc|cc}\n 0 & 5 & 0 & 10 \\\\\n 6 & 7 & 12 & 14 \\\\ \\hline\n 0 & 15 & 0 & 20 \\\\\n 18 & 21 & 24 & 28\n \\end{array}\\right].\n"
},
{
"math_id": 9,
"text": "\n\\begin{bmatrix}\n1 & -4 & 7 \\\\\n-2 & 3 & 3\n\\end{bmatrix} \\otimes\n\\begin{bmatrix}\n8 & -9 & -6 & 5 \\\\\n1 & -3 & -4 & 7 \\\\\n2 & 8 & -8 & -3 \\\\\n1 & 2 & -5 & -1\n\\end{bmatrix} =\n\\left[\\begin{array}{cccc|cccc|cccc}\n8 & -9 & -6 & 5 & -32 & 36 & 24 & -20 & 56 & -63 & -42 & 35 \\\\\n1 & -3 & -4 & 7 & -4 & 12 & 16 & -28 & 7 & -21 & -28 & 49 \\\\\n2 & 8 & -8 & -3 & -8 & -32 & 32 & 12 & 14 & 56 & -56 & -21 \\\\\n1 & 2 & -5 & -1 & -4 & -8 & 20 & 4 & 7 & 14 & -35 & -7 \\\\ \\hline\n-16 & 18 & 12 & -10 & 24 & -27 & -18 & 15 & 24 & -27 & -18 & 15 \\\\\n-2 & 6 & 8 & -14 & 3 & -9 & -12 & 21 & 3 & -9 & -12 & 21 \\\\\n-4 & -16 & 16 & 6 & 6 & 24 & -24 & -9 & 6 & 24 & -24 & -9 \\\\\n-2 & -4 & 10 & 2 & 3 & 6 & -15 & -3 & 3 & 6 & -15 & -3\n\\end{array}\\right]\n"
},
{
"math_id": 10,
"text": "\n \\left(\\mathbf{B}^\\textsf{T} \\otimes \\mathbf{A}\\right) \\, \\operatorname{vec}(\\mathbf{X})\n = \\operatorname{vec}(\\mathbf{AXB}) = \\operatorname{vec}(\\mathbf{C})\n."
},
{
"math_id": 11,
"text": "\n \\mathbf{v} = \n \\left(\\mathbf{A} \\otimes \\mathbf{B}^\\textsf{T}\\right)\\mathbf{u}\n."
},
{
"math_id": 12,
"text": "\n \\mathbf{v} = \n \\operatorname{vec}\\left((\\mathbf{AXB})^\\textsf{T}\\right) = \n \\operatorname{vec}\\left(\\mathbf{B}^\\textsf{T}\\mathbf{X}^\\textsf{T}\\mathbf{A}^\\textsf{T}\\right) = \n \\left(\\mathbf{A} \\otimes \\mathbf{B}^\\textsf{T}\\right)\\operatorname{vec}\\left(\\mathbf{X^\\textsf{T}}\\right) = \n \\left(\\mathbf{A} \\otimes \\mathbf{B}^\\textsf{T}\\right)\\mathbf{u}\n."
},
{
"math_id": 13,
"text": "\\mathbf{A} \\circ \\mathbf{B} = \\left(\\mathbf{A}_{ij} \\circ \\mathbf{B}\\right)_{ij} = \\left(\\left(\\mathbf{A}_{ij} \\otimes \\mathbf{B}_{kl}\\right)_{kl}\\right)_{ij}"
},
{
"math_id": 14,
"text": "\\circ"
},
{
"math_id": 15,
"text": " \\mathbf{A} = \n\\left[\n\\begin{array} {c | c}\n\\mathbf{A}_{11} & \\mathbf{A}_{12} \\\\\n\\hline\n\\mathbf{A}_{21} & \\mathbf{A}_{22}\n\\end{array}\n\\right]\n= \n\\left[\n\\begin{array} {c c | c}\n1 & 2 & 3 \\\\\n4 & 5 & 6 \\\\\n\\hline\n7 & 8 & 9\n\\end{array}\n\\right]\n,\\quad\n\\mathbf{B} = \n\\left[\n\\begin{array} {c | c}\n\\mathbf{B}_{11} & \\mathbf{B}_{12} \\\\\n\\hline\n\\mathbf{B}_{21} & \\mathbf{B}_{22}\n\\end{array}\n\\right]\n= \n\\left[\n\\begin{array} {c | c c}\n1 & 4 & 7 \\\\\n\\hline\n2 & 5 & 8 \\\\\n3 & 6 & 9\n\\end{array}\n\\right]\n,\n"
},
{
"math_id": 16,
"text": "\\begin{align}\n \\mathbf{A} \\circ \\mathbf{B} \n ={}& \\left[\\begin{array} {c | c}\n \\mathbf{A}_{11} \\circ \\mathbf{B} & \\mathbf{A}_{12} \\circ \\mathbf{B} \\\\\n \\hline\n \\mathbf{A}_{21} \\circ \\mathbf{B} & \\mathbf{A}_{22} \\circ \\mathbf{B}\n \\end{array}\\right] \\\\\n ={} &\\left[\\begin{array} {c | c | c | c}\n \\mathbf{A}_{11} \\otimes \\mathbf{B}_{11} & \\mathbf{A}_{11} \\otimes \\mathbf{B}_{12} & \\mathbf{A}_{12} \\otimes \\mathbf{B}_{11} & \\mathbf{A}_{12} \\otimes \\mathbf{B}_{12} \\\\\n \\hline\n \\mathbf{A}_{11} \\otimes \\mathbf{B}_{21} & \\mathbf{A}_{11} \\otimes \\mathbf{B}_{22} & \\mathbf{A}_{12} \\otimes \\mathbf{B}_{21} & \\mathbf{A}_{12} \\otimes \\mathbf{B}_{22} \\\\\n \\hline\n \\mathbf{A}_{21} \\otimes \\mathbf{B}_{11} & \\mathbf{A}_{21} \\otimes \\mathbf{B}_{12} & \\mathbf{A}_{22} \\otimes \\mathbf{B}_{11} & \\mathbf{A}_{22} \\otimes \\mathbf{B}_{12} \\\\\n \\hline\n \\mathbf{A}_{21} \\otimes \\mathbf{B}_{21} & \\mathbf{A}_{21} \\otimes \\mathbf{B}_{22} & \\mathbf{A}_{22} \\otimes \\mathbf{B}_{21} & \\mathbf{A}_{22} \\otimes \\mathbf{B}_{22}\n \\end{array}\\right] \\\\\n ={} &\\left[\\begin{array} {c c | c c c c | c | c c}\n 1 & 2 & 4 & 7 & 8 & 14 & 3 & 12 & 21 \\\\\n 4 & 5 & 16 & 28 & 20 & 35 & 6 & 24 & 42 \\\\\n \\hline\n 2 & 4 & 5 & 8 & 10 & 16 & 6 & 15 & 24 \\\\\n 3 & 6 & 6 & 9 & 12 & 18 & 9 & 18 & 27 \\\\\n 8 & 10 & 20 & 32 & 25 & 40 & 12 & 30 & 48 \\\\\n 12 & 15 & 24 & 36 & 30 & 45 & 18 & 36 & 54 \\\\\n \\hline\n 7 & 8 & 28 & 49 & 32 & 56 & 9 & 36 & 63 \\\\\n \\hline\n 14 & 16 & 35 & 56 & 40 & 64 & 18 & 45 & 72 \\\\\n 21 & 24 & 42 & 63 & 48 & 72 & 27 & 54 & 81\n \\end{array}\\right].\n\\end{align}"
},
{
"math_id": 17,
"text": "\\mathbf{A} \\otimes (\\mathbf{B}\\bull \\mathbf{C}) = (\\mathbf{A}\\otimes \\mathbf{B}) \\bull \\mathbf{C} ,"
},
{
"math_id": 18,
"text": "\\bull"
},
{
"math_id": 19,
"text": "(\\mathbf{A} \\bull \\mathbf{B})(\\mathbf{C} \\otimes \\mathbf{D}) = (\\mathbf{A}\\mathbf{C}) \\bull (\\mathbf{B} \\mathbf{D}) ,"
},
{
"math_id": 20,
"text": "(\\mathbf{A} \\bull \\mathbf{L})(\\mathbf{B} \\otimes \\mathbf{M}) \\cdots (\\mathbf{C} \\otimes \\mathbf{S}) = (\\mathbf{A}\\mathbf{B} \\cdots \\mathbf{C}) \\bull (\\mathbf{L}\\mathbf{M} \\cdots \\mathbf{S}) ,"
},
{
"math_id": 21,
"text": "\\mathbf{c}^\\textsf{T} \\bull \\mathbf{d}^\\textsf{T} = \\mathbf{c}^\\textsf{T} \\otimes \\mathbf{d}^\\textsf{T} , "
},
{
"math_id": 22,
"text": "\\mathbf c"
},
{
"math_id": 23,
"text": "\\mathbf d"
},
{
"math_id": 24,
"text": "(\\mathbf{A} \\bull \\mathbf{B})(\\mathbf{c} \\otimes \\mathbf{d}) = (\\mathbf{A}\\mathbf{c}) \\circ (\\mathbf{B}\\mathbf{d}) ,"
},
{
"math_id": 25,
"text": "(\\mathbf{A} \\bull \\mathbf{B})(\\mathbf{M}\\mathbf{N}\\mathbf{c} \\otimes \\mathbf{Q}\\mathbf{P}\\mathbf{d}) = (\\mathbf{A}\\mathbf{M}\\mathbf{N}\\mathbf{c}) \\circ (\\mathbf{B}\\mathbf{Q}\\mathbf{P}\\mathbf{d}),"
},
{
"math_id": 26,
"text": "\\mathcal F(C^{(1)}x \\star C^{(2)}y) = (\\mathcal F C^{(1)} \\bull \\mathcal F C^{(2)})(x \\otimes y)= \\mathcal F C^{(1)}x \\circ \\mathcal F C^{(2)}y "
},
{
"math_id": 27,
"text": "\\star"
},
{
"math_id": 28,
"text": "\\mathcal F"
},
{
"math_id": 29,
"text": "(\\mathbf{A} \\bull \\mathbf{L})(\\mathbf{B} \\otimes \\mathbf{M}) \\cdots (\\mathbf{C} \\otimes \\mathbf{S})(\\mathbf{K} \\ast \\mathbf{T}) = (\\mathbf{A}\\mathbf{B} \\cdot \\mathbf{C}\\mathbf{K}) \\circ (\\mathbf{L}\\mathbf{M} \\cdots \\mathbf{S}\\mathbf{T}) ,"
},
{
"math_id": 30,
"text": "\\ast"
},
{
"math_id": 31,
"text": "(\\mathbf{A} \\bull \\mathbf{L})(\\mathbf{B} \\otimes \\mathbf{M}) \\cdots (\\mathbf{C} \\otimes \\mathbf{S})(c \\otimes d ) = (\\mathbf{A}\\mathbf{B} \\cdots \\mathbf{C}\\mathbf{c}) \\circ (\\mathbf{L}\\mathbf{M} \\cdots \\mathbf{S}\\mathbf{d}) ,"
},
{
"math_id": 32,
"text": "(\\mathbf{A} \\bull \\mathbf{L})(\\mathbf{B} \\otimes \\mathbf{M}) \\cdots (\\mathbf{C} \\otimes \\mathbf{S})(\\mathbf{P}\\mathbf{c} \\otimes \\mathbf{Q}\\mathbf{d} ) = (\\mathbf{A}\\mathbf{B} \\cdots \\mathbf{C}\\mathbf{P}\\mathbf{c}) \\circ (\\mathbf{L}\\mathbf{M} \\cdots \\mathbf{S}\\mathbf{Q}\\mathbf{d}) ,"
}
]
| https://en.wikipedia.org/wiki?curid=712430 |
712450 | Quantum statistical mechanics | Statistical mechanics of quantum-mechanical systems
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator "S", which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space "H" describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics.
Expectation.
From classical probability theory, we know that the expectation of a random variable "X" is defined by its distribution D"X" by
formula_0
assuming, of course, that the random variable is integrable or that the random variable is non-negative. Similarly, let "A" be an observable of a quantum mechanical system. "A" is given by a densely defined self-adjoint operator on "H". The spectral measure of "A" defined by
formula_1
uniquely determines "A" and conversely, is uniquely determined by "A". E"A" is a Boolean homomorphism from the Borel subsets of R into the lattice "Q" of self-adjoint projections of "H". In analogy with probability theory, given a state "S", we introduce the "distribution" of "A" under "S" which is the probability measure defined on the Borel subsets of R by
formula_2
Similarly, the expected value of "A" is defined in terms of the probability distribution D"A" by
formula_3
Note that this expectation is relative to the mixed state "S" which is used in the definition of D"A".
Remark. For technical reasons, one needs to consider separately the positive and negative parts of "A" defined by the Borel functional calculus for unbounded operators.
One can easily show:
formula_4
Note that if "S" is a pure state corresponding to the vector formula_5, then:
formula_6
The trace of an operator A is written as follows:
formula_7
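A small finite-dimensional illustration of these expectation formulas, using a single-qubit observable together with a pure and a mixed state (the particular matrices are arbitrary examples):

```python
import numpy as np

A = np.diag([1.0, -1.0])                     # an observable on C^2 (Pauli Z)

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
S_pure = np.outer(psi, psi.conj())           # pure state |psi><psi|
S_mixed = np.diag([0.75, 0.25])              # a mixed state (diagonal density matrix)

# E(A) = Tr(A S); for a pure state this equals <psi| A |psi>.
print(np.trace(A @ S_pure), psi.conj() @ A @ psi)   # both 0.0
print(np.trace(A @ S_mixed))                        # 0.5
```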
Von Neumann entropy.
Of particular significance for describing randomness of a state is the von Neumann entropy of "S" "formally" defined by
formula_8.
Actually, the operator "S" log2 "S" is not necessarily trace-class. However, if "S" is a non-negative self-adjoint operator not of trace class, we define Tr("S") = +∞. Also note that any density operator "S" can be diagonalized, that is, it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form
formula_9
and we define
formula_10
The convention is that formula_11, since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is in [0, ∞]) and this is clearly a unitary invariant of "S".
Remark. It is indeed possible that H("S") = +∞ for some density operator "S". In fact, let "T" be the diagonal matrix
formula_12
"T" is non-negative trace class and one can show "T" log2 "T" is not trace-class.
Theorem. Entropy is a unitary invariant.
In analogy with classical entropy (notice the similarity in the definitions), H("S") measures the amount of randomness in the state "S". The more dispersed the eigenvalues are, the larger the system entropy. For a system in which the space "H" is finite-dimensional, entropy is maximized for the states "S" which in diagonal form have the representation
formula_13
For such an "S", H("S") = log2 "n". The state "S" is called the maximally mixed state.
Recall that a pure state is one of the form
formula_14
for ψ a vector of norm 1.
Theorem. H("S") = 0 if and only if "S" is a pure state.
This is because "S" is a pure state if and only if its diagonal form has exactly one non-zero entry, which is a 1.
Entropy can be used as a measure of quantum entanglement.
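A finite-dimensional sketch of the entropy computation is shown below; eigenvalues below a small threshold are dropped, implementing the 0 log2 0 = 0 convention, and the example confirms that a pure state has zero entropy while the maximally mixed state on an n-dimensional space has entropy log2 n:

```python
import numpy as np

def von_neumann_entropy(S):
    """H(S) = -Tr(S log2 S), computed from the eigenvalues of the density matrix S."""
    p = np.linalg.eigvalsh(S)
    p = p[p > 1e-12]                 # the 0 log 0 = 0 convention
    return float(-np.sum(p * np.log2(p)))

psi = np.array([1.0, 0.0, 0.0])
S_pure = np.outer(psi, psi)          # pure state: entropy 0
S_max = np.eye(3) / 3.0              # maximally mixed state: entropy log2(3)

print(von_neumann_entropy(S_pure))
print(von_neumann_entropy(S_max), np.log2(3))
```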
Gibbs canonical ensemble.
Consider an ensemble of systems described by a Hamiltonian "H" with average energy "E". If "H" has pure-point spectrum and the eigenvalues formula_15 of "H" go to +∞ sufficiently fast, exp(−"rH") will be a non-negative trace-class operator for every positive "r".
The "Gibbs canonical ensemble" is described by the state
formula_16
Here β is such that the ensemble average of energy satisfies
formula_17
and
formula_18
This is called the partition function; it is the quantum mechanical version of the canonical partition function of classical statistical mechanics. The probability that a system chosen at random from the ensemble will be in a state corresponding to energy eigenvalue formula_19 is
formula_20
Under certain conditions, the Gibbs canonical ensemble maximizes the von Neumann entropy of the state subject to the energy conservation requirement.
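A small numerical sketch may make this concrete. Below (illustrative only, assuming NumPy and SciPy; the three-level Hamiltonian and the value of β are made up), the Gibbs state formula_16 is built explicitly and its average energy Tr("SH") and occupation probabilities are read off; in practice β would be adjusted, e.g. with a root finder, until Tr("SH") equals the prescribed "E":
import numpy as np
from scipy.linalg import expm

# Hypothetical three-level Hamiltonian (diagonal, eigenvalues 0, 1, 2) and inverse temperature.
H = np.diag([0.0, 1.0, 2.0])
beta = 1.5

rho = expm(-beta * H)
Z = np.trace(rho).real                        # canonical partition function Z(beta)
rho = rho / Z                                 # Gibbs state e^{-beta H} / Z

print(np.trace(rho @ H).real)                 # ensemble average energy Tr(S H)
print(np.diag(rho).real)                      # probabilities e^{-beta E_n} / Z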
Grand canonical ensemble.
For open systems where the energy and numbers of particles may fluctuate, the system is described by the grand canonical ensemble, described by the density matrix
formula_21
where the "N"1, "N"2, ... are the particle number operators for the different species of particles that are exchanged with the reservoir. Note that this is a density matrix including many more states (of varying N) compared to the canonical ensemble.
The grand partition function is
formula_22 | [
{
"math_id": 0,
"text": " \\mathbb{E}(X) = \\int_\\mathbb{R}d \\lambda \\operatorname{D}_X(\\lambda) "
},
{
"math_id": 1,
"text": " \\operatorname{E}_A(U) = \\int_U d\\lambda \\operatorname{E}(\\lambda), "
},
{
"math_id": 2,
"text": " \\operatorname{D}_A(U) = \\operatorname{Tr}(\\operatorname{E}_A(U) S). "
},
{
"math_id": 3,
"text": " \\mathbb{E}(A) = \\int_\\mathbb{R} d\\lambda \\, \\operatorname{D}_A(\\lambda)."
},
{
"math_id": 4,
"text": " \\mathbb{E}(A) = \\operatorname{Tr}(A S) = \\operatorname{Tr}(S A). "
},
{
"math_id": 5,
"text": "\\psi"
},
{
"math_id": 6,
"text": " \\mathbb{E}(A) = \\langle \\psi | A | \\psi \\rangle. "
},
{
"math_id": 7,
"text": " \\operatorname{Tr}(A) = \\sum_{m} \\langle m | A | m \\rangle . "
},
{
"math_id": 8,
"text": " \\operatorname{H}(S) = -\\operatorname{Tr}(S \\log_2 S) "
},
{
"math_id": 9,
"text": " \\begin{bmatrix} \\lambda_1 & 0 & \\cdots & 0 & \\cdots \\\\ 0 & \\lambda_2 & \\cdots & 0 & \\cdots\\\\ \\vdots & \\vdots & \\ddots & \\\\ 0 & 0 & & \\lambda_n & \\\\ \\vdots & \\vdots & & & \\ddots \\end{bmatrix} "
},
{
"math_id": 10,
"text": " \\operatorname{H}(S) = - \\sum_i \\lambda_i \\log_2 \\lambda_i. "
},
{
"math_id": 11,
"text": " \\; 0 \\log_2 0 = 0"
},
{
"math_id": 12,
"text": " T = \\begin{bmatrix} \\frac{1}{2 (\\log_2 2)^2 }& 0 & \\cdots & 0 & \\cdots \\\\ 0 & \\frac{1}{3 (\\log_2 3)^2 } & \\cdots & 0 & \\cdots\\\\ \\vdots & \\vdots & \\ddots & \\\\ 0 & 0 & & \\frac{1}{n (\\log_2 n)^2 } & \\\\ \\vdots & \\vdots & & & \\ddots \\end{bmatrix} "
},
{
"math_id": 13,
"text": " \\begin{bmatrix} \\frac{1}{n} & 0 & \\cdots & 0 \\\\ 0 & \\frac{1}{n} & \\dots & 0 \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 & \\cdots & \\frac{1}{n} \\end{bmatrix} "
},
{
"math_id": 14,
"text": " S = | \\psi \\rangle \\langle \\psi |, "
},
{
"math_id": 15,
"text": "E_n"
},
{
"math_id": 16,
"text": " S= \\frac{\\mathrm{e}^{- \\beta H}}{\\operatorname{Tr}(\\mathrm{e}^{- \\beta H})}. "
},
{
"math_id": 17,
"text": " \\operatorname{Tr}(S H) = E "
},
{
"math_id": 18,
"text": "\\operatorname{Tr}(\\mathrm{e}^{- \\beta H}) = \\sum_n \\mathrm{e}^{- \\beta E_n} = Z(\\beta) "
},
{
"math_id": 19,
"text": "E_m"
},
{
"math_id": 20,
"text": "\\mathcal{P}(E_m) = \\frac{\\mathrm{e}^{- \\beta E_m}}{\\sum_n \\mathrm{e}^{- \\beta E_n}}."
},
{
"math_id": 21,
"text": " \\rho = \\frac{\\mathrm{e}^{\\beta (\\sum_i \\mu_iN_i - H)}}{\\operatorname{Tr}\\left(\\mathrm{e}^{ \\beta ( \\sum_i \\mu_iN_i - H)}\\right)}. "
},
{
"math_id": 22,
"text": "\\mathcal Z(\\beta, \\mu_1, \\mu_2, \\cdots) = \\operatorname{Tr}(\\mathrm{e}^{\\beta (\\sum_i \\mu_iN_i - H)}) "
}
]
| https://en.wikipedia.org/wiki?curid=712450 |