id | title | text | formulas | url |
---|---|---|---|---|
7423338 | Stolarsky mean | In mathematics, the Stolarsky mean is a generalization of the logarithmic mean. It was introduced by Kenneth B. Stolarsky in 1975.
Definition.
For two positive real numbers "x", "y" the Stolarsky mean is defined as:
formula_0
Derivation.
It is derived from the mean value theorem, which states that a secant line, cutting the graph of a differentiable function formula_1 at formula_2 and formula_3, has the same slope as a line tangent to the graph at some point formula_4 in the interval formula_5.
formula_6
The Stolarsky mean is obtained by
formula_7
when choosing formula_8.
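As a quick numerical illustration (not part of the original article), the following Python sketch implements the two-variable closed form formula_0 for p not in {0, 1}; the sanity checks use the well-known special cases S_2 (arithmetic mean) and S_{-1} (geometric mean).

```python
def stolarsky_mean(x, y, p):
    """Stolarsky mean S_p(x, y) for positive x, y and p not in {0, 1}."""
    if x == y:
        return x
    return ((x**p - y**p) / (p * (x - y))) ** (1.0 / (p - 1))

# Sanity checks against known special cases:
print(stolarsky_mean(4.0, 9.0, 2))    # 6.5 = (4 + 9) / 2, the arithmetic mean
print(stolarsky_mean(4.0, 9.0, -1))   # 6.0 = sqrt(4 * 9), the geometric mean
```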
Generalizations.
One can generalize the mean to "n" + 1 variables by considering the mean value theorem for divided differences for the "n"th derivative.
One obtains
formula_20 for formula_21.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nS_p(x,y)\n& = \\lim_{(\\xi,\\eta)\\to(x,y)}\n\\left({\\frac{\\xi^p-\\eta^p}{p (\\xi-\\eta)}}\\right)^{1/(p-1)} \\\\[10pt]\n& = \\begin{cases}\nx & \\text{if }x=y \\\\\n\\left({\\frac{x^p-y^p}{p (x-y)}}\\right)^{1/(p-1)} & \\text{else}\n\\end{cases}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "( x, f(x) )"
},
{
"math_id": 3,
"text": "( y, f(y) )"
},
{
"math_id": 4,
"text": "\\xi"
},
{
"math_id": 5,
"text": "[x,y]"
},
{
"math_id": 6,
"text": " \\exists \\xi\\in[x,y]\\ f'(\\xi) = \\frac{f(x)-f(y)}{x-y} "
},
{
"math_id": 7,
"text": " \\xi = \\left[f'\\right]^{-1}\\left(\\frac{f(x)-f(y)}{x-y}\\right) "
},
{
"math_id": 8,
"text": "f(x) = x^p"
},
{
"math_id": 9,
"text": "\\lim_{p\\to -\\infty} S_p(x,y)"
},
{
"math_id": 10,
"text": "S_{-1}(x,y)"
},
{
"math_id": 11,
"text": "\\lim_{p\\to 0} S_p(x,y)"
},
{
"math_id": 12,
"text": "f(x) = \\ln x"
},
{
"math_id": 13,
"text": "S_{\\frac{1}{2}}(x,y)"
},
{
"math_id": 14,
"text": "\\frac{1}{2}"
},
{
"math_id": 15,
"text": "\\lim_{p\\to 1} S_p(x,y)"
},
{
"math_id": 16,
"text": "f(x) = x\\cdot \\ln x"
},
{
"math_id": 17,
"text": "S_2(x,y)"
},
{
"math_id": 18,
"text": "S_3(x,y) = QM(x,y,GM(x,y))"
},
{
"math_id": 19,
"text": "\\lim_{p\\to\\infty} S_p(x,y)"
},
{
"math_id": 20,
"text": "S_p(x_0,\\dots,x_n) = {f^{(n)}}^{-1}(n!\\cdot f[x_0,\\dots,x_n])"
},
{
"math_id": 21,
"text": "f(x)=x^p"
}
]
| https://en.wikipedia.org/wiki?curid=7423338 |
7423424 | Identric mean | The identric mean of two positive real numbers "x", "y" is defined as:
formula_0
It can be derived from the mean value theorem by considering the secant of the graph of the function formula_1. It can be generalized to more variables by the mean value theorem for divided differences. The identric mean is a special case of the Stolarsky mean.
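The following minimal Python sketch (not from the article) evaluates the piecewise closed form formula_0 directly; for large arguments the exponential form with logarithms would be numerically safer.

```python
import math

def identric_mean(x, y):
    """Identric mean I(x, y) of two positive reals, using the closed form above."""
    if x == y:
        return x
    return (1.0 / math.e) * (x**x / y**y) ** (1.0 / (x - y))

print(identric_mean(1.0, 2.0))  # 4/e, about 1.4715, lying between the two arguments
```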
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nI(x,y)\n&=\n\\frac{1}{e}\\cdot\n\\lim_{(\\xi,\\eta)\\to(x,y)}\n\\sqrt[\\xi-\\eta]{\\frac{\\xi^\\xi}{\\eta^\\eta}}\n\\\\[8pt]\n&=\n\\lim_{(\\xi,\\eta)\\to(x,y)}\n\\exp\\left(\\frac{\\xi\\cdot\\ln\\xi-\\eta\\cdot\\ln\\eta}{\\xi-\\eta}-1\\right)\n\\\\[8pt]\n&=\n\\begin{cases}\nx & \\text{if }x=y \\\\[8pt]\n\\frac{1}{e} \\sqrt[x-y]{\\frac{x^x}{y^y}} & \\text{else}\n\\end{cases}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "x \\mapsto x\\cdot \\ln x"
}
]
| https://en.wikipedia.org/wiki?curid=7423424 |
742352 | Derivative test | Method for finding the extrema of a function
In calculus, a derivative test uses the derivatives of a function to locate the critical points of a function and determine whether each point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function.
The usefulness of derivatives to find extrema is proved mathematically by Fermat's theorem of stationary points.
First-derivative test.
The first-derivative test examines a function's monotonic properties (where the function is increasing or decreasing), focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function will achieve a highest value at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it will achieve a least value at that point. If the function fails to "switch" and remains increasing or remains decreasing, then no highest or least value is achieved.
One can examine a function's monotonicity without calculus. However, calculus is usually helpful because there are sufficient conditions that guarantee the monotonicity properties above, and these conditions apply to the vast majority of functions one would encounter.
Precise statement of monotonicity properties.
Stated precisely, suppose that "f" is a real-valued function defined on some open interval containing the point "x" and suppose further that "f" is continuous at "x".
Note that in the first case, "f" is not required to be strictly increasing or strictly decreasing to the left or right of "x", while in the last case, "f" is required to be strictly increasing or strictly decreasing. The reason is that in the definition of local maximum and minimum, the inequality is not required to be strict: e.g. every value of a constant function is considered both a local maximum and a local minimum.
Precise statement of first-derivative test.
The first-derivative test depends on the "increasing–decreasing test", which is itself ultimately a consequence of the mean value theorem. It is a direct consequence of the way the derivative is defined and its connection to decrease and increase of a function locally, combined with the previous section.
Suppose "f" is a real-valued function of a real variable defined on some interval containing the critical point "a". Further suppose that "f" is continuous at "a" and differentiable on some open interval containing "a", except possibly at "a" itself.
Again, corresponding to the comments in the section on monotonicity properties, note that in the first two cases, the inequality is not required to be strict, while in the third, strict inequality is required.
Applications.
The first-derivative test is helpful in solving optimization problems in physics, economics, and engineering. In conjunction with the extreme value theorem, it can be used to find the absolute maximum and minimum of a real-valued function defined on a closed and bounded interval. In conjunction with other information such as concavity, inflection points, and asymptotes, it can be used to sketch the graph of a function.
Second-derivative test (single variable).
After establishing the critical points of a function, the "second-derivative test" uses the value of the second derivative at those points to determine whether such points are a local maximum or a local minimum. If the function "f" is twice-differentiable at a critical point "x" (i.e. a point where "f′"("x") = 0), then:
In the last case, Taylor's Theorem may sometimes be used to determine the behavior of "f" near "x" using higher derivatives.
Proof of the second-derivative test.
Suppose we have formula_3 (the proof for formula_0 is analogous). By assumption, formula_5. Then
formula_6
Thus, for "h" sufficiently small we get
formula_7
which means that formula_8 if formula_9 (intuitively, "f" is decreasing as it approaches formula_2 from the left), and that formula_10 if formula_11 (intuitively, "f" is increasing as we go right from "x"). Now, by the first-derivative test, formula_1 has a local minimum at formula_2.
Concavity test.
A related but distinct use of second derivatives is to determine whether a function is concave up or concave down at a point. It does not, however, provide information about inflection points. Specifically, a twice-differentiable function "f" is concave up if formula_3 and concave down if formula_0. Note that if formula_12, then formula_13 has zero second derivative, yet is not an inflection point, so the second derivative alone does not give enough information to determine whether a given point is an inflection point.
Higher-order derivative test.
The "higher-order derivative test" or "general derivative test" is able to determine whether a function's critical points are maxima, minima, or points of inflection for a wider variety of functions than the second-order derivative test. As shown below, the second-derivative test is mathematically identical to the special case of "n" = 1 in the higher-order derivative test.
Let "f" be a real-valued, sufficiently differentiable function on an interval formula_14, let formula_15, and let formula_16 be a natural number. Also let all the derivatives of "f" at "c" be zero up to and including the "n"-th derivative, but with the ("n" + 1)th derivative being non-zero:
formula_17
There are four possibilities, the first two cases where "c" is an extremum, the second two where "c" is a (local) saddle point:
Since "n" must be either odd or even, this analytical test classifies any stationary point of "f", so long as a nonzero derivative shows up eventually.
Example.
Say we want to perform the general derivative test on the function formula_20 at the point formula_13. To do this, we calculate the derivatives of the function and then evaluate them at the point of interest until the result is nonzero.
formula_21, formula_22
formula_23, formula_24
formula_25, formula_26
formula_27, formula_28
formula_29, formula_30
formula_31, formula_32
As shown above, at the point formula_13, the function formula_33 has all of its derivatives at 0 equal to 0, except for the 6th derivative, which is positive. Thus "n" = 5, and by the test, there is a local minimum at 0.
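The same bookkeeping can be automated. The sketch below (an illustration, not part of the article) uses SymPy to find the first nonzero derivative at the point of interest and then applies the classification of the higher-order derivative test; the variable names are arbitrary.

```python
import sympy as sp

x = sp.symbols('x')
f = x**6 + 5
c = 0

# Differentiate repeatedly at c until a nonzero derivative appears.
k = 1
while sp.diff(f, x, k).subs(x, c) == 0:
    k += 1
val = sp.diff(f, x, k).subs(x, c)     # first nonzero derivative: here k = 6, val = 720

if k == 1:
    print("c is not a stationary point")
elif k % 2 == 0:                      # k = n + 1 even, i.e. n odd: extremum
    print("local minimum" if val > 0 else "local maximum")
else:                                 # n even: saddle / inflection-type stationary point
    print("saddle point")
```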
Multivariable case.
For a function of more than one variable, the second-derivative test generalizes to a test based on the eigenvalues of the function's Hessian matrix at the critical point. In particular, assuming that all second-order partial derivatives of "f" are continuous on a neighbourhood of a critical point "x", then if the eigenvalues of the Hessian at "x" are all positive, then "x" is a local minimum. If the eigenvalues are all negative, then "x" is a local maximum, and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second-derivative test is inconclusive.
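A small numerical sketch of this multivariable test (not taken from the article) is shown below: given the Hessian evaluated at a critical point, the sign pattern of its eigenvalues decides the classification. The example matrix is the Hessian of f(x, y) = x^2 - y^2 at the origin, which is a saddle point.

```python
import numpy as np

def classify_critical_point(hessian):
    """Second-derivative test from the eigenvalues of the (symmetric) Hessian at a critical point."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"
    return "inconclusive (singular Hessian)"

print(classify_critical_point([[2.0, 0.0], [0.0, -2.0]]))  # saddle point
```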
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f''(x) < 0"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "f''(x) > 0"
},
{
"math_id": 4,
"text": "f''(x) = 0"
},
{
"math_id": 5,
"text": "f'(x) = 0"
},
{
"math_id": 6,
"text": "0 < f''(x) = \\lim_{h \\to 0} \\frac{f'(x + h) - f'(x)}{h} = \\lim_{h \\to 0} \\frac{f'(x + h)}{h}."
},
{
"math_id": 7,
"text": "\\frac{f'(x + h)}{h} > 0,"
},
{
"math_id": 8,
"text": "f'(x + h) < 0"
},
{
"math_id": 9,
"text": "h < 0"
},
{
"math_id": 10,
"text": "f'(x + h) > 0"
},
{
"math_id": 11,
"text": "h > 0"
},
{
"math_id": 12,
"text": "f(x) = x^4"
},
{
"math_id": 13,
"text": "x = 0"
},
{
"math_id": 14,
"text": "I \\subset \\R"
},
{
"math_id": 15,
"text": "c \\in I"
},
{
"math_id": 16,
"text": "n \\ge 1"
},
{
"math_id": 17,
"text": "f'(c) = \\cdots =f^{(n)}(c) = 0\\quad \\text{and}\\quad f^{(n+1)}(c) \\ne 0."
},
{
"math_id": 18,
"text": "f^{(n+1)}(c) < 0"
},
{
"math_id": 19,
"text": "f^{(n+1)}(c) > 0"
},
{
"math_id": 20,
"text": "f(x) = x^6 + 5"
},
{
"math_id": 21,
"text": "f'(x) = 6x^5"
},
{
"math_id": 22,
"text": "f'(0) = 0;"
},
{
"math_id": 23,
"text": "f''(x) = 30x^4"
},
{
"math_id": 24,
"text": "f''(0) = 0;"
},
{
"math_id": 25,
"text": "f^{(3)}(x) = 120x^3"
},
{
"math_id": 26,
"text": "f^{(3)}(0) = 0;"
},
{
"math_id": 27,
"text": "f^{(4)}(x) = 360x^2"
},
{
"math_id": 28,
"text": "f^{(4)}(0) = 0;"
},
{
"math_id": 29,
"text": "f^{(5)}(x) = 720x"
},
{
"math_id": 30,
"text": "f^{(5)}(0) = 0;"
},
{
"math_id": 31,
"text": "f^{(6)}(x) = 720"
},
{
"math_id": 32,
"text": "f^{(6)}(0) = 720."
},
{
"math_id": 33,
"text": "x^6 + 5"
}
]
| https://en.wikipedia.org/wiki?curid=742352 |
74237264 | Pharmacological cardiotoxicity | Pharmacological cardiotoxicity is a cardiac damage under the action of drugs and it can occur both affecting the performances of the cardiac muscle and by altering the ion channels/currents of the functional cardiac cells, named the cardiomyocytes.
Two distinct case in which can occur are related to anti-cancer drugs and antiarrhythmic drugs. From early observations, some of the first ones which go under the name of anthracycline. It has emerged that such drugs cause a progressive form of heart failure leading to cardiac death. The mechanism of cell injury is thought to account for iron-dependent generation of reactive oxygen species with a spreading of oxidative damage to the cardiomyocytes. On the other hand, related to the antiarrhythmic drugs, the cardiotoxicity is associated to the risk of induce a potential fatal arrhythmias due to an imbalance in the amount of ion currents that flows in/out the cell membrane of the cardiomyocytes.
Pharmacological action.
The pharmacological action represents the mechanism by means of which a specific effect is obtained. Depending on the class and type of the drug, the pharmacological action may differ.
In the case of electrophysiology, the drug acts directly at the level of the cells, affecting the opening/closing mechanism of the ionic channels, as happens with anti-arrhythmic drugs. Due to the ionic permeability properties of the cardiac cell membrane, during the action potential the opening of the ion channels generates ion currents that flow in and out of the lipophilic cell membrane.
The action of anti-arrhythmic drugs is to modify such ion currents by acting on the structure of the ion channel, trying to restore the physiological opening/closing mechanism. However, instead of providing the desired benefit to the heart, a new drug may negatively affect the ion currents, excessively modifying the amount of current flowing through the cell membrane and thus increasing the risk of inducing a potentially fatal arrhythmia.
Examples of pharmacological cardiotoxicity.
Anti-arrhythmic drugs cardiotoxicity.
Anti-arrhythmic drugs are a class of pharmacological compounds whose action is to restore the normal sinus rhythm when a patient is affected by an arrhythmia; in other words, they perform a pharmacological cardioversion.
Indeed, the pharmacological cardiotoxicity of anti-arrhythmic compounds is related to the potential of these drugs to induce fatal arrhythmias such as torsade de pointes or ventricular fibrillation. Anti-arrhythmic drugs act directly on the opening/closing of ion channels, thus modifying the ion currents.
In treating arrhythmias, the pharmacological therapeutic action is related to the generation of a new combination of blocked/open ion channels. Nevertheless, this new pharmacologically induced configuration may lead to an imbalance in ionic currents and, as a consequence, cause a modification in the action potential morphology which increases the risk of inducing an arrhythmia.
Over the years, it has been studied how changes in the action potential shape, i.e. prolongation of the repolarization phase or early afterdepolarizations, are linked to the likelihood of inducing fatal arrhythmias, such as torsade de pointes. Thus, the risk of inducing fatal arrhythmias has to be prevented by assessing the pharmacological cardiotoxicity at the early stages of the development of a new drug.
Clinical cardiotoxicity assessment.
During the study of a new pharmacological compound, the clinical trial is one of the phases before the market release.
At this level, following the directions of the clinical trial protocol, the new drug is administered to the patient as a therapy, and the patient's clinical status is monitored with the aim of evaluating possible side effects.
Old paradigm.
To assess pharmacological cardiotoxicity, it was common practice to measure the QT interval in vivo and the blockage of the potassium channel. Nevertheless, since 2013 a new paradigm has been developed to overcome the limits of the previous one. In fact, it has been demonstrated that the old paradigm was too stringent, labeling as pro-arrhythmic some pharmacological compounds which actually were not.
New paradigm: CiPA.
The comprehensive in vitro pro-arrhythmia assay (CiPA) was born, accounting for both experimental data and detailed computational models which take into account multiple ionic currents instead of measuring just the QT interval and potassium channel blockage. This new paradigm aims to interlink the clinical evidence with in silico modeling to reconstruct the atrial and ventricular action potential and evaluate the likelihood of early afterdepolarizations occurring.
In Silico cardiotoxicity assessment.
Background.
In recent years, in silico medicine has turned out to be promising, aiding scientists and clinicians in preventing and adequately curing several diseases. Computational modeling aids in understanding complex phenomena, allowing scientists to vary parameters with the aim of measuring variables that otherwise could not have been investigated.
In the field of electrophysiology, the pharmacological cardiotoxicity assessment can be carried out leveraging specific computational models. According to the type and parameters to be investigated in the research, it is possible to analyze the pharmacological effect on the atria and ventricles separately.
Since the two cardiac chambers are very different from each other and play a key role on both a functional and an anatomical basis, suitable computational models have to be adopted to describe their different behaviour. Over the years, several models have been developed to best characterize and replicate the cellular action potential behaviour of the most relevant anatomical regions of the heart, such as the Courtemanche model for the atria or the O'Hara model for the ventricles.
Creation of a population of cellular action potentials.
In this way, it has been possible to create a virtual cellular population of cardiomyocytes and vary the conductances related to the main ionic currents which contribute to the action potential morphology of a specific anatomical region of the heart.
In order to create a stable population of cellular action potentials, suitable biomarkers have to be considered. Over the years, several biomarkers have been developed to best characterize the instability of cellular action potentials. A few biomarkers are reported below:
formula_0
formula_1
formula_2
formula_3
formula_4
Many others can be used according to the needs of the research.
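As an illustration only, the sketch below computes these biomarkers from a sampled action-potential trace with NumPy. The conventions chosen here (activation time taken at the maximum upstroke velocity, repolarization levels measured from the peak) are assumptions of this sketch, not something specified in the text above.

```python
import numpy as np

def ap_biomarkers(t, v):
    """APD and amplitude biomarkers for one sampled action potential (one possible convention)."""
    v0 = v[0]                               # V_0: resting potential before the upstroke
    i_peak = int(np.argmax(v))
    apa = v[i_peak] - v0                    # APA = V_max - V_0
    t0 = t[int(np.argmax(np.diff(v)))]      # activation time ~ maximum upstroke velocity

    def apd(p):
        # time to p% repolarization: first sample after the peak below V_max - p% of APA
        thr = v[i_peak] - apa * p / 100.0
        idx = np.where(v[i_peak:] <= thr)[0]
        return t[i_peak + idx[0]] - t0 if idx.size else np.nan

    apd90, apd50, apd20 = apd(90), apd(50), apd(20)
    return {"APD90": apd90, "APD50": apd50, "APD20": apd20,
            "Triangulation": apd90 - apd50, "APA": apa}
```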
Regional clusterization.
Once the cellular population is stable, all the action potentials are compared to physiological data related to the most relevant anatomical regions in order to appropriately filter them, aiming to consider just the physiologically relevant ones.
At the atrial level, the clusterization occurs with data associated to:
Simulation of the pharmacological action.
According to pharmacokinetic and pharmacodynamic data of the drugs, the pharmacological action is integrated into the model. By means of specific electrical stimulation protocols, the pharmacological effect of a new drug can be investigated in a completely safe and controlled computational environment, providing important preliminary considerations concerning the cardiotoxicity of new pharmacological compounds.
According to the outcome of the simulations, several aspects can be investigated to identify the pro-arrhythmicity of a new pharmacological compound. The typical changes, called repolarization abnormalities, in the action potential morphology that are considered pro-arrhythmic are:
Torsade de pointes risk score.
Simulations can be carried out at different effective therapeutic plasma levels of the drugs to identify the level at which cardiotoxicity cannot be neglected. The data collected could finally be used to create a scoring system aimed at defining the torsadogenic risk, namely the risk of inducing torsade de pointes, of the new drugs.
A possible torsade de pointes risk score to assess cardiotoxicity could be:
formula_5
where formula_6 denotes the sum over all the considered concentrations, [C] is the concentration taken into account, the weights are defined as formula_7, formula_8 is the total number of models in the population, and formula_9 represents the number of models showing repolarization abnormalities.
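The score is simple to transcribe directly into code. The following Python function (an unvalidated sketch; the example numbers are invented) follows the formula above, with the weights W_c computed from an assumed effective therapeutic plasma concentration.

```python
def tdp_risk_score(concentrations, n_ra, eftpc, n_models):
    """Torsade de pointes risk score TdPRS, transcribed from the formula above."""
    weights = [eftpc / c for c in concentrations]       # W_c = EFTPC / [C]
    weighted_ra = sum(w * r for w, r in zip(weights, n_ra))
    return weighted_ra / (n_models * sum(weights))

# Hypothetical example: three concentration levels, a population of 200 models.
print(tdp_risk_score(concentrations=[1.0, 2.0, 4.0],
                     n_ra=[0, 12, 45], eftpc=1.0, n_models=200))
```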
Tissue simulations.
More detailed computational simulations can be carried out by accounting not for single cellular models but for the functional syncytium, enabling the cells to interact with each other through the so-called electrotonic coupling.
In the case of tissue simulations, or in wider cases such as whole-organ simulations, the cellular models alone are not applicable anymore, and several corrections have to be made. Firstly, the governing equations cannot be just ordinary differential equations; a system of partial differential equations has to be accounted for. A suitable choice may be the monodomain model:
formula_10 formula_11 formula_12
formula_13 formula_11 formula_14
where formula_15 is the effective conductivity tensor, formula_16 is the capacitance of the cellular membrane, formula_17 the transmembrane ionic current, formula_12 and formula_18 are the domain of interest and its boundary, respectively, with formula_19 the outward normal to formula_18. | [
{
"math_id": 0,
"text": "APD_{90}=t_{90}-t_0"
},
{
"math_id": 1,
"text": "APD_{50}=t_{50}-t_0"
},
{
"math_id": 2,
"text": "APD_{20}=t_{20}-t_0"
},
{
"math_id": 3,
"text": "Triangulation=APD_{90}-APD_{50}"
},
{
"math_id": 4,
"text": "APA=V_{Max}-V_0"
},
{
"math_id": 5,
"text": "TdPRS=\\frac{\\sum_{c}(W_c\\cdot nRA_c)}{N\\cdot \\sum_{c}W_c)}\n\n"
},
{
"math_id": 6,
"text": "\\sum_{c}\n\n"
},
{
"math_id": 7,
"text": "W_c=\\frac{EFTPC}{[C]}\n\n"
},
{
"math_id": 8,
"text": "N\n\n"
},
{
"math_id": 9,
"text": "nRA_c\n\n"
},
{
"math_id": 10,
"text": "\\triangledown \\cdot(D\\nabla V)=(C_m\\frac{\\partial V}{\\partial t} + I_{ion}(V,u)) \n"
},
{
"math_id": 11,
"text": "in"
},
{
"math_id": 12,
"text": "\\Omega"
},
{
"math_id": 13,
"text": "n \\cdot(D\\nabla V)=0"
},
{
"math_id": 14,
"text": "\\partial \\Omega"
},
{
"math_id": 15,
"text": "D"
},
{
"math_id": 16,
"text": "C_m"
},
{
"math_id": 17,
"text": "I_{ion}"
},
{
"math_id": 18,
"text": "\\partial\\Omega"
},
{
"math_id": 19,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=74237264 |
74240445 | Johan Gielis | Belgian engineer, scientist, mathematician, and entrepreneur
Johan Gielis (born July 8, 1962) is a Belgian engineer, scientist, mathematician, and entrepreneur. Gielis is known for his contributions to the field of mathematics, specifically in the area of modeling and geometrical methods. He is best known for developing the concept of the superformula, which is a generalization of the traditional Pythagorean theorem and the equation of the circle, that can generate a wide variety of complex shapes found in nature.
Career.
Gielis obtained a degree in horticultural engineering. Later, he changed direction from botany and plant biotechnology to geometry and mathematics. In 2013, Gielis co-founded the Antenna Company in Eindhoven. The company applies the superformula to develop efficient antennas that transmit data over various frequencies. The company made antenna systems for ultra-fast WiFi 6 devices. Its antenna systems focus on 2-7 gigahertz, in line with the IEEE 802.11ax standard and beyond. Other products focus on Internet of Things and mmWave antenna systems.
Superformula.
Gielis proposed the superformula in 2003. The superformula is a generalization of the superellipse. He suggested that it allows for the creation of shapes that can mimic natural forms such as flowers, shells, and other intricate structures. The mathematical equation combines elements of trigonometry and algebra to generate complex and visually appealing patterns. It also allowed for a generalization of minimal surfaces based on a more general notion of the energy functional, for a generalized definition of the Laplacian, and for the use of Fourier projection methods to solve boundary value problems.
formula_0
"r" - distance from the center, "Φ" - Angle to the x-axis, "m" - symmetry, "n"1, "n"2, "n"3: - Form, "a", "b": - expansion (semi-axes)
Gielis patented the synthesis of patterns generated by the superformula. The superformula was used in No Man's Sky, an action-adventure survival game developed and published by Hello Games. The formula was also used in the Jewels of the Sea.
* Modeling in Mathematics Proceedings of the Second Tbilisi-Salerno Workshop on Modeling in Mathematics 2017
* Inventing the Circle
* The geometrical beauty of plants
* Universal Natural Shapes
* A generic geometric transformation that unifies a wide range of natural and abstract shapes
* Diatom frustule morphogenesis and function: a multidisciplinary survey
* Somatic embryogenesis from mature Bambusa balcooa Roxburgh as basis for mass production of elite forestry bamboos
* Tissue culture strategies for genetic improvement of bamboo
* Computer implemented tool box systems and methods
* Superquadrics with rational and irrational symmetry
* Comparison of dwarf bamboos (Indocalamus sp.) leaf parameters to determine relationship between spatial density of plants and total leaf area per plant
* A general leaf area geometric formula exists for plants—Evidence from the simplified Gielis equation
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{1}{r} =\\sqrt[n_1]{\\,\\left| \\frac{1}{a} \\cos \\left( \\frac{m}{4} \\phi \\right) \\right|^{n_2} + \\left| \\frac{1}{b} \\sin \\left( \\frac{m}{4} \\phi \\right) \\right|^{n_3}} "
}
]
| https://en.wikipedia.org/wiki?curid=74240445 |
742477 | Newton polygon | In mathematics, the Newton polygon is a tool for understanding the behaviour of polynomials over local fields, or more generally, over ultrametric fields.
In the original case, the local field of interest was "essentially" the field of formal Laurent series in the indeterminate "X", i.e. the field of fractions of the formal power series ring formula_0,
over formula_1, where formula_1 was the real or complex number field. This is still of considerable utility with respect to Puiseux expansions. The Newton polygon is an effective device for understanding the leading terms formula_2
of the power series expansion solutions to equations formula_3
where formula_4 is a polynomial with coefficients in formula_5, the polynomial ring; that is, implicitly defined algebraic functions. The exponents formula_6 here are certain rational numbers, depending on the branch chosen; and the solutions themselves are power series in formula_7
with formula_8 for a denominator formula_9 corresponding to the branch. The Newton polygon gives an effective, algorithmic approach to calculating formula_9.
After the introduction of the p-adic numbers, it was shown that the Newton polygon is just as useful in questions of ramification for local fields, and hence in algebraic number theory. Newton polygons have also been useful in the study of elliptic curves.
Definition.
A priori, given a polynomial over a field, the behaviour of the roots (assuming it has roots) will be unknown. Newton polygons provide one technique for the study of the behaviour of the roots.
Let formula_1 be a field endowed with a non-archimedean valuation formula_10, and let
formula_11
with formula_12. Then the Newton polygon of formula_13 is defined to be the lower boundary of the convex hull of the set of points formula_14
ignoring the points with formula_15.
Restated geometrically, plot all of these points "P""i" on the "xy"-plane. Let's assume that the point indices increase from left to right ("P""0" is the leftmost point, "P""n" is the rightmost point). Then, starting at "P"0, draw a ray straight down parallel with the "y"-axis, and rotate this ray counter-clockwise until it hits the point "P"k1 (not necessarily "P"1). Break the ray here. Now draw a second ray from "P"k1 straight down parallel with the "y"-axis, and rotate this ray counter-clockwise until it hits the point "P"k2. Continue until the process reaches the point "P""n"; the resulting polygon (containing the points "P"0, "P"k1, "P"k2, ..., "P"km, "P""n") is the Newton polygon.
Another, perhaps more intuitive way to view this process is this : consider a rubber band surrounding all the points "P"0, ..., "P"n. Stretch the band upwards, such that the band is stuck on its lower side by some of the points (the points act like nails, partially hammered into the xy plane). The vertices of the Newton polygon are exactly those points.
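Computationally, the same construction is just a lower convex hull of the points formula_14. The sketch below (an illustration; the example polynomial and helper names are mine) does this for a polynomial with integer coefficients, using the p-adic valuation; by the main theorem stated in the next section, the negatives of the segment slopes give the valuations of the roots and the horizontal lengths count them.

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Vertices of the lower convex hull of the points (i, v_p(a_i)), a_i the coefficients of f."""
    pts = [(i, vp(a, p)) for i, a in enumerate(coeffs) if a != 0]
    hull = []
    for pt in pts:                            # monotone-chain lower hull; points already sorted by i
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment from hull[-2] to pt
            if (y2 - y1) * (pt[0] - x1) >= (pt[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(pt)
    return hull

def segments(hull):
    """(slope, horizontal length) for each edge of the polygon."""
    return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# f(x) = 4 + 2x + x^2 over the 2-adic numbers: points (0, 2), (1, 1), (2, 0)
# give a single segment of slope -1 and length 2, so both roots have 2-adic valuation 1.
print(segments(newton_polygon([4, 2, 1], p=2)))   # [(Fraction(-1, 1), 2)]
```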
For a neat diagram of this see Ch6 §3 of "Local Fields" by JWS Cassels, LMS Student Texts 3, CUP 1986. It is on p99 of the 1986 paperback edition.
Main theorem.
With the notations in the previous section, the main result concerning the Newton polygon is the following theorem, which states that the valuation of the roots of formula_13 are entirely determined by its Newton polygon:
Let formula_16
be the slopes of the line segments of the Newton polygon of formula_17 (as defined above) arranged in increasing order, and let
formula_18
be the corresponding lengths of the line segments projected onto the x-axis (i.e. if we have a line segment stretching between the points formula_19 and formula_20 then the length is formula_21).
Corollaries and applications.
With the notation of the previous sections, we denote, in what follows, by formula_29 the splitting field of formula_13 over formula_1, and by formula_30 an extension of formula_31 to formula_29.
Newton polygon theorem is often used to show the irreducibility of polynomials, as in the next corollary for example:
Indeed, by the main theorem, if formula_24 is a root of formula_13, formula_41
If formula_13 were not irreducible over formula_1, then the degree formula_9 of formula_24 would be formula_42, and there would hold formula_43. But this is impossible since formula_44 with formula_36 coprime to formula_37.
Another simple corollary is the following:
"Proof:" By the main theorem, formula_13 must have a single root formula_24 whose valuation is formula_47 In particular, formula_24 is separable over formula_1.
If formula_24 does not belong to formula_1, formula_24 has a distinct Galois conjugate formula_48 over formula_1, with formula_49, and formula_48 is a root of formula_13, a contradiction.
More generally, the following factorization theorem holds:
"Moreover, formula_57, and if formula_58 is coprime to formula_28, formula_54 is irreducible over formula_1."
"Proof:"
For every formula_26, denote by formula_54 the product of the monomials formula_59 such that formula_24 is a root of formula_13 and formula_60. We also denote by formula_61 the factorization of formula_62 in formula_5 into prime monic factors formula_63
Let formula_24 be a root of formula_54. We can assume that formula_64 is the minimal polynomial of formula_24 over formula_1.
If formula_48 is a root of formula_64, there exists a K-automorphism formula_50 of formula_29 that sends formula_24 to formula_48, and we have formula_65 since formula_1 is Henselian. Therefore formula_48 is also a root of formula_54.
Moreover, every root of formula_64 of multiplicity formula_66 is clearly a root of formula_67 of multiplicity formula_68, since repeated roots share obviously the same valuation. This shows that formula_69 divides formula_70
Let formula_71. Choose a root formula_72 of formula_73. Notice that the roots of formula_73 are distinct from the roots of formula_64. Repeat the previous argument with the minimal polynomial of formula_74 over formula_1, assumed without loss of generality to be formula_75, to show that formula_76 divides formula_77.
Continuing this process until all the roots of formula_54 are exhausted, one eventually arrives to
formula_78, with formula_79. This shows that formula_53, formula_54 monic.
But the formula_54 are coprime since their roots have distinct valuations. Hence clearly formula_80, showing the main contention.
The fact that formula_81 follows from the main theorem, and so does the fact that formula_57, by remarking that the Newton polygon of formula_54 can have only one segment joining formula_82 to formula_83. The condition for the irreducibility of formula_54 follows from the corollary above. (q.e.d.)
The following is an immediate corollary of the factorization above, and constitutes a test for the reducibility of polynomials over Henselian fields:
Other applications of the Newton polygon come from the fact that a Newton polygon is sometimes a special case of a Newton polytope, and can be used to construct asymptotic solutions of two-variable polynomial equations like
formula_85
Symmetric function explanation.
In the context of a valuation, we are given certain information in the form of the valuations of elementary symmetric functions of the roots of a polynomial, and require information on the valuations of the actual roots, in an algebraic closure. This has aspects both of ramification theory and singularity theory. The valid inferences possible are to the valuations of power sums, by means of Newton's identities.
History.
Newton polygons are named after Isaac Newton, who first described them and some of their uses in correspondence from the year 1676 addressed to Henry Oldenburg.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K[[X]]"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "aX^r"
},
{
"math_id": 3,
"text": "P(F(X)) = 0"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "K[X]"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "K[[Y]]"
},
{
"math_id": 8,
"text": "Y = X^{\\frac{1}{d}}"
},
{
"math_id": 9,
"text": "d"
},
{
"math_id": 10,
"text": "v_K: K \\to \\mathbb R\\cup \\{ \\infty \\}"
},
{
"math_id": 11,
"text": "f(x) = a_nx^n + \\cdots + a_1x + a_0 \\in K[x],"
},
{
"math_id": 12,
"text": "a_0 a_n \\ne 0"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "P_i=\\left(i,v_K(a_i)\\right),"
},
{
"math_id": 15,
"text": "a_i = 0"
},
{
"math_id": 16,
"text": "\\mu_1, \\mu_2, \\ldots, \\mu_r"
},
{
"math_id": 17,
"text": "f(x)"
},
{
"math_id": 18,
"text": "\\lambda_1, \\lambda_2, \\ldots, \\lambda_r"
},
{
"math_id": 19,
"text": "P_i"
},
{
"math_id": 20,
"text": "P_j"
},
{
"math_id": 21,
"text": "j-i"
},
{
"math_id": 22,
"text": "\\mu_i"
},
{
"math_id": 23,
"text": "\\sum_i \\lambda_i = n"
},
{
"math_id": 24,
"text": "\\alpha"
},
{
"math_id": 25,
"text": "v(\\alpha) \\in \\{-\\mu_1, \\ldots , -\\mu_r\\}"
},
{
"math_id": 26,
"text": "i"
},
{
"math_id": 27,
"text": "-\\mu_i"
},
{
"math_id": 28,
"text": "\\lambda_i"
},
{
"math_id": 29,
"text": "L"
},
{
"math_id": 30,
"text": "v_L"
},
{
"math_id": 31,
"text": "v_K"
},
{
"math_id": 32,
"text": "v"
},
{
"math_id": 33,
"text": "\\mu"
},
{
"math_id": 34,
"text": "\\lambda"
},
{
"math_id": 35,
"text": "\\mu = a/n"
},
{
"math_id": 36,
"text": "a"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "-\\frac{1}{n}"
},
{
"math_id": 39,
"text": "(0,1)"
},
{
"math_id": 40,
"text": " (n,0)"
},
{
"math_id": 41,
"text": "v_L(\\alpha) = -a/n."
},
{
"math_id": 42,
"text": "< n "
},
{
"math_id": 43,
"text": "v_L(\\alpha) \\in {1\\over d}\\mathbb Z"
},
{
"math_id": 44,
"text": "v_L(\\alpha) = -a/n"
},
{
"math_id": 45,
"text": "(K, v_K)"
},
{
"math_id": 46,
"text": "\\lambda_i = 1"
},
{
"math_id": 47,
"text": "v_L(\\alpha) = -\\mu_i."
},
{
"math_id": 48,
"text": "\\alpha'"
},
{
"math_id": 49,
"text": "v_L(\\alpha') = v_L(\\alpha)"
},
{
"math_id": 50,
"text": "\\sigma"
},
{
"math_id": 51,
"text": "f = A\\,f_1\\, f_2\\cdots f_r,"
},
{
"math_id": 52,
"text": "A\\in K"
},
{
"math_id": 53,
"text": "f_i\\in K[X]"
},
{
"math_id": 54,
"text": "f_i"
},
{
"math_id": 55,
"text": "-\\mu_i "
},
{
"math_id": 56,
"text": "\\deg(f_i) = \\lambda_i"
},
{
"math_id": 57,
"text": "\\mu_i = v_K(f_i(0))/\\lambda_i"
},
{
"math_id": 58,
"text": "v_K(f_i(0))"
},
{
"math_id": 59,
"text": "(X - \\alpha)"
},
{
"math_id": 60,
"text": "v_L(\\alpha) = -\\mu_i"
},
{
"math_id": 61,
"text": "f = A P_1^{k_1}P_2^{k_2}\\cdots P_s^{k_s}"
},
{
"math_id": 62,
"text": "f "
},
{
"math_id": 63,
"text": "(A\\in K)."
},
{
"math_id": 64,
"text": "P_1"
},
{
"math_id": 65,
"text": "v_L(\\sigma \\alpha) = v_L(\\alpha)"
},
{
"math_id": 66,
"text": "\\nu"
},
{
"math_id": 67,
"text": " f_i"
},
{
"math_id": 68,
"text": "k_1\\nu"
},
{
"math_id": 69,
"text": "P_1^{k_1}"
},
{
"math_id": 70,
"text": " f_i."
},
{
"math_id": 71,
"text": "g_i = f_i/P_1^{k_1}"
},
{
"math_id": 72,
"text": "\\beta "
},
{
"math_id": 73,
"text": "g_i"
},
{
"math_id": 74,
"text": "\\beta"
},
{
"math_id": 75,
"text": "P_2"
},
{
"math_id": 76,
"text": "P_2^{k_2}"
},
{
"math_id": 77,
"text": " g_i"
},
{
"math_id": 78,
"text": "f_i = P_1^{k_1}\\cdots P_m^{k_m}"
},
{
"math_id": 79,
"text": "m \\leq s"
},
{
"math_id": 80,
"text": "f = A f_1\\cdot f_2\\cdots f_r"
},
{
"math_id": 81,
"text": "\\lambda_i = \\deg(f_i)"
},
{
"math_id": 82,
"text": "(0, v_K(f_i(0))"
},
{
"math_id": 83,
"text": "(\\lambda_i, 0 = v_K(1))"
},
{
"math_id": 84,
"text": "(\\mu, \\lambda),"
},
{
"math_id": 85,
"text": " 3 x^2 y^3 - x y^2 + 2 x^2 y^2 - x^3 y = 0. "
}
]
| https://en.wikipedia.org/wiki?curid=742477 |
74263 | Frame of reference | Abstract coordinate system
<templatestyles src="Hlist/styles.css"/>
In physics and astronomy, a frame of reference (or reference frame) is an abstract coordinate system whose origin, orientation, and scale are specified by a set of reference points―geometric points whose position is identified both mathematically (with numerical coordinate values) and physically (signaled by conventional markers).
For "n" dimensions, "n" + 1 reference points are sufficient to fully define a reference frame. Using rectangular Cartesian coordinates, a reference frame may be defined with a reference point at the origin and a reference point at one unit distance along each of the "n" coordinate axes.
In Einsteinian relativity, reference frames are used to specify the relationship between a moving observer and the phenomenon under observation. In this context, the term often becomes observational frame of reference (or observational reference frame), which implies that the observer is at rest in the frame, although not necessarily located at its origin. A relativistic reference frame includes (or implies) the coordinate time, which does not equate across different reference frames moving relatively to each other. The situation thus differs from Galilean relativity, in which all possible coordinate times are essentially equivalent.
Definition.
The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in "Cartesian frame of reference". Sometimes the state of motion is emphasized, as in "rotating frame of reference". Sometimes the way it transforms to frames considered as related is emphasized as in "Galilean frame of reference". Sometimes frames are distinguished by the scale of their observations, as in "macroscopic" and "microscopic frames of reference".
In this article, the term "observational frame of reference" is used when emphasis is upon the "state of motion" rather than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a "coordinate system" may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, the formulation of many problems in physics employs "generalized coordinates", "normal modes" or "eigenvectors", which are only indirectly related to space and time. It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take observational frames of reference, coordinate systems, and observational equipment as independent concepts, separated as below:
Coordinate systems.
Although the term "coordinate system" is often used (particularly by physicists) in a nontechnical sense, the term "coordinate system" does have a precise meaning in mathematics, and sometimes that is what the physicist means as well.
A coordinate system in mathematics is a facet of geometry or of algebra, in particular, a property of manifolds (for example, in physics, configuration spaces or phase spaces). The coordinates of a point r in an "n"-dimensional space are simply an ordered set of "n" numbers:
formula_0
In a general Banach space, these numbers could be (for example) coefficients in a functional expansion like a Fourier series. In a physical problem, they could be spacetime coordinates or normal mode amplitudes. In a robot design, they could be angles of relative rotations, linear displacements, or deformations of joints. Here we will suppose these coordinates can be related to a Cartesian coordinate system by a set of functions:
formula_1
where "x", "y", "z", "etc." are the "n" Cartesian coordinates of the point. Given these functions, coordinate surfaces are defined by the relations:
formula_2
The intersection of these surfaces define coordinate lines. At any selected point, tangents to the intersecting coordinate lines at that point define a set of basis vectors {e1, e2, ..., en} at that point. That is:
formula_3
which can be normalized to be of unit length. For more detail see curvilinear coordinates.
Coordinate surfaces, coordinate lines, and basis vectors are components of a coordinate system. If the basis vectors are orthogonal at every point, the coordinate system is an orthogonal coordinate system.
An important aspect of a coordinate system is its metric tensor "gik", which determines the arc length "ds" in the coordinate system in terms of its coordinates:
formula_4
where repeated indices are summed over.
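As a small worked illustration (not part of the article), the following SymPy sketch carries out these constructions for plane polar coordinates: the columns of the Jacobian are the (unnormalized) basis vectors defined above, and their inner products give the metric tensor, reproducing the familiar arc length ds^2 = dr^2 + r^2 dθ^2.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
cart = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])   # (x, y) as functions of (r, theta)

J = cart.jacobian([r, theta])     # columns: tangent (basis) vectors e_r and e_theta
g = sp.simplify(J.T * J)          # metric tensor g_ik = e_i . e_k
print(g)                          # Matrix([[1, 0], [0, r**2]])
```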
As is apparent from these remarks, a coordinate system is a mathematical construct, part of an axiomatic system. There is no necessary connection between coordinate systems and physical motion (or any other aspect of reality). However, coordinate systems can include time as a coordinate, and can be used to describe motion. Thus, Lorentz transformations and Galilean transformations may be viewed as coordinate transformations.
Observational frame of reference.
An observational frame of reference, often referred to as a "physical frame of reference", a "frame of reference", or simply a "frame", is a physical concept related to an observer and the observer's state of motion. Here we adopt the view expressed by Kumar and Barve: an observational frame of reference is characterized "only by its state of motion". However, there is lack of unanimity on this point. In special relativity, the distinction is sometimes made between an "observer" and a "frame". According to this view, a "frame" is an "observer" plus a coordinate lattice constructed to be an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector. See Doran. This restricted view is not used here, and is not universally adopted even in discussions of relativity. In general relativity the use of general coordinate systems is common (see, for example, the Schwarzschild solution for the gravitational field outside an isolated sphere).
There are two types of observational reference frame: inertial and non-inertial. An inertial frame of reference is defined as one in which all laws of physics take on their simplest form. In special relativity these frames are related by Lorentz transformations, which are parametrized by rapidity. In Newtonian mechanics, a more restricted definition requires only that Newton's first law holds true; that is, a Newtonian inertial frame is one in which a free particle travels in a straight line at constant speed, or is at rest. These frames are related by Galilean transformations. These relativistic and Newtonian transformations are expressed in spaces of general dimension in terms of representations of the Poincaré group and of the Galilean group.
In contrast to the inertial frame, a non-inertial frame of reference is one in which fictitious forces must be invoked to explain observations. An example is an observational frame of reference centered at a point on the Earth's surface. This frame of reference orbits around the center of the Earth, which introduces the fictitious forces known as the Coriolis force, centrifugal force, and gravitational force. (All of these forces including gravity disappear in a truly inertial reference frame, which is one of free-fall.)
Measurement apparatus.
A further aspect of a frame of reference is the role of the measurement apparatus (for example, clocks and rods) attached to the frame (see Norton quote above). This question is not addressed in this article, and is of particular interest in quantum mechanics, where the relation between observer and measurement is still under discussion (see measurement problem).
In physics experiments, the frame of reference in which the laboratory measurement devices are at rest is usually referred to as the laboratory frame or simply "lab frame." An example would be the frame in which the detectors for a particle accelerator are at rest. The lab frame in some experiments is an inertial frame, but it is not required to be (for example the laboratory on the surface of the Earth in many physics experiments is not inertial). In particle physics experiments, it is often useful to transform energies and momenta of particles from the lab frame where they are measured, to the center of momentum frame "COM frame" in which calculations are sometimes simplified, since potentially all kinetic energy still present in the COM frame may be used for making new particles.
In this connection it may be noted that the clocks and rods often used to describe observers' measurement equipment in thought experiments are, in practice, replaced by a much more complicated and indirect metrology that is connected to the nature of the vacuum, and uses atomic clocks that operate according to the standard model and that must be corrected for gravitational time dilation. (See second, meter and kilogram).
In fact, Einstein felt that clocks and rods were merely expedient measuring devices and they should be replaced by more fundamental entities based upon, for example, atoms and molecules.
Generalization.
The discussion is taken beyond simple space-time coordinate systems by Brading and Castellani. Extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations of quantum field theory, classical relativistic mechanics, and quantum gravity.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{r} = [x^1,\\ x^2,\\ \\dots,\\ x^n]."
},
{
"math_id": 1,
"text": "x^j = x^j (x,\\ y,\\ z,\\ \\dots),\\quad j = 1,\\ \\dots,\\ n,"
},
{
"math_id": 2,
"text": " x^j (x, y, z, \\dots) = \\mathrm{constant},\\quad j = 1,\\ \\dots,\\ n."
},
{
"math_id": 3,
"text": "\\mathbf{e}_i(\\mathbf{r}) = \\lim_{\\epsilon \\rightarrow 0} \\frac{\\mathbf{r}\\left(x^1,\\ \\dots,\\ x^i + \\epsilon,\\ \\dots,\\ x^n\\right) - \\mathbf{r}\\left(x^1,\\ \\dots,\\ x^i,\\ \\dots ,\\ x^n\\right)}{\\epsilon},\\quad i = 1,\\ \\dots,\\ n,"
},
{
"math_id": 4,
"text": "(ds)^2 = g_{ik}\\ dx^i\\ dx^k,"
}
]
| https://en.wikipedia.org/wiki?curid=74263 |
74274244 | Audio inpainting | Audio inpainting (also known as audio interpolation) is an audio restoration task which deals with the reconstruction of missing or corrupted portions of a digital audio signal. Inpainting techniques are employed when parts of the audio have been lost due to various factors such as transmission errors, data corruption or errors during recording.
The goal of audio inpainting is to fill in the gaps (i.e., the missing portions) in the audio signal seamlessly, making the reconstructed portions indistinguishable from the original content and avoiding the introduction of audible distortions or alterations.
Many techniques have been proposed to solve the audio inpainting problem and this is usually achieved by analyzing the temporal and spectral information surrounding each missing portion of the considered audio signal.
Classic methods employ statistical models or digital signal processing algorithms to predict and synthesize the missing or damaged sections. Recent solutions, instead, take advantage of deep learning models, thanks to the growing trend of exploiting data-driven methods in the context of audio restoration.
Depending on the extent of the lost information, the inpainting task can be divided in three categories.
Short inpainting refers to the reconstruction of a few milliseconds (approximately less than 10) of missing signal, which occurs in the case of short distortions such as clicks or clipping.
In this case, the goal of the reconstruction is to recover the lost information exactly.
In long inpainting instead, with gaps in the order of hundreds of milliseconds or even seconds, this goal becomes unrealistic, since restoration techniques cannot rely on local information.
Therefore, besides providing a coherent reconstruction, the algorithms need to generate new information that has to be semantically compatible with the surrounding context (i.e., the audio signal surrounding the gaps).
The case of medium-duration gaps lies between short and long inpainting.
It refers to the reconstruction of tens of milliseconds of missing data, a scale at which the non-stationary character of audio already becomes important.
Definition.
Consider a digital audio signal formula_0. A corrupted version of formula_0, which is the audio signal presenting missing gaps to be reconstructed, can be defined as formula_1, where formula_2 is a binary mask encoding the reliable or missing samples of formula_0, and formula_3 represents the element-wise product.
Audio inpainting aims at finding formula_4 (i.e., the reconstruction), which is an estimation of formula_5. This is an ill-posed inverse problem, which is characterized by a non-unique set of solutions. For this reason, similarly to the formulation used for the inpainting problem in other domains, the reconstructed audio signal can be found through an optimization problem that is formally expressed as
formula_6.
In particular, formula_7 is the optimal reconstructed audio signal and formula_8 is a distance measure term that computes the reconstruction accuracy between the corrupted audio signal and the estimated one. For example, this term can be expressed with a mean squared error or similar metrics.
Since formula_8 is computed only on the reliable frames, there are many solutions that can minimize formula_9. It is thus necessary to add a constraint to the minimization, in order to restrict the results only to the valid solutions. This is expressed through the regularization term formula_10 that is computed on the reconstructed audio signal formula_4. This term encodes some kind of "a-priori" information on the audio data. For example, formula_10 can express assumptions on the stationarity of the signal, on the sparsity of its representation or can be learned from data.
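The notation can be made concrete with a short NumPy sketch (an illustration only; the signal, mask, and loss choice are arbitrary). It shows why the regularization term is needed: any candidate that agrees with the observation on the reliable samples makes the data-fidelity term vanish, regardless of what it puts inside the gap.

```python
import numpy as np

def masked_mse(x_hat, x_tilde, mask):
    """Data-fidelity term L: mean squared error computed only on the reliable samples."""
    m = mask.astype(bool)
    return np.mean((x_hat[m] - x_tilde[m]) ** 2)

x = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 1000))   # clean signal
mask = np.ones_like(x)
mask[400:460] = 0.0                                        # 60 missing samples
x_tilde = mask * x                                         # corrupted observation m ∘ x

x_hat = x_tilde.copy()            # a trivial candidate reconstruction (gap left at zero)
print(masked_mse(x_hat, x_tilde, mask))   # 0.0 -> only R(x_hat) can discriminate such candidates
```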
Techniques.
There exist various techniques to perform audio inpainting. These can vary significantly, influenced by factors such as the specific application requirements, the length of the gaps and the available data. In the literature, these techniques are broadly divided into model-based techniques (sometimes also referred to as signal processing techniques) and data-driven techniques.
Model-based techniques.
Model-based techniques involve the exploitation of mathematical models or assumptions about the underlying structure of the audio signal. These models can be based on prior knowledge of the audio content or statistical properties observed in the data. By leveraging these models, missing or corrupted portions of the audio signal can be inferred or estimated.
An example of a model-based techniques are autoregressive models. These methods interpolate or extrapolate the missing samples based on the neighboring values, by using mathematical functions to approximate the missing data. In particular, in autoregressive models the missing samples are completed through linear prediction. The autoregressive coefficients necessary for this prediction are learned from the surrounding audio data, specifically from the data adjacent to each gap.
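A bare-bones version of this idea is sketched below (an illustration, not the algorithm of any specific paper): the autoregressive coefficients are fitted by least squares on the samples preceding the gap and then used to extrapolate forward into it. In practice, forward and backward predictions are often combined and cross-faded across the gap.

```python
import numpy as np

def ar_fill_forward(signal, gap_start, gap_len, order=32):
    """Fill a gap by forward linear prediction with AR coefficients fitted on the preceding context."""
    context = signal[:gap_start]
    # Least-squares fit of x[n] ≈ a_1 x[n-1] + ... + a_order x[n-order] on the context.
    rows = np.array([context[n - order:n][::-1] for n in range(order, len(context))])
    targets = context[order:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)

    out = signal.copy()
    for n in range(gap_start, gap_start + gap_len):
        out[n] = a @ out[n - order:n][::-1]   # predict each missing sample from the previous ones
    return out
```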
Some more recent techniques approach audio inpainting by representing audio signals as sparse linear combinations of a limited number of basis functions (as for example in the Short Time Fourier Transform). In this context, the aim is to find the sparse representation of the missing section of the signal that most accurately matches the surrounding, unaffected signal.
The aforementioned methods exhibit optimal performance when applied to filling in relatively short gaps, lasting only a few tens of milliseconds, and thus they can be included in the context of short inpainting. However, these signal-processing techniques tend to struggle when dealing with longer gaps. The reason behind this limitation lies in the violation of the stationarity condition, as the signal often undergoes significant changes after the gap, making it substantially different from the signal preceding the gap.
As a way to overcome these limitations, some approaches add strong assumptions also about the fundamental structure of the gap itself, exploiting sinusoidal modeling or similarity graphs to perform inpainting of longer missing portions of audio signals.
Data-driven techniques.
Data-driven techniques rely on the analysis and exploitation of the available audio data. These techniques often employ deep learning algorithms that learn patterns and relationships directly from the provided data. They involve training models on large datasets of audio examples, allowing them to capture the statistical regularities present in the audio signals. Once trained, these models can be used to generate missing portions of the audio signal based on the learned representations, without being restricted by stationarity assumptions.
Data-driven techniques also offer the advantage of adaptability and flexibility, as they can learn from diverse audio datasets and potentially handle complex inpainting scenarios.
As of today, such techniques constitute the state-of-the-art of audio inpainting, being able to reconstruct gaps of hundreds of milliseconds or even seconds. These performances are made possible by the use of generative models that have the capability to generate novel content to fill in the missing portions. For example, generative adversarial networks, which are the state-of-the-art of generative models in many areas, rely on two competing neural networks trained simultaneously in a two-player minimax game: the generator produces new data from samples of a random variable, while the discriminator attempts to distinguish between generated and real data.
During the training, the generator's objective is to "fool" the discriminator, while the discriminator attempts to learn to better classify real and fake data.
In GAN-based inpainting methods the generator acts as a context encoder and produces a plausible completion for the gap given only the available information surrounding it. The discriminator is used to train the generator and tests the consistency of the produced inpainted audio.
Recently, also diffusion models have established themselves as the state-of-the-art of generative models in many fields, often beating even GAN-based solutions. For this reason they have also been used to solve the audio inpainting problem, obtaining valid results. These models generate new data instances by inverting the diffusion process, where data samples are progressively transformed into Gaussian noise.
One drawback of generative models is that they typically need a huge amount of training data. This is necessary to make the network generalize well and make it able to produce coherent audio information that also presents some kind of structural complexity.
Nonetheless, some works demonstrated that capturing the essence of an audio signal is also possible using only a few tens of seconds from a single training sample. This is done by overfitting a generative neural network to a single training audio signal. In this way, researchers were able to perform audio inpainting without exploiting large datasets.
Applications.
Audio inpainting finds applications in a wide range of fields, including audio restoration and audio forensics, among others. In these fields, audio inpainting can be used to eliminate noise, glitches, or undesired distortions from an audio recording, thus enhancing its quality and intelligibility. It can also be employed to recover deteriorated old recordings that have been affected by local modifications or have missing audio samples due to scratches on CDs.
Audio inpainting is also closely related to packet loss concealment (PLC). In the PLC problem, it is necessary to compensate for the loss of audio packets in communication networks. While both problems aim at filling missing gaps in an audio signal, PLC has stricter computation-time restrictions and only the packets preceding a gap are considered to be reliable (the process is said to be causal).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{x}"
},
{
"math_id": 1,
"text": "\\mathbf{\\tilde{x}} = \\mathbf{m} \\circ \\mathbf{x}"
},
{
"math_id": 2,
"text": "\\mathbf{m}"
},
{
"math_id": 3,
"text": "\\circ"
},
{
"math_id": 4,
"text": "\\mathbf{\\hat{x}}"
},
{
"math_id": 5,
"text": "\\mathbf{x} "
},
{
"math_id": 6,
"text": "\\mathbf{\\hat{x}}^* = \\underset{\\hat{\\mathbf{X}}}{\\text{argmin}} ~ L(\\mathbf{m} \\circ\\mathbf{\\hat{x}}, \\mathbf{\\tilde{x}}) + R(\\mathbf{\\hat{x}})"
},
{
"math_id": 7,
"text": "\\mathbf{\\hat{x}}^*"
},
{
"math_id": 8,
"text": "L"
},
{
"math_id": 9,
"text": "L(\\mathbf{m} \\circ\\mathbf{\\hat{x}}, \\mathbf{\\tilde{x}})"
},
{
"math_id": 10,
"text": "R"
}
]
| https://en.wikipedia.org/wiki?curid=74274244 |
74274540 | Chambolle-Pock algorithm | Primal-Dual algorithm optimization for convex problems
In mathematics, the Chambolle-Pock algorithm is an algorithm used to solve convex optimization problems. It was introduced by Antonin Chambolle and Thomas Pock in 2011 and has since become a widely used method in various fields, including image processing, computer vision, and signal processing.
The Chambolle-Pock algorithm is specifically designed to efficiently solve convex optimization problems that involve the minimization of a non-smooth cost function composed of a data fidelity term and a regularization term. This is a typical configuration that commonly arises in ill-posed imaging inverse problems such as image reconstruction, denoising and inpainting.
The algorithm is based on a primal-dual formulation, which allows for simultaneous updates of the primal and dual variables. By employing the proximal operator, the Chambolle-Pock algorithm efficiently handles non-smooth convex regularization terms, such as the total variation, which are typical of imaging problems.
Problem statement.
Let formula_0 be two real vector spaces equipped with an inner product formula_1 and a norm formula_2. From now on, a function formula_3 is called "simple" if its proximal operator formula_4 has a closed-form representation or can be accurately computed, for formula_5, where formula_4 is defined as
formula_6
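As a concrete illustration, two functions that are simple in this sense are the scaled 1-norm (whose proximal operator is componentwise soft-thresholding) and a squared 2-norm data-fidelity term. The short NumPy sketch below evaluates both closed forms; the function names are arbitrary.

import numpy as np

def prox_l1(x, tau, lam=1.0):
    # prox of tau * lam * ||.||_1 : componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)

def prox_sq_l2(x, tau, g, lam=1.0):
    # prox of tau * (lam/2) * ||. - g||^2 : a weighted average of x and the data g
    return (x + tau * lam * g) / (1.0 + tau * lam)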
Consider the following constrained primal problem:
formula_7
where formula_8 is a bounded linear operator, formula_9 are convex, lower semicontinuous and simple.
The minimization problem has its dual corresponding problem as
formula_10
where formula_11 and formula_12 are the dual map of formula_13 and formula_14, respectively.
Assume that the primal and the dual problems have at least one solution formula_15, meaning that it satisfies
formula_16
where formula_17 and formula_18 are the subdifferentials of the convex functions formula_19 and formula_20, respectively.
The Chambolle-Pock algorithm solves the so-called saddle-point problem
formula_21
which is a primal-dual formulation of the nonlinear primal and dual problems stated before.
Algorithm.
The Chambolle-Pock algorithm primarily involves iteratively alternating between ascending in the dual variable formula_22 and descending in the primal variable formula_23 using a gradient-like approach, with step sizes formula_24 and formula_25 respectively, in order to simultaneously solve the primal and the dual problem. Furthermore, an over-relaxation technique is employed for the primal variable with the parameter formula_26.
Algorithm Chambolle-Pock algorithm
Input: formula_27 and set formula_28, "stopping criterion".
formula_29
do while "stopping criterion" not satisfied
formula_30
formula_31
formula_32
formula_33
end do
Chambolle and Pock proved that the algorithm converges if formula_34 and formula_35, sequentially and with formula_36 as the rate of convergence for the primal-dual gap. This has been extended by S. Banert et al. to hold whenever formula_37 and formula_38.
The semi-implicit Arrow-Hurwicz method coincides with the particular choice of formula_39 in the Chambolle-Pock algorithm.
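A direct transcription of the iteration above into Python with NumPy could look as follows. The operator formula_57, its adjoint and the two proximal operators are passed in as callables, and the step sizes are assumed to satisfy the convergence condition; this is only a sketch of the plain (non-accelerated, non-preconditioned) method.

import numpy as np

def chambolle_pock(K, K_adj, prox_tau_G, prox_sigma_Fstar, x0, y0,
                   tau, sigma, theta=1.0, n_iter=200):
    # K, K_adj: callables implementing the linear operator and its adjoint
    # prox_tau_G, prox_sigma_Fstar: callables evaluating prox_{tau G} and prox_{sigma F*}
    # choose tau, sigma such that tau * sigma * ||K||^2 <= 1
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_sigma_Fstar(y + sigma * K(x_bar))    # dual ascent step
        x_new = prox_tau_G(x - tau * K_adj(y))        # primal descent step
        x_bar = x_new + theta * (x_new - x)           # over-relaxation of the primal variable
        x = x_new
    return x, y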
Acceleration.
There are special cases in which the rate of convergence has a theoretical speed-up. In fact, if formula_40, respectively formula_19, is uniformly convex, then formula_41, respectively formula_42, has a Lipschitz continuous gradient. In this case, the rate of convergence can be improved to formula_43, provided that the Chambolle-Pock algorithm is slightly modified. This leads to an accelerated version of the method, which consists in choosing formula_44, and also formula_45, iteratively at each step instead of fixing these values.
If formula_20 is uniformly convex, with formula_46 its uniform-convexity constant, the modified algorithm becomes
Algorithm Accelerated Chambolle-Pock algorithm
Input: formula_47 such that formula_48 and set formula_49, "stopping criterion".
formula_29
do while "stopping criterion" not satisfied
formula_50
formula_51
formula_52
formula_53
formula_54
formula_55
formula_33
end do
Moreover, the convergence of the algorithm slows down when formula_56, the norm of the operator formula_57, cannot be estimated easily or might be very large. By choosing proper preconditioners formula_58 and formula_59 and modifying the proximal operator through the norm induced by the operators formula_58 and formula_59, the convergence of the resulting preconditioned algorithm can still be ensured.
Application.
A typical application of this algorithm is in the image denoising framework, based on total variation. It operates on the concept that signals containing excessive and potentially erroneous details exhibit a high total variation, which is the integral of the absolute value of the gradient of the image. By adhering to this principle, the process aims to decrease the total variation of the signal while maintaining its similarity to the original signal, effectively eliminating unwanted details while preserving crucial features like edges. In the classical bi-dimensional discrete setting, consider formula_60, where an element formula_61 represents an image with the pixel values arranged on a Cartesian grid formula_62.
Define the inner product on formula_63 as
formula_64
that induces an formula_65 norm on formula_63, denoted as formula_66.
Hence, the gradient of formula_67 is computed with the standard finite differences,
formula_68
which is an element of the space formula_69, where
formula_70
On formula_71 is defined an formula_72 based norm as
formula_73
Then, the primal problem of the ROF model, proposed by Rudin, Osher, and Fatemi, is given by
formula_74
where formula_75 is the unknown solution, formula_76 is the given noisy data, and formula_77 describes the trade-off between regularization and data fitting.
The primal-dual formulation of the ROF problem reads as follows
formula_78
where the indicator function is defined as
formula_79
on the convex set formula_80 which can be seen as formula_81 unitary balls with respect to the defined norm on formula_82.
Observe that the functions involved in the stated primal-dual formulation are simple, since their proximal operators can be easily computed: formula_83
The image total-variation denoising problem can also be treated with other algorithms such as the alternating direction method of multipliers (ADMM), the projected (sub)gradient method, or fast iterative shrinkage-thresholding.
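Putting the pieces together, a possible NumPy implementation of the Chambolle-Pock iteration for the discrete ROF model could look as follows. The forward-difference gradient and its negative adjoint (the discrete divergence) follow the definitions above with grid spacing h = 1, the squared operator norm of the discrete gradient is bounded by 8 (so the chosen step sizes satisfy the convergence condition), and the value of the trade-off parameter is an arbitrary illustrative choice.

import numpy as np

def grad(u):
    # forward differences with zero last row/column, h = 1
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad, so that <grad u, p> = -<u, div p>
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_denoise(g, lam=8.0, tau=0.25, sigma=0.25, n_iter=200):
    # tau * sigma * 8 = 0.5 <= 1, so the step sizes are admissible
    u, u_bar = g.copy(), g.copy()
    px, py = np.zeros_like(g), np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))   # projection onto the unit balls
        px, py = px / norm, py / norm
        u_new = (u + tau * div(px, py) + tau * lam * g) / (1.0 + tau * lam)
        u_bar = 2.0 * u_new - u                               # over-relaxation with theta = 1
        u = u_new
    return u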
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathcal{X}, \\mathcal{Y} "
},
{
"math_id": 1,
"text": " \\langle \\cdot, \\cdot \\rangle "
},
{
"math_id": 2,
"text": " \\lVert \\,\\cdot \\,\\rVert = \\langle \\cdot, \\cdot \\rangle^{\\frac{1}{2}} "
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": " \\text{prox}_{\\tau F} "
},
{
"math_id": 5,
"text": "\\tau >0"
},
{
"math_id": 6,
"text": " x = \\text{prox}_{\\tau F}(\\tilde{x}) = \\text{arg } \\min_{x'\\in \\mathcal{X}}\\left\\{\n\\frac{\\lVert x'-\\tilde{x}\\rVert^2}{2\\tau} + F(x')\n\\right\\}"
},
{
"math_id": 7,
"text": " \\min_{x\\in\\mathcal{X}} F(Kx) + G(x) "
},
{
"math_id": 8,
"text": "K:\\mathcal{X} \\rightarrow \\mathcal{Y}"
},
{
"math_id": 9,
"text": "F:\\mathcal{Y} \\rightarrow [0, +\\infty), G:\\mathcal{X} \\rightarrow [0, +\\infty) "
},
{
"math_id": 10,
"text": " \\max_{y\\in\\mathcal{Y}} -\\left(G^*(-K^*y) + F^*(y)\\right)\n"
},
{
"math_id": 11,
"text": "F^*, G^*"
},
{
"math_id": 12,
"text": " K^*"
},
{
"math_id": 13,
"text": " F, G "
},
{
"math_id": 14,
"text": " K "
},
{
"math_id": 15,
"text": " (\\hat{x}, \\hat{y}) \\in \\mathcal{X}\\times \\mathcal{Y} "
},
{
"math_id": 16,
"text": "\n\\begin{align}\n K\\hat{x} &\\in \\partial F^*(\\hat{y})\\\\\n -(K^*\\hat{y}) &\\in \\partial G(\\hat{x})\n \\end{align}\n"
},
{
"math_id": 17,
"text": " \\partial F^* "
},
{
"math_id": 18,
"text": " \\partial G"
},
{
"math_id": 19,
"text": " F^* "
},
{
"math_id": 20,
"text": " G "
},
{
"math_id": 21,
"text": " \\min_{x\\in\\mathcal{X}} \\max_{y\\in\\mathcal{Y}} \\langle Kx, y \\rangle + G(x) - F^*(y) \n"
},
{
"math_id": 22,
"text": " y "
},
{
"math_id": 23,
"text": " x "
},
{
"math_id": 24,
"text": "\\sigma"
},
{
"math_id": 25,
"text": "\\tau"
},
{
"math_id": 26,
"text": "\\theta"
},
{
"math_id": 27,
"text": " F, G, K, \\tau, \\sigma >0, \\, \\theta \\in[0,1],\\, (x^0,y^0)\\in\\mathcal{X}\\times\\mathcal{Y}"
},
{
"math_id": 28,
"text": " \\overline{x}^0 = x^0"
},
{
"math_id": 29,
"text": " k \\leftarrow 0 "
},
{
"math_id": 30,
"text": " y^{n+1} \\leftarrow \\text{prox}_{\\sigma F^*}\\left(y^{n} + \\sigma K\\overline{x}^{n}\\right) "
},
{
"math_id": 31,
"text": " x^{n+1} \\leftarrow \\text{prox}_{\\tau G}\\left(x^{n} - \\tau K^*y^{n+1}\\right) "
},
{
"math_id": 32,
"text": " \\overline{x}^{n+1} \\leftarrow x^{n+1} + \\theta\\left( x^{n+1} -x^{n}\\right)"
},
{
"math_id": 33,
"text": "k\\leftarrow k+1"
},
{
"math_id": 34,
"text": "\\theta = 1"
},
{
"math_id": 35,
"text": "\\tau \\sigma \\lVert K \\rVert^2 \\leq 1"
},
{
"math_id": 36,
"text": "\\mathcal{O}(1/N)"
},
{
"math_id": 37,
"text": "\\theta>1/2"
},
{
"math_id": 38,
"text": "\\tau \\sigma \\lVert K \\rVert^2 < 4 / (1+2\\theta)"
},
{
"math_id": 39,
"text": "\\theta = 0"
},
{
"math_id": 40,
"text": "G "
},
{
"math_id": 41,
"text": " G^* "
},
{
"math_id": 42,
"text": " F "
},
{
"math_id": 43,
"text": " \\mathcal{O}(1/N^2)"
},
{
"math_id": 44,
"text": " \\tau_n, \\sigma_n"
},
{
"math_id": 45,
"text": " \\theta_n"
},
{
"math_id": 46,
"text": " \\gamma>0 "
},
{
"math_id": 47,
"text": " F, G, \\tau_0, \\sigma_0 >0"
},
{
"math_id": 48,
"text": " \\tau_0\\sigma_0 L^2 \\leq 1,\\, (x^0,y^0)\\in\\mathcal{X}\\times\\mathcal{Y}"
},
{
"math_id": 49,
"text": " \\overline{x}^0 = x^0."
},
{
"math_id": 50,
"text": " y^{n+1} \\leftarrow \\text{prox}_{\\sigma_n F^*}\\left(y^{n} + \\sigma_n K\\overline{x}^{n}\\right) "
},
{
"math_id": 51,
"text": " x^{n+1} \\leftarrow \\text{prox}_{\\tau_n G}\\left(x^{n} - \\tau_n K^*y^{n+1}\\right) "
},
{
"math_id": 52,
"text": " \\theta_n \\leftarrow \\frac{1}{\\sqrt{1+2\\gamma \\tau_n}}"
},
{
"math_id": 53,
"text": " \\tau_{n+1} \\leftarrow \\theta_n \\tau_n"
},
{
"math_id": 54,
"text": " \\sigma_{n+1} \\leftarrow \\frac{\\sigma_n}{\\theta_n}"
},
{
"math_id": 55,
"text": " \\overline{x}^{n+1} \\leftarrow x^{n+1} + \\theta_n\\left( x^{n+1} -x^{n}\\right)"
},
{
"math_id": 56,
"text": "L"
},
{
"math_id": 57,
"text": "K"
},
{
"math_id": 58,
"text": "T"
},
{
"math_id": 59,
"text": "\\Sigma"
},
{
"math_id": 60,
"text": "\\mathcal{X} = \\mathbb{R}^{NM}"
},
{
"math_id": 61,
"text": " u\\in\\mathcal{X} "
},
{
"math_id": 62,
"text": "N\\times M"
},
{
"math_id": 63,
"text": " \\mathcal{X} "
},
{
"math_id": 64,
"text": "\n\\langle u, v\\rangle_{\\mathcal{X}} = \\sum_{i,j} u_{i,j}v_{i,j},\\quad u,v \\in \\mathcal{X}\n"
},
{
"math_id": 65,
"text": " L^2"
},
{
"math_id": 66,
"text": " \\lVert \\, \\cdot \\, \\rVert_2 "
},
{
"math_id": 67,
"text": " u "
},
{
"math_id": 68,
"text": "\\left(\\nabla u \\right)_{i,j} = \\left(\n\\begin{aligned}\n\\left(\\nabla u \\right)^1_{i,j}\\\\\n\\left(\\nabla u \\right)^2_{i,j}\n\\end{aligned}\n\\right)"
},
{
"math_id": 69,
"text": " \\mathcal{Y}=\\mathcal{X}\\times \\mathcal{X} "
},
{
"math_id": 70,
"text": "\\begin{align}\n&\n\\left(\n\\nabla u\n\\right)_{i,j}^1 = \\left\\{\n\\begin{aligned}\n&\\frac{u_{i+1,j}-u_{i,j}}{h} &\\text{ if } i<M\\\\\n&0 &\\text{ if } i=M\n\\end{aligned}\n\\right.\n,\\\\\n&\n\\left(\n\\nabla u\n\\right)_{i,j}^2 = \\left\\{\n\\begin{aligned}\n&\\frac{u_{i,j+1}-u_{i,j}}{h} &\\text{ if } j<N\\\\\n&0 &\\text{ if } j=N\n\\end{aligned}\n\\right.\n\\end{align}"
},
{
"math_id": 71,
"text": "\\mathcal{Y}"
},
{
"math_id": 72,
"text": " L^1-"
},
{
"math_id": 73,
"text": "\n\\lVert p \\rVert_1 = \\sum_{i,j} \\sqrt{\\left(p_{i,j}^1\\right)^2 + \\left(p_{i,j}^2\\right)^2}, \\quad p\\in \\mathcal{Y}.\n"
},
{
"math_id": 74,
"text": " \nh^2 \\min_{u\\in \\mathcal{X}} \\lVert \\nabla u \\rVert_1 + \\frac{\\lambda}{2} \\lVert u-g\\rVert^2_2\n"
},
{
"math_id": 75,
"text": " u \\in \\mathcal{X}"
},
{
"math_id": 76,
"text": " g \\in \\mathcal{X}"
},
{
"math_id": 77,
"text": " \\lambda "
},
{
"math_id": 78,
"text": " \n\\min_{u\\in \\mathcal{X}}\\max_{p\\in \\mathcal{Y}} -\\langle u, \\text{div}\\, p\\rangle_{\\mathcal{X}} + \\frac{\\lambda}{2} \\lVert u-g\\rVert^2_2 - \\delta_P(p)\n"
},
{
"math_id": 79,
"text": " \n\\delta_P(p) = \\left\\{\n\\begin{aligned}\n&0, & \\text{if } p \\in P\\\\\n&+\\infty,& \\text{if } p \\notin P\n\\end{aligned}\n\\right.\n"
},
{
"math_id": 80,
"text": " P = \\left\\{\np\\in \\mathcal{Y}\\, : \\, \\max_{i,j}\\sqrt{\\left(p_{i,j}^1\\right)^2 + \\left(p_{i,j}^2\\right)^2} \\leq 1\n\\right\\}, "
},
{
"math_id": 81,
"text": " L^\\infty "
},
{
"math_id": 82,
"text": " \\mathcal{Y}"
},
{
"math_id": 83,
"text": "\n\\begin{align}\np &= \\text{prox}_{\\sigma F^*}(\\tilde{p}) &\\iff p_{i,j} &= \\frac{\\tilde{p}_{i,j}}{\\max\\{1,| \\tilde{p}_{i,j}| \\}}\\\\\nu &= \\text{prox}_{\\tau G}(\\tilde{u}) &\\iff u_{i,j} &= \\frac{\n\\tilde{u}_{i,j}+\\tau\\lambda g_{i,j}}{1+\\tau \\lambda}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=74274540 |
74274621 | Dattorro industry scheme | Delay-based audio effects system
The Dattorro industry scheme is a digital system used to implement a wide range of delay-based audio effects for digital signals. It was proposed by Jon Dattorro. The common nature of these effects allows the output signal to be produced as a linear combination of (dynamically modulated) delayed replicas of the input signal. The proposed scheme implements such effects in a compact form, using only a set of three parameters to control the type of effect.
The Dattorro industry scheme is based on digital delay lines and to ensure a proper resolution in the time domain, it leverages fractional delay lines, thus avoiding discontinuities.
The effects that this scheme is able to produce are: echo, chorus, vibrato, flanger, doubling, white chorus. These effects are characterized by a nominal delay, a modulating function for the delay and the depth of modulation.
Delay line interpolation.
Consider a continuous-time signal formula_0. The formula_1-delayed version of such a signal is formula_2. Considering the signal in the discrete-time domain (i.e., sampling it with sampling frequency formula_3 at formula_4), we obtain formula_5. The delay line can then be described in terms of the Z-transform of the discrete-time signal as
formula_6.
Going back to the time domain by means of the inverse Z-transform, this corresponds to formula_7, where formula_8 is the Dirac delta. The former equation holds for formula_9, but for the implementation we need a fractional delay, meaning formula_10. In this case, interpolation is required to reconstruct the value of the signal that lies between two samples. One can resort to the preferred interpolation technique, e.g., Lagrange interpolation or all-pass interpolation.
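As an illustration, the following short Python function implements a fractional delay line using simple linear interpolation between the two neighbouring samples; a higher-order Lagrange or all-pass interpolator would replace the two-tap weighting with more taps. The function name and the zero value assumed outside the signal borders are implementation choices, not part of the original scheme.

import numpy as np

def fractional_delay(x, D):
    # delay the signal x by D samples, where D = M + d with integer M and 0 <= d < 1
    M = int(np.floor(D))
    d = D - M
    y = np.zeros(len(x))
    for n in range(len(x)):
        i0, i1 = n - M, n - M - 1                      # the two samples surrounding x[n - D]
        s0 = x[i0] if 0 <= i0 < len(x) else 0.0
        s1 = x[i1] if 0 <= i1 < len(x) else 0.0
        y[n] = (1.0 - d) * s0 + d * s1                 # linear interpolation
    return y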
Block diagram.
The output of the delay block formula_11 is rarely taken from the last sample of the delay line; instead, the tap point changes dynamically. Considering the end-to-end structure, we can write the filter as:
Dattorro scheme
formula_12
Setting the coefficients according to the desired effect results in a change of the filter topology. For example, setting the feedback to 0 results in a FIR filter, whereas, if the coefficients are set to be equal, we approximate an all-pass filter.
Effects.
All the effects can be obtained by changing the parameters of the Dattorro system and by setting the delay ranges according to the following table. Delay values are expressed in milliseconds.
Vibrato.
Vibrato is a small quasi-periodic change in pitch of a tone. It's more of a technique than an effect per se but can be added to any audio signal. The delay is modulated with a low frequency sinusoidal function and no mix of the direct path of the signal is considered.
Chorus.
The chorus is an effect which tries to emulate multiple independent voices playing in unison. This effect is made as a linear combination of the input signal (dry signal) and a dynamically delayed version of the input (wet signal).
Flanging.
The flanging effect originated with tape machines. This effect was created by mixing two tape machines set to play the same track but one of them is slowed down. This produces a lowering in pitch and a delay of the slow track. The process is then repeated with the other track reabsorbing the accumulated delay.
This effect is very similar to chorus and the main difference is due to the delay range. Chorus usually has longer delay, larger depth and lower modulating frequency.
White chorus.
White chorus is a modification to the standard chorus effect aimed at reducing the aberrations introduced by the forward path. The change consists in adding a negative feedback path with a different and fixed tap point in order to obtain an approximation of an all-pass configuration.
Doubling.
When the system is used without feedback we achieve doubling. This effect is analogous to that of the Leslie speaker, a particular kind of speaker consisting of a rotating chamber in front of the bass loudspeaker and rotating cones above the treble loudspeakers.
Pseudocode.
The filter can be implemented in software or hardware. In the following is the pseudocode for a software Dattorro system.
function dattorro is
input: "x", # input signal
"Fs", # sampling frequency
"depth," # modulation depth
"freq," # LFO frequency
"b", # blend knob
"ff", # feedforward knob
"fb", # feedback knob
"mod" # modulation type
output: "y" # signal with FX applied
"depth_samples = depth·Fs" # compute delay in samples
if "mod" is sinusoid # compute delay sequence
"delay_seq = depth_samples*(1 + sin(2·formula_13·freq*t))"
else "mod" is noise
lowpass_noise = noise_gen() # generate white noise and low-pass filter it
"delay_seq = depth_samples + depth_samples·lowpass_noise"
for each "n" of the "N" samples
"d_int = floor(delay_seq(n))" # integer delay
"d_frac = delay_seq(n) - d_int # fractional delay"
"h0 = (d_frac - 1)·(d_frac - 2)/2" # first FIR filter coefficient
"h1 = d_frac·(2 - d_frac)" # second FIR filter coefficient
"h2 = d_frac/2·(d_frac - 1)" # third FIR filter coefficient
"x_d = delay_line(d_int + 1)·h0 + delay_line(d_int + 2)·h1 + delay_line(d_int + 3)·h2" # delay input
"f_b = fb·x_d" # delayed input feedback
"delay_line(2:L) = delay_line(1:L-1)" # delay line update
"delay_line(1) = x(i) - fb_comp"
"f_f = ff·x_d" # feedforward component
"blend_comp = b·delay_line(1)" # blend component
"y(n) = ff_comp + blend_comp" # compute output sample
return "y"
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " y(t) "
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "y(t) = x(t-\\tau)"
},
{
"math_id": 3,
"text": "F_s"
},
{
"math_id": 4,
"text": "n = tF_s "
},
{
"math_id": 5,
"text": "y[n] = x[n-M] "
},
{
"math_id": 6,
"text": "Y(z) = \\mathcal Z \\{y \\}(n) = z^{-M}X(z) = H_M(z) X(z) "
},
{
"math_id": 7,
"text": "h_m[n] = \\delta[n-M] "
},
{
"math_id": 8,
"text": "\\delta[\\cdot]"
},
{
"math_id": 9,
"text": "D\\in\\mathbb N"
},
{
"math_id": 10,
"text": "D =\\lfloor D\\rfloor + (D-\\lfloor D\\rfloor) = M+d,\\quad d\\in\\mathbb R, \\ 0<d<1"
},
{
"math_id": 11,
"text": " H_{M}(z) = z^{-M}"
},
{
"math_id": 12,
"text": "Y(z) = H(z)X(z) = \\frac{b+f_f z^{-D (t) } } {1+z^{-D(t)} f_b }X(z)"
},
{
"math_id": 13,
"text": "\\pi "
},
{
"math_id": 14,
"text": "H^{\\infty}"
}
]
| https://en.wikipedia.org/wiki?curid=74274621 |
74274721 | UWB ranging | Wireless positioning technology
Ultra-wideband impulse radio ranging (or UWB-IR ranging) is a wireless positioning technology based on the IEEE 802.15.4z standard, a wireless communication protocol introduced by the IEEE for systems operating in unlicensed spectrum and equipped with extremely large bandwidth transceivers. UWB enables very accurate ranging (in the order of centimeters) without introducing significant interference with narrowband systems. To achieve these stringent requirements, UWB-IR systems exploit the available bandwidth (which exceeds 500 MHz for systems compliant with the IEEE 802.15.4z protocol), which guarantees very accurate timing (and thus ranging) and robustness against multipath, especially in indoor environments. The available bandwidth also enables UWB systems to spread the signal power over a large spectrum (this technique is thus called spread spectrum), avoiding narrowband interference.
Protocol.
UWB-IR relies on the low-power transmission of specific sequences of short-duration pulses. The transmit power is limited according to FCC regulations, in order to reduce interference and power consumption. The bands supported by the standard are the following ones:
The primary time division in UWB systems is structured in frames. Each frame is composed of the concatenation of two sequences: the synchronization header (SHR) preamble and the PHY protocol data unit (PPDU).
The further time subdivisions of the preamble and the PPDU are organized in different ways. For localization purposes, only the preamble is employed (and described in detail later on), since it is specifically designed to perform accurate synchronization at receiver side.
The SHR sequence is composed of the concatenation of two other subsequences: the synchronization (SYNC) field, consisting of formula_0 repetitions of a known preamble symbol, and the start-of-frame delimiter (SFD), consisting of formula_1 symbols.
SHR waveform.
The transmitted SHR waveform (baseband equivalent) can be modeled as follows
formula_6
where formula_7 are the ternary code symbols of the preamble, formula_8 is the transmitted pulse shape, formula_9 is the number of chips per preamble symbol, formula_2 is the number of chips separating consecutive pulses, and formula_5 is the chip period.
The received SHR waveform can instead be described as
formula_10
where formula_11 and formula_12 are the amplitude and the propagation delay of the formula_13 path, respectively, and formula_14 is the additive noise term.
In order to associate the propagation delay with a distance, there must exist a LoS path between transmitter and receiver or, alternatively, a detailed map of the environment has to be known in order to perform localization based on the reflected rays.
In the presence of multipath, the large bandwidth is of paramount importance to distinguish all the replicas, which would otherwise significantly overlap at the receiver side, especially in indoor environments.
Ranging.
The propagation delay can be estimated through several algorithms, usually based on finding the peak of the cross-correlation between the received signal and the transmitted SHR waveform. Commonly used algorithms are maximum correlation and maximum likelihood.
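As a sketch of the correlation-based approach, the following Python snippet estimates the delay of a received waveform by locating the peak of its cross-correlation with the known template. Real receivers operate on noisy, multipath-affected signals and typically refine this coarse estimate, so this only illustrates the basic principle; the sampling frequency and signal names are placeholders.

import numpy as np

def estimate_delay(rx, ref, fs):
    # cross-correlate the received signal with the known SHR template
    corr = np.correlate(rx, ref, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(ref) - 1)   # lag (in samples) of the correlation peak
    return lag / fs                                        # estimated delay in seconds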
There are two methods to estimate the mutual distance between the transceivers. The first one is based on the time of arrival (TOA) and it is called one-way ranging. It requires a priori synchronization between the anchors and it consists in estimating the delay and computing the range as
formula_15
where formula_16 refers to the LoS path estimated delay.
The second method is based on the round-trip time (RTT) and it is called two-way ranging. It consists in the following procedure: the first anchor transmits a packet and stores the transmission instant; the second anchor receives it and sends a reply after a fixed, known response time formula_17; the first anchor then measures the overall elapsed time upon reception of the reply.
In this second case the distance between the 2 anchors can be computed as
formula_18
Also in this case formula_16 refers to the LoS path estimated delay.
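A tiny numerical example of the two-way ranging formula, with purely hypothetical values for the estimated round-trip delay and the response time:

c = 3e8                           # speed of light [m/s]
tau_hat = 66.7e-9                 # estimated round-trip delay [s] (hypothetical value)
T = 20e-9                         # known response time of the replying anchor [s] (hypothetical value)
r_hat = 0.5 * c * (tau_hat - T)   # estimated distance, roughly 7 m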
Pros and cons.
Performing ranging through UWB presents several advantages:
However, there are also some disadvantages related to UWB systems:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N_{\\mathrm{sync}} \\in \\{16,64,1024,4096\\}"
},
{
"math_id": 1,
"text": "N_{\\mathrm{sfd}} \\in \\{8,64\\}"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "N_{\\mathrm{cps}} \\in \\{ 496,508,1984 \\}"
},
{
"math_id": 4,
"text": "f_c"
},
{
"math_id": 5,
"text": "T_c"
},
{
"math_id": 6,
"text": "x(t) = \\sum_{k,n} c_{kn} \\cdot p \\big( t - n L T_c - k N_{\\mathrm{cps}} T_c \\big)"
},
{
"math_id": 7,
"text": "c_{kn} \\in \\{ 0, \\pm1 \\}"
},
{
"math_id": 8,
"text": "p (t)"
},
{
"math_id": 9,
"text": "N_{\\mathrm{cps}}"
},
{
"math_id": 10,
"text": "y(t) = \\sum_{k,n,\\ell} \\alpha_\\ell \\cdot c_{kn} \\cdot p \\big( t - n L T_c - k N_{\\mathrm{cps}} T_c -\\tau_\\ell \\big) + w(t)"
},
{
"math_id": 11,
"text": "\\alpha_\\ell"
},
{
"math_id": 12,
"text": "\\tau_\\ell"
},
{
"math_id": 13,
"text": "\\ell^{th}"
},
{
"math_id": 14,
"text": "w(t)"
},
{
"math_id": 15,
"text": "\\hat{r} = c\\cdot \\hat{\\tau}"
},
{
"math_id": 16,
"text": "\\hat{\\tau}"
},
{
"math_id": 17,
"text": "T"
},
{
"math_id": 18,
"text": "\\hat{r} = \\frac{1}{2} \\cdot c \\cdot ( \\hat{\\tau} - T)"
}
]
| https://en.wikipedia.org/wiki?curid=74274721 |
74277546 | Computed torque control | Robot control
Computed torque control is a control scheme used in motion control in robotics. It combines a PID controller acting on the tracking error with feedback linearization based on a dynamical model of the controlled robot.
Let the dynamics of the controlled robot be described by
formula_0 where formula_1 is the state vector of joint variables that describe the system, formula_2 is the inertia matrix, formula_3 is the vector of Coriolis and centrifugal torques, formula_4 are the torques caused by gravity, and formula_5 is the vector of joint torque inputs.
Assume that we have an approximate model of the system made up of formula_6. This model does not need to be perfect, but it should justify the approximations formula_7 and formula_8.
Given a desired trajectory formula_9 the error relative to the current state formula_10 is then formula_11.
We can then set the input of the system to be
formula_12
With this input the dynamics of the entire systems becomes
formula_13
and the normal methods for PID controller tuning can be applied. In this way the complicated nonlinear control problem has been reduced to a relatively simple linear control problem.
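The following is a minimal simulation sketch, in Python, of the scheme for a single-link pendulum. The plant parameters, the deliberately imperfect model parameters, the PID gains and the desired trajectory are arbitrary illustrative values, and a simple explicit Euler integrator is used.

import numpy as np

m, l, g, b = 1.0, 0.5, 9.81, 0.1    # "true" plant: M = m*l**2, C = b, gravity torque = m*g*l*sin(th)
mt, bt = 0.9, 0.0                   # approximate model parameters used by the controller
Kp, Ki, Kd = 100.0, 20.0, 20.0      # PID gains on the tracking error
dt, steps = 1e-3, 5000
th, th_dot, e_int = 0.0, 0.0, 0.0

for k in range(steps):
    t = k * dt
    th_des, th_des_dot, th_des_ddot = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e, e_dot = th_des - th, th_des_dot - th_dot
    e_int += e * dt
    # computed-torque law: model-based feedback linearization around the PID-corrected acceleration
    v = th_des_ddot + Kp * e + Ki * e_int + Kd * e_dot
    tau = (mt * l ** 2) * v + bt * th_dot + mt * g * l * np.sin(th)
    # integrate the true plant one step (explicit Euler)
    th_ddot = (tau - b * th_dot - m * g * l * np.sin(th)) / (m * l ** 2)
    th_dot += th_ddot * dt
    th += th_dot * dt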
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{M}\\left( \\vec\\theta \\right) \\ddot\\vec\\theta + \n\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta +\n\\vec\\tau_g \\left(\\vec\\theta\\right)\n=\n\\vec\\tau"
},
{
"math_id": 1,
"text": "\\vec\\theta \\in \\mathbb{R}^N"
},
{
"math_id": 2,
"text": "\\mathbf{M}\\left(\\vec\\theta\\right)"
},
{
"math_id": 3,
"text": "\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta \n\n"
},
{
"math_id": 4,
"text": "\\vec\\tau_g \\left(\\vec\\theta\\right)"
},
{
"math_id": 5,
"text": "\\vec\\tau \n"
},
{
"math_id": 6,
"text": "\\tilde\\mathbf{M}\\left( \\vec\\theta \\right),\n\\tilde\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right), \n\\tilde\\vec\\tau_g \\left(\\vec\\theta\\right)\n"
},
{
"math_id": 7,
"text": "\\mathbf{M}\\left( \\vec\\theta \\right)^{-1} \\tilde\\mathbf{M}\\left( \\vec\\theta \\right) \\approx \\mathbf 1\n"
},
{
"math_id": 8,
"text": "\\mathbf{M} ^{-1} \\left(\n\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta +\n\\vec\\tau_g \\left(\\vec\\theta\\right)\n\\right)\n\\approx\n\\mathbf{M} ^{-1} \\left(\n\\tilde\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta +\n\\tilde\\vec\\tau_g \\left(\\vec\\theta\\right)\n\\right)"
},
{
"math_id": 9,
"text": "\\vec\\theta_d(t)"
},
{
"math_id": 10,
"text": "\\vec\\theta(t)"
},
{
"math_id": 11,
"text": "\\vec\\theta_e(t) = \\vec\\theta_d(t) - \\vec\\theta(t)"
},
{
"math_id": 12,
"text": "\\vec\\tau(t) =\n\\tilde\\mathbf{M}\\left( \\vec\\theta \\right) \\left(\n \\ddot\\vec\\theta_d(t) \n + K_p \\vec\\theta_e(t) \n + K_i \\int_0^t \\ddot\\vec\\theta_e(t') dt' \n + K_d \\dot\\vec\\theta_e(t)\n\\right) +\n\\tilde\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) + \n\\tilde\\vec\\tau_g \\left(\\vec\\theta\\right)\n"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\mathbf{M}\\left( \\vec\\theta \\right) \\ddot\\vec\\theta + \n\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta +\n\\vec\\tau_g \\left(\\vec\\theta\\right)\n=&\n\\tilde\\mathbf{M}\\left( \\vec\\theta \\right) \\left(\n \\ddot\\vec\\theta_d(t) \n + K_p \\vec\\theta_e(t) \n + K_i \\int_0^t \\ddot\\vec\\theta_e(t') dt' \n + K_d \\dot\\vec\\theta_e(t)\n\\right) +\n\\tilde\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) + \n\\tilde\\vec\\tau_g \\left(\\vec\\theta\\right)\n\n\\\\\n\n\\ddot\\vec\\theta + \n\\mathbf{M}\\left( \\vec\\theta \\right)^{-1} \\left(\n\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) \\dot\\vec\\theta +\n\\vec\\tau_g \\left(\\vec\\theta\\right)\n\\right)\n=&\n\\underbrace{\n\\mathbf{M}\\left( \\vec\\theta \\right)^{-1} \n\\tilde\\mathbf{M}\\left( \\vec\\theta \\right) \n}_{\\approx \\mathbf{1}}\n\\left(\n \\ddot\\vec\\theta_d(t) \n + K_p \\vec\\theta_e(t) \n + K_i \\int_0^t \\ddot\\vec\\theta_e(t') dt' \n + K_d \\dot\\vec\\theta_e(t)\n\\right) +\n\\mathbf{M}\\left( \\vec\\theta \\right)^{-1} \\left(\n\\tilde\\mathbf{C}\\left( \\vec\\theta, \\dot\\vec\\theta \\right) + \n\\tilde\\vec\\tau_g \\left(\\vec\\theta\\right)\n\\right) \n\n\\\\\n\n\\ddot\\vec\\theta \n= &\n\\ddot\\vec\\theta_d(t) \n+ K_p \\vec\\theta_e(t) \n+ K_i \\int_0^t \\ddot\\vec\\theta_e(t') dt' \n+ K_d \\dot\\vec\\theta_e(t)\n\n\\\\\n\n0 = &\n\\ddot\\vec\\theta_e\n+ K_p \\vec\\theta_e(t) \n+ K_i \\int_0^t \\ddot\\vec\\theta_e(t') dt' \n+ K_d \\dot\\vec\\theta_e(t)\n\n\\end{align} \n"
}
]
| https://en.wikipedia.org/wiki?curid=74277546 |
7427971 | Hilal-i-Jur'at | Second-highest military award of Pakistan
The Hilal-e-Jurat ( , as if it were "Halāl-e-Jurāt"; English: Crescent of Courage , sometimes spelled as Hilal-e-Jur'at, Hilal-e-Jurat, Hilal-i-Jurrat and Hilal-i-Juraat) is the second-highest military award of Pakistan out of a total of four gallantry awards that were created in 1957. In order of rank it comes after the Nishan-e-Haider (the "Sign of the Lion", which is the equivalent to the Victoria Cross and the Medal of Honor under the British Honours System and the United States Honors System, respectively) coming before the Sitara-e-Jurat (the "Star of Courage", which is the equivalent of the Distinguished Service Cross and the Silver Star, respectively).
It was created and declared for official use on 16 March 1957 by the President of Pakistan. The Hilal-i-Ju'rat is considered to be the equivalent of the Conspicuous Gallantry Cross and the Distinguished Service Cross. The medal is only conferrable to those who are ranked at an Officer level only and it is only allowed to be given to the Army (excluding paramilitary personnel), Navy and Air-force. The award after this honour is the Sitara-e-Jurat ("Star of Courage"), and subsequent to this medal is the Tamgha-e-Jurat ("Medal of Courage").
Unlike the Nishan-e-Haider, the Hilal-e-Jurat is the highest military award thus far that has been given to living Pakistanis to date. The medallion has been given to many famous Pakistani army personnel, including many national heroes. Most notably, well known major generals, brigadiers and lieutenants of the Pakistan Armed Forces have all received the medal.
The award holds significant benefits for the recipient, including social, political and financial benefits. Land and pensions are awarded as recompense for serving in the Army of Pakistan on behalf of the State for acts of "valour and courage" during battle against the enemy. In 2003 it was reported to the Pakistan National Assembly that, under new defence housing schemes run by the army, cash rewards had replaced grants of land to recipients over the preceding twelve years.
History.
The award was established on 16 March 1957 in celebration of Pakistan becoming a Republic and was formally given award status by the President of Pakistan. According to the official army website of Pakistan the award is given for "acts of valour, courage or devotion to duty, performed on land, at sea or in the air in the face of the enemy". The recipient of the award is able to use the distinguished honorific post-nominal letters "HJ" after his or her name. The award is considered to be the equivalent of the Distinguished Service Order under the British Honours System and of the United States Distinguished Service Cross.
The names of the medals originate from the Persian language but are written in the form of the Arabic language. This was unusual since the major languages of Pakistan are Punjabi and Urdu. In the Pakistan Parliament there was a debate on why the names were given in Persian but were spelled in Arabic, as some politicians were not entirely sure whether other medals created in that decade were inscribed with words from the Arabic language.
Pakistan became a republic in 1956. Prior to that Pakistan had been a commonwealth realm and had as such come under the British honours system. When the award was established, however, it was instituted retrospectively back to the independence of Pakistan in 1947—and it was subsequently conferred on a number of Pakistani officers for service during the Indo-Pakistani War of 1947.
One particular unit mentioned in an article from "Dawn", the Guards Battalion, which had earned several Military Crosses and one Victoria Cross, was congratulated by the president in 2004. The article emphasized that before the independence of Pakistan in 1947 the unit had been given British gallantry awards, which suggests the Hilal-i-Jur'at did not exist at the time.
Appearance.
It is a circular golden medal, surrounded by ten bundles of golden leaves with the Islamic crescent and star at its centre, suspended from a golden bar that reads "Hilal-i-Ju'rat" in Persian with Arabic lettering in gold. The ribbon attached to the golden bar is made up of three stripes in two colours (two red and one green). On the official Pakistan Army website the colour insignia is shown as red, green, and red.
Eligibility and privileges.
Officers serving in the Pakistani Armed Forces, including and limited to the Pakistan Army, the Pakistani Navy and the Pakistani Air Force, are the only eligible potential recipients for the award. It is conferred for acts of valour, courage, bravery and devotion to duty. The following is an extract, a word for word statement stating the eligibility of the medal on the Pakistan Army website.
This award is conferrable on officers only, for acts of valour, courage or devotion to duty, performed on land, at sea or in the air in the face of the enemy.
The recipients of the medal are allowed to use the honorific post nominal title letters "HJ" after their names as stated again by the Pakistan Army:
The recipient has the privilege to add the letters "HJ" after his name.
Although the rules are clear, there have been some challenges to change them. In March 2009 a group of policemen in Islamabad challenged the eligibility requirements by campaigning for the medal to be given to Faisal Khan, a police officer who gave up his life by successfully preventing an Uzbek suicide bomber from entering a police station and causing massive widespread casualties. The journalist covering the incident wrote about the anger felt in the community, particularly from the policemen whom Faisal Khan worked with:
...So sad is the situation that the police have to submit a recommendation for an award 'Hilal-i-Jurrat' and more money for his brave feat...
Khan had adamant dreams in his youth of joining the military or the police force. Whilst he was a police officer he was said to have wanted to "die in the line of duty", serving in the military for his country; many of his colleagues felt that this was a viable reason for him to qualify for the Hilal-i-Jur'at, since he did not receive any gallantry award, only a cash lump sum.
Benefits.
As well as commanding respect and admiration the Hilal-i-Ju'rat holds huge financial benefits for the recipient including land being given to the awardee. In accordance with Pakistan Law the recipient of the Hilal-i-Ju'rat is granted "two squares of land" according to retired Major General of the Pakistan Army Tajammul Hussain Malik, who in his 1991 book, "The Story of My Struggle", revealed this.
Squadron Leader Sarfaraz was said to have received seventy-seven acres (0.3116 km2) of land, which was later donated to a charity to benefit the poor and needy, for both his Hilal-i-Ju'rat and Sitara-i-Ju'rat medals.
Mathematically, if the seventy-seven acres are divided among the three squares of land corresponding to the two medals (two squares for the Hilal-i-Jur'at and one square for the Sitara-i-Jur'at, according to the book "The Story of My Struggle"), the areas corresponding to "one square of land" and "two squares of land" can be obtained. The method of calculating the sums is detailed below, with two sources taken into consideration to calculate the land awards on a logical basis.
formula_0
formula_1
It was revealed by the Pakistan news agency Dawn.com, that the gallantry awards have major cash rewards for the recipients and in the last twelve years this has replaced land awardances given to the recipient under defence housing schemes, which was reported in 2003 to the National Assembly of Pakistan. Rs. 500,000 rupees (£3679.98 or $5824.13, or €4317.5 as of September 2010) are given as recompense for obtaining the Hilal-i-Ju'rat during service.
During the Kargil Conflict in 1999, however, land was given to those that participated in the war and to those that gained gallantry awards. The Kargil Conflict was the only exception to this when it came to the land awards when the housing schemes were taking place.
Recipients.
Several high-profile generals of the Pakistan army have received the Hilal-i-Jur'at medal and have gone on to make successful careers in the army and in Pakistani politics, including Akhtar Abdur Rahman, who was known as the second most powerful man in Pakistan during the 1980s for being the head of the Inter-Services Intelligence Agency (ISI) during Zia-ul-Haq's presidency. The ISI is the equivalent of the British intelligence service MI5 and, for Americans, the CIA.
General Ayub Khan, the first military ruler of Pakistan who became a controversial figure towards the end of his presidency, serving as the second President of Pakistan between 1958 and 1969, also received the award. Notably A.O. Mitha, a legendary major general who played a significant part in the 1971 Liberation War in which he was stationed in East Pakistan (modern day Bangladesh), which ultimately led to the Secession of Bangladesh, was also bestowed the medal.
Brigadier (r) Saadullah Khan is the only living soldier in the Pakistan Army's history to have been recommended for the Nishan-e-Haider, for the demonstration of unmatched gallantry in the 1971 war. His book "From East Pakistan to Bangladesh" guides the army's textbook curriculum.
He was a charismatic person. Upright, handsome, soft-spoken and very, very spiritual.
He was seen as being an oddball and 'soft on Bengalis,’ fought the hardest in the war.
He was recommended for a Nishan e Haider but was awarded Hilal e Jurat instead.
It is also believed that Saadullah never appreciated Zia's role in Jordan.
Brigadier Saadullah, who had fought gallantly in East Pakistan and then added a humanitarian dimension to the military's brutal tussle with the Baloch was prematurely retired on the pretext of 'being too religious' by a General who would go on to topple his beloved prime minister on the pretext that 'he was not religious enough.'
Other notable heroic personnel of the Pakistan Army who died during service and were given the medal in the line of fire include Ghulam Hussain Shaheed, for his duty in standing his ground during an ambush by the Indian army near Pakistan's modern-day border near Kasur (a place which was later renamed after him). He was said to have held the national flag of Pakistan until his last breath when he was fatally wounded twice during battle with Indian armed soldiers.
Major Ziaur Rahman was also bestowed a Hilal e Jurat for his contributions in the 1965 war; he later defected from Pakistan Army in 1971, and subsequently became the seventh President of Bangladesh.
Sarfaraz Ahmed Rafiqui.
Most significantly of all Squadron Leader Sarfaraz Ahmed Rafiqui, considered a national hero in the region, was bestowed the award after a war between neighbouring countries Pakistan and India erupted. He earned the prestigious award for bravely fighting and defending his pilots against the Indian Air Force during the Indo-Pakistani War of 1965 in which he participated to the end. He was shot down over the Indian air base in the final moments of air warfare.
His equipment malfunctioned and subsequently he was left in a position to attempt to lure enemy pilots away from concentrating fire on the two fully functioning jets left on the battlefield. Taking on heavy fire during the air attack on 6 September, he was finally brought down and crashed in the airfield. His parents were informed he was given the honour in a telegram sent by the PAF.
The mission he was sent on went awry as a result of his guns jamming mid-battle, and as the fighting commenced the IAF pilot Flight Lieutenant DN Rathore of 27th Squadron shot down his fighter jet, after Rafiqui's unit had caused significant damage to the enemy. It is reported that eight Hunters and five pilots were destroyed, including the defeat of IAF Squadron Leader Ajit Kumar 'Peter' Rawlley of the 7th Squadron of the Indian Air Force. Rafiqui's case for the posthumous honour was strengthened by the prestigious "Best Pilot Trophy" from the Pakistan Air Force Academy in Risalpur, which he had received five months after graduating, and after the 1965 war had ended he was apportioned the state's second-highest gallantry award, the "Hilal-i-Ju'rat". He, along with his subordinates Cecil Chaudhry and Yunis Hussain, was given the Sitara-i-Jur'at. Chaudhry was the only survivor left who made it back to the home airbase. Pakistan's third airbase, the Rafiqui Airbase (Shorkot Cantonment), is named after Sarfaraz. His body was never found and still lies somewhere around the Halwara Airbase where the battle took place.
<templatestyles src="Template:Quote_box/styles.css" />
Rafiqui, HJ, SJ, (Shaheed) was my role model. As a matter of fact he was the role model for a large number of pilots in the PAF. He was a born leader and officers like him you come across once in a lifetime. As a pilot he was the best.
– Group Captain Cecil Chaudhry, SJ when asked who was his role model and inspiration (2001).
Controversy.
Retractions.
During the Bangladesh War of 1971 several HJs were given out and later retracted.
Faisal Khan.
On 23 March 2009, Faisal Khan, who was outside the gates of the "G-7 special police branch", was killed when he stopped and refused to let go of an apparent suicide bomber of Uzbek origin who wanted to blow up the police compound near Sitara Market in Islamabad. After he was killed, many around the area were thankful for his sacrifice, especially the local police, who thought Khan deserved to be given heroic status by the country. Although the bomber did kill several people, it was thought he could have done far more damage, causing a high number of casualties, had Khan not stopped the man going towards the branch. The building was described as being "poorly guarded" at the time. Khan only received Rs. 150,000 (£1107.68 or $1753.41 or €1300.87 as of September 2010), which was given to his siblings, as he had no parents nor a family of his own.
Despite the poor conditions of the police and the faulty hierarchic and bureaucratic system in the police force, he sank with his ship. But was he acknowledged by the state as a hero? Certainly not. Prime Minister’s adviser on Interior Senator Rehman Malik had announced Rs 150,000 for his family – which is a measly amount for someone’s life – for someone who sacrificed his life for others and is nothing less than a national hero[...] Sadly the state too has not shown its appreciation of such a man who saved the lives of so many especially in a time when they are most ill-equipped and the prime targets[...] Its individuals like Faisal Khan who make the difference but get little acknowledgement. When will the government realise that their faces are saved from public humiliation because of the sacrifice of many Faisal Khans[?]
Controversy arose when this amount was seen as not being nearly enough for what he had done, and that the thirty-year-old Khan deserved more for his sacrifice such as gaining the prestigious Hilal-i-Jur'at for his duty in guarding the station. In memory, because of his aspirations in wanting to always "join the army or police force" in his youth and adult life, some thought he deserved the gallantry award in honour for what he did in protecting and saving the lives of many people around the area. The police force decided to campaign against the low sum of money that was given to him by submitting a recommendation for him to receive the Hilal-i-Jur'at to the government of Pakistan, as they saw it as an embarrassment for the state in not recognising Khan as a "national hero".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{1\\ Square\\ of\\ Land} =(\\frac{\\mathrm{77\\ acres}}{\\mathrm{3\\ Squares\\ of\\ Land}}) \\cdot 1 = 25.41\\ acres "
},
{
"math_id": 1,
"text": "\\mathrm{2\\ Squares\\ of\\ Land} =(\\frac{\\mathrm{77\\ acres}}{\\mathrm{3\\ Squares\\ of\\ Land}}) \\cdot 2 = 50.82\\ acres "
}
]
| https://en.wikipedia.org/wiki?curid=7427971 |
74281844 | Tsirelson's stochastic differential equation | Tsirelson's stochastic differential equation (also Tsirelson's drift or Tsirelson's equation) is a stochastic differential equation which has a weak solution but no strong solution. It is therefore a counter-example and named after its discoverer Boris Tsirelson. Tsirelson's equation is of the form
formula_0
where formula_1 is the one-dimensional Brownian motion. Tsirelson chose the drift formula_2 to be a bounded measurable function that depends on the past of formula_3 but is independent of the natural filtration formula_4 of the Brownian motion. This gives a weak solution, but since the process formula_3 is not formula_5-measurable, it is not a strong solution.
Tsirelson's Drift.
Let formula_6 and formula_7 be the natural filtration of the Brownian motion formula_1, let formula_8 and formula_9 be a descending sequence of times with formula_10 and formula_11, let formula_12 and formula_13 denote the increments of the process and of the time grid, and let formula_14 denote the fractional part.
Tsirelson now defined the following drift
formula_15
Let the expression
formula_16
be the abbreviation for
formula_17
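The recursion can be illustrated numerically; the Python sketch below uses the dyadic grid t_n = 2**n for n = -N, ..., 0 as the sequence of times. Since the recursion cannot be started exactly at time 0, the unknown initial fractional part is simply drawn uniformly at random, which is at least consistent with the theorem below; the grid choice and the value of N are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N = 30
t = 2.0 ** np.arange(-N, 1)            # t_{-N}, ..., t_0 = 1, decreasing towards 0
dt = np.diff(t)                        # the increments of the time grid
dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increments over each interval
xi = dW / dt                           # the terms playing the role of xi_n
frac = rng.uniform()                   # unknown fractional part inherited from times before t_{-N}
eta = np.empty(len(dt))
for n in range(len(dt)):
    eta[n] = xi[n] + frac              # eta_n = xi_n + {eta_{n-1}}
    frac = eta[n] - np.floor(eta[n])   # fractional part {eta_n}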
Theorem.
According to a theorem by Tsirelson and Yor:
1) The natural filtration of formula_3 has the following decomposition
formula_18
2) For each formula_19 the formula_20 are uniformly distributed on formula_21 and independent of formula_22 resp. formula_23.
3) formula_24 is the formula_25-trivial σ-algebra, i.e. all events have probability formula_26 or formula_27. | [
{
"math_id": 0,
"text": "dX_t = a[t,(X_s, s\\leq t)]dt + dW_t, \\quad X_0=0,"
},
{
"math_id": 1,
"text": "W_t"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\mathcal{F}^W"
},
{
"math_id": 5,
"text": "\\mathcal{F}_{\\infty}^W"
},
{
"math_id": 6,
"text": "\\mathcal{F}_t^{W}=\\sigma(W_s : 0 \\leq s \\leq t)"
},
{
"math_id": 7,
"text": "\\{\\mathcal{F}_t^{W}\\} _{t\\in \\R_+}"
},
{
"math_id": 8,
"text": "t_0=1"
},
{
"math_id": 9,
"text": "(t_n)_{n\\in-\\N}"
},
{
"math_id": 10,
"text": "t_0>t_{-1}>t_{-2} >\\dots,"
},
{
"math_id": 11,
"text": "\\lim_{n\\to -\\infty } t_n=0"
},
{
"math_id": 12,
"text": "\\Delta X_{t_n}=X_{t_n}-X_{t_{n-1}}"
},
{
"math_id": 13,
"text": "\\Delta t_n=t_n-t_{n-1}"
},
{
"math_id": 14,
"text": "\\{x\\}=x-\\lfloor x \\rfloor"
},
{
"math_id": 15,
"text": "a[t,(X_s, s\\leq t)]=\\sum\\limits_{n\\in -\\N}\\bigg\\{\\frac{\\Delta X_{t_n}}{\\Delta t_n}\\bigg\\}1_{(t_n,t_{n+1}]}(t)."
},
{
"math_id": 16,
"text": "\\eta_n=\\xi_n+\\{\\eta_{n-1}\\}"
},
{
"math_id": 17,
"text": "\\frac{\\Delta X_{t_{n+1}}}{\\Delta t_{n+1}}=\\frac{\\Delta W_{t_{n+1}}}{\\Delta t_{ n+1}}+\\bigg\\{\\frac{\\Delta X_{t_n}}{\\Delta t_n}\\bigg\\}."
},
{
"math_id": 18,
"text": "\\mathcal{F}_t^{X}=\\mathcal{F}_t^{W} \\vee \\sigma\\big(\\{\\eta_{n-1}\\}\\big),\\quad \\forall t\\geq 0, \\quad \\forall t_n\\leq t"
},
{
"math_id": 19,
"text": "n\\in -\\N"
},
{
"math_id": 20,
"text": "\\{\\eta_n\\}"
},
{
"math_id": 21,
"text": "[0,1)"
},
{
"math_id": 22,
"text": "(W_t)_{t\\geq 0}"
},
{
"math_id": 23,
"text": "\\mathcal{F}_{\\infty}^{W}"
},
{
"math_id": 24,
"text": "\\mathcal{F}_{0+}^{X}"
},
{
"math_id": 25,
"text": "P"
},
{
"math_id": 26,
"text": "0"
},
{
"math_id": 27,
"text": "1"
}
]
| https://en.wikipedia.org/wiki?curid=74281844 |
74284664 | 9855 | Magic constant for order 27 magic square
9855 (nine thousand eight hundred fifty-five) is an odd, composite, four-digit number. The number 9855 is the magic constant of an "n" × "n" normal magic square, as well as of the "n"-Queens Problem, for "n" = 27. It can be expressed as the product of its prime factors:
formula_0
9855 is also the Magic constant of a Magic square of order 27. In a magic square, the magic constant is the sum of numbers in each row, column, and diagonal, which is the same. For magic squares of order "n", the magic constant is given by the formula formula_1.
The magic constant 9855 for the magic square of order 27 can be calculated as follows:
formula_2
This square contains the numbers 1 to 729, with 365 in the center. The square consists of nine order-9 magic squares. It has been noted that the number of days in 27 years (365 days per year) is 9855, the constant of the larger square. This was first discovered and solved by the ancient Greeks: Aristotle understood this magic square, but it is noted from "numeris Platonicis nihil obscurius" that Cicero was unable to solve it. The 27 years alluded to by the square were mentioned in reference to Greek generation time.
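The value of the magic constant can be checked with a few lines of Python:

n = 27
magic_constant = n * (n ** 2 + 1) // 2          # 27 * 730 / 2 = 9855
assert magic_constant == 9855
assert sum(range(1, n ** 2 + 1)) // n == 9855   # total of 1..729 split evenly over 27 rows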
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "9855 = 3^3 \\times 5 \\times 73"
},
{
"math_id": 1,
"text": "\\frac{n(n^2 + 1)}{2}"
},
{
"math_id": 2,
"text": "9855 = \\frac{1}{27} \\sum_{k=1}^{27^2} k = \\frac{27 \\times (27^2 + 1)}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=74284664 |
74285940 | Synchronous impedance curve | The synchronous impedance curve (also short-circuit characteristic, SCC) of a synchronous generator is a plot of the output short circuit current as a function of the excitation current or field. The curve is typically plotted alongside the open-circuit saturation curve.
The SCC is almost linear, since under short-circuit conditions the magnetic flux in the generator is below the iron saturation level and thus the reluctance is almost entirely determined by the fixed reluctance of the air gap. The name "synchronous impedance curve" is due to the fact that in the short-circuit condition all the generated voltage is dropped across the generator's internal synchronous impedance formula_0.
The curve is obtained by rotating the generator at the rated RPM with the output terminals shorted and increasing the excitation until the output current reaches 100% of the rated current for the device (higher values are typically not tested to avoid overheating).
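A small illustration of how the synchronous impedance can be obtained from the two characteristics: at each excitation level, the open-circuit voltage read from the OCC is divided by the short-circuit current read from the SCC. The numbers below are purely hypothetical sample data.

import numpy as np

I_field = np.array([1.0, 2.0, 3.0, 4.0])          # excitation current [A] (hypothetical)
E_oc = np.array([110.0, 220.0, 310.0, 380.0])     # open-circuit voltage from the OCC [V] (hypothetical)
I_sc = np.array([50.0, 100.0, 150.0, 200.0])      # short-circuit current from the SCC [A] (hypothetical)
Z_s = E_oc / I_sc                                 # synchronous impedance at each excitation level [ohm]
# below saturation the ratio is roughly constant; once the OCC saturates, Z_s decreases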
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Z_S"
}
]
| https://en.wikipedia.org/wiki?curid=74285940 |
7428961 | Hadamard code | Error-correcting code
The Hadamard code is an error-correcting code named after Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9. Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science.
The Hadamard code is also known under the names Walsh code, Walsh family, and Walsh–Hadamard code in recognition of the American mathematician Joseph Leonard Walsh.
The Hadamard code is an example of a linear code of length formula_3 over a binary alphabet.
Unfortunately, this term is somewhat ambiguous as some references assume a message length formula_4 while others assume a message length of formula_5.
In this article, the first case is called the Hadamard code while the second is called the augmented Hadamard code.
The Hadamard code is unique in that each non-zero codeword has a Hamming weight of exactly formula_6, which implies that the distance of the code is also formula_6.
In standard coding theory notation for block codes, the Hadamard code is a formula_1-code, that is, it is a linear code over a binary alphabet, has block length formula_7, message length (or dimension) formula_0, and minimum distance formula_8.
The block length is very large compared to the message length, but on the other hand, errors can be corrected even in extremely noisy conditions.
The augmented Hadamard code is a slightly improved version of the Hadamard code; it is a formula_2-code and thus has a slightly better rate while maintaining the relative distance of formula_9, and is thus preferred in practical applications.
In communication theory, this is simply called the Hadamard code and it is the same as the first order Reed–Muller code over the binary alphabet.
Normally, Hadamard codes are based on Sylvester's construction of Hadamard matrices, but the term “Hadamard code” is also used to refer to codes constructed from arbitrary Hadamard matrices, which are not necessarily of Sylvester type.
In general, such a code is not linear.
Such codes were first constructed by Raj Chandra Bose and Sharadchandra Shankar Shrikhande in 1959.
If "n" is the size of the Hadamard matrix, the code has parameters formula_10, meaning it is a not-necessarily-linear binary code with 2"n" codewords of block length "n" and minimal distance "n"/2. The construction and decoding scheme described below apply for general "n", but the property of linearity and the identification with Reed–Muller codes require that "n" be a power of 2 and that the Hadamard matrix be equivalent to the matrix constructed by Sylvester's method.
The Hadamard code is a locally decodable code, which provides a way to recover parts of the original message with high probability, while only looking at a small fraction of the received word. This gives rise to applications in computational complexity theory and particularly in the design of probabilistically checkable proofs.
Since the relative distance of the Hadamard code is 1/2, normally one can only hope to recover from at most a 1/4 fraction of error. Using list decoding, however, it is possible to compute a short list of possible candidate messages as long as fewer than formula_11 of the bits in the received word have been corrupted.
In code-division multiple access (CDMA) communication, the Hadamard code is referred to as Walsh Code, and is used to define individual communication channels. It is usual in the CDMA literature to refer to codewords as “codes”. Each user will use a different codeword, or “code”, to modulate their signal. Because Walsh codewords are mathematically orthogonal, a Walsh-encoded signal appears as random noise to a CDMA capable mobile terminal, unless that terminal uses the same codeword as the one used to encode the incoming signal.
History.
"Hadamard code" is the name that is most commonly used for this code in the literature. However, in modern use these error correcting codes are referred to as Walsh–Hadamard codes.
There is a reason for this:
Jacques Hadamard did not invent the code himself, but he defined Hadamard matrices around 1893, long before the first error-correcting code, the Hamming code, was developed in the 1940s.
The Hadamard code is based on Hadamard matrices, and while there are many different Hadamard matrices that could be used here, normally only Sylvester's construction of Hadamard matrices is used to obtain the codewords of the Hadamard code.
James Joseph Sylvester developed his construction of Hadamard matrices in 1867, which actually predates Hadamard's work on Hadamard matrices. Hence the name "Hadamard code" is disputed and sometimes the code is called "Walsh code", honoring the American mathematician Joseph Leonard Walsh.
An augmented Hadamard code was used during the 1971 Mariner 9 mission to correct for picture transmission errors. The binary values used during this mission were 6 bits long, which represented 64 grayscale values.
Because of limitations of the quality of the alignment of the transmitter at the time (due to Doppler Tracking Loop issues) the maximum useful data length was about 30 bits. Instead of using a repetition code, a [32, 6, 16] Hadamard code was used.
Errors of up to 7 bits per 32-bit word could be corrected using this scheme. Compared to a 5-repetition code, the error correcting properties of this Hadamard code are much better, yet its rate is comparable. The efficient decoding algorithm was an important factor in the decision to use this code.
The circuitry used was called the "Green Machine". It employed the fast Fourier transform which can increase the decoding speed by a factor of three. Since the 1990s use of this code by space programs has more or less ceased, and the NASA Deep Space Network does not support this error correction scheme for its dishes that are greater than 26 m.
Constructions.
While all Hadamard codes are based on Hadamard matrices, the constructions differ in subtle ways for different scientific fields, authors, and uses. Engineers, who use the codes for data transmission, and coding theorists, who analyse extremal properties of codes, typically want the rate of the code to be as high as possible, even if this means that the construction becomes mathematically slightly less elegant.
On the other hand, for many applications of Hadamard codes in theoretical computer science it is not so important to achieve the optimal rate, and hence simpler constructions of Hadamard codes are preferred since they can be analyzed more elegantly.
Construction using inner products.
When given a binary message formula_12 of length formula_0, the Hadamard code encodes the message into a codeword formula_13 using an encoding function formula_14
This function makes use of the inner product formula_15 of two vectors formula_16, which is defined as follows:
formula_17
Then the Hadamard encoding of formula_18 is defined as the sequence of "all" inner products with formula_18:
formula_19
As mentioned above, the "augmented" Hadamard code is used in practice since the Hadamard code itself is somewhat wasteful.
This is because, if the first bit of formula_20 is zero, formula_21, then the inner product contains no information whatsoever about formula_22, and hence, it is impossible to fully decode formula_18 from those positions of the codeword alone.
On the other hand, when the codeword is restricted to the positions where formula_23, it is still possible to fully decode formula_18.
Hence it makes sense to restrict the Hadamard code to these positions, which gives rise to the "augmented" Hadamard encoding of formula_18; that is, formula_24.
Construction using a generator matrix.
The Hadamard code is a linear code, and all linear codes can be generated by a generator matrix formula_25. This is a matrix such that formula_26 holds for all formula_12, where the message formula_18 is viewed as a row vector and the vector-matrix product is understood in the vector space over the finite field formula_27. In particular, an equivalent way to write the inner product definition for the Hadamard code arises by using the generator matrix whose columns consist of "all" strings formula_20 of length formula_0, that is,
formula_28
where formula_29 is the formula_30-th binary vector in lexicographical order.
For example, the generator matrix for the Hadamard code of dimension formula_31 is:
formula_32
The matrix formula_25 is a formula_33-matrix and gives rise to the linear operator formula_34.
The generator matrix of the "augmented" Hadamard code is obtained by restricting the matrix formula_25 to the columns whose first entry is one.
For example, the generator matrix for the augmented Hadamard code of dimension formula_31 is:
formula_35
Then formula_36 is a linear mapping with formula_37.
For general formula_0, the generator matrix of the augmented Hadamard code is a parity-check matrix for the extended Hamming code of length formula_6 and dimension formula_38, which makes the augmented Hadamard code the dual code of the extended Hamming code.
Hence an alternative way to define the Hadamard code is in terms of its parity-check matrix: the parity-check matrix of the Hadamard code is equal to the generator matrix of the Hamming code.
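The inner-product and generator-matrix constructions are easy to reproduce numerically. The following Python/NumPy sketch is an illustration added here (not taken from a reference implementation); the helper names are chosen for this example only.
<syntaxhighlight lang="python">
import numpy as np

def hadamard_generator(k):
    # Columns are all k-bit strings y in lexicographical order (most significant bit first).
    cols = [[(y >> (k - 1 - i)) & 1 for i in range(k)] for y in range(2 ** k)]
    return np.array(cols, dtype=int).T            # shape (k, 2**k)

def had_encode(x, G):
    # Hadamard encoding Had(x) = x . G over the field with two elements.
    return (np.array(x, dtype=int) @ G) % 2

k = 3
G = hadamard_generator(k)                         # matches the generator matrix shown above
G_aug = G[:, G[0] == 1]                           # keep columns whose first entry is one
x = [1, 0, 1]
print(had_encode(x, G))                           # codeword of length 2**k = 8
print(had_encode(x, G_aug))                       # augmented codeword of length 2**(k-1) = 4
</syntaxhighlight>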
Construction using general Hadamard matrices.
Hadamard codes are obtained from an "n"-by-"n" Hadamard matrix "H". In particular, the 2"n" codewords of the code are the rows of "H" and the rows of −"H". To obtain a code over the alphabet {0,1}, the mapping −1 ↦ 1, 1 ↦ 0, or, equivalently, "x" ↦ (1 − "x")/2, is applied to the matrix elements. That the minimum distance of the code is "n"/2 follows from the defining property of Hadamard matrices, namely that their rows are mutually orthogonal. This implies that two distinct rows of a Hadamard matrix differ in exactly "n"/2 positions, and, since negation of a row does not affect orthogonality, that any row of "H" differs from any row of −"H" in "n"/2 positions as well, except when the rows correspond, in which case they differ in "n" positions.
To get the augmented Hadamard code above with formula_39, the chosen Hadamard matrix "H" has to be of Sylvester type, which gives rise to a message length of formula_40.
Distance.
The distance of a code is the minimum Hamming distance between any two distinct codewords, i.e., the minimum number of positions at which two distinct codewords differ. Since the Walsh–Hadamard code is a linear code, the distance is equal to the minimum Hamming weight among all of its non-zero codewords. All non-zero codewords of the Walsh–Hadamard code have a Hamming weight of exactly formula_6 by the following argument.
Let formula_12 be a non-zero message. Then the following value is exactly equal to the fraction of positions in the codeword that are equal to one:
formula_41
The fact that the latter value is exactly formula_9 is called the "random subsum principle". To see that it is true, assume without loss of generality that formula_42.
Then, when conditioned on the values of formula_43, the event is equivalent to formula_44 for some formula_45 depending on formula_46 and formula_43. The probability that formula_47 happens is exactly formula_9. Thus, in fact, "all" non-zero codewords of the Hadamard code have relative Hamming weight formula_9, and thus, its relative distance is formula_9.
The relative distance of the "augmented" Hadamard code is formula_9 as well, but it no longer has the property that every non-zero codeword has weight exactly formula_9 since the all formula_48s vector formula_49 is a codeword of the augmented Hadamard code. This is because the vector formula_50 encodes to formula_51. Furthermore, whenever formula_18 is non-zero and not the vector formula_52, the random subsum principle applies again, and the relative weight of formula_13 is exactly formula_9.
Local decodability.
A locally decodable code is a code that allows a single bit of the original message to be recovered with high probability by only looking at a small portion of the received word.
A code is formula_53-query locally decodable if a message bit, formula_54, can be recovered by checking formula_53 bits of the received word. More formally, a code, formula_55, is formula_56-locally decodable, if there exists a probabilistic decoder, formula_57, such that "(Note: formula_58 represents the Hamming distance between vectors formula_18 and formula_20)":
formula_59, formula_60 implies that formula_61
Theorem 1: The Walsh–Hadamard code is formula_62-locally decodable for all formula_63.
Lemma 1: For all codewords, formula_64 in a Walsh–Hadamard code, formula_65, formula_66, where formula_67 represent the bits in formula_64 in positions formula_30 and formula_68 respectively, and formula_69 represents the bit at position formula_70.
Proof of lemma 1.
Let formula_71 be the codeword in formula_65 corresponding to message formula_18.
Let formula_72 be the generator matrix of formula_65.
By definition, formula_73. From this, formula_74. By the construction of formula_25, formula_75. Therefore, by substitution, formula_76.
Proof of theorem 1.
To prove theorem 1 we will construct a decoding algorithm and prove its correctness.
Algorithm.
Input: Received word formula_77
For each formula_78: pick formula_79 uniformly at random and pick formula_80 such that formula_81, where formula_82 is the formula_30-th standard basis vector and formula_83 denotes the bitwise exclusive or of formula_68 and formula_0; then set formula_84.
Output: Message formula_85
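As an illustration of the algorithm, the following Python/NumPy sketch (an editorial example added here, not a reference implementation) encodes a short message, flips one bit of the codeword, and recovers each message bit with two queries. Positions are indexed so that the binary expansion of a position, most significant bit first, is the corresponding vector, consistent with the generator-matrix example above.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def had_encode(x):
    # Position p of the codeword holds the inner product of x with the bits of p, mod 2.
    n = len(x)
    return np.array([sum(x[i] * ((p >> (n - 1 - i)) & 1) for i in range(n)) % 2
                     for p in range(2 ** n)])

def local_decode_bit(y, i, n):
    # Two queries: pick j at random and k = j xor e_i, then estimate x_i = y_j + y_k (mod 2).
    j = int(rng.integers(2 ** n))
    k = j ^ (1 << (n - 1 - i))
    return (y[j] + y[k]) % 2

x = [1, 0, 1]
y = had_encode(x)
y[int(rng.integers(len(y)))] ^= 1                  # corrupt one of the 8 positions
print([local_decode_bit(y, i, len(x)) for i in range(len(x))])   # recovers [1, 0, 1] with high probability
</syntaxhighlight>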
Proof of correctness.
For any message, formula_18, and received word formula_20 such that formula_20 differs from formula_86 on at most formula_87 fraction of bits, formula_54 can be decoded with probability at least formula_88.
By lemma 1, formula_89. Since formula_68 and formula_0 are picked uniformly, the probability that formula_90 is at most formula_87. Similarly, the probability that formula_91 is at most formula_87. By the union bound, the probability that either formula_92 or formula_93 does not match the corresponding bit in formula_64 is at most formula_94. If both formula_92 and formula_93 correspond to formula_64, then lemma 1 applies, and therefore the proper value of formula_54 is computed. Therefore, the probability that formula_54 is decoded properly is at least formula_95. Therefore, formula_96 and for formula_97 to be positive, formula_98.
Therefore, the Walsh–Hadamard code is formula_62 locally decodable for formula_63.
Optimality.
For "k" ≤ 7 the linear Hadamard codes have been proven optimal in the sense of minimum distance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "[2^k,k,2^{k-1}]_2"
},
{
"math_id": 2,
"text": "[2^k,k+1,2^{k-1}]_2"
},
{
"math_id": 3,
"text": "2^m"
},
{
"math_id": 4,
"text": "k = m"
},
{
"math_id": 5,
"text": "k = m+1"
},
{
"math_id": 6,
"text": "2^{k-1}"
},
{
"math_id": 7,
"text": "2^k"
},
{
"math_id": 8,
"text": "2^k/2"
},
{
"math_id": 9,
"text": "1/2"
},
{
"math_id": 10,
"text": "(n,2n,n/2)_2"
},
{
"math_id": 11,
"text": "\\frac{1}{2}-\\epsilon"
},
{
"math_id": 12,
"text": "x\\in\\{0,1\\}^k"
},
{
"math_id": 13,
"text": "\\text{Had}(x)"
},
{
"math_id": 14,
"text": "\\text{Had} : \\{0,1\\}^k\\to\\{0,1\\}^{2^k}."
},
{
"math_id": 15,
"text": "\\langle x , y \\rangle"
},
{
"math_id": 16,
"text": "x,y\\in\\{0,1\\}^k"
},
{
"math_id": 17,
"text": "\\langle x , y \\rangle = \\sum_{i=1}^{k} x_i y_i\\ \\bmod\\ 2\\,."
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\text{Had}(x) = \\Big(\\langle x , y \\rangle\\Big)_{y\\in\\{0,1\\}^k}"
},
{
"math_id": 20,
"text": "y"
},
{
"math_id": 21,
"text": "y_1=0"
},
{
"math_id": 22,
"text": "x_1"
},
{
"math_id": 23,
"text": "y_1=1"
},
{
"math_id": 24,
"text": "\\text{pHad}(x) = \\Big(\\langle x , y \\rangle\\Big)_{y\\in\\{1\\}\\times\\{0,1\\}^{k-1}}"
},
{
"math_id": 25,
"text": "G"
},
{
"math_id": 26,
"text": "\\text{Had}(x)= x\\cdot G"
},
{
"math_id": 27,
"text": "\\mathbb F_2"
},
{
"math_id": 28,
"text": "G = \n\\begin{pmatrix}\n\\uparrow & \\uparrow & & \\uparrow\\\\ \ny_1 & y_2 & \\dots & y_{2^k} \\\\ \n\\downarrow & \\downarrow & & \\downarrow\n\\end{pmatrix}\\,."
},
{
"math_id": 29,
"text": "y_i \\in \\{0,1\\}^k"
},
{
"math_id": 30,
"text": "i"
},
{
"math_id": 31,
"text": "k=3"
},
{
"math_id": 32,
"text": "\nG = \n\\begin{bmatrix}\n0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\\\ \n0 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\\\ \n0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \n\\end{bmatrix}.\n"
},
{
"math_id": 33,
"text": "(k\\times 2^k)"
},
{
"math_id": 34,
"text": "\\text{Had}:\\{0,1\\}^k\\to\\{0,1\\}^{2^k}"
},
{
"math_id": 35,
"text": "\nG' = \n\\begin{bmatrix}\n1 & 1 & 1 & 1\\\\ \n0 & 0 & 1 & 1\\\\ \n0 & 1 & 0 & 1 \n\\end{bmatrix}.\n"
},
{
"math_id": 36,
"text": "\\text{pHad}:\\{0,1\\}^k\\to\\{0,1\\}^{2^{k-1}}"
},
{
"math_id": 37,
"text": "\\text{pHad}(x)= x \\cdot G'"
},
{
"math_id": 38,
"text": "2^{k-1}-k"
},
{
"math_id": 39,
"text": "n=2^{k-1}"
},
{
"math_id": 40,
"text": "\\log_2(2n)=k"
},
{
"math_id": 41,
"text": "\\Pr_{y\\in\\{0,1\\}^k} \\big[ (\\text{Had}(x))_y = 1 \\big] = \\Pr_{y\\in\\{0,1\\}^k} \\big[ \\langle x,y\\rangle = 1 \\big]\\,."
},
{
"math_id": 42,
"text": "x_1=1"
},
{
"math_id": 43,
"text": "y_2,\\dots,y_k"
},
{
"math_id": 44,
"text": "y_1 \\cdot x_1 = b"
},
{
"math_id": 45,
"text": "b\\in\\{0,1\\}"
},
{
"math_id": 46,
"text": "x_2,\\dots,x_k"
},
{
"math_id": 47,
"text": "y_1=b"
},
{
"math_id": 48,
"text": "1"
},
{
"math_id": 49,
"text": "1^{2^{k-1}}"
},
{
"math_id": 50,
"text": "x=10^{k-1}"
},
{
"math_id": 51,
"text": "\\text{pHad}(10^{k-1}) = 1^{2^{k-1}}"
},
{
"math_id": 52,
"text": "10^{k-1}"
},
{
"math_id": 53,
"text": "q"
},
{
"math_id": 54,
"text": "x_i"
},
{
"math_id": 55,
"text": "C: \\{0,1\\}^k \\rightarrow \\{0,1\\}^n"
},
{
"math_id": 56,
"text": "(q, \\delta\\geq 0, \\epsilon\\geq 0)"
},
{
"math_id": 57,
"text": "D:\\{0,1\\}^n \\rightarrow \\{0,1\\}^k"
},
{
"math_id": 58,
"text": "\\Delta(x,y)"
},
{
"math_id": 59,
"text": "\\forall x \\in \\{0,1\\}^k, \\forall y \\in \\{0,1\\}^n"
},
{
"math_id": 60,
"text": "\\Delta(y, C(x)) \\leq \\delta n"
},
{
"math_id": 61,
"text": "Pr[D(y)_i = x_i] \\geq \\frac{1}{2} + \\epsilon, \\forall i \\in [k]"
},
{
"math_id": 62,
"text": "(2, \\delta, \\frac{1}{2}-2\\delta)"
},
{
"math_id": 63,
"text": "0\\leq \\delta \\leq \\frac{1}{4}"
},
{
"math_id": 64,
"text": "c"
},
{
"math_id": 65,
"text": "C"
},
{
"math_id": 66,
"text": "c_i+c_j=c_{i+j}"
},
{
"math_id": 67,
"text": "c_i, c_j"
},
{
"math_id": 68,
"text": "j"
},
{
"math_id": 69,
"text": "c_{i+j}"
},
{
"math_id": 70,
"text": "(i+j)"
},
{
"math_id": 71,
"text": "C(x) = c = (c_0,\\dots,c_{2^n-1})"
},
{
"math_id": 72,
"text": "G = \n\\begin{pmatrix}\n\\uparrow & \\uparrow & & \\uparrow\\\\ \ng_0 & g_1 & \\dots & g_{2^n-1} \\\\ \n\\downarrow & \\downarrow & & \\downarrow\n\\end{pmatrix}"
},
{
"math_id": 73,
"text": "c_i = x\\cdot g_i"
},
{
"math_id": 74,
"text": "c_i+c_j = x\\cdot g_i + x\\cdot g_j = x\\cdot(g_i+g_j)"
},
{
"math_id": 75,
"text": "g_i + g_j = g_{i+j}"
},
{
"math_id": 76,
"text": "c_i + c_j = x\\cdot g_{i+j} = c_{i+j}"
},
{
"math_id": 77,
"text": "y = (y_0, \\dots, y_{2^n-1})"
},
{
"math_id": 78,
"text": "i \\in \\{1, \\dots, n\\}"
},
{
"math_id": 79,
"text": "j \\in \\{0, \\dots, 2^n-1\\}"
},
{
"math_id": 80,
"text": "k \\in \\{0, \\dots, 2^n-1\\}"
},
{
"math_id": 81,
"text": "j+k = e_i"
},
{
"math_id": 82,
"text": "e_i"
},
{
"math_id": 83,
"text": "j+k"
},
{
"math_id": 84,
"text": "x_i \\gets y_j+y_k"
},
{
"math_id": 85,
"text": "x = (x_1, \\dots, x_n)"
},
{
"math_id": 86,
"text": "c = C(x)"
},
{
"math_id": 87,
"text": "\\delta"
},
{
"math_id": 88,
"text": "\\frac{1}{2}+(\\frac{1}{2}-2\\delta)"
},
{
"math_id": 89,
"text": "c_j+c_k = c_{j+k} = x\\cdot g_{j+k} = x\\cdot e_i = x_i"
},
{
"math_id": 90,
"text": "y_j \\not = c_j"
},
{
"math_id": 91,
"text": "y_k \\not = c_k"
},
{
"math_id": 92,
"text": "y_j"
},
{
"math_id": 93,
"text": "y_k"
},
{
"math_id": 94,
"text": "2\\delta"
},
{
"math_id": 95,
"text": "1-2\\delta"
},
{
"math_id": 96,
"text": "\\epsilon = \\frac{1}{2} - 2\\delta"
},
{
"math_id": 97,
"text": "\\epsilon"
},
{
"math_id": 98,
"text": "0 \\leq \\delta \\leq \\frac{1}{4}"
}
]
| https://en.wikipedia.org/wiki?curid=7428961 |
7430072 | Noise spectral density | Noise power per unit of bandwidth
In communications, noise spectral density (NSD), noise power density, noise power spectral density, or simply noise density ("N"0) is the power spectral density of noise or the noise power per unit of bandwidth. It has the dimension of power over frequency, with SI unit watt per hertz (equivalent to watt-second, or joule).
It is commonly used in link budgets as the denominator of the important figure-of-merit ratios, such as carrier-to-noise-density ratio as well as "E""b"/"N"0 and "E""s"/"N"0.
If the noise is one-sided white noise, i.e., constant with frequency, then the total noise power "N" integrated over a bandwidth "B" is "N" = "BN"0 (for double-sided white noise, the bandwidth is doubled, so "N" is "BN"0/2). This is utilized in signal-to-noise ratio calculations.
For thermal noise, its spectral density is given by "N"0 = "kT", where "k" is the Boltzmann constant in joules per kelvin, and "T" is the receiver system noise temperature in kelvins.
The noise amplitude spectral density is the square root of the noise power spectral density, and is given in units such as formula_0. | [
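As a rough numerical illustration (the receiver temperature of 290 K and the 1 MHz bandwidth below are assumed values, not taken from this article): the thermal noise density is about −174 dBm/Hz, and integrating over 1 MHz gives about −114 dBm.
<syntaxhighlight lang="python">
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 290.0                          # assumed receiver system noise temperature, K
B = 1e6                            # assumed one-sided bandwidth, Hz

N0 = k_B * T                       # noise spectral density N0 = kT, W/Hz
N = N0 * B                         # total noise power N = B*N0, W
print(10 * math.log10(N0 / 1e-3))  # about -174 dBm/Hz
print(10 * math.log10(N / 1e-3))   # about -114 dBm in 1 MHz
</syntaxhighlight>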
{
"math_id": 0,
"text": "\\mathrm{V}/\\sqrt{\\mathrm{Hz}}"
}
]
| https://en.wikipedia.org/wiki?curid=7430072 |
743106 | Scott continuity | Definition of continuity for functions between posets
In mathematics, given two partially ordered sets "P" and "Q", a function "f": "P" → "Q" between them is Scott-continuous (named after the mathematician Dana Scott) if it preserves all directed suprema. That is, for every directed subset "D" of "P" with supremum in "P", its image has a supremum in "Q", and that supremum is the image of the supremum of "D", i.e. formula_0, where formula_1 is the directed join. When formula_2 is the poset of truth values, i.e. Sierpiński space, then Scott-continuous functions are characteristic functions of open sets, and thus Sierpiński space is the classifying space for open sets.
A subset "O" of a partially ordered set "P" is called Scott-open if it is an upper set and if it is inaccessible by directed joins, i.e. if all directed sets "D" with supremum in "O" have non-empty intersection with "O". The Scott-open subsets of a partially ordered set "P" form a topology on "P", the Scott topology. A function between partially ordered sets is Scott-continuous if and only if it is continuous with respect to the Scott topology.
The Scott topology was first defined by Dana Scott for complete lattices and later defined for arbitrary partially ordered sets.
Scott-continuous functions are used in the study of models for lambda calculi and the denotational semantics of computer programs.
Properties.
A Scott-continuous function is always monotonic, meaning that if formula_3 for formula_4, then formula_5.
A subset of a directed complete partial order is closed with respect to the Scott topology induced by the partial order if and only if it is a lower set and closed under suprema of directed subsets.
A directed complete partial order (dcpo) with the Scott topology is always a Kolmogorov space (i.e., it satisfies the T0 separation axiom). However, a dcpo with the Scott topology is a Hausdorff space if and only if the order is trivial. The Scott-open sets form a complete lattice when ordered by inclusion.
For any Kolmogorov space, the topology induces an order relation on that space, the specialization order: "x" ≤ "y" if and only if every open neighbourhood of "x" is also an open neighbourhood of "y". The order relation of a dcpo "D" can be reconstructed from the Scott-open sets as the specialization order induced by the Scott topology. However, a dcpo equipped with the Scott topology need not be sober: the specialization order induced by the topology of a sober space makes that space into a dcpo, but the Scott topology derived from this order is finer than the original topology.
Examples.
The open sets in a given topological space when ordered by inclusion form a lattice on which the Scott topology can be defined. A subset "X" of a topological space "T" is compact with respect to the topology on "T" (in the sense that every open cover of "X" contains a finite subcover of "X") if and only if the set of open neighbourhoods of "X" is open with respect to the Scott topology.
For CPO, the cartesian closed category of dcpo's, two particularly notable examples of Scott-continuous functions are curry and apply.
Nuel Belnap used Scott continuity to extend logical connectives to a four-valued logic. | [
{
"math_id": 0,
"text": "\\sqcup f[D] = f(\\sqcup D)"
},
{
"math_id": 1,
"text": "\\sqcup"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "A \\le_{P} B"
},
{
"math_id": 4,
"text": "A, B \\subset P"
},
{
"math_id": 5,
"text": "f(A) \\le_{Q} f(B)"
}
]
| https://en.wikipedia.org/wiki?curid=743106 |
74311319 | Complete orthogonal decomposition | In linear algebra, the complete orthogonal decomposition is a matrix decomposition. It is similar to the singular value decomposition, but typically somewhat cheaper to compute and in particular much cheaper and easier to update when the original matrix is slightly altered.
Specifically, the complete orthogonal decomposition factorizes an arbitrary complex matrix formula_0 into a product of three matrices, formula_1, where formula_2 and formula_3 are unitary matrices and formula_4 is a triangular matrix. For a matrix formula_0 of rank formula_5, the triangular matrix formula_4 can be chosen such that only its top-left formula_6 block is nonzero, making the decomposition rank-revealing.
For a matrix of size formula_7, assuming formula_8, the complete orthogonal decomposition requires formula_9 floating point operations and formula_10 auxiliary memory to compute, similar to other rank-revealing decompositions. Crucially however, if a row/column is added or removed or the matrix is perturbed by a rank-one matrix, its decomposition can be updated in formula_11 operations.
Because of its form, formula_1, the decomposition is also known as UTV decomposition. Depending on whether a left-triangular or right-triangular matrix is used in place of formula_4, it is also referred to as ULV decomposition or URV decomposition, respectively.
Construction.
The UTV decomposition is usually computed by means of a pair of QR decompositions: one QR decomposition is applied to the matrix from the left, which yields formula_2, another applied from the right, which yields formula_3, which "sandwiches" triangular matrix formula_4 in the middle.
Let formula_0 be a formula_7 matrix of rank formula_5. One first performs a QR decomposition with column pivoting:
formula_12,
where formula_13 is a formula_14 permutation matrix, formula_2 is a formula_15 unitary matrix, formula_16 is a formula_6 upper triangular matrix and formula_17 is a formula_18 matrix. One then performs another QR decomposition on the adjoint of formula_19:
formula_20,
where formula_21 is a formula_14 unitary matrix and formula_4 is an formula_6 lower (left) triangular matrix. Setting formula_22 yields the complete orthogonal (UTV) decomposition:
formula_23.
Since any diagonal matrix is by construction triangular, the singular value decomposition, formula_24, where formula_25, is a special case of the UTV decomposition. Computing the SVD is slightly more expensive than the UTV decomposition, but has a stronger rank-revealing property.
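A minimal numerical sketch of this construction, using two QR factorizations as described above, might look as follows (Python with NumPy/SciPy; the rank tolerance and the example matrix are assumptions made here for illustration, not part of the references).
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import qr

def utv(A, tol=1e-12):
    # First QR with column pivoting: A @ Pi = Q1 @ R, with R upper triangular.
    m, n = A.shape
    Q1, R, piv = qr(A, pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > tol * np.abs(R[0, 0])))   # crude numerical rank
    Pi = np.eye(n)[:, piv]
    # Second QR on the adjoint of the top r rows of R: [R11 R12]^* = Q2 @ T^*.
    Q2, Tstar = qr(R[:r, :].conj().T, mode='economic')
    U = Q1[:, :r]                     # economic form; pad T with zeros for the full form
    T = Tstar.conj().T                # r-by-r lower (left) triangular
    V = Pi @ Q2                       # n-by-r with orthonormal columns
    return U, T, V, r

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
U, T, V, r = utv(A)
print(r)                                            # 2
print(np.allclose(A, U @ T @ V.conj().T))           # True up to rounding
</syntaxhighlight>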
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "A = U T V^*"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "V^*"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "r\\times r"
},
{
"math_id": 7,
"text": "m\\times n"
},
{
"math_id": 8,
"text": "m \\ge n"
},
{
"math_id": 9,
"text": "O(mn^2)"
},
{
"math_id": 10,
"text": "O(m^2)"
},
{
"math_id": 11,
"text": "O(mn)"
},
{
"math_id": 12,
"text": "A\\Pi = U \\begin{bmatrix} R_{11} & R_{12} \\\\ 0 & 0 \\end{bmatrix}"
},
{
"math_id": 13,
"text": "\\Pi"
},
{
"math_id": 14,
"text": "n\\times n"
},
{
"math_id": 15,
"text": "m\\times m"
},
{
"math_id": 16,
"text": "R_{11}"
},
{
"math_id": 17,
"text": "R_{12}"
},
{
"math_id": 18,
"text": "r\\times(n-r)"
},
{
"math_id": 19,
"text": "R"
},
{
"math_id": 20,
"text": "\\begin{bmatrix} R^*_{11} \\\\ R^*_{12}\\end{bmatrix}\n= V' \\begin{bmatrix} T^* \\\\ 0\\end{bmatrix}\n"
},
{
"math_id": 21,
"text": "V'"
},
{
"math_id": 22,
"text": "V = \\Pi V'"
},
{
"math_id": 23,
"text": "A = U \\begin{bmatrix} T & 0 \\\\ 0 & 0 \\end{bmatrix} V^*"
},
{
"math_id": 24,
"text": "A = USV^*"
},
{
"math_id": 25,
"text": "S_{11} \\ge S_{22} \\ge \\ldots \\ge 0"
}
]
| https://en.wikipedia.org/wiki?curid=74311319 |
74324439 | Thermal equation of state of solids | In physics, the thermal equation of state is a mathematical expression of pressure "P", temperature "T", and, volume "V". The thermal equation of state for ideal gases is the ideal gas law, expressed as "PV"
"nRT" (where "R" is the gas constant and "n" the amount of substance), while the thermal equation of state for solids is expressed as: <br>
formula_0
where formula_1 is the volume dependence of pressure at room temperature (isothermal), and formula_2 is the temperature dependence of pressure at constant volume (isochoric), known as thermal pressure.
For an ideal gas at high pressure and temperature (high "P"-"T"), the soft gas fills a rigid container and is confined by it, whereas a solid at high "P"-"T" is loaded inside a soft medium and can expand or shrink within that medium when heated and compressed. Therefore, the compression/heating process of a gas can be carried out at constant temperature (isothermal), constant pressure (isobaric) or constant volume (isochoric). The compression/heating process of a solid can be isothermal or isobaric, but it cannot be isochoric. At high "P"-"T", the pressure of an ideal gas is calculated as force divided by area, while the pressure of a solid is calculated from its bulk modulus ("K", or "B") and its volume at room temperature, or from Eq. (1) at high "P"-"T". A pressure gauge is a material whose bulk modulus and thermal equation of state are well known. To study a solid with unknown bulk modulus, it has to be loaded together with a pressure gauge, and its pressure is determined from that gauge.
The most common pressure gauges are Au, Pt, Cu, MgO, etc. When two or more pressure gauges are loaded together at high "P"-"T", their pressure readings should be the same. However, large discrepancies have been reported in pressure determination using different pressure gauges, or different thermal equations of state for the same pressure gauge. Fig. 1 is a schematic plot showing the discrepancy reported in the paper.
Of the total pressure in Eq. (1), the first (room-temperature) terms on the right-hand side for Ag, Cu, Mo and Pd are consistent over a wide pressure range, according to the Mao ruby scale up to 1 Mbar. In addition, the first-term pressures of Ag, Cu and MgO are consistent according to the third-order Birch–Murnaghan equation of state. Therefore, the discrepancy in the total pressure "P"("V", "T") should come from the second term in Eq. (1), which is the thermal pressure "P"th("V", "T") at high "P"-"T".
Thermal pressure.
Anderson thermal pressure model.
In 1968, Anderson developed (∂"T"/∂"P")v = ("αK")−1 for the thermal gradient, and its reciprocal relates the thermal pressure and temperature in a constant-volume heating process via (∂"P"/∂"T")v = "αK". Note that the thermal pressure is the pressure change in a constant-volume heating process, and is expressed as the integral of "αK" over temperature.
Anderson thermal pressure model is the first thermal pressure model and it is the most common thermal pressure model as well.
Experimental.
The thermal pressure is the pressure change during constant-volume heating. As noted in the section above, there are large discrepancies in pressure determination using different pressure gauges, or different thermal equations of state for the same gauge; however, the pressure determination during heating needs to be reliable in order to measure the thermal pressure experimentally. In addition, to measure the thermal pressure in experiments the heating process has to be a constant-volume (isochoric) process. According to the first section above, heating of a solid cannot be isochoric, so the pressure change in a non-isochoric heating process is not exactly the thermal pressure.
When a solid is loaded with a pressure gauge and the two are heated/compressed together at high "P"-"T", the thermal pressure of the solid does not equal that of its gauge. Pressure is a state variable, while thermal pressure is a process variable. A solid is subject to the same pressure as its gauge, but in a heating process from "T"1 to "T"2, if the solid's volume is kept constant by compression, the volume of its pressure gauge will most likely not remain constant during the same heating. In the paper, the authors demonstrate that "α"sample ≠ "α"gauge and "K"sample ≠ "K"gauge, so
formula_3
which means the thermal pressure of a solid doesn't equal that of its gauge.
Determination from models.
According to the Anderson model, the thermal pressure is the integral of the product of the thermal expansion "α"p and the bulk modulus "K""T", i.e. "P"th = ∫"αK"d"T". In this model, both "α"p and "K""T" are pressure dependent and temperature dependent, so integrating "α"p and "K""T" over temperature along an isochore is not straightforward. To bypass this issue, the "P"-"T" dependent "α"p and "K""T" are assumed to be constants "α"0 and "K"0. However, the authors of the publication demonstrated that the pressures of Au and MgO predicted by this model from constant "α"0 and "K"0 at ambient pressure deviate from the experimental values, and the higher the temperature, the larger the deviation. A cartoon plot of the pressures predicted from the thermal-pressure form of the equation of state in the paper is shown in Fig. 2.
The authors of another paper proposed an alternative way to make the integration of "α"p"K""T" possible. They assume the thermal expansion to be pressure independent, which reduces the "P"-"T" dependent "α"p and "K""T" to temperature-dependent quantities only. However, in a preprint the author proved that a pressure-independent thermal expansion forces the bulk modulus to be temperature independent, which again reduces the "P"-"T" dependent "α"p and "K""T" to the constants "α"0 and "K"0.
There are various other thermal pressure models, but accurately determined thermal pressures are required to prove these models.
Pressure-dependent thermal expansion equation of state.
As explained in the sections above, the thermal pressure can be neither accurately determined in experiments nor accurately calculated from the Anderson model. A thermal expansion equation of state has been proposed before, which consists of a thermal expansion at ambient pressure followed by an isothermal compression at high temperature. In this model there is no thermal pressure term, but accurate pressure determination at high "P"-"T" and a temperature-dependent "K""T" remain major challenges at present. In the paper, the authors proposed a different thermal expansion equation of state, which consists of an isothermal compression at room temperature followed by a thermal expansion at high pressure. To distinguish these two thermal expansion equations of state, the latter is called the pressure-dependent thermal expansion equation of state.
To develop the pressure-dependent thermal expansion equation of state, a general form of the pressure in a room-temperature compression from V0 to V1 is expressed as
formula_4
The authors established the relation between V1 and the final volume V after an isobaric heating through the thermal expansion, yielding the general form of the thermal equation of state as
formula_5
Here "V"M = "V"·exp(−∫"α"p d"T") for an isobaric heating process, where "V" is the volume after the isobaric heating. In the paper, the authors explain in detail how to develop Eq. (3), taking the third-order Birch–Murnaghan equation as an example.
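The scheme can be sketched numerically as follows (Python; the Birch–Murnaghan parameters and the constant placeholder for "α"p below are assumptions made here purely for illustration, not the paper's fitted values).
<syntaxhighlight lang="python">
import numpy as np

V0, K0, K0p = 74.7, 160.0, 4.0        # assumed MgO-like V0 (A^3), K0 (GPa), K0'
T0 = 300.0

def birch_murnaghan_3(V):
    # Third-order Birch-Murnaghan pressure at room temperature.
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (x ** 7 - x ** 5) * (1.0 + 0.75 * (K0p - 4.0) * (x ** 2 - 1.0))

def alpha_p(T):
    # Placeholder pressure-dependent thermal expansion; here simply a constant.
    return 3.1e-5 * np.ones_like(np.asarray(T, dtype=float))

def pressure(V, T, n=201):
    # Remove the thermal expansion to obtain the "cold" volume V_M, then evaluate the
    # room-temperature equation of state at V_M.
    Ts = np.linspace(T0, T, n)
    ys = alpha_p(Ts)
    integral = float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(Ts)))
    V_M = V * np.exp(-integral)
    return birch_murnaghan_3(V_M)

print(round(pressure(74.7, 300.0), 3))    # 0.0 GPa at the reference state
print(round(pressure(76.0, 1300.0), 2))   # a heated, slightly expanded state
</syntaxhighlight>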
To partially validate the pressure-dependent thermal expansion equation of state, the authors collected a set of MgO x-ray diffraction data at various temperatures at ambient pressure. At ambient pressure "P" = 0 GPa is known, so the volume, pressure and temperature are all given. The authors then predict the pressure from the given (V, T) using the pressure-dependent thermal expansion equation of state. The predicted pressures match the known experimental value of 0 GPa, see Figure 2. In addition to MgO, the authors demonstrate that Au shows a similar trend. Validation of the pressure-dependent thermal expansion equation of state at high "P"-"T" conditions is still required.
The pressure-dependent "α"p has to be determined from an isobaric heating process. It has been reported that membrane-driven heating in a diamond anvil cell (DAC) at high "P"-"T" is isobaric. The authors of the paper propose a reversible isobaric heating concept, in which the plotted heating data points and cooling data points lie on the same curve. The authors consider such a heating and cooling process to be very close to ideally isobaric. A cartoon plot of the reversible heating/cooling proposed in the paper is shown as Fig. 3.
In the paper, the authors demonstrated the reversible isobaric heating concept with MgO at 9.5 GPa. In a reversible heating process no pressure determination at high "P"-"T" is required, thus avoiding the difficulty of accurately determining the pressure at high "P"-"T".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P(V, T) = P(V, T_0) + P_\\text{th}(V, T) "
},
{
"math_id": 1,
"text": " P(V, T_0) "
},
{
"math_id": 2,
"text": " P_\\text{th}(V, T) "
},
{
"math_id": 3,
"text": "\\int\\alpha_{\\text{sample}}K_{\\text{sample}}\\mathrm d T\\neq\\int \\alpha_{\\text{gauge}}K_{\\text{gauge}}\\mathrm d T,"
},
{
"math_id": 4,
"text": " P = f_{-1} (T_0,V_1,V_0,K_0,K_0') "
},
{
"math_id": 5,
"text": " P=f_{-1} (T_0,V_M,V_0,K_0,K_0') "
}
]
| https://en.wikipedia.org/wiki?curid=74324439 |
74325364 | Kato's inequality | Inequality relating to the Laplace operator
In functional analysis, a subfield of mathematics, Kato's inequality is a distributional inequality for the Laplace operator or certain elliptic operators. It was proven in 1972 by the Japanese mathematician Tosio Kato.
The original inequality is for some degenerate elliptic operators. This article treats the special (but important) case for the Laplace operator.
Inequality for the Laplace operator.
Let formula_0 be a bounded and open set, and formula_1 such that formula_2. Then the following holds
formula_3 in formula_4,
where
formula_5
formula_6 is the space of locally integrable functions – i.e., functions that are integrable on every "compact" subset of their domains of definition.
formula_7 in formula_4
where formula_8 and formula_9 is the indicator function of the set on which formula_10 is nonnegative.
formula_3 in formula_12. | [
{
"math_id": 0,
"text": "\\Omega\\subset \\R^d"
},
{
"math_id": 1,
"text": "f\\in L^1_{\\operatorname{loc}}(\\Omega)"
},
{
"math_id": 2,
"text": "\\Delta f\\in L^1_{\\operatorname{loc}}(\\Omega)"
},
{
"math_id": 3,
"text": "\\Delta |f| \\geq \\operatorname{Re}\\left((\\operatorname{sgn}\\overline f) \\Delta f\\right)\\quad"
},
{
"math_id": 4,
"text": "\\;\\mathcal{D}'(\\Omega)"
},
{
"math_id": 5,
"text": "\\operatorname{sgn}\\overline f=\\begin{cases}\\frac{\\overline{f(x)}}{|f(x)|} & \\text{if }f\\neq 0\\\\\n0 & \\text{if }f=0.\n\\end{cases}"
},
{
"math_id": 6,
"text": "L^1_{\\operatorname{loc}}"
},
{
"math_id": 7,
"text": "\\Delta f^+ \\geq \\operatorname{Re}\\left(1_{[f\\geq 0]} \\Delta f\\right)\\quad"
},
{
"math_id": 8,
"text": "f^+=\\operatorname{max}(f,0)"
},
{
"math_id": 9,
"text": "1_{[f\\geq 0]}"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "\\Omega"
},
{
"math_id": 12,
"text": "\\;\\mathcal{D}'(\\{f\\neq 0\\})"
}
]
| https://en.wikipedia.org/wiki?curid=74325364 |
7433141 | Eight-foot pitch | Standard pitch designation
An organ pipe, or a harpsichord string, designated as eight-foot pitch (8′) is sounded at standard, ordinary pitch. For example, the A above middle C in eight-foot pitch would be sounded at 440 Hz (or at some similar value, depending on how concert pitch was set at the time and place the organ or harpsichord was made).
Similar terms.
Eight-foot pitch may be contrasted with four-foot pitch (4′; one octave above the standard), two-foot pitch (2′; two octaves above the standard), and sixteen-foot pitch (16′; one octave below the standard). The latter three pitches are often sounded (by extra pipes or strings) along with an eight-foot pitch pipe or string, as a way of enriching the tonal quality. The numbers just mentioned largely exhaust the possibilities for harpsichords, but in organs a far greater variety is possible; see Organ stop.
These lengths can all be obtained by successive doubling because, all else being equal, a pipe or string that is double the length of another will vibrate at a pitch one octave lower.
Choice of length.
The particular length "eight feet" is based on the approximate length of an organ pipe sounding the pitch two octaves below middle C, the bottom note on an organ keyboard. This may be calculated as follows.
If a pipe is open at both ends, as is true of most organ pipes, its fundamental frequency "f" can be calculated (approximately) as follows:
formula_0
where "v" is the speed of sound and "l" is the length of the pipe.
If "v" is assumed to be 343 m/s (the speed of sound at sea level, with temperature of 20 °C), and the pipe length "l" is assumed to be eight feet (2.44 m), then the formula yields the value of 70.4 hertz (Hz; cycles per second). This is not far from the pitch of the C two octaves below 440 Hz, which (when concert pitch is set at A = 440 Hz) is 65.4 Hz. The discrepancy may be related to various factors, including effects of pipe diameter, the historical differing definitions of the length of the foot, and variations in tuning prior to the setting of A = 440 Hz as the standard pitch in the 20th century. | [
{
"math_id": 0,
"text": "f=\\frac{v}{2l}"
}
]
| https://en.wikipedia.org/wiki?curid=7433141 |
74341249 | Frequency principle/spectral bias | Phenomenon observed in the study of Artificial Neural Networks
The frequency principle/spectral bias is a phenomenon observed in the study of Artificial Neural Networks(ANNs), specifically deep neural networks(DNNs). It describes the tendency of deep neural networks to fit target functions from low to high frequencies during the training process.
This phenomenon is referred to as the frequency principle (F-Principle) by Zhi-Qin John Xu et al. or spectral bias by Nasim Rahaman et al. The F-Principle can be robustly observed in DNNs, regardless of overparametrization. A key mechanism of the F-Principle is that the regularity of the activation function translates into the decay rate of the loss function in the frequency domain.
The discovery of the frequency principle has inspired the design of DNNs that can quickly learn high-frequency functions. This has applications in scientific computing, image classification, and point cloud fitting problems. Furthermore, it provides a means to comprehend phenomena in practical applications and has inspired numerous studies on deep learning from the frequency perspective.
Main results (informal).
Experimental results.
In one-dimensional problems, the discrete Fourier transform (DFT) of the target function and of the DNN output can be computed, and one can observe from Fig. 1 that the DNN output (blue line) fits the low-frequency components faster than the high-frequency ones.
In two-dimensional problems, Fig. 2 uses a DNN to fit an image of the cameraman. The DNN starts learning from a coarse image and produces a more detailed image as training progresses. This demonstrates learning from low to high frequencies, which is analogous to how the biological brain remembers an image. This example shows the 2D frequency principle, which can be exploited for image restoration with DNNs by leveraging their preference for low frequencies, such as in inpainting tasks. However, it is important to account for insufficient learning of high-frequency structures. To address this limitation, certain algorithms have been developed, which are introduced in the Applications section.
In high-dimensional problems, one can use projection method to visualize the frequency convergence in one particular direction or use Gaussian filter to roughly see the convergence of the low-frequency part and the high-frequency part.
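The one-dimensional experiment can be reproduced qualitatively with a short self-contained script. The following Python/NumPy sketch (architecture, initialisation and learning rate are arbitrary choices made here, not taken from the cited papers) trains a small tanh network on a target containing one low and one high frequency and prints the residual spectrum at those two frequencies; the F-Principle predicts that the low-frequency entry decays first.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Target with a low-frequency (0.5) and a high-frequency (2.5) component.
n = 256
x = np.linspace(-1.0, 1.0, n).reshape(-1, 1)
y = np.sin(np.pi * x) + 0.5 * np.sin(5 * np.pi * x)

# Two-layer tanh network trained by full-batch gradient descent on the mean squared error.
h = 200
W1 = rng.normal(0, 1.0, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 1)); b2 = np.zeros(1)
lr = 1e-3

freqs = np.fft.rfftfreq(n, d=x[1, 0] - x[0, 0])
lo = int(np.argmin(np.abs(freqs - 0.5)))
hi = int(np.argmin(np.abs(freqs - 2.5)))

for step in range(20001):
    z = np.tanh(x @ W1 + b1)
    y_hat = z @ W2 + b2
    err = y_hat - y
    gW2 = z.T @ err / n; gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z ** 2)
    gW1 = x.T @ dz / n; gb1 = dz.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    if step % 5000 == 0:
        E = np.abs(np.fft.rfft((y_hat - y).ravel()))
        print(step, round(float(E[lo]), 3), round(float(E[hi]), 3))  # low- vs high-frequency residual
</syntaxhighlight>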
Theoretical results.
Based on the following assumptions, i.e., i) certain regularity of target function, sample distribution function and activation function; ii) bounded training trajectory with loss convergence, Luo et al. prove that the change of high-frequency loss over the total loss decays with the separated frequency with a certain power, which is determined by the regularity assumption. A key aspect of the proof is that composite functions maintain a certain regularity, causing decay in the frequency domain. Thus this result can be applied to general network structures with multiple layers. While this characterization of the F-Principle is very general, it is too coarse-grained to differentiate the effects of network structure or special properties of DNNs. It provides only a qualitative understanding rather than quantitatively characterizing differences.
There is a continuous framework for studying machine learning which suggests that the gradient flows of neural networks are well-behaved flows that obey the F-Principle. This is because they are integral equations, which have higher regularity. The increased regularity of integral equations leads to faster decay in the Fourier domain.
Applications.
Algorithms designed to overcome the challenge of high-frequency.
Phase shift DNN: PhaseDNN converts the high-frequency components of the data downward to a low-frequency spectrum for learning, and then converts the learned low-frequency representation back to the original high frequencies.
Adaptive activation functions: Adaptive activation functions replace the activation function formula_0 by formula_1, where formula_2 is a fixed scale factor with formula_3 and formula_4 is a trainable variable shared for all neurons.
Multi-scale DNN: To alleviate the high-frequency difficulty for high-dimensional problems, a Multi-scale DNN (MscaleDNN) method considers the frequency conversion only in the radial direction. The conversion in the frequency space can be done by scaling, which is equivalent to an inverse scaling in the spatial space.
The first kind of MscaleDNN takes the following form formula_5
where formula_6, formula_7, formula_8 is the number of neurons in the formula_9-th hidden layer, formula_10, formula_11, formula_12 is a scalar function, formula_13 denotes entry-wise operation, formula_14 is the Hadamard product, and formula_15, where formula_16 and either formula_17 or formula_18. This structure is called Multi-scale DNN-1 (MscaleDNN-1).
The second kind of MscaleDNN, denoted MscaleDNN-2 in Fig. 3, is a sum of formula_19 subnetworks, in which each scaled input goes through its own subnetwork. In MscaleDNN-2, weight matrices from formula_20 to formula_21 are block diagonal. Again, the scale coefficient is formula_17 or formula_18.
Fourier feature network: A Fourier feature network maps the input formula_22 to formula_23 for image reconstruction tasks. formula_24 is then used as the input to the neural network. An extended Fourier feature network has been proposed for PDE problems, where the entries formula_25 are selected from different ranges. Ben Mildenhall et al. successfully applied this multiscale Fourier feature input in neural radiance fields for view synthesis.
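A minimal sketch of such an input mapping is shown below (Python/NumPy; the standard construction pairs each cosine with a sine, and the frequency scale of 10 used here is an arbitrary assumption for illustration).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B, a=None):
    # gamma(x): concatenated a_i*cos(2*pi*b_i.x) and a_i*sin(2*pi*b_i.x) for rows b_i of B.
    a = np.ones(B.shape[0]) if a is None else a
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([a * np.cos(proj), a * np.sin(proj)], axis=-1)

x = rng.uniform(0.0, 1.0, (5, 2))         # five 2-D input points
B = rng.normal(0.0, 10.0, (64, 2))        # random frequencies b_i; a larger scale emphasises higher frequencies
print(fourier_features(x, B).shape)       # (5, 128); this vector replaces x as the network input
</syntaxhighlight>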
Frequency perspective for understanding experimental phenomena.
Compression phase: The F-Principle explains the compression phase in information plane. The entropy or information quantifies the possibility of output values, i.e., more possible output values lead to a higher entropy. In learning a discretized function, the DNN first fits
the continuous low-frequency components of the discretized function, i.e., large entropy state. Then, the DNN output tends to be discretized as the network gradually captures the high-frequency components, i.e., entropy decreasing. Thus, the compression phase appears in the information plane.
Increasing complexity: The F-Principle also explains the increasing complexity of DNN output during the training.
Strength and limitation: The F-Principle points out that deep neural networks are good at learning low-frequency functions but difficult to learn high-frequency functions.
Early-stopping trick: As noise is often dominated by high-frequency, with early-stopping, a neural network with spectral bias can avoid learn high-frequency noise.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma(x)"
},
{
"math_id": 1,
"text": "\\sigma(\\mu ax)"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "\\mu\\geq1"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "\n f(\\boldsymbol{x};\\boldsymbol{\\theta}) = \\boldsymbol{W}^{[L-1]} \\sigma\\circ(\\cdots (\\boldsymbol{W}^{[1]} \\sigma\\circ(\\boldsymbol{K}\\odot(\\boldsymbol{W}^{[0]} \\boldsymbol{x}) + \\boldsymbol{b}^{[0]} ) + \\boldsymbol{b}^{[1]} )\\cdots)+\\boldsymbol{b}^{[L-1]}, \n"
},
{
"math_id": 6,
"text": "\\boldsymbol{x}\\in\\mathbb{R}^d"
},
{
"math_id": 7,
"text": "\\boldsymbol{W}^{[l]}\\in\\mathbb{R}^{m_{l+1}\\times m_{l}}"
},
{
"math_id": 8,
"text": "m_l"
},
{
"math_id": 9,
"text": "l"
},
{
"math_id": 10,
"text": "m_0=d"
},
{
"math_id": 11,
"text": "\\boldsymbol{b}^{[l]}\\in\\mathbb{R}^{m_{l+1}}"
},
{
"math_id": 12,
"text": "\\sigma"
},
{
"math_id": 13,
"text": "\\circ"
},
{
"math_id": 14,
"text": "\\odot"
},
{
"math_id": 15,
"text": "\\boldsymbol{K}=(\\underbrace{a_1,a_1,\\cdots,a_1}_{\\text{1st part}},a_2,\\cdots,a_{i-1},\\underbrace{a_i,a_i,\\cdots,a_i}_{\\text{ith part}},\\cdots,\\underbrace{a_{N},a_{N}\\cdots,a_{N}}_{\\text{Nth part}})^T"
},
{
"math_id": 16,
"text": "\\boldsymbol{K}\\in\\mathbb{R}^{m_{1}}"
},
{
"math_id": 17,
"text": "a_i=i"
},
{
"math_id": 18,
"text": "a_i=2^{i-1}"
},
{
"math_id": 19,
"text": "N"
},
{
"math_id": 20,
"text": "W^{[1]}"
},
{
"math_id": 21,
"text": "W^{[L-1]}"
},
{
"math_id": 22,
"text": "\\boldsymbol{x}"
},
{
"math_id": 23,
"text": "\\gamma(\\boldsymbol{x})=[a_1 \\cos(2\\pi \\boldsymbol{b}_{1}^{T}\\boldsymbol{x}), a_1 \\cos(2\\pi \\boldsymbol{b}_{1}^{T}\\boldsymbol{x}),\\cdots ,a_m \\cos(2\\pi \\boldsymbol{b}_{m}^{T}\\boldsymbol{x}),a_m \\cos(2\\pi \\boldsymbol{b}_{m}^{T}\\boldsymbol{x})]"
},
{
"math_id": 24,
"text": "\\gamma(\\boldsymbol{x})"
},
{
"math_id": 25,
"text": "b_i"
}
]
| https://en.wikipedia.org/wiki?curid=74341249 |
74341487 | Nadel vanishing theorem | Vanishing theorem for multiplier ideals
In mathematics, the Nadel vanishing theorem is a global vanishing theorem for multiplier ideals, introduced by A. M. Nadel in 1989. It generalizes the Kodaira vanishing theorem using singular metrics with (strictly) positive curvature, and also it can be seen as an analytical analogue of the Kawamata–Viehweg vanishing theorem.
Statement.
The theorem can be stated as follows. Let X be a smooth complex projective variety, D an effective formula_0-divisor and L a line bundle on X, and let formula_1 be the associated multiplier ideal sheaf. Assume that formula_2 is big and nef. Then
formula_3
Nadel vanishing theorem in the analytic setting: Let formula_4 be a Kähler manifold (X a reduced complex space (complex analytic variety) with a Kähler metric) that is weakly pseudoconvex, and let F be a holomorphic line bundle over X equipped with a singular hermitian metric of weight formula_5. Assume that formula_6 for some continuous positive function formula_7 on X. Then
formula_8
For an arbitrary plurisubharmonic function formula_9 on formula_10, the multiplier ideal sheaf formula_11 is coherent on formula_12, and therefore its zero variety is an analytic set.
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "\\mathcal{J}(D)"
},
{
"math_id": 2,
"text": "L - D"
},
{
"math_id": 3,
"text": "H^{i} \\left(X, \\mathcal{O}_{X}(K_X + L) \\otimes \\mathcal{J}(D) \\right) = 0 \\;\\; \\text{for} \\;\\; i > 0."
},
{
"math_id": 4,
"text": "(X, \\omega)"
},
{
"math_id": 5,
"text": "\\varphi"
},
{
"math_id": 6,
"text": "\\sqrt{-1} \\cdot \\theta(F) > \\varepsilon \\cdot \\omega"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "H^{i} \\left(X, \\mathcal{O}_{X}(K_X + F) \\otimes \\mathcal{J}(\\varphi) \\right) = 0 \\;\\; \\text{for} \\;\\; i > 0."
},
{
"math_id": 9,
"text": "\\phi"
},
{
"math_id": 10,
"text": "\\Omega \\subset X"
},
{
"math_id": 11,
"text": "\\mathcal{J}(\\phi)"
},
{
"math_id": 12,
"text": "\\Omega"
}
]
| https://en.wikipedia.org/wiki?curid=74341487 |
743441 | Bouguer anomaly | Type of gravity anomaly
In geodesy and geophysics, the Bouguer anomaly (named after Pierre Bouguer) is a gravity anomaly, corrected for the height at which it is measured and the attraction of terrain. The height correction alone gives a free-air gravity anomaly.
Definition.
The Bouguer anomaly formula_0 is defined as:
formula_1
Here,
The free-air anomaly formula_2, in its turn, is related to the observed gravity formula_5 as follows:
formula_6
where:
Reduction.
A Bouguer reduction is called "simple" (or "incomplete") if the terrain is approximated by an infinite flat plate called the Bouguer plate. A "refined" (or "complete") Bouguer reduction removes the effects of terrain more precisely. The difference between the two is called the "(residual) terrain effect" (or "(residual) terrain correction") and is due to the differential gravitational effect of the unevenness of the terrain; it is always negative.
Simple reduction.
The gravitational acceleration formula_9 outside a Bouguer plate is perpendicular to the plate and towards it, with magnitude "2πG" times the mass per unit area, where formula_10 is the gravitational constant. It is independent of the distance to the plate (as can be proven most simply with Gauss's law for gravity, but can also be proven directly with Newton's law of gravity). The value of formula_10 is 6.674 × 10−11 m3 kg−1 s−2, so formula_9 is 4.19 × 10−10 m3 kg−1 s−2 times the mass per unit area. Using 1 mGal = 10−5 m s−2, this is 4.19 × 10−5 mGal m2 kg−1 times the mass per unit area. For mean rock density (2670 kg m−3) this gives 0.1119 mGal m−1.
The Bouguer reduction for a Bouguer plate of thickness formula_11 is
formula_12
where formula_13 is the density of the material and formula_10 is the constant of gravitation. On Earth the effect on gravity of elevation is 0.3086 mGal m−1 decrease when going up, minus the gravity of the Bouguer plate, giving the "Bouguer gradient" of 0.1967 mGal m−1.
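A short numerical check of these figures (the gravitational constant, mean rock density and plate thickness below are standard values assumed here):
<syntaxhighlight lang="python">
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0           # mean rock density, kg/m^3
H = 100.0              # Bouguer plate thickness, m

delta_g_B = 2 * math.pi * rho * G * H       # Bouguer plate attraction, m/s^2
print(round(delta_g_B / 1e-5, 2))           # about 11.2 mGal for a 100 m plate, i.e. about 0.112 mGal per metre

bouguer_gradient = 0.3086 - 2 * math.pi * rho * G / 1e-5
print(round(bouguer_gradient, 3))           # about 0.197 mGal per metre, consistent with the value quoted above
</syntaxhighlight>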
More generally, for a mass distribution with the density depending on one Cartesian coordinate "z" only, gravity for any "z" is 2π"G" times the difference in mass per unit area on either side of this "z" value. A combination of two parallel infinite plates of equal mass per unit area does not produce any gravity between them.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g_B"
},
{
"math_id": 1,
"text": " g_B = g_{F} - \\delta g_B + \\delta g_T "
},
{
"math_id": 2,
"text": "g_F"
},
{
"math_id": 3,
"text": "\\delta g_B"
},
{
"math_id": 4,
"text": "\\delta g_T"
},
{
"math_id": 5,
"text": "g_{obs}"
},
{
"math_id": 6,
"text": " g_F = g_{obs} - g_\\lambda + \\delta g_F"
},
{
"math_id": 7,
"text": "g_\\lambda"
},
{
"math_id": 8,
"text": "\\delta g_F"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "H"
},
{
"math_id": 12,
"text": " \\delta g_B = 2\\pi\\rho G H "
},
{
"math_id": 13,
"text": "\\rho"
}
]
| https://en.wikipedia.org/wiki?curid=743441 |
74356806 | General Concept Lattice | The General Concept Lattice (GCL) proposes a novel general construction of concept hierarchy from formal context, where the conventional Formal Concept Lattice based on Formal Concept Analysis (FCA) only serves as a substructure.
The formal context is a data table of heterogeneous relations illustrating how objects carry attributes. By analogy with a truth-value table, every formal context can be developed into its fully extended version, which includes all the columns corresponding to attributes constructed, by means of Boolean operations, out of the given attribute set. The GCL is based on the "extended" formal context, which comprehends the full information content of the formal context in the sense that it incorporates whatever the formal context should consistently imply. Noteworthily, different formal contexts may give rise to the same extended formal context.
Background.
The GCL claims to take into account the extended formal context for the preservation of information content. Consider describing a three-ball system (3BS) with three distinct colours (formula_0red, formula_1green, formula_2blue). According to "Table 1", one may refer to different attribute sets, say, formula_3, formula_4 or formula_5 to reach different formal contexts. The concept hierarchy for the 3BS is supposed to be unique regardless of how the 3BS is described. However, the FCA exhibits different formal concept lattices subject to the chosen formal contexts for the 3BS, see "Fig. 1". In contrast, the GCL is an invariant lattice structure with respect to these formal contexts since they can infer each other and ultimately entail the same information content.
In information science, the Formal Concept Analysis (FCA) promises practical applications in various fields based on the following fundamental characteristics.
The FCL does not appear to be the only lattice applicable to the interpretation of data table. Alternative concept lattices subject to different derivation operators based on the notions relevant to the Rough Set Analysis have also been proposed. Specifically, the object-oriented concept lattice, which is referred to as the rough set lattice (RSL) afterwards, is found to be particularly instructive to supplement the standard FCA in further understandings of the formal context.
Consequently, there are two crucial points to be contemplated.
The GCL accomplishes a sound theoretical foundation for the concept hierarchies acquired from formal context. Maintaining the generality that preserves the information, the GCL underlies both the FCL and RSL, which correspond to substructures at particular restrictions. Technically, the GCL would be reduced to the FCL and RSL when restricted to conjunctions and disjunctions of elements in the referred attribute set (formula_11), respectively. In addition, the GCL unveils extra information complementary to the results via the FCL and RSL. Surprisingly, the implementation of formal context via GCL is much more manageable than those via FCL and RSL.
Related mathematical formulations.
Algebras of derivation operators.
The derivation operators constitute the building blocks of concept lattices and thus deserve distinctive notations. Subject to a formal context concerning the object set formula_9 and attribute set formula_11,
formula_12
formula_13
formula_14
are considered as different modal operators (Sufficiency, Necessity and Possibility, respectively) that generalise the FCA. For notations, formula_15, the operator adopted in the standard FCA, follows Bernhard Ganter and R. Wille; formula_16 as well as formula_17 follows Y. Y. Yao. By formula_18, i.e., formula_19 the object formula_20 carries the attribute formula_21 as its property, which is also referred to as formula_22 where formula_23 is the "set of all objects carrying the attribute" formula_21.
With formula_24 it is straightforward to check that
formula_25 formula_26
where the same relations hold if given in terms of formula_27.
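The three derivation operators can be made concrete with a small script. The following Python sketch is an illustration added here: it uses the usual definitions of the sufficiency, possibility and necessity operators on object sets and a toy encoding of the three-ball system (one distinct colour per ball), which is an assumption of this example rather than the article's Table 1.
<syntaxhighlight lang="python">
import numpy as np

# Toy context for the three-ball system: rows are balls 1-3, columns are (red, green, blue).
attributes = ["red", "green", "blue"]
I = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=bool)

def sufficiency(X):
    # Attributes carried by every object in X (the usual FCA derivation).
    rows = I[sorted(X)]
    return {attributes[j] for j in range(I.shape[1]) if rows[:, j].all()}

def possibility(X):
    # Attributes carried by at least one object in X.
    rows = I[sorted(X)]
    return {attributes[j] for j in range(I.shape[1]) if rows[:, j].any()}

def necessity(X):
    # Attributes whose full carrier set of objects is contained in X.
    return {attributes[j] for j in range(I.shape[1])
            if set(np.flatnonzero(I[:, j])) <= set(X)}

X = {0, 1}                      # balls 1 and 2
print(sufficiency(X))           # set(): no colour is shared by both balls
print(possibility(X))           # {'red', 'green'}
print(necessity(X))             # {'red', 'green'}: blue's only carrier (ball 3) lies outside X
</syntaxhighlight>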
Two Galois lattices.
Galois connections.
From the above algebras, there exist different types of Galois connections, e.g.,
(1) formula_28 formula_29, (2) formula_30 formula_31
and (3) formula_32 formula_33 that corresponds to (2) when one replaces formula_34 and formula_35. Note that (1) and (2) enable different object-oriented constructions for the concept hierarchies FCL and RSL, respectively. Note that (3) corresponds to the attribute-oriented construction where the roles of object and attribute in the RSL are exchanged. The FCL and RSL apply to different 2-tuple formula_36 concept collections that manifest different well-defined partial orderings.
Two concept hierarchies.
Given as a concept, the 2-tuple formula_36 is in general constituted by an "extent" formula_37 and an "intent" formula_38, which should be distinguished when applied to FCL and RSL. The concept formula_39 is furnished by formula_40 based on (1) while formula_41 is furnished by formula_42 based on (2). In essence, there are two Galois lattices based on different orderings of the two collections of concepts as follows.
formula_43 entails formula_44 and formula_45
since formula_46 iff formula_47, and formula_48 iff formula_49.
formula_50 entails formula_44 and formula_51
since formula_46 iff formula_52, and formula_53 iff formula_54.
Common extents of FCL and RSL.
Every "attribute listed in the formal context" provides an extent for FCL and RSL simultaneously via "the object set carrying the attribute". Though the extents for FCL and for RSL do not coincide totally, every formula_23 for formula_55 is known to be a common extent of FCL and RSL. This turns up from the main results in FCL (Formale Begriffsanalyse#Hauptsatz der Formalen Begriffsanalyse) and RSL: every formula_56 (formula_38) is an extent for FCL and formula_57is an extent for RSL. Note that choosing formula_58 gives rise to formula_59.
Two types of informative implications.
The consideration of the attribute set-to-set implication formula_60 (formula_61) via FCL has an intuitive interpretation: every object possessing all the attributes in formula_62 possesses all the attributes in formula_63, in other words formula_64. Alternatively, one may consider formula_65 based on the RSL in a similar manner: the set of all objects carrying "any" of the attributes in formula_62 is contained in the set of all objects carrying "any" of the attributes in formula_63, in other words formula_66. It is apparent that formula_60 and formula_65 relate different pairs of attribute sets and cannot be expressed in terms of each other.
Extension of formal context.
For every formal context one may acquire its extended version, deduced in the sense of completing a truth-value table. It is instructive to explicitly label the object/attribute dependence of the formal context, say, formula_67 rather than formula_68, since one may have to investigate more than one formal context. As is illustrated in "Table 1", formula_7 can be employed to deduce the extended version formula_8, where formula_69 is the set of all attributes constructed out of elements in formula_70 by means of Boolean operations. Note that formula_7 includes three columns reflecting the use of formula_3, and formula_6 the attribute set formula_4.
Obtaining the general concept lattice.
Observations based on mathematical facts.
Intents in terms of single attributes.
The FCL and RSL will not be altered if their intents are interpreted as single attributes.
formula_39 can be understood as formula_71 with formula_72 (the conjunction of all elements in formula_73), where formula_74 plays the role of formula_75 since formula_76.
formula_41 can be understood as formula_77 with formula_78 (the disjunction of all elements in formula_73), where formula_79 plays the role of formula_80 since formula_81.
Here, the dot product formula_82 stands for the conjunction (the dot is often omitted for compactness) and the summation formula_83 for the disjunction, which are notations in the Curry-Howard style. Note that the orderings become
formula_84 and formula_85, both implemented by formula_86 formula_87 formula_88.
Implications from single attribute to single attribute.
Concerning the implications extracted from formal context,
formula_89 serves as the general form of implication relations available from the formal context, which holds for any pair of formula_90 fulfilling formula_91.
Note that formula_91 turns out to be trivial if formula_92, which entails formula_93. Intuitively, every object carrying formula_94 is an object carrying formula_95, which means the implication "any object having the property" formula_94 "must also have the property" formula_95. In particular,
formula_60 can be interpreted as formula_89 with formula_96 and formula_97,
formula_65 can be interpreted as formula_89 with formula_98 and formula_99,
where formula_64 and formula_66 collapse into formula_91.
Lattice of 3-tuple concepts with double Galois connection.
When extended to formula_100, the algebras of derivation operators remain "formally" unchanged, apart from the generalisation "from" formula_55 "to" formula_101 which is signified in terms of the "replacements" formula_102, formula_103 and formula_104. The concepts under consideration become then formula_105 and formula_106, where formula_37 and formula_107, which are constructions allowable by the two Galois connections i.e. formula_108 and formula_109, respectively. Henceforth,
formula_110 and formula_111 for formula_105, formula_112 and formula_113 for formula_106.
The extents for the two concepts now "coincide exactly". All the attributes in formula_69 are listed in "the formal context" formula_100, and each contributes a common extent for the FCL and the RSL. Furthermore, the collection of these common extents formula_114 amounts to formula_115, which exhausts all the possible unions of the "minimal object sets discernible by the formal context". Note that each formula_116 collects "objects of the same property", see "Table 2". One may then join formula_105 and formula_106 into a 3-tuple with common extent:
formula_117 where formula_118, formula_119 and formula_120.
Note that formula_121 are introduced in order to differentiate the two intents. Clearly, the number of these 3-tuples equals the cardinality of the set of common extents, which counts formula_122. Moreover, formula_117 manifests a well-defined ordering. For formula_123, where formula_124 and formula_125,
formula_126 iff formula_127 and formula_128 and formula_129.
Emergence of the GCL.
While it is generically impossible to determine formula_121 subject to formula_130, the structure of concept hierarchy need not rely on these intents directly. An efficient way to implement the concept hierarchy for formula_117 is to consider intents in terms of single attributes.
Let henceforth formula_131 and formula_132. Upon introducing formula_133, one may check that formula_134 and formula_135, formula_136. Therefore,
formula_137,
which is a "closed interval" bounded from below by formula_138 and from above by formula_139 since formula_140. Moreover,
formula_141 iff formula_142, formula_143 iff formula_144 iff formula_145.
In addition, formula_146, namely, the collection of intents formula_147 exhausts all the generalised attributes formula_69, in comparison to formula_148. Then, the GCL enters as the lattice structure formula_149 based on the formal context via formula_100, where formula_150 and the partial order formula_151 is given as follows:
formula_152 iff formula_153 and formula_154 and formula_155.
The meet formula_156 is given by formula_158, where formula_159
and the join formula_157 by formula_160, where formula_161
The top and bottom elements formula_162 and formula_163 of formula_164 are formula_165, formula_166.
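As a worked instance (a check of my own, not part of the source), take the 3BS of "Table 1" and the two nodes with extents <math>\{1,2\}</math> and <math>\{2,3\}</math>. Using the lower bounds <math>\eta(\{1\})=a\neg b\neg c</math>, <math>\eta(\{2\})=\neg a b\neg c</math> and <math>\eta(\{3\})=\neg a\neg b c</math>, the meet lands on the node with extent <math>\{2\}</math>:
<math display="block">\begin{align}\eta(\{1,2\})\cdot\eta(\{2,3\}) &= (a\neg b\neg c+\neg a b\neg c)\,(\neg a b\neg c+\neg a\neg b c)=\neg a b\neg c=\eta(\{2\}),\\ \rho(\{1,2\})\cdot\rho(\{2,3\}) &= (a+b+\neg c)\,(\neg a+b+c)=b+ac+\neg a\neg c=\rho(\{2\}),\end{align}</math>
in agreement with formula_159.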
Consequence of the general concept lattice.
Manageable general lattice.
The construction of the FCL is known to rely on efficient algorithms, not to mention the construction of the RSL, which has not yet received much attention. Intriguingly, though the GCL furnishes the general structure on which both the FCL and the RSL can be rediscovered, the GCL can be acquired via a simple "readout".
Reading out the lattice.
The completion of the GCL is equivalent to the completion of its intents in terms of the lower and upper bounds formula_167 and formula_168. Each minimal discernible class formula_177 (formula_178) fulfils formula_179 and has the lower bound formula_180, where formula_181; the upper bounds then follow from complementation, since for the pair of nodes formula_171 and formula_172formula_173 one has formula_174 and formula_175, i.e., formula_176.
The above enables the determination of the intents depicted in "Fig. 3" for the 3BS given by "Table 1", where one can read out that formula_183, formula_184 and formula_185. Hence, e.g., formula_186, formula_187. Note that the GCL also appears to be a Hasse diagram due to the resemblance of its extents to a power set. Moreover, each intent formula_188 at formula_169 also exhibits another Hasse diagram isomorphic to the ordering of attributes in the closed interval formula_189. It can be shown that formula_190, where formula_191 with formula_192. Hence, formula_193, making the cardinality formula_194 a constant given as formula_195. Clearly, one may check that formula_196.
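This readout lends itself to a direct implementation. The sketch below (my own illustration, reusing the hypothetical encoding of the 3BS from above) groups objects with identical attribute rows into the minimal discernible classes formula_177, forms the common extents formula_114 as all their unions, and evaluates the constant intent cardinality formula_195.
<syntaxhighlight lang="python">
from itertools import chain, combinations

context = {1: {"a"}, 2: {"b"}, 3: {"c"}}       # hypothetical 3BS encoding
M = {"a", "b", "c"}

rows = {}
for g, attrs in context.items():
    rows.setdefault(frozenset(attrs), set()).add(g)
D = list(rows.values())                        # minimal discernible classes D_k
n_F = len(D)

def all_unions(classes):
    for sub in chain.from_iterable(combinations(classes, r) for r in range(len(classes) + 1)):
        yield set().union(*sub)

E_F = list(all_unions(D))                      # the 2**n_F common extents of the GCL
assert len(E_F) == 2 ** n_F
intent_size = 2 ** (2 ** len(M) - n_F)         # |[X]_F|, equal to 32 for the 3BS
</syntaxhighlight>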
Rediscovering FCL and RSL on the GCL.
The GCL underlies the "original" FCL and RSL subject to formula_182, as one can tell from formula_197 and formula_198. To rediscover a node of the FCL, one looks for a "conjunction of attributes in" formula_70 contained in formula_147, which can be identified within the conjunctive normal form of formula_138 if it exists. Likewise, for the RSL one looks for a "disjunction of attributes in" formula_70 contained in formula_147, which can be found within the disjunctive normal form of formula_139, see "Fig 3".
For instance, from the node formula_199 on the GCL, one finds that formula_200 formula_201 formula_202. Note that formula_203 appears to be the "only" attribute belonging to formula_204 which is simultaneously a conjunction and a disjunction. Therefore, both the FCL and the RSL have the concept formula_205 in common. To illustrate a different situation, formula_206 formula_207 formula_208. Apparently, formula_209 is the attribute emerging as a disjunction of elements in formula_70 which belongs to formula_210, while no attribute composed as a conjunction of elements in formula_70 is found there. Hence, formula_211 cannot be an extent of the FCL; it only constitutes the concept formula_212 for the RSL.
Information content of a formal context.
Informative implications as equivalence due to categorisation.
Non-tautological implication relations signify the information contained in the formal context and are referred to as "informative implications". In general, formula_213 entails the implication formula_214. The implication is informative if formula_215 (i.e. formula_216).
In case it is strictly formula_217, one has formula_218, where formula_219. Then, formula_214 can be replaced by means of formula_220 together with the tautology formula_221. Therefore, what remains to be taken into account is the equivalence formula_222 for some formula_223. Logically, both attributes are properties carried by the same object class; formula_224 reflects that equivalence relation.
All attributes in formula_147 must be mutually implied, which can be implemented, e.g., by formula_225 (in fact, formula_226 since formula_227 is a tautology), i.e., all attributes are equivalent to the lower bound of the intent.
A formula that implements all the informative implications.
Extraction of the implications of type formula_60 from the formal context is known to be complicated; it necessitates the construction of a canonical basis, which does "not" apply to the implications of type formula_65. By contrast, the above equivalence only proposes
"formula_228", which can be restated as "formula_229",
formula_214 is allowed by the formal context iff formula_230 (or formula_231).
Hence, purely algebraic formulae can be employed to determine the implication relations; one need not consult the object-attribute dependence in the formal context, which is the typical effort in finding the canonical basis.
Remarkably, formula_232 and formula_233 are referred to as the "contextual truth" and "falsity", respectively: formula_234, formula_235 and formula_236, as well as formula_237 and formula_238, in analogy with the "conventional truth" 1 and "falsity" 0, which can be identified with formula_239 and formula_240, respectively.
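Computationally, the criterion can be mirrored by working with extents (a sketch of my own, not code from the source): a generalised attribute is represented by the Boolean predicate it induces on an object's attribute row, and formula_214 is allowed exactly when formula_213.
<syntaxhighlight lang="python">
# Test informative implications between generalised attributes via the
# extent criterion, which is equivalent to the algebraic test with 1_eta.
context = {1: {"a"}, 2: {"b"}, 3: {"c"}}       # hypothetical 3BS encoding

def extent(pred):
    """Object set of a generalised attribute given as a Boolean predicate
    over an object's attribute row (combinations of a, b, c)."""
    return {g for g, row in context.items() if pred(row)}

def implies(pred1, pred2):
    return extent(pred1) <= extent(pred2)

# In the 3BS, 'a' implies 'not b and not c' and vice versa:
print(implies(lambda r: "a" in r, lambda r: "b" not in r and "c" not in r))   # True
print(implies(lambda r: "b" not in r and "c" not in r, lambda r: "a" in r))   # True
</syntaxhighlight>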
Beyond the set-to-set implications.
formula_60 and formula_65 are found to be particular forms of formula_214. Assume formula_241 and formula_242 for both cases. By formula_60, an object set carrying all the attributes in formula_243 implies carrying all the attributes in formula_244 "simultaneously", i.e. formula_245. By formula_65, an object set carrying "any" of the attributes in formula_243 implies carrying "some" of the attributes in formula_244, therefore formula_246. Notably, the point of view "conjunction-to-conjunction" has also been emphasised by Ganter when dealing with attribute exploration.
One could overlook significant parts of the logic content of a formal context were it "not" for the considerations based on the GCL. Here, the formal context describing the 3BS given in "Table 1" suggests an extreme case where no implication of the type formula_65 can be found. Nevertheless, one ends up with, e.g., formula_247 (or formula_248), whose meaning appears to be ambiguous. Though it is true that formula_249, one also notices that formula_250 as well as formula_251 formula_252. Indeed, by using the above formula with the formula_253 provided in "Fig. 2", it can be seen that formula_254 formula_255, hence it is formula_256 and formula_257 that underlie formula_249.
Remarkably, the same formula will lead to (1) formula_258 (or formula_259) and (2) formula_260 (or formula_261), where formula_262, formula_263 and formula_264 can be interchanged. Hence, what one has captured from the 3BS is that (1) no two colours can coexist and that (2) there is no colour other than formula_262, formula_263 and formula_264. The two issues are certainly less trivial in the scopes of formula_60 and formula_65.
Rules to assemble or transform implications.
The rules to assemble or transform implications of the type formula_265 are direct consequences of object set inclusion relations. Notably, some of these rules can be reduced to the Armstrong axioms, which pertain to the main considerations of Guigues and Duquenne based on the non-redundant collection of informative implications acquired via the FCL. In particular,
(1) formula_214 and formula_266 formula_267 formula_268
since formula_213 and formula_269 leads to formula_270, i.e., formula_271.
In the case of formula_272, formula_273, formula_274 and formula_275, where formula_276 are sets of attributes, the rule (1) can be re-expressed as Armstrong's composition:
(1') formula_277 and formula_278formula_267 formula_279
formula_280 and formula_281.
The Armstrong axioms are not suited for formula_65, which requires formula_282. This is in contrast to formula_60, for which Armstrong's reflexivity is implemented by formula_283. Nevertheless, a similar "composition" may occur, though it signifies a rule different from (1). Note that one also arrives at
(2) formula_284 and formula_285 formula_267 formula_286
since formula_213 and formula_269 formula_267 formula_287, which gives rise to
(2') formula_288 and formula_289 formula_267formula_290 whenever formula_291, formula_292, formula_293 and formula_294.
Example.
For concreteness, consider the example depicted by "Table 2", which was originally adopted for clarification of the RSL but is worked out here for the GCL.
The GCL structure and the identifications of FCL and RSL on the GCL.
formula_295 formula_296formula_297formula_298formula_299formula_300,
formula_301formula_302formula_303,
formula_304formula_305formula_306, and so forth.
Clearly, one may also check that formula_307.
formula_308formula_309formula_310formula_311 formula_312formula_313formula_314formula_315formula_316formula_317formula_318formula_319,
formula_320formula_321formula_322formula_323formula_324formula_325formula_326.
Within the expression of formula_327 it can be seen that formula_328 formula_329, while within formula_320 it can be seen that formula_330 formula_331 formula_329. Therefore, one finds the concepts formula_332 for the FCL and formula_333 for the RSL. By contrast,
formula_334 formula_335, formula_336formula_337formula_338
with formula_339 formula_340 gives rise to the concept formula_341 for the FCL, but fails to provide an extent for the RSL because formula_342.
Implication relations in general.
formula_343 and formula_344 denote formula_345 and formula_346, respectively.
For the present case, the above relations can be examined via the auxiliary formula:
formula_347 (or formula_348), formula_349 (or formula_350).
Both formula_352 and formula_353, according to the formal context of "Table 2", are interpreted as formula_354, which means formula_352 based on formula_355 and formula_353 based on formula_356.
Note that formula_357formula_358 formula_359formula_360formula_361formula_362. Moreover, formula_354 entails both formula_363 and formula_364, which correspond to formula_365 and formula_366, respectively.
(1) With "formula_367" one may infer the properties of objects of interest from the condition "formula_232" by specifying "formula_368", thereby incorporating abundant informative implications as equivalent relations between any pair of attributes within the interval formula_369, i.e., "formula_370" "formula_371" if formula_372 and formula_373. Note that "formula_367" entails "formula_374" since "formula_375".
For instance, by formula_376 formula_377 formula_378, the relation formula_379 formula_380 is neither of the type formula_60 nor of the type formula_65. Nevertheless, one may also derive, e.g., formula_381, formula_382 and formula_383, which are formula_384, formula_385 and formula_343, respectively. As a further interesting implication, formula_386 entails formula_387 by means of material implication. Namely, for the objects carrying the property formula_203 or formula_388, formula_389 must hold and, in addition, objects carrying the property formula_390 must also carry the property formula_388 and vice versa.
(1') Alternatively, the equivalent formula "formula_391" can be employed to specify the objects of particular interest. In effect, "formula_370" "formula_371" if formula_392 and formula_393.
One may be interested in the properties inferring a "particular consequent", say, formula_394. Consider formula_395 formula_396, giving rise to formula_397 formula_398 according to "Table 2". Clearly, with formula_399 formula_400 formula_401, one has formula_402. This gives rise to many possible antecedents such as formula_403, formula_404, formula_405, formula_406 and so forth.
(2)" formula_232" governs all the implications extractable from the formal context by means of (1) and (1'). Indeed, it plays the role of canonical basis with "one single" implication relation.
"formula_232" can be understood as "formula_407" or equivalently "formula_408", which turns out be the "only" non-redundant implication one needs to deduce all the informative implications from any formal context. The basis "formula_407" or "formula_408" suffices the deduction of all implications as follows. While "formula_409" "formula_407formula_410" and "formula_411" "formula_408formula_412", choosing either "formula_413" or "formula_414" gives rise to "formula_415". Notably, this encompasses (1) and (1') by means of formula_416formula_417 formula_418 for any formula_419, where formula_420 can be identified with some "formula_421" corresponding to one of the 32 nodes on the GCL in "Fig. 4".
"formula_415" develops equivalence, at each single node, for all attributes contained within the interval "formula_422". Moreover, informative implications could also relate different nodes via Hypothetical syllogism by invoking tautology. Typically, "formula_423" "formula_424" whenever "formula_425" "formula_426". This corresponds to the cases considered in (1'): formula_427, formula_428, formula_429 etc. Explicitly, formula_430 is based upon formula_431 and formula_432 where formula_433. Note that formula_434formula_435 and formula_436formula_437 while formula_438formula_439 (also formula_440). Therefore, formula_430. Similarly, formula_441 with formula_442 gives formula_428.
Indeed, "formula_407" or equivalently "formula_408" plays the role of canonical basis with "one single" implication relation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a:="
},
{
"math_id": 1,
"text": "b:="
},
{
"math_id": 2,
"text": "c:="
},
{
"math_id": 3,
"text": "M=\\{a,b,c \\}"
},
{
"math_id": 4,
"text": "M_1=\\{a\\ {\\bf or}\\ b, b\\ {\\bf or}\\ c,c \\ {\\bf or}\\ a \\}"
},
{
"math_id": 5,
"text": "M_2=\\{a\\ {\\bf or}\\ b, b\\ {\\bf or}\\ c,c \\}"
},
{
"math_id": 6,
"text": "F_1(G,M_1)"
},
{
"math_id": 7,
"text": "F_{\\scriptscriptstyle 3BS} (G,M)"
},
{
"math_id": 8,
"text": "F_{\\scriptscriptstyle 3BS}^\\ast (G,M^\\ast)"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "M^\\ast"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": " I\\ :\\ \n\\begin{array}{l}\nX\\subseteq G \\mapsto\\ X^I=\\lbrace m\\in M \\mid gRm,\\ \\forall g \\in X \\rbrace\\subseteq M\\\\\nY\\subseteq M \\mapsto\\ Y^I=\\lbrace g\\in G \\mid gRm,\\ \\forall m \\in Y \\rbrace\\subseteq G\n\\end{array},"
},
{
"math_id": 13,
"text": "\\Box\\ :\\ \n\\begin{array}{l}\nX\\subseteq G \\mapsto\\ X^{\\Box}=\\lbrace m\\in M \\mid \\forall g \\in G, gRm \\implies g\\in X \\rbrace\\subseteq M\\\\\n\nY\\subseteq M \\mapsto\\ Y^{\\Box}=\\lbrace g\\in G \\mid \\forall m \\in M, gRm \\implies m\\in Y \\rbrace\\subseteq G\n\\end{array},"
},
{
"math_id": 14,
"text": "\\Diamond\\ :\\ \n\\begin{array}{c}\n\nX\\subseteq G \\mapsto\\ X^{\\Diamond}= \\lbrace m \\in M \\mid \\exists g\\in G, (gRm,\\ g\\in X) \\rbrace\\subseteq M\\\\\n\nY\\subseteq M \\mapsto\\ Y^{\\Diamond}= \\lbrace g \\in G \\mid \\exists m\\in M, (gRm,\\ m\\in Y) \\rbrace\\subseteq G\n\n\\end{array}"
},
{
"math_id": 15,
"text": "I"
},
{
"math_id": 16,
"text": "\\Box\\mbox{ and }\\Diamond"
},
{
"math_id": 17,
"text": "R"
},
{
"math_id": 18,
"text": "gRm"
},
{
"math_id": 19,
"text": "(g,m)\\in R"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "m"
},
{
"math_id": 22,
"text": "g\\in m^R"
},
{
"math_id": 23,
"text": "m^R"
},
{
"math_id": 24,
"text": "X,X_1,X_2 \\subseteq G \\mbox{ and } X^c:=G\\backslash X"
},
{
"math_id": 25,
"text": "X^{III}=X^I,\\quad\n\\begin{array}{c}\n\nX^{\\Box\\Diamond\\Box}=X^{\\Box}\\\\\n\nX^{\\Diamond\\Box\\Diamond}=X^{\\Diamond}\n\n\\end{array},\\quad\n\n\\begin{array}{c}\n\nX^{c\\Box c}=X^{\\Diamond}\\\\\n\nX^{c\\Diamond c}=X^{\\Box}\n\n\\end{array},"
},
{
"math_id": 26,
"text": "X_1\\subseteq X_2\\iff (X_2)^I\\subseteq (X_1)^I,\\quad\n\\begin{array}{c}\nX_1\\subseteq X_2 \\iff (X_1)^{\\Box}\\subseteq (X_2)^{\\Box}\\\\\n\nX_1\\subseteq X_2\n\n\\iff (X_1)^{\\Diamond}\\subseteq (X_2)^{\\Diamond}\n\n\\end{array},"
},
{
"math_id": 27,
"text": "Y,Y_1,Y_2 \\subseteq M \\mbox{ and } Y^c:=M\\backslash Y"
},
{
"math_id": 28,
"text": "X\\subseteq Y^I "
},
{
"math_id": 29,
"text": "\\iff Y\\subseteq X^I"
},
{
"math_id": 30,
"text": "Y^{\\Diamond}\\subseteq X "
},
{
"math_id": 31,
"text": "\\iff Y\\subseteq X^{\\Box}"
},
{
"math_id": 32,
"text": "X\\subseteq Y^{\\Box}"
},
{
"math_id": 33,
"text": "\\iff X^{\\Diamond}\\subseteq Y"
},
{
"math_id": 34,
"text": "X \\mbox{ for } X^c"
},
{
"math_id": 35,
"text": "Y \\mbox{ for } Y^c"
},
{
"math_id": 36,
"text": "(X,Y)"
},
{
"math_id": 37,
"text": "X\\subseteq G"
},
{
"math_id": 38,
"text": "Y\\subseteq M"
},
{
"math_id": 39,
"text": "(X,Y)_{fcl}"
},
{
"math_id": 40,
"text": "X^I=Y\\mbox{ and }Y^I=X"
},
{
"math_id": 41,
"text": "(X,Y)_{rsl}"
},
{
"math_id": 42,
"text": "X^{\\Box}=Y\\mbox{ and }Y^{\\Diamond}=X"
},
{
"math_id": 43,
"text": "(X_1, Y_1)_{fcl}\\leq (X_2, Y_2)_{fcl}"
},
{
"math_id": 44,
"text": " X_1\\subseteq X_2"
},
{
"math_id": 45,
"text": " Y_2 \\subseteq Y_1"
},
{
"math_id": 46,
"text": "X_1\\subseteq X_2"
},
{
"math_id": 47,
"text": "Y_2=X_2^I\\subseteq X_1^I=Y_1"
},
{
"math_id": 48,
"text": "Y_2\\subseteq Y_1"
},
{
"math_id": 49,
"text": "X_1=Y_1^I\\subseteq Y_2^I=X_2"
},
{
"math_id": 50,
"text": "(X_1, Y_1)_{rsl}\\leq (X_2, Y_2)_{rsl}"
},
{
"math_id": 51,
"text": " Y_1 \\subseteq Y_2"
},
{
"math_id": 52,
"text": "Y_1=X_1^\\Box\\subseteq X_2^\\Box=Y_2"
},
{
"math_id": 53,
"text": "Y_1 \\subseteq Y_2"
},
{
"math_id": 54,
"text": "X_1=Y_1^\\Diamond\\subseteq Y_2^\\Diamond=X_2"
},
{
"math_id": 55,
"text": "m\\in M"
},
{
"math_id": 56,
"text": "Y^I"
},
{
"math_id": 57,
"text": "Y^\\Diamond"
},
{
"math_id": 58,
"text": "Y=\\{m\\}"
},
{
"math_id": 59,
"text": "Y^I=Y^\\Diamond=m^R"
},
{
"math_id": 60,
"text": "A\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} B"
},
{
"math_id": 61,
"text": "A, B\\subseteq M"
},
{
"math_id": 62,
"text": "A"
},
{
"math_id": 63,
"text": "B"
},
{
"math_id": 64,
"text": "A^I\\subseteq B^I"
},
{
"math_id": 65,
"text": "A\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} B"
},
{
"math_id": 66,
"text": "A^\\Diamond\\subseteq B^\\Diamond"
},
{
"math_id": 67,
"text": "F(G,M):=(G, M, I)"
},
{
"math_id": 68,
"text": "\\mathbb{K}:= (G, M, I)"
},
{
"math_id": 69,
"text": "M^\\ast"
},
{
"math_id": 70,
"text": "M"
},
{
"math_id": 71,
"text": "(X,\\mu)_{fcl}"
},
{
"math_id": 72,
"text": "\\mu=\\prod Y"
},
{
"math_id": 73,
"text": "Y"
},
{
"math_id": 74,
"text": "\\ \\begin{smallmatrix} \\prod X^I=\\mu\\\\\n \\mu^R=X\\end{smallmatrix}\n "
},
{
"math_id": 75,
"text": "\n \\begin{smallmatrix} X^I=Y\\\\\n Y^I=X\\end{smallmatrix}"
},
{
"math_id": 76,
"text": "Y^I=(\\prod Y)^R=\\mu^R\\subseteq G"
},
{
"math_id": 77,
"text": "(X,\\mu)_{rsl}"
},
{
"math_id": 78,
"text": "\\mu=\\sum Y"
},
{
"math_id": 79,
"text": "\\ \\begin{smallmatrix} \\sum X^\\Box=\\mu\\\\\n \\mu^R=X\\end{smallmatrix}\n "
},
{
"math_id": 80,
"text": "\n \\begin{smallmatrix} X^{\\Box}=Y\\\\\n Y^{\\Diamond}=X\\end{smallmatrix}"
},
{
"math_id": 81,
"text": "Y^\\Diamond=(\\sum Y)^R=\\mu^R\\subseteq G"
},
{
"math_id": 82,
"text": "\\cdot\\ (\\prod)"
},
{
"math_id": 83,
"text": "+\\ (\\sum)"
},
{
"math_id": 84,
"text": "(X_1,\\mu_1)_{fcl}\\leq (X_2,\\mu_2)_{fcl}"
},
{
"math_id": 85,
"text": "(X_1,\\mu_1)_{rsl}\\leq (X_2,\\mu_2)_{rsl}"
},
{
"math_id": 86,
"text": "\n\n X_1\\subseteq X_2"
},
{
"math_id": 87,
"text": "\n \\iff"
},
{
"math_id": 88,
"text": "\n \\mu_1\\leq \\mu_2\n "
},
{
"math_id": 89,
"text": "\\mu_1\\rightarrow \\mu_2"
},
{
"math_id": 90,
"text": "\\mu_1,\\mu_2\\in M^\\ast"
},
{
"math_id": 91,
"text": "\\mu_1^R\\subseteq \\mu_2^R"
},
{
"math_id": 92,
"text": "\\mu_1\\leq \\mu_2"
},
{
"math_id": 93,
"text": "\\mu_1=\\mu_1\\cdot \\mu_2"
},
{
"math_id": 94,
"text": "\\mu_1"
},
{
"math_id": 95,
"text": "\\mu_2"
},
{
"math_id": 96,
"text": "\\mu_1=\\prod A"
},
{
"math_id": 97,
"text": "\\mu_2=\\prod B"
},
{
"math_id": 98,
"text": "\\mu_1=\\sum A"
},
{
"math_id": 99,
"text": "\\mu_2=\\sum B"
},
{
"math_id": 100,
"text": "F^\\ast(G,M^\\ast)"
},
{
"math_id": 101,
"text": "\\mu \\in M^\\ast"
},
{
"math_id": 102,
"text": "I\\mbox{ by }I^\\ast"
},
{
"math_id": 103,
"text": "\\Box\\mbox{ by }\\Box^\\ast"
},
{
"math_id": 104,
"text": "\\Diamond\\mbox{ by }\\Diamond^\\ast"
},
{
"math_id": 105,
"text": "(X, Y)^\\ast_{fcl}"
},
{
"math_id": 106,
"text": "(X, Y)^\\ast_{rsl}"
},
{
"math_id": 107,
"text": "Y\\subseteq M^\\ast"
},
{
"math_id": 108,
"text": "X\\subseteq Y^{I^\\ast}\\iff Y \\subseteq X^{I^\\ast} "
},
{
"math_id": 109,
"text": "Y^{\\Diamond^\\ast}\\subseteq X \\iff Y\\subseteq X^{\\Box^\\ast}"
},
{
"math_id": 110,
"text": "\n X^{I^\\ast}=Y"
},
{
"math_id": 111,
"text": " Y^{I^\\ast}=X\n "
},
{
"math_id": 112,
"text": "\n X^{\\Box^\\ast}=Y"
},
{
"math_id": 113,
"text": "\n Y^{\\Diamond^\\ast}=X"
},
{
"math_id": 114,
"text": "E_F:=\\{\\mu^R\\mid\\mu\\in M^\\ast\\}"
},
{
"math_id": 115,
"text": " \\{\\bigcup_{k\\in J} D_k\\mid J\\subseteq \\{1\\ldots n_F\\}\\}"
},
{
"math_id": 116,
"text": "D_k"
},
{
"math_id": 117,
"text": "(X,Y^{fcl\\ast},Y^{rsl\\ast})"
},
{
"math_id": 118,
"text": "X^{I^\\ast}=Y^{fcl\\ast}"
},
{
"math_id": 119,
"text": "X^{\\Box^\\ast}={Y^{rsl\\ast}}"
},
{
"math_id": 120,
"text": "{Y^{fcl\\ast}}^{I^\\ast}={Y^{rsl\\ast}}^{\\Diamond^\\ast}=X"
},
{
"math_id": 121,
"text": "Y^{fcl\\ast}\\mbox{ and }Y^{rsl\\ast}"
},
{
"math_id": 122,
"text": "|E_F|=2^{n_F}"
},
{
"math_id": 123,
"text": "X_1, X_2\\in E_F\\subseteq G\\ \n "
},
{
"math_id": 124,
"text": " {Y_1^{fcl\\ast}},{Y_2^{fcl\\ast}}\\subset M^\\ast\n "
},
{
"math_id": 125,
"text": "\n {Y_1^{rsl\\ast}},{Y_2^{rsl\\ast}}\\subset M^\\ast"
},
{
"math_id": 126,
"text": "(X_1,{Y_1^{fcl\\ast}},{Y_1^{rsl\\ast}})\\leq (X_2,{Y_2^{fcl\\ast}},{Y_2^{rsl\\ast}}) "
},
{
"math_id": 127,
"text": "\n \n X_1 \\subseteq X_2"
},
{
"math_id": 128,
"text": " {Y_2^{fcl\\ast}}\\subseteq {Y_1^{fcl\\ast}}"
},
{
"math_id": 129,
"text": " {Y_1^{rsl\\ast}}\\subseteq {Y_2^{rsl\\ast}}"
},
{
"math_id": 130,
"text": "X\\in E_F\\subseteq G\n"
},
{
"math_id": 131,
"text": "\n \\eta(X):=\\prod Y^{fcl\\ast}\n"
},
{
"math_id": 132,
"text": "\n\\rho(X):=\\sum Y^{rsl\\ast}\n"
},
{
"math_id": 133,
"text": "[X]_F:=\\{\\mu\\in M^\\ast\\mid \\mu^R=X\\}"
},
{
"math_id": 134,
"text": "\\prod [X]_F=\\prod Y^{fcl\\ast}"
},
{
"math_id": 135,
"text": "\\sum [X]_F=\\sum Y^{rsl\\ast}"
},
{
"math_id": 136,
"text": "\\forall X\\in E_F"
},
{
"math_id": 137,
"text": "[X]_F\\equiv [\\eta(X), \\rho(X)]=\\{\\mu\\in M^\\ast\\mid \\eta(X)\\leq\\mu\\leq \\rho(X)\\}"
},
{
"math_id": 138,
"text": "\\eta(X)"
},
{
"math_id": 139,
"text": "\\rho(X)"
},
{
"math_id": 140,
"text": "\\forall\\mu\\ \\mu^R=X\\implies \\eta(X)\\leq\\mu\\leq \\rho(X)"
},
{
"math_id": 141,
"text": "\\forall X_1\\forall X_2\\in E_F\\ X_1\\neq X_2"
},
{
"math_id": 142,
"text": "[X_1]_F\\cap [X_2]_F=\\emptyset"
},
{
"math_id": 143,
"text": "X_1 \\subset X_2"
},
{
"math_id": 144,
"text": "\\eta(X_1) < \\eta(X_2)"
},
{
"math_id": 145,
"text": "\\rho(X_1) < \\rho(X_2)"
},
{
"math_id": 146,
"text": "\\bigcup_{X\\in E_F}[X]_F=M^\\ast"
},
{
"math_id": 147,
"text": "[X]_F"
},
{
"math_id": 148,
"text": "\\bigcup_{X\\in E_F} X=G"
},
{
"math_id": 149,
"text": "\\Gamma_F:=(L_F,\\wedge,\\vee)"
},
{
"math_id": 150,
"text": "L_F=\\{(X,[X]_F)\\mid\\ X\\in E_F\\}"
},
{
"math_id": 151,
"text": "(L_F,\\leq)"
},
{
"math_id": 152,
"text": "l_1:=(X_1,[X_1]_F)\\leq l_2:=(X_2,[X_2]_F)\n "
},
{
"math_id": 153,
"text": "\n X_1 \\subseteq X_2 "
},
{
"math_id": 154,
"text": "\n \n \\eta(X_1)\\leq\\eta(X_2)\n "
},
{
"math_id": 155,
"text": "\n \n \\rho(X_2)\\leq\\rho(X_2)\n "
},
{
"math_id": 156,
"text": "\\wedge"
},
{
"math_id": 157,
"text": "\\vee "
},
{
"math_id": 158,
"text": "l_1\\wedge l_2 = \\left(X_1\\cap X_2, [X_1\\cap X_2]_F\\right)\n \\in L_F"
},
{
"math_id": 159,
"text": "[X_1\\cap X_2]_F=[\\eta(X_1\\cap X_2), \\rho(X_1\\cap X_2)]\n =[\\eta(X_1)\\cdot \\eta(X_2), \\rho(X_1)\\cdot \\rho(X_2)],"
},
{
"math_id": 160,
"text": "l_1\\vee l_2 = \\left(X_1\\cup X_2,[X_1\\cup X_2]_F\\right)\n \\in L_F"
},
{
"math_id": 161,
"text": "[X_1\\cup X_2]_F=[\\eta(X_1\\cup X_2), \\rho(X_1\\cup X_2)]\n =[\\eta(X_1)+\\eta(X_2), \\rho(X_1)+\\rho(X_2)]."
},
{
"math_id": 162,
"text": "l_{sup}"
},
{
"math_id": 163,
"text": "l_{inf}"
},
{
"math_id": 164,
"text": "L_F"
},
{
"math_id": 165,
"text": "l_{sup}=\\bigvee_{l\\in L_F} l=(G, [G]_F)=(G,[\\eta(G),{\\bf 1}])"
},
{
"math_id": 166,
"text": "l_{inf}=\\bigwedge_{l\\in L_F} l=(\\emptyset, [\\emptyset]_F)=(\\emptyset,[{\\bf 0},\\rho(\\emptyset)])"
},
{
"math_id": 167,
"text": "(\\eta(X)\\mbox{ for } X\\in E_F)"
},
{
"math_id": 168,
"text": "(\\rho(X)\\mbox{ for } X\\in E_F)"
},
{
"math_id": 169,
"text": "X"
},
{
"math_id": 170,
"text": "X^c"
},
{
"math_id": 171,
"text": "(X,[X]_F)=(X,[\\eta(X),\\rho(X)])"
},
{
"math_id": 172,
"text": "(X^c,[X^c]_F)"
},
{
"math_id": 173,
"text": "=(X^c,[\\eta(X^c),\\rho(X^c)])"
},
{
"math_id": 174,
"text": "\\eta(X^c)=\\neg \\rho(X)"
},
{
"math_id": 175,
"text": "\\neg \\eta(X)= \\rho(X^c)"
},
{
"math_id": 176,
"text": "\\eta(X^c)=\\neg \\rho(X)\\iff\\neg \\eta(X)= \\rho(X^c)"
},
{
"math_id": 177,
"text": "D_k"
},
{
"math_id": 178,
"text": "1\\leq k\\leq n_F\n"
},
{
"math_id": 179,
"text": "D_k\\in E_F"
},
{
"math_id": 180,
"text": "\\eta(D_k)=\\prod \\Psi^k"
},
{
"math_id": 181,
"text": "\\Psi^k=\\lbrace m\\in M \\mid m\\in D_k^I\\rbrace\\cup\\lbrace \\neg m \\mid m\\not\\in D_k^I, m\\in M\\rbrace"
},
{
"math_id": 182,
"text": "F(G,M)"
},
{
"math_id": 183,
"text": "\\eta(\\{1\\})=a\\neg b\\neg c"
},
{
"math_id": 184,
"text": "\\eta(\\{2\\})=\\neg a b\\neg c"
},
{
"math_id": 185,
"text": "\\eta(\\{3\\})=\\neg a b\\neg c"
},
{
"math_id": 186,
"text": "\\rho(\\{1,2\\})=\\neg \\eta(\\{ 3 \\})= a+ b+\\neg c"
},
{
"math_id": 187,
"text": "\\eta(\\{1,2\\})=a\\neg b\\neg c+ \\neg a b\\neg c=\\neg \\rho(\\{ 3 \\})\n "
},
{
"math_id": 188,
"text": "[X]_F=[\\eta(X), \\rho(X)]"
},
{
"math_id": 189,
"text": "[{\\bf 0}, 0_\\rho]"
},
{
"math_id": 190,
"text": "\\forall X\\in E_F\\ \\rho(X)=\\eta(X)+0_\\rho"
},
{
"math_id": 191,
"text": "0_\\rho:=\\neg 1_\\eta\\equiv \\rho(\\emptyset)"
},
{
"math_id": 192,
"text": "1_\\eta:=\\sum_{k=1}^{n_F} \\eta(D_k)\\equiv \\eta(G)"
},
{
"math_id": 193,
"text": "[X]_F=\\{ \\eta(X)+\\tau\\mid {\\bf 0}\\leq\\tau\\leq 0_\\rho\\}\n"
},
{
"math_id": 194,
"text": "|[X]_F|"
},
{
"math_id": 195,
"text": "2^{2^{|M|}-n_F}"
},
{
"math_id": 196,
"text": "\\rho(\\{1,2\\})=\\neg \\eta(\\{ 3 \\})= \\eta(\\{1,2\\})+ 0_\\rho"
},
{
"math_id": 197,
"text": "\n \\eta(X)=\\prod Y^{fcl\\ast}\n"
},
{
"math_id": 198,
"text": "\n\\rho(X)=\\sum Y^{rsl\\ast}\n"
},
{
"math_id": 199,
"text": "(\\{3\\},[\\{3 \\}]_F)"
},
{
"math_id": 200,
"text": "\\eta(\\{3\\})=\\neg a\\neg bc\\leq c"
},
{
"math_id": 201,
"text": "\\leq (a+\\neg b+c)(\\neg a+ b+c)"
},
{
"math_id": 202,
"text": "= \\rho(\\{ 3\\})"
},
{
"math_id": 203,
"text": "c"
},
{
"math_id": 204,
"text": "[\\{3\\}]_F"
},
{
"math_id": 205,
"text": "(\\{3\\},\\{c\\})"
},
{
"math_id": 206,
"text": "\\rho(\\{1,3\\})=(a+\\neg b+c)\\geq a+c"
},
{
"math_id": 207,
"text": "\\geq a\\neg b\\neg c+\\neg a\\neg bc"
},
{
"math_id": 208,
"text": "= \\eta(\\{ 1,3 \\})"
},
{
"math_id": 209,
"text": "a+c"
},
{
"math_id": 210,
"text": "[\\{1,3\\}]_F"
},
{
"math_id": 211,
"text": "\\{1,3\\}"
},
{
"math_id": 212,
"text": "(\\{1, 3\\},\\{a,c\\})"
},
{
"math_id": 213,
"text": "\\mu_1^R\\subseteq \\mu_2^R"
},
{
"math_id": 214,
"text": "\\mu_1\\rightarrow \\mu_2"
},
{
"math_id": 215,
"text": "not\\ \\mu_1 \\leq \\mu_2"
},
{
"math_id": 216,
"text": "\\mu_1\\neq \\mu_1\\cdot\\mu_2"
},
{
"math_id": 217,
"text": "\\mu_1^R\\subset \\mu_2^R"
},
{
"math_id": 218,
"text": "\\mu_1^R=\\mu_1^R\\cap\\mu_2^R=(\\mu_1\\cdot \\mu_2)^R"
},
{
"math_id": 219,
"text": "\\mu_1^R\\cap\\mu_2^R\\subset \\mu_2^R"
},
{
"math_id": 220,
"text": "\\mu_1\\leftrightarrow \\mu_1\\cdot\\mu_2"
},
{
"math_id": 221,
"text": "\\mu_1\\cdot \\mu_2\\implies \\mu_2"
},
{
"math_id": 222,
"text": "\\mu^R= \\nu^R=X"
},
{
"math_id": 223,
"text": "X\\in E_F"
},
{
"math_id": 224,
"text": "\\mu\\leftrightarrow \\nu"
},
{
"math_id": 225,
"text": "\\forall \\mu\\in [X]_F\\ \\mu\\rightarrow \\eta(X)"
},
{
"math_id": 226,
"text": "\\mu\\leftrightarrow \\eta(X)"
},
{
"math_id": 227,
"text": "\\eta(X)\\rightarrow\n\\mu "
},
{
"math_id": 228,
"text": "\\forall \\mu \\in M^\\ast\\ \\mu\\rightarrow \\mu\\cdot 1_\\eta"
},
{
"math_id": 229,
"text": "\\forall \\mu \\in M^\\ast\\ \\mu+0_\\rho\\rightarrow \\mu"
},
{
"math_id": 230,
"text": "\\mu_1\\cdot 1_\\eta \\leq \\mu_2\\cdot 1_\\eta"
},
{
"math_id": 231,
"text": "\\mu_1+ 0_\\rho \\leq \\mu_2+ 0_\\rho"
},
{
"math_id": 232,
"text": "1_\\eta"
},
{
"math_id": 233,
"text": "0_\\rho"
},
{
"math_id": 234,
"text": "\\forall X \\in E_F\n\n"
},
{
"math_id": 235,
"text": "0_\\rho+\\rho(X)=\\rho(X)\n"
},
{
"math_id": 236,
"text": " 0_\\rho\\cdot\\rho(X)=0_\\rho\n\n"
},
{
"math_id": 237,
"text": "1_\\eta\\cdot \\eta(X)=\\eta(X)\n"
},
{
"math_id": 238,
"text": "1_\\eta+ \\eta(X)=1_\\eta\n\n"
},
{
"math_id": 239,
"text": "\\rho(G)"
},
{
"math_id": 240,
"text": "\\eta(\\emptyset)"
},
{
"math_id": 241,
"text": "A=\\{a_1,a_2,\\ldots\\}\\subseteq M"
},
{
"math_id": 242,
"text": "B=\\{b_1,b_2,\\ldots\\}\\subseteq M"
},
{
"math_id": 243,
"text": "A"
},
{
"math_id": 244,
"text": "B"
},
{
"math_id": 245,
"text": "\\prod_i a_i\\rightarrow \\prod_i b_i"
},
{
"math_id": 246,
"text": "\\sum_i a_i\\rightarrow \\sum_i b_i"
},
{
"math_id": 247,
"text": "\\{a,b\\}\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} \\{a,b,c \\}"
},
{
"math_id": 248,
"text": "\\{a,b\\}\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} \\{c \\}"
},
{
"math_id": 249,
"text": "ab \\rightarrow abc"
},
{
"math_id": 250,
"text": "(ab)^R=\\{a,b\\}^I=\\emptyset"
},
{
"math_id": 251,
"text": "(abc)^R=\\{a,b,c\\}^I"
},
{
"math_id": 252,
"text": " = \\emptyset"
},
{
"math_id": 253,
"text": " 1_\\eta\n\n"
},
{
"math_id": 254,
"text": " ab\\cdot 1_\\eta\\equiv {\\bf 0}"
},
{
"math_id": 255,
"text": "\\equiv abc\\cdot 1_\\eta"
},
{
"math_id": 256,
"text": "ab \\leftrightarrow {\\bf 0}"
},
{
"math_id": 257,
"text": "abc \\leftrightarrow {\\bf 0}"
},
{
"math_id": 258,
"text": "a\\rightarrow a\\neg b\\neg c "
},
{
"math_id": 259,
"text": "a \\rightarrow \\neg b\\neg c "
},
{
"math_id": 260,
"text": "\\neg b\\neg c\\rightarrow \\neg b\\neg ca "
},
{
"math_id": 261,
"text": "\\neg b\\neg c\\rightarrow a "
},
{
"math_id": 262,
"text": "a "
},
{
"math_id": 263,
"text": "b "
},
{
"math_id": 264,
"text": "c "
},
{
"math_id": 265,
"text": "\\mu\\rightarrow \\nu"
},
{
"math_id": 266,
"text": "\\nu_1\\rightarrow \\nu_2"
},
{
"math_id": 267,
"text": "\\implies"
},
{
"math_id": 268,
"text": "\\mu_1\\cdot\\nu_1\\rightarrow \\mu_2\\cdot\\nu_2"
},
{
"math_id": 269,
"text": "\\nu_1^R\\subseteq \\nu_2^R"
},
{
"math_id": 270,
"text": "\\mu_1^R\\cap\\nu_1^R\\subseteq \\mu_2^R\\cap\\nu_2^R"
},
{
"math_id": 271,
"text": "(\\mu_1\\cdot \\nu_1)^R\\subseteq (\\mu_2\\cdot\\nu_2)^R"
},
{
"math_id": 272,
"text": "\\mu_1=\\prod A_1"
},
{
"math_id": 273,
"text": "\\nu_1=\\prod B_1"
},
{
"math_id": 274,
"text": "\\mu_2=\\prod A_2"
},
{
"math_id": 275,
"text": "\\nu_2=\\prod B_2"
},
{
"math_id": 276,
"text": "A_1,A_2,B_1,B_2"
},
{
"math_id": 277,
"text": " A_1\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} A_2"
},
{
"math_id": 278,
"text": " B_1\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} B_2"
},
{
"math_id": 279,
"text": "A_1\\cup B_1\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} A_2\\cup B_2"
},
{
"math_id": 280,
"text": "\\because (\\prod A_1)\\cdot ( \\prod B_1)\\equiv \\prod (A_1\\cup B_1)"
},
{
"math_id": 281,
"text": "(\\prod A_2)\\cdot ( \\prod B_2)\\equiv \\prod (A_2\\cup B_2)"
},
{
"math_id": 282,
"text": "A\\subseteq B"
},
{
"math_id": 283,
"text": "A\\supseteq B"
},
{
"math_id": 284,
"text": "(\\mu_1\\rightarrow \\mu_2)"
},
{
"math_id": 285,
"text": "(\\nu_1\\rightarrow \\nu_2)"
},
{
"math_id": 286,
"text": "(\\mu_1+\\nu_1\\rightarrow \\mu_2+\\nu_2)"
},
{
"math_id": 287,
"text": "(\\mu_1+\\nu_1)^R\\subseteq (\\mu_2+\\nu_2)^R"
},
{
"math_id": 288,
"text": "A_1 \\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} A_2"
},
{
"math_id": 289,
"text": " B_1\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} B_2"
},
{
"math_id": 290,
"text": "A_1\\cup A_2\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} B_1\\cup B_2"
},
{
"math_id": 291,
"text": "\\mu_1=\\sum A_1"
},
{
"math_id": 292,
"text": "\\nu_1=\\sum B_1"
},
{
"math_id": 293,
"text": "\\mu_2=\\sum A_2"
},
{
"math_id": 294,
"text": "\\nu_2=\\sum B_2"
},
{
"math_id": 295,
"text": "\\eta(\\{2,5\\})"
},
{
"math_id": 296,
"text": " =\\eta(D_2\\cup D_4)"
},
{
"math_id": 297,
"text": " =\\eta(D_2)+\\eta(D_4)"
},
{
"math_id": 298,
"text": " =\\eta(\\{2\\})+\\eta(\\{5\\})"
},
{
"math_id": 299,
"text": "=a\\neg bc\\neg d\\neg e+a\\neg b\\neg c\\neg d\\neg e"
},
{
"math_id": 300,
"text": "=a\\neg b\\neg d\\neg e"
},
{
"math_id": 301,
"text": "\\rho(\\{2,5\\})=\\neg \\eta(\\{1,3,4,6\\})\n "
},
{
"math_id": 302,
"text": "=(\\neg a+b+\\neg c+\\neg d+\\neg e)"
},
{
"math_id": 303,
"text": "(\\neg b+c+d+\\neg e)"
},
{
"math_id": 304,
"text": " \\eta(\\{3,4\\})\n "
},
{
"math_id": 305,
"text": " =\\eta(D_3)\n "
},
{
"math_id": 306,
"text": " =\\neg ab\\neg c\\neg de\n "
},
{
"math_id": 307,
"text": "\\rho(\\{2,5\\})=\\neg \\eta(\\{1,3,4,6\\})=\\eta(\\{2,5\\})+0_\\rho\n"
},
{
"math_id": 308,
"text": "\\eta(\\{1,2,5,6\\})=a\\neg bcde"
},
{
"math_id": 309,
"text": " +a\\neg bc\\neg d\\neg e"
},
{
"math_id": 310,
"text": "+a\\neg b\\neg c\\neg d\\neg e"
},
{
"math_id": 311,
"text": "+ab\\neg c\\neg de"
},
{
"math_id": 312,
"text": "= a(\\neg b+e)"
},
{
"math_id": 313,
"text": "(\\neg d+e)\n "
},
{
"math_id": 314,
"text": " (\\neg b+\\neg c)"
},
{
"math_id": 315,
"text": "(\\neg b+\\neg d)"
},
{
"math_id": 316,
"text": " (b+c+\\neg e)"
},
{
"math_id": 317,
"text": " (c+\\neg d)"
},
{
"math_id": 318,
"text": " (b+d+\\neg e)"
},
{
"math_id": 319,
"text": " (\\neg c+d+\\neg e)"
},
{
"math_id": 320,
"text": "\\rho(\\{1,2,5,6\\})"
},
{
"math_id": 321,
"text": "=a"
},
{
"math_id": 322,
"text": "+\\neg b"
},
{
"math_id": 323,
"text": "+c"
},
{
"math_id": 324,
"text": "+d"
},
{
"math_id": 325,
"text": "+\\neg e\n "
},
{
"math_id": 326,
"text": " =\\neg\\eta(\\{3,4\\})\n "
},
{
"math_id": 327,
"text": "\\eta(\\{1,2,5,6\\})"
},
{
"math_id": 328,
"text": "{a}^R=\\lbrace a \\rbrace^I"
},
{
"math_id": 329,
"text": "=\\lbrace 1,2,5,6 \\rbrace"
},
{
"math_id": 330,
"text": "( a+ c+d )^R"
},
{
"math_id": 331,
"text": "=\\lbrace {a,c,d} \\rbrace^\\Diamond"
},
{
"math_id": 332,
"text": "(\\lbrace 1,2,5,6 \\rbrace,\\lbrace a\\rbrace)"
},
{
"math_id": 333,
"text": "(\\lbrace 1,2,5,6 \\rbrace,\\lbrace a,c,d\\rbrace)"
},
{
"math_id": 334,
"text": "\\eta(\\{1,6 \\})"
},
{
"math_id": 335,
"text": "=ae(\\neg bcd+b\\neg c\\neg d)"
},
{
"math_id": 336,
"text": "\\rho(\\lbrace 1,6 \\rbrace)=\n {d}+ab+ce"
},
{
"math_id": 337,
"text": "+\\neg be"
},
{
"math_id": 338,
"text": " +\\neg a\\neg e"
},
{
"math_id": 339,
"text": "(ae)^R=\\{a,e\\}^I"
},
{
"math_id": 340,
"text": " =\\{ 1,6\\}"
},
{
"math_id": 341,
"text": "(\\lbrace 1,6 \\rbrace,\\lbrace a,e\\rbrace)"
},
{
"math_id": 342,
"text": "d^R\\equiv \\lbrace d\\rbrace^\\Diamond=\n\\lbrace 1\\rbrace\\neq \\lbrace 1,6\\rbrace"
},
{
"math_id": 343,
"text": "\\{c, d\\}\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 344,
"text": "\\{c,d \\}\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 345,
"text": "c\\cdot d\\rightarrow a"
},
{
"math_id": 346,
"text": "c+d\\rightarrow a"
},
{
"math_id": 347,
"text": "c\\cdot d\\cdot 1_\\eta \\leq a\\cdot 1_\\eta"
},
{
"math_id": 348,
"text": "c\\cdot d+ 0_\\rho \\leq a+ 0_\\rho"
},
{
"math_id": 349,
"text": "(c+d)\\cdot 1_\\eta \\leq a\\cdot 1_\\eta"
},
{
"math_id": 350,
"text": "c+ d+0_\\rho \\leq a+ 0_\\rho"
},
{
"math_id": 351,
"text": "A\\mbox{ and }B"
},
{
"math_id": 352,
"text": "\\{c \\}\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 353,
"text": "\\{c \\}\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 354,
"text": "c\\rightarrow a"
},
{
"math_id": 355,
"text": "\\{c\\}^I\\subset \\{a\\}^I"
},
{
"math_id": 356,
"text": "\\{c\\}^\\Diamond\\subset \\{a\\}^\\Diamond"
},
{
"math_id": 357,
"text": "c^R=\\{c\\}^I=\\{c\\}^\\Diamond"
},
{
"math_id": 358,
"text": "=\\{1,2\\}"
},
{
"math_id": 359,
"text": "\\subset \\{1,2,5,6\\}"
},
{
"math_id": 360,
"text": "=a^R"
},
{
"math_id": 361,
"text": "=\\{a\\}^I"
},
{
"math_id": 362,
"text": " =\\{a\\}^\\Diamond"
},
{
"math_id": 363,
"text": "c\\rightarrow c\\cdot a"
},
{
"math_id": 364,
"text": "c+a\\rightarrow a"
},
{
"math_id": 365,
"text": "\\{c\\}\\stackrel{\\scriptscriptstyle fcl}{\\rightarrow} \\{a,c\\}"
},
{
"math_id": 366,
"text": "\\{c,a\\}\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 367,
"text": " \\mu\\rightarrow \\mu\\cdot 1_\\eta"
},
{
"math_id": 368,
"text": " \\mu"
},
{
"math_id": 369,
"text": " [\\mu \\cdot 1_\\eta,\\mu]"
},
{
"math_id": 370,
"text": " \\forall \\mu_1\\forall\\mu_2"
},
{
"math_id": 371,
"text": " \\mu_1\\leftrightarrow \\mu_2"
},
{
"math_id": 372,
"text": " \\mu \\cdot 1_\\eta\\leq \\mu_1 \\leq \\mu"
},
{
"math_id": 373,
"text": " \\mu \\cdot 1_\\eta\\leq \\mu_2\\leq \\mu"
},
{
"math_id": 374,
"text": " \\mu \\leftrightarrow \\mu\\cdot 1_\\eta"
},
{
"math_id": 375,
"text": " \\mu\\cdot 1_\\eta \\leq \\mu"
},
{
"math_id": 376,
"text": "(c+d)\\cdot 1_\\eta "
},
{
"math_id": 377,
"text": "=c\\cdot 1_\\eta"
},
{
"math_id": 378,
"text": " =a\\neg bc(de+\\neg d\\neg e)"
},
{
"math_id": 379,
"text": "c+d"
},
{
"math_id": 380,
"text": " \\rightarrow a\\neg bc(de+\\neg d\\neg e)"
},
{
"math_id": 381,
"text": " c+d\\rightarrow c"
},
{
"math_id": 382,
"text": " c+d\\rightarrow a"
},
{
"math_id": 383,
"text": "cd\\rightarrow a"
},
{
"math_id": 384,
"text": "\\{ c, d \\}\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} \\{ c \\}"
},
{
"math_id": 385,
"text": "\\{c, d\\}\\stackrel{\\scriptscriptstyle rsl}{\\rightarrow} \\{a\\}"
},
{
"math_id": 386,
"text": " c+d \\rightarrow \\neg b(de+ \\neg d\\neg e)"
},
{
"math_id": 387,
"text": " c+d \\rightarrow \\neg b \\cdot ( e \\leftrightarrow d )"
},
{
"math_id": 388,
"text": "d"
},
{
"math_id": 389,
"text": "\\neg b"
},
{
"math_id": 390,
"text": "e"
},
{
"math_id": 391,
"text": "\\mu+0_\\rho\\rightarrow \\mu"
},
{
"math_id": 392,
"text": " \\mu \\leq \\mu_1 \\leq \\mu+0_\\rho"
},
{
"math_id": 393,
"text": " \\mu \\leq \\mu_2 \\leq \\mu+0_\\rho"
},
{
"math_id": 394,
"text": "e\\rightarrow a"
},
{
"math_id": 395,
"text": "\\mu:=\\neg e+a"
},
{
"math_id": 396,
"text": " \\iff e\\rightarrow a"
},
{
"math_id": 397,
"text": "\\mu+ 0_\\rho"
},
{
"math_id": 398,
"text": "=a+\\neg b+c+d+\\neg e"
},
{
"math_id": 399,
"text": " \\neg e+a"
},
{
"math_id": 400,
"text": " \\leq \\mu_1"
},
{
"math_id": 401,
"text": " \\leq a+\\neg b+c+d+\\neg e"
},
{
"math_id": 402,
"text": " \\mu_1 \\leftrightarrow ( e \\rightarrow a )"
},
{
"math_id": 403,
"text": " (e\\rightarrow a+c+d) \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 404,
"text": " (b\\rightarrow (e\\rightarrow a+c )) \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 405,
"text": " (e\\rightarrow (b \\rightarrow a+c )) \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 406,
"text": " (b\\rightarrow (e\\rightarrow a+c+d)) \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 407,
"text": "{\\bf 1}\\rightarrow 1_\\eta"
},
{
"math_id": 408,
"text": "0_\\rho\\rightarrow {\\bf 0}"
},
{
"math_id": 409,
"text": " \\forall \\mu "
},
{
"math_id": 410,
"text": " \\implies \\mu \\rightarrow \\mu 1_\\eta"
},
{
"math_id": 411,
"text": "\\forall \\nu "
},
{
"math_id": 412,
"text": "\\implies \\nu+0_\\rho \\rightarrow \\nu"
},
{
"math_id": 413,
"text": " \\mu=\\rho(X)"
},
{
"math_id": 414,
"text": " \\nu=\\eta(X)"
},
{
"math_id": 415,
"text": " \\rho(X)\\rightarrow \\eta(X)"
},
{
"math_id": 416,
"text": "\n\\mu\\cdot 1_\\eta\\equiv\n\\eta(\\mu^R) "
},
{
"math_id": 417,
"text": "\n \\leq \\mu "
},
{
"math_id": 418,
"text": "\n \\leq \\rho(\\mu^R)\\equiv \\mu+0_\\rho"
},
{
"math_id": 419,
"text": " \\mu"
},
{
"math_id": 420,
"text": " \\mu^R"
},
{
"math_id": 421,
"text": " X"
},
{
"math_id": 422,
"text": " [ \\eta (X), \\rho(X) ]"
},
{
"math_id": 423,
"text": " \\forall \\mu_1\\in [X_1]_F\\forall\\mu_2\\in [X_2]_F "
},
{
"math_id": 424,
"text": " \\mu_1\\rightarrow\\mu_2 "
},
{
"math_id": 425,
"text": " ( X_1,[X_1]_F) "
},
{
"math_id": 426,
"text": " \\leq ( X_2,[X_2]_F) "
},
{
"math_id": 427,
"text": " (b\\rightarrow c) \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 428,
"text": " c \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 429,
"text": " \\neg b \\rightarrow (e\\rightarrow a)"
},
{
"math_id": 430,
"text": " ( b\\rightarrow c )\\rightarrow (e\\rightarrow a)"
},
{
"math_id": 431,
"text": " \\neg b+c\\in [\\{1,2,5\\}]_F"
},
{
"math_id": 432,
"text": " \\neg e+a\\in [\\{1,2,5,6\\}]_F "
},
{
"math_id": 433,
"text": " \\{1,2,5\\}\\subseteq\\{1,2,5,6\\} "
},
{
"math_id": 434,
"text": " \\neg b +c\\leftrightarrow \\rho(\\{1,2,5\\})"
},
{
"math_id": 435,
"text": " \\leftrightarrow \\eta (\\{1,2,5\\})"
},
{
"math_id": 436,
"text": " \\neg e+a\\leftrightarrow \\rho(\\{1,2,5,6\\}) "
},
{
"math_id": 437,
"text": "\n\\leftrightarrow \\eta (\\{ 1,2,5,6 \\})"
},
{
"math_id": 438,
"text": " \\rho(\\{1,2,5\\}) "
},
{
"math_id": 439,
"text": " \\leq \\rho (\\{1,2,5,6\\}) "
},
{
"math_id": 440,
"text": " \\eta(\\{ 1,2,5 \\}) \\leq \\eta(\\{ 1,2,5,6 \\}) "
},
{
"math_id": 441,
"text": " c\\in [\\{1,2\\}]_F"
},
{
"math_id": 442,
"text": " \\{1,2\\} \\subseteq\\{1,2,5,6\\} "
}
]
| https://en.wikipedia.org/wiki?curid=74356806 |
74358254 | Projection filters | Geometric algorithms for signal processing
Projection filters are a set of algorithms based on stochastic analysis and information geometry, or the differential geometric approach to statistics, used to find approximate solutions for filtering problems for nonlinear state-space systems.
The filtering problem consists of estimating the unobserved signal of a random dynamical system from partial noisy observations of the signal. The objective is computing the probability distribution of the signal conditional on the history of the noise-perturbed observations. This distribution allows for calculations of all statistics of the signal given the history of observations. If this distribution has a density, the density satisfies specific stochastic partial differential equations (SPDEs) called the Kushner-Stratonovich equation, or the Zakai equation.
It is known that the nonlinear filter density evolves in an infinite dimensional function space.
One can choose a finite dimensional family of probability densities, for example Gaussian densities, Gaussian mixtures, or exponential families, on which the infinite-dimensional filter density can be approximated. The basic idea of the projection filter is to use a geometric structure in the chosen spaces of densities to project the infinite dimensional SPDE of the optimal filter onto the chosen finite dimensional family, obtaining a finite dimensional stochastic differential equation (SDE) for the parameter of the density in the finite dimensional family that approximates the full filter evolution. To do this, the chosen finite dimensional family is equipped with a manifold structure as in information geometry.
The projection filter was tested against the optimal filter for the cubic sensor problem. The projection filter could track effectively bimodal densities of the optimal filter that would have been difficult to approximate with standard algorithms like the extended Kalman filter.
Projection filters are ideal for on-line estimation, as the resulting finite dimensional SDE for the parameter is quick to implement and can be run efficiently in time.
Projection filters are also flexible, as they allow fine tuning of the precision of the approximation by choosing richer approximating families, and some exponential families make the correction step in the projection filtering algorithm exact. Some formulations coincide with heuristic-based assumed density filters or with Galerkin methods. Projection filters can also approximate the full infinite-dimensional filter in an optimal way, beyond the optimal approximation of the SPDE coefficients alone, according to precise criteria such as mean square minimization. Projection filters have been studied by the Swedish Defense Research Agency and have also been successfully applied to a variety of fields including navigation, ocean dynamics, quantum optics and quantum systems, estimation of fiber diameters, estimation of chaotic time series, change point detection and other areas.
History and development.
The term "projection filter" was first coined in 1987 by Bernard Hanzon, and the related theory and numerical examples were fully developed, expanded and made rigorous during the Ph.D. work of Damiano Brigo, in collaboration with Bernard Hanzon and Francois LeGland.
These works dealt with the projection filters in Hellinger distance and Fisher information metric, that were used to project the optimal filter infinite-dimensional SPDE on a chosen exponential family. The exponential family can be chosen so as to make the prediction step of the filtering algorithm exact.
A different type of projection filter, based on an alternative projection metric, the direct formula_0 metric, was introduced in Armstrong and Brigo (2016). With this metric, the projection filters on families of mixture distributions coincide with filters based on Galerkin methods. Later on, Armstrong, Brigo and Rossi Ferrucci (2021) derived optimal projection filters that satisfy specific optimality criteria in approximating the infinite dimensional optimal filter. Indeed, the Stratonovich-based projection filters optimized the approximations of the separate SPDE coefficients on the chosen manifold, but not of the SPDE solution as a whole. This has been dealt with by introducing the optimal projection filters. The innovation here is to work directly with Ito calculus, instead of resorting to the Stratonovich calculus version of the filter equation. This builds on research on the geometry of Ito stochastic differential equations on manifolds via the jet bundle, the so-called 2-jet interpretation of Ito stochastic differential equations on manifolds.
Projection filters derivation.
Here the derivation of the different projection filters is sketched.
Stratonovich-based projection filters.
This is a derivation of both the initial filter in Hellinger/Fisher metric sketched by Hanzon and fully developed by Brigo, Hanzon and LeGland, and the later projection filter in direct L2 metric by Armstrong and Brigo (2016).
It is assumed that the unobserved random signal formula_1 is modelled by the Ito stochastic differential equation:
formula_2
where "f" and formula_3 are formula_4 valued and formula_5 is a Brownian motion. Validity of all regularity conditions necessary for the results to hold will be assumed, with details given in the references. The associated noisy observation process formula_6 is modelled by
formula_7
where formula_8 is formula_9 valued and formula_10 is a Brownian motion independent of formula_5. As hinted above, the full filter is the conditional
distribution of formula_11 given a prior for formula_12
and the history of formula_13 up to time formula_14. If this distribution has a density described informally as
formula_15
where formula_16 is the sigma-field generated by the history of the noisy observations formula_13 up to time formula_14, then under suitable technical conditions the density formula_17 satisfies the Kushner-Stratonovich SPDE:
formula_18
where formula_19 is the expectation
formula_20
and the forward diffusion operator formula_21 is
formula_22
where formula_23, and formula_24 denotes transposition.
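As a worked illustration (not taken from the source), for a scalar Ornstein-Uhlenbeck signal with dynamics <math>d X_t = -k\, X_t\, dt + \sigma\, d W_t</math>, the forward diffusion operator formula_21 acts on a density <math>p</math> as
<math display="block"> p \mapsto \frac{\partial}{\partial x}\bigl(k\, x\, p(x)\bigr) + \frac{1}{2}\,\sigma^{2}\,\frac{\partial^{2} p(x)}{\partial x^{2}},</math>
so that, in the absence of observations, the conditional density would simply obey the corresponding Fokker-Planck equation.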
To derive the first version of the projection filters, one needs to put the formula_17 SPDE in Stratonovich form. One obtains
formula_25
Through the chain rule, it is immediate to derive the SPDE for formula_26.
To shorten notation one may rewrite this last SPDE as
formula_27
where the operators formula_28 and formula_29 are defined as
formula_30
formula_31
The square root version is
formula_32
These are Stratonovich SPDEs whose solutions evolve in infinite dimensional function spaces. For example formula_17 may evolve in formula_0 (direct metric formula_33)
formula_34
or formula_35 may evolve in formula_0 (Hellinger metric formula_36)
formula_37
where formula_38 is the norm of Hilbert space formula_0.
In any case, formula_17 (or formula_35) will not evolve inside any finite dimensional family of densities,
formula_39
The projection filter idea is approximating formula_40 (or formula_41) via a finite dimensional density formula_42 (or formula_43).
The fact that the filter SPDE is in Stratonovich form allows for the following. As Stratonovich SPDEs satisfy the chain rule, formula_44 and formula_45 behave as vector fields. Thus, the equation is characterized by a formula_46 vector field formula_44 and a formula_47 vector field formula_45. For this version of the projection filter one is satisfied with dealing with the two vector fields separately.
One may project formula_44 and formula_45 on the tangent space of the densities in formula_48 (direct metric) or of their square roots (Hellinger metric). The direct metric case yields
formula_49
where formula_50 is the tangent space projection at the point formula_51 for the manifold formula_48, and where, when applied to a vector such as formula_52, it is assumed to act component-wise by projecting each of formula_52's components. A basis of this tangent space is
formula_53
Denoting the inner product of formula_0 by formula_54, one defines the metric
formula_55
and the projection is thus
formula_56
where formula_57 is the inverse of formula_58.
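Numerically, the projection just defined can be realised on a grid (a sketch of my own, not code from the references): the tangent-space basis formula_53 is sampled on the grid, the metric matrix formula_58 is computed by quadrature, and a function is projected by solving the resulting linear system.
<syntaxhighlight lang="python">
import numpy as np

def l2_project(v, basis, dx):
    """Project the grid function v onto span{basis[i]} in L2, where basis[i]
    are the partial derivatives of the density with respect to the parameters,
    all sampled on the same uniform grid with spacing dx."""
    g = np.array([[np.sum(bi * bj) * dx for bj in basis] for bi in basis])   # metric matrix
    rhs = np.array([np.sum(v * bj) * dx for bj in basis])                    # <v, b_j>
    coeffs = np.linalg.solve(g, rhs)                                         # g^{-1} <v, b>
    return coeffs, sum(c * b for c, b in zip(coeffs, basis))
</syntaxhighlight>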
The projected equation thus reads
formula_59
which can be written as
formula_60
where it has been crucial that Stratonovich calculus obeys the chain rule. From the above equation, the final projection filter SDE is
formula_61
with initial condition a chosen formula_62.
By substituting the definition of the operators F and G we obtain the fully explicit projection filter equation in direct metric:
formula_63
formula_64
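In practice the projected equation above is a finite dimensional Stratonovich SDE for the parameter, driven by the observations, and can be discretised with standard schemes. The following is a minimal sketch (my own, not from the references), where the drift A and the observation coefficient B are hypothetical user-supplied functions standing for the corresponding terms of the projected equation; a Heun predictor-corrector step is used, consistent with the Stratonovich interpretation.
<syntaxhighlight lang="python">
import numpy as np

def projection_filter_path(theta0, A, B, dY, dt):
    """Heun stepping of a Stratonovich parameter SDE
    d(theta) = A(theta) dt + B(theta) o dY.
    A(theta): shape (k,); B(theta): shape (k, d); dY: increments, shape (n, d)."""
    theta = np.asarray(theta0, dtype=float)
    path = [theta.copy()]
    for dy in dY:
        pred = theta + A(theta) * dt + B(theta) @ dy                  # Euler predictor
        theta = theta + 0.5 * (A(theta) + A(pred)) * dt \
                      + 0.5 * (B(theta) + B(pred)) @ dy               # Heun corrector
        path.append(theta.copy())
    return np.array(path)
</syntaxhighlight>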
If one uses the Hellinger distance instead, square roots of densities are needed. The tangent space basis is then
formula_65
and one defines the metric
formula_66
The metric formula_67 is the Fisher information metric. One follows steps completely analogous to the direct metric case and the filter equation in Hellinger/Fisher metric is
formula_68
again with initial condition a chosen formula_62.
Substituting F and G one obtains
formula_69
formula_70
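Since formula_67 coincides, up to a convention-dependent constant factor, with the Fisher information matrix, it can be evaluated numerically for a chosen family. The sketch below (my own illustration, not from the references) checks this for a one dimensional Gaussian family parameterised by mean and standard deviation, whose Fisher matrix is <math>\operatorname{diag}(1/\sigma^{2},\,2/\sigma^{2})</math>.
<syntaxhighlight lang="python">
import numpy as np

def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def fisher_matrix(m, s, eps=1e-5):
    x, dx = np.linspace(m - 10 * s, m + 10 * s, 20001, retstep=True)
    p = gaussian(x, m, s)
    # numerical score functions d(log p)/d(theta_i) for theta = (m, s)
    score_m = (np.log(gaussian(x, m + eps, s)) - np.log(gaussian(x, m - eps, s))) / (2 * eps)
    score_s = (np.log(gaussian(x, m, s + eps)) - np.log(gaussian(x, m, s - eps))) / (2 * eps)
    scores = [score_m, score_s]
    return np.array([[np.sum(si * sj * p) * dx for sj in scores] for si in scores])

print(fisher_matrix(0.0, 1.0))   # approximately [[1, 0], [0, 2]]
</syntaxhighlight>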
The projection filter in direct metric, when implemented on a manifold formula_48 of mixture families, leads to equivalence with a Galerkin method.
The projection filter in Hellinger/Fisher metric when implemented on a manifold formula_71 of square roots of an exponential family of densities is equivalent to the assumed density filters.
One should note that it is also possible to project the simpler Zakai equation for an unnormalized version of the density p. This would result in the same Hellinger projection filter but in a different direct metric projection filter.
Finally, if in the exponential family case one includes among the sufficient statistics of the exponential family the observation function in formula_47, namely formula_72's components and formula_73, then one can see that the correction step in the filtering algorithm becomes exact. In other terms, the projection of the vector field formula_45 is exact, resulting in formula_45 itself. Writing the filtering algorithm in a setting with continuous state formula_74 and discrete time observations formula_13, one can see that the correction step at each new observation is exact, as the related Bayes formula entails no approximation.
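To see concretely why the correction step becomes exact, consider (a simplified illustration in my own notation, assuming a scalar discrete-time observation <math>y_k</math> with Gaussian noise of variance <math>R</math>) an exponential density whose sufficient statistics include the observation function <math>h</math> and <math>h^{2}</math>:
<math display="block">p(x) \propto \exp\bigl(\theta^{\top} c(x) + \theta_{1}\, h(x) + \theta_{2}\, h(x)^{2}\bigr),\qquad
p(x \mid y_k) \propto p(x)\, e^{-\frac{(y_k - h(x))^{2}}{2R}} \propto \exp\Bigl(\theta^{\top} c(x) + \bigl(\theta_{1} + \tfrac{y_k}{R}\bigr) h(x) + \bigl(\theta_{2} - \tfrac{1}{2R}\bigr) h(x)^{2}\Bigr),</math>
so the Bayes update only shifts the natural parameters multiplying <math>h</math> and <math>h^{2}</math>, the posterior remains in the family, and no projection error is incurred at the correction step.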
Optimal projection filters based on Ito vector and Ito jet projections.
Now rather than considering the exact filter SPDE in Stratonovich calculus form, one keeps it in Ito calculus form
formula_75
In the Stratonovich projection filters above, the vector fields formula_44 and formula_45 were projected separately. By definition, the projection is the optimal approximation for formula_44 and formula_45 separately, although this does not imply it provides the best approximation for the filter SPDE solution as a whole. Indeed, the Stratonovich projection, acting on the two terms formula_44 and formula_45 separately, does not guarantee optimality of the solution formula_76 as an approximation of the exact formula_77 for say small formula_78. One may look for a norm formula_79 to be applied to the solution, for which
formula_80
The Ito-vector projection is obtained as follows.
Let us choose a norm for the space of densities, formula_81, which might be associated with the direct metric or the Hellinger metric.
One chooses the diffusion term in the approximating Ito equation for formula_82 by minimizing (but not zeroing) the formula_78 term of the Taylor expansion for the mean square error
formula_83,
and then choosing the drift term in the approximating Ito equation that minimizes the formula_84 term of the same difference. Here the formula_78 order term is minimized, not zeroed, and one never attains formula_84 convergence, only formula_78 convergence.
A further benefit of the Ito vector projection is that it minimizes the order 1 Taylor expansion in formula_78 of
formula_85
To achieve formula_84 convergence, rather than formula_78 convergence, the Ito-jet projection is introduced. It is based on the notion of metric projection.
The metric projection of a density formula_86 (or formula_87) onto the manifold formula_48 (or formula_71) is the closest point on formula_48 (or formula_71) to formula_88 (or formula_89). Denote it by formula_90. The metric projection is, by definition, according to the chosen metric, the best one can ever do for approximating formula_88 in formula_48. Thus the idea is finding a projection filter that comes as close as possible to the metric projection. In other terms, one considers the criterion
formula_91
The detailed calculations are lengthy and laborious, but the resulting approximation achieves formula_84 convergence.
Indeed, the Ito jet projection attains the following optimality criterion. It zeroes the formula_78 order term and it minimizes the formula_84 order term of the Taylor expansion of the mean square distance in formula_0 between formula_92 and formula_76.
Both the Ito vector and the Ito jet projection result in final SDEs, driven by the observations formula_93, for the parameter formula_82 that best approximates the exact filter evolution for small times.
Applications.
Jones and Soatto (2011) mention projection filters as possible algorithms for on-line estimation in visual-inertial navigation, mapping and localization, while, again on navigation, Azimi-Sadjadi and Krishnaprasad (2005) use projection filter algorithms.
The projection filter has also been considered for applications in ocean dynamics by Lermusiaux (2006). Kutschireiter, Rast, and Drugowitsch (2022) refer to the projection filter in the context of continuous time circular filtering. For quantum systems applications, see for example van Handel and Mabuchi (2005), who applied the quantum projection filter to quantum optics, studying a quantum model of optical phase bistability of a strongly coupled two-level atom in an optical cavity. Further applications to quantum systems are considered in Gao, Zhang and Petersen (2019). Ma, Zhao, Chen and Chang (2015) refer to projection filters in the context of hazard position estimation, while Vellekoop and Clark (2006) generalize the projection filter theory to deal with changepoint detection. Harel, Meir and Opper (2015) apply the projection filters in assumed density form to the filtering of optimal point processes with applications to neural encoding. Broecker and Parlitz (2000) study projection filter methods for noise reduction in chaotic time series. Zhang, Wang, Wu and Xu (2014)
apply the Gaussian projection filter as part of their estimation technique to deal with measurements of fiber diameters in melt-blown nonwovens.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L^2"
},
{
"math_id": 1,
"text": "X_t \\in \\R^m"
},
{
"math_id": 2,
"text": " d X_t = f(X_t,t) \\, d t + \\sigma(X_t,t) \\, d W_t "
},
{
"math_id": 3,
"text": "\\sigma\\, dW"
},
{
"math_id": 4,
"text": "\\R^m"
},
{
"math_id": 5,
"text": "W_t"
},
{
"math_id": 6,
"text": "Y_t \\in \\R^d"
},
{
"math_id": 7,
"text": " d Y_t = b(X_t,t) \\, d t + d V_t "
},
{
"math_id": 8,
"text": "b"
},
{
"math_id": 9,
"text": "\\R^d"
},
{
"math_id": 10,
"text": "V_t"
},
{
"math_id": 11,
"text": "X_t"
},
{
"math_id": 12,
"text": "X_0"
},
{
"math_id": 13,
"text": "Y"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "p_t(x)dx = Prob\\{X_t \\in dx | \\sigma(Y_s, s\\leq t)\\}"
},
{
"math_id": 16,
"text": "\\sigma(Y_s, s\\leq t)"
},
{
"math_id": 17,
"text": "p_t"
},
{
"math_id": 18,
"text": "d p_t = {\\cal L}^*_t p_t \\ d t \n+ p_t[b(\\cdot,t) - E_{p_t}(b(\\cdot,t))]^T [ d Y_t - E_{p_t}(b(\\cdot,t)) dt]"
},
{
"math_id": 19,
"text": "E_p"
},
{
"math_id": 20,
"text": "E_p[h] = \\int h(x) p(x) dx, "
},
{
"math_id": 21,
"text": "{\\cal L}^*_t"
},
{
"math_id": 22,
"text": "{\\cal L}_t^* p = - \\sum_{i=1}^m \\frac{\\partial}{\\partial x_i} [ f_i(x,t) p_t(x) ] + \\frac{1}{2} \\sum_{i,j=1}^m \\frac{\\partial^2}{\\partial x_i \\partial x_j} [a_{ij}(x,t) p_t(x)]"
},
{
"math_id": 23,
"text": "a=\\sigma \\sigma^T"
},
{
"math_id": 24,
"text": "T"
},
{
"math_id": 25,
"text": " d p_t = {\\cal L}^\\ast_t\\, p_t\\,dt\n - \\frac{1}{2}\\, p_t\\, [\\vert b(\\cdot,t) \\vert^2 - E_{p_t}\\{\\vert b(\\cdot,t) \\vert^2\\}] \\,dt\n + p_t\\, [b(\\cdot,t)-E_{p_t}\\{b(\\cdot,t)\\}]^T \\circ dY_t\\ .\n"
},
{
"math_id": 26,
"text": "d \\sqrt{p_t}"
},
{
"math_id": 27,
"text": " dp = F(p) \\,dt + G^T(p) \\circ dY\\ ,"
},
{
"math_id": 28,
"text": "F(p)"
},
{
"math_id": 29,
"text": "G^T(p)"
},
{
"math_id": 30,
"text": "F(p) = {\\cal L}^\\ast_t\\, p\\,\n - \\frac{1}{2}\\, p\\, [\\vert b(\\cdot,t) \\vert^2 - E_{p}\\{\\vert b(\\cdot,t) \\vert^2\\}],"
},
{
"math_id": 31,
"text": "G^T(p) = p\\, [b(\\cdot,t)-E_{p}\\{b(\\cdot,t)\\}]^T."
},
{
"math_id": 32,
"text": " d \\sqrt{p} = \\frac{1}{2 \\sqrt{p}}[ F(p) \\,dt + G^T(p) \\circ dY]\\ ."
},
{
"math_id": 33,
"text": "d_2"
},
{
"math_id": 34,
"text": " d_2(p_1,p_2)= \\Vert p_1- p_2 \\Vert\\ , \\ \\ p_{1,2}\\in L^2 "
},
{
"math_id": 35,
"text": "\\sqrt{p_t}"
},
{
"math_id": 36,
"text": "d_H"
},
{
"math_id": 37,
"text": " d_H(\\sqrt{p_1},\\sqrt{p_2})= \\Vert \\sqrt{p_1}-\\sqrt{p_2} \\Vert , \\ \\ \\ p_{1,2}\\in L^1"
},
{
"math_id": 38,
"text": "\\Vert\\cdot\\Vert"
},
{
"math_id": 39,
"text": "S_\\Theta=\\{p(\\cdot, \\theta), \\ \\theta \\in \\Theta \\subset \\R^n\\} \\ (or \\ S_\\Theta^{1/2}=\\{\\sqrt{p(\\cdot, \\theta)}, \\ \\theta \\in \\Theta \\subset \\R^n\\})."
},
{
"math_id": 40,
"text": "p_t(x)"
},
{
"math_id": 41,
"text": "\\sqrt{p_t(x)}"
},
{
"math_id": 42,
"text": "p(x,\\theta_t)"
},
{
"math_id": 43,
"text": "\\sqrt{p(x,\\theta_t)}"
},
{
"math_id": 44,
"text": "F"
},
{
"math_id": 45,
"text": "G"
},
{
"math_id": 46,
"text": "dt"
},
{
"math_id": 47,
"text": "dY_t"
},
{
"math_id": 48,
"text": "S_\\Theta"
},
{
"math_id": 49,
"text": " dp(\\cdot,\\theta_t) = \\Pi_{p(\\cdot,\\theta_t)}[F(p(\\cdot,\\theta_t))] \\,dt + \\Pi_{p(\\cdot,\\theta_t)}[G^T(p(\\cdot,\\theta_t))] \\circ dY_t\\ \n"
},
{
"math_id": 50,
"text": "\\Pi_{p(\\cdot,\\theta)}"
},
{
"math_id": 51,
"text": "p(\\cdot,\\theta)"
},
{
"math_id": 52,
"text": "G^T"
},
{
"math_id": 53,
"text": " \\left\\{ \\frac{\\partial p(\\cdot,\\theta)}{\\partial \\theta_1},\\cdots,\n \\frac{\\partial p(\\cdot,\\theta)}{\\partial \\theta_n} \\right\\},"
},
{
"math_id": 54,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 55,
"text": " \\gamma_{ij}(\\theta) = \\left\\langle \\frac{\\partial {p(\\cdot,\\theta)}}{\n \\partial \\theta_i}\\, , \\frac{\\partial {p(\\cdot,\\theta)}}{\n \\partial \\theta_j} \\right\\rangle =\n \\int \\frac{\\partial p(x,\\theta)}{\\partial \\theta_i}\\,\n \\frac{\\partial p(x,\\theta)}{\\partial \\theta_j}\\, d x "
},
{
"math_id": 56,
"text": "\\Pi^\\gamma_{p(\\cdot,\\theta)} [v] = \\sum_{i=1}^n \\left[ \\sum_{j=1}^n\n \\gamma^{ij}(\\theta)\\; \\left\\langle v,\\,\n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_j} \\right\\rangle \\right]\\; \n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_i} "
},
{
"math_id": 57,
"text": "\\gamma^{ij}"
},
{
"math_id": 58,
"text": "\\gamma_{ij}"
},
{
"math_id": 59,
"text": " d p(\\cdot, \\theta_t) = \\Pi_{p(\\cdot,\\theta)}[F(p(\\cdot, \\theta_t))] dt + \\Pi_{p(\\cdot,\\theta)}[G^T(p(\\cdot, \\theta_t))] \\circ dY_t"
},
{
"math_id": 60,
"text": " \\sum_{i=1}^n \\frac{\\partial p(\\cdot, \\theta_t)}{\\theta_i}\\circ d \\theta_i = \n\\sum_{i=1}^n \\left[ \\sum_{j=1}^n\n \\gamma^{ij}(\\theta)\\; \\left\\langle F(p(\\cdot, \\theta_t)),\\,\n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_j} \\right\\rangle \\right]\\; \n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_i} dt + \n\\sum_{i=1}^n \\left[ \\sum_{j=1}^n\n \\gamma^{ij}(\\theta)\\; \\left\\langle G^T(p(\\cdot, \\theta_t)),\\,\n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_j} \\right\\rangle \\right]\\; \n \\frac{\\partial {p(\\cdot,\\theta)}}{\\partial \\theta_i} \\circ dY_t ,"
},
{
"math_id": 61,
"text": " d \\theta_i = \n \\left[\\sum_{j=1}^n \\gamma^{ij}(\\theta_t)\\; \n \\int F(p(x, \\theta_t)) \\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j} dx \\right] dt + \n \\sum_{k=1}^d\\; \\left[ \\sum_{j=1}^n \\gamma^{ij}(\\theta_t)\\; \n \\int G_k(p(x, \\theta_t)) \\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; d x \\right]\n \\circ dY_k "
},
{
"math_id": 62,
"text": "\\theta_0"
},
{
"math_id": 63,
"text": " \n d \\theta_i(t) \n = \n \\left[\\sum_{j=1}^m \\gamma^{ij}(\\theta_t)\\; \n \\int {{\\cal L}_t^\\ast\\, p(x,\\theta_t)}\\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j} dx - \\sum_{j=1}^m \\gamma^{ij}(\\theta_t)\\; \n \\int \\frac{1}{2} \\left[\\vert b(x,t) \\vert^2 - \\int \\vert b(z,t) \\vert^2 p(z,\\theta_t)dz\\right] \\; p(x,\\theta_t) \\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; \n d x \\right] dt"
},
{
"math_id": 64,
"text": " + \n \\sum_{k=1}^d\\; \\left[ \\sum_{j=1}^m \\gamma^{ij}(\\theta_t)\\; \n \\int \\left[ b_k(x,t) - \\int b_k(z,t) p(z,\\theta_t) dz \\right] \\; p(x,\\theta_t) \\;\n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; d x \\right]\n \\circ dY_t^k\\ ."
},
{
"math_id": 65,
"text": " \\left\\{ \\frac{\\partial\\sqrt{ p(\\cdot,\\theta)}}{\\partial \\theta_1},\\cdots,\n \\frac{\\partial \\sqrt{p(\\cdot,\\theta)}}{\\partial \\theta_n} \\right\\},"
},
{
"math_id": 66,
"text": " \\frac{1}{4} g_{ij}(\\theta) = \n \\left \\langle \\frac{\\partial \\sqrt{p}}{\n \\partial \\theta_i}\\, , \\frac{\\partial \\sqrt{p}}{\n \\partial \\theta_j}\\right \\rangle\n = \\frac{1}{4} \\int \\frac{1}{p(x,\\theta)}\\,\n \\frac{\\partial p(x,\\theta)}{\\partial \\theta_i}\\,\n \\frac{\\partial p(x,\\theta)}{\\partial \\theta_j}\\,\n d x ."
},
{
"math_id": 67,
"text": "g"
},
{
"math_id": 68,
"text": " d \\theta_i = \\left[ \\sum_{j=1}^n g^{ij}(\\theta_t)\\; \n \\int \\frac{F(p(x,\\theta_t))}{p(x,\\theta_t)}\\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; \n dx \\right] dt + \\sum_{k=1}^d\\; \\left[ \\sum_{j=1}^m g^{ij}(\\theta_t)\\; \n \\int \\frac{G_k(p(x,\\theta_t))}{p(x,\\theta_t)}\\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; dx \\right]\n \\circ dY_t^k\\ , \n"
},
{
"math_id": 69,
"text": " d \\theta_i(t) \n = \n \\left[ \\sum_{j=1}^m g^{ij}(\\theta_t)\\; \n \\int \\frac{{\\cal L}_t^\\ast\\, p(x,\\theta_t)}{p(x,\\theta_t)}\\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; \n dx \n - \\sum_{j=1}^m g^{ij}(\\theta_t) \n \\int \\frac{1}{2} \\vert b_t(x) \\vert^2 \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j} \n dx \\right] dt "
},
{
"math_id": 70,
"text": " + \\sum_{k=1}^d\\; \\left[ \\sum_{j=1}^m g^{ij}(\\theta_t)\\; \n \\int b_k(x,t)\\; \n \\frac{\\partial p(x,\\theta_t)}{\\partial \\theta_j}\\; dx \\right]\n \\circ dY_t^k\\ ."
},
{
"math_id": 71,
"text": "S_\\Theta^{1/2}"
},
{
"math_id": 72,
"text": "b(x)"
},
{
"math_id": 73,
"text": "|b(x)|^2"
},
{
"math_id": 74,
"text": "X"
},
{
"math_id": 75,
"text": "d p_t = {\\cal L}^*_t p_t \\ d t \n+ p_t[b(\\cdot,t) - E_{p_t}(b(\\cdot,t))]^T [ d Y_t - E_{p_t}(b(\\cdot,t)) dt]."
},
{
"math_id": 76,
"text": "p(\\cdot,\\theta_{0+\\delta t})"
},
{
"math_id": 77,
"text": "p_{0+\\delta t}"
},
{
"math_id": 78,
"text": "\\delta t"
},
{
"math_id": 79,
"text": "\\| \\cdot \\|"
},
{
"math_id": 80,
"text": " \\theta_{0+\\delta t} \\approx \\mbox{argmin}_\\theta\\ \\| p_{0+\\delta t}- p(\\cdot,\\theta) \\|."
},
{
"math_id": 81,
"text": "\\|\\cdot \\|"
},
{
"math_id": 82,
"text": "\\theta_t"
},
{
"math_id": 83,
"text": "E_t[\\|p_{0+\\delta t}-p(\\cdot,\\theta_{0+\\delta t})\\|^2]"
},
{
"math_id": 84,
"text": "(\\delta t)^2"
},
{
"math_id": 85,
"text": "\\|E[p_{0+\\delta t}-p(\\cdot,\\theta_{0+\\delta t})]\\|."
},
{
"math_id": 86,
"text": "p \\in L^2"
},
{
"math_id": 87,
"text": "\\sqrt{p} \\in L^2"
},
{
"math_id": 88,
"text": "p"
},
{
"math_id": 89,
"text": "\\sqrt{p}"
},
{
"math_id": 90,
"text": "\\pi(p)"
},
{
"math_id": 91,
"text": "\\theta_{0+\\delta t} \\approx \\mbox{argmin}_\\theta\\ \\| \\pi(p_{0+\\delta t})- p(\\cdot,\\theta) \\|."
},
{
"math_id": 92,
"text": "\\pi(p_{0+\\delta t})"
},
{
"math_id": 93,
"text": "dY"
}
]
| https://en.wikipedia.org/wiki?curid=74358254 |
74361 | Binary symmetric channel | A binary symmetric channel (or BSCp) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver will receive a bit. The bit will be "flipped" with a "crossover probability" of "p", and otherwise is received correctly. This model can be applied to varied communication channels such as telephone lines or disk drive storage.
The noisy-channel coding theorem applies to BSCp, saying that information can be transmitted at any rate up to the channel capacity with arbitrarily low error. The channel capacity is formula_0 bits, where formula_1 is the binary entropy function. Codes including Forney's code have been designed to transmit information efficiently across the channel.
Definition.
A binary symmetric channel with crossover probability formula_2, denoted by BSCp, is a channel with binary input and binary output and probability of error formula_2. That is, if formula_3 is the transmitted random variable and formula_4 the received variable, then the channel is characterized by the conditional probabilities:
formula_5
It is assumed that formula_6. If formula_7, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability formula_8.
Capacity.
The channel capacity of the binary symmetric channel, in bits, is:
formula_9
where formula_10 is the binary entropy function, defined by:
formula_11
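The channel and its capacity are straightforward to explore numerically. The following sketch (an illustration only; the crossover probability and sample size are arbitrary choices) simulates a formula_14 and compares formula_9 with a plug-in estimate of the mutual information for uniformly distributed inputs:
```python
import numpy as np

def binary_entropy(p):
    """H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def bsc(bits, p, rng):
    """Transmit an array of bits through a binary symmetric channel with crossover p."""
    flips = rng.random(bits.shape) < p
    return bits ^ flips

rng = np.random.default_rng(0)
p = 0.11                              # chosen so that 1 - H_b(p) is very close to 1/2
n = 1_000_000
x = rng.integers(0, 2, size=n)        # uniformly distributed channel inputs
y = bsc(x, p, rng)

# Plug-in estimate of I(X;Y) = H(Y) - H(Y|X) for uniform inputs.
p_flip_hat = np.mean(x != y)
i_estimate = binary_entropy(np.mean(y)) - binary_entropy(p_flip_hat)

print("capacity 1 - H_b(p):", 1.0 - binary_entropy(p))
print("estimated I(X;Y):   ", i_estimate)
```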
Noisy-channel coding theorem.
Shannon's noisy-channel coding theorem gives a result about the rate of information that can be transmitted through a communication channel with arbitrarily low error. We study the particular case of formula_12.
The noise formula_13 that characterizes formula_14 is a random variable consisting of n independent random bits (n is defined below) where each random bit is a formula_15 with probability formula_2 and a formula_16 with probability formula_17. We indicate this by writing "formula_18".
What this theorem actually implies is that a message, when picked from formula_23, encoded with a random encoding function formula_24, and sent across a noisy formula_14, can be recovered by decoding with very high probability, provided that formula_25, or in effect the rate of the channel, is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small.
Proof.
The theorem can be proved directly with a probabilistic method. Consider an encoding function formula_21 that is selected at random. This means that for each message formula_26, the value formula_27 is selected at random (with equal probabilities). For a given encoding function formula_24, the decoding function formula_28 is specified as follows: given any received word formula_29, we find the message formula_22 such that the Hamming distance formula_30 is as small as possible (with ties broken arbitrarily). (formula_31 is called a maximum likelihood decoding function.)
The proof continues by showing that at least one such choice formula_32 satisfies the conclusion of the theorem, by integration over the probabilities. Suppose formula_2 and formula_20 are fixed. First we show that, for a fixed formula_33 and formula_24 chosen randomly, the probability of failure over formula_12 noise is exponentially small in "n". At this point, the proof works for a fixed message formula_34. Next we extend this result to work for all messages formula_34. We achieve this by eliminating half of the codewords from the code with the argument that the proof for the decoding error probability holds for at least half of the codewords. The latter method is called expurgation. This gives the total process the name "random coding with expurgation".
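The random-coding argument can also be illustrated by direct simulation. In the sketch below (illustrative only; the block length, dimension and crossover probability are small, arbitrary values so that exhaustive minimum-distance decoding stays cheap), a random encoding function formula_24 is drawn, codewords are sent through formula_12, and formula_31 is implemented by exhaustive search over the codebook:
```python
import numpy as np

rng = np.random.default_rng(1)
p, n, k = 0.05, 31, 8                  # rate k/n ~ 0.26, well below 1 - H_b(0.05) ~ 0.71
codebook = rng.integers(0, 2, size=(2 ** k, n))     # a random encoding function E

def decode(y):
    """Maximum likelihood decoding: the message whose codeword is nearest in Hamming distance."""
    return np.argmin(np.count_nonzero(codebook != y, axis=1))

errors, trials = 0, 2000
for _ in range(trials):
    m = rng.integers(0, 2 ** k)        # a uniformly chosen message
    noise = rng.random(n) < p          # BSC_p noise pattern
    y = codebook[m] ^ noise            # received word
    errors += int(decode(y) != m)

print("empirical block error rate:", errors / trials)
```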
Converse of Shannon's capacity theorem.
The converse of the capacity theorem essentially states that formula_35 is the best rate one can achieve over a binary symmetric channel. Formally the theorem states:
The intuition behind the proof is, however, that the number of errors grows rapidly as the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension formula_25, while the channel formula_12 introduces transmission errors. When the capacity of the channel is formula_36, the number of errors is typically formula_37 for a code of block length formula_19. The maximum number of messages is formula_38. The output of the channel, on the other hand, has formula_39 possible values. If there is any confusion between any two messages, it is likely that formula_40. Hence we would have formula_41, a case we would like to avoid in order to keep the decoding error probability exponentially small.
Codes.
A great deal of work has been done, and continues to be done, to design explicit error-correcting codes that achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors which it can correct.
The approach behind the design of codes which meet the channel capacities of formula_42 or the binary erasure channel formula_43 has been to correct a smaller number of errors with a high probability, and to achieve the highest possible rate. Shannon's theorem gives us the best rate which could be achieved over a formula_14, but it does not give us an idea of any explicit codes which achieve that rate. In fact such codes are typically constructed to correct only a small fraction of errors with a high probability, but achieve a very good rate. The first such code was due to George D. Forney in 1966. The code is a concatenated code formed by concatenating two different kinds of codes.
Forney's code.
Forney constructed a concatenated code formula_44 to achieve the capacity of the noisy-channel coding theorem for formula_12. In his code, the outer code formula_45 is a code of block length formula_46 and rate formula_47 over the field formula_48, with formula_49, equipped with a decoding algorithm formula_50 that can correct up to a formula_51 fraction of worst-case errors in formula_52 time; the inner code formula_53 is a code of block length formula_19, dimension formula_25, and rate formula_54, equipped with a decoding algorithm formula_55 whose decoding error probability over formula_12 is at most formula_56 and which runs in formula_57 time.
For the outer code formula_45, a Reed-Solomon code would have been the first code to have come in mind. However, we would see that the construction of such a code cannot be done in polynomial time. This is why a binary linear code is used for formula_45.
For the inner code formula_53 we find a linear code by exhaustively searching from the linear code of block length formula_19 and dimension formula_25, whose rate meets the capacity of formula_12, by the noisy-channel coding theorem.
The rate satisfies formula_58, which almost meets the formula_12 capacity. We further note that the encoding and decoding of formula_59 can be done in polynomial time with respect to formula_46. As a matter of fact, encoding formula_59 takes time formula_60. Further, the decoding algorithm described takes time formula_61 as long as formula_62; and formula_63.
Decoding error probability.
A natural decoding algorithm for formula_59 is to first decode each received inner block, computing formula_64, and then to run the outer decoding algorithm formula_50 on the resulting word formula_65.
Note that each block of code for formula_53 is considered a symbol for formula_45. Now since the probability of error at any index formula_66 for formula_55 is at most formula_67 and the errors in formula_12 are independent, the expected number of errors for formula_55 is at most formula_68 by linearity of expectation. Now applying the Chernoff bound, the probability of more than formula_69 errors occurring is bounded by formula_70. Since the outer code formula_45 can correct at most formula_69 errors, this is the decoding error probability of formula_59. This, when expressed in asymptotic terms, gives us an error probability of formula_71. Thus the achieved decoding error probability of formula_59 is exponentially small, as in the noisy-channel coding theorem.
We have given a general technique to construct formula_59. For more detailed descriptions on formula_53 and formula_45 please read the following references. Recently a few other codes have also been constructed for achieving the capacities. LDPC codes have been considered for this purpose for their faster decoding time.
Applications.
The binary symmetric channel can model a disk drive used for memory storage: the channel input represents a bit being written to the disk and the output corresponds to the bit later being read. Errors can arise from the magnetization flipping, background noise, or the writing head making a mistake. Other objects which the binary symmetric channel can model include a telephone or radio communication line, or cell division, in which the daughter cells contain DNA information from their parent cell.
This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1 - \\operatorname H_\\text{b}(p)"
},
{
"math_id": 1,
"text": "\\operatorname H_\\text{b}"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\operatorname {Pr} [ Y = 0 | X = 0 ] &= 1 - p \\\\\n\\operatorname {Pr} [ Y = 0 | X = 1 ] &= p \\\\\n\\operatorname {Pr} [ Y = 1 | X = 0 ] &= p \\\\\n\\operatorname {Pr} [ Y = 1 | X = 1 ] &= 1 - p\n\\end{align}"
},
{
"math_id": 6,
"text": "0 \\le p \\le 1/2"
},
{
"math_id": 7,
"text": "p > 1/2"
},
{
"math_id": 8,
"text": "1 - p \\le 1/2"
},
{
"math_id": 9,
"text": "\\ C_{\\text{BSC}} = 1 - \\operatorname H_\\text{b}(p), "
},
{
"math_id": 10,
"text": "\\operatorname H_\\text{b}(p)"
},
{
"math_id": 11,
"text": "\\operatorname H_\\text{b}(x)=x\\log_2\\frac{1}{x}+(1-x)\\log_2\\frac{1}{1-x}"
},
{
"math_id": 12,
"text": "\\text{BSC}_p"
},
{
"math_id": 13,
"text": "e"
},
{
"math_id": 14,
"text": "\\text{BSC}_{p}"
},
{
"math_id": 15,
"text": "1"
},
{
"math_id": 16,
"text": "0"
},
{
"math_id": 17,
"text": "1-p"
},
{
"math_id": 18,
"text": "e \\in \\text{BSC}_{p}"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "\\epsilon"
},
{
"math_id": 21,
"text": "E: \\{0,1\\}^k \\to \\{0,1\\}^n"
},
{
"math_id": 22,
"text": "m\\in\\{0,1\\}^{k}"
},
{
"math_id": 23,
"text": "\\{0,1\\}^k"
},
{
"math_id": 24,
"text": "E"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "m \\in \\{0,1\\}^k"
},
{
"math_id": 27,
"text": "E(m) \\in \\{0,1\\}^n"
},
{
"math_id": 28,
"text": "D:\\{0,1\\}^n \\to \\{0,1\\}^k"
},
{
"math_id": 29,
"text": "y \\in \\{0,1\\}^n"
},
{
"math_id": 30,
"text": "\\Delta(y, E(m))"
},
{
"math_id": 31,
"text": "D"
},
{
"math_id": 32,
"text": "(E,D)"
},
{
"math_id": 33,
"text": "m \\in \\{0,1\\}^{k}"
},
{
"math_id": 34,
"text": "m"
},
{
"math_id": 35,
"text": "1 - H(p)"
},
{
"math_id": 36,
"text": "H(p)"
},
{
"math_id": 37,
"text": "2^{H(p + \\epsilon)n}"
},
{
"math_id": 38,
"text": "2^{k}"
},
{
"math_id": 39,
"text": "2^{n}"
},
{
"math_id": 40,
"text": "2^{k}2^{H(p + \\epsilon)n} \\ge 2^{n}"
},
{
"math_id": 41,
"text": "k \\geq \\lceil (1 - H(p + \\epsilon)n) \\rceil"
},
{
"math_id": 42,
"text": "\\text{BSC}"
},
{
"math_id": 43,
"text": "\\text{BEC}"
},
{
"math_id": 44,
"text": "C^{*} = C_\\text{out} \\circ C_\\text{in}"
},
{
"math_id": 45,
"text": "C_\\text{out}"
},
{
"math_id": 46,
"text": "N"
},
{
"math_id": 47,
"text": "1-\\frac{\\epsilon}{2}"
},
{
"math_id": 48,
"text": "F_{2^k}"
},
{
"math_id": 49,
"text": "k = O(\\log N)"
},
{
"math_id": 50,
"text": "D_\\text{out}"
},
{
"math_id": 51,
"text": "\\gamma"
},
{
"math_id": 52,
"text": "t_\\text{out}(N)"
},
{
"math_id": 53,
"text": "C_\\text{in}"
},
{
"math_id": 54,
"text": "1 - H(p) - \\frac{\\epsilon}{2}"
},
{
"math_id": 55,
"text": "D_\\text{in}"
},
{
"math_id": 56,
"text": "\\frac{\\gamma}{2}"
},
{
"math_id": 57,
"text": "t_\\text{in}(N)"
},
{
"math_id": 58,
"text": "R(C^{*}) = R(C_\\text{in}) \\times R(C_\\text{out}) = (1-\\frac{\\epsilon}{2}) ( 1 - H(p) - \\frac{\\epsilon}{2} ) \\geq 1 - H(p)-\\epsilon"
},
{
"math_id": 59,
"text": "C^{*}"
},
{
"math_id": 60,
"text": "O(N^{2})+O(Nk^{2}) = O(N^{2})"
},
{
"math_id": 61,
"text": "Nt_\\text{in}(k) + t_\\text{out}(N) = N^{O(1)} "
},
{
"math_id": 62,
"text": "t_\\text{out}(N) = N^{O(1)}"
},
{
"math_id": 63,
"text": "t_\\text{in}(k) = 2^{O(k)}"
},
{
"math_id": 64,
"text": "y_{i}^{\\prime} = D_\\text{in}(y_i), \\quad i \\in (0, N)"
},
{
"math_id": 65,
"text": "y^{\\prime} = (y_1^{\\prime} \\ldots y_N^{\\prime})"
},
{
"math_id": 66,
"text": "i"
},
{
"math_id": 67,
"text": "\\tfrac{\\gamma}{2}"
},
{
"math_id": 68,
"text": "\\tfrac{\\gamma N}{2}"
},
{
"math_id": 69,
"text": "\\gamma N"
},
{
"math_id": 70,
"text": "e^\\frac{-\\gamma N}{6}"
},
{
"math_id": 71,
"text": "2^{-\\Omega(\\gamma N)}"
}
]
| https://en.wikipedia.org/wiki?curid=74361 |
74371509 | Pedersen current | A Pedersen current is an electric current formed in the direction of the applied electric field when a conductive material with charge carriers is acted upon by an external electric field and an external magnetic field. Pedersen currents emerge in a material where the charge carriers collide with particles in the conductive material at approximately the same frequency as the gyratory frequency induced by the magnetic field. Pedersen currents are associated with a Pedersen conductivity related to the applied magnetic field and the properties of the material.
History.
The first expression for the Pedersen conductivity was formulated by Peder Oluf Pedersen from Denmark in his 1927 work "The Propagation of Radio Waves along the Surface of the Earth and in the Atmosphere", where he pointed out that the geomagnetic field means that the conductivity of the ionosphere is anisotropic.
Physical explanation.
Representation of the path of charged particles with initial velocities being acted upon by a magnetic field going into the paper, both with (B) and without (A) an applied electric field.
When a moving charge carrier in a conductor is under the influence of a magnetic field formula_0, the carrier experiences a force perpendicular to the direction of motion and the magnetic field, resulting in a gyratory path, which is circular in the absence of any other external force. When an electric field formula_1 is applied in addition to the magnetic field and perpendicular to that field, this gyratory motion is driven by the electric field, leading to a net drift in the direction formula_2 around the guiding center and a lack of mobility in the direction of the electric field. The charge carrier undergoes a helical motion whereby a charge carrier at rest acquires motion in the direction of the electric field according to Coulomb's law, gains a velocity perpendicular to the magnetic field, and subsequently is pushed in the direction formula_3 due to the Lorentz force (as formula_4 is in the direction of formula_1, formula_3 is initially in the same direction as formula_2.) The motion will then oscillate backwards against the electric field until it again reaches a velocity of zero in the direction of the electric field, before again being driven by the electric and magnetic fields, forming a helical path. As a result, in a vacuum, no net current is possible in the direction of the electric field. Likewise, when there is a dense material with a high frequency of collisions between the charge carriers and the conductive medium, mobility is very low and the charge carriers are basically stationary.
For a positively charged particle, over the course of this helical path, there is a positive skew in the location distribution of the charge carrier in the direction of the electric field, such that at any given point in time a measurement of the location of the charge carrier will on average result in a positive change from original position in the direction of the electric potential. During a collision with another particle in the medium, the velocity of the charge carrier is randomized at the point of collision. This location of collision is likely to be a positive change in the direction of the electric field from the original location of the charge carrier. After the velocity is randomised, the charge carrier will then restart helical motion from a different original location. Overall, this results in a bulk movement in the direction of the electric field such that a current is able to flow, which is known as the Pedersen Current, with the associated Pedersen Conductivity reaching a maximum when the frequency of collisions is approximately equal to the gyratory frequency so that the charge carriers experience one collision for every gyration.
The Pedersen conductivity is determined by the following equation:
formula_5
Where the electron density is formula_6, formula_7 is the magnetic field, formula_8 is the ion concentration for a given species, formula_9 is the collision frequency between ion species i and other particles, formula_10 is the gyrofrequency for that ion, formula_11 is the collision frequency for the electron, and formula_12 is the electron gyrofrequency.
A negative charge carrier undergoes a similar drift in the direction formula_2, but moves in the opposite direction to a positive charge carrier, and undergoes helical motion such that there is a net negative skew in the distribution of position from the original position over the gyration, and as these particles are negatively charged they will also produce a positive contribution to the Pedersen current.
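The conductivity expression formula_5 can be evaluated for rough, representative parameter values. In the sketch below (illustrative only), all plasma parameters are assumed, order-of-magnitude numbers and are not taken from the article or from any reference ionospheric model; a single ion species is used for simplicity:
```python
e = 1.602e-19          # elementary charge, C
m_e = 9.109e-31        # electron mass, kg
m_i = 5.0e-26          # assumed mean ion mass (roughly 30 amu), kg
B = 5.0e-5             # assumed magnetic field strength, T

n_e = 1.0e11           # assumed electron density, m^-3
nu_in = 1.0e3          # assumed ion-neutral collision frequency, s^-1
nu_e = 1.0e4           # assumed electron collision frequency, s^-1

omega_i = e * B / m_i  # ion gyrofrequency, rad/s
omega_e = e * B / m_e  # electron gyrofrequency, rad/s

def pedersen_term(nu, omega):
    """The factor (nu/omega) / (1 + nu^2/omega^2) appearing in the conductivity formula."""
    r = nu / omega
    return r / (1.0 + r ** 2)

# A single ion species with concentration fraction C_i = 1 is assumed.
sigma_P = (e * n_e / B) * (pedersen_term(nu_in, omega_i) + pedersen_term(nu_e, omega_e))
print("Pedersen conductivity ~", sigma_P, "S/m")
```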
Role in the Ionosphere.
Pedersen currents play an important role in the ionosphere, especially in polar regions. In the ionospheric dynamo region near the poles, the ion density is low enough and the magnetic field high enough for the collision frequency to be comparable to the gyration frequency, and the Earth's magnetic field has a large component perpendicular to the horizontal electric field due to the high inclination of the field near the poles. As a result, Pedersen currents are a significant mechanism for charge carrier movement. The magnitude of the Pedersen current balances the drag on the ionospheric plasma due to ion‐neutral collisions.
Pedersen currents in the ionosphere have a similar production mechanism to Hall currents, a similar form of conductivity equation, and a similar conductivity profile and dependence on various factors. The Pedersen and Hall conductivities are maximised during daytime or in auroral regions at night, as they depend on plasma density, which in turn depends on auroral or solar ionization. The conductivities also vary by about 40% over the solar cycle, reaching a maximum around solar maximum.
The Pedersen conductivity reaches a maximum in the ionosphere at an altitude of around 125 km.
Pedersen currents flow between the Region 1 and Region 2 Birkeland current sheets, completing the circuit of the flow of charge through the ionosphere (at a given local time, one region involves current entering the ionosphere along the geomagnetic field lines, and the other region involves current leaving the ionosphere). There is also a Pedersen current that flows across the pole from the dawn side (local time 6:00) to the dusk side (local time 18:00) of the Region 1 current sheet.
Electrons have also been shown to carry Pedersen currents in the D layer of the ionosphere.
Joule heating.
The Joule heating of the ionosphere, a major source of energy loss from the magnetosphere, is closely related to the Pedersen conductivity through the following relation:
formula_13
Where formula_14 is the Joule heating per unit volume, formula_15 is the Pedersen conductivity, formula_16 and formula_7 are the electric and magnetic fields, and formula_17 is the neutral wind velocity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{B}"
},
{
"math_id": 1,
"text": "\\mathbf{E}"
},
{
"math_id": 2,
"text": "\\mathbf{E}\\times \\mathbf{B}"
},
{
"math_id": 3,
"text": "\\mathbf{v}\\times \\mathbf{B}"
},
{
"math_id": 4,
"text": "\\mathbf{v}"
},
{
"math_id": 5,
"text": "\\mathbf{\\sigma}_P = \\frac{en_e}B \\left \\{ \t\\sum_i C_i \t\\left [ \\frac{\\nu_{in} / \\omega_i}{1 + (\\nu_{in}^2 / \\omega_i^2)} \\right ] + \t \\frac{\\nu_e / \\omega_e}{1 + (\\nu_e^2 / \\omega_e^2)} \\right \\}"
},
{
"math_id": 6,
"text": "n_e"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "C_i"
},
{
"math_id": 9,
"text": "\\nu_{in}"
},
{
"math_id": 10,
"text": "\\omega_i"
},
{
"math_id": 11,
"text": "\\nu_e"
},
{
"math_id": 12,
"text": "\\omega_e"
},
{
"math_id": 13,
"text": "q_J = \\mathbf{\\sigma}_P (E - U_n \\times B)^2"
},
{
"math_id": 14,
"text": "q_J"
},
{
"math_id": 15,
"text": "\\mathbf{\\sigma}_P "
},
{
"math_id": 16,
"text": "E"
},
{
"math_id": 17,
"text": "U_n"
}
]
| https://en.wikipedia.org/wiki?curid=74371509 |
7437222 | Period (algebraic geometry) | Numbers expressible as integrals of algebraic functions
In algebraic geometry, a period is a number that can be expressed as an integral of an algebraic function over an algebraic domain. Sums and products of periods remain periods, such that the periods form a ring.
Maxim Kontsevich and Don Zagier gave a survey of periods and introduced some conjectures about them. Periods also arise in computing the integrals that arise from Feynman diagrams, and there has been intensive work trying to understand the connections.
Definition.
A real number is a period if it is of the form
formula_0
where formula_1 is a polynomial and formula_2 a rational function on formula_3 with rational coefficients. A complex number is a period if its real and imaginary parts are periods.
An alternative definition allows formula_1 and formula_2 to be algebraic functions; this looks more general, but is equivalent. The coefficients of the rational functions and polynomials can also be generalised to algebraic numbers because irrational algebraic numbers are expressible in terms of areas of suitable domains.
In the other direction, formula_2 can be restricted to be the constant function formula_4 or formula_5, by replacing the integrand with an integral of formula_6 over a region defined by a polynomial in additional variables. In other words, a (nonnegative) period is the volume of a region in formula_3 defined by a polynomial inequality.
Examples.
Besides the algebraic numbers, further numbers are known to be periods. For instance, the natural logarithm of a positive algebraic number is a period, since it can be written as the integral formula_7, and so is the number π, because π formula_8.
An example of a real number that is not a period is given by Chaitin's constant Ω. Any other non-computable number also gives an example of a real number that is not a period. Currently there are no natural examples of computable numbers that have been proved not to be periods; however, it is possible to construct artificial examples. Plausible candidates for numbers that are not periods include "e", 1/π, and the Euler–Mascheroni constant γ.
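Returning to the examples above, the definition can be made concrete with a short numerical sketch (not part of the article's sources): the integrals below have rational integrands over intervals cut out by polynomial inequalities, and they approximate the period formula_7 with the upper limit taken to be 2 (giving the natural logarithm of 2) and the period π, using an elementary midpoint rule:
```python
def midpoint_integral(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with a midpoint Riemann sum."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

pi_approx = midpoint_integral(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
log2_approx = midpoint_integral(lambda x: 1.0 / x, 1.0, 2.0)

print(pi_approx)     # close to 3.14159265...
print(log2_approx)   # close to 0.69314718...
```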
Properties and motivation.
The periods are intended to bridge the gap between the algebraic numbers and the transcendental numbers. The class of algebraic numbers is too narrow to include many common mathematical constants, while the set of transcendental numbers is not countable, and its members are not generally computable.
The set of all periods is countable, and all periods are computable, and in particular definable.
Conjectures.
Many of the constants known to be periods are also given by integrals of transcendental functions. Kontsevich and Zagier note that there "seems to be no universal rule explaining why certain infinite sums or integrals of transcendental functions are periods".
Kontsevich and Zagier conjectured that, if a period is given by two different integrals, then each integral can be transformed into the other using only the linearity of integrals (in both the integrand and the domain), changes of variables, and the Newton–Leibniz formula
formula_9
(or, more generally, the Stokes formula).
A useful property of algebraic numbers is that equality between two algebraic expressions can be determined algorithmically. The conjecture of Kontsevich and Zagier would imply that equality of periods is also decidable: inequality of computable reals is known to be recursively enumerable; and conversely, if two integrals agree, then an algorithm could confirm so by trying all possible ways to transform one of them into the other one.
It is conjectured that Euler's number "e" and the Euler–Mascheroni constant γ are "not" periods.
Generalizations.
The periods can be extended to "exponential periods" by permitting the integrand formula_2 to be the product of an algebraic function and the exponential of an algebraic function. This extension includes all algebraic powers of "e", the gamma function of rational arguments, and values of Bessel functions.
Kontsevich and Zagier suggest that there are "indications" that periods can be naturally generalized even further, to include Euler's constant γ. With this inclusion, "all classical constants are periods in the appropriate sense".
References.
Footnotes
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_{P(x,y,z,\\ldots)\\ge0}Q(x,y,z,\\ldots) \\mathrm{d}x\\mathrm{d}y\\mathrm{d}z\\ldots"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "\\mathbb{R}^n"
},
{
"math_id": 4,
"text": "1"
},
{
"math_id": 5,
"text": "-1"
},
{
"math_id": 6,
"text": "\\pm 1"
},
{
"math_id": 7,
"text": "\\int_1^{a}\\frac{1}{x}\\ \\mathrm{d}x"
},
{
"math_id": 8,
"text": "= \\int_0^1\\frac{4}{x^2+1}\\ \\mathrm{d}x"
},
{
"math_id": 9,
"text": " \\int_a^b f'(x) \\, dx = f(b) - f(a) "
}
]
| https://en.wikipedia.org/wiki?curid=7437222 |
74386455 | Inozemtsev model | Statistical lattice model with long-range interactions
In statistical physics, the Inozemtsev model is a spin chain model, defined on a one-dimensional, periodic lattice. Unlike the prototypical Heisenberg spin chain, which only includes interactions between neighboring sites of the lattice, the Inozemtsev model has "long-range" interactions, that is, interactions between any pair of sites, regardless of the distance between them.
It was introduced in 1990 by Vladimir Inozemtsev as a model which interpolates between the Heisenberg XXX model and the Haldane–Shastry model. Like those models, the Inozemtsev model is exactly solvable.
Formulation.
For a chain with formula_0 spin 1/2 sites, the quantum phase space is described by the Hilbert space formula_1. The (elliptic) Inozemtsev model is given by the (unnormalised) Hamiltonian
formula_2
where formula_3 is the Weierstrass elliptic function and formula_4 is the Pauli vector acting on the formula_5th copy of the tensor product Hilbert space.
More precisely, the parameters used for the Weierstrass p are
formula_6
for formula_7 a positive real number. The notation formula_8 means sum over all pairs of integers except formula_9. This ensures that the function is formula_0-periodic in the real direction, and is real for real values of formula_10.
Exact solution.
The system has been exactly solved by means of a Bethe ansatz method. The Bethe ansatz equations were found by Inozemtsev. In fact, the model was first solved in the infinite size limit.
AdS/CFT correspondence.
The model can be used to understand certain aspects of the AdS/CFT correspondence proposed by Maldacena. Specifically, integrability techniques have turned out to be useful for an 'integrable' instance of the correspondence. On the string theory side of the correspondence, one has a type IIB superstring on formula_11, the product of five-dimensional Anti-de Sitter space with the five-dimensional sphere. On the conformal field theory (CFT) side one has N = 4 supersymmetric Yang–Mills theory (N = 4 SYM) on four-dimensional space.
Spin chains have turned out to be useful for computing specific anomalous dimensions on the CFT side, which can then provide evidence for the correspondence if matching observables are computed on the string theory side. In the so-called 'planar limit' or 'large N' limit of N = 4 SYM, in which the number of colors formula_12, which parametrizes the gauge group formula_13, is sent to infinity, determining one-loop anomalous dimensions becomes equivalent to the problem of diagonalizing an appropriate spin chain. The Inozemtsev model is one such model which has been useful in determining these quantities.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "\\mathcal{H} = (\\mathbb{C}^2)^{\\otimes L}"
},
{
"math_id": 2,
"text": "H_u = \\sum_{j < k}^L \\wp(j - k)\\frac{1 - \\vec \\sigma_j \\otimes \\vec \\sigma_k}{2}"
},
{
"math_id": 3,
"text": "\\wp(z)"
},
{
"math_id": 4,
"text": "\\vec \\sigma_j"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\\wp(z) = \\wp(z;L, i\\omega) = \\frac{1}{z^2} + \\sum_{m,n}'\\left(\\frac{1}{(z - mL - in\\omega)^2} - \\frac{1}{(mL + in\\omega)^2}\\right)"
},
{
"math_id": 7,
"text": "\\omega"
},
{
"math_id": 8,
"text": "\\sum'"
},
{
"math_id": 9,
"text": "(m,n) = (0,0)"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "\\mathrm{AdS}_5 \\times \\mathrm{S}^5"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "SU(N)"
}
]
| https://en.wikipedia.org/wiki?curid=74386455 |
74387063 | Catalog of MCA Control Patterns | Metabolic control analysis patterns
Jannie Hofmeyr published the first catalog of control patterns in metabolic control analysis (MCA). His doctoral research concerned the use of graphical patterns to elucidate chains of interaction in metabolic regulation, later published in the European Journal of Biochemistry. In his thesis, he cataloged 25 patterns for various biochemical networks. In later work, his research group, together with Carl D Christensen and Johann Rohwer, developed a Python-based tool called SymCA, part of the PySCeSToolbox toolkit, which could generate patterns automatically and symbolically from a description of the network. This software was used to generate the patterns shown below.
The control equations, especially the numerators of the equations, can give information on the relative importance and routes by which perturbations travel through a biochemical network.
Notation.
Control patterns describe how a perturbation to a given parameter affects the steady-state level of a given variable. For example, a concentration control coefficient can describe how the overexpression of a specific enzyme can influence steady-state metabolite concentrations. Flux control coefficients are similar in that they describe how a perturbation in a given enzyme affects steady-state flux through a pathway. Such coefficients can be written in terms of elasticity coefficients.
Elasticity coefficients are local properties that describe how a single reaction is influenced by changes in the substrates and products that might influence the rate. For example, given a reaction such as:
formula_0
we will assume it has a rate of reaction of formula_1. This reaction rate can be influenced by changes in the concentrations of substrate formula_2 or product formula_3. This influence is measured by an elasticity which is defined as:
formula_4
To make the notation manageable, a specific numbering scheme is used in the following patterns. If a substrate has an index of formula_5, then the reaction index will be formula_6. The product elasticity will also have an index of formula_7. This means that a product elasticity will have identical subscripts and superscripts making them easy to identify. The source boundary species is always labeled zero as well as the label for the first reaction.
For example, the following fragment of a network illustrates this labeling:
formula_8
then
formula_9
As a further example, consider the simple two-step pathway
formula_10
If the steps are irreversible, the flux and concentration control coefficients are
formula_11
formula_12
whereas if the steps are reversible, they become
formula_13
formula_14
Linear Chains.
Three-Step Pathway.
formula_15
Assuming the three steps are Irreversible.
Denominator:
formula_16
Assume that each of the following expressions is divided by d
formula_17
formula_18
Assuming the three steps are Reversible.
Denominator:
formula_19
Assume that each of the following expressions is divided by formula_20
formula_21
formula_22
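These symbolic expressions can be checked numerically. The sketch below (the elasticity values are invented purely for illustration) plugs arbitrary numbers into formula_19, formula_21 and formula_22 for the reversible three-step pathway and verifies the flux-control and concentration-control summation theorems:
```python
# eps[(reaction, species)]: elasticity of v_reaction with respect to S_species.
# The numerical values are invented for this demonstration only.
eps = {
    (1, 1): -0.8,    # product elasticity of v1 with respect to S1
    (2, 1):  0.6,    # substrate elasticity of v2 with respect to S1
    (2, 2): -0.5,    # product elasticity of v2 with respect to S2
    (3, 2):  0.9,    # substrate elasticity of v3 with respect to S2
}

d = (eps[(2, 1)] * eps[(3, 2)]
     - eps[(1, 1)] * eps[(3, 2)]
     + eps[(1, 1)] * eps[(2, 2)])

CJ = [eps[(2, 1)] * eps[(3, 2)] / d,
      -eps[(1, 1)] * eps[(3, 2)] / d,
      eps[(1, 1)] * eps[(2, 2)] / d]
CS1 = [(eps[(3, 2)] - eps[(2, 2)]) / d, -eps[(3, 2)] / d, eps[(2, 2)] / d]
CS2 = [eps[(2, 1)] / d, -eps[(1, 1)] / d, (eps[(1, 1)] - eps[(2, 1)]) / d]

print("flux control coefficients:", CJ, "sum =", sum(CJ))    # summation theorem: 1
print("S1 control coefficients:  ", CS1, "sum =", sum(CS1))  # summation theorem: 0
print("S2 control coefficients:  ", CS2, "sum =", sum(CS2))  # summation theorem: 0
```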
Four-Step Pathway.
formula_23
Denominator:
formula_24
Assume that each of the following expressions is divided by formula_25.
formula_26
Linear Chains with Negative Feedback.
Three-Step Pathway.
Denominator:
formula_27
Assume that each of the following expressions is divided by formula_25.
formula_28
Four-Step Pathway.
Denominator:
formula_29
Assume that each of the following expressions is divided by formula_25.
formula_30
Branched Pathways.
At steady-state formula_31, therefore define the following two terms:
formula_32
Denominator:
formula_33
Assume that each of the following expressions is divided by formula_20.
formula_34
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " S \\stackrel{v}{\\longrightarrow} P "
},
{
"math_id": 1,
"text": " v "
},
{
"math_id": 2,
"text": " S "
},
{
"math_id": 3,
"text": " P "
},
{
"math_id": 4,
"text": " \\varepsilon^{v}_s = \\frac{\\partial v}{\\partial s} \\frac{s}{v} "
},
{
"math_id": 5,
"text": " i "
},
{
"math_id": 6,
"text": " v_{i+1} "
},
{
"math_id": 7,
"text": " i+1 "
},
{
"math_id": 8,
"text": " X_o \\stackrel{v_1}{\\longrightarrow} S_1 \\stackrel{v_2}{\\longrightarrow} S_2 \\stackrel{v_3}{\\longrightarrow} "
},
{
"math_id": 9,
"text": " \\varepsilon^2_1 = \\frac{\\partial v_2}{\\partial s_1} \\frac{s_1}{v_2}, \\quad \\varepsilon^2_2 = \\frac{\\partial v_2}{\\partial s_2} \\frac{s_2}{v_2}, \\quad \\varepsilon^3_2 = \\frac{\\partial v_3}{\\partial s_2} \\frac{s_2}{v_3} "
},
{
"math_id": 10,
"text": " X_o \\stackrel{v_1}{\\longrightarrow} S_1 \\stackrel{v_2}{\\longrightarrow} X_1 "
},
{
"math_id": 11,
"text": " C^J_{e_1} = 1 \\qquad C^J_{e_2} = 0 "
},
{
"math_id": 12,
"text": " C^{s_1}_{e_1} = \\frac{1}{\\varepsilon^{2}_1}\\qquad C^{s_1}_{e_2} = \\frac{-1}{\\varepsilon^{2}_1} "
},
{
"math_id": 13,
"text": " C^J_{v_1} = \\frac{\\varepsilon^{2}_1}{\\varepsilon^{2}_1 - \\varepsilon^{1}_1}\\qquad C^J_{v_2} = \\frac{-\\varepsilon^{1}_1}{\\varepsilon^{2}_1 - \\varepsilon^{1}_1} "
},
{
"math_id": 14,
"text": " C^{s_1}_{v_1} = \\frac{1}{\\varepsilon^{2}_1 - \\varepsilon^{1}_1}\\qquad C^{s_1}_{v_2} = \\frac{-1}{\\varepsilon^{2}_1 - \\varepsilon^{1}_1} "
},
{
"math_id": 15,
"text": " X_o \\stackrel{v_1}{\\longrightarrow}S_1 \\stackrel{v_2}{\\longrightarrow} S_2 \\stackrel{v_3}{\\longrightarrow} X_1 "
},
{
"math_id": 16,
"text": " d = \\varepsilon^{2}_1 \\varepsilon^{3}_2"
},
{
"math_id": 17,
"text": "\n\\begin{array}{lll}\nC^J_{e_1} = 1 & C^J_{e_2} = 0 & C^J_{e_3} = 0\n\\end{array}\n"
},
{
"math_id": 18,
"text": "\n\\begin{array}{ll}\n C^{s_1}_{e_1} = \\varepsilon^{3}_2 & C^{s_2}_{e_1} = \\varepsilon^{2}_1 \\\\[6pt]\nC^{s_1}_{e_2} = -\\varepsilon^{3}_2 & C^{s_2}_{e_2} = 0 \\\\[6pt]\nC^{s_2}_{e_2} = 0 & C^{s_2}_{e_3} = - \\varepsilon^{2}_1\n\\end{array}\n"
},
{
"math_id": 19,
"text": " d = \\varepsilon^{2}_1 \\varepsilon^{3}_2 -\\varepsilon^{1}_1 \\varepsilon^{3}_2 + \\varepsilon^{1}_1 \\varepsilon^{2}_2 "
},
{
"math_id": 20,
"text": "d"
},
{
"math_id": 21,
"text": "\n\\begin{array}{lll}\nC^J_{e_1} = \\varepsilon^{2}_1 \\varepsilon^{3}_2 & C^J_{e_2} = -\\varepsilon^{1}_1 \\varepsilon^{3}_2 & C^J_{e_3} = \\varepsilon^{1}_1 \\varepsilon^{2}_2 \\\\[6pt]\n\\end{array}\n"
},
{
"math_id": 22,
"text": "\n\\begin{array}{ll}\n C^{s_1}_{e_1} = \\varepsilon^{3}_2 - \\varepsilon^{2}_2 & C^{s_2}_{e_1} = \\varepsilon^{2}_1 \\\\[6pt]\nC^{s_1}_{e_2} = -\\varepsilon^{3}_2 & C^{s_2}_{e_2} = -\\varepsilon^{1}_1 \\\\[6pt] C^{s_1}_{e_3} = \\varepsilon^{2}_2 & C^{s_2}_{e_3} = \\varepsilon^{1}_1 - \\varepsilon^{2}_1\n\\end{array}\n"
},
{
"math_id": 23,
"text": " X_o \\stackrel{v_1}{\\longrightarrow}S_1 \\stackrel{v_2}{\\longrightarrow} S_2 \\stackrel{v_3}{\\longrightarrow} S_3 \\stackrel{v_4}{\\longrightarrow} X_1 "
},
{
"math_id": 24,
"text": "\nd = \\varepsilon^1_1 \\varepsilon^2_2 \\varepsilon^3_3 - \\varepsilon^1_1 \\varepsilon^2_2 \\varepsilon^4_3 + \\varepsilon^1_1 \\varepsilon^3_2 \\varepsilon^4_3 - \\varepsilon^2_1 \\varepsilon^3_2 \\varepsilon^4_3\n"
},
{
"math_id": 25,
"text": " d "
},
{
"math_id": 26,
"text": "\n\\begin{array}{lll}\nC^J_{e_1} = -\\varepsilon^2_1 \\varepsilon^3_2 \\varepsilon^4_3 & C^J_{e_2} = \\varepsilon^1_1 \\varepsilon^3_2 \\varepsilon^4_3 & C^J_{e_3} = -\\varepsilon^1_1 \\varepsilon^2_2 \\varepsilon^4_3 & C^J_{e_4} = \\varepsilon^1_1 \\varepsilon^2_2 \\varepsilon^3_3 \\\\[4pt]\nC^{s_1}_{e_1} = -\\varepsilon^2_2 \\varepsilon^3_3 + \\varepsilon^2_2 \\varepsilon^4_3 - \\varepsilon^3_2 \\varepsilon^4_3 & C^{s_1}_{e_2} = -\\varepsilon^3_2 \\varepsilon^4_3 & C^{s_1}_{e_3} = -\\varepsilon^2_2 \\varepsilon^4_3 & C^{s_1}_{e_4} = \\varepsilon^2_2 \\varepsilon^3_3 \\\\[4pt]\nC^{s_2}_{e_1} = \\varepsilon^2_1 \\varepsilon^3_3 - \\varepsilon^2_1 \\varepsilon^4_3 & C^{s_2}_{e_2} = -\\varepsilon^1_1 \\varepsilon^3_3 + \\varepsilon^1_1 \\varepsilon^4_3 & C^{s_2}_{e_3} = -\\varepsilon^1_1 \\varepsilon^4_3 + \\varepsilon^2_1 \\varepsilon^4_3\n& C^{s_2}_{e_4} = \\varepsilon^1_1 \\varepsilon^3_3 - \\varepsilon^2_1 \\varepsilon^3_3 \\\\[4pt]\nC^{s_3}_{e_1} = -\\varepsilon^2_1 \\varepsilon^3_2 & C^{s_3}_{e_2} = \\varepsilon^1_1 \\varepsilon^3_2 & C^{s_2}_{e_3} = -\\varepsilon^1_1 \\varepsilon^2_2 & C^{s_2}_{e_4} = \\varepsilon^1_1 \\varepsilon^2_2 - \\varepsilon^1_1 \\varepsilon^3_2 + \\varepsilon^2_1 \\varepsilon^3_2\n\\end{array}\n"
},
{
"math_id": 27,
"text": "\nd = \\varepsilon^1_1 \\varepsilon^2_2 - \\varepsilon^1_1 \\varepsilon^3_2 + \\varepsilon^2_1 \\varepsilon^3_2 -\\varepsilon^1_2 \\varepsilon^2_1\n"
},
{
"math_id": 28,
"text": "\n\\begin{array}{lll}\nC^{J}_{e_1} = \\varepsilon^2_1 \\varepsilon^3_2 & C^{J}_{e_2} = -\\varepsilon^1_1 \\varepsilon^3_2 & C^{J}_{e_3} = \\varepsilon^1_1 \\varepsilon^2_2 - \\varepsilon^1_2 \\varepsilon^2_1 \\\\[4pt]\nC^{s_1}_{e_1} = \\varepsilon^3_2 - \\varepsilon^2_2 & C^{s_1}_{e_2} = -\\varepsilon^3_2 - \\varepsilon^1_2 & C^{s_1}_{e_3} = \\varepsilon^2_2 - \\varepsilon^1_2 \\\\[4pt]\nC^{s_2}_{e_1} = \\varepsilon^2_1 & C^{s_2}_{e_2} = -\\varepsilon^1_1 & C^{s_2}_{e_3} = \\varepsilon^1_1 - \\varepsilon^2_1 \\\\[4pt]\n\\end{array}\n"
},
{
"math_id": 29,
"text": "\nd = \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} \\varepsilon^{4}_{3} - \\varepsilon^{1}_{1} \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} - \\varepsilon^{1}_{3} \\varepsilon^{2}_{1} \\varepsilon^{3}_{2} + \\varepsilon^{2}_{1} \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} - \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} \\varepsilon^{3}_{3}\n"
},
{
"math_id": 30,
"text": "\n\\begin{array}{llll}\nC^J_{v_1} = \\varepsilon^{2}_{1} \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} &\nC^J_{v_2} = -\\varepsilon^{1}_{1} \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} &\nC^J_{v_3} = \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} \\varepsilon^{4}_{3} &\nC^J_{v_4} = - \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} \\varepsilon^{3}_{3} - \\varepsilon^{1}_{3} \\varepsilon^{2}_{1} \\varepsilon^{3}_{2} \\\\\nC^{S_1}_{v_1} =\n\\varepsilon^{2}_{2} \\varepsilon^{3}_{3} - \\varepsilon^{2}_{2} \\varepsilon^{4}_{3} + \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} &\nC^{S_1}_{v_2} = \\varepsilon^{1}_{3} \\varepsilon^{3}_{2} - \\varepsilon^{3}_{2} \\varepsilon^{4}_{3} & \nC^{S_1}_{v_3} = - \\varepsilon^{1}_{3} \\varepsilon^{2}_{2} + \\varepsilon^{2}_{2} \\varepsilon^{4}_{3}\n&\nC^{S_1}_{v_4} = \\varepsilon^{1}_{3} \\varepsilon^{2}_{2} - \\varepsilon^{1}_{3} \\varepsilon^{3}_{2} - \\varepsilon^{2}_{2} \\varepsilon^{3}_{3} \\\\\nC^{S_2}_{v_1} = \n- \\varepsilon^{2}_{1} \\varepsilon^{3}_{3} + \\varepsilon^{2}_{1} \\varepsilon^{4}_{3} &\nC^{S_2}_{v_2} = \\varepsilon^{1}_{1} \\varepsilon^{3}_{3} - \\varepsilon^{1}_{1} \\varepsilon^{4}_{3} &\nC^{S_2}_{v_3} = \\varepsilon^{1}_{B} \\varepsilon^{4}_{3} + \\varepsilon^{1}_{3} \\varepsilon^{2}_{1} - \\varepsilon^{2}_{1} \\varepsilon^{4}_{3} &\nC^{S_2}_{v_4} = - \\varepsilon^{1}_{1} \\varepsilon^{3}_{3} - \\varepsilon^{1}_{3} \\varepsilon^{2}_{1} + \\varepsilon^{2}_{1} \\varepsilon^{3}_{3} \\\\\nC^{S_3}_{v_1} = \\varepsilon^{2}_{1} \\varepsilon^{3}_{2} &\nC^{S_3}_{v_2} = - \\varepsilon^{1}_{1} \\varepsilon^{3}_{2} & \nC^{S_3}_{v_3} = \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} &\nC^{S_3}_{v_4} = - \\varepsilon^{1}_{1} \\varepsilon^{2}_{2} + \\varepsilon^{1}_{1} \\varepsilon^{3}_{2} - \\varepsilon^{2}_{1} \\varepsilon^{3}_{2}\n\\end{array}\n"
},
{
"math_id": 31,
"text": " v_1 = v_2 + v_3 "
},
{
"math_id": 32,
"text": " \\alpha = \\frac{v_2}{v_1} \\quad 1-\\alpha = \\frac{v_3}{v_1} "
},
{
"math_id": 33,
"text": "\nd = \\varepsilon^2_s \\alpha + \\varepsilon^3_s (1-\\alpha) -\\varepsilon^1_s \n"
},
{
"math_id": 34,
"text": "\n\\begin{array}{lll}\nC^{J_1}_{e_1} = \\varepsilon^{3}_s (1-\\alpha) + \\varepsilon^2_s \\alpha \\\\\nC^{J_1}_{e_1} = -\\varepsilon^1_s \\alpha \\\\\nC^{J_1}_{e_1} = -\\varepsilon^1_s (1-\\alpha) + \\varepsilon^2_s \\alpha\n\\end{array}\n"
}
]
| https://en.wikipedia.org/wiki?curid=74387063 |
74389181 | Haldane–Shastry model | Statistical lattice model with long-range interactions
In statistical physics, the Haldane–Shastry model is a spin chain model, defined on a one-dimensional, periodic lattice. Unlike the prototypical Heisenberg spin chain, which only includes interactions between neighboring sites of the lattice, the Haldane–Shastry model has "long-range" interactions, that is, interactions between any pair of sites, regardless of the distance between them.
The model is named after and was defined independently by Duncan Haldane and B. Sriram Shastry. It is an exactly solvable model, and was exactly solved by Shastry.
Formulation.
For a chain with formula_0 spin 1/2 sites, the quantum phase space is described by the Hilbert space formula_1. The Haldane–Shastry model is described by the Hamiltonian
formula_2
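For small formula_0, the Hamiltonian can be assembled explicitly as a matrix on formula_1 and studied numerically. The sketch below (illustrative only; formula_0 and the coupling constant are arbitrary choices) builds the matrix and checks that it commutes with the total spin-z operator, as expected for an SU(2)-invariant spin model:
```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    mats = [np.eye(2, dtype=complex)] * N
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

N, J0 = 4, 1.0
dim = 2 ** N
H = np.zeros((dim, dim), dtype=complex)
for m in range(N):
    for n in range(m + 1, N):
        coupling = J0 / np.sin((n - m) * np.pi / N) ** 2
        dot = sum(site_op(s, m, N) @ site_op(s, n, N) for s in (sx, sy, sz))
        H += 0.5 * coupling * dot

Sz_total = sum(site_op(sz, j, N) for j in range(N))
print("[H, Sz_total] = 0:", np.allclose(H @ Sz_total, Sz_total @ H))
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```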
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\mathcal{H} = (\\mathbb{C}^2)^{\\otimes N}"
},
{
"math_id": 2,
"text": "H_{\\mathrm{HS}} = \\frac{1}{2}\\sum_{m<n}^N \\frac{J_0}{\\sin^2{(n-m)\\pi/N}}\\vec \\sigma_m \\cdot \\vec \\sigma_n."
}
]
| https://en.wikipedia.org/wiki?curid=74389181 |
7439 | Constructible number | Number constructible via compass and straightedge
In geometry and algebra, a real number formula_0 is constructible if and only if, given a line segment of unit length, a line segment of length formula_1 can be constructed with compass and straightedge in a finite number of steps. Equivalently, formula_0 is constructible if and only if there is a closed-form expression for formula_0 using only integers and the operations for addition, subtraction, multiplication, division, and square roots.
The geometric definition of constructible numbers motivates a corresponding definition of constructible points, which can again be described either geometrically or algebraically. A point is constructible if it can be produced as one of the points of a compass and straightedge construction (an endpoint of a line segment or crossing point of two lines or circles), starting from a given unit length segment. Alternatively and equivalently, taking the two endpoints of the given segment to be the points (0, 0) and (1, 0) of a Cartesian coordinate system, a point is constructible if and only if its Cartesian coordinates are both constructible numbers. Constructible numbers and points have also been called ruler and compass numbers and ruler and compass points, to distinguish them from numbers and points that may be constructed using other processes.
The set of constructible numbers forms a field: applying any of the four basic arithmetic operations to members of this set produces another constructible number. This field is a field extension of the rational numbers and in turn is contained in the field of algebraic numbers. It is the Euclidean closure of the rational numbers, the smallest field extension of the rationals that includes the square roots of all of its positive numbers.
The proof of the equivalence between the algebraic and geometric definitions of constructible numbers has the effect of transforming geometric questions about compass and straightedge constructions into algebra, including several famous problems from ancient Greek mathematics. The algebraic formulation of these questions led to proofs that their solutions are not constructible, after the geometric formulation of the same problems previously defied centuries of attack.
Geometric definitions.
Geometrically constructible points.
Let formula_2 and formula_3 be two given distinct points in the Euclidean plane, and define formula_4 to be the set of points that can be constructed with compass and straightedge starting with formula_2 and formula_3. Then the points of formula_4 are called constructible points. formula_2 and formula_3 are, by definition, elements of formula_4. To more precisely describe the remaining elements of formula_4, make the following two definitions: a "constructed line" is a line through two points of formula_4, and a "constructed circle" is a circle whose center is a point of formula_4 and whose radius equals the distance between two points of formula_4.
Then, the points of formula_4, besides formula_2 and formula_3, are the points of intersection of two constructed lines, of a constructed line and a constructed circle, or of two constructed circles.
As an example, the midpoint of constructed segment formula_5 is a constructible point. One construction for it is to construct two circles with formula_5 as radius, and the line through the two crossing points of these two circles. Then the midpoint of segment formula_5 is the point where this segment is crossed by the constructed line.
Geometrically constructible numbers.
The starting information for the geometric formulation can be used to define a Cartesian coordinate system in which the point formula_2 is associated to the origin having coordinates formula_6 and in which the point formula_3 is associated with the coordinates formula_7. The points of formula_4 may now be used to link the geometry and algebra by defining a constructible number to be a coordinate of a constructible point.
Equivalent definitions are that a constructible number is the formula_8-coordinate of a constructible point formula_9 or the length of a constructible line segment. In one direction of this equivalence, if a constructible point has coordinates formula_10, then the point formula_9 can be constructed as its perpendicular projection onto the formula_8-axis, and the segment from the origin to this point has length formula_8. In the reverse direction, if formula_8 is the length of a constructible line segment, then intersecting the formula_8-axis with a circle centered at formula_2 with radius formula_8 gives the point formula_9. It follows from this equivalence that every point whose Cartesian coordinates are geometrically constructible numbers is itself a geometrically constructible point. For, when formula_8 and formula_11 are geometrically constructible numbers, point formula_10 can be constructed as the intersection of lines through formula_9 and formula_12, perpendicular to the coordinate axes.
Algebraic definitions.
Algebraically constructible numbers.
The algebraically constructible real numbers are the subset of the real numbers that can be described by formulas that combine integers using the operations of addition, subtraction, multiplication, multiplicative inverse, and square roots of positive numbers. Even more simply, at the expense of making these formulas longer, the integers in these formulas can be restricted to be only 0 and 1. For instance, the square root of 2 is constructible, because it can be described by the formulas formula_13 or formula_14.
Analogously, the algebraically constructible complex numbers are the subset of complex numbers that have formulas of the same type, using a more general version of the square root that is not restricted to positive numbers but can instead take arbitrary complex numbers as its argument, and produces the principal square root of its argument. Alternatively, the same system of complex numbers may be defined as the complex numbers whose real and imaginary parts are both constructible real numbers. For instance, the complex number formula_15 has the formulas formula_16 or formula_17, and its real and imaginary parts are the constructible numbers 0 and 1 respectively.
These two definitions of the constructible complex numbers are equivalent. In one direction, if formula_18 is a complex number whose real part formula_8 and imaginary part formula_11 are both constructible real numbers, then replacing formula_8 and formula_11 by their formulas within the larger formula formula_19 produces a formula for formula_20 as a complex number. In the other direction, any formula for an algebraically constructible complex number can be transformed into formulas for its real and imaginary parts, by recursively expanding each operation in the formula into operations on the real and imaginary parts of its arguments, using the expansions formula_21 for addition and subtraction, formula_22 for multiplication, formula_23 for the multiplicative inverse, and formula_24 for the square root, where formula_25 and formula_26.
Algebraically constructible points.
The algebraically constructible points may be defined as the points whose two real Cartesian coordinates are both algebraically constructible real numbers. Alternatively, they may be defined as the points in the complex plane given by algebraically constructible complex numbers. By the equivalence between the two definitions for algebraically constructible complex numbers, these two definitions of algebraically constructible points are also equivalent.
Equivalence of algebraic and geometric definitions.
If formula_27 and formula_28 are the non-zero lengths of geometrically constructed segments then elementary compass and straightedge constructions can be used to obtain constructed segments of lengths formula_29, formula_30, formula_31, and formula_32. The latter two can be done with a construction based on the intercept theorem. A slightly less elementary construction using these tools is based on the geometric mean theorem and will construct a segment of length formula_33 from a constructed segment of length formula_27. It follows that every algebraically constructible number is geometrically constructible, by using these techniques to translate a formula for the number into a construction for the number.
In the other direction, a set of geometric objects may be specified by algebraically constructible real numbers: coordinates for points, slope and formula_11-intercept for lines, and center and radius for circles. It is possible (but tedious) to develop formulas in terms of these values, using only arithmetic and square roots, for each additional object that might be added in a single step of a compass-and-straightedge construction. It follows from these formulas that every geometrically constructible number is algebraically constructible.
Algebraic properties.
The definition of algebraically constructible numbers includes the sum, difference, product, and multiplicative inverse of any of these numbers, the same operations that define a field in abstract algebra. Thus, the constructible numbers (defined in any of the above ways) form a field. More specifically, the constructible real numbers form a Euclidean field, an ordered field containing a square root of each of its positive elements. Examining the properties of this field and its subfields leads to necessary conditions on a number to be constructible, that can be used to show that specific numbers arising in classical geometric construction problems are not constructible.
It is convenient to consider, in place of the whole field of constructible numbers, the subfield formula_34 generated by any given constructible number formula_35, and to use the algebraic construction of formula_35 to decompose this field. If formula_35 is a constructible real number, then the values occurring within a formula constructing it can be used to produce a finite sequence of real numbers formula_36 such that, for each formula_15, formula_37 is an extension of formula_38 of degree 2. Using slightly different terminology, a real number is constructible if and only if it lies in a field at the top of a finite tower of real quadratic extensions,
formula_39
starting with the rational field formula_40 where formula_35 is in formula_41 and for all formula_42, formula_43. It follows from this decomposition that the degree of the field extension formula_44 is formula_45, where formula_0 counts the number of quadratic extension steps.
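This power-of-two degree condition is necessary, though not by itself sufficient, and it can be checked on concrete numbers with a computer algebra system. The following sketch assumes the sympy library is available; the specific numbers are only examples.

```python
from sympy import Symbol, sqrt, Rational, minimal_polynomial, degree

x = Symbol('x')

def degree_over_Q(gamma):
    """Degree of the extension Q(gamma):Q, read off from the minimal polynomial."""
    return degree(minimal_polynomial(gamma, x), x)

print(degree_over_Q(sqrt(2)))              # 2: a power of two, as required
print(degree_over_Q(sqrt(2 + sqrt(2))))    # 4: still a power of two
print(degree_over_Q(2 ** Rational(1, 3)))  # 3: not a power of two, so not constructible
```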
Analogously to the real case, a complex number is constructible if and only if it lies in a field at the top of a finite tower of complex quadratic extensions. More precisely, formula_35 is constructible if and only if there exists a tower of fields
formula_46
where formula_35 is in formula_47, and for all formula_48, formula_49. The difference between this characterization and that of the real constructible numbers is only that the fields in this tower are not restricted to being real. Consequently, if a complex number formula_35 is constructible, then the above characterization implies that formula_44 is a power of two. However, this condition is not sufficient: there exist field extensions whose degree is a power of two, but which cannot be factored into a sequence of quadratic extensions.
To obtain a sufficient condition for constructibility, one must instead consider the splitting field formula_50 obtained by adjoining all roots of the minimal polynomial of formula_35. If the degree of this extension is a power of two, then its Galois group formula_51 is a 2-group, and thus admits a descending sequence of subgroups
formula_52
with formula_53 for formula_54 By the fundamental theorem of Galois theory, there is a corresponding tower of quadratic extensions
formula_55
whose topmost field contains formula_56 and from this it follows that formula_35 is constructible.
The fields that can be generated from towers of quadratic extensions of formula_40 are called "<dfn >iterated quadratic extensions</dfn>" of formula_40. The fields of real and complex constructible numbers are the unions of all real or complex iterated quadratic extensions of formula_40.
Trigonometric numbers.
Trigonometric numbers are the cosines or sines of angles that are rational multiples of formula_57. These numbers are always algebraic, but they may not be constructible. The cosine or sine of the angle formula_58 is constructible only for certain special numbers formula_59: namely, those formula_59 that are the product of a power of two and any number of distinct Fermat primes.
Thus, for example, formula_60 is constructible because 15 is the product of the Fermat primes 3 and 5; but formula_61 is not constructible (not being the product of distinct Fermat primes) and neither is formula_62 (being a non-Fermat prime).
Impossible constructions.
The ancient Greeks thought that certain problems of straightedge and compass construction they could not solve were simply obstinate, not unsolvable. However, the non-constructibility of certain numbers proves that these constructions are logically impossible to perform. (The problems themselves, however, are solvable using methods that go beyond the constraint of working only with straightedge and compass, and the Greeks knew how to solve them in this way. One such example is Archimedes' Neusis construction solution of the problem of Angle trisection.)
In particular, the algebraic formulation of constructible numbers leads to a proof of the impossibility of the following construction problems:
The problem of doubling the unit square is solved by the construction of another square on the diagonal of the first one, with side length formula_13 and area formula_63. Analogously, the problem of doubling the cube asks for the construction of the length formula_64 of the side of a cube with volume formula_63. It is not constructible, because the minimal polynomial of this length, formula_65, has degree 3 over formula_66. As a cubic polynomial whose only real root is irrational, this polynomial must be irreducible, because if it had a quadratic real root then the quadratic conjugate would provide a second real root.
In this problem, from a given angle formula_67, one should construct an angle formula_68. Algebraically, angles can be represented by their trigonometric functions, such as their sines or cosines, which give the Cartesian coordinates of the endpoint of a line segment forming the given angle with the initial segment. Thus, an angle formula_67 is constructible when formula_69 is a constructible number, and the problem of trisecting the angle can be formulated as one of constructing formula_70. For example, the angle formula_71 of an equilateral triangle can be constructed by compass and straightedge, with formula_72. However, its trisection formula_73 cannot be constructed, because formula_74 has minimal polynomial formula_75 of degree 3 over formula_66. Because this specific instance of the trisection problem cannot be solved by compass and straightedge, the general problem also cannot be solved.
A square with area formula_57, the same area as a unit circle, would have side length formula_76, a transcendental number. Therefore, this square and its side length are not constructible, because it is not algebraic over formula_66.
If a regular formula_59-gon is constructed with its center at the origin, the angles between the segments from the center to consecutive vertices are formula_58. The polygon can be constructed only when the cosine of this angle is a trigonometric number. Thus, for instance, a 15-gon is constructible, but the regular heptagon is not constructible, because 7 is prime but not a Fermat prime. For a more direct proof of its non-constructibility, represent the vertices of a regular heptagon as the complex roots of the polynomial formula_77. Removing the factor formula_78, dividing by formula_79, and substituting formula_80 gives the simpler polynomial formula_81, an irreducible cubic with three real roots, each two times the real part of a complex-number vertex. Its roots are not constructible, so the heptagon is also not constructible.
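The algebraic reduction for the heptagon can be verified symbolically. A brief sketch using sympy (an assumption about available tooling, not part of the original argument):

```python
from sympy import symbols, cancel, expand, simplify

x, y = symbols('x y')

p = cancel((x**7 - 1) / (x - 1)) / x**3   # remove the factor x - 1, then divide by x^3
q = (y**3 - 3*y) + (y**2 - 2) + y + 1     # rewrite x^k + x^(-k) in terms of y = x + 1/x
print(expand(q))                          # y**3 + y**2 - 2*y - 1
print(simplify(q.subs(y, x + 1/x) - p))   # 0, confirming the substitution
```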
If two points and a circular mirror are given, where on the circle does one of the given points see the reflected image of the other? Geometrically, the lines from each given point to the point of reflection meet the circle at equal angles and in equal-length chords. However, it is impossible to construct a point of reflection using a compass and straightedge. In particular, for a unit circle with the two points formula_82 and formula_83 inside it, the solution has coordinates forming roots of an irreducible degree-four polynomial formula_84. Although its degree is a power of two, the splitting field of this polynomial has degree divisible by three, so it does not come from an iterated quadratic extension and Alhazen's problem has no compass and straightedge solution.
History.
The birth of the concept of constructible numbers is inextricably linked with the history of the three impossible compass and straightedge constructions: doubling the cube, trisecting an angle, and squaring the circle. The restriction of using only compass and straightedge in geometric constructions is often credited to Plato due to a passage in Plutarch. According to Plutarch, Plato gave the duplication of the cube (Delian) problem to Eudoxus and Archytas and Menaechmus, who solved the problem using mechanical means, earning a rebuke from Plato for not solving the problem using pure geometry. However, this attribution is challenged, due, in part, to the existence of another version of the story (attributed to Eratosthenes by Eutocius of Ascalon) that says that all three found solutions but they were too abstract to be of practical value. Proclus, citing Eudemus of Rhodes, credited Oenopides (c. 450 BCE) with two ruler and compass constructions, leading some authors to hypothesize that Oenopides originated the restriction. The restriction to compass and straightedge is essential to the impossibility of the classic construction problems. Angle trisection, for instance, can be done in many ways, several known to the ancient Greeks. The Quadratrix of Hippias of Elis, the conics of Menaechmus, or the marked straightedge (neusis) construction of Archimedes have all been used, as has a more modern approach via paper folding.
Although not one of the classic three construction problems, the problem of constructing regular polygons with straightedge and compass is often treated alongside them. The Greeks knew how to construct regular formula_59-gons with formula_85 (for any integer formula_86), 3, 5, or the product of any two or three of these numbers, but other regular formula_59-gons eluded them. In 1796 Carl Friedrich Gauss, then an eighteen-year-old student, announced in a newspaper that he had constructed a regular 17-gon with straightedge and compass. Gauss's treatment was algebraic rather than geometric; in fact, he did not actually construct the polygon, but rather showed that the cosine of a central angle was a constructible number. The argument was generalized in his 1801 book "Disquisitiones Arithmeticae" giving the sufficient condition for the construction of a regular formula_59-gon. Gauss claimed, but did not prove, that the condition was also necessary and several authors, notably Felix Klein, attributed this part of the proof to him as well. Alhazen's problem is also not one of the classic three problems, but despite being named after Ibn al-Haytham (Alhazen), a medieval Islamic mathematician, it already appears in Ptolemy's work on optics from the second century.
Pierre Wantzel proved algebraically that the problems of doubling the cube and trisecting the angle are impossible to solve using only compass and straightedge. In the same paper he also solved the problem of determining which regular polygons are constructible:
a regular polygon is constructible if and only if the number of its sides is the product of a power of two and any number of distinct Fermat primes (i.e., the sufficient conditions given by Gauss are also necessary). An attempted proof of the impossibility of squaring the circle was given by James Gregory in "" (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of π. It was not until 1882 that Ferdinand von Lindemann rigorously proved its impossibility, by extending the work of Charles Hermite and proving that π is a transcendental number. Alhazen's problem was not proved impossible to solve by compass and straightedge until the work of Jack Elkin.
The study of constructible numbers, per se, was initiated by René Descartes in La Géométrie, an appendix to his book "Discourse on the Method" published in 1637. Descartes associated numbers to geometrical line segments in order to display the power of his philosophical method by solving an ancient straightedge and compass construction problem put forth by Pappus.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "|r|"
},
{
"math_id": 2,
"text": "O"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "OA"
},
{
"math_id": 6,
"text": "(0,0)"
},
{
"math_id": 7,
"text": "(1, 0)"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "(x,0)"
},
{
"math_id": 10,
"text": "(x,y)"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "(0,y)"
},
{
"math_id": 13,
"text": "\\sqrt2"
},
{
"math_id": 14,
"text": "\\sqrt{1+1}"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "\\sqrt{-1}"
},
{
"math_id": 17,
"text": "\\sqrt{0-1}"
},
{
"math_id": 18,
"text": "q=x+iy"
},
{
"math_id": 19,
"text": "x+y\\sqrt{-1}"
},
{
"math_id": 20,
"text": "q"
},
{
"math_id": 21,
"text": "(a+ib)\\pm (c+id)=(a \\pm c)+i(b \\pm d)"
},
{
"math_id": 22,
"text": "(a+ib)(c+id)=(ac-bd) + i(ad+bc)"
},
{
"math_id": 23,
"text": "\\frac{1}{a+ib}=\\frac{a}{a^2+b^2} + i \\frac{-b}{a^2+b^2}"
},
{
"math_id": 24,
"text": "\\sqrt{a+ib} = \\frac{(a+r)\\sqrt{r}}{s} + i\\frac{b\\sqrt{r}}{s}"
},
{
"math_id": 25,
"text": "r=\\sqrt{a^2+b^2{}_{\\!}}"
},
{
"math_id": 26,
"text": "s=\\sqrt{(a+r)^2+b^2}"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "b"
},
{
"math_id": 29,
"text": "a+b"
},
{
"math_id": 30,
"text": "|a-b|"
},
{
"math_id": 31,
"text": "ab"
},
{
"math_id": 32,
"text": "a/b"
},
{
"math_id": 33,
"text": "\\sqrt{a}"
},
{
"math_id": 34,
"text": "\\mathbb{Q}(\\gamma)"
},
{
"math_id": 35,
"text": "\\gamma"
},
{
"math_id": 36,
"text": "\\alpha_1,\\dots, \\alpha_n=\\gamma"
},
{
"math_id": 37,
"text": "\\mathbb{Q}(\\alpha_1,\\dots,\\alpha_i)"
},
{
"math_id": 38,
"text": "\\mathbb{Q}(\\alpha_1,\\dots,\\alpha_{i-1})"
},
{
"math_id": 39,
"text": "\\mathbb{Q} = K_0 \\subseteq K_1 \\subseteq \\dots \\subseteq K_n,"
},
{
"math_id": 40,
"text": "\\mathbb{Q}"
},
{
"math_id": 41,
"text": "K_n"
},
{
"math_id": 42,
"text": "0< j\\le n"
},
{
"math_id": 43,
"text": "[K_j:K_{j-1}]=2"
},
{
"math_id": 44,
"text": "[\\mathbb{Q}(\\gamma):\\mathbb{Q}]"
},
{
"math_id": 45,
"text": "2^r"
},
{
"math_id": 46,
"text": "\\mathbb{Q} = F_0 \\subseteq F_1 \\subseteq \\dots \\subseteq F_n,"
},
{
"math_id": 47,
"text": "F_n"
},
{
"math_id": 48,
"text": "0<j\\le n"
},
{
"math_id": 49,
"text": "[F_j:F_{j-1}]= 2"
},
{
"math_id": 50,
"text": "K=\\mathbb{Q}(\\gamma,\\gamma',\\gamma'',\\dots)"
},
{
"math_id": 51,
"text": "G=\\mathrm{Gal}(K/\\mathbb{Q})"
},
{
"math_id": 52,
"text": "G = G_n \\supseteq G_{n-1} \\supseteq \\cdots \\supseteq G_0 = 1,"
},
{
"math_id": 53,
"text": "|G_k| = 2^k"
},
{
"math_id": 54,
"text": "0\\leq k \\leq n."
},
{
"math_id": 55,
"text": "\\mathbb{Q} = F_0 \\subseteq F_1 \\subseteq \\dots \\subseteq F_n = K,"
},
{
"math_id": 56,
"text": "\\gamma,"
},
{
"math_id": 57,
"text": "\\pi"
},
{
"math_id": 58,
"text": "2\\pi/n"
},
{
"math_id": 59,
"text": "n"
},
{
"math_id": 60,
"text": "\\cos(\\pi/15)"
},
{
"math_id": 61,
"text": "\\cos(\\pi/9)"
},
{
"math_id": 62,
"text": "\\cos(\\pi/7)"
},
{
"math_id": 63,
"text": "2"
},
{
"math_id": 64,
"text": "\\sqrt[3]{2}"
},
{
"math_id": 65,
"text": "x^3-2"
},
{
"math_id": 66,
"text": "\\Q"
},
{
"math_id": 67,
"text": "\\theta"
},
{
"math_id": 68,
"text": "\\theta/3"
},
{
"math_id": 69,
"text": "x=\\cos\\theta"
},
{
"math_id": 70,
"text": "\\cos(\\tfrac{1}{3}\\arccos x)"
},
{
"math_id": 71,
"text": "\\theta=\\pi/3=60^\\circ"
},
{
"math_id": 72,
"text": "x=\\cos\\theta=\\tfrac12"
},
{
"math_id": 73,
"text": "\\theta/3=\\pi/9=20^\\circ"
},
{
"math_id": 74,
"text": "\\cos\\pi/9"
},
{
"math_id": 75,
"text": "8x^3-6x-1"
},
{
"math_id": 76,
"text": "\\sqrt\\pi"
},
{
"math_id": 77,
"text": "x^7-1"
},
{
"math_id": 78,
"text": "x-1"
},
{
"math_id": 79,
"text": "x^3"
},
{
"math_id": 80,
"text": "y=x+1/x"
},
{
"math_id": 81,
"text": "y^3+y^2-2y-1"
},
{
"math_id": 82,
"text": "(\\tfrac16,\\tfrac16)"
},
{
"math_id": 83,
"text": "(-\\tfrac12,\\tfrac12)"
},
{
"math_id": 84,
"text": "x^4-2x^3+4x^2+2x-1"
},
{
"math_id": 85,
"text": "n=2^h"
},
{
"math_id": 86,
"text": "h\\ge 2"
}
]
| https://en.wikipedia.org/wiki?curid=7439 |
74390 | Decay product | The remaining nuclide left over from radioactive decay
In nuclear physics, a decay product (also known as a daughter product, daughter isotope, radio-daughter, or daughter nuclide) is the remaining nuclide left over from radioactive decay. Radioactive decay often proceeds via a sequence of steps (decay chain). For example, 238U decays to 234Th which decays to 234mPa which decays, and so on, to 206Pb (which is stable):
formula_0
In this example:
These might also be referred to as the daughter products of 238U.
Decay products are important in understanding radioactive decay and the management of radioactive waste.
For elements above lead in atomic number, the decay chain typically ends with an isotope of lead or bismuth. Bismuth itself decays to thallium, but the decay is so slow as to be practically negligible.
In many cases, individual members of the decay chain are as radioactive as the parent, but far smaller in volume/mass. Thus, although uranium is not dangerously radioactive when pure, some pieces of naturally occurring pitchblende are quite dangerous owing to their radium-226 content, which is soluble and not a ceramic like the parent. Similarly, thorium gas mantles are very slightly radioactive when new, but become more radioactive after only a few months of storage as the daughters of 232Th build up.
Although it cannot be predicted whether any given atom of a radioactive substance will decay at any given time, the decay products of a radioactive substance are extremely predictable. Because of this, decay products are important to scientists in many fields who need to know the quantity or type of the parent product. Such studies are done to measure pollution levels (in and around nuclear facilities) and for other matters.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\ce{^{238}U ->} \\overbrace{ \\underbrace\\ce{^{234}Th}_\\ce{daughter~of~^{238}U} \\ce{->} \\underbrace\\ce{^{234\\!m}Pa}_\\ce{granddaughter~of~^{238}U} \\ce{-> \\cdots -> {^{206}Pb} }}^\\ce{decay~products~of~^{238}U}\n"
}
]
| https://en.wikipedia.org/wiki?curid=74390 |
7439014 | Q-gamma function | Function in q-analog theory
In q-analog theory, the formula_0-gamma function, or basic gamma function, is a generalization of the ordinary gamma function closely related to the double gamma function. It was introduced by . It is given by
formula_1
when formula_2, and
formula_3
if formula_4. Here formula_5 is the infinite q-Pochhammer symbol. The formula_0-gamma function satisfies the functional equation
formula_6
In addition, the formula_0-gamma function satisfies the q-analog of the Bohr–Mollerup theorem, which was found by Richard Askey ().
For non-negative integers "n",
formula_7
where formula_8 is the q-factorial function. Thus the formula_0-gamma function can be considered as an extension of the q-factorial function to the real numbers.
The relation to the ordinary gamma function is made explicit in the limit
formula_9
There is a simple proof of this limit by Gosper. See the appendix of (Andrews (1986)).
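This limit is easy to observe numerically by truncating the infinite product in the definition. The sketch below is only an illustration; the truncation length is a rough heuristic rather than a rigorous error bound.

```python
import math

def q_gamma(x, q):
    """Truncated product formula for the q-gamma function, valid for 0 < q < 1."""
    terms = int(60 / (1 - q)) + 10   # heuristic: enough factors that q**n is negligible
    prod = 1.0
    for n in range(terms):
        prod *= (1 - q ** (n + 1)) / (1 - q ** (n + x))
    return (1 - q) ** (1 - x) * prod

for q in (0.5, 0.9, 0.99, 0.999):
    print(q, q_gamma(3.7, q))
print("Gamma(3.7) =", math.gamma(3.7))   # the values approach this as q -> 1
```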
Transformation properties.
The formula_0-gamma function satisfies the q-analog of the Gauss multiplication formula ():
formula_10
Integral representation.
The formula_0-gamma function has the following integral representation (Ismail (1981)):
formula_11
Stirling formula.
Moak obtained the following q-analogue of the Stirling formula (see ):
formula_12
formula_13
formula_14
where formula_15, formula_16 denotes the Heaviside step function, formula_17 stands for the Bernoulli number, formula_18 is the dilogarithm, and formula_19 is a polynomial of degree formula_20 satisfying
formula_21
Raabe-type formulas.
A q-analogue of the Raabe formula, due to I. Mező, exists at least when the q-gamma function is used with formula_4. With this restriction
formula_22
El Bachraoui considered the case formula_23 and proved that
formula_24
Special values.
The following special values are known.
formula_25
formula_26
formula_27
formula_28
These are the analogues of the classical formula formula_29.
Moreover, the following analogues of the familiar identity formula_30 hold true:
formula_31
formula_32
formula_33
Matrix Version.
Let formula_34 be a complex square matrix and Positive-definite matrix. Then a q-gamma matrix function can be defined by q-integral:
formula_35
where formula_36 is the q-exponential function.
Other q-gamma functions.
For other q-gamma functions, see Yamasaki 2006.
Numerical computation.
An iterative algorithm to compute the q-gamma function was proposed by Gabutti and Allasia.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "\\Gamma_q(x) = (1-q)^{1-x}\\prod_{n=0}^\\infty \\frac{1-q^{n+1}}{1-q^{n+x}}=(1-q)^{1-x}\\,\\frac{(q;q)_\\infty}{(q^x;q)_\\infty}"
},
{
"math_id": 2,
"text": "|q|<1"
},
{
"math_id": 3,
"text": " \\Gamma_q(x)=\\frac{(q^{-1};q^{-1})_\\infty}{(q^{-x};q^{-1})_\\infty}(q-1)^{1-x}q^{\\binom{x}{2}} "
},
{
"math_id": 4,
"text": "|q|>1"
},
{
"math_id": 5,
"text": "(\\cdot;\\cdot)_\\infty"
},
{
"math_id": 6,
"text": "\\Gamma_q(x+1) = \\frac{1-q^{x}}{1-q}\\Gamma_q(x)=[x]_q\\Gamma_q(x)"
},
{
"math_id": 7,
"text": "\\Gamma_q(n)=[n-1]_q!"
},
{
"math_id": 8,
"text": "[\\cdot]_q"
},
{
"math_id": 9,
"text": "\\lim_{q \\to 1\\pm} \\Gamma_q(x) = \\Gamma(x)."
},
{
"math_id": 10,
"text": "\\Gamma_q(nx)\\Gamma_r(1/n)\\Gamma_r(2/n)\\cdots\\Gamma_r((n-1)/n)=\\left(\\frac{1-q^n}{1-q}\\right)^{nx-1}\\Gamma_r(x)\\Gamma_r(x+1/n)\\cdots\\Gamma_r(x+(n-1)/n), \\ r=q^n."
},
{
"math_id": 11,
"text": "\\frac{1}{\\Gamma_q(z)}=\\frac{\\sin(\\pi z)}{\\pi}\\int_0^\\infty\\frac{t^{-z}\\mathrm{d}t}{(-t(1-q);q)_{\\infty}}."
},
{
"math_id": 12,
"text": "\\log\\Gamma_q(x)\\sim(x-1/2)\\log[x]_q+\\frac{\\mathrm{Li}_2(1-q^x)}{\\log q}+C_{\\hat{q}}+\\frac{1}{2}H(q-1)\\log q+\\sum_{k=1}^\\infty\n\\frac{B_{2k}}{(2k)!}\\left(\\frac{\\log \\hat{q}}{\\hat{q}^x-1}\\right)^{2k-1}\\hat{q}^x p_{2k-3}(\\hat{q}^x), \\ x\\to\\infty,"
},
{
"math_id": 13,
"text": "\\hat{q}=\n \\left\\{\\begin{aligned}\n q \\quad \\mathrm{if} \\ &0<q\\leq1 \\\\\n 1/q \\quad \\mathrm{if} \\ &q\\geq1\n\\end{aligned}\\right\\},"
},
{
"math_id": 14,
"text": "C_q = \\frac{1}{2} \\log(2\\pi)+\\frac{1}{2}\\log\\left(\\frac{q-1}{\\log q}\\right)-\\frac{1}{24}\\log q+\\log\\sum_{m=-\\infty}^\\infty \\left(r^{m(6m+1)} - r^{(3m+1)(2m+1)}\\right),"
},
{
"math_id": 15,
"text": "r=\\exp(4\\pi^2/\\log q)"
},
{
"math_id": 16,
"text": "H"
},
{
"math_id": 17,
"text": "B_k"
},
{
"math_id": 18,
"text": "\\mathrm{Li}_2(z)"
},
{
"math_id": 19,
"text": "p_k"
},
{
"math_id": 20,
"text": "k"
},
{
"math_id": 21,
"text": " p_k(z)=z(1-z)p'_{k-1}(z)+(kz+1)p_{k-1}(z), p_0=p_{-1}=1, k=1,2,\\cdots."
},
{
"math_id": 22,
"text": " \\int_0^1\\log\\Gamma_q(x)dx=\\frac{\\zeta(2)}{\\log q}+\\log\\sqrt{\\frac{q-1}{\\sqrt[6]{q}}}+\\log(q^{-1};q^{-1})_\\infty \\quad(q>1). "
},
{
"math_id": 23,
"text": "0<q<1"
},
{
"math_id": 24,
"text": " \\int_0^1\\log\\Gamma_q(x)dx=\\frac{1}{2}\\log (1-q) - \\frac{\\zeta(2)}{\\log q}+\\log(q;q)_\\infty \\quad(0<q<1). "
},
{
"math_id": 25,
"text": "\\Gamma_{e^{-\\pi}}\\left(\\frac12\\right)=\\frac{e^{-7 \\pi /16} \\sqrt{e^\\pi-1}\\sqrt[4]{1+\\sqrt2}}{2^{15/16}\\pi^{3/4}} \\, \\Gamma \\left(\\frac{1}{4}\\right),"
},
{
"math_id": 26,
"text": "\\Gamma_{e^{-2\\pi}}\\left(\\frac12\\right)=\\frac{e^{-7 \\pi /8} \\sqrt{e^{2 \\pi}-1}}{2^{9/8} \\pi^{3/4}} \\, \\Gamma \\left(\\frac{1}{4}\\right),"
},
{
"math_id": 27,
"text": "\\Gamma_{e^{-4\\pi}}\\left(\\frac12\\right)=\\frac{e^{-7 \\pi /4} \\sqrt{e^{4 \\pi}-1}}{2^{7/4} \\pi^{3/4}} \\, \\Gamma \\left(\\frac{1}{4}\\right),"
},
{
"math_id": 28,
"text": "\\Gamma_{e^{-8\\pi}}\\left(\\frac12\\right)=\\frac{e^{-7 \\pi /2} \\sqrt{e^{8 \\pi}-1}}{2^{9/4} \\pi^{3/4} \\sqrt{1+\\sqrt2}} \\, \\Gamma \\left(\\frac{1}{4}\\right)."
},
{
"math_id": 29,
"text": "\\Gamma\\left(\\frac12\\right)=\\sqrt\\pi"
},
{
"math_id": 30,
"text": "\\Gamma\\left(\\frac14\\right)\\Gamma\\left(\\frac34\\right)=\\sqrt2\\pi"
},
{
"math_id": 31,
"text": "\\Gamma_{e^{-2\\pi}}\\left(\\frac14\\right)\\Gamma_{e^{-2\\pi}}\\left(\\frac34\\right)=\\frac{e^{-29 \\pi /16} \\left(e^{2 \\pi }-1\\right)\\sqrt[4]{1+\\sqrt2}}{2^{33/16} \\pi^{3/2}} \\, \\Gamma \\left(\\frac{1}{4}\\right)^2,"
},
{
"math_id": 32,
"text": "\\Gamma_{e^{-4\\pi}}\\left(\\frac14\\right)\\Gamma_{e^{-4\\pi}}\\left(\\frac34\\right)=\\frac{e^{-29 \\pi /8} \\left(e^{4 \\pi }-1\\right)}{2^{23/8} \\pi ^{3/2}} \\, \\Gamma \\left(\\frac{1}{4}\\right)^2,"
},
{
"math_id": 33,
"text": "\\Gamma_{e^{-8\\pi}}\\left(\\frac14\\right)\\Gamma_{e^{-8\\pi}}\\left(\\frac34\\right)=\\frac{e^{-29 \\pi /4} \\left(e^{8 \\pi }-1\\right)}{16 \\pi ^{3/2} \\sqrt{1+\\sqrt2}} \\, \\Gamma \\left(\\frac{1}{4}\\right)^2."
},
{
"math_id": 34,
"text": "A"
},
{
"math_id": 35,
"text": "\\Gamma_q(A):=\\int_0^{\\frac{1}{1-q}}t^{A-I}E_q(-qt)\\mathrm{d}_q t "
},
{
"math_id": 36,
"text": "E_q"
}
]
| https://en.wikipedia.org/wiki?curid=7439014 |
74395167 | Soddy circles of a triangle | Geometric concept
In geometry, the Soddy circles of a triangle are two circles associated with any triangle in the plane. Their centers are the Soddy centers of the triangle. They are all named for Frederick Soddy, who rediscovered Descartes' theorem on the radii of mutually tangent quadruples of circles.
Any triangle has three externally tangent circles centered at its vertices. Two more circles, its Soddy circles, are tangent to the three circles centered at the vertices; their centers are called Soddy centers. The line through the Soddy centers is the Soddy line of the triangle. These circles are related to many other notable features of the triangle. They can be generalized to additional triples of tangent circles centered at the vertices in which one circle surrounds the other two.
Construction.
Let formula_0 be the three vertices of a triangle, and let formula_1 be the lengths of the opposite sides, and formula_2 be the semiperimeter. Then the three circles centered at formula_0 have radii formula_3, respectively.
By Descartes' theorem, two more circles, sometimes also called Soddy circles, are tangent to these three circles. The centers of these two tangent circles are the Soddy centers of the triangle.
Related features.
Each of the three circles centered at the vertices crosses two sides of the triangle at right angles, at one of the three "intouch points" of the triangle, where its incircle is tangent to the side. The two circles tangent to these three circles are separated by the incircle, one interior to it and one exterior. The Soddy centers lie at the common intersections of three hyperbolas, each having two triangle vertices as foci and passing through the third vertex.
The inner Soddy center is an equal detour point: the polyline connecting any two triangle vertices through the inner Soddy point is longer than the line segment connecting those vertices directly, by an amount that does not depend on which two vertices are chosen. By Descartes' theorem, the inner Soddy circle's curvature is formula_4, where formula_5 is the triangle's area, formula_6 is its circumradius, and formula_7 is its inradius. The outer Soddy circle has curvature formula_8. When this curvature is positive, the outer Soddy center is another equal detour point; otherwise the equal detour point is unique. When the outer Soddy circle has negative curvature, its center is the isoperimetric point of the triangle: the three triangles formed by this center and two vertices of the starting triangle all have the same perimeter. Triangles whose outer Soddy circle degenerates to a straight line with curvature zero have been called "Soddyian triangles". This happens when formula_9 and causes the curvature of the inner Soddy circle to be formula_10.
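These curvature formulas can be cross-checked numerically against Descartes' theorem. The sketch below uses the 3–4–5 triangle purely as an example:

```python
import math

def soddy_curvatures(a, b, c):
    """Inner and outer Soddy circle curvatures from Descartes' theorem."""
    s = (a + b + c) / 2
    k1, k2, k3 = 1 / (s - a), 1 / (s - b), 1 / (s - c)   # curvatures of the vertex circles
    root = math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 + 2 * root, k1 + k2 + k3 - 2 * root

a, b, c = 3, 4, 5
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))        # Heron's formula
R, r = a * b * c / (4 * area), area / s
print(soddy_curvatures(a, b, c))                                 # (3.8333..., -0.1666...)
print((4 * R + r + 2 * s) / area, (4 * R + r - 2 * s) / area)    # the same two values
```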
Excentric circles.
As well as the three externally tangent circles formed from a triangle, three more triples of tangent circles also have their centers at the triangle vertices, but with one of the circles surrounding the other two. Their triples of radii are formula_11 formula_12 or formula_13 where a negative radius indicates that the circle is tangent to the other two in its interior. Their points of tangency lie on the lines through the sides of the triangle, with each triple of circles having tangencies at the points where one of the three excircles is tangent to these lines. The pairs of tangent circles to these three triples of circles behave in analogous ways to the pair of inner and outer circles, and are also sometimes called "Soddy circles". Instead of lying on the intersection of the three hyperbolas, the centers of these circles lie where the opposite branch of one hyperbola with foci at the two vertices and passing through the third intersects the two ellipses with foci at other pairs of vertices and passing through the third.
Soddy lines.
The line through both Soddy centers, called the "Soddy line", also passes through the incenter of the triangle, which is the homothetic center of the two Soddy circles, and through the Gergonne point, the intersection of the three lines connecting the intouch points of the triangle to the opposite vertices. Four mutually tangent circles define six points of tangency, which can be grouped in three pairs of tangent points, each pair coming from two disjoint pairs of circles. The three lines through these three pairs of tangent points are concurrent, and the points of concurrency defined in this way from the inner and outer circles define two more triangle centers called the Eppstein points that also lie on the Soddy line.
The three additional pairs of excentric Soddy circles each are associated with a "Soddy line" through their centers. Each passes through the corresponding excenter of the triangle, which is the center of similitude for the two circles. Each Soddy line also passes through an analog of the Gergonne point and the Eppstein points. The four Soddy lines concur at the de Longchamps point, the reflection of the orthocenter of the triangle about the circumcenter.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A, B, C"
},
{
"math_id": 1,
"text": "a, b, c"
},
{
"math_id": 2,
"text": "s = \\tfrac12(a + b + c)"
},
{
"math_id": 3,
"text": "s-a, s-b, s-c"
},
{
"math_id": 4,
"text": "(4R + r + 2s) / \\Delta"
},
{
"math_id": 5,
"text": "\\Delta"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "(4R + r - 2s) / \\Delta"
},
{
"math_id": 9,
"text": "4R + r = 2s"
},
{
"math_id": 10,
"text": "4/r"
},
{
"math_id": 11,
"text": "(-s, s-c, s-b),"
},
{
"math_id": 12,
"text": "(s-c, -s, s-a),"
},
{
"math_id": 13,
"text": "(s-b, s-a, -s),"
}
]
| https://en.wikipedia.org/wiki?curid=74395167 |
74397287 | Gurjunene | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Gurjunene, also known as (-)-α-gurjunene, is a natural carbotricyclic sesquiterpene that is most commonly found in gurjun balsam, an essential oil compound extracted from plants of the genus "Dipterocarpus." The following reaction, which synthesizes gurjunene, can be catalyzed by alpha-gurjunene synthase:
(2"E",6"E")-farnesyl diphosphate formula_0 (–)-α-gurjunene + diphosphate
Related compounds.
Several related compounds are known, including β-gurjunene and γ-gurjunene.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=74397287 |
74408247 | Reversible Hill equation | Mathematical concept
The classic Monod–Wyman–Changeux model (MWC) for cooperativity is generally published in an irreversible form. That is, there are no product terms in the rate equation which can be problematic for those wishing to build metabolic models since there are no product inhibition terms. However, a series of publications by Popova and Sel'kov derived the MWC rate equation for the reversible, multi-substrate, multi-product reaction.
The same problem applies to the classic Hill equation which is almost always shown in an irreversible form. Hofmeyr and Cornish-Bowden first published the reversible form of the Hill equation. The equation has since been discussed elsewhere and the model has also been used in a number of kinetic models such as a model of Phosphofructokinase and Glycolytic Oscillations in the Pancreatic β-cells or a model of a glucose-xylose co-utilizing S. cerevisiae strain. The model has also been discussed in modern enzyme kinetics textbooks.
Derivation.
Consider the simpler case where there are two binding sites. See the scheme shown below. Each site is assumed to bind either molecule of substrate S or product P. The catalytic reaction is shown by the two reactions at the base of the scheme triangle, that is S to P and P to S. The model assumes the binding steps are always at equilibrium. The reaction rate is given by:
formula_0
Invoking the rapid-equilibrium assumption we can write the various complexes in terms of equilibrium constants to give:
formula_1
where formula_2. The formula_3 and formula_4 terms are the ratios of substrate and product to their respective half-saturation constants, namely formula_5 and formula_6.
Using the author's own notation, if an enzyme has formula_7 sites that can bind ligand, the form, in the general case, can be shown to be:
formula_8
The non-cooperative reversible Michaelis-Menten equation can be seen to emerge when we set the Hill coefficient to one.
If the enzyme is irreversible, the equation turns into the simple irreversible Michaelis-Menten equation. Setting the equilibrium constant to infinity recovers the simpler case in which the product acts only as an inhibitor.
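A direct implementation of the rate law is straightforward. The sketch below is a minimal illustration for the single-substrate, single-product case derived above; the parameter values are arbitrary and not a calibrated model.

```python
def reversible_hill(S, P, Vf, S_half, P_half, Keq, h):
    """Reversible Hill rate: v = Vf*sigma*(1-rho)*(sigma+pi)**(h-1) / (1+(sigma+pi)**h)."""
    sigma = S / S_half       # substrate scaled by its half-saturation constant
    pi = P / P_half          # product scaled by its half-saturation constant
    rho = (P / S) / Keq      # mass-action ratio divided by the equilibrium constant
    return Vf * sigma * (1 - rho) * (sigma + pi) ** (h - 1) / (1 + (sigma + pi) ** h)

# With h = 1 the expression reduces to a reversible Michaelis-Menten-like rate;
# rho < 1 gives a net forward rate, rho > 1 a net reverse rate.
print(reversible_hill(S=2.0, P=0.5, Vf=10.0, S_half=1.0, P_half=1.0, Keq=5.0, h=4))
print(reversible_hill(S=2.0, P=0.5, Vf=10.0, S_half=1.0, P_half=1.0, Keq=5.0, h=1))
```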
A comparison has been made between the MWC and reversible Hill equation.
A modification of the reversible Hill equation was published by Westermark et al where modifiers affected the catalytic properties instead. This variant was shown to provide a much better fit for describing the kinetics of muscle phosphofructokinase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " v=k_1\\left(E S+2 E S_2+E S P\\right)-k_2\\left(E P+2 E P_2+E S P\\right) "
},
{
"math_id": 1,
"text": " v =\\frac{V_f \\sigma(1-\\rho)(\\sigma+\\pi)}{1+(\\sigma+\\pi)^2} "
},
{
"math_id": 2,
"text": "\\rho=\\Gamma / K_{eq}"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "\n\\pi"
},
{
"math_id": 5,
"text": "\\sigma = S/S_{0.5}"
},
{
"math_id": 6,
"text": " \\pi = P/P_{0.5}"
},
{
"math_id": 7,
"text": " h "
},
{
"math_id": 8,
"text": " v = \\frac{V_f \\sigma(1-\\rho)(\\sigma+\\pi)^{h-1}}{1+(\\sigma+\\pi)^h} "
}
]
| https://en.wikipedia.org/wiki?curid=74408247 |
744165 | Chordal graph | Graph where all long cycles have a chord
In the mathematical area of graph theory, a chordal graph is one in which all cycles of four or more vertices have a "chord", which is an edge that is not part of the cycle but connects two vertices of the cycle. Equivalently, every induced cycle in the graph should have exactly three vertices. The chordal graphs may also be characterized as the graphs that have perfect elimination orderings, as the graphs in which each minimal separator is a clique, and as the intersection graphs of subtrees of a tree. They are sometimes also called rigid circuit graphs or triangulated graphs: a chordal completion of a graph is typically called a triangulation of that graph.
Chordal graphs are a subset of the perfect graphs. They may be recognized in linear time, and several problems that are hard on other classes of graphs such as graph coloring may be solved in polynomial time when the input is chordal. The treewidth of an arbitrary graph may be characterized by the size of the cliques in the chordal graphs that contain it.
Perfect elimination and efficient recognition.
A "perfect elimination ordering" in a graph is an ordering of the vertices of the graph such that, for each vertex v, v and the neighbors of v that occur after v in the order form a clique. A graph is chordal if and only if it has a perfect elimination ordering.
A perfect elimination ordering of a chordal graph may be found efficiently using an algorithm known as lexicographic breadth-first search. This algorithm maintains a partition of the vertices of the graph into a sequence of sets; initially this sequence consists of a single set with all vertices. The algorithm repeatedly chooses a vertex v from the earliest set in the sequence that contains previously unchosen vertices, and splits each set S of the sequence into two smaller subsets, the first consisting of the neighbors of v in S and the second consisting of the non-neighbors. When this splitting process has been performed for all vertices, the sequence of sets has one vertex per set, in the reverse of a perfect elimination ordering.
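The partition-refinement procedure just described translates almost line for line into code. The following is a simple quadratic-time sketch (not the linear-time implementation), assuming the graph is given as a dictionary mapping each vertex to its set of neighbours:

```python
def lex_bfs(graph):
    """Lexicographic breadth-first search; reversing the returned order gives a
    perfect elimination ordering whenever the graph is chordal."""
    sequence = [set(graph)]          # sequence of sets, initially all vertices
    order = []
    while sequence:
        first = sequence[0]
        v = first.pop()              # any unchosen vertex from the earliest set
        if not first:
            sequence.pop(0)
        order.append(v)
        refined = []
        for s in sequence:           # split each set: neighbours of v come first
            neighbours, others = s & graph[v], s - graph[v]
            if neighbours:
                refined.append(neighbours)
            if others:
                refined.append(others)
        sequence = refined
    return order

g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}   # a small chordal graph
print(list(reversed(lex_bfs(g))))    # a perfect elimination ordering
```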
Since both this lexicographic breadth first search process and the process of testing whether an ordering is a perfect elimination ordering can be performed in linear time, it is possible to recognize chordal graphs in linear time. The graph sandwich problem on chordal graphs is NP-complete whereas the probe graph problem on chordal graphs has polynomial-time complexity.
The set of all perfect elimination orderings of a chordal graph can be modeled as the "basic words" of an antimatroid; this connection to antimatroids has been used as part of an algorithm for efficiently listing all perfect elimination orderings of a given chordal graph.
Maximal cliques and graph coloring.
Another application of perfect elimination orderings is finding a maximum clique of a chordal graph in polynomial-time, while the same problem for general graphs is NP-complete. More generally, a chordal graph can have only linearly many maximal cliques, while non-chordal graphs may have exponentially many. This implies that the class of chordal graphs has few cliques. To list all maximal cliques of a chordal graph, simply find a perfect elimination ordering, form a clique for each vertex v together with the neighbors of v that are later than v in the perfect elimination ordering, and test whether each of the resulting cliques is maximal.
The clique graphs of chordal graphs are the dually chordal graphs.
The largest maximal clique is a maximum clique, and, as chordal graphs are perfect, the size of this clique equals the chromatic number of the chordal graph. Chordal graphs are perfectly orderable: an optimal coloring may be obtained by applying a greedy coloring algorithm to the vertices in the reverse of a perfect elimination ordering.
The chromatic polynomial of a chordal graph is easy to compute. Find a perfect elimination ordering "v"1, "v"2, …, "vn". Let Ni equal the number of neighbors of vi that come after vi in that ordering. For instance, "Nn" = 0. The chromatic polynomial equals formula_0 (The last factor is simply x, so x divides the polynomial, as it should.) Clearly, this computation depends on chordality.
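With a perfect elimination ordering in hand, this computation is only a few lines. The sketch below assumes, as in the sketch above, that the graph is given as a dictionary of neighbour sets, and uses sympy for the symbolic polynomial; both choices are illustrative.

```python
from sympy import symbols, expand

def chromatic_polynomial(graph, peo):
    """Chromatic polynomial of a chordal graph from a perfect elimination ordering."""
    x = symbols('x')
    position = {v: i for i, v in enumerate(peo)}
    poly = 1
    for v in peo:
        later = sum(1 for u in graph[v] if position[u] > position[v])   # N_i
        poly *= (x - later)
    return expand(poly)

g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(chromatic_polynomial(g, [1, 2, 3, 4]))   # x**4 - 5*x**3 + 8*x**2 - 4*x
```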
Minimal separators.
In any graph, a vertex separator is a set of vertices the removal of which leaves the remaining graph disconnected; a separator is minimal if it has no proper subset that is also a separator. According to a theorem of Dirac, chordal graphs are graphs in which each minimal separator is a clique; Dirac used this characterization to prove that chordal graphs are perfect.
The family of chordal graphs may be defined inductively as the graphs whose vertices can be divided into three nonempty subsets A, S, and B, such that &NoBreak;&NoBreak; and &NoBreak;&NoBreak; both form chordal induced subgraphs, S is a clique, and there are no edges from A to B. That is, they are the graphs that have a recursive decomposition by clique separators into smaller subgraphs. For this reason, chordal graphs have also sometimes been called decomposable graphs.
Intersection graphs of subtrees.
An alternative characterization of chordal graphs, due to Gavril, involves trees and their subtrees.
From a collection of subtrees of a tree, one can define a subtree graph, which is an intersection graph that has one vertex per subtree and an edge connecting any two subtrees that overlap in one or more nodes of the tree. Gavril showed that the subtree graphs are exactly the chordal graphs.
A representation of a chordal graph as an intersection of subtrees forms a tree decomposition of the graph, with treewidth equal to one less than the size of the largest clique in the graph; the tree decomposition of any graph "G" can be viewed in this way as a representation of "G" as a subgraph of a chordal graph. The tree decomposition of a graph is also the junction tree of the junction tree algorithm.
Relation to other graph classes.
Subclasses.
Interval graphs are the intersection graphs of subtrees of path graphs, a special case of trees. Therefore, they are a subfamily of chordal graphs.
Split graphs are graphs that are both chordal and the complements of chordal graphs. It has been shown that, in the limit as n goes to infinity, the fraction of n-vertex chordal graphs that are split approaches one.
Ptolemaic graphs are graphs that are both chordal and distance hereditary.
Quasi-threshold graphs are a subclass of Ptolemaic graphs that are both chordal and cographs. Block graphs are another subclass of Ptolemaic graphs in which every two maximal cliques have at most one vertex in common. A special type is windmill graphs, where the common vertex is the same for every pair of cliques.
Strongly chordal graphs are graphs that are chordal and contain no n-sun (for "n" ≥ 3) as an induced subgraph. Here an n-sun is an n-vertex chordal graph G together with a collection of n degree-two vertices, adjacent to the edges of a Hamiltonian cycle in G.
K-trees are chordal graphs in which all maximal cliques and all maximal clique separators have the same size. Apollonian networks are chordal maximal planar graphs, or equivalently planar 3-trees. Maximal outerplanar graphs are a subclass of 2-trees, and therefore are also chordal.
Superclasses.
Chordal graphs are a subclass of the well known perfect graphs.
Other superclasses of chordal graphs include weakly chordal graphs, cop-win graphs, odd-hole-free graphs, even-hole-free graphs, and Meyniel graphs. Chordal graphs are precisely the graphs that are both odd-hole-free and even-hole-free (see holes in graph theory).
Every chordal graph is a strangulated graph, a graph in which every peripheral cycle is a triangle, because peripheral cycles are a special case of induced cycles. Strangulated graphs are graphs that can be formed by clique-sums of chordal graphs and maximal planar graphs. Therefore, strangulated graphs include maximal planar graphs.
Chordal completions and treewidth.
If G is an arbitrary graph, a chordal completion of G (or minimum fill-in) is a chordal graph that contains G as a subgraph. The parameterized version of minimum fill-in is fixed parameter tractable, and moreover, is solvable in parameterized subexponential time.
The treewidth of G is one less than the number of vertices in a maximum clique of a chordal completion chosen to minimize this clique size.
The k-trees are the graphs to which no additional edges can be added without increasing their treewidth to a number larger than k.
Therefore, the k-trees are their own chordal completions, and form a subclass of the chordal graphs. Chordal completions can also be used to characterize several other related classes of graphs.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x-N_1)(x-N_2)\\cdots(x-N_n)."
}
]
| https://en.wikipedia.org/wiki?curid=744165 |
74420348 | Generalized uncertainty principle | Physics generalization
The Generalized Uncertainty Principle (GUP) represents a pivotal extension of the Heisenberg Uncertainty Principle, incorporating the effects of gravitational forces to refine the limits of measurement precision within quantum mechanics. Rooted in advanced theories of quantum gravity, including string theory and loop quantum gravity, the GUP introduces the concept of a minimal measurable length. This fundamental limit challenges the classical notion that positions can be measured with arbitrary precision, hinting at a discrete structure of spacetime at the Planck scale. The mathematical expression of the GUP is often formulated as:
formula_0
In this equation, formula_1 and formula_2 denote the uncertainties in position and momentum, respectively. The term formula_3 represents the reduced Planck constant, while formula_4 is a parameter that embodies the minimal length scale predicted by the GUP. The GUP is more than a theoretical curiosity; it signifies a cornerstone concept in the pursuit of unifying quantum mechanics with general relativity. It posits an absolute minimum uncertainty in the position of particles, approximated by the Planck length, underscoring its significance in the realms of quantum gravity and string theory where such minimal length scales are anticipated.
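With this particular form of the GUP, the minimal position uncertainty follows from minimising the saturated bound over formula_2. A short symbolic sketch (illustrative only, assuming the sympy library):

```python
from sympy import symbols, diff, solve, simplify

hbar, beta, dp = symbols('hbar beta Delta_p', positive=True)

dx = hbar / (2 * dp) + beta * dp        # saturated GUP: Delta_x as a function of Delta_p
dp_star = solve(diff(dx, dp), dp)[0]    # momentum uncertainty minimising Delta_x
dx_min = simplify(dx.subs(dp, dp_star))

print(dp_star)   # equivalent to sqrt(hbar/(2*beta))
print(dx_min)    # equivalent to sqrt(2*hbar*beta), the minimal measurable length scale
```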
Various quantum gravity theories, such as string theory, loop quantum gravity, and quantum geometry, propose a generalized version of the uncertainty principle (GUP), which suggests the presence of a minimum measurable length. In earlier research, multiple forms of the GUP have been introduced.
Observable consequences.
The GUP's phenomenological and experimental implications have been examined across low and high-energy contexts, encompassing atomic systems, quantum optical systems, gravitational bar detectors, gravitational decoherence, and macroscopic harmonic oscillators, further extending to composite particles and astrophysical systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta x \\Delta p \\geq \\frac{\\hbar}{2} + \\beta \\Delta p^2"
},
{
"math_id": 1,
"text": "\\Delta x"
},
{
"math_id": 2,
"text": "\\Delta p"
},
{
"math_id": 3,
"text": "\\hbar"
},
{
"math_id": 4,
"text": "\\beta"
}
]
| https://en.wikipedia.org/wiki?curid=74420348 |
7442564 | Compact convergence | Type of mathematical convergence in topology
In mathematics, compact convergence (or uniform convergence on compact sets) is a type of convergence that generalizes the idea of uniform convergence. It is associated with the compact-open topology.
Definition.
Let formula_0 be a topological space and formula_1 be a metric space. A sequence of functions
formula_2, formula_3
is said to converge compactly as formula_4 to some function formula_5 if, for every compact set formula_6,
formula_7
uniformly on formula_8 as formula_4. This means that for all compact formula_9,
formula_10 | [
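A standard illustration is the sequence of functions x^n on the open interval (0, 1), which converges compactly but not uniformly to the zero function. The numerical sketch below approximates the suprema on finite grids and is illustrative only.

```python
import numpy as np

K = np.linspace(0.1, 0.9, 1001)          # grid on a compact subset [0.1, 0.9] of (0, 1)
X = np.linspace(1e-6, 1 - 1e-6, 1001)    # grid approximating the whole interval (0, 1)

for n in (10, 50, 200):
    print(n, np.max(np.abs(K ** n)), np.max(np.abs(X ** n)))
# The supremum over the compact set tends to 0 (compact convergence to the zero
# function), while the supremum over the whole interval stays close to 1, so the
# convergence is not uniform on (0, 1).
```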
{
"math_id": 0,
"text": "(X, \\mathcal{T})"
},
{
"math_id": 1,
"text": "(Y,d_{Y})"
},
{
"math_id": 2,
"text": "f_{n} : X \\to Y"
},
{
"math_id": 3,
"text": "n \\in \\mathbb{N},"
},
{
"math_id": 4,
"text": "n \\to \\infty"
},
{
"math_id": 5,
"text": "f : X \\to Y"
},
{
"math_id": 6,
"text": "K \\subseteq X"
},
{
"math_id": 7,
"text": "f_{n}|_{K} \\to f|_{K}"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "K \\subseteq X"
},
{
"math_id": 10,
"text": "\\lim_{n \\to \\infty} \\sup_{x \\in K} d_{Y} \\left( f_{n} (x), f(x) \\right) = 0."
},
{
"math_id": 11,
"text": "X = (0, 1) \\subseteq \\mathbb{R}"
},
{
"math_id": 12,
"text": "Y = \\mathbb{R}"
},
{
"math_id": 13,
"text": "f_{n} (x) := x^{n}"
},
{
"math_id": 14,
"text": "f_{n}"
},
{
"math_id": 15,
"text": "X=(0,1]"
},
{
"math_id": 16,
"text": "Y=\\R"
},
{
"math_id": 17,
"text": "f_n(x)=x^n"
},
{
"math_id": 18,
"text": "f_n"
},
{
"math_id": 19,
"text": "(0,1)"
},
{
"math_id": 20,
"text": "1"
},
{
"math_id": 21,
"text": "f_{n} \\to f"
},
{
"math_id": 22,
"text": "f_n\\to f"
},
{
"math_id": 23,
"text": "f"
}
]
| https://en.wikipedia.org/wiki?curid=7442564 |
744335 | Takens's theorem | Conditions under which a chaotic system can be reconstructed by observation
In the study of dynamical systems, a delay embedding theorem gives the conditions under which a chaotic dynamical system can be reconstructed from a sequence of observations of the state of that system. The reconstruction preserves the properties of the dynamical system that do not change under smooth coordinate changes (i.e., diffeomorphisms), but it does not preserve the geometric shape of structures in phase space.
Takens' theorem is the 1981 delay embedding theorem of Floris Takens. It provides the conditions under which a smooth attractor can be reconstructed from the observations made with a generic function. Later results replaced the smooth attractor with a set of arbitrary box counting dimension and the class of generic functions with other classes of functions.
It is the most commonly used method for attractor reconstruction.
Delay embedding theorems are simpler to state for discrete-time dynamical systems.
The state space of the dynamical system is a ν-dimensional manifold M. The dynamics is given by a smooth map
formula_0
Assume that the dynamics f has a strange attractor formula_1 with box counting dimension dA. Using ideas from Whitney's embedding theorem, A can be embedded in k-dimensional Euclidean space with
formula_2
That is, there is a diffeomorphism φ that maps A into formula_3 such that the derivative of φ has full rank.
A delay embedding theorem uses an "observation function" to construct the embedding function. An observation function formula_4 must be twice-differentiable and associate a real number to any point of the attractor A. It must also be typical, so its derivative is of full rank and has no special symmetries in its components. The delay embedding theorem states that the function
formula_5
is an embedding of the strange attractor A in formula_6
Simplified version.
Suppose the formula_7-dimensional
state vector formula_8 evolves according to an unknown but continuous
and (crucially) deterministic dynamic. Suppose, too, that the
one-dimensional observable formula_9 is a smooth function of formula_10, and “coupled”
to all the components of formula_10. Now at any time we can look not just at
the present measurement formula_11, but also at observations made at times
removed from us by multiples of some lag formula_12, etc. If we use
formula_13 lags, we have a formula_13-dimensional vector. One might expect that, as the
number of lags is increased, the motion in the lagged space will become
more and more predictable, and perhaps in the limit formula_14 would become
deterministic. In fact, the dynamics of the lagged vectors become
deterministic at a finite dimension; not only that, but the deterministic
dynamics are completely equivalent to those of the original state space (precisely, they are related by a smooth, invertible change of coordinates,
or diffeomorphism). In fact, the theorem says that determinism appears once you reach dimension formula_15, and the minimal "embedding dimension" is often less.
Choice of delay.
Takens' theorem is usually used to reconstruct strange attractors out of experimental data, for which there is contamination by noise. As such, the choice of delay time becomes important. Whereas for data without noise, any choice of delay is valid, for noisy data, the attractor would be destroyed by noise for delays chosen badly.
The optimal delay is typically around one-tenth to one-half the mean orbital period around the attractor.
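The following Python sketch (an illustration, not taken from Takens' paper) applies the idea to a scalar observable of the Hénon map: it builds delay vectors with a lag of 1 and embedding dimension 3, both arbitrary choices, and checks that nearby points in the lagged space have nearby successors, the practical signature of deterministic reconstructed dynamics.
import numpy as np
from scipy.spatial import cKDTree

# Hénon map in delayed form: x_{n+1} = 1 - a*x_n**2 + b*x_{n-1}; only x is observed
a, b = 1.4, 0.3
x_prev, x_curr = 0.0, 0.1
obs = []
for _ in range(1300):
    x_prev, x_curr = x_curr, 1.0 - a * x_curr**2 + b * x_prev
    obs.append(x_curr)
obs = np.array(obs[100:])                       # discard the transient

k, tau = 3, 1                                   # embedding dimension and lag (illustrative)
N = len(obs) - (k - 1) * tau - 1
emb = np.column_stack([obs[i * tau : i * tau + N] for i in range(k)])
succ = obs[(k - 1) * tau + 1 : (k - 1) * tau + 1 + N]   # observation following each delay vector

# if the lagged dynamics are deterministic, the nearest neighbour of a delay vector
# should predict its successor much better than the overall spread of the data
_, idx = cKDTree(emb).query(emb, k=2)           # idx[:, 1] is the nearest distinct neighbour
pred_error = np.abs(succ - succ[idx[:, 1]])
print("median one-step prediction error:", np.median(pred_error))
print("standard deviation of the observable:", obs.std())
In practice the lag would be chosen as described above, for example a fraction of the mean orbital period, and the embedding dimension increased until such prediction errors stop improving.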
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: M \\to M."
},
{
"math_id": 1,
"text": "A\\sub M"
},
{
"math_id": 2,
"text": "k > 2 d_A."
},
{
"math_id": 3,
"text": "\\R^k"
},
{
"math_id": 4,
"text": "\\alpha : M \\to \\R"
},
{
"math_id": 5,
"text": "\\varphi_T(x) = \\bigl(\\alpha(x), \\, \\alpha(f(x)), \\, \\dots, \\, \\alpha(f^{k-1}(x)) \\, \\bigr)"
},
{
"math_id": 6,
"text": "\\R^k."
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "x_t"
},
{
"math_id": 9,
"text": "y"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "y(t)"
},
{
"math_id": 12,
"text": "\\tau: y_{t+\\tau}, y_{t+2\\tau} "
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": " k \\to \\infty "
},
{
"math_id": 15,
"text": "2d+1"
}
]
| https://en.wikipedia.org/wiki?curid=744335 |
74438260 | Issue Yield | Issue yield theory
In political science, Issue Yield refers to Issue Yield theory or its derived Issue Yield index.
Issue Yield theory.
Issue Yield theory was developed to explain party strategy and voting behavior in democratic elections. The theory focuses on the electoral risks and opportunities that specific policy issues present to political parties or candidates. The risk-opportunity mix of an issue (its "issue yield") gives incentives or disincentives to each party to emphasize that particular issue in their policy platform and election campaigns. Voters embrace these issue priorities and update their party support in line with their own policy preferences. Overall, by bringing together party strategy and voting behavior, Issue Yield theory seeks to account for the development of public policy and its variation among democratic societies.
According to the theory, high-yield issue goals are those that combine three characteristics:
If these conditions are fulfilled, the issue goal presents a win-win situation, allowing electoral expansion outside the party without losing existing voters.
Issue Yield theory was first presented in an article that appeared in the American Political Science Review in 2014; a simpler presentation (also framing the theory vis-à-vis other theoretical frameworks on the topic) was later published as an entry of the Routledge Handbook of Elections, Voting Behaviour and Public Opinion.
Issue Yield index.
The three aforementioned characteristics have been combined in a summary Issue Yield index, ranging between +1 (highest possible yield) and -1 (lowest possible yield), that can be calculated with data from any public opinion survey that includes a vote intention (or vote choice) item and at least one item concerning respondents' positions on an issue.
The index is constructed on the basis of three aggregate quantities: "i" (the share of the sample that supports the issue goal), "p" (the share of the sample that supports the party) and "f" (the share of the sample that supports both). These quantities are combined in a nonlinear fashion:
formula_0
For surveys that include specific measures of party credibility regarding an issue goal, a credibility-weighted measure has been developed that also takes into account "cred" (the share of issue goal supporters who consider the party credible on the issue goal) and "intcred" (the share considering the party credible on the issue goal among party supporters that also support the goal)
formula_1
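As a concrete illustration (the survey shares below are hypothetical, not taken from the cited studies), both indices can be computed directly from the aggregate shares with a few lines of Python:
def issue_yield(f, i, p):
    # basic Issue Yield index from the shares f (support both), i (support the goal), p (support the party)
    return (f - i * p) / (p * (1 - p)) + (i - p) / (1 - p)

def issue_yield_weighted(f, i, p, cred, intcred):
    # credibility-weighted Issue Yield index
    return ((f - i * p) * intcred) / (p * (1 - p)) + ((i - p) * cred) / (1 - p)

# hypothetical survey: 60% support the goal, 30% support the party, 25% support both
print(issue_yield(f=0.25, i=0.60, p=0.30))      # about 0.76
By construction the value lies between -1 and +1; the example goal, backed by a majority of the electorate and by most of the party's own supporters, scores near the upper end.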
Empirical applications.
The theory has been first applied to strategic choices of political parties, consistently showing that high-yield goals tend to receive significantly higher priority in party platforms and in party campaign communication, with Issue Yield-related considerations being more relevant than voters' general issue priorities.
In addition, the theory has been applied to explain individual-level party support, patterns of cabinet formation, social group campaign appeals and political inequality.
The theory also served as a foundation for the Issue Competition Comparative Project (ICCP), a research project that analyzed party competition in general elections of six West European countries (Austria, France, Germany, Italy, Netherlands, UK) in 2017-18. Results were presented in a special issue of West European Politics (later published as a book), with data publicly released as GESIS Study ZA7499. A later wave covered Spain (2019), Poland (2019), the United States (2020) and Germany (2021), as well as an ICCP item battery included in a general survey run in the Czech Republic (2021).
Software tools.
A free Stata add-on ADO file for calculating the Issue Yield index on survey datasets is available from the issueyield project.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{IY=\\frac{f-ip}{p(1-p)}+\\frac{i-p}{1-p}}"
},
{
"math_id": 1,
"text": "{IY=\\frac{(f-ip)intcred}{p(1-p)}+\\frac{(i-p)cred}{1-p}}"
}
]
| https://en.wikipedia.org/wiki?curid=74438260 |
74442379 | Causal notation | Notation to express cause and effect
Causal notation is notation used to express cause and effect.
In nature and human societies, many phenomena have causal relationships where one phenomenon A (a cause) impacts another phenomenon B (an effect). Establishing causal relationships is the aim of many scientific studies across fields ranging from biology and physics to social sciences and economics. It is also a subject of accident analysis, and can be considered a prerequisite for effective policy making.
To describe causal relationships between phenomena, non-quantitative visual notations are common, such as arrows, e.g. in the nitrogen cycle or many chemistry and mathematics textbooks. Mathematical conventions are also used, such as plotting an independent variable on a horizontal axis and a dependent variable on a vertical axis, or the notation formula_0 to denote that a quantity "formula_1" is a dependent variable which is a function of an independent variable "formula_2". Causal relationships are also described using quantitative mathematical expressions. (See Notations section.)
The following examples illustrate various types of causal relationships. These are followed by different notations used to represent causal relationships.
Examples.
What follows does not necessarily assume the convention whereby formula_1 denotes an independent variable, and
formula_3 denotes a function of the independent variable formula_1. Instead, formula_1 and formula_3 denote two quantities with an a priori unknown causal relationship, which can be related by a mathematical expression.
Ecosystem example: correlation without causation.
Imagine the number of days of weather below zero degrees Celsius, formula_1, causes ice to form on a lake, formula_3, and it causes bears to go into hibernation formula_4. Even though formula_4 does not cause formula_3 and vice-versa, one can write an equation relating formula_4 and formula_3. This equation may be used to successfully calculate the number of hibernating bears formula_4, given the surface area of the lake covered by ice. However, melting the ice in a region of the lake by pouring salt onto it will not cause bears to come out of hibernation. Nor will waking the bears by physically disturbing them cause the ice to melt. In this case the two quantities formula_3 and formula_4 are both caused by a confounding variable formula_1 (the outdoor temperature), but not by each other. formula_3 and formula_4 are related by correlation without causation.
Physics example: a unidirectional causal relationship.
Suppose an ideal solar-powered system is built such that if it is sunny and the sun provides an intensity formula_5 of formula_6 watts incident on a formula_7mformula_8 solar panel for formula_9 seconds, an electric motor raises a formula_10kg stone by formula_11 meters, formula_12. More generally, we assume the system is described by the following expression:
formula_13,
where formula_5 represents intensity of sunlight (Jformula_14sformula_15formula_14mformula_16), formula_17 is the surface area of the solar panel (mformula_18), formula_19 represents time (s), formula_20 represents mass (kg), formula_21 represents the acceleration due to Earth's gravity (formula_22 mformula_14sformula_16), and formula_23 represents the height the rock is lifted (m).
In this example, the fact that it is sunny and there is a light intensity formula_5, causes the stone to rise formula_12, not the other way around; lifting the stone (increasing formula_12) will not result in turning on the sun to illuminate the solar panel (an increase in formula_5). The causal relationship between formula_5 and formula_12 is unidirectional.
Medicine example: two causes for a single outcome.
Smoking, formula_3, and exposure to asbestos, formula_4, are both known causes of cancer, formula_1. One can write an equation formula_24 to describe an equivalent carcinogenicity between how many cigarettes a person smokes, formula_3, and how many grams of asbestos a person inhales, formula_4. Here, neither formula_3 causes formula_4 nor formula_4 causes formula_3, but they both have a common outcome.
Bartering example: a bidirectional causal relationship.
Consider a barter-based economy where the number of cows formula_25 one owns has value measured in a standard currency of chickens, formula_1. Additionally, the number of barrels of oil formula_26 one owns has value which can be measured in chickens, formula_1. If a marketplace exists where cows can be traded for chickens which can in turn be traded for barrels of oil, one can write an equation formula_27 to describe the value relationship between cows formula_25 and barrels of oil formula_26. Suppose an individual in this economy always keeps half of their value in the form of cows and the other half in the form of barrels of oil. Then, increasing their number of cows formula_28 by offering them 4 cows, will eventually lead to an increase in their number of barrels of oil formula_29, or vice-versa. In this case, the mathematical equality formula_27 describes a bidirectional causal relationship.
Notations.
Chemical reactions.
In chemistry, many chemical reactions are reversible and described using equations which tend towards a dynamic chemical equilibrium. In these reactions, adding a reactant or a product causes the reaction to occur producing more product, or more reactant, respectively. It is standard to draw “harpoon-type” arrows in place of an equals sign, ⇌, to denote the reversible nature of the reaction and the dynamic causal relationship between reactants and products.
Statistics: Do notation.
Do-calculus, and specifically the do operator, is used to describe causal relationships in the language of probability. A notation used in do-calculus is, for instance:
formula_30,
which can be read as: “the probability of formula_31 given that you do formula_32”. The expression above describes the case where formula_31 is independent of anything done to formula_32. It specifies that there is no unidirectional causal relationship where formula_32 causes formula_31.
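The contrast between observing formula_32 and intervening on it can be illustrated with a small simulation (a sketch for illustration only, not part of the cited literature). In the hypothetical model below, a hidden common cause Z drives both X and Y while X has no effect on Y, so conditioning on an observed X shifts the distribution of Y, but setting X by intervention does not:
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.random(n) < 0.5                          # hidden common cause Z
x = rng.random(n) < np.where(z, 0.9, 0.1)        # Z influences X ...
y = rng.random(n) < np.where(z, 0.8, 0.2)        # ... and Z influences Y; X never affects Y

print("P(Y=1)           =", y.mean())            # about 0.50
print("P(Y=1 | X=1)     =", y[x].mean())         # about 0.74: observing X is informative about Y
# do(X=1) sets X by fiat, cutting the Z -> X arrow; the mechanism generating Y is untouched,
# so P(Y=1 | do(X=1)) equals P(Y=1), matching the expression above
print("P(Y=1 | do(X=1)) =", y.mean())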
Causal diagrams.
A causal diagram consists of a set of nodes which may or may not be interlinked by arrows. Arrows between nodes denote causal relationships with the arrow pointing from the cause to the effect. There exist several forms of causal diagrams including Ishikawa diagrams, directed acyclic graphs, causal loop diagrams, and why-because graphs (WBGs). The image below shows a partial why-because graph used to analyze the capsizing of the Herald of Free Enterprise.
Junction patterns.
Junction patterns can be used to describe the graph structure of Bayesian networks. Three possible patterns allowed in a 3-node directed acyclic graph (DAG) are the chain (A → B → C), the fork (A ← B → C), and the collider (A → B ← C).
Causal equality notation.
Various forms of causal relationships exist. For instance, two quantities formula_33 and formula_34 can both be caused by a confounding variable formula_35, but not by each other. Imagine a garbage strike in a large city, formula_35, causes an increase in the smell of garbage, formula_33 and an increase in the rat population formula_34. Even though formula_34 does not cause formula_33 and vice-versa, one can write an equation relating formula_34 and formula_33. The following table contains notation representing a variety of ways that formula_35, formula_33 and formula_34 may be related to each other.
It should be assumed that a relationship between two equations with identical senses of causality (such as formula_37, and formula_36) is one of pure correlation unless both expressions are proven to be bi-directional causal equalities. In that case, the overall causal relationship between formula_34 and formula_33 is bi-directionally causal.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y=f(x)"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "f(y)"
},
{
"math_id": 4,
"text": "g(y)"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": "100"
},
{
"math_id": 7,
"text": "1"
},
{
"math_id": 8,
"text": "^2"
},
{
"math_id": 9,
"text": "10~"
},
{
"math_id": 10,
"text": "2"
},
{
"math_id": 11,
"text": "50"
},
{
"math_id": 12,
"text": "h(I)"
},
{
"math_id": 13,
"text": "I \\times A \\times t = m \\times g \\times h ~"
},
{
"math_id": 14,
"text": "\\cdot"
},
{
"math_id": 15,
"text": "^{-1}"
},
{
"math_id": 16,
"text": "^{-2}"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "^{2}"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "g"
},
{
"math_id": 22,
"text": "9.8"
},
{
"math_id": 23,
"text": "h"
},
{
"math_id": 24,
"text": "f(y) = g(y)"
},
{
"math_id": 25,
"text": "C"
},
{
"math_id": 26,
"text": "B"
},
{
"math_id": 27,
"text": "C(y) = B(y)"
},
{
"math_id": 28,
"text": "C(y)"
},
{
"math_id": 29,
"text": "B(y)"
},
{
"math_id": 30,
"text": "P(Y|do(X)) = P(Y)~"
},
{
"math_id": 31,
"text": "Y"
},
{
"math_id": 32,
"text": "X"
},
{
"math_id": 33,
"text": "a(s)"
},
{
"math_id": 34,
"text": "b(s)"
},
{
"math_id": 35,
"text": "s"
},
{
"math_id": 36,
"text": "s ~\\overset{\\rightarrow}{=}~ b\\left(s\\right)"
},
{
"math_id": 37,
"text": " s ~\\overset{\\rightarrow}{=}~ a\\left(s\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=74442379 |
74443495 | Progressive-iterative approximation method | Computer-aided geometric design
Progressive-iterative approximation method is an iterative method of data fitting with geometric meanings. Given the data points to be fitted, the method obtains a series of fitting curves (surfaces) by iteratively updating the control points, and the limit curve (surface) can interpolate or approximate the given data points. It avoids solving a linear system of equations directly and allows flexibility in adding constraints during the iterative process. Therefore, it has been widely used in geometric design and related fields.
The study of the iterative method with geometric meaning can be traced back to the work of scholars such as Prof. Dongxu Qi and Prof. Carl de Boor in the 1970s. In 1975, Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves, and in 1979, de Boor independently proposed this algorithm. In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property. Later, in 2005, Lin et al. proved that the curves and surfaces with normalized and totally positive basis all have this property and named it progressive iterative approximation (PIA). In 2007, Maekawa et al. changed the algebraic distance in PIA to geometric distance and named it geometric interpolation (GI). In 2008, Cheng et al. extended it to subdivision surfaces and named the method progressive interpolation (PI). Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meanings, they are collectively referred to as geometric iterative methods (GIM).
PIA has now been extended to several common curves and surfaces in the geometric design field, including NURBS curves and surfaces, T-spline surfaces, implicit curves and surfaces, etc.
Iteration Methods.
Generally, progressive-iterative approximation can be divided into interpolation and approximation schemes. In interpolation algorithms, the number of control points is equal to that of the data points; in approximation algorithms, the number of control points can be less than that of the data points. Specifically, there are some representative iteration methods, such as local-PIA, implicit-PIA, fairing-PIA, and isogeometric least-squares progressive-iterative approximation (IG-LSPIA), which is specialized for solving the isogeometric analysis problem.
Interpolation scheme: PIA.
To facilitate the description of the PIA iteration format for different forms of curves and surfaces, we write B-spline curves and surfaces, NURBS curves and surfaces, B-spline solids, T-spline surfaces, and triangular Bernstein–Bézier (B–B) surfaces uniformly in the following form:
formula_0
Here formula_1 is the fitting curve (surface), formula_2 is the parameter, formula_3 is the basis function, and formula_4 is the control point. For example, for a tensor-product B-spline surface with formula_5 control points, the parameter is formula_6 and the basis function is formula_7, where formula_8 and formula_9 are univariate B-spline basis functions; for a B-spline solid with formula_10 control points, the parameter is formula_11 and the basis function is formula_12, where formula_13 is also a univariate B-spline basis function.
Given an ordered data set formula_14, with parameters formula_15 satisfying formula_16, the initial fitting curve (surface) is
formula_17,
where the initial control points of the initial fitting curve (surface) formula_18 can be randomly selected. Suppose that after the formula_19th iteration, the formula_19th fitting curve (surface) formula_20 is generated by
formula_21
To construct the formula_22st curve (surface), we first calculate the "difference vectors",
formula_23
and then update the control points by
formula_24
leading to the formula_22st fitting curve (surface):
formula_25
In this way, we obtain a sequence of curves (surfaces)
formula_26
It has been proved that this sequence of curves (surfaces) converges to a limit curve (surface) that interpolates the given data points, i.e.,
formula_27
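A minimal numerical sketch of this iteration is given below in Python. It uses the Bernstein (Bézier) basis as a concrete normalized, totally positive basis, for which the iteration is known to converge, and an arbitrary sample curve; it is meant only to show the structure of the update.
import numpy as np
from math import comb

def collocation_matrix(ts, n):
    # B[i, j] = B_j(t_i) for the degree-n Bernstein basis
    return np.array([[comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)] for t in ts])

t = np.linspace(0.0, 1.0, 5)                         # parameters of the data points (m = n case)
Q = np.column_stack([np.cos(2 * t), np.sin(3 * t)])  # data points to interpolate

B = collocation_matrix(t, len(t) - 1)                # square collocation matrix
P = Q.copy()                                         # initial control points P_i^(0) = Q_i
for k in range(2000):
    Delta = Q - B @ P                                # difference vectors Delta_i^(k)
    P = P + Delta                                    # adjust every control point
print("max interpolation error:", np.abs(Q - B @ P).max())   # shrinks toward zero
Because each control point is adjusted independently by its own difference vector, constraints or local modifications can be inserted between iterations, which is the flexibility referred to above.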
Approximation scheme: LSPIA.
For the B-spline curve and surface fitting problem, Deng and Lin proposed a least-squares progressive–iterative approximation (LSPIA), which allows the number of control points to be less than that of the data points and is more suitable for large-scale data fitting problems.
Assume that the number of data points is formula_28, and the number of control points is formula_29. Following the notations in the section above, the formula_19th fitting curve (surface) generated after the formula_19th iteration is formula_20, i.e.,
formula_30
To generate the formula_22st fitting curve (surface), we compute the following difference vectors in turn:
"Difference vectors for data points":
formula_31 and,
"Difference vectors for control points"
formula_32
where formula_33 is the index set of the data points in the formula_34th group, whose parameters fall in the local support of the formula_34th basis function, i.e., formula_35. formula_36 are weights that guarantee the convergence of the algorithm, usually taken as formula_37.
Finally, the control points of the formula_22st curve (surface) are updated by formula_38, leading to the formula_22st fitting curve (surface) formula_39. In this way, we obtain a sequence of curves (surfaces), and the limit curve (surface) converges to the least-squares fitting result for the given data points.
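The following Python sketch mirrors this scheme in the matrix form used in the convergence analysis below, with the weights taken as formula_37 and a global step size set from the largest eigenvalue of B^T B; the data, the Bernstein basis (used here instead of a B-spline basis), and the parameters are illustrative assumptions.
import numpy as np
from math import comb

def collocation_matrix(ts, n):
    return np.array([[comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)] for t in ts])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)                       # m = 100 data points ...
Q = np.column_stack([t, np.sin(2 * np.pi * t)]) + 0.01 * rng.standard_normal((100, 2))

B = collocation_matrix(t, 4)                         # ... fitted with n = 5 control points
mu = 1.0 / np.linalg.eigvalsh(B.T @ B).max()         # any 0 < mu < 2 / lambda_max is admissible
P = np.zeros((5, 2))                                 # initial control points
for k in range(20000):
    delta = Q - B @ P                                # difference vectors for the data points
    P = P + mu * (B.T @ delta)                       # adjust the control points
P_ls, *_ = np.linalg.lstsq(B, Q, rcond=None)         # direct least-squares fit, for comparison
print("LSPIA residual:        ", np.linalg.norm(Q - B @ P))
print("least-squares residual:", np.linalg.norm(Q - B @ P_ls))
After enough iterations the LSPIA residual approaches that of the directly computed least-squares fit.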
Local-PIA.
In the local-PIA, the control points are divided into active and fixed control points, whose subscripts are denoted as formula_40 and formula_41, respectively. Assume that the formula_42th fitting curve (surface) is formula_43, where the fixed control points satisfy
formula_44
Then, on the one hand, the iterative formula of the difference vector formula_45 corresponding to the fixed control points is
formula_46
On the other hand, the iterative formula of the difference vector formula_47 corresponding to the active control points is
formula_48
Arranging the above difference vectors into a one-dimensional sequence,
formula_49
the local iteration format in matrix form is,
formula_50
where formula_51 is the iteration matrix,
formula_52
formula_53 and formula_54 are the identity matrices and
formula_55
The above local iteration format converges and can be extended to blending surfaces and subdivision surfaces.
Implicit-PIA.
The progressive iterative approximation format for implicit curve and surface reconstruction is presented in the following. Given an ordered point cloud formula_56 and unit normal vectors formula_57 at the data points, we want to reconstruct an implicit curve (surface) from the given point cloud. To avoid a trivial solution, some offset points formula_58 are added to the point cloud. They are offset by a distance formula_59 along the unit normal vector of each point
formula_60
Assume that formula_61 is the value of the implicit function at the offset point
formula_62
Let the implicit curve after the formula_63th iteration be
formula_64
where formula_65 is the control point.
Define the difference vector of data points as
formula_66
Next, calculate the difference vector of control coefficients
formula_67
where formula_68 is the convergence coefficient. As a result, the new control coefficients are
formula_69
leading to the new algebraic B-spline curve
formula_70
The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions formula_71. The sequence converges to a minimization problem with constraints when the initial control coefficients formula_72.
Assume that the implicit surface generated after the formula_63th iteration is
formula_73
the iteration format is similar to that of the curve case.
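As an illustrative sketch of the curve case, the following Python code reconstructs an implicit curve from points sampled on a circle together with their offset points, using a low-degree tensor-product Bernstein basis in place of the B-spline basis; the point cloud, the offset distance, and the degree are arbitrary choices.
import numpy as np
from math import comb

def bern(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

# points on a circle of radius 0.3 centred at (0.5, 0.5), plus offsets along the outward normals
m = 80
th = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
px, py = 0.5 + 0.3 * np.cos(th), 0.5 + 0.3 * np.sin(th)
sigma, eps = 0.1, 1.0                                     # offset distance and offset value
X = np.concatenate([px, px + sigma * np.cos(th)])
Y = np.concatenate([py, py + sigma * np.sin(th)])
target = np.concatenate([np.zeros(m), np.full(m, eps)])   # f = 0 on the curve, f = eps at the offsets

deg = 2                                                   # bi-degree of the tensor-product basis
Bx = np.array([[bern(deg, i, x) for i in range(deg + 1)] for x in X])
By = np.array([[bern(deg, j, y) for j in range(deg + 1)] for y in Y])
A = np.einsum('ki,kj->kij', Bx, By).reshape(2 * m, -1)    # basis values B_i(x_k) B_j(y_k)

mu = 1.0 / np.linalg.eigvalsh(A.T @ A).max()              # convergence coefficient
C = np.zeros(A.shape[1])                                  # initial control coefficients C^(0) = 0
print("initial max residual:", np.abs(target - A @ C).max())
for _ in range(20000):
    delta = target - A @ C                                # difference vectors of the data points
    C = C + mu * (A.T @ delta)                            # difference vectors of the control coefficients
print("final max residual:  ", np.abs(target - A @ C).max())
The zero level set of the resulting function approximates the sampled circle, and the offset constraint is what rules out the trivial all-zero solution.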
Fairing-PIA.
To develop fairing-PIA, we first define the functionals as follows:
formula_74
where formula_75 represents the formula_76th derivative of the basis function formula_77 (e.g., a B-spline basis function).
Let the curve after the formula_42th iteration be
formula_78
To construct the new curve formula_79,we first calculate the formula_80st difference vectors for data points,
formula_81
Then, the fitting difference vectors and the fairing vectors for control points are calculated by
formula_82
formula_83
Finally, the control points of the formula_22st curve are produced by
formula_84
where formula_85 is a normalization weight, and formula_86 is a smoothing weight corresponding to the formula_34th control point. The smoothing weights can be employed to adjust the smoothness individually, thus bringing great flexibility for smoothness. The larger the smoothing weight is, the smoother the generated curve is. The new curve is obtained as follows
formula_87
In this way, we obtain a sequence of curves formula_88. The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal (formula_89). Similarly, the fairing-PIA can be extended to the surface case.
IG-LSPIA.
Given a boundary value problem
formula_90
where formula_91 is the unknown solution, formula_92 and formula_93 are the differential operator and the boundary operator, respectively, and formula_94 and formula_95 are continuous functions. In the isogeometric analysis method, NURBS basis functions are used as shape functions to compute the numerical solution of this boundary value problem. The same basis functions are applied to represent the numerical solution formula_96 and the geometric mapping formula_97:
formula_98
where formula_99 denotes the NURBS basis function and formula_100 is the control coefficient. After substituting the collocation points formula_101 into the strong form of the PDE, we obtain a discretized problem
formula_102
where formula_103 and formula_104 denote the subscripts of internal and boundary collocation points, respectively.
Arranging the control coefficients formula_100 of the numerical solution formula_105 into an formula_106-dimensional column vector, i.e., formula_107, the discretized problem can be reformulated in matrix form
formula_108
where formula_109 is the collocation matrix, and formula_110 is the load vector.
Assume that the discretized load values are data points formula_111 to be fitted. Given the initial guess of the control coefficients formula_112 (formula_113), we obtain an initial blending function
formula_114
where formula_115, formula_116, represents the combination of different order derivatives of the NURBS basis functions determined using the operators formula_92 and formula_93
formula_117
where formula_118 and formula_119 indicate the interior and boundary of the parameter domain, respectively. Each formula_115 corresponds to the formula_120th control coefficient. Assume that formula_121 and formula_122 are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained using strong or weak imposition and are fixed, i.e.,
formula_123
The formula_42th blending function, generated after the formula_42th iteration of IG-LSPIA, is assumed to be as follows:
formula_124
Then, the difference vectors for collocation points (DCP) in the formula_80st iteration are obtained using
formula_125
Moreover, group all load values whose parameters fall in the local support of the formula_120th derivative function, i.e., formula_126, into the formula_120th group corresponding to the formula_120th control coefficient, and denote the index set of the formula_120th group of load values as formula_127. Lastly, the differences for control coefficients (DCC) can be constructed as follows: formula_128
where formula_68 is a normalization weight to guarantee the convergence of the algorithm.
Thus, the new control coefficients are updated via the following formula,
formula_129
Consequently, the formula_80st blending function is generated as follows:
formula_130
The above iteration process is performed until the desired fitting precision is reached and a sequence of blending functions is obtained
formula_131
The IG-LSPIA converges to the solution of a constrained least-squares collocation problem.
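A schematic Python example for a one-dimensional Poisson problem, -u''(t) = π² sin(π t) on (0, 1) with u(0) = u(1) = 0, is given below. For simplicity it uses a Bernstein basis (whose endpoint coefficients impose the boundary values directly) instead of NURBS, an identity geometric map, and the matrix form of the update; it is meant to show the structure of IG-LSPIA rather than to be a tuned solver.
import numpy as np
from math import comb

def bern(n, i, t):
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bern_dd(n, i, t):
    # second derivative of the degree-n Bernstein basis function
    return n * (n - 1) * (bern(n - 2, i - 2, t) - 2 * bern(n - 2, i - 1, t) + bern(n - 2, i, t))

deg = 6
tau = np.linspace(0.0, 1.0, 42)[1:-1]                 # interior collocation points
b = np.pi**2 * np.sin(np.pi * tau)                    # discretized load values

# blending functions A_j = -B_j'' at the collocation points; the boundary coefficients
# u_0 = u_deg = 0 are fixed, and only the interior coefficients are iterated
A = np.array([[-bern_dd(deg, j, t) for j in range(1, deg)] for t in tau])

mu = 1.0 / np.linalg.eigvalsh(A.T @ A).max()          # normalization weight
u = np.zeros(deg - 1)                                 # interior control coefficients
print("initial collocation residual:", np.linalg.norm(b - A @ u))
for _ in range(50000):
    delta = b - A @ u                                 # differences at the collocation points (DCP)
    u = u + mu * (A.T @ delta)                        # differences for the control coefficients (DCC)
print("final collocation residual:  ", np.linalg.norm(b - A @ u))
The residual at the collocation points decreases monotonically toward that of the least-squares collocation solution.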
Proof of convergence.
Non-singular case.
When formula_132, the PIA iterative format can be written in matrix form:
formula_133
where
formula_134
The convergence of the PIA is related to the properties of the collocation matrix. If the spectral radius of the iteration matrix formula_135 is less than formula_136, then the PIA is convergent. It has been shown that the PIA methods for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull–Clark, Doo–Sabin) are convergent.
Similarly, the LSPIA iterative format can be written in matrix form as
formula_137
When the matrix formula_138 is nonsingular, the following results can be obtained.
Lemma: If formula_139, where formula_140 is the largest eigenvalue of the matrix formula_138, then the eigenvalues of formula_141 are real numbers and satisfy formula_142.
Proof: Since formula_138 is nonsingular and formula_143, we have formula_144. Moreover,
formula_145
In summary, formula_142.
Theorem: If formula_139, then LSPIA is convergent and converges to the least-squares fitting result for the given data points.
Proof: From the matrix form of the iterative format, we obtain the following:
formula_146
According to the above lemma, the spectral radius of the matrix formula_141 satisfies
formula_147
Thus, the spectral radius of the iteration matrix satisfies
formula_148
When formula_149,
formula_150
As a result,
formula_151
i.e., formula_152, which is equivalent to the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for a given sequence of points.
Singular case.
Lin et al. showed that LSPIA converges even when the iteration matrix is singular.
Applications.
Since PIA has obvious geometric meaning, constraints can be easily integrated in the iterations. Currently, PIA has been widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis.
Data fitting
Implicit reconstruction
For implicit curve and surface reconstruction, the PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm.
Offset curve approximation
Firstly, the data points are sampled on the original curve. Then, the initial polynomial approximation curve or rational approximation curve of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method.
Mesh generation
Given a triangular mesh model as input, the algorithm first constructs an initial hexahedral mesh and extracts the quadrilateral mesh of the surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm can guarantee the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero.
Data compression
First, the image data are converted into a one-dimensional sequence by Hilbert scan; then, these data points are fitted by LSPIA to generate a Hilbert curve; finally, the Hilbert curve is sampled, and the compressed image can be reconstructed. This method can well preserve the neighborhood information of pixels.
Fairing curve and surface generation
Given a data point set, we first define the fairing functional, and calculate the fitting difference vector and the fairing vector of the control point; then, adjust the control points with fairing weights. According to the above steps, the fairing curve and surface can be generated iteratively. Due to the sufficient fairing parameters, the method can achieve global or local fairing. It is also flexible to adjust knot vectors, fairing weights, or data parameterization after each round of iteration. The traditional energy-minimization method is a special case of this method, i.e., when the smooth weights are all the same.
Isogeometric analysis
The discretized load values are regarded as the set of data points, and the combination of the basis functions and their derivative functions is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to the fitting result of the blending function to the load values. In addition, the average iteration time per step is only related to the number of data points (i.e., collocation points) and unrelated to the number of control coefficients.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{P}(\\mathbf{t})=\\sum_{i=1}^n\\mathbf{P}_iB_i(\\mathbf{t})."
},
{
"math_id": 1,
"text": "\\mathbf{P}(\\mathbf{t})"
},
{
"math_id": 2,
"text": "\\mathbf{t}"
},
{
"math_id": 3,
"text": "B_i(t)"
},
{
"math_id": 4,
"text": "\\mathbf{P}_i"
},
{
"math_id": 5,
"text": "n_u\\times n_v"
},
{
"math_id": 6,
"text": "\\mathbf{t}=(u,v)"
},
{
"math_id": 7,
"text": "B_i(\\mathbf{t})=N_i(u)N_i(v)"
},
{
"math_id": 8,
"text": "N_i(u)"
},
{
"math_id": 9,
"text": "N_i(v)"
},
{
"math_id": 10,
"text": "n_u \\times n_v \\times n_w"
},
{
"math_id": 11,
"text": "\\mathbf{t}=(u,v,w)"
},
{
"math_id": 12,
"text": "B_i(\\mathbf{t})=N_i(u)N_i(v)N_i(w)"
},
{
"math_id": 13,
"text": "N_i(w)"
},
{
"math_id": 14,
"text": "{\\mathbf{Q}_i,i=1,2,\\cdots,n}"
},
{
"math_id": 15,
"text": "t_i,i=1,2,\\cdots,n"
},
{
"math_id": 16,
"text": "t_1<t_2<\\cdots"
},
{
"math_id": 17,
"text": "\\mathbf{P}^{(0)}(t)=\\sum_{i=1}^n\\mathbf{P}_i^{(0)}B_i(t)"
},
{
"math_id": 18,
"text": "\\mathbf{P}_i^{(0)}"
},
{
"math_id": 19,
"text": "k"
},
{
"math_id": 20,
"text": "\\mathbf{P}^{(k)}(t)"
},
{
"math_id": 21,
"text": " \\mathbf{P}^{(k)}(t)=\\sum_{i=1}^n\\mathbf{P}_i^{(k)}B_i(t). "
},
{
"math_id": 22,
"text": "(k+1)"
},
{
"math_id": 23,
"text": " \\mathbf{\\Delta}^{(k)}_i=\\mathbf{Q}_i-\\mathbf{P}^{(k)}(t_i), i=1,2,\\cdots,n, "
},
{
"math_id": 24,
"text": "\\mathbf{P}_i^{(k+1)}=\\mathbf{P}_i^{(k)}+\\mathbf{\\Delta}_i^{(k)}, "
},
{
"math_id": 25,
"text": "\n\\mathbf{P}^{(k+1)}(t)=\\sum_{i=1}^n\\mathbf{P}_i^{(k+1)}B_i(t)."
},
{
"math_id": 26,
"text": "\n\\mathbf{P}^{(\\alpha)}(t),\\alpha=0,1,2,\\cdots.\n"
},
{
"math_id": 27,
"text": "\n\\lim \\limits_{\\alpha\\rightarrow\\infty}\\mathbf{P}^{(\\alpha)}(t_i)=\\mathbf{Q}_i, i=1,2,\\cdots,n.\n"
},
{
"math_id": 28,
"text": "m"
},
{
"math_id": 29,
"text": "n(n\\le m)"
},
{
"math_id": 30,
"text": "\n\\mathbf{P}^{(k)}(t)=\\sum_{j=1}^n\\mathbf{P}_j^{(k)}B_j(t).\n"
},
{
"math_id": 31,
"text": "\n\\mathbf{\\delta}^{(k)}_i=\\mathbf{Q}_i-\\mathbf{P}^{(k)}(t_i), i=1,2,\\cdots,m,\n"
},
{
"math_id": 32,
"text": "\n\\mathbf{\\Delta}^{(k)}_j=\\frac{\n\\sum_{i \\in I_j}{c_i B_j(t_i) \\mathbf{\\delta}_i^{(k)}}}{\\sum_{i \\in I_j}c_i B_j(t_i)},\nj = 1,2,\\cdots,n,\n"
},
{
"math_id": 33,
"text": "I_j"
},
{
"math_id": 34,
"text": "j"
},
{
"math_id": 35,
"text": "B_j(t_i)\\ne0"
},
{
"math_id": 36,
"text": "c_i, i \\in I_j"
},
{
"math_id": 37,
"text": "c_i = 1, i \\in I_j"
},
{
"math_id": 38,
"text": " \\mathbf{P}_j^{(k+1)}=\\mathbf{P}_j^{(k)}+\\mathbf{\\Delta}_j^{(k)}, "
},
{
"math_id": 39,
"text": "\\mathbf{P}^{(k+1)}(t)"
},
{
"math_id": 40,
"text": "I=\\left\\{i_1,i_2,\\cdots,i_I\\right\\}"
},
{
"math_id": 41,
"text": "J=\\left\\{j_1,j_2,\\cdots,j_J\\right\\}"
},
{
"math_id": 42,
"text": "k"
},
{
"math_id": 43,
"text": "\\mathbf{P}^{(k)}(t)=\\sum_{j=1}^n\\mathbf{P}_j^{(k)}B_j(t)"
},
{
"math_id": 44,
"text": "\n\\mathbf{P}_j^{(k)}=\\mathbf{P}_j^{(0)},\\quad j\\in J,\\quad k=0,1,2,\\cdots.\n"
},
{
"math_id": 45,
"text": "\\mathbf{\\Delta}_h^{(k+1)}"
},
{
"math_id": 46,
"text": "\n\\begin{aligned}\n\\mathbf{\\Delta}_h^{(k+1)}&=\\mathbf{Q}_h-\\sum_{j=1}^n\\mathbf{P}_j^{(k+1)}B_j(t_h)\\\\\n&=\\mathbf{Q}_h-\\sum_{j\\in J}\\mathbf{P}_j^{(k+1)}B_j(t_h)-\\sum_{i\\in I}\\left(\\mathbf{P}_i^{(k)}+\\mathbf{\\Delta}_i^{(k)}\\right)B_i(t_h)\\\\\n&=\\mathbf{Q}_h-\\sum_{j=1}^n\\mathbf{P}_j^{(k)}B_j(t_h)-\\sum_{i\\in I}\\mathbf{\\Delta}_i^{(k)}B_i(t_h)\\\\\n&=\\mathbf{\\Delta}_h^{(k)}-\\sum_{i\\in I}\\mathbf{\\Delta}_i^{(k)}B_i(t_h), \\quad h\\in J.\n\\end{aligned}\n"
},
{
"math_id": 47,
"text": "\\mathbf{D}_l^{(k+1)}"
},
{
"math_id": 48,
"text": "\n\\begin{aligned}\n\\mathbf{\\Delta}_l^{(k+1)}&=\\mathbf{Q}_l-\\sum_{j=1}^n\\mathbf{P}_j^{(k+1)}B_j(t_l)\\\\\n&=\\mathbf{Q}_l-\\sum_{j=1}^n\\mathbf{P}_j^{(k)}B_j(t_l)-\\sum_{i\\in I}\\mathbf{\\Delta}_i^{(k)}B_i(t_l)\\\\\n&=\\mathbf{\\Delta}_l^{(k)}-\\sum_{i\\in I}\\mathbf{\\Delta}_i^{(k)}B_i(t_l)\\\\\n&=-\\mathbf{\\Delta}_{i_1}^{(k)}B_{i_1}(t_l)-\\mathbf{\\Delta}_{i_2}^{(k)}B_{i_2}(t_l)-\\cdots+\\left(1-B_l(t_l)\\right)\\mathbf{\\Delta}_l ^{(k)}-\\cdots-\\mathbf{\\Delta}_{i_I}^{(k)}B_{i_I}(t_l),\\quad l\\in I.\n\\end{aligned}\n"
},
{
"math_id": 49,
"text": "\n\\mathbf{D}^{(k+1)}=\\left[\\mathbf{\\Delta}_{j_1}^{(k+1)} ,\\mathbf{\\Delta}_{j_2}^{(k+1)},\\cdots,\\mathbf{\\Delta}_{j_J}^{(k+1)},\\mathbf{\\Delta}_{i_1}^{(k+1)},\\mathbf{\\Delta}_{i_2}^{(k+1)},\\cdots,\\mathbf{\\Delta}_{i_I}^{(k+1)}\\right]^T,\\quad k=0,1,2,\\cdots,\n"
},
{
"math_id": 50,
"text": "\n\\mathbf{D}^{(k+1)}=\\mathbf{T}\\mathbf{D}^{(k)},\\quad k=0,1,2,\\cdots,\n"
},
{
"math_id": 51,
"text": "\\mathbf{T}"
},
{
"math_id": 52,
"text": "\n\\mathbf{T}=\n\\begin{bmatrix}\n\\mathbf{E}_J & -\\mathbf{B}_1\\\\\n0 & \\mathbf{E}_I-\\mathbf{B}_2\n\\end{bmatrix},\n"
},
{
"math_id": 53,
"text": "\\mathbf{E}_J"
},
{
"math_id": 54,
"text": "\\mathbf{E}_I"
},
{
"math_id": 55,
"text": "\n\\mathbf{B}_1=\n\\begin{bmatrix}\nB_{i_1}\\left(t_{j_1} \\right) & B_{i_2}\\left(t_{j_1} \\right) & \\cdots &B_{i_I}\\left(t_{j_1} \\right) \\\\\nB_{i_1}\\left(t_{j_2} \\right) & B_{i_2}\\left(t_{j_2} \\right) & \\cdots &B_{i_I}\\left(t_{j_2} \\right) \\\\\n\\vdots & \\vdots &\\vdots & \\vdots \\\\\nB_{i_1}\\left(t_{j_J} \\right) & B_{i_2}\\left(t_{j_J} \\right) & \\cdots &B_{i_I}\\left(t_{j_J} \\right) \\\\\n\\end{bmatrix},\n\n\\mathbf{B}_2=\n\\begin{bmatrix}\nB_{i_1}\\left(t_{i_1} \\right) & B_{i_2}\\left(t_{i_1} \\right) & \\cdots &B_{i_I}\\left(t_{i_1} \\right) \\\\\nB_{i_1}\\left(t_{i_2} \\right) & B_{i_2}\\left(t_{i_2} \\right) & \\cdots &B_{i_I}\\left(t_{i_2} \\right) \\\\\n\\vdots & \\vdots &\\vdots & \\vdots \\\\\nB_{i_1}\\left(t_{i_I} \\right) & B_{i_2}\\left(t_{i_I} \\right) & \\cdots &B_{i_I}\\left(t_{i_I} \\right) \\\\\n\\end{bmatrix}.\n"
},
{
"math_id": 56,
"text": "\\left\\{\\mathbf{Q}_i\\right\\}_{i=1}^n"
},
{
"math_id": 57,
"text": "\\left\\{\\mathbf{n}_i\\right\\}_{i=1}^n"
},
{
"math_id": 58,
"text": "\\left\\{\\mathbf{Q}_l\\right\\}_{l=n+1}^{2n}"
},
{
"math_id": 59,
"text": "\\sigma"
},
{
"math_id": 60,
"text": "\n\\mathbf{Q}_l=\\mathbf{Q}_i+\\sigma\\mathbf{n}_i,\\quad l=n+i,\\quad i=1,2,\\cdots,n.\n"
},
{
"math_id": 61,
"text": "\\epsilon"
},
{
"math_id": 62,
"text": "\nf\\left(\\mathbf{Q}_l\\right)=\\epsilon,\\quad l=n+1,n+2,\\cdots,2n.\n"
},
{
"math_id": 63,
"text": "\\alpha"
},
{
"math_id": 64,
"text": "\nf^{(\\alpha)}(x,y)=\\sum_{i=1}^{N_u}\\sum_{j=1}^{N_v}C_{ij}^{(\\alpha)}B_i(x)B_j(y),\n"
},
{
"math_id": 65,
"text": "C_{ij}^{(\\alpha)}"
},
{
"math_id": 66,
"text": "\n\\begin{aligned}\n\\delta_k^{(\\alpha)}&=0-f^{(\\alpha)}(x_k,y_k),\\quad k=1,2,\\cdots,n,\\\\\n\\delta_l^{(\\alpha)}&=\\epsilon-f^{(\\alpha)}(x_l,y_l),\\quad l=n+1,n+2,\\cdots, 2n.\n\\end{aligned}\n"
},
{
"math_id": 67,
"text": "\n{\\Delta}_{ij}^{(\\alpha)}=\\mu\\sum_{k=1}^{2n}B_i(x_k)B_j(y_k)\\delta_k^{(\\alpha)},\\quad i=1,2,\\cdots,N_u,\\quad j=1,2,\\cdots,N_v,\n"
},
{
"math_id": 68,
"text": "\\mu"
},
{
"math_id": 69,
"text": "\nC_{ij}^{(\\alpha+1)}=C_{ij}^{(\\alpha)}+\\Delta_{ij}^{(\\alpha)},\n"
},
{
"math_id": 70,
"text": "\nf^{(\\alpha+1)}(x,y)=\\sum_{i=1}^{N_u}\\sum_{j=1}^{N_v}C_{ij}^{(\\alpha+1)}B_i(x)B_j(y).\n"
},
{
"math_id": 71,
"text": "\\left\\{f^{(\\alpha)}(x,y), \\alpha=0,1,2,\\cdots\\right\\}"
},
{
"math_id": 72,
"text": "C_{ij}^{(0)}=0"
},
{
"math_id": 73,
"text": "\nf^{(\\alpha)}(x,y,z)=\\sum_{i=1}^{N_u}\\sum_{j=1}^{N_v}\\sum_{k=1}^{N_w}C_{ijk}^{(\\alpha)}B_i(x)B_j(y)B_k(z),\n"
},
{
"math_id": 74,
"text": "\n\\mathcal{F}_{r,j}(f) = \\int_{t_1}^{t_m}B_{r,j}(t)fdt,\\quad j=1,2,\\cdots,n,\\quad r=1,2,3,\n"
},
{
"math_id": 75,
"text": "B_{r,j}(t)"
},
{
"math_id": 76,
"text": "r"
},
{
"math_id": 77,
"text": "B_j(t)"
},
{
"math_id": 78,
"text": "\n\\mathbf{P}^{[k]}(t)=\\sum_{j=1}^nB_j(t)\\mathbf{P}_j^{[k]},\\quad t\\in[t_1,t_m].\n"
},
{
"math_id": 79,
"text": "\\mathbf{P}^{[k+1]}(t)"
},
{
"math_id": 80,
"text": "(k + 1)"
},
{
"math_id": 81,
"text": "\n\\mathbf{d}_i^{[k]} = \\mathbf{Q}_i - \\mathbf{P}^{[k]}(t_i),\\quad i=1,2,\\cdots,m.\n"
},
{
"math_id": 82,
"text": "\n\\mathbf{\\delta}_j^{[k]} = \\sum_{h\\in I_j}B_j(t_h)\\mathbf{d}_h^{[k]},\\quad j=1,2,\\cdots,n,\n"
},
{
"math_id": 83,
"text": "\n\\mathbf{\\eta}_{j}^{[k]} =\\sum_{l=1}^n \\mathcal{F}_{r,l}\\left(B_{r,j}(t)\\right)\\mathbf{P}_l^{[k]},\\quad j=1,2,\\cdots,n.\n"
},
{
"math_id": 84,
"text": "\n\\mathbf{P}_j^{[k+1]} = \\mathbf{P}_j^{[k]} + \\mu_j\n\\left[\n\\left(1-\\omega_j\\right)\\mathbf{\\delta}_j^{[k]} - \\omega_j\\mathbf{\\eta}_{j}^{[k]}\n\\right],\\quad j=1,2,\\cdots,n,\n"
},
{
"math_id": 85,
"text": "\\mu_j"
},
{
"math_id": 86,
"text": "\\omega_j"
},
{
"math_id": 87,
"text": "\n\\mathbf{P}^{[k+1]}(t)=\\sum_{j=1}^nB_j(t)\\mathbf{P}_j^{[k+1]},\\quad t\\in[t_1,t_m].\n"
},
{
"math_id": 88,
"text": "\\left\\{\\mathbf{P}^{[k]}(t),\\;k=1,2,3,\\cdots\\right\\}"
},
{
"math_id": 89,
"text": "\\omega_j=\\omega"
},
{
"math_id": 90,
"text": "\n\\left\\{\n\\begin{aligned}\n\\mathcal{L}u=f,&\\quad \\text{in}\\;\\Omega,\\\\\n\\mathcal{G}u=g,&\\quad \\text{on}\\;\\partial\\Omega,\n\\end{aligned}\n\\right.\n"
},
{
"math_id": 91,
"text": "u:\\Omega\\to\\mathbb{R}"
},
{
"math_id": 92,
"text": "\\mathcal{L}"
},
{
"math_id": 93,
"text": "\\mathcal{G}"
},
{
"math_id": 94,
"text": "f"
},
{
"math_id": 95,
"text": "g"
},
{
"math_id": 96,
"text": "u_h"
},
{
"math_id": 97,
"text": "G"
},
{
"math_id": 98,
"text": "\n\\begin{aligned}\nu_h\\left(\\hat{\\tau}\\right) &= \\sum_{j=1}^nR_{j}(\\hat\\tau )u_j,\\\\\nG({\\hat \\tau }) &= \\sum_{j=1}^nR_{j}(\\hat\\tau )P_j,\n\\end{aligned}\n"
},
{
"math_id": 99,
"text": "R_j(\\hat{\\tau})"
},
{
"math_id": 100,
"text": "u_j"
},
{
"math_id": 101,
"text": "\\hat\\tau_{i} ,i = 1,2,...,{m}"
},
{
"math_id": 102,
"text": "\n\\left\\{\n\\begin{aligned}\n\\mathcal{L}u_{h}(\\hat\\tau_{i})=f(G(\\hat\\tau_{i})),&\\quad i\\in\\mathcal{I_L},\\\\\n\\mathcal{G}u_{h}(\\hat\\tau_{j})=g(G(\\hat\\tau_{j})),&\\quad j\\in\\mathcal{I_G},\n\\end{aligned}\n\\right.\n"
},
{
"math_id": 103,
"text": "\\mathcal{I_L}"
},
{
"math_id": 104,
"text": "\\mathcal{I_G}"
},
{
"math_id": 105,
"text": "u_h(\\hat\\tau)"
},
{
"math_id": 106,
"text": "1"
},
{
"math_id": 107,
"text": "\\mathbf{U}=[u_1,u_2,...,u_n]^T"
},
{
"math_id": 108,
"text": "\n\\mathbf{AU}=\\mathbf{b},\n"
},
{
"math_id": 109,
"text": "\\mathbf{A}"
},
{
"math_id": 110,
"text": "\\mathbf{b}"
},
{
"math_id": 111,
"text": "\\left\\{b_i\\right\\}_{i=1}^m"
},
{
"math_id": 112,
"text": "\\left\\{u_j^{(0)}\\right\\}_{j=1}^n"
},
{
"math_id": 113,
"text": "n<m"
},
{
"math_id": 114,
"text": "\nU^{(0)}(\\hat\\tau) = \\sum_{j=1}^nA_j(\\hat\\tau)u_j^{(0)},\\quad\\hat\\tau\\in[\\hat\\tau_1,\\hat\\tau_m],\n"
},
{
"math_id": 115,
"text": "A_j(\\hat\\tau)"
},
{
"math_id": 116,
"text": "j=1,2,\\cdots,n"
},
{
"math_id": 117,
"text": "\nA_j(\\hat\\tau) = \\left\\{\n\\begin{aligned}\n\\mathcal{L}R_j(\\hat\\tau), &\\quad \\hat{\\tau}\\ \\text{in}\\ \\Omega_p^{in},\\\\\n\\mathcal{G}R_j(\\hat\\tau), &\\quad \\hat{\\tau}\\ \\text{in}\\ \\Omega_p^{bd}, \\quad j=1,2,\\cdots,n,\n\\end{aligned}\n\\right.\n"
},
{
"math_id": 118,
"text": "\\Omega_p^{in}"
},
{
"math_id": 119,
"text": "\\Omega_p^{bd}"
},
{
"math_id": 120,
"text": "j"
},
{
"math_id": 121,
"text": "J_{in}"
},
{
"math_id": 122,
"text": "J_{bd}"
},
{
"math_id": 123,
"text": "\nu_{j}^{(k)}=u_{j}^{*},\\quad j\\in J_{bd},\\quad k=0,1,2,\\cdots.\n"
},
{
"math_id": 124,
"text": "\nU^{(k)}(\\hat\\tau) = \\sum_{j=1}^nA_j(\\hat\\tau)u_j^{(k)},\\quad\\hat\\tau\\in[\\hat\\tau_1,\\hat\\tau_m].\n"
},
{
"math_id": 125,
"text": "\n\\begin{align}\n\\delta_i^{(k)}\n&= b_i-\\sum_{j=1}^{n}A_j(\\hat\\tau_i)u_j^{(k)}\\\\\n&= b_i-\\sum_{j\\in J_{bd}}A_j(\\hat\\tau_i)u_j^{(k)}\n-\\sum_{j\\in J_{in}}A_j(\\hat\\tau_i)u_j^{(k)}\n,\\quad i=1,2,...,m.\n\\end{align}\n"
},
{
"math_id": 126,
"text": "A_j(\\hat\\tau_i)\\ne 0"
},
{
"math_id": 127,
"text": "I_j"
},
{
"math_id": 128,
"text": "\nd_j^{(k)}=\\mu\\sum_{h\\in I_j}A_j(\\hat\\tau_h)\\delta_h^{(k)},\\quad j=1,2,...,n,\n"
},
{
"math_id": 129,
"text": "\nu_j^{(k+1)}=u_j^{(k)}+d_j^{(k)},\\quad j=1,2,...,n,\n"
},
{
"math_id": 130,
"text": "\nU^{(k+1)}(\\hat\\tau) = \\sum_{j=1}^nA_j(\\hat\\tau)u_j^{(k+1)}.\n"
},
{
"math_id": 131,
"text": "\n\\left \\{ U^{(k)}(\\hat\\tau),k=0,1,\\dots \\right \\}.\n"
},
{
"math_id": 132,
"text": "m=n"
},
{
"math_id": 133,
"text": "\n\\begin{align}\n\\mathbf{P^{(\\alpha+1)}}&=\\mathbf{P^{(\\alpha)}}+\\mathbf{\\Delta}^{(\\alpha)},\\\\\n&=\\mathbf{P}^{(\\alpha)}+\\mathbf{Q}-\\mathbf{B}\\mathbf{P}^{(\\alpha)},\\\\\n&=\\left(\\mathbf{I}-\\mathbf{B}\\right)\\mathbf{P}^{(\\alpha)}+\\mathbf{Q},\n\\end{align}\n"
},
{
"math_id": 134,
"text": "\n\\begin{align}\n&\\mathbf{Q}= \\left[\\mathbf{Q}_1,\\mathbf{Q}_2,\\cdots,\\mathbf{Q}_m\\right]^T,\\\\\n&\\mathbf{P^{(\\alpha)}} = \\left[\\mathbf{P}_1^{(\\alpha)},\\mathbf{P}_2^{(\\alpha)},\\cdots,\\mathbf{P}_n^{(\\alpha)}\\right]^T,\\\\\n&\\mathbf{\\Delta}^{(\\alpha)}= \\left[\\mathbf{\\Delta}_1^{(\\alpha)},\\mathbf{\\Delta}^{(\\alpha)}_2,\\cdots,\\mathbf{\\Delta}^{(\\alpha)}_n\\right]^T,\\\\\n&\\mathbf{B}=\\begin{bmatrix}\nB_1(t_1) & B_2(t_1) &\\cdots &B_n(t_1)\\\\\nB_1(t_2) & B_2(t_2) &\\cdots &B_n(t_2)\\\\\n\\vdots & \\vdots &\\ddots & \\vdots \\\\\nB_1(t_m) & B_2(t_m) &\\cdots &B_n(t_m)\\\\\n\\end{bmatrix}.\n\\end{align}\n"
},
{
"math_id": 135,
"text": "\\mathbf{I}-\\mathbf{B}"
},
{
"math_id": 136,
"text": "1"
},
{
"math_id": 137,
"text": "\n\\begin{align}\n\\mathbf{P^{(\\alpha+1)}}&=\\mathbf{P^{(\\alpha)}}+\\mu\\mathbf{B}^T\\mathbf{\\Delta}^{(\\alpha)},\\\\\n&=\\mathbf{P}^{(\\alpha)}+\\mu\\mathbf{B}^T\\left(\\mathbf{Q}-\\mathbf{B}\\mathbf{P}^{(\\alpha)}\\right),\\\\\n&=\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)\\mathbf{P}^{(\\alpha)}+\\mu\\mathbf{B}^T\\mathbf{Q}.\n\\end{align}\n"
},
{
"math_id": 138,
"text": "\\mathbf{B}^T\\mathbf{B}"
},
{
"math_id": 139,
"text": "0<\\mu<\\frac{2}{\\lambda_0}"
},
{
"math_id": 140,
"text": "\\lambda_0"
},
{
"math_id": 141,
"text": "\\mu\\mathbf{B}^T\\mathbf{B}"
},
{
"math_id": 142,
"text": "0<\\lambda(\\mu\\mathbf{B}^T\\mathbf{B})<2"
},
{
"math_id": 143,
"text": "\\mu>0"
},
{
"math_id": 144,
"text": "\\lambda(\\mu\\mathbf{B}^T\\mathbf{B})>0"
},
{
"math_id": 145,
"text": "\n\\lambda(\\mu\\mathbf{B}^T\\mathbf{B}) =\\mu\\lambda(\\mathbf{B}^T\\mathbf{B})<2\\frac{\\lambda(\\mathbf{B}^T\\mathbf{B})}{\\lambda_0}<2.\n"
},
{
"math_id": 146,
"text": "\n\\begin{align}\n\\mathbf{P^{(\\alpha+1)}}&=\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)\\mathbf{P}^{(\\alpha)}+\\mu\\mathbf{B}^T\\mathbf{Q},\\\\\n&=\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)\\left[\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)\\mathbf{P}^{(\\alpha-1)}+\\mu\\mathbf{B}^T\\mathbf{Q}\\right]+\\mu\\mathbf{B}^T\\mathbf{Q},\\\\\n&=\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)^2\\mathbf{P}^{(\\alpha-1)}+\\sum_{i=0}^1\\left( \\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)\\mu\\mathbf{B}^T\\mathbf{Q},\\\\\n&=\\cdots\\\\\n&=\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)^{\\alpha+1}\\mathbf{P}^{(0)}+\\sum_{i=0}^{\\alpha}\\left( \\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)^{\\alpha}\\mu\\mathbf{B}^T\\mathbf{Q}.\\\\\n\\end{align}\n"
},
{
"math_id": 147,
"text": "\n0<\\rho\\left({\\mu\\mathbf{B}^T\\mathbf{B}}\\right)<2,\n"
},
{
"math_id": 148,
"text": "\n0<\\rho\\left({\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}}\\right)<1.\n"
},
{
"math_id": 149,
"text": "\\alpha\\rightarrow \\infty"
},
{
"math_id": 150,
"text": "\n\\left(\\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)^{\\infty}=0,\\ \\sum_{i=0}^{\\infty}\\left( \\mathbf{I}-\\mu\\mathbf{B}^T\\mathbf{B}\\right)^{\\alpha}=\\frac{1}{\\mu}\\left(\\mathbf{B}^T\\mathbf{B}\\right)^{-1}.\n"
},
{
"math_id": 151,
"text": " \\mathbf{P}^{(\\infty)}=\\left(\\mathbf{B}^T\\mathbf{B}\\right)^{-1}\\mathbf{B}^T\\mathbf{Q},\n"
},
{
"math_id": 152,
"text": "\\mathbf{B}^T\\mathbf{B}\\mathbf{P}^{(\\infty)}=\\mathbf{B}^T\\mathbf{Q}"
}
]
| https://en.wikipedia.org/wiki?curid=74443495 |
74444880 | Isbell's zigzag theorem | Theorem of dominion in abstract algebra
Isbell's zigzag theorem, a theorem of abstract algebra characterizing the notion of a dominion, was introduced by American mathematician John R. Isbell in 1966. Dominion is a concept in semigroup theory, within the study of the properties of epimorphisms. For example, if U is a subsemigroup of a semigroup S, the inclusion map formula_0 is an epimorphism if and only if formula_1; furthermore, a map formula_2 is an epimorphism if and only if formula_3. The categories of rings and semigroups are examples of categories with non-surjective epimorphisms, and the zigzag theorem gives necessary and sufficient conditions for determining whether or not a given morphism is epi. The first proofs of this theorem were topological in nature, beginning with Isbell's original argument for semigroups, which was completed in later work; purely algebraic proofs were given subsequently.
Statement.
Zig-zag.
Zig-zag: If U is a submonoid of a monoid (or a subsemigroup of a semigroup) S, then a system of equalities
formula_4
in which formula_5 and formula_6, is called a zig-zag of length m in S over U with value d. By the spine of the zig-zag we mean the ordered (2m + 1)-tuple formula_7.
Dominion.
Dominion: Let U be a submonoid of a monoid (or a subsemigroup of a semigroup) S. The dominion formula_8 is the set of all elements formula_9 such that, for all homomorphisms formula_10 coinciding on U, formula_11.
We call a subsemigroup U of a semigroup S closed if formula_12, and dense if formula_1.
Isbell's zigzag theorem.
Isbell's zigzag theorem:
If U is a submonoid of a monoid S, then formula_13 if and only if either formula_14 or there exists a zig-zag in S over U with value d; that is, there is a sequence of factorizations of d of the form
formula_15
This statement also holds for semigroups.
For monoids, this theorem can be written more concisely:
Let S be a monoid, let U be a submonoid of S, and let formula_16. Then formula_17 if and only if formula_18 in the tensor product formula_19.
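As an illustration (a standard example, not quoted from the sources above), consider the multiplicative monoids of the integers and of the rational numbers and the inclusion formula_20. Every rational d = p/q (with integers p and q, q nonzero) admits a zig-zag of length 1 over the integers: taking u1 = p, v1 = q, u2 = 1 in formula_22 and x1 = 1/q, y1 = p/q in the rationals gives d = x1·u1, u1 = v1·y1, x1·v1 = u2, and u2·y1 = d. Hence every rational lies in the dominion of formula_22, so by the theorem the inclusion is an epimorphism even though it is far from surjective; equivalently, any two homomorphisms formula_21 of multiplicative monoids that agree on formula_22 agree on all of the rationals.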
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
Footnote.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U \\hookrightarrow S"
},
{
"math_id": 1,
"text": "\\rm{Dom}_S (U) = S"
},
{
"math_id": 2,
"text": "\\alpha \\colon S \\to T"
},
{
"math_id": 3,
"text": "\\rm{Dom}_T (\\rm{im} \\; \\alpha) = T"
},
{
"math_id": 4,
"text": "\\begin{align}\n\nd &= x_1 u_1, &u_1 &= v_1 y_1 \\\\\n\nx_{i - 1} v_{i - 1} &= x_i u_i, &u_i y_{i - 1} &= v_i y_i \\; (i = 2, \\dots, m) \\\\\n\nx_{m} v_{m} &= u_{m+1}, &u_{m + 1} y_{m} &= d\n\n\\end{align}"
},
{
"math_id": 5,
"text": "u_1, \\dots , u_{m + 1}, v_1, \\dots , v_{m} \\in U"
},
{
"math_id": 6,
"text": "x_1, \\dots , x_{m}, y_1, \\dots , y_{m} \\in S"
},
{
"math_id": 7,
"text": "(u_1,v_1,u_2,v_2,\\dots,u_{m},v_{m}, u_{m+1})"
},
{
"math_id": 8,
"text": "\\rm{Dom}_S (U)"
},
{
"math_id": 9,
"text": "s \\in S"
},
{
"math_id": 10,
"text": "f, g : S \\to T"
},
{
"math_id": 11,
"text": "f(s) = g(s)"
},
{
"math_id": 12,
"text": "\\rm{Dom}_S (U) = U"
},
{
"math_id": 13,
"text": "d \\in \\rm{Dom}_S (U)"
},
{
"math_id": 14,
"text": "d \\in U"
},
{
"math_id": 15,
"text": "d = x_1 u_1=x_1 v_1 y_1 = x_2 u_2 y_1 = x_2 v_2 y_2 = \\cdots = x_{m} v_{m} y_{m} = u_{m+1} y_{m}"
},
{
"math_id": 16,
"text": "d \\in S"
},
{
"math_id": 17,
"text": "d \\in \\mathrm{Dom}_{S} (U)"
},
{
"math_id": 18,
"text": "d \\otimes 1 = 1 \\otimes d"
},
{
"math_id": 19,
"text": "S \\otimes_{U} S"
},
{
"math_id": 20,
"text": "i: (\\mathbb{Z},\\cdot)\\hookrightarrow (\\mathbb{Q},\\cdot)"
},
{
"math_id": 21,
"text": "\\beta, \\gamma: \\mathbb{Q} \\to \\mathbb{R}"
},
{
"math_id": 22,
"text": "\\mathbb{Z}"
}
]
| https://en.wikipedia.org/wiki?curid=74444880 |
74446691 | Knapsack auction | A knapsack auction is an auction in which several identical items are sold, and there are several bidders with different valuations interested in different amounts of items. The goal is to choose a subset of the bidders with a total demand, at most, the number of items and, subject to that, a maximum total value. Finding this set of bidders requires solving an instance of the knapsack problem, which explains the term "knapsack auction".
An example application of a knapsack auction is auctioning broadcast time among advertisers. Here, the items are the time units (e.g., seconds). Each advertiser has an advertisement of a different length (a different number of seconds) and a different value for the advertisement. The goal is to select a subset of advertisements to serve in a time slot of a specific length to maximize the total value.
Notation.
There are "m" identical items and "n" different bidders. The preferences of each bidder "i" are given by two numbers:
A "feasible outcome" of the auction is a subset "W" of "winning bidders", such that their total demand is at most "m": formula_0. The "value" of a set "W" of winners is the sum of values of the winners: formula_1. The goal is to find a feasible set of winners with a maximum total value.
In the broadcast time example, if there are 5 minutes allocated for advertisements, then "m"=300 (the number of seconds), "n"=the number of potential advertisers, "si"=the length of "i"'s advertisement in seconds, and "vi"=the money that "i" expects to gain if his advertisement is broadcast.
Baseline solutions.
If the demands and values of all bidders are publicly known, then the problem can be solved by any algorithm for the knapsack problem. The problem is NP-hard, but it has efficient constant-factor approximation algorithms as well as an FPTAS. In practice, usually the demands "si" are publicly known (e.g., the length of the advertisement of each advertiser must be known), but the valuations "vi" are the private information of the bidders. Therefore, the auction mechanism should incentivize the bidders to reveal their true valuations.
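For concreteness, the winner-determination step with publicly known values is an instance of 0-1 knapsack and can be solved exactly by the standard dynamic program over capacity. The following Python sketch is illustrative; the bidders' demands and values are hypothetical.
def choose_winners(m, demands, values):
    # best[c] = (maximum total value, winner set) achievable with total demand at most c
    best = [(0, frozenset())] * (m + 1)
    for i in range(len(demands)):
        new = list(best)
        for c in range(demands[i], m + 1):
            value = best[c - demands[i]][0] + values[i]
            if value > new[c][0]:
                new[c] = (value, best[c - demands[i]][1] | {i})
        best = new
    return best[m]

# 300 seconds of broadcast time; hypothetical advertisers (demand in seconds, value)
demands = [120, 90, 150, 60, 30]
values = [700, 450, 800, 300, 120]
print(choose_winners(300, demands, values))   # -> (1620, frozenset({0, 2, 4}))
The dynamic program runs in time proportional to n·m, which is pseudo-polynomial in the input size; the FPTAS and greedy approximations mentioned above trade exactness for truly polynomial running time, and the cited mechanisms additionally have to preserve truthfulness.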
The VCG auction is a truthful mechanism that can be used to maximize the sum of values while incentivizing agents to reveal their true values. However, it only works if the outcome maximizes the values; it does "not" work with approximations (if the outcome is only approximately optimal, then VCG is no longer truthful). Finding the optimal outcome cannot be done in polynomial time unless P=NP. This raises the question: are there truthful mechanisms that work in polynomial time and attain an approximately-optimal outcome?
Truthful approximation mechanisms.
Mu'alem and Nisan gave the first affirmative answer to this question: they showed that combining two greedy algorithms yields a truthful 2-factor approximation mechanism.
Briest, Krysta and Vocking improved this result by showing a truthful FPTAS.
Dutting, Gkatzelis and Roughgarden presented a truthful deferred-acceptance auction that attains an O(log "m") approximation, and proved that no deferred-acceptance auction can achieve a better approximation. This shows a separation between the general class of truthful auctions and the sub-class of deferred-acceptance auctions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i\\in W} s_i \\leq m"
},
{
"math_id": 1,
"text": "\\sum_{i\\in W} v_i "
}
]
| https://en.wikipedia.org/wiki?curid=74446691 |
74448567 | LK-99 | Proposed superconducting material
<templatestyles src="Chembox/styles.css"/>
Chemical compound
LK-99 (from the Lee-Kim 1999 research), also called PCPOSOS, is a gray–black, polycrystalline compound, identified as a copper-doped lead‒oxyapatite. A team from Korea University led by Lee Sukbae and Kim Ji-Hoon began studying this material as a potential superconductor starting in 1999. In July 2023, they published preprints claiming that it acts as a room-temperature superconductor at temperatures of up to 400 K (127 °C) at ambient pressure.
Many different researchers have attempted to replicate the work, and were able to reach initial results within weeks, as the process of producing the material is relatively straightforward. By mid-August 2023, the consensus was that LK-99 is not a superconductor at room temperature, and is an insulator in pure form.
As of 12 February 2024, no replications had gone through the peer review process of a journal, but some had been reviewed by a materials science lab. A number of replication attempts identified non-superconducting ferromagnetic and diamagnetic causes for observations that suggested superconductivity. A prominent cause was a copper sulfide impurity occurring during the proposed synthesis, which can produce resistance drops, lambda transition in heat capacity, and magnetic response in small samples.
After the initial preprints were published, Lee claimed they were incomplete, and coauthor Kim Hyun-Tak said one of the papers contained flaws.
Chemical properties and structure.
The chemical composition of LK-99 is approximately Pb9Cu(PO4)6O, in which, compared to pure lead-apatite (Pb10(PO4)6O), approximately one quarter of Pb(II) ions in position 2 of the apatite structure are replaced by Cu(II) ions.
The structure is similar to that of apatite, space group "P"63/"m" (No. 176).
Synthesis.
Lee "et al." provide a method for chemical synthesis of LK-99 in three steps. First, they produce lanarkite from a 1:1 molar mixture of lead(II) oxide (PbO) and lead(II) sulfate (Pb(SO4)) powders, heated at 725 °C for 24 hours:
PbO + Pb(SO4) → Pb2(SO4)O.
Then, copper(I) phosphide (Cu3P) is produced by mixing copper (Cu) and phosphorus (P) powders in a 3:1 molar ratio in a sealed tube under a vacuum and heating the mixture to 550 °C for 48 hours:
3 Cu + P → Cu3P.
Then, lanarkite and copper phosphide crystals are ground into a powder, placed in a sealed tube under a vacuum, and heated to 925 °C for between 5 and 20 hours:
Pb2(SO4)O + Cu3P → Pb10-"x"Cu"x"(PO4)6O + S (g), where 0.9 < x < 1.1.
There were a number of problems with the above synthesis from the initial paper. The reaction as written is not balanced, and others reported the presence of copper(I) sulfide (Cu2S) as well. For formula_0 a balanced reaction might be:
5 Pb2(SO4)O + 6 Cu3P → Pb9Cu(PO4)6O + 4 Cu2S + PbS + 9 Cu.
Many syntheses produced fragmentary results in different phases, where some of the resulting fragments were responsive to magnetic fields, other fragments were not. The first synthesis to produce pure crystals found them to be diamagnetic insulators.
Physical properties.
Some small LK-99 samples were reported to show strong diamagnetic properties, including a response confusingly referred to as "partial levitation" over a magnet. This was misinterpreted by some as a sign of superconductivity, although it is a sign of regular diamagnetism or ferromagnetism.
While initial preprints claimed the material was a room-temperature superconductor, they did not report observing any definitive features of superconductivity, such as zero resistance, the Meissner effect, flux pinning, AC magnetic susceptibility, the Josephson effect, a temperature-dependent critical field and current, or a sudden jump in specific heat around the critical temperature.
As it is common for a new material to spuriously seem like a potential candidate for high-temperature superconductivity, thorough experimental reports normally demonstrate a number of these expected properties. As of 15 August 2023, none of these properties had been observed by the original experiment or any replications.
Proposed mechanism for superconductivity.
Partial replacement of Pb2+ ions with smaller Cu2+ ions is said to cause a 0.48% reduction in volume, creating internal stress in the material, causing a heterojunction quantum well between the Pb(1) and oxygen within the phosphate ([PO4]3−). This quantum well was proposed to be superconducting, based on a 2021 paper by Kim Hyun-Tak describing a novel and complicated theory combining ideas from a classical theory of metal-insulator transitions, the standard Bardeen–Cooper–Schrieffer theory, and the theory of hole superconductivity by J. E. Hirsch.
Response.
On 31 July 2023, Sinéad Griffin of Lawrence Berkeley National Laboratory analyzed LK-99 with density functional theory (DFT), showing that its structure would have correlated isolated flat bands, and suggesting this might contribute to superconductivity. However, while other researchers agreed with the DFT analysis, a number suggested that this was not compatible with superconductivity, and that a structure different from what was described in Lee, "et al." would be necessary.
Analyses by industrial and experimental physicists noted experimental and theoretical shortcomings of the published works. Shortcomings included the lack of phase diagrams spanning temperature, stoichiometry, and stress; the lack of pathways for the very high "T"c of LK-99 compared to prior heavy fermion superconductors; the absence of flux pinning in any observations; the possibility of stochastic conductive artifacts in conductivity measurements; the high resistance and low current capacity of the alleged superconducting state; and the lack of direct transmission electron microscopy (TEM) of the materials.
Compound name.
The name LK-99 comes from the initials of discoverers Lee and Kim, and the year of discovery (1999). The pair had worked with Tong-Seek Chair () at Korea University in the 1990s.
In 2008, they founded the Quantum Energy Research Centre (퀀텀 에너지연구소; also known as Q-Centre) with other researchers from Korea University . Lee would later become CEO of Q-Centre, and Kim would become director of research and development.
Publication history.
Lee has stated that in 2020, an initial paper was submitted to "Nature", but was rejected. Similarly presented research on room-temperature superconductors (but a completely different chemical system) by Ranga P. Dias had been published in "Nature" earlier that year, and received with skepticism—Dias's paper would subsequently be retracted in 2022 after its data was questioned as having been falsified.
In 2020, Lee and Kim Ji-Hoon filed a patent application. A second patent application (additionally listing Young-Wan Kwon), was filed in 2021, which was published on 3 March 2023. A World Intellectual Property Organization (WIPO) patent was also published on 2 March 2023. On 4 April 2023, a Korean trademark application for "LK-99" was filed by the Q-Centre.
Scholarly articles and preprints.
A series of academic publications summarizing initial findings came out in 2023, with a total of seven authors across four publications.
On 31 March 2023, a Korean-language paper, "Consideration for the development of room-temperature ambient-pressure superconductor (LK-99)", was submitted to the "Korean Journal of Crystal Growth and Crystal Technology". It was accepted on 18 April, but was not widely read until three months later.
On 22 July 2023, two preprints appeared on arXiv. The first was submitted by Young-Wan Kwon, and listed Kwon, former Q-Centre CTO, as third author. The second preprint was submitted only 2 hours later by Kim Hyun-Tak, former principal researcher at the Electronics & Telecommunications Research Institute and professor at the College of William & Mary, listing himself as third author, as well as three new authors.
On 23 July, the findings were also submitted by Lee to "APL Materials" for peer review. On 3 August 2023, a newly-formed Korean LK-99 Verification Committee requested a high-quality sample from the original research team. The team responded that they would only provide the sample once the review process of their APL paper was completed, expected to take several weeks or months.
On 31 July 2023, a group led by Kapil Kumar published a preprint on arXiv documenting their replication attempts, which confirmed the structure using X-ray crystallography (XRD) but failed to find strong diamagnetism.
On 11 August 2023, P. Puphal et al. released a preprint on the synthesis of the first single crystals of Pb9Cu(PO4)6O, disproving superconductivity in this chemical stoichiometry; the work was later published in APL Materials.
On 16 August 2023, "Nature" published an article declaring that LK-99 had been demonstrated to not be a superconductor, but rather an insulator. It cited statements by a condensed matter experimentalist at the University of California, Davis, and several studies previewed in August 2023.
Other discussion by authors.
On 26 July 2023, Kim Hyun-Tak stated in an interview with the "New Scientist" that the first paper submitted by Kwon contained "many defects" and was submitted without his permission.
On 28 July 2023, Kwon presented the findings at a symposium held at Korea University. That same day, Yonhap News Agency published an article quoting an official from Korea University as saying that Kwon was no longer in contact with the university. The article also quoted Lee saying that Kwon had left the Q-Centre Research Institute four months previously.
On the same day, Kim Hyun-Tak provided "The New York Times" with a new video presumably showing a sample displaying strong signs of diamagnetism. The video appears to show a sample different to the one in the original preprint. On 4 August 2023, he informed SBS News that high-quality LK-99 samples may exhibit diamagnetism over 5,000 times greater than graphite, which he claimed would be inexplicable unless the substance is a superconductor.
Response.
Materials scientists and superconductor researchers responded with skepticism. The highest-temperature superconductors known at the time of publication had a critical temperature of about 250 K (−23 °C), reached only at pressures of over 150 gigapascals. The highest-temperature superconductors at atmospheric pressure (1 atm) had a critical temperature of at most about 138 K (−135 °C).
On 2 August 2023, "The Korean Society of Superconductivity and Cryogenics" established a verification committee as a response to the controversy and unverified claims of LK-99, in order to arrive at conclusions over these claims. The verification committee is headed by Kim Chang-Young of Seoul National University and consists of members of the university, Sungkyunkwan University and Pohang University of Science and Technology. Upon formation, the verification committee did not agree that the two 22 July arXiv papers by Lee "et al." or the publicly available videos at the time supported the claim of LK-99 being a superconductor.
As of 15 August 2023, the measured properties do not prove that LK-99 is a superconductor. The published material does not explain how LK-99's magnetisation can change, demonstrate its specific heat capacity, or demonstrate it crossing its transition temperature. A more likely explanation for LK-99's magnetic response is a mix of ferromagnetism and non-superconductive diamagnetism. A number of studies found that copper(I) sulfide contamination common to the synthesis process could closely replicate the observations that inspired the initial preprints.
Public response.
The claims in the 22 July papers by Lee "et al." went viral on social media platforms the following week. The viral nature of the claim resulted in posts from users using pseudonyms from Russia and China claiming to have replicated LK-99 on both Twitter and Zhihu. Other viral videos described themselves as having replicated samples of LK-99 "partially levitating", most of which were found to be fake.
Scientists interviewed by the press remained skeptical because of the quality of the original preprints, the lack of purity in the reported samples, and doubts about the claim's legitimacy after previous claims of room-temperature superconductivity had failed (such as the Ranga Dias affair). The "Korean Society of Superconductivity and Cryogenics" expressed concern about the social and economic impacts of the preliminary and unverified LK-99 research.
A video from Huazhong University of Science and Technology uploaded on 1 August 2023 by a postdoctoral researcher on the team of Chang Haixin, apparently showed a micrometre-sized sample of LK-99 partially levitating. This went viral on Chinese social media, becoming the most viewed video on Bilibili by the next day, and a prediction market briefly put the chance of successful replication at 60%. A researcher from the Chinese Academy of Sciences refused to comment on the video for the press, dismissing the claim as "ridiculous".
In early August, people began to create memes about "floating rocks", and there was a brief surge in Korean and Chinese technology stocks, despite warnings from the Korean stock exchange against speculative bets in light of the excitement around LK-99, which eventually fell on August 8. Following the publication of the "Nature" article on August 16 that proclaimed LK-99 is not a superconductor, South Korean superconductor stocks fell further, as the interest about LK-99 from investors in previous weeks disappeared.
Replication attempts.
After the July 2023 publication's release, independent groups reported that they had begun attempting to reproduce the synthesis, with initial results expected within weeks.
As of 15 August 2023, no replication attempts had yet been peer-reviewed by a journal. Of the non-peer-reviewed attempts, over 15 notable labs have published results that failed to observe any superconductivity, and a few have observed magnetic response in small fragments that could be explained by normal diamagnetism or ferromagnetism. Some demonstrated and replicated alternate causes of the observations in the original papers: copper-deficient copper(I) sulfide has a known phase transition at about 104 °C from a low-temperature phase to a high-temperature superionic phase, with a sharp rise in resistivity and a λ-like feature in the heat capacity. Furthermore, Cu2S is diamagnetic.
Only one attempt observed any sign of superconductivity: Southeast University claimed to measure very low resistance in a flake of LK-99, in one of four synthesis attempts, below a temperature of 110 K. Doubts were expressed by experts in the field, as they saw no dropoff to zero resistance, used crude instruments that could not measure resistance below 10 μΩ (too high to distinguish superconductivity from less exotic low-temperature conductivity), and had large measurement artifacts.
Some replication efforts gained global visibility, with the aid of online replication trackers that catalogued new announcements and status updates.
Experimental studies.
Selected experimental studies.
Results Key:
<templatestyles src="Legend/styles.css" /> Success
<templatestyles src="Legend/styles.css" /> Partial success
<templatestyles src="Legend/styles.css" /> Partial failure
<templatestyles src="Legend/styles.css" /> Failure
Theoretical studies.
In the initial papers, the theoretical explanations for potential mechanisms of superconductivity in LK-99 were incomplete. Later analyses by other labs added simulations and theoretical evaluations of the material's electronic properties from first principles.
Selected theoretical studies:
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "x=1"
}
]
| https://en.wikipedia.org/wiki?curid=74448567 |
74449522 | Jennifer Taback | American mathematician
Jennifer Taback is an American mathematician whose research focuses on geometric group theory and combinatorial group theory. She is the Isaac Henry Wing Professor of Mathematics and Chair of the Mathematics Department at Bowdoin College in Maine.
Education and career.
After earning a bachelor's degree in mathematics at Yale University in 1993, Taback went to the University of Chicago for graduate study in mathematics, earning a master's degree in 1994 and completing her Ph.D. in 1998. Her 1998 doctoral dissertation, "Quasi-Isometric Rigidity for formula_0", was supervised by Benson Farb.
After a postdoctoral stay at the University of California, Berkeley as Charles B. Morrey assistant professor, she became an assistant professor of mathematics at the University at Albany in 1999, moving to her present position at Bowdoin in 2004. She was tenured as an associate professor in 2007, and promoted to full professor in 2012.
She was given the Isaac Henry Wing Professorship in 2021; the professorship was endowed in 1906 by a former Bowdoin student.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PSL_2\\bigl(\\mathbb{Z}[\\tfrac1p]\\bigr)"
}
]
| https://en.wikipedia.org/wiki?curid=74449522 |
74451504 | Conformal linear transformation | A conformal linear transformation, also called a homogeneous similarity transformation or homogeneous similitude, is a similarity transformation of a Euclidean or pseudo-Euclidean vector space which fixes the origin. It can be written as the composition of an orthogonal transformation (an origin-preserving rigid transformation) with a uniform scaling (dilation). All similarity transformations (which globally preserve the shape but not necessarily the size of geometric figures) are also conformal (locally preserve shape). Similarity transformations which fix the origin also preserve scalar–vector multiplication and vector addition, making them linear transformations.
Every origin-fixing reflection or dilation is a conformal linear transformation, as is any composition of these basic transformations, including rotations and improper rotations and most generally similarity transformations. However, shear transformations and non-uniform scaling are not. Conformal linear transformations come in two types, "proper" transformations preserve the orientation of the space whereas "improper" transformations reverse it.
As linear transformations, conformal linear transformations are representable by matrices once the vector space has been given a basis, composing with each other and transforming vectors by matrix multiplication. The Lie group of these transformations has been called the conformal orthogonal group, the conformal linear transformation group or the homogeneous similitude group.
Alternatively any conformal linear transformation can be represented as a versor (geometric product of vectors); every versor and its negative represent the same transformation, so the versor group (also called the Lipschitz group) is a double cover of the conformal orthogonal group.
Conformal linear transformations are a special type of Möbius transformations (conformal transformations mapping circles to circles); the conformal orthogonal group is a subgroup of the conformal group.
General properties.
Across all dimensions, a conformal linear transformation has the following properties:
Two dimensions.
In the Euclidean vector plane, an improper conformal linear transformation is a reflection across a line through the origin composed with a positive dilation. Given an orthonormal basis, it can be represented by a matrix of the form
formula_0
A proper conformal linear transformation is a rotation about the origin composed with a positive dilation. It can be represented by a matrix of the form
formula_1
Alternately a proper conformal linear transformation can be represented by a complex number of the form formula_2
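A quick numerical check of this correspondence (a sketch assuming NumPy, with arbitrary values of "a" and "b"): the matrix above sends the vector ("x", "y") to the same place that multiplication by "a" + "bi" sends "x" + "iy".
```python
import numpy as np

a, b = 2.0, 0.5
M = np.array([[a, -b], [b, a]])          # proper conformal linear transformation
x, y = 3.0, 4.0

w_matrix = M @ np.array([x, y])
w_complex = (a + 1j * b) * (x + 1j * y)  # multiplication by the complex number a + bi
print(np.allclose(w_matrix, [w_complex.real, w_complex.imag]))   # True
```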
Practical applications.
When composing multiple linear transformations, it is possible to create a shear/skew by composing a parent transform that has a non-uniform scale with a child transform that has a rotation. Therefore, in situations where shear/skew is not allowed, transformation matrices must also have uniform scale in order to prevent a shear/skew from appearing as the result of composition. This implies that conformal linear transformations are required to prevent shear/skew when composing multiple transformations.
In physics simulations, a sphere (or circle, hypersphere, etc.) is often defined by a point and a radius. Checking if a point overlaps the sphere can therefore be performed by using a distance check to the center. With a rotation or flip/reflection, the sphere is symmetric and invariant, therefore the same check works. With a uniform scale, only the radius needs to be changed. However, with a non-uniform scale or shear/skew, the sphere becomes "distorted" into an ellipsoid, therefore the distance check algorithm does not work correctly anymore.
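One practical test for these properties is that a matrix "M" represents a conformal linear transformation exactly when "M"T"M" is a positive multiple of the identity, so that all lengths are scaled by the same factor. The sketch below (assuming NumPy; names are illustrative) applies this test to a uniform rotation-scale and to a shear, the latter being the case where the simple sphere distance check breaks down.
```python
import numpy as np

def is_conformal_linear(M, tol=1e-9):
    """True when M^T M is a positive multiple of the identity, i.e. M preserves angles
    and scales every direction by the same factor."""
    G = M.T @ M
    scale_sq = np.trace(G) / G.shape[0]
    return scale_sq > tol and np.allclose(G, scale_sq * np.eye(G.shape[0]), atol=tol)

rotation_scale = 2.0 * np.array([[0.0, -1.0],    # uniform scale composed with a rotation
                                 [1.0,  0.0]])
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                   # shear/skew: not conformal

print(is_conformal_linear(rotation_scale))  # True  -> a sphere stays a sphere
print(is_conformal_linear(shear))           # False -> a sphere becomes an ellipsoid
```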
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix}a&b\\\\b&-a\\end{bmatrix}."
},
{
"math_id": 1,
"text": "\\begin{bmatrix}a&-b\\\\b&a\\end{bmatrix}."
},
{
"math_id": 2,
"text": "a + bi."
}
]
| https://en.wikipedia.org/wiki?curid=74451504 |
744569 | Stone–von Neumann theorem | In mathematics and in theoretical physics, the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.
Representation issues of the commutation relations.
In quantum mechanics, physical observables are represented mathematically by linear operators on Hilbert spaces.
For a single particle moving on the real line formula_0, there are two important observables: position and momentum. In the Schrödinger representation quantum description of such a particle, the position operator x and momentum operator formula_1 are respectively given by
formula_2
on the domain formula_3 of infinitely differentiable functions of compact support on formula_0. Assume formula_4 to be a fixed "non-zero" real number—in quantum theory formula_4 is the reduced Planck constant, which carries units of action (energy "times" time).
The operators formula_5, formula_1 satisfy the canonical commutation relation Lie algebra,
formula_6
Already in his classic book, Hermann Weyl observed that this commutation law was "impossible to satisfy" for linear operators p, x acting on finite-dimensional spaces unless "ħ" vanishes. This is apparent from taking the trace over both sides of the latter equation and using the relation Trace("AB") = Trace("BA"); the left-hand side is zero, the right-hand side is non-zero. Further analysis shows that any two self-adjoint operators satisfying the above commutation relation cannot be both bounded (in fact, a theorem of Wielandt shows the relation cannot be satisfied by elements of "any" normed algebra). For notational convenience, the nonvanishing square root of ℏ may be absorbed into the normalization of p and x, so that, effectively, it is replaced by 1. We assume this normalization in what follows.
The idea of the Stone–von Neumann theorem is that any two irreducible representations of the canonical commutation relations are unitarily equivalent. Since, however, the operators involved are necessarily unbounded (as noted above), there are tricky domain issues that allow for counter-examples. To obtain a rigorous result, one must require that the operators satisfy the exponentiated form of the canonical commutation relations, known as the Weyl relations. The exponentiated operators are bounded and unitary. Although, as noted below, these relations are formally equivalent to the standard canonical commutation relations, this equivalence is not rigorous, because (again) of the unbounded nature of the operators. (There is also a discrete analog of the Weyl relations, which can hold in a finite-dimensional space,Chapter 14, Exercise 5 namely Sylvester's clock and shift matrices in the finite Heisenberg group, discussed below.)
Uniqueness of representation.
One would like to classify representations of the canonical commutation relation by two self-adjoint operators acting on separable Hilbert spaces, "up to unitary equivalence". By Stone's theorem, there is a one-to-one correspondence between self-adjoint operators and (strongly continuous) one-parameter unitary groups.
Let Q and P be two self-adjoint operators satisfying the canonical commutation relation, ["Q", "P"] = "i", and s and t two real parameters. Introduce "eitQ" and "eisP", the corresponding unitary groups given by functional calculus. (For the explicit operators "x" and "p" defined above, these are multiplication by "eitx" and pullback by translation "x" → "x" + "s".) A formal computationSection 14.2 (using a special case of the Baker–Campbell–Hausdorff formula) readily yields
formula_7
Conversely, given two one-parameter unitary groups "U"("t") and "V"("s") satisfying the braiding relation
formula_8 (E1)
formally differentiating at 0 shows that the two infinitesimal generators satisfy the above canonical commutation relation. This braiding formulation of the canonical commutation relations (CCR) for one-parameter unitary groups is called the Weyl form of the CCR.
It is important to note that the preceding derivation is purely formal. Since the operators involved are unbounded, technical issues prevent application of the Baker–Campbell–Hausdorff formula without additional domain assumptions. Indeed, there exist operators satisfying the canonical commutation relation but not the Weyl relations (E1).Example 14.5 Nevertheless, in "good" cases, we expect that operators satisfying the canonical commutation relation will also satisfy the Weyl relations.
The problem thus becomes classifying two jointly irreducible one-parameter unitary groups "U"("t") and "V"("s") which satisfy the Weyl relation on separable Hilbert spaces. The answer is the content of the Stone–von Neumann theorem: "all such pairs of one-parameter unitary groups are unitarily equivalent".Theorem 14.8 In other words, for any two such "U"("t") and "V"("s") acting jointly irreducibly on a Hilbert space H, there is a unitary operator "W" : "L"2(R) → "H" so that
formula_9
where p and x are the explicit position and momentum operators from earlier. When W is U in this equation, then, in the x-representation, it is evident that P is unitarily equivalent to "e"−"itQ" "P" "eitQ" = "P" + "t", and the spectrum of P must range along the entire real line. The analogous argument holds for Q.
There is also a straightforward extension of the Stone–von Neumann theorem to n degrees of freedom.Theorem 14.8
Historically, this result was significant, because it was a key step in proving that Heisenberg's matrix mechanics, which presents quantum mechanical observables and dynamics in terms of infinite matrices, is unitarily equivalent to Schrödinger's wave mechanical formulation (see Schrödinger picture),
formula_10
Representation theory formulation.
In terms of representation theory, the Stone–von Neumann theorem classifies certain unitary representations of the Heisenberg group. This is discussed in more detail in the Heisenberg group section, below.
Informally stated, with certain technical assumptions, every representation of the Heisenberg group "H"2"n" + 1 is equivalent to the position operators and momentum operators on R"n". Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension 2"n".
More formally, there is a unique (up to scale) non-trivial central strongly continuous unitary representation.
This was later generalized by Mackey theory – and was the motivation for the introduction of the Heisenberg group in quantum physics.
In detail:
In all cases, if one has a representation "H"2"n" + 1 → "A", where "A" is an algebra and the center maps to zero, then one simply has a representation of the corresponding abelian group or algebra, which is Fourier theory.
If the center does not map to zero, one has a more interesting theory, particularly if one restricts oneself to "central" representations.
Concretely, by a central representation one means a representation such that the center of the Heisenberg group maps into the center of the algebra: for example, if one is studying matrix representations or representations by operators on a Hilbert space, then the center of the matrix algebra or the operator algebra is the scalar matrices. Thus the representation of the center of the Heisenberg group is determined by a scale value, called the quantization value (in physics terms, the Planck constant), and if this goes to zero, one gets a representation of the abelian group (in physics terms, this is the classical limit).
More formally, the group algebra of the Heisenberg group over its field of scalars "K", written "K"["H"], has center "K"[R], so rather than simply thinking of the group algebra as an algebra over the field K, one may think of it as an algebra over the commutative algebra "K"[R]. As the center of a matrix algebra or operator algebra is the scalar matrices, a "K"[R]-structure on the matrix algebra is a choice of scalar matrix – a choice of scale. Given such a choice of scale, a central representation of the Heisenberg group is a map of "K"[R]-algebras "K"["H"] → "A", which is the formal way of saying that it sends the center to a chosen scale.
Then the Stone–von Neumann theorem is that, given the standard quantum mechanical scale (effectively, the value of ħ), every strongly continuous unitary representation is unitarily equivalent to the standard representation with position and momentum.
Reformulation via Fourier transform.
Let G be a locally compact abelian group and "G"^ be the Pontryagin dual of G. The Fourier–Plancherel transform defined by
formula_11
extends to a C*-isomorphism between the group C*-algebra C*("G") of G and C0("G"^); i.e., the spectrum of C*("G") is precisely "G"^. When G is the real line R, this is Stone's theorem characterizing one-parameter unitary groups. The theorem of Stone–von Neumann can also be restated using similar language.
The group G acts on the C*-algebra C0("G") by right translation ρ: for s in G and f in C0("G"),
formula_12
Under the isomorphism given above, this action becomes the natural action of G on C*("G"^):
formula_13
So a covariant representation corresponding to the C*-crossed product
formula_14
is a unitary representation "U"("s") of G and "V"("γ") of "G"^ such that
formula_15
It is a general fact that covariant representations are in one-to-one correspondence with *-representation of the corresponding crossed product. On the other hand, all irreducible representations of
formula_16
are unitarily equivalent to formula_17, the compact operators on "L"2("G"). Therefore, all pairs {"U"("s"), "V"("γ")} are unitarily equivalent. Specializing to the case where "G" = R yields the Stone–von Neumann theorem.
Heisenberg group.
The above canonical commutation relations for P, Q are identical to the commutation relations that specify the Lie algebra of the general Heisenberg group "H"2"n"+1 for n a positive integer. This is the Lie group of ("n" + 2) × ("n" + 2) square matrices of the form
formula_18
In fact, using the Heisenberg group, one can reformulate the Stone von Neumann theorem in the language of representation theory.
Note that the center of "H2n+1" consists of matrices M(0, 0, "c"). However, this center is "not" the identity operator in Heisenberg's original CCRs. The Heisenberg group Lie algebra generators, e.g. for "n" = 1, are
formula_19
and the central generator "z" = log "M"(0, 0, 1) = exp("z") − 1 is not the identity.
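These relations can be verified directly with a small numerical check (a sketch assuming NumPy):
```python
import numpy as np

P = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
Q = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])

print(np.array_equal(P @ Q - Q @ P, z))    # True: [P, Q] = z
print(np.array_equal(P @ z, z @ P))        # True: z commutes with P
print(np.array_equal(Q @ z, z @ Q))        # True: z commutes with Q (z is central)
print(np.array_equal(z @ z, 0 * z))        # True: z^2 = 0, so exp(z) = I + z = M(0, 0, 1)
```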
<templatestyles src="Math_theorem/styles.css" />
All these representations are unitarily inequivalent; and any irreducible representation which is not trivial on the center of "Hn" is unitarily equivalent to exactly one of these.
Note that "Uh" is a unitary operator because it is the composition of two operators which are easily seen to be unitary: the translation to the "left" by "ha" and multiplication by a function of absolute value 1. To show "Uh" is multiplicative is a straightforward calculation. The hard part of the theorem is showing the uniqueness; this claim, nevertheless, follows easily from the Stone–von Neumann theorem as stated above. We will sketch below a proof of the corresponding Stone–von Neumann theorem for certain finite Heisenberg groups.
In particular, irreducible representations π, π′ of the Heisenberg group "Hn" which are non-trivial on the center of "Hn" are unitarily equivalent if and only if "π"("z") = "π′"("z") for any z in the center of "Hn".
One representation of the Heisenberg group which is important in number theory and the theory of modular forms is the theta representation, so named because the Jacobi theta function is invariant under the action of the discrete subgroup of the Heisenberg group.
Relation to the Fourier transform.
For any non-zero h, the mapping
formula_20
is an automorphism of "Hn" which is the identity on the center of "Hn". In particular, the representations "Uh" and "Uhα" are unitarily equivalent. This means that there is a unitary operator W on "L"2(R"n") such that, for any g in "Hn",
formula_21
Moreover, by irreducibility of the representations "Uh", it follows that up to a scalar, such an operator W is unique (cf. Schur's lemma). Since W is unitary, this scalar multiple is uniquely determined and hence such an operator W is unique.
<templatestyles src="Math_theorem/styles.css" />
This means that, ignoring the factor of (2"π")"n"/2 in the definition of the Fourier transform,
formula_22
This theorem has the immediate implication that the Fourier transform is unitary, also known as the Plancherel theorem. Moreover,
formula_23
<templatestyles src="Math_theorem/styles.css" />
From this fact the Fourier inversion formula easily follows.
Example: Segal–Bargmann space.
The Segal–Bargmann space is the space of holomorphic functions on C"n" that are square-integrable with respect to a Gaussian measure. Fock observed in the 1920s that the operators
formula_24
acting on holomorphic functions, satisfy the same commutation relations as the usual annihilation and creation operators, namely,
formula_25
In 1961, Bargmann showed that "aj"* is actually the adjoint of "aj" with respect to the inner product coming from the Gaussian measure. By taking appropriate linear combinations of "aj" and "aj"*, one can then obtain "position" and "momentum" operators satisfying the canonical commutation relations. It is not hard to show that the exponentials of these operators satisfy the Weyl relations and that the exponentiated operators act irreducibly.Section 14.4 The Stone–von Neumann theorem therefore applies and implies the existence of a unitary map from "L"2(R"n") to the Segal–Bargmann space that intertwines the usual annihilation and creation operators with the operators "aj" and "aj"*. This unitary map is the Segal–Bargmann transform.
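The stated commutation relations can be checked symbolically; the following sketch (assuming SymPy, with illustrative names) verifies them on an arbitrary holomorphic function of two variables:
```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = sp.Function('f')(z1, z2)

annihilate = lambda var, g: sp.diff(g, var)   # a_j = d/dz_j
create = lambda var, g: var * g               # a_j* = multiplication by z_j

# [a_1, a_1*] f = f, while [a_1, a_2*] f = 0
print(sp.simplify(annihilate(z1, create(z1, f)) - create(z1, annihilate(z1, f))))  # f(z1, z2)
print(sp.simplify(annihilate(z1, create(z2, f)) - create(z2, annihilate(z1, f))))  # 0
```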
Representations of finite Heisenberg groups.
The Heisenberg group "Hn"("K") is defined for any commutative ring K. In this section let us specialize to the field "K" = Z/"p"Z for p a prime. This field has the property that there is an embedding ω of K as an additive group into the circle group T. Note that "Hn"("K") is finite with cardinality |"K"|2"n" + 1. For the finite Heisenberg group "Hn"("K") one can give a simple proof of the Stone–von Neumann theorem using simple properties of character functions of representations. These properties follow from the orthogonality relations for characters of representations of finite groups.
For any non-zero h in K define the representation "Uh" on the finite-dimensional inner product space ℓ2("K""n") by
formula_26
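As a small numerical sketch (assuming NumPy; the choices "n" = 1, "p" = 7 and "h" = 1 are only illustrative), one can build these matrices explicitly and check that they respect the Heisenberg group law and are unitary:
```python
import numpy as np

p, h = 7, 1                                          # K = Z/7Z, non-zero parameter h
omega = lambda k: np.exp(2j * np.pi * (k % p) / p)   # embedding of K into the circle group

def U(a, b, c):
    """Matrix of U_h(M(a, b, c)) on l^2(Z/pZ): (U psi)(x) = omega(b*x + h*c) psi(x + h*a)."""
    mat = np.zeros((p, p), dtype=complex)
    for x in range(p):
        mat[x, (x + h * a) % p] = omega(b * x + h * c)
    return mat

# Heisenberg group law: M(a,b,c) M(a',b',c') = M(a+a', b+b', c+c'+a*b')
a1, b1, c1, a2, b2, c2 = 1, 2, 3, 4, 5, 6
lhs = U(a1, b1, c1) @ U(a2, b2, c2)
rhs = U(a1 + a2, b1 + b2, c1 + c2 + a1 * b2)
print(np.allclose(lhs, rhs))                                       # True: U_h is a representation
print(np.allclose(U(1, 2, 3).conj().T @ U(1, 2, 3), np.eye(p)))    # True: U_h(g) is unitary
```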
<templatestyles src="Math_theorem/styles.css" />
It follows that
formula_27
By the orthogonality relations for characters of representations of finite groups this fact implies the corresponding Stone–von Neumann theorem for Heisenberg groups "Hn"(Z/"p"Z), particularly:
Actually, all irreducible representations of "Hn"("K") on which the center acts nontrivially arise in this way.Chapter 14, Exercise 5
Generalizations.
The Stone–von Neumann theorem admits numerous generalizations. Much of the early work of George Mackey was directed at obtaining a formulation of the theory of induced representations developed originally by Frobenius for finite groups to the context of unitary representations of locally compact topological groups.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\begin{align}[]\n[x \\psi](x_0) &= x_0 \\psi(x_0) \\\\[]\n[p \\psi](x_0) &= - i \\hbar \\frac{\\partial \\psi}{\\partial x}(x_0)\n\\end{align}"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "\\hbar"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": " [x,p] = x p - p x = i \\hbar."
},
{
"math_id": 7,
"text": "e^{itQ} e^{isP} = e^{-i st} e^{isP} e^{itQ} ."
},
{
"math_id": 8,
"text": "U(t)V(s) = e^{-i st} V(s) U(t) \\qquad \\forall s, t,"
},
{
"math_id": 9,
"text": "W^*U(t)W = e^{itx} \\quad \\text{and} \\quad W^*V(s)W = e^{isp},"
},
{
"math_id": 10,
"text": " [U(t)\\psi ] (x)=e^{itx} \\psi(x), \\qquad [V(s)\\psi ](x)= \\psi(x+s) ."
},
{
"math_id": 11,
"text": "f \\mapsto {\\hat f}(\\gamma) = \\int_G \\overline{\\gamma(t)} f(t) d \\mu (t)"
},
{
"math_id": 12,
"text": "(s \\cdot f)(t) = f(t + s)."
},
{
"math_id": 13,
"text": " \\widehat{ (s \\cdot f) }(\\gamma) = \\gamma(s) \\hat{f} (\\gamma)."
},
{
"math_id": 14,
"text": "C^*\\left( \\hat{G} \\right) \\rtimes_{\\hat{\\rho}} G "
},
{
"math_id": 15,
"text": "U(s) V(\\gamma) U^*(s) = \\gamma(s) V(\\gamma)."
},
{
"math_id": 16,
"text": "C_0(G) \\rtimes_\\rho G "
},
{
"math_id": 17,
"text": "{\\mathcal K}\\left(L^2(G)\\right)"
},
{
"math_id": 18,
"text": " \\mathrm{M}(a,b,c) = \\begin{bmatrix} 1 & a & c \\\\ 0 & 1_n & b \\\\ 0 & 0 & 1 \\end{bmatrix}. "
},
{
"math_id": 19,
"text": "\\begin{align}\n P &= \\begin{bmatrix} 0 & 1 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0 \\end{bmatrix}, &\n Q &= \\begin{bmatrix} 0 & 0 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 & 0 \\end{bmatrix}, &\n z &= \\begin{bmatrix} 0 & 0 & 1 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0 \\end{bmatrix},\n\\end{align}"
},
{
"math_id": 20,
"text": " \\alpha_h: \\mathrm{M}(a,b,c) \\to \\mathrm{M} \\left( -h^{-1} b,h a, c -a\\cdot b \\right) "
},
{
"math_id": 21,
"text": " W U_h(g) W^* = U_h \\alpha (g)."
},
{
"math_id": 22,
"text": " \\int_{\\mathbf{R}^n} e^{-i x \\cdot p} e^{i (b \\cdot x + h c)} \\psi (x+h a) \\ dx = e^{ i (h a \\cdot p + h (c - b \\cdot a))} \\int_{\\mathbf{R}^n} e^{-i y \\cdot ( p - b)} \\psi(y) \\ dy."
},
{
"math_id": 23,
"text": " (\\alpha_h)^2 \\mathrm{M}(a,b,c) =\\mathrm{M}(- a, -b, c). "
},
{
"math_id": 24,
"text": " a_j = \\frac{\\partial}{\\partial z_j}, \\qquad a_j^* = z_j, "
},
{
"math_id": 25,
"text": " \\left [a_j,a_k^* \\right ] = \\delta_{j,k}. "
},
{
"math_id": 26,
"text": "\\left[U_h \\mathrm{M}(a, b, c) \\psi\\right](x) = \\omega(b \\cdot x + h c) \\psi(x + ha). "
},
{
"math_id": 27,
"text": " \\frac{1}{\\left|H_n(\\mathbf{K})\\right|} \\sum_{g \\in H_n(K)} |\\chi(g)|^2 = \\frac{1}{|K|^{2n+1}} |K|^{2n} |K| = 1. "
}
]
| https://en.wikipedia.org/wiki?curid=744569 |
744589 | Column generation | Algorithm for solving linear programs
Column generation or delayed column generation is an efficient algorithm for solving large linear programs.
The overarching idea is that many linear programs are too large to consider all the variables explicitly. The idea is thus to start by solving the considered program with only a subset of its variables. Then iteratively, variables that have the potential to improve the objective function are added to the program. Once it is possible to demonstrate that adding new variables would no longer improve the value of the objective function, the procedure stops. The hope when applying a column generation algorithm is that only a very small fraction of the variables will be generated. This hope is supported by the fact that in the optimal solution, most variables will be non-basic and assume a value of zero, so the optimal solution can be found without them.
In many cases, this method makes it possible to solve large linear programs that would otherwise be intractable. The classical example of a problem where it is successfully used is the cutting stock problem. One particular technique in linear programming which uses this kind of approach is the Dantzig–Wolfe decomposition algorithm. Additionally, column generation has been applied to many problems such as crew scheduling, vehicle routing, and the capacitated p-median problem.
Algorithm.
The algorithm considers two problems: the master problem and the subproblem. The master problem is the original problem with only a subset of variables being considered. The subproblem is a new problem created to identify an improving variable ("i.e." which can improve the objective function of the master problem).
The algorithm then proceeds as follows:
1. Initialise the master problem with a subset of the variables and formulate the subproblem.
2. Solve the master problem and obtain the corresponding dual values.
3. Solve the subproblem to search for an improving variable.
4. If an improving variable is found, add it to the master problem and return to step 2.
5. Otherwise, the current solution of the master problem is optimal; stop.
Finding an improving variable.
The most difficult part of this procedure is how to find a variable that can improve the objective function of the master problem. This can be done by finding the variable with the most negative reduced cost (assuming without loss of generality that the problem is a minimization problem). If no variable has a negative reduced cost, then the current solution of the master problem is optimal.
When the number of variables is very large, it is not possible to find an improving variable by calculating all the reduced costs and choosing a variable with a negative reduced cost. Thus, the idea is to compute only the variable having the minimum reduced cost. This can be done using an optimization problem called the pricing subproblem, which strongly depends on the structure of the original problem. The objective function of the subproblem is the reduced cost of the searched variable with respect to the current dual variables, and the constraints require that the variable obeys the naturally occurring constraints. The column generation method is particularly efficient when this structure makes it possible to solve the sub-problem with an efficient algorithm, typically a dedicated combinatorial algorithm.
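As a concrete illustration, the following sketch applies delayed column generation to the linear relaxation of the cutting stock problem mentioned above, where the pricing subproblem is an integer knapsack solved by dynamic programming. It is only a sketch: it assumes a recent SciPy whose HiGHS-based "linprog" exposes dual values through "res.ineqlin.marginals", and all function names and data are illustrative.
```python
import numpy as np
from scipy.optimize import linprog

def knapsack(values, sizes, capacity):
    """Pricing subproblem: unbounded integer knapsack solved by dynamic programming.
    Maximises the total dual value of a cutting pattern subject to the roll width."""
    best = np.zeros(capacity + 1)
    pattern = [np.zeros(len(sizes), dtype=int) for _ in range(capacity + 1)]
    for w in range(1, capacity + 1):
        for i, (v, s) in enumerate(zip(values, sizes)):
            if s <= w and best[w - s] + v > best[w]:
                best[w] = best[w - s] + v
                pattern[w] = pattern[w - s].copy()
                pattern[w][i] += 1
    return best[capacity], pattern[capacity]

def cutting_stock_column_generation(roll_width, sizes, demands, max_iter=100):
    # Initial columns: patterns that cut a single piece size as often as it fits.
    columns = [np.eye(len(sizes), dtype=int)[i] * (roll_width // s)
               for i, s in enumerate(sizes)]
    for _ in range(max_iter):
        A = np.column_stack(columns)
        # Restricted master problem: minimise the number of rolls, A x >= demands, x >= 0.
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-np.asarray(demands),
                      method="highs")
        duals = -res.ineqlin.marginals            # dual prices of the demand constraints
        value, new_column = knapsack(duals, sizes, roll_width)
        if value <= 1 + 1e-9:                     # reduced cost 1 - value is non-negative
            break                                 # no improving column: LP optimum reached
        columns.append(new_column)
    return res.fun, columns, res.x

rolls, columns, x = cutting_stock_column_generation(100, [45, 36, 31, 14], [97, 610, 395, 211])
print(rolls)   # optimal number of rolls for the LP relaxation
```
Only the handful of patterns generated along the way are ever considered, rather than the exponentially many feasible cutting patterns.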
We now detail how and why to compute the reduced cost of the variables. Consider the following linear program in standard form:
formula_0
which we will call the primal problem as well as its dual linear program:
formula_1
Moreover, let formula_2 and formula_3 be optimal solutions of these two problems, which can be provided by any linear solver. These solutions satisfy the constraints of their linear program and, by duality, have the same objective function value (formula_4), which we will call formula_5. This optimal value is a function of the different coefficients of the primal problem: formula_6. Note that there exists a dual variable formula_7 for each constraint of the primal linear model. It is possible to show that an optimal dual variable formula_7 can be interpreted as the partial derivative of the optimal value formula_5 of the objective function with respect to the coefficient formula_8 of the right-hand side of the constraints: formula_9 or, equivalently, formula_10. More simply put, formula_7 indicates by how much the optimal value of the objective function locally increases when the coefficient formula_8 increases by one unit.
Consider now that a variable formula_11 was not considered until then in the primal problem. Note that this is equivalent to saying that the variable formula_11 was present in the model but took a zero value. We will now observe the impact on the primal problem of changing the value of formula_11 from formula_12 to formula_13. If formula_14 and formula_15 are respectively the coefficients associated with the variable formula_11 in the objective function and in the constraints then the linear program is modified as follows:
formula_16
To determine whether it is worthwhile to add the variable formula_11 to the problem ("i.e.", to let it take a non-zero value), we want to know whether the value formula_17 of the objective function of this new problem decreases as the value formula_13 of the variable formula_11 increases. In other words, we want to know formula_18. To do this, note that formula_17 can be expressed in terms of the value of the objective function of the initial primal problem: formula_19. We can then compute the derivative of interest:
formula_20
In other words, the impact of changing the value formula_13 on the value formula_21 translates into two terms. First, this change directly impacts the objective function and second, the right-hand side of the constraints is modified which has an impact on the optimal variables formula_2 whose magnitude is measured using the dual variables formula_3. The derivative formula_18 is generally called the reduced cost of the variable formula_11 and will be denoted by formula_22 in the following.
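In code, this reduced cost is a single dot product between the candidate column and the optimal dual variables; a minimal sketch (with illustrative names):
```python
import numpy as np

def reduced_cost(c_y, A_y, duals):
    """Reduced cost of a candidate variable y: c_y - u* . A_y (the derivative above)."""
    return c_y - np.dot(duals, A_y)

# A variable is worth adding to the master problem only if its reduced cost is negative.
print(reduced_cost(3.0, [1.0, 2.0], [0.5, 2.0]))   # -1.5, so adding y would improve the objective
```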
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\begin{align}\n&\\min_{x} c^T x \\\\\n&\\text{subject to} \\\\\n& A x = b \\\\\n& x \\in \\mathbb{R}^+\n\\end{align} "
},
{
"math_id": 1,
"text": " \\begin{align}\n&\\max_{u} u^T b \\\\\n&\\text{subject to} \\\\\n& u^T A \\leq c \\\\\n& u \\in \\mathbb{R}\n\\end{align} "
},
{
"math_id": 2,
"text": " x^* "
},
{
"math_id": 3,
"text": " u^* "
},
{
"math_id": 4,
"text": "c^T x^* = u^{*T} b"
},
{
"math_id": 5,
"text": " z^* "
},
{
"math_id": 6,
"text": " z^* = z^* (c, A, b) "
},
{
"math_id": 7,
"text": " u_i^* "
},
{
"math_id": 8,
"text": " b_i "
},
{
"math_id": 9,
"text": "u_i^* = \\frac{\\partial z^*}{\\partial b_i}"
},
{
"math_id": 10,
"text": "u^* = \\frac{\\partial z^*}{\\partial b}"
},
{
"math_id": 11,
"text": " y "
},
{
"math_id": 12,
"text": " 0 "
},
{
"math_id": 13,
"text": " \\hat{y} "
},
{
"math_id": 14,
"text": " c_y "
},
{
"math_id": 15,
"text": " A_y "
},
{
"math_id": 16,
"text": " \\begin{align}\n&\\min_{x} c^T x + c_y \\hat{y} \\\\\n&\\text{subject to} \\\\\n& A x = b - A_y \\hat{y}\\\\\n& x \\in \\mathbb{R}^+\n\\end{align} "
},
{
"math_id": 17,
"text": "z_{\\hat{y}}^*"
},
{
"math_id": 18,
"text": "\\frac{\\partial z_{\\hat{y}}^*}{\\partial \\hat{y}}"
},
{
"math_id": 19,
"text": "z_{\\hat{y}}^* = c_y \\hat{y} + z^*(c, A, b-A_y \\hat{y})"
},
{
"math_id": 20,
"text": " \\begin{align}\n\\frac{\\partial z_{\\hat{y}}^*}{\\partial \\hat{y}} & ~=~ & & c_y + \\frac{\\partial z^*}{\\partial \\hat{y}} \\\\\n& ~=~ & & c_y + \\frac{\\partial z^*}{\\partial c} \\frac{d c}{d \\hat{y}} + \\frac{\\partial z^*}{\\partial A} \\frac{d A}{d \\hat{y}} + \\frac{\\partial z^*}{\\partial b} \\frac{d b}{d \\hat{y}} \\\\\n& ~=~ & & c_y + \\frac{\\partial z^*}{\\partial b} \\frac{d b}{d \\hat{y}} \\\\\n& ~=~ & & c_y + u^* (-A_y) \\\\\n& ~=~ & & c_y - u^* A_y\n\\end{align} "
},
{
"math_id": 21,
"text": " z _ {\\hat{y}}^* "
},
{
"math_id": 22,
"text": " cr_y "
}
]
| https://en.wikipedia.org/wiki?curid=744589 |
74463631 | Lanyue | China crewed lunar surface lander
The Lanyue () lander, formerly known as the China crewed lunar surface lander () or simply as the lunar surface lander (), is a spacecraft under development by the China Academy of Space Technology. The purpose of the lander is to carry two astronauts to the lunar surface and to return them to lunar orbit after a set period of time. The lander's initial lunar-landing attempt is envisioned to occur by 2029.
Nomenclature.
The official names for both the crewed lunar lander and the next-generation crewed spacecraft, the Mengzhou (), were revealed by the China Manned Space Agency (CMSA) on 24 February 2024. One possible English translation for "Lanyue" is "Embracing the Moon" while the English translation for "Mengzhou" can be "Vessel of Dreams" or "Dream Vessel".
Overview.
Since at least August 2021, Western news media has reported that China's main spacecraft contractor was working on a human-rated landing system for lunar missions. On 12 July 2023, at the 9th China (International) Commercial Aerospace Forum in Wuhan, Hubei province, Zhang Hailian, a deputy chief designer with the CMSA, publicly introduced a preliminary plan to land two astronauts on the Moon by the year 2030. Under this plan, the astronauts will conduct scientific work upon landing on the Moon, including the collection of lunar rock and regolith samples. After a short stay on the lunar surface, they will carry the collected samples back into lunar orbit in their spacecraft and subsequently, to Earth.
The preliminary plan describes a 'landing segment' that consists of a new lunar-lander attached to a propulsion stage which together are to be launched autonomously into a trans-lunar injection (TLI) orbit by the under-development Long March 10 rocket. The lander-propulsion stage arrangement is somewhat analogous to the lander-orbiter architecture of the 2020 Chang'e 5 and the current Chang'e 6 robotic lunar sample-return missions; however, unlike the orbiters for the robotic missions, the propulsion stage for the crewed lander will descend from lunar orbit together with the lander rather than remaining in lunar orbit. (The propulsion stage will undergo a controlled impact landing on the Moon after it separates from the crewed lander during the final stages of the descent, while the lander itself will attempt a powered soft landing).
On 24 April 2024, Lin Xiqiang, deputy director of China Manned Space Agency (CMSA), stated that the initial development of various products for China's lunar missions, including the Lanyue lander, is complete; according to Lin, mechanical and thermal articles for the lander and other mission segments have been constructed and the requisite rocket engines are undergoing hot fire tests. Lin further elaborated that prototype production and tests are in full swing and that the crewed lunar exploration launch site is currently under construction near the existing coastal Wenchang spaceport in Hainan province.
Lander attributes.
A model of the under-development lunar lander was unveiled at an exhibition to mark three decades of China's human spaceflight program on 24 February 2023 at the National Museum of China in Beijing.
The physical model of the under-development lander, when considered together with the presentation by Zhang Hailian on 12 July 2023, suggests the future spacecraft will have the following components: four 7500-newton main engines, numerous attitude-control thrusters for precise maneuvering, a stowed lunar rover capable of carrying two astronauts, docking mechanisms (for docking with the Mengzhou spacecraft), a crew hatch (for EVAs), a ladder attached to one of the landing legs, two solar arrays, and various antennas and sensors.
The estimated mass of the fully-fuelled landing segment (lunar-lander plus propulsion-stage) is approximately 26 tonnes.
Lunar rover.
Models of the crewed lander includes a four-wheeled rover stowed on the lander's external wall. CMSA previously issued an open call to private, public, and educational institutions to submit development plans for the future lunar rover; according to CMSA, fourteen groups submitted proposals in response to the open solicitation and eleven of the fourteen proposals advanced to the expert-review stage. On 24 October 2023, CMSA announced that two of the remaining eleven submitted proposals have advanced to the detailed design phase while another six groups will receive continued support to enable them to continue research into innovative aspects of their proposals.
Survey of journal literature reveals that the planned lunar rover may incorporate "differential-braking" and "off-ground detection" technologies to enhance its anti-slip and steering-stability characteristics during high-speed traverse. Engineering prototypes have been built for design verification purposes.
The rover's planned mass is about 200 kilograms and will be able to carry two astronauts; it has a planned traverse-range of about 10 kilometres.
Lander mission architecture.
Under CMSA's crewed lunar landing plan, the landing segment initially will be injected into an Earth-Moon transfer orbit via the Long March 10 carrier rocket, and subsequently acquire lunar orbit under its own power. It then will await a lunar orbit rendezvous with and docking by the separately launched Mengzhou spacecraft (formerly known as the "next-generation crewed spacecraft", the analog to the Apollo program's Apollo command and service module) whereupon two astronauts will transfer to the lander, undock from Mengzhou, and maneuver the landing segment for a lunar-landing attempt.
The landing segment's powered descent phase will employ a "staged-descent" concept. Under this concept, the combined lander and propulsion stage will begin descending from lunar orbit with the latter providing the necessary deceleration; when the stack is close to the surface, the lander will separate from the propulsion stage and proceed to complete the powered descent and a soft-landing under the lander's own power (the discarded propulsion stage meanwhile will impact the lunar surface a safe distance away from the lander). At the conclusion of the surface portion of the mission, the full lunar lander will act as the ascent vehicle for the astronauts to return to lunar orbit. According to a report by the Xinhua News Agency, the lander also will be capable of autonomous flight operations.
As of 2022, the landing system is envisioned to enable a six-hour stay on the lunar surface by two astronauts. It is unclear from the source if the quoted 'six-hour stay on the moon' references the lander's total time on the lunar surface or the astronauts' surface-EVA duration; if the latter, then the proposed surface mission duration would be comparable to those carried out by the United States' Apollo 11 and Apollo 12 missions. During the previously-cited 2023 aerospace forum in Wuhan, Zhang Hailian also stated that a lunar surface-EVA spacesuit with an endurance period of no less than eight hours is currently under development.
Potential landing sites.
Members of the Chinese Academy of Sciences have begun site selection research ("suggestions") for the anticipated crewed lunar exploration program. Thirty prime landing sites have been identified (narrowed down from a preliminary list of 106 and an interim list of 50); the thirty sites are located in both the lunar north and south polar regions as well as in the lunar near and far sides. Numerous criteria, intended to maximize mission scientific value while taking into account crew safety and engineering feasibility, were considered by the team. Examples of the 30 prime sites include the following: Ina crater/depression, Reiner Gamma, and Rimae Bode on the lunar near side, Apollo basin, Aitken crater, and Mare Moscoviense on the lunar far side, Shackleton crater in the lunar south polar region, and Hermite crater in the lunar north polar region.
Earth-Moon trajectory design.
Preliminary trajectory designs based on specific landing sites and landing periods have also been carried out by a team from the Nanjing University of Aeronautics and Astronautics and from the China Astronaut Research and Training Center. In particular, the team analyzed possible Earth-Moon transfer trajectories based on seven potential landing sites, including Rimae Bode, a series of lunar rilles west of the Bode crater, and Mare Moscoviense, covering the period from 2027 until 2037. The analysis employed a dynamic weighting method that quantified mission efficiency factors and engineering constraints, combined with the application of a pseudostate trajectory model to optimize the computational efficiency of trajectory design and landing site/time selection.
The pseudostate model was first proposed by J.S. Wilson in 1969 to study Earth-Moon spacecraft transfer trajectories. In the context of an Earth-Moon-spacecraft three-body system, the pseudostate method usually is more computationally efficient than the traditional patched-conic method of trajectory design.
The patched-conic method essentially seeks to "patch" together two (Keplerian) two-body ellipses (the conics) at a point of intersection defined by the Moon's gravitational sphere of influence, while taking into account the various physical constraints. This method can result in large errors that may be controlled by a possibly unstable and time-consuming iterative computational process. The pseudostate model modifies the conic-patching method by defining a "pseudostate transformation sphere" (PTS), a region in which the spacecraft trajectory is calculated as an approximate solution to the restricted three-body problem. The method starts by calculating an initial simple two-body Earth-spacecraft ellipse and using it to propagate the spacecraft's position to a point within the Moon's PTS (the spacecraft's "pseudostate"); next, the approximate restricted three-body solution is applied and the pseudostate is propagated backward to a point on the surface of the Laplace sphere, which defines the beginning of the Moon's gravitational sphere of influence; finally, a two-body Moon-spacecraft conic is calculated and the spacecraft location is propagated forward from the surface of the Laplace sphere to an arbitrary perilune point. For the Earth-Moon system with an approximately circular orbit, the radius of the Laplace sphere (the gravitational sphere of influence concept used in the two models) is given by
formula_0
where formula_1 is the radius of the Laplace sphere, formula_2 is the average Earth-Moon distance, formula_3 is the mass of the Moon, and formula_4 is the Earth's mass. Strictly speaking, the Laplace sphere is not a sphere but a changing hypersurface defined at each point of the path of a gravitational mass. The criterion for calculating the Moon's Laplace sphere is to analyze the Moon's gravity as the primary force acting in the region under consideration while the Earth's gravity is treated as a perturbing force. The Laplace sphere differs from the Hill sphere because the calculation of the latter sphere requires the presence of stable orbits while the former does not.
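As a rough numerical illustration (the constants below are approximate, and the snippet is only a sketch), the formula gives a radius of roughly 66,000 km for the Moon:
```python
# Radius of the Moon's Laplace sphere (gravitational sphere of influence).
D = 384_400.0        # average Earth-Moon distance in km (approximate)
m_over_M = 0.0123    # Moon-to-Earth mass ratio (approximate)

R_laplace = D * m_over_M ** (2 / 5)
print(round(R_laplace))   # about 66,000 km
```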
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_\\text{Laplace} \\; = \\; D \\; \\left( \\frac{m}{M} \\right)^{2/5},"
},
{
"math_id": 1,
"text": "R_\\text{Laplace}"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "M"
}
]
| https://en.wikipedia.org/wiki?curid=74463631 |
74470076 | Coulomb gas | Many-body of charged particles
In statistical physics, a Coulomb gas is a many-body system of charged particles interacting under the electrostatic force. It is named after Charles-Augustin de Coulomb, as the force by which the particles interact is also known as the Coulomb force.
The system can be defined in any number of dimensions. While the three-dimensional Coulomb gas is the most experimentally realistic, the best understood is the two-dimensional Coulomb gas. The two-dimensional Coulomb gas is known to be equivalent to the continuum XY model of magnets and the sine-Gordon model (upon taking certain limits) in a physical sense, in that physical observables (correlation functions) calculated in one model can be used to calculate physical observables in another model. This aided the understanding of the BKT transition, and the discoverers earned a Nobel prize in physics for their work on this phase transition.
Formulation.
The setup starts with considering formula_0 charged particles in formula_1 with positions formula_2 and charges formula_3. From electrostatics, the pairwise potential energy between particles labelled by indices formula_4 is (up to scale factor)
formula_5
where formula_6 is the Coulomb kernel or Green's function of the Laplace equation in formula_7 dimensions, so
formula_8
The free energy due to these interactions is then (proportional to) formula_9, and the partition function is given by integrating over different configurations, that is, the positions of the charged particles.
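As an illustration of the formulation, the Python sketch below evaluates the pairwise interaction energy for a small two-dimensional configuration using the logarithmic kernel. The positions and charges are arbitrary example values, and each pair is counted once (formula_9 as written sums over ordered pairs, i.e. twice this value).
<syntaxhighlight lang="python">
import numpy as np

def coulomb_energy_2d(positions, charges):
    """Sum over pairs of q_i * q_j * g(|r_i - r_j|) with g(x) = -log|x| (d = 2)."""
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += charges[i] * charges[j] * (-np.log(r))
    return energy

# Arbitrary example configuration: two +1 charges and one -1 charge.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
q = np.array([1.0, 1.0, -1.0])
print(coulomb_energy_2d(pos, q))
</syntaxhighlight>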
Coulomb gas in conformal field theory.
The two-dimensional Coulomb gas can be used as a framework for describing fields in minimal models. This comes from the similarity of the two-point correlation function of the free boson formula_10,
formula_11
to the electric potential energy between two unit charges in two dimensions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\mathbb{R}^d"
},
{
"math_id": 2,
"text": "\\mathbf{r}_i"
},
{
"math_id": 3,
"text": "q_i"
},
{
"math_id": 4,
"text": "i,j"
},
{
"math_id": 5,
"text": "V_{ij} = q_iq_jg(|\\mathbf{r}_i - \\mathbf{r}_j|),"
},
{
"math_id": 6,
"text": "g(x)"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": " \n\\begin{align}\ng(x) = \n\\begin{cases}-\\log|x| & \\text{ if } d = 2, \\\\\n \\frac{1}{(d-2)|x|^{d-2}} & \\text{ if }d > 2.\n\\end{cases}\n\\end{align}\n"
},
{
"math_id": 9,
"text": "F = \\sum_{i \\neq j} V_{ij}"
},
{
"math_id": 10,
"text": "\\varphi"
},
{
"math_id": 11,
"text": "\\langle \\varphi(z, \\bar z) \\varphi(w, \\bar w) \\rangle = - \\log|z - w|^2"
}
]
| https://en.wikipedia.org/wiki?curid=74470076 |
74475015 | Projected normal distribution | Probability distribution
In directional statistics, the projected normal distribution (also known as offset normal distribution or angular normal distribution) is a probability distribution over directions that describes the radial projection of a random variable with n-variate normal distribution over the unit (n-1)-sphere.
Definition and properties.
Given a random variable formula_1 that follows a multivariate normal distribution formula_2, the projected normal distribution formula_3 represents the distribution of the random variable formula_4 obtained by projecting formula_5 onto the unit sphere. In the general case, the projected normal distribution can be asymmetric and multimodal. If formula_6 is orthogonal to an eigenvector of formula_7, the distribution is symmetric.
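Because formula_4 is simply a normalised multivariate normal draw, samples from a projected normal distribution can be generated directly from the definition. The Python sketch below does this for the circular case; the mean vector and covariance matrix are arbitrary example values.
<syntaxhighlight lang="python">
import numpy as np

def sample_projected_normal(mu, sigma, size, rng=None):
    """Draw samples Y = X / ||X|| with X ~ N(mu, sigma)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.multivariate_normal(mu, sigma, size=size)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Example: a circular (n = 2) projected normal with arbitrary parameters.
samples = sample_projected_normal(mu=[2.0, 1.0],
                                  sigma=[[1.0, 0.3], [0.3, 0.5]],
                                  size=5)
print(samples)                            # unit vectors on the circle
print(np.linalg.norm(samples, axis=1))    # all approximately 1
</syntaxhighlight>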
Density function.
The density of the projected normal distribution formula_0 can be constructed from the density of its generator n-variate normal distribution formula_8 by re-parametrising to n-dimensional spherical coordinates and then integrating over the radial coordinate.
In spherical coordinates with radial component formula_9 and angles formula_10, a point formula_11 can be written as formula_12, with formula_13. The joint density becomes
formula_14
and the density of formula_0 can then be obtained as
formula_15
Circular distribution.
Parametrising the position on the unit circle in polar coordinates as formula_16, the density function can be written with respect to the parameters formula_17 and formula_18 of the initial normal distribution as
formula_19
where formula_20 and formula_21 are the density and cumulative distribution of a standard normal distribution, formula_22, and formula_23 is the indicator function.
In the circular case, if the mean vector formula_6 is parallel to the eigenvector associated to the largest eigenvalue of the covariance, the distribution is symmetric and has a mode at formula_24 and either a mode or an antimode at formula_25, where formula_26 is the polar angle of formula_27. If the mean is parallel to the eigenvector associated to the smallest eigenvalue instead, the distribution is also symmetric but has either a mode or an antimode at formula_24 and an antimode at formula_25.
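The circular density above can be evaluated numerically as written. The Python sketch below implements it, computing formula_22 explicitly; the mean vector and covariance matrix are arbitrary example values.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def projected_normal_density_2d(theta, mu, sigma):
    """Circular projected normal density at angle theta (radians), as given above."""
    v = np.array([np.cos(theta), np.sin(theta)])
    sigma_inv = np.linalg.inv(sigma)
    quad_v = v @ sigma_inv @ v
    t = (v @ sigma_inv @ mu) / np.sqrt(quad_v)          # T(theta)
    prefactor = np.exp(-0.5 * mu @ sigma_inv @ mu) / (
        2.0 * np.pi * np.sqrt(np.linalg.det(sigma)) * quad_v
    )
    return prefactor * (1.0 + t * norm.cdf(t) / norm.pdf(t))

mu = np.array([1.0, 0.5])
sigma = np.array([[1.0, 0.2], [0.2, 0.8]])
thetas = np.linspace(0.0, 2.0 * np.pi, 5, endpoint=False)
print([projected_normal_density_2d(th, mu, sigma) for th in thetas])
</syntaxhighlight>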
Spherical distribution.
Parametrising the position on the unit sphere in spherical coordinates as formula_28 where formula_29 are the azimuth formula_30 and inclination formula_31 angles respectively, the density function becomes
formula_32
where formula_20, formula_21, formula_33, and formula_23 have the same meaning as the circular case.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{P N}_n(\\boldsymbol\\mu, \\boldsymbol\\Sigma)"
},
{
"math_id": 1,
"text": "\\boldsymbol X \\in \\R^n"
},
{
"math_id": 2,
"text": "\\mathcal{N}_n(\\boldsymbol\\mu,\\, \\boldsymbol\\Sigma)"
},
{
"math_id": 3,
"text": "\\mathcal{PN}_n(\\boldsymbol\\mu, \\boldsymbol\\Sigma)"
},
{
"math_id": 4,
"text": "\\boldsymbol Y = \\frac{\\boldsymbol X}{\\lVert \\boldsymbol X \\rVert}"
},
{
"math_id": 5,
"text": "\\boldsymbol X"
},
{
"math_id": 6,
"text": "\\boldsymbol \\mu"
},
{
"math_id": 7,
"text": "\\boldsymbol \\Sigma"
},
{
"math_id": 8,
"text": "\\mathcal{N}_n(\\boldsymbol\\mu, \\boldsymbol\\Sigma)"
},
{
"math_id": 9,
"text": "r \\in [0, \\infty)"
},
{
"math_id": 10,
"text": "\\boldsymbol \\theta = (\\theta_1, \\dots, \\theta_{n-1}) \\in [0, \\pi]^{n - 2} \\times [0, 2 \\pi)"
},
{
"math_id": 11,
"text": "\\boldsymbol x = (x_1, \\dots, x_n) \\in \\R^n"
},
{
"math_id": 12,
"text": "\\boldsymbol x = r \\boldsymbol v"
},
{
"math_id": 13,
"text": "\\lVert \\boldsymbol v \\rVert = 1"
},
{
"math_id": 14,
"text": "\np(r, \\boldsymbol \\theta | \\boldsymbol \\mu, \\boldsymbol \\Sigma) =\n\\frac{r^{n-1}}{\\sqrt{|\\boldsymbol \\Sigma|} (2 \\pi)^{\\frac{n}{2}}}\ne^{-\\frac{1}{2} (r \\boldsymbol v - \\boldsymbol \\mu)^\\top \\Sigma^{-1} (r \\boldsymbol v - \\boldsymbol \\mu)}\n"
},
{
"math_id": 15,
"text": "\np(\\boldsymbol \\theta | \\boldsymbol \\mu, \\boldsymbol \\Sigma) = \\int_0^\\infty p(r, \\boldsymbol \\theta | \\boldsymbol \\mu, \\boldsymbol \\Sigma) dr .\n"
},
{
"math_id": 16,
"text": "\\boldsymbol v = (\\cos\\theta, \\sin\\theta) "
},
{
"math_id": 17,
"text": "\\boldsymbol\\mu"
},
{
"math_id": 18,
"text": "\\boldsymbol\\Sigma"
},
{
"math_id": 19,
"text": "\np(\\theta | \\boldsymbol\\mu, \\boldsymbol\\Sigma) =\n\\frac{e^{-\\frac{1}{2} \\boldsymbol \\mu^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol \\mu}}{2 \\pi \\sqrt{|\\boldsymbol \\Sigma|} \\boldsymbol v^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol v}\n\\left( 1 + T(\\theta) \\frac{\\Phi(T(\\theta))}{\\phi(T(\\theta))} \\right) I_{[0, 2\\pi)}(\\theta)\n"
},
{
"math_id": 20,
"text": "\\phi"
},
{
"math_id": 21,
"text": "\\Phi"
},
{
"math_id": 22,
"text": "T(\\theta) = \\frac{\\boldsymbol v^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol \\mu}{\\sqrt{\\boldsymbol v^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol v}}"
},
{
"math_id": 23,
"text": "I"
},
{
"math_id": 24,
"text": "\\theta = \\alpha"
},
{
"math_id": 25,
"text": "\\theta = \\alpha + \\pi"
},
{
"math_id": 26,
"text": "\\alpha"
},
{
"math_id": 27,
"text": "\\boldsymbol \\mu = (r \\cos\\alpha, r \\sin\\alpha)"
},
{
"math_id": 28,
"text": "\\boldsymbol v = (\\cos\\theta_1 \\sin\\theta_2, \\sin\\theta_1 \\sin\\theta_2, \\cos\\theta_2)"
},
{
"math_id": 29,
"text": "\\boldsymbol \\theta = (\\theta_1, \\theta_2)"
},
{
"math_id": 30,
"text": "\\theta_1 \\in [0, 2\\pi)"
},
{
"math_id": 31,
"text": "\\theta_2 \\in [0, \\pi]"
},
{
"math_id": 32,
"text": "\np(\\boldsymbol \\theta | \\boldsymbol\\mu, \\boldsymbol\\Sigma) =\n\\frac{e^{-\\frac{1}{2} \\boldsymbol \\mu^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol \\mu}}{\\sqrt{|\\boldsymbol \\Sigma|} \\left( 2 \\pi \\boldsymbol v^\\top \\boldsymbol \\Sigma^{-1} \\boldsymbol v \\right)^{\\frac{3}{2}}}\n\\left(\\frac{\\Phi(T(\\boldsymbol \\theta))}{\\phi(T(\\boldsymbol \\theta))} + T(\\boldsymbol \\theta) \\left( 1 + T(\\boldsymbol \\theta) \\frac{\\Phi(T(\\boldsymbol \\theta))}{\\phi(T(\\boldsymbol \\theta))} \\right) \\right)\nI_{[0, 2\\pi)}(\\theta_1) I_{[0, \\pi]}(\\theta_2)\n"
},
{
"math_id": 33,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=74475015 |
74479198 | Basil Wrigley Wilson | South African oceanographer and civil engineer
Basil Wrigley Wilson (16 June 1909 – 9 February 1996) was an oceanographic engineer and researcher in the field of coastal engineering who made significant contributions to the study of ocean waves, ship motion, and mooring technology.
Life and career.
Wilson was born in Cape Town to expatriate English parents George Hough Wilson and Sarah Anne Wilson (née Hearn). His father was a journalist and editor of the Cape Times, and his grandmother was related to William Wrigley Jr. of the Wrigley Company, hence Basil's middle name.
He studied civil engineering at the University of Cape Town, graduating with a Bachelor of Science degree in 1931. A year later, he took up employment as an engineer with the South African Railways and Harbours Administration, where he remained until 1952. During this time, he developed the first hydraulic model of a harbour in South Africa, at Gqeberha.
In 1942 he oversaw the design and operation of a large physical model of Table Bay and its harbour, and conducted experiments on methods for the control and reduction of the effects of storm surges, using much of the work as the basis for his Doctor of Science dissertation at the University of Cape Town in 1951.
In 1952 he moved to the United States and undertook a teaching and research position at Texas A&M University. He became a US Citizen in 1956. In addition to work on the dynamics of mooring lines for large ships, he developed a procedure for predicting the height and period characteristics of waves. He also undertook work on storm surges caused by hurricanes, researching the effects in New York Harbor and the Gulf of Mexico.
In 1968, Wilson entered private practice where he undertook engineering consulting work for various clients on subjects including earthquake engineering, tsunami hazards, and port engineering.
Wilson's formulas for simplified wind-wave prediction.
In 1965, Wilson proposed a method which can be used to approximate the significant wave height "H1/3" and period "T1/3" of wind waves generated by a constant wind of speed "U" blowing over a fetch length "F". Because the formulas are expressed in dimensionless form, any consistent system of units may be used; in SI units, "H1/3" and "F" are in metres, "T1/3" is in seconds and "U" is in metres per second.
Under conditions where the wind blows for a sufficiently long time, for example during a prolonged storm, the wave height and period can be calculated as follows:
formula_0
formula_1
In these formulae, "g" denotes the acceleration due to gravity, which is approximately 9.807 m/s2. The wind speed "U" is measured at an elevation of 10 metres above the sea surface. Wilson's formulae apply when the wind has blown for a sufficiently long time; when the wind blows for only a limited time, waves cannot attain the full height and period corresponding to the wind speed and fetch length. Work by Yoshimi Goda and other Japanese researchers subsequently provided modifications to the formulae to account for these effects.
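Because both relations are dimensionless once "g", "U" and "F" are expressed in consistent units, they translate directly into code. The Python sketch below uses SI units; the wind speed and fetch in the example are illustrative values only.
<syntaxhighlight lang="python">
import math

def wilson_waves(U, F, g=9.807):
    """Significant wave height H_1/3 (m) and period T_1/3 (s) from Wilson's formulas.

    U: wind speed at 10 m elevation (m/s); F: fetch length (m).
    Assumes the wind has blown long enough that the waves are fetch-limited.
    """
    x = g * F / U ** 2  # dimensionless fetch g*F/U^2
    H = 0.30 * (U ** 2 / g) * (1.0 - (1.0 + 0.004 * x ** 0.5) ** -2)
    T = 1.37 * (2.0 * math.pi * U / g) * (1.0 - (1.0 + 0.008 * x ** (1.0 / 3.0)) ** -5)
    return H, T

# Illustrative values only: a 20 m/s wind over a 100 km fetch.
print(wilson_waves(U=20.0, F=100_000.0))
</syntaxhighlight>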
Goda adapted Wilson's formula with a simple equation which is first used to calculate the minimum fetch ("Fmin"):
formula_2
In this equation, "t" is the wind duration (in hours) and "U" is the wind speed, in metres per second. If "F > Fmin", then the wave growth is limited by the wind duration, and "Fmin" is used instead of "F" in Wilson's equations. If "F < Fmin", the wave growth is limited by fetch length, and "F" is used.
Recognition and later life.
Wilson was recognised during his career with several awards, including the American Society of Civil Engineers Arthur M. Wellington Prize (1952), Norman Medal (1969), and the Moffatt-Nichol Harbor and Coastal Engineering Award (1983). He was also recognised by the Institution of Civil Engineers and South African Institution of Civil Engineers. In 1984, he was elected to the National Academy of Engineering.
He died in 1996 in Pasadena, California, survived by his wife and four children.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "gH_{1/3} / U^2 = 0.30 \\left\\{1-\\left[1+0.004 \\left(gF/U^2\\right)^{1/2}\\right]^{-2}\\right\\}"
},
{
"math_id": 1,
"text": "gT_{1/3} / (2\\pi U) = 1.37 \\left\\{1-\\left[1+0.008 \\left(gF/U^2\\right)^{1/3}\\right]^{-5}\\right\\}"
},
{
"math_id": 2,
"text": "F_{\\text{min}} = 1.0 t^{1.37} U^{0.63}"
}
]
| https://en.wikipedia.org/wiki?curid=74479198 |
74479627 | Reversible Michaelis–Menten kinetics | Enzyme kinetics for reversible reactions
Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. Enzymes act on small molecules called substrates, which an enzyme converts into products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. The study of how fast an enzyme can transform a substrate into a product is called enzyme kinetics.
The rate of many chemical reactions shows a linear response as a function of the concentration of substrate molecules. Enzymes, however, display a saturation effect: as the substrate concentration is increased, the reaction rate approaches a maximum value. Standard approaches to describing this behavior are based on models developed by Michaelis and Menten as well as by Briggs and Haldane. Most elementary formulations of these models assume that the enzyme reaction is irreversible, that is, that product is not converted back to substrate. However, this is unrealistic when describing the kinetics of enzymes in an intact cell, because product is present. Reversible Michaelis–Menten kinetics, using the reversible form of the Michaelis–Menten equation, is therefore important when developing computer models of cellular processes involving enzymes.
In enzyme kinetics, the Michaelis–Menten rate law that describes the conversion of one substrate to one product is commonly depicted in its irreversible form as:
formula_0
where formula_1 is the reaction rate, formula_2 is the maximum rate when saturating levels of the substrate are present, formula_3 is the Michaelis constant and formula_4 the substrate concentration.
In practice, this equation is used to predict the rate of reaction when little or no product is present. Such situations arise in enzyme assays. When used to model enzyme rates in vivo, for example to model a metabolic pathway, this representation is inadequate because under these conditions product is present. As a result, when building computer models of metabolism or other enzymatic processes, it is better to use the reversible form of the Michaelis–Menten equation.
To model the reversible form of the Michaelis–Menten equation, the following reversible mechanism is considered:
<chem>
{E} + {S}
<=>[k_{1}][k_{-1}]
ES
<=>[k_{2}][k_{-2}]
{E} + {P}
</chem>
To derive the rate equation, it is assumed that the concentration of enzyme-substrate complex is at steady-state, that is formula_5.
Following current literature convention, we will be using lowercase Roman lettering to indicate concentrations (this avoids cluttering the equations with square brackets). Thus formula_6 indicates the concentration of enzyme-substrate complex, ES.
The net rate of change of product (which is equal to formula_1) is given by the difference in forward and reverse rates:
formula_7
The total level of enzyme moiety is the sum total of free enzyme and enzyme-complex, that is formula_8. Hence the level of free formula_9 is given by the difference in the total enzyme concentration, formula_10 and the concentration of complex, that is:
formula_11
Using mass conservation we can compute the rate of change of formula_6 using the balance equation:
formula_12
where formula_13 has been replaced using formula_11. This leaves formula_6 as the only unknown. Solving for formula_6 gives:
formula_14
Inserting formula_6 into the rate equation formula_15 and rearranging gives:
formula_16
The following substitutions are now made:
formula_17
and
formula_18
after rearrangement, we obtain the reversible Michaelis–Menten equation in terms of four constants:
formula_19
Haldane relationship.
The four-constant form above is not the form in which the equation is usually used. Instead, a relationship among the constants is obtained by setting the rate to zero, that is formula_20, which indicates the reaction is at equilibrium, so that the concentrations formula_4 and formula_21 are now equilibrium concentrations, hence:
formula_22
Rearranging this gives the so-called Haldane relationship:
formula_23
The advantage of this is that one of the four constants can be eliminated and replaced with the equilibrium constant which is more likely to be known. In addition, it allows one to make a useful interpretation in terms of the thermodynamic and saturation effects (see next section). Most often the reverse maximum rate is eliminated to yield the final equation:
formula_24
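The final form of the reversible rate law translates directly into code. The Python sketch below evaluates it for illustrative parameter values (not taken from any particular enzyme) and shows that the net rate vanishes when the mass-action ratio p/s equals the equilibrium constant.
<syntaxhighlight lang="python">
def reversible_mm_rate(s, p, Vf_max, Km_s, Km_p, Keq):
    """Reversible Michaelis-Menten rate v for substrate s and product p."""
    numerator = (Vf_max / Km_s) * (s - p / Keq)
    denominator = 1.0 + s / Km_s + p / Km_p
    return numerator / denominator

# Illustrative parameter values only.
params = dict(Vf_max=10.0, Km_s=0.5, Km_p=1.0, Keq=4.0)
print(reversible_mm_rate(s=1.0, p=0.2, **params))   # net forward rate
print(reversible_mm_rate(s=1.0, p=4.0, **params))   # p/s = Keq, so v = 0
</syntaxhighlight>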
Decomposition of the rate law.
The reversible Michaelis–Menten law, as with many enzymatic rate laws, can be decomposed into a capacity term, a thermodynamic term, and an enzyme saturation level. This is more easily seen when we write the reversible rate law as:
formula_25
where formula_26 is the capacity term, formula_27 the thermodynamic term and
formula_28
the saturation term. The separation can be even better appreciated if we look at the elasticity coefficient formula_29. According to elasticity algebra, the elasticity of a product is the sum of the sub-term elasticities, that is:
formula_30
Hence the elasticity of the reversible Michaelis–Menten rate law can easily be shown to be:
formula_31
Since the capacity term is a constant, the first elasticity is zero. The thermodynamic term can be easily shown to be:
formula_32
where formula_33 is the disequilibrium ratio, which equals formula_34, and formula_35 is the mass–action ratio.
The saturation term becomes:
formula_36
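The sub-elasticities above can be computed explicitly and checked against a numerical estimate of the total elasticity obtained by finite differences. The Python sketch below does this for illustrative parameter values; since the capacity term is constant, only the thermodynamic and saturation terms contribute.
<syntaxhighlight lang="python">
import numpy as np

def rate(s, p, Vf=10.0, Ks=0.5, Kp=1.0, Keq=4.0):
    """Reversible Michaelis-Menten rate with illustrative default parameters."""
    return (Vf / Ks) * (s - p / Keq) / (1.0 + s / Ks + p / Kp)

def elasticity_sum(s, p, Ks=0.5, Kp=1.0, Keq=4.0):
    """Thermodynamic plus saturation elasticity (the capacity elasticity is zero)."""
    rho = (p / s) / Keq                               # disequilibrium ratio Gamma/Keq
    thermo = 1.0 / (1.0 - rho)
    sat = -(s / Ks) / (1.0 + s / Ks + p / Kp)
    return thermo + sat

s, p, h = 1.0, 0.2, 1e-6
numeric = (np.log(rate(s * (1 + h), p)) - np.log(rate(s, p))) / np.log(1 + h)
print(elasticity_sum(s, p), numeric)                  # the two values agree closely
</syntaxhighlight>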
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " v = \\frac{V_{\\max} s}{K_\\mathrm{m} + s} "
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "V_\\max"
},
{
"math_id": 3,
"text": "K_\\mathrm{m}"
},
{
"math_id": 4,
"text": " s "
},
{
"math_id": 5,
"text": " des/dt = 0"
},
{
"math_id": 6,
"text": " es "
},
{
"math_id": 7,
"text": "v = v_f - v_r = k_2 es -k_{-2} e\\ p "
},
{
"math_id": 8,
"text": "e_t = e + es "
},
{
"math_id": 9,
"text": "e"
},
{
"math_id": 10,
"text": "e_t"
},
{
"math_id": 11,
"text": " e = e_t - es "
},
{
"math_id": 12,
"text": " \\frac{des}{dt}=k_{1}\\left(e_t - es\\right) s + k_{-2}\\left(e_t - es\\right) p - \\left(k_{-1}+k_{2}\\right) es =0 "
},
{
"math_id": 13,
"text": " e "
},
{
"math_id": 14,
"text": " es = \\frac{\\mathrm{e_t}\\left( k_1 s + k_{-2}\\ p\\right)}{k_{-1} + k_{2} + k_1\\ s + k_{-2}\\ p} "
},
{
"math_id": 15,
"text": " v = k_2 es -k_{-2} e\\ p "
},
{
"math_id": 16,
"text": " v=e_t \\frac{k_1 k_2 s - k_{-1} k_{-2} p}{k_{-1}+k_2+k_1 s + k_{-2} p} "
},
{
"math_id": 17,
"text": " k_{2} = \\frac{V^f_{\\max}}{e_t}; \\quad K^s_m = \\frac{k_{-1}+k_2}{k_1} "
},
{
"math_id": 18,
"text": " k_{-2} = \\frac{V^r_{\\max}}{e_t}; \\quad K^p_{m}=\\frac{k_{-1}+k_2}{k_{-2}} "
},
{
"math_id": 19,
"text": " v=\\frac{\\frac{V^f_{\\max}}{K^s_m} s-\\frac{V^r_{\\max}}{K_m^p} p}{1+\\frac{s}{K_m^s}+\\frac{p}{K_m^p}} "
},
{
"math_id": 20,
"text": " v= 0"
},
{
"math_id": 21,
"text": " p "
},
{
"math_id": 22,
"text": " 0=V^f_{\\max} s_{eq} / K^s_m- V^r_{\\max} p_{eq} / K^p_m "
},
{
"math_id": 23,
"text": " K_{eq}=\\frac{p_{eq}}{s_{eq}}=\\frac{V^f_{\\max} K^p_m}{V^r_{\\max} K^s_m} "
},
{
"math_id": 24,
"text": " v=\\frac{V^f_{\\max} / K^S_m\\left(s - p / K_{eq}\\right)}{1 + s / K^s_m + p / K^p_m} "
},
{
"math_id": 25,
"text": " v=V^f_{\\max} \\cdot \\left(s- p / K_{eq}\\right) \\cdot \\frac{1}{1+ s / K^s_m + p / K^p_m} "
},
{
"math_id": 26,
"text": " V^f_{\\max} "
},
{
"math_id": 27,
"text": " \\left(s- p/K_{eq}\\right) "
},
{
"math_id": 28,
"text": " \\frac{1}{1+ s / K^s_m + p / K^p_m} "
},
{
"math_id": 29,
"text": " \\varepsilon^v_s "
},
{
"math_id": 30,
"text": " \\varepsilon^{a b}_x = \\varepsilon^a_x + \\varepsilon^b_x "
},
{
"math_id": 31,
"text": " \\varepsilon^{v}_s = \\varepsilon^{v_{cap}}_s + \\varepsilon^{v_{thermo}}_s + \\varepsilon^{v_{sat}}_s "
},
{
"math_id": 32,
"text": " \\varepsilon^{v_{thermo}}_s = \\frac{1}{1 - \\rho} "
},
{
"math_id": 33,
"text": " \\rho "
},
{
"math_id": 34,
"text": " \\Gamma/K_{eq}"
},
{
"math_id": 35,
"text": " \\Gamma "
},
{
"math_id": 36,
"text": " \\varepsilon^{v_{sat}}_s = \\frac{-s / K^s_m}{1+ s / K^s_m + p / K^p_m} "
}
]
| https://en.wikipedia.org/wiki?curid=74479627 |
744802 | Stein's lemma | Theorem of probability theory
Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice theory. The theorem gives a formula for the covariance of one random variable with the value of a function of another, when the two random variables are jointly normally distributed.
Note that the name "Stein's lemma" is also commonly used to refer to a different result in the area of statistical hypothesis testing, which connects the error exponents in hypothesis testing with the Kullback–Leibler divergence. This result is also known as the Chernoff–Stein lemma and is not related to the lemma discussed in this article.
Statement of the lemma.
Suppose "X" is a normally distributed random variable with expectation μ and variance σ2.
Further suppose "g" is a differentiable function for which the two expectations E("g"("X") ("X" − μ)) and E("g" ′("X")) both exist.
(The existence of the expectation of any random variable is equivalent to the finiteness of the expectation of its absolute value.)
Then
formula_0
In general, suppose "X" and "Y" are jointly normally distributed. Then
formula_1
For a general multivariate Gaussian random vector formula_2 it follows that
formula_3
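The univariate identity is easy to check by Monte Carlo simulation. The Python sketch below compares the two sides of the lemma for an arbitrary differentiable test function and arbitrary illustrative values of the mean and standard deviation.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

g = np.tanh                                   # arbitrary differentiable test function
g_prime = lambda t: 1.0 - np.tanh(t) ** 2     # its derivative

lhs = np.mean(g(x) * (x - mu))                # estimate of E[g(X)(X - mu)]
rhs = sigma ** 2 * np.mean(g_prime(x))        # estimate of sigma^2 * E[g'(X)]
print(lhs, rhs)                               # the two estimates nearly agree
</syntaxhighlight>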
Proof.
The univariate probability density function for the univariate normal distribution with expectation 0 and variance 1 is
formula_4
Since formula_5 we get from integration by parts:
formula_6.
The case of general mean and variance formula_7 follows by substitution.
More general statement.
Isserlis' theorem is equivalently stated as formula_8 where formula_9 is a zero-mean multivariate normal random vector.
Suppose "X" is in an exponential family, that is, "X" has the density
formula_10
Suppose this density has support formula_11, where formula_12 could be formula_13, and suppose that as formula_14, formula_15, where formula_16 is any differentiable function such that formula_17, or formula_18 if formula_12 is finite. Then
formula_19
The derivation is same as the special case, namely, integration by parts.
If we only know that formula_20 has support formula_21, then it could be the case that formula_22 but formula_23. To see this, simply put formula_24 and let formula_25 have infinitely many spikes towards infinity while remaining integrable. One such example could be adapted from formula_26 so that formula_27 is smooth.
Extensions to elliptically-contoured distributions also exist. | [
{
"math_id": 0,
"text": "E\\bigl(g(X)(X-\\mu)\\bigr)=\\sigma^2 E\\bigl(g'(X)\\bigr)."
},
{
"math_id": 1,
"text": "\\operatorname{Cov}(g(X),Y)= \\operatorname{Cov}(X,Y)E(g'(X))."
},
{
"math_id": 2,
"text": "(X_1, ..., X_n) \\sim N(\\mu, \\Sigma)"
},
{
"math_id": 3,
"text": "E\\bigl(g(X)(X-\\mu)\\bigr)=\\Sigma\\cdot E\\bigl(\\nabla g(X)\\bigr)."
},
{
"math_id": 4,
"text": "\\varphi(x)={1 \\over \\sqrt{2\\pi}}e^{-x^2/2}"
},
{
"math_id": 5,
"text": "\\int x \\exp(-x^2/2)\\,dx = -\\exp(-x^2/2)"
},
{
"math_id": 6,
"text": "E[g(X)X]\n= \\frac{1}{\\sqrt{2\\pi}}\\int g(x) x \\exp(-x^2/2)\\,dx\n= \\frac{1}{\\sqrt{2\\pi}}\\int g'(x) \\exp(-x^2/2)\\,dx\n= E[g'(X)]"
},
{
"math_id": 7,
"text": "\\sigma^2"
},
{
"math_id": 8,
"text": "\\operatorname{E}(X_1 f(X_1,\\ldots,X_n))=\\sum_{i=1}^{n} \\operatorname{Cov}(X_1,X_i)\\operatorname{E}(\\partial_{X_i}f(X_1,\\ldots,X_n))."
},
{
"math_id": 9,
"text": "(X_1,\\dots X_{n})"
},
{
"math_id": 10,
"text": "f_\\eta(x)=\\exp(\\eta'T(x) - \\Psi(\\eta))h(x)."
},
{
"math_id": 11,
"text": "(a,b) "
},
{
"math_id": 12,
"text": " a,b "
},
{
"math_id": 13,
"text": " -\\infty ,\\infty"
},
{
"math_id": 14,
"text": "x\\rightarrow a\\text{ or }b"
},
{
"math_id": 15,
"text": " \\exp (\\eta'T(x))h(x) g(x) \\rightarrow 0"
},
{
"math_id": 16,
"text": "g"
},
{
"math_id": 17,
"text": "E|g'(X)|<\\infty"
},
{
"math_id": 18,
"text": " \\exp (\\eta'T(x))h(x) \\rightarrow 0 "
},
{
"math_id": 19,
"text": "E\\left[\\left(\\frac{h'(X)}{h(X)} + \\sum \\eta_i T_i'(X)\\right)\\cdot g(X)\\right] = -E[g'(X)]. "
},
{
"math_id": 20,
"text": " X "
},
{
"math_id": 21,
"text": " \\mathbb{R} "
},
{
"math_id": 22,
"text": " E|g(X)| <\\infty \\text{ and } E|g'(X)| <\\infty "
},
{
"math_id": 23,
"text": " \\lim_{x\\rightarrow \\infty} f_\\eta(x) g(x) \\not= 0"
},
{
"math_id": 24,
"text": "g(x)=1 "
},
{
"math_id": 25,
"text": " f_\\eta(x) "
},
{
"math_id": 26,
"text": " f(x) = \\begin{cases} 1 & x \\in [n, n + 2^{-n}) \\\\ 0 & \\text{otherwise} \\end{cases} "
},
{
"math_id": 27,
"text": " f"
}
]
| https://en.wikipedia.org/wiki?curid=744802 |
744988 | Empirical Bayes method | A Bayesian statistical inference method in which the prior distribution is estimated from the data
Empirical Bayes methods are procedures for statistical inference in which the prior probability distribution is estimated from the data. This approach stands in contrast to standard Bayesian methods, for which the prior distribution is fixed before any data are observed. Despite this difference in perspective, empirical Bayes may be viewed as an approximation to a fully Bayesian treatment of a hierarchical model wherein the parameters at the highest level of the hierarchy are set to their most likely values, instead of being integrated out. Empirical Bayes, also known as maximum marginal likelihood, represents a convenient approach for setting hyperparameters, but has been mostly supplanted by fully Bayesian hierarchical analyses since the 2000s with the increasing availability of well-performing computation techniques. It is still commonly used, however, for variational methods in Deep Learning, such as variational autoencoders, where latent variable spaces are high-dimensional.
Introduction.
Empirical Bayes methods can be seen as an approximation to a fully Bayesian treatment of a hierarchical Bayes model.
In, for example, a two-stage hierarchical Bayes model, observed data formula_0 are assumed to be generated from an unobserved set of parameters formula_1 according to a probability distribution formula_2. In turn, the parameters formula_3 can be considered samples drawn from a population characterised by hyperparameters formula_4 according to a probability distribution formula_5. In the hierarchical Bayes model, though not in the empirical Bayes approximation, the hyperparameters formula_4 are considered to be drawn from an unparameterized distribution formula_6.
Information about a particular quantity of interest formula_7 therefore comes not only from the properties of those data formula_8 that directly depend on it, but also from the properties of the population of parameters formula_9 as a whole, inferred from the data as a whole, summarised by the hyperparameters formula_10.
Using Bayes' theorem,
formula_11
In general, this integral will not be tractable analytically or symbolically and must be evaluated by numerical methods. Stochastic (random) or deterministic approximations may be used. Example stochastic methods are Markov Chain Monte Carlo and Monte Carlo sampling. Deterministic approximations are discussed in quadrature.
Alternatively, the expression can be written as
formula_12
and the final factor in the integral can in turn be expressed as
formula_13
These suggest an iterative scheme, qualitatively similar in structure to a Gibbs sampler, to evolve successively improved approximations to formula_14 and formula_15. First, calculate an initial approximation to formula_14 ignoring the formula_16 dependence completely; then calculate an approximation to formula_15 based upon the initial approximate distribution of formula_14; then use this formula_15 to update the approximation for formula_14; then update formula_15; and so on.
When the true distribution formula_15 is sharply peaked, the integral determining formula_14 may be not much changed by replacing the probability distribution over formula_10 with a point estimate formula_17 representing the distribution's peak (or, alternatively, its mean),
formula_18
With this approximation, the above iterative scheme becomes the EM algorithm.
The term "Empirical Bayes" can cover a wide variety of methods, but most can be regarded as an early truncation of either the above scheme or something quite like it. Point estimates, rather than the whole distribution, are typically used for the parameter(s) formula_10. The estimates for formula_17 are typically made from the first approximation to formula_14 without subsequent refinement. These estimates for formula_17 are usually made without considering an appropriate prior distribution for formula_16.
Point estimation.
Robbins' method: non-parametric empirical Bayes (NPEB).
Robbins considered a case of sampling from a mixed distribution, where probability for each formula_19 (conditional on formula_20) is specified by a Poisson distribution,
formula_21
while the prior on "θ" is unspecified except that it is also i.i.d. from an unknown distribution, with cumulative distribution function formula_22. Compound sampling arises in a variety of statistical estimation problems, such as accident rates and clinical trials. We simply seek a point prediction of formula_20 given all the observed data. Because the prior is unspecified, we seek to do this without knowledge of "G".
Under squared error loss (SEL), the conditional expectation E("θ""i" | "Y""i" = "y""i") is a reasonable quantity to use for prediction. For the Poisson compound sampling model, this quantity is
formula_23
This can be simplified by multiplying both the numerator and denominator by formula_24, yielding
formula_25
where "pG" is the marginal probability mass function obtained by integrating out "θ" over "G".
To take advantage of this, Robbins suggested estimating the marginals with their empirical frequencies (formula_26), yielding the fully non-parametric estimate as:
formula_27
where formula_28 denotes "number of". (See also Good–Turing frequency estimation.)
Suppose each customer of an insurance company has an "accident rate" Θ and is insured against accidents; the probability distribution of Θ is the underlying distribution, and is unknown. The number of accidents suffered by each customer in a specified time period has a Poisson distribution with expected value equal to the particular customer's accident rate. The actual number of accidents experienced by a customer is the observable quantity. A crude way to estimate the underlying probability distribution of the accident rate Θ is to estimate the proportion of members of the whole population suffering 0, 1, 2, 3, ... accidents during the specified time period as the corresponding proportion in the observed random sample. Having done so, it is then desired to predict the accident rate of each customer in the sample. As above, one may use the conditional expected value of the accident rate Θ given the observed number of accidents during the baseline period. Thus, if a customer suffers six accidents during the baseline period, that customer's estimated accident rate is 7 × [the proportion of the sample who suffered 7 accidents] / [the proportion of the sample who suffered 6 accidents]. Note that if the proportion of people suffering "k" accidents is a decreasing function of "k", the customer's predicted accident rate will often be lower than their observed number of accidents.
This shrinkage effect is typical of empirical Bayes analyses.
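Robbins' estimator requires only the empirical frequencies of the observed counts. The Python sketch below applies it to synthetic data drawn from a gamma-Poisson mixture; the data and the gamma parameters are purely illustrative.
<syntaxhighlight lang="python">
import numpy as np
from collections import Counter

def robbins_estimate(counts):
    """Non-parametric empirical Bayes prediction (y + 1) * #{Y = y + 1} / #{Y = y}."""
    freq = Counter(counts)
    return np.array([(y + 1) * freq.get(y + 1, 0) / freq[y] for y in counts])

# Synthetic accident counts for illustration only.
rng = np.random.default_rng(1)
theta = rng.gamma(shape=2.0, scale=0.5, size=10_000)   # unknown underlying rates
y = rng.poisson(theta)                                  # observed counts

estimates = robbins_estimate(y)
print(y[:8])
print(np.round(estimates[:8], 2))   # predictions shrunk relative to raw counts
</syntaxhighlight>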
Parametric empirical Bayes.
If the likelihood and its prior take on simple parametric forms (such as 1- or 2-dimensional likelihood functions with simple conjugate priors), then the empirical Bayes problem is only to estimate the marginal formula_29 and the hyperparameters formula_16 using the complete set of empirical measurements. For example, one common approach, called parametric empirical Bayes point estimation, is to approximate the marginal using the maximum likelihood estimate (MLE), or a moments expansion, which allows one to express the hyperparameters formula_16 in terms of the empirical mean and variance. This simplified marginal allows one to plug in the empirical averages into a point estimate for the prior formula_3. The resulting equation for the prior formula_3 is greatly simplified, as shown below.
There are several common parametric empirical Bayes models, including the Poisson–gamma model (below), the Beta-binomial model, the Gaussian–Gaussian model, the Dirichlet-multinomial model, as well as specific models for Bayesian linear regression (see below) and Bayesian multivariate linear regression. More advanced approaches include hierarchical Bayes models and Bayesian mixture models.
Gaussian–Gaussian model.
For an example of empirical Bayes estimation using a Gaussian-Gaussian model, see Empirical Bayes estimators.
Poisson–gamma model.
For example, in the example above, let the likelihood be a Poisson distribution, and let the prior now be specified by the conjugate prior, which is a gamma distribution (formula_30) (where formula_31):
formula_32
It is straightforward to show the posterior is also a gamma distribution. Write
formula_33
where the marginal distribution has been omitted since it does not depend explicitly on formula_3.
Expanding terms which do depend on formula_3 gives the posterior as:
formula_34
So the posterior density is also a gamma distribution formula_35, where formula_36, and formula_37. Also notice that the marginal is simply the integral of the posterior over all formula_38, which turns out to be a negative binomial distribution.
To apply empirical Bayes, we will approximate the marginal using the maximum likelihood estimate (MLE). But since the posterior is a gamma distribution, the MLE of the marginal turns out to be just the mean of the posterior, which is the point estimate formula_39 we need. Recalling that the mean formula_40 of a gamma distribution formula_41 is simply formula_42, we have
formula_43
To obtain the values of formula_44 and formula_45, empirical Bayes prescribes estimating mean formula_46 and variance formula_47 using the complete set of empirical data.
The resulting point estimate formula_48 is therefore like a weighted average of the sample mean formula_49 and the prior mean formula_50. This turns out to be a general feature of empirical Bayes; the point estimates for the prior (i.e. the mean) will look like weighted averages of the sample estimate and the prior estimate (likewise for estimates of the variance).
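A minimal Python sketch of the Poisson-gamma point estimate is given below. The step used to recover formula_44 and formula_45 from the data, matching the marginal (negative binomial) moments E[y] = αβ and Var[y] = αβ + αβ2, is one simple illustrative choice of hyperparameter estimation and is an assumption of this sketch rather than a prescription from the text above.
<syntaxhighlight lang="python">
import numpy as np

def poisson_gamma_eb(y):
    """Empirical Bayes point estimates E(theta | y) under a gamma prior.

    alpha and beta are recovered by matching the marginal moments
    E[y] = alpha*beta and Var[y] = alpha*beta + alpha*beta**2; this is one
    illustrative way of estimating the hyperparameters, not the only one.
    """
    y = np.asarray(y, dtype=float)
    mean, var = y.mean(), y.var()
    beta = max(var - mean, 1e-9) / mean          # (alpha*beta**2) / (alpha*beta)
    alpha = mean / beta
    weight = beta / (1.0 + beta)
    # Posterior mean for each individual count, shrunk towards the prior mean.
    return weight * y + (1.0 - weight) * alpha * beta

counts = [0, 1, 1, 2, 3, 0, 5, 2, 1, 0]          # synthetic data for illustration
print(np.round(poisson_gamma_eb(counts), 2))
</syntaxhighlight>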
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y = \\{y_1, y_2, \\dots, y_n\\}"
},
{
"math_id": 1,
"text": "\\theta = \\{\\theta_1, \\theta_2, \\dots, \\theta_n\\}"
},
{
"math_id": 2,
"text": "p(y\\mid\\theta)\\,"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "\\eta\\,"
},
{
"math_id": 5,
"text": "p(\\theta\\mid\\eta)\\,"
},
{
"math_id": 6,
"text": "p(\\eta)\\,"
},
{
"math_id": 7,
"text": "\\theta_i\\;"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "\\theta\\;"
},
{
"math_id": 10,
"text": "\\eta\\;"
},
{
"math_id": 11,
"text": "\np(\\theta\\mid y)\n= \\frac{p(y \\mid \\theta) p(\\theta)}{p(y)}\n= \\frac {p(y \\mid \\theta)}{p(y)} \\int p(\\theta \\mid \\eta) p(\\eta) \\, d\\eta \\,.\n"
},
{
"math_id": 12,
"text": "\np(\\theta\\mid y)\n= \\int p(\\theta\\mid\\eta, y) p(\\eta \\mid y) \\; d \\eta\n= \\int \\frac{p(y \\mid \\theta) p(\\theta \\mid \\eta)}{p(y \\mid \\eta)} p(\\eta \\mid y) \\; d \\eta\\,,\n"
},
{
"math_id": 13,
"text": "\n p(\\eta \\mid y) = \\int p(\\eta \\mid \\theta) p(\\theta \\mid y) \\; d \\theta .\n"
},
{
"math_id": 14,
"text": "p(\\theta\\mid y)\\;"
},
{
"math_id": 15,
"text": "p(\\eta\\mid y)\\;"
},
{
"math_id": 16,
"text": "\\eta"
},
{
"math_id": 17,
"text": "\\eta^{*}\\;"
},
{
"math_id": 18,
"text": "\n p(\\theta\\mid y) \\simeq \\frac{p(y \\mid \\theta) \\; p(\\theta \\mid \\eta^{*})}{p(y \\mid \\eta^{*})}\\,.\n"
},
{
"math_id": 19,
"text": "y_i"
},
{
"math_id": 20,
"text": "\\theta_i"
},
{
"math_id": 21,
"text": "p(y_i\\mid\\theta_i)={{\\theta_i}^{y_i} e^{-\\theta_i} \\over {y_i}!}"
},
{
"math_id": 22,
"text": "G(\\theta)"
},
{
"math_id": 23,
"text": "\\operatorname{E}(\\theta_i\\mid y_i) = {\\int (\\theta^{y_i+1} e^{-\\theta} / {y_i}!)\\,dG(\\theta) \\over {\\int (\\theta^{y_i} e^{-\\theta} / {y_i}!)\\,dG(\\theta}) }."
},
{
"math_id": 24,
"text": "({y_i}+1)"
},
{
"math_id": 25,
"text": " \\operatorname{E}(\\theta_i\\mid y_i)= {{(y_i + 1) p_G(y_i + 1) }\\over {p_G(y_i)}},"
},
{
"math_id": 26,
"text": " \\#\\{Y_j\\}"
},
{
"math_id": 27,
"text": " \\operatorname{E}(\\theta_i\\mid y_i) \\approx (y_i + 1) { {\\#\\{Y_j = y_i + 1\\}} \\over {\\#\\{ Y_j = y_i\\}} },"
},
{
"math_id": 28,
"text": "\\#"
},
{
"math_id": 29,
"text": "m(y\\mid\\eta)"
},
{
"math_id": 30,
"text": "G(\\alpha,\\beta)"
},
{
"math_id": 31,
"text": "\\eta = (\\alpha,\\beta)"
},
{
"math_id": 32,
"text": " \\rho(\\theta\\mid\\alpha,\\beta) \\, d\\theta = \\frac{(\\theta/\\beta)^{\\alpha-1} \\, e^{-\\theta / \\beta} }{\\Gamma(\\alpha)} \\, (d\\theta/\\beta) \\text{ for } \\theta > 0, \\alpha > 0, \\beta > 0 \\,\\! ."
},
{
"math_id": 33,
"text": " \\rho(\\theta\\mid y) \\propto \\rho(y\\mid \\theta) \\rho(\\theta\\mid\\alpha, \\beta) ,"
},
{
"math_id": 34,
"text": " \\rho(\\theta\\mid y) \\propto (\\theta^y\\, e^{-\\theta}) (\\theta^{\\alpha-1}\\, e^{-\\theta / \\beta}) = \\theta^{y+ \\alpha -1}\\, e^{- \\theta (1+1 / \\beta)} . "
},
{
"math_id": 35,
"text": "G(\\alpha',\\beta')"
},
{
"math_id": 36,
"text": "\\alpha' = y + \\alpha"
},
{
"math_id": 37,
"text": "\\beta' = (1+1 / \\beta)^{-1}"
},
{
"math_id": 38,
"text": "\\Theta"
},
{
"math_id": 39,
"text": "\\operatorname{E}(\\theta\\mid y)"
},
{
"math_id": 40,
"text": "\\mu"
},
{
"math_id": 41,
"text": "G(\\alpha', \\beta')"
},
{
"math_id": 42,
"text": "\\alpha' \\beta'"
},
{
"math_id": 43,
"text": " \\operatorname{E}(\\theta\\mid y) = \\alpha' \\beta' = \\frac{\\bar{y}+\\alpha}{1+1 / \\beta} = \\frac{\\beta}{1+\\beta}\\bar{y} + \\frac{1}{1+\\beta} (\\alpha \\beta). "
},
{
"math_id": 44,
"text": "\\alpha"
},
{
"math_id": 45,
"text": "\\beta"
},
{
"math_id": 46,
"text": "\\alpha\\beta"
},
{
"math_id": 47,
"text": "\\alpha\\beta^2"
},
{
"math_id": 48,
"text": " \\operatorname{E}(\\theta\\mid y) "
},
{
"math_id": 49,
"text": "\\bar{y}"
},
{
"math_id": 50,
"text": "\\mu = \\alpha\\beta"
}
]
| https://en.wikipedia.org/wiki?curid=744988 |
74500588 | DPQ | DPQ or Dpq can refer to:
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This disambiguation page lists articles associated with the title DPQ.
{
"math_id": 0,
"text": "Dpq"
},
{
"math_id": 1,
"text": "D_{PQ}"
}
]
| https://en.wikipedia.org/wiki?curid=74500588 |
74503991 | Oppenheimer–Snyder model | Exact solution to the Einstein field equations
In general relativity, the Oppenheimer–Snyder model is a solution to the Einstein field equations based on the Schwarzschild metric describing the collapse of an object of extreme mass into a black hole. It is named after physicists J. Robert Oppenheimer and Hartland Snyder, who published it in 1939.
During the collapse of a star to a black hole the geometry on the outside of the sphere is the Schwarzschild geometry. However the geometry inside is, curiously enough, the same Robertson-Walker geometry as in the rest of the observable universe.
History.
Albert Einstein, who had developed his theory of general relativity in 1915, initially denied the possibility of black holes, even though they were a genuine implication of the Schwarzschild metric, obtained by Karl Schwarzschild in 1916, the first known non-trivial exact solution to Einstein's field equations. In 1939, Einstein published "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses" in the "Annals of Mathematics", claiming to provide "a clear understanding as to why these 'Schwarzschild singularities' do not exist in physical reality."
Months after the issuing of Einstein's article, J. Robert Oppenheimer and his student Hartland Snyder studied this topic with their paper "On Continued Gravitational Contraction", making the opposite argument to Einstein's. They showed that when a sufficiently massive star runs out of thermonuclear fuel, it will undergo continued gravitational contraction and become separated from the rest of the universe by a boundary called the event horizon, from which not even light can escape. This paper predicted the existence of what are today known as black holes. The term "black hole" was coined decades later, in the fall of 1967, by John Archibald Wheeler at a conference held by the Goddard Institute for Space Studies in New York City; it appeared for the first time in print the following year. Oppenheimer and Snyder used Einstein's own theory of gravity to prove how black holes could develop for the first time in contemporary physics, but without referencing the aforementioned article by Einstein. Oppenheimer and Snyder did, however, refer to an earlier article by Oppenheimer and Volkoff on neutron stars, improving upon the work of Lev Davidovich Landau. Previously, and in the same year, Oppenheimer and three colleagues, Richard Tolman, Robert Serber, and George Volkoff, had investigated the stability of neutron stars, obtaining the Tolman-Oppenheimer-Volkoff limit. Oppenheimer would not revisit the topic in future publications.
Model.
The Oppenheimer–Snyder model of continued gravitational collapse is described by the line element
formula_0
The quantities appearing in this expression are as follows:
formula_8
This expression is valid both in the matter region formula_10, and the vacuum region formula_11, and continuously transitions between the two.
Reception and legacy.
Kip Thorne recalled that physicists were initially skeptical of the model, viewing it as "truly strange" at the time. He explained further, "It was hard for people of that era to understand the paper because the things that were being smoked out of the mathematics were so different from any mental picture of how things should behave in the universe." Oppenheimer himself thought little of this discovery. However, some considered the model's discovery to be more significant than Oppenheimer did, and the model would later be described as forward-thinking. Freeman Dyson thought it was Oppenheimer's greatest contribution to science. Lev Davidovich Landau added the Oppenheimer-Snyder paper to his "golden list" of classic papers. John Archibald Wheeler was initially an opponent of the model until the late 1950s, when he was asked to teach a course on general relativity at Princeton University. Wheeler claimed at a conference in 1958 that the Oppenheimer-Snyder model had neglected the many features of a realistic star. However, he later changed his mind completely after being informed by Edward Teller that a computer simulation run by Stirling Colgate and his team at the Lawrence Livermore National Laboratory had shown that a sufficiently heavy star would undergo continued gravitational contraction in a manner similar to the idealized scenario described by Oppenheimer and Snyder. Wheeler subsequently played a key role in reviving interest in general relativity in the United States, and popularized the term "black hole" in the late 1960s. Various theoretical physicists pursued this topic and by the late 1960s and early 1970s, advances in observational astronomy, such as radio telescopes, changed the attitude of the scientific community. Pulsars had already been discovered and black holes were no longer considered mere textbook curiosities. Cygnus X-1, the first solid black-hole candidate, was discovered by the "Uhuru" X-ray space telescope in 1971. Jeremy Bernstein described it as "one of the great papers in twentieth-century physics."
After winning the Nobel Prize in Physics in 2020, Roger Penrose would credit the Oppenheimer–Snyder model as one of his inspirations for research.
"The Hindu" wrote in 2023:
<templatestyles src="Template:Blockquote/styles.css" />The world of physics does indeed remember the paper. While Oppenheimer is remembered in history as the “father of the atomic bomb”, his greatest contribution as a physicist was on the physics of black holes. The work of Oppenheimer and Hartland Snyder helped transform black holes from figments of mathematics to real, physical possibilities – something to be found in the cosmos out there.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nds^2 = -d\\tau^2 + A^2(\\eta) \\left(\\frac{dR^2}{1 - 2M \\frac{R^2_-}{R^2_b} \\frac{1}{R_+}} + R^2 d\\Omega^2 \\right)\n"
},
{
"math_id": 1,
"text": "(\\tau, R, \\theta, \\phi)"
},
{
"math_id": 2,
"text": "\\theta, \\phi"
},
{
"math_id": 3,
"text": "R_b"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "R_- = \\mathrm{min}(R, R_b)"
},
{
"math_id": 6,
"text": "R_+ = \\mathrm{max}(R, R_b)"
},
{
"math_id": 7,
"text": "\\eta"
},
{
"math_id": 8,
"text": " \\tau(\\eta, R) = \\frac{1}{2}\\sqrt \\frac{R_+^3}{2M} (\\eta + \\sin \\eta)."
},
{
"math_id": 9,
"text": "A(\\eta) = \\frac{1 + \\cos \\eta}{2}"
},
{
"math_id": 10,
"text": "R < R_b"
},
{
"math_id": 11,
"text": "R > R_b"
}
]
| https://en.wikipedia.org/wiki?curid=74503991 |
7450513 | DKL | DKL may refer to:
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This disambiguation page lists articles associated with the title DKL.
{
"math_id": 0,
"text": "D_\\text{KL}"
}
]
| https://en.wikipedia.org/wiki?curid=7450513 |
74515112 | Galia Dafni | Mathematician
Galia Devora Dafni is a mathematician specializing in harmonic analysis and function spaces. Educated in the US, she works in Canada as a professor of mathematics and statistics at Concordia University. She is also affiliated with the Centre de Recherches Mathématiques, where she is deputy director for publications and communication.
Education.
Dafni lived in Texas as a teenager. After beginning her undergraduate studies at the University of Texas at Austin, Dafni transferred to Pennsylvania State University, where she earned a bachelor's degree in 1988 in mathematics and computer science, "with highest distinction and with honors in mathematics". She went to Princeton University for graduate study in mathematics, earning a master's degree in 1990 and completing her Ph.D. in 1993. Her doctoral dissertation, "Hardy Spaces on Strongly Pseudoconvex Domains in formula_0 and Domains of Finite Type in formula_1", was supervised by Elias M. Stein.
Career.
After another year as an instructor at Princeton, Dafni continued through three postdoctoral positions: as Charles B. Morrey Jr. Assistant Professor of Mathematics at the University of California, Berkeley from 1994 to 1996, as Ralph Boas Assistant Professor of Mathematics at Northwestern University from 1996 to 1998, and as a postdoctoral fellow and research assistant professor at Concordia University from 1998 to 2000. Her move to Montreal and Concordia was motivated in part by a two-body problem with her husband, who also worked in Montreal.
Finally, in 2000, she obtained a regular-rank assistant professorship at Concordia, supported by a 5-year NSERC University Faculty Award, through a program to support women in STEM. She obtained tenure there as an associate professor in 2005, and has since become a full professor.
Personal life.
Dafni is married to Henri Darmon, a mathematician at another Montreal university, McGill University. They met in the early 1990s at Princeton, where Darmon was a postdoctoral researcher.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C^n"
},
{
"math_id": 1,
"text": "C^2"
}
]
| https://en.wikipedia.org/wiki?curid=74515112 |
7451902 | Pipe network analysis | In fluid dynamics, pipe network analysis is the analysis of the fluid flow through a hydraulics network, containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design.
Description.
To direct water to many users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes. This network creates a special class of problems in hydraulic design, with solution methods typically referred to as "pipe network analysis". Water utilities generally make use of specialized software to automatically solve these problems. However, many such problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver, or a modern graphing calculator.
Deterministic network analysis.
Once the friction factors of the pipes are obtained (or calculated from pipe friction laws such as the Darcy-Weisbach equation), we can consider how to calculate the flow rates and head losses on the network. Generally the head losses (potential differences) at each node are neglected, and a solution is sought for the steady-state flows on the network, taking into account the pipe specifications (lengths and diameters), pipe friction properties and known flow rates or head losses.
The steady-state flows on the network must satisfy two conditions:
If there are sufficient known flow rates, so that the system of equations given by (1) and (2) above is closed (number of unknowns = number of equations), then a "deterministic" solution can be obtained.
The classical approach for solving these networks is to use the Hardy Cross method. In this formulation, first you go through and create guess values for the flows in the network. The flows are expressed via the volumetric flow rates Q. The initial guesses for the Q values must satisfy the Kirchhoff laws (1). That is, if Q7 enters a junction and Q6 and Q4 leave the same junction, then the initial guess must satisfy Q7 = Q6 + Q4. After the initial guess is made, then, a loop is considered so that we can evaluate our second condition. Given a starting node, we work our way around the loop in a clockwise fashion, as illustrated by Loop 1. We add up the head losses according to the Darcy–Weisbach equation for each pipe if Q is in the same direction as our loop like Q1, and subtract the head loss if the flow is in the reverse direction, like Q4. In other words, we add the head losses around the loop in the direction of the loop; depending on whether the flow is with or against the loop, some pipes will have head losses and some will have head gains (negative losses).
To satisfy the Kirchhoff's second laws (2), we should end up with 0 about each loop at the steady-state solution. If the actual sum of our head loss is not equal to 0, then we will adjust all the flows in the loop by an amount given by the following formula, where a positive adjustment is in the clockwise direction.
formula_0
where "n" is the exponent in the head-loss relation ("n" = 2 when the Darcy–Weisbach equation is used) and Δ"Q" is the flow correction applied to every pipe in the loop. The clockwise specifier (c) refers only to the flows that are moving clockwise in our loop, while the counter-clockwise specifier (cc) refers only to the flows that are moving counter-clockwise.
This adjustment doesn't solve the problem, since most networks have several loops. It is okay to use this adjustment, however, because the flow changes won't alter condition 1, and therefore, the other loops still satisfy condition 1. However, we should use the results from the first loop before we progress to other loops.
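The single-loop correction can be written compactly, as in the Python sketch below, which assumes the Darcy-Weisbach form of head loss (so "n" = 2 by default) and uses arbitrary illustrative resistance coefficients; flows are signed, positive when the assumed direction is clockwise around the loop.
<syntaxhighlight lang="python">
def loop_correction(flows, k, n=2):
    """Hardy Cross flow correction for a single loop.

    flows: signed flow rates, positive if the assumed direction is clockwise
    around the loop; k: resistance coefficients with head loss h = k * |Q|**(n-1) * Q.
    """
    head_loss_sum = sum(ki * abs(q) ** (n - 1) * q for ki, q in zip(k, flows))
    derivative_sum = sum(n * ki * abs(q) ** (n - 1) for ki, q in zip(k, flows))
    return -head_loss_sum / derivative_sum

# Illustrative loop of four pipes; positive flows are clockwise.
Q = [1.2, 0.8, -0.5, -1.0]
k = [2.0, 1.5, 3.0, 1.0]
dQ = loop_correction(Q, k)
print(dQ)   # add this clockwise-positive correction to every pipe in the loop
</syntaxhighlight>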
An adaptation of this method is needed to account for water reservoirs attached to the network, which are joined in pairs by the use of 'pseudo-loops' in the Hardy Cross scheme. This is discussed further on the Hardy Cross method site.
The modern method is simply to create a set of conditions from the above Kirchhoff laws (junctions and head-loss criteria). Then, use a root-finding algorithm to find "Q" values that satisfy all the equations. The friction loss equations contain a "Q"2 term, but we want to preserve any changes in direction. Create a separate equation for each loop where the head losses are added up, but instead of squaring "Q", use |"Q"|·"Q" instead (with |"Q"| the absolute value of "Q") for the formulation so that any sign changes reflect appropriately in the resulting head-loss calculation.
Probabilistic network analysis.
In many situations, especially for real water distribution networks in cities (which can extend between thousands to millions of nodes), the number of known variables (flow rates and/or head losses) required to obtain a deterministic solution will be very large. Many of these variables will not be known, or will involve considerable uncertainty in their specification. Furthermore, in many pipe networks, there may be considerable variability in the flows, which can be described by fluctuations about mean flow rates in each pipe. The above deterministic methods are unable to account for these uncertainties, whether due to lack of knowledge or flow variability.
For these reasons, a probabilistic method for pipe network analysis has recently been developed, based on the maximum entropy method of Jaynes. In this method, a continuous relative entropy function is defined over the unknown parameters. This entropy is then maximized subject to the constraints on the system, including Kirchhoff's laws, pipe friction properties and any specified mean flow rates or head losses, to give a probabilistic statement (probability density function) which describes the system. This can be used to calculate mean values (expectations) of the flow rates, head losses or any other variables of interest in the pipe network. This analysis has been extended using a reduced-parameter entropic formulation, which ensures consistency of the analysis regardless of the graphical representation of the network. A comparison of Bayesian and maximum entropy probabilistic formulations for the analysis of pipe flow networks has also been presented, showing that under certain assumptions (Gaussian priors), the two approaches lead to equivalent predictions of mean flow rates.
Other methods of stochastic optimization of water distribution systems rely on metaheuristic algorithms, such as simulated annealing and genetic algorithms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta Q = - \\frac{ \\sum{\\scriptstyle\\text{head loss}_c} - \\sum{\\scriptstyle\\text{head loss}_{cc}}}{n \\cdot (\\sum\\frac{\\text{head loss}_c}{Q_c} + \\sum\\frac{\\text{head loss}_{cc}}{Q_{cc}})},"
}
]
| https://en.wikipedia.org/wiki?curid=7451902 |
74527567 | Hand's paradox | Statistical paradox
In statistics, Hand's paradox arises from ambiguity when comparing two treatments. It shows that a comparison of the effects of the treatments applied to two independent groups can contradict a comparison between the effects of both treatments applied to a single group.
Paradox.
Comparisons of two treatments often involve comparing the responses of a random sample of patients receiving one treatment with an independent random sample receiving the other. One commonly used measure of the difference is then the probability that a randomly chosen member of one group will have a higher score than a randomly chosen member of the other group. However, in many situations, interest really lies in which of the two treatments will give a randomly chosen patient the greater probability of doing better. These two measures, a comparison between two randomly chosen patients, one from each group, and a comparison of treatment effects on a randomly chosen patient, can lead to different conclusions.
This has been called Hand's paradox, and appears to have first been described by David J. Hand.
Examples.
Example 1.
Label the two treatments A and B and suppose that:
Patient 1 would have response values 2 and 3 to A and B respectively. Patient 2 would have response values 4 and 5 to A and B respectively. Patient 3 would have response values 6 and 1 to A and B respectively.
Then the probability that the response to A of a randomly chosen patient is greater than the response to B of a randomly chosen patient is 6/9 = 2/3. But the probability that a randomly chosen patient will have a greater response to A than B is 1/3. Thus a simple comparison of two independent groups may suggest that patients have a higher probability of doing better under A, whereas in fact patients have a higher probability of doing better under B.
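The two probabilities in this example can be checked directly by enumeration; the short Python sketch below simply reproduces the counting argument above and is only an illustration.
<syntaxhighlight lang="python">
# Enumerating the comparisons in Example 1.
from itertools import product

responses = {1: (2, 3), 2: (4, 5), 3: (6, 1)}   # patient: (response to A, response to B)
a_values = [a for a, b in responses.values()]
b_values = [b for a, b in responses.values()]

# a randomly chosen patient on A versus an independently chosen patient on B
p_between = sum(a > b for a, b in product(a_values, b_values)) / 9   # 6/9 = 2/3

# the same randomly chosen patient under A versus under B
p_within = sum(a > b for a, b in responses.values()) / 3             # 1/3

print(p_between, p_within)
</syntaxhighlight>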
Example 2.
Suppose we have two random variables, formula_0 and formula_1, corresponding to the effects of two treatments. If we assume that formula_2 and formula_3 are independent, then formula_4, suggesting that A is more likely to benefit a patient than B. In contrast, the joint distribution which "minimizes" formula_5 leads to formula_6. This means that it is possible that in up to 62% of cases treatment B is better than treatment A.
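Under the independence assumption, formula_4 follows because the difference of the two variables is normally distributed with mean 1 and variance 2. A short Monte Carlo check of this value (illustrative only) is given below.
<syntaxhighlight lang="python">
# Monte Carlo check of Example 2 under the independence assumption:
# x_A ~ N(1, 1) and x_B ~ N(0, 1) independent, so x_A - x_B ~ N(1, 2) and
# P(x_A > x_B) = Phi(1/sqrt(2)) ≈ 0.76.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
xa = rng.normal(1, 1, 1_000_000)
xb = rng.normal(0, 1, 1_000_000)
print((xa > xb).mean())            # ≈ 0.76 by simulation
print(norm.cdf(1 / np.sqrt(2)))    # exact value ≈ 0.760
</syntaxhighlight>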
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_A \\sim N(1,1)"
},
{
"math_id": 1,
"text": "x_B \\sim N(0,1)"
},
{
"math_id": 2,
"text": "x_A"
},
{
"math_id": 3,
"text": "x_B"
},
{
"math_id": 4,
"text": " \\Pr(x_A > x_B) = 0.76 "
},
{
"math_id": 5,
"text": "Pr(x_A>x_B )"
},
{
"math_id": 6,
"text": "Pr(x_A>x_B )\\ge 0.38"
}
]
| https://en.wikipedia.org/wiki?curid=74527567 |
74529407 | Slab method | Ray-box intersection method
In computer graphics, the slab method is an algorithm used to solve the ray-box intersection problem in the case of an axis-aligned bounding box (AABB), i.e. to determine the intersection points between a ray and the box. Because it is efficient and lends itself to a branch-free implementation, it is widely used in computer graphics applications.
Algorithm.
The idea behind the algorithm is to clip the ray with the planes containing the six faces of the box. Each pair of parallel planes defines a slab, and the volume contained in the box is the intersection of the three slabs. Therefore, the portion of ray within the box (if any, given that the ray effectively intersects the box) will be given by the intersection of the portions of ray within each of the three slabs.
A tridimensional AABB can be represented by two triples formula_0 and formula_1 denoting the low and high bounds of the box along each dimension. A point formula_2 along a ray with origin formula_3 and direction formula_4 can be expressed in parametric form as
formula_5.
Assuming that all intersections exist, i.e. formula_6, solving for formula_7 gives
formula_8
and therefore the two intersections of the ray with the two planes orthogonal to the formula_9-th coordinate axis will be given by
formula_10
The close and far extrema of the segment within the formula_9-th slab will be given by
formula_11
and the intersection of all these segments is
formula_12
The resulting segment lies inside the box, and therefore an intersection exists, only if formula_13. The sign of formula_14 determines whether the intersection happens ahead of or behind the origin of the ray, which is relevant in applications such as ray casting, where only intersections in front of the camera are of interest. The two intersection points will therefore be given by
formula_15
While the equations above are well defined for real-valued variables only if formula_6, i.e. if the ray is not parallel to any of the coordinate axes, the algorithm can be applied to an extended real number arithmetic (such as the one implemented by IEEE 754) to handle rays parallel to an axis, as long as the origin of the ray itself does not lie on one of the faces of the bounding box. In such arithmetic, the intersections with the planes parallel to the ray will be given by formula_16 or formula_17, and the algorithm will still work as expected. If the origin lies on a face of the bounding box, then for some formula_9 it will happen that formula_18, which is undefined (in IEEE 754 it is represented by NaN). However, implementations of the IEEE 754-2008 minNum and maxNum functions will treat NaN as a missing value, and when comparing a well-defined value with a NaN they will always return the well-defined value, and they will therefore be able to handle even such corner case. An alternative approach to handle corner cases is to avoid divisions by zero altogether, which can be achieved by replacing the inverse of zero with a large arbitrary constant number.
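A minimal NumPy sketch of the method described above is shown below. Variable names are illustrative, and the IEEE 754 behaviour of division by zero is relied upon to handle axis-parallel rays, assuming the ray origin does not lie exactly on a face of the box.
<syntaxhighlight lang="python">
# A minimal sketch of the slab method following the equations above.
# `low` and `high` are the box bounds; `origin` and `direction` describe the ray.
import numpy as np

def ray_aabb_intersect(origin, direction, low, high):
    """Return (t_close, t_far) if the ray hits the box, otherwise None."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    low, high = np.asarray(low, float), np.asarray(high, float)

    with np.errstate(divide="ignore"):       # axis-parallel rays yield +/- inf
        t_low = (low - origin) / direction
        t_high = (high - origin) / direction

    t_close = np.max(np.minimum(t_low, t_high))   # entry: latest of the per-slab minima
    t_far = np.min(np.maximum(t_low, t_high))     # exit: earliest of the per-slab maxima

    return (float(t_close), float(t_far)) if t_close <= t_far else None

# Example: a ray from the origin along (1, 1, 1) through the box [1, 2]^3
print(ray_aabb_intersect([0, 0, 0], [1, 1, 1], [1, 1, 1], [2, 2, 2]))  # entry at t = 1, exit at t = 2
</syntaxhighlight>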
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol l = (l_0, l_1, l_2)"
},
{
"math_id": 1,
"text": "\\boldsymbol h = (h_0, h_1, h_2)"
},
{
"math_id": 2,
"text": "\\boldsymbol p(t) = (p_0(t), p_1(t), p_2(t))"
},
{
"math_id": 3,
"text": "\\boldsymbol o = (o_0, o_1, o_2)"
},
{
"math_id": 4,
"text": "\\boldsymbol r = (r_0, r_1, r_2)"
},
{
"math_id": 5,
"text": "\\boldsymbol p(t) = \\boldsymbol o + t \\boldsymbol r"
},
{
"math_id": 6,
"text": "r_i \\ne 0 \\; \\forall i"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "t = \\frac{\\boldsymbol p - \\boldsymbol o}{\\boldsymbol r}"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "\n\\begin{align}\nt_i^{\\text{low}} &= \\frac{l_i - o_i}{r_i} \\\\\nt_i^{\\text{high}} &= \\frac{h_i - o_i}{r_i} .\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\nt_i^{\\text{close}} &= \\min \\left\\{ t_i^{\\text{low}}, t_i^{\\text{high}} \\right\\} \\\\\nt_i^{\\text{far}} &= \\max \\left\\{ t_i^{\\text{low}}, t_i^{\\text{high}} \\right\\}\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\n\\begin{align}\nt^{\\text{close}} &= \\max_i \\left\\{ t_i^{\\text{close}} \\right\\} \\\\\nt^{\\text{far}} &= \\min_i \\left\\{ t_i^{\\text{far}} \\right\\} .\n\\end{align}\n"
},
{
"math_id": 13,
"text": "t^{\\text{close}} \\le t^{\\text{far}}"
},
{
"math_id": 14,
"text": "t^{\\text{close}}"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\boldsymbol p^{\\text{close}} &= \\boldsymbol o + t^{\\text{close}} \\boldsymbol r \\\\\n\\boldsymbol p^{\\text{far}} &= \\boldsymbol o + t^{\\text{far}} \\boldsymbol r .\n\\end{align}\n"
},
{
"math_id": 16,
"text": "t = +\\infty"
},
{
"math_id": 17,
"text": "t = -\\infty"
},
{
"math_id": 18,
"text": "t_i = \\frac{0}{0}"
}
]
| https://en.wikipedia.org/wiki?curid=74529407 |
74532862 | Albert Strickler | Swiss mechanical engineer
Albert Strickler (25 July 1887 – 1 February 1963) was a Swiss mechanical engineer recognized for contributions to our understanding of hydraulic roughness in open channel and pipe flow. Strickler proposed that hydraulic roughness could be characterized as a function of measurable surface roughness and described the concept of relative roughness, the ratio of hydraulic radius to surface roughness. He applied these concepts to the development of a dimensionally homogeneous form of the Manning formula.
Life.
Albert Strickler was the only child of Albert Strickler, Sr. (1853–1936) and Maria Auguste Flentjen (1863–1945) of Wädenswil, Canton of Zürich, Switzerland. He was married twice, the second time as a widower. Neither marriage produced children.
Strickler graduated from ETH Zurich as a mechanical engineer in 1911. He earned a Ph.D. in 1917 while serving as the principal assistant to Professor Franz Prasil (1857–1929). Throughout his career, he was involved in the development of hydropower with interests ranging from hydraulic machinery to the regulation of river flows for inland navigation. Prior to World War II, he was the vice president of the Association of Exporting Electricity and a member of the board of directors on the Gotthard Electricity Mains AG, Altdorf, Uri. He subsequently worked as an engineering consultant until illness forced his withdrawal from practice in 1950.
Strickler's Equation.
In 1923, Strickler published a report examining 34 formulas for the computation of flow in pipes and open channels and related experimental data.
The report validated the Gauckler formula and by inference, the Manning formula. Strickler proposed that the Ganguillet-Kutter n-value, used to characterize hydraulic roughness in the Manning formula, could be defined as a function of surface roughness, formula_0.
formula_1
Strickler’s equation introduces a new empirical coefficient which must be determined experimentally to define n-value. However, unlike n-value, which has units of T/L1/3, formula_0 has units of length and, at least in theory, is a measurable quantity. A measurable quantity is potentially useful for channel design and stream restoration engineering where the design value of hydraulic roughness may be unknown.
Strickler proposed that for a fixed boundary, surface roughness could be defined by the median grain size of a river’s bed material. He also noted that the onset of sediment transport, the mobile boundary condition, increased the observed hydraulic roughness.
For fixed boundary, gravel bed rivers, Strickler’s equation can be quantified as:
formula_2
where formula_3 is the median grain size in meters.
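As a numerical illustration (with an assumed, not historical, grain size), the quantified equation above can be evaluated as follows.
<syntaxhighlight lang="python">
# A small numerical illustration of the fixed-boundary, gravel-bed form of
# Strickler's equation quoted above. The median grain size is an assumed
# example value, not data from Strickler's report.
d50 = 0.05                      # median grain size in meters (assumed example)
n = d50 ** (1 / 6) / 21.1       # Manning n-value, s/m^(1/3)
print(round(n, 4))              # ≈ 0.0288
</syntaxhighlight>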
Later researchers produced variations on Strickler’s equation, proposing different measures of surface roughness and corresponding variations in the empirical coefficient. For example, Strickler’s equation has been used to estimate n-values for riprap lined channels from stone gradation.
The equation also describes the scaling of hydraulic roughness in Froude scaled, physical hydraulic models.
In 1933, Johann Nikuradse published a study of hydraulic roughness in pipes that validated Strickler’s observations of the influence of surface roughness in turbulent flows.
Dimensionally Homogeneous Gauckler–Manning–Strickler Formula.
Given the Gauckler–Manning–Strickler formula:
formula_4
where:
formula_5 is velocity in meters per second,
formula_6 is n-value in seconds per meter1/3,
formula_7 is the Strickler coefficient, formula_8,
formula_9 is hydraulic radius in meters, and
formula_10 is the dimensionless water surface slope.
Substituting Strickler’s equation for n-value and rearranging terms produces a dimensionally homogeneous form of the Manning’s formula:
formula_11
Where formula_12 is acceleration due to gravity in meters per second2.
The first term on the right-hand side of the equation is the dimensionless ratio of hydraulic radius to roughness height, commonly referred to as relative roughness. The remaining term, known as the boundary shear velocity, approximates the flow of water downhill under the influence of gravity and has units of velocity, i.e., L/T.
From experimental data, Strickler proposed that the dimensionally homogeneous form of the Manning formula could be quantified as:
formula_13
where formula_14
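As a consistency check, the following sketch (with assumed example values in SI units) evaluates the dimensionally homogeneous form above and the equivalent Manning formula with an n-value computed from Strickler's equation; the two give the same velocity.
<syntaxhighlight lang="python">
# Numerical check that the dimensionally homogeneous form reduces to the
# Manning formula with n = k_s^(1/6)/21.1. All inputs are assumed examples.
import math

g = 9.81          # m/s^2
k_s = 0.05        # roughness height (median grain size), m — assumed
r_h = 1.2         # hydraulic radius, m — assumed
s = 0.001         # water-surface slope — assumed

alpha = 21.1 / math.sqrt(2 * g)
v_strickler = alpha * (r_h / k_s) ** (1 / 6) * math.sqrt(2 * g * r_h * s)

n = k_s ** (1 / 6) / 21.1
v_manning = (1 / n) * r_h ** (2 / 3) * math.sqrt(s)

print(round(v_strickler, 3), round(v_manning, 3))   # both ≈ 1.241 m/s
</syntaxhighlight>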
In civil engineering practice, the Manning formula is more widely used than Strickler’s dimensionally homogeneous form of the equation. However, Strickler’s observations on the influence of surface roughness and the concept of relative roughness are common features of a variety of formulas used to estimate hydraulic roughness.
Publications.
Source:
References.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" />
See also.
| [
{
"math_id": 0,
"text": "{k}_{S}"
},
{
"math_id": 1,
"text": "n \\propto \\sqrt[6]{k}_{S}"
},
{
"math_id": 2,
"text": "n = \\frac{1}{21.1} {D}_{50}^{1/6}"
},
{
"math_id": 3,
"text": "{D}_{50}"
},
{
"math_id": 4,
"text": "V = \\left(\\frac{1}{n}\\right) {R}_{H}^{2/3} {S}^{1/2} = {K}_{S} {R}_{H}^{2/3} {S}^{1/2}"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "{K}_{S}"
},
{
"math_id": 8,
"text": "1/n"
},
{
"math_id": 9,
"text": "{R}_{H}"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "V \\propto {\\left(\\frac{{R}_{H}}{{k}_{S}}\\right)}^{1/6} \\sqrt{g{R}_{H}S}"
},
{
"math_id": 12,
"text": "g"
},
{
"math_id": 13,
"text": "V=\\alpha \\sqrt[6]{\\frac{{R}_{H}}{{k}_{S}}}\\sqrt{2g{R}_{H}S}"
},
{
"math_id": 14,
"text": "\\alpha =21.1/\\sqrt{2g} "
}
]
| https://en.wikipedia.org/wiki?curid=74532862 |
74536335 | Perfect ideal | A type of ideal relevant for Noetherian rings
In commutative algebra, a perfect ideal is a proper ideal formula_0 in a Noetherian ring formula_1 such that its grade equals the projective dimension of the associated quotient ring.
formula_2
A perfect ideal is unmixed.
For a regular local ring formula_1 a prime ideal formula_0 is perfect if and only if formula_3 is Cohen-Macaulay.
The notion of perfect ideal was introduced in 1913 by Francis Sowerby Macaulay in connection with what nowadays is called a Cohen-Macaulay ring, but for which Macaulay did not yet have a name. As Eisenbud and Gray point out, Macaulay's original definition of a perfect ideal formula_0 coincides with the modern definition when formula_0 is a homogeneous ideal in a polynomial ring, but may differ otherwise. Macaulay used Hilbert functions to define his version of perfect ideals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\textrm{grade}(I)=\\textrm{proj}\\dim(R/I)."
},
{
"math_id": 3,
"text": "R/I"
}
]
| https://en.wikipedia.org/wiki?curid=74536335 |
74536549 | Grade (ring theory) | Invariant for finitely generated modules over a Noetherian ring
In commutative and homological algebra, the grade of a finitely generated module formula_0 over a Noetherian ring formula_1 is a cohomological invariant defined by vanishing of Ext-modules
formula_2
For an ideal formula_3 the grade is defined via the quotient ring viewed as a module over formula_1
formula_4
The grade is used to define perfect ideals. In general we have the inequality
formula_5
where the projective dimension is another cohomological invariant.
The grade is tightly related to the depth, since
formula_6
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\textrm{grade}\\,M=\\textrm{grade}_R\\,M=\\inf\\left\\{i\\in\\mathbb{N}_0:\\textrm{Ext}_R^i(M,R)\\neq 0\\right\\}."
},
{
"math_id": 3,
"text": "I\\triangleleft R"
},
{
"math_id": 4,
"text": "\\textrm{grade}\\,I=\\textrm{grade}_R\\,I=\\textrm{grade}_R\\,R/I=\\inf\\left\\{i\\in\\mathbb{N}_0:\\textrm{Ext}_R^i(R/I,R)\\neq 0\\right\\}."
},
{
"math_id": 5,
"text": "\\textrm{grade}_R\\,I\\leq\\textrm{proj}\\dim(R/I)"
},
{
"math_id": 6,
"text": "\\textrm{grade}_R\\,I=\\textrm{depth}_{I}(R)."
}
]
| https://en.wikipedia.org/wiki?curid=74536549 |
74537234 | List of repunit primes | This is a list of repunit primes in various bases.
Base 2 repunit primes.
Base-2 repunit primes are called Mersenne primes.
Base 3 repunit primes.
The first few base-3 repunit primes are
13, 1093, 797161, 3754733257489862401973357979128773, 6957596529882152968992225251835887181478451547013 (sequence in the OEIS),
corresponding to formula_0 of
3, 7, 13, 71, 103, 541, 1091, 1367, 1627, 4177, 9011, 9551, 36913, 43063, 49681, 57917, 483611, 877843, 2215303, 2704981, 3598867, 7973131, 8530117, ... (sequence in the OEIS).
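These exponent lists can be reproduced by testing the repunits ("b""n" − 1)/("b" − 1) directly; the short sketch below uses sympy's primality test and recovers the first few base-3 exponents.
<syntaxhighlight lang="python">
# Reproducing the exponent lists: the base-b repunit R_n(b) = (b^n - 1)/(b - 1),
# tested for primality with sympy.
from sympy import isprime

def repunit(n, b):
    return (b**n - 1) // (b - 1)

# Exponents n <= 200 for which the base-3 repunit is prime
print([n for n in range(2, 201) if isprime(repunit(n, 3))])   # [3, 7, 13, 71, 103]
</syntaxhighlight>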
Base 4 repunit primes.
The only base-4 repunit prime is 5 (formula_1). formula_2, and 3 always divides formula_3 when "n" is odd and formula_4 when "n" is even. For "n" greater than 2, both formula_3 and formula_4 are greater than 3, so removing the factor of 3 still leaves two factors greater than 1. Therefore, the number cannot be prime.
Base 5 repunit primes.
The first few base-5 repunit primes are
31, 19531, 12207031, 305175781, 177635683940025046467781066894531, 14693679385278593849609206715278070972733319459651094018859396328480215743184089660644531, 35032461608120426773093239582247903282006548546912894293926707097244777067146515037165954709053039550781, 815663058499815565838786763657068444462645532258620818469829556933715405574685778402862015856733535201783524826169013977050781 (sequence in the OEIS),
corresponding to formula_0 of
3, 7, 11, 13, 47, 127, 149, 181, 619, 929, 3407, 10949, 13241, 13873, 16519, 201359, 396413, 1888279, 3300593, 4939471, ... (sequence in the OEIS).
Base 6 repunit primes.
The first few base-6 repunit primes are
7, 43, 55987, 7369130657357778596659, 3546245297457217493590449191748546458005595187661976371, 133733063818254349335501779590081460423013416258060407531857720755181857441961908284738707408499507 (sequence in the OEIS),
corresponding to formula_0 of
2, 3, 7, 29, 71, 127, 271, 509, 1049, 6389, 6883, 10613, 19889, 79987, 608099, 1365019, 3360347, ... (sequence in the OEIS).
Base 7 repunit primes.
The first few base-7 repunit primes are
2801, 16148168401, 85053461164796801949539541639542805770666392330682673302530819774105141531698707146930307290253537320447270457,138502212710103408700774381033135503926663324993317631729227790657325163310341833227775945426052637092067324133850503035623601
corresponding to formula_0 of
5, 13, 131, 149, 1699, 14221, 35201, 126037, 371669, 1264699, ... (sequence in the OEIS).
Base 8 repunit primes.
The only base-8 repunit prime is 73 (formula_5). formula_6, and 7 always divides formula_7 when "n" is not divisible by 3 and formula_4 when "n" is divisible by 3. For "n" greater than 3, both formula_7 and formula_4 are greater than 7, so removing the factor of 7 still leaves two factors greater than 1. Therefore, the number cannot be prime.
Base 9 repunit primes.
There are no base-9 repunit primes. formula_8, and formula_9 and formula_10 are even, and one of formula_9 and formula_10 is divisible by 4. For "n" greater than 1, both formula_9 and formula_10 are greater than 4, so removing the factor of 8 (which is equivalent to removing the factor 4 from formula_9 or formula_10, and removing the factor 2 from the other number) still leaves two factors greater than 1. Therefore, the number cannot be prime.
Base 11 repunit primes.
The first few base-11 repunit primes are
50544702849929377, 6115909044841454629, 1051153199500053598403188407217590190707671147285551702341089650185945215953, 567000232521795739625828281267171344486805385881217575081149660163046217465544573355710592079769932651989153833612198334843467861091902034340949
corresponding to formula_0 of
17, 19, 73, 139, 907, 1907, 2029, 4801, 5153, 10867, 20161, 293831, 1868983, ... (sequence in the OEIS).
Base 12 repunit primes.
The first few base-12 repunit primes are
13, 157, 22621, 29043636306420266077, 43570062353753446053455610056679740005056966111842089407838902783209959981593077811330507328327968191581, 388475052482842970801320278964160171426121951256610654799120070705613530182445862582590623785872890159937874339918941
corresponding to formula_0 of
2, 3, 5, 19, 97, 109, 317, 353, 701, 9739, 14951, 37573, 46889, 769543, ... (sequence in the OEIS).
Base 16 repunit primes.
The only base-16 repunit prime is 17 (formula_11). formula_12, and 3 always divides formula_13, and 5 always divides formula_14 when "n" is odd and formula_13 when "n" is even. For "n" greater than 2, both formula_14 and formula_13 are greater than 15, so removing the factor of 15 still leaves two factors greater than 1. Therefore, the number cannot be prime.
Base 20 repunit primes.
The first few base-20 repunit primes are
421, 10778947368421, 689852631578947368421
corresponding to formula_0 of
3, 11, 17, 1487, 31013, 48859, 61403, 472709, 984349, ... (sequence in the OEIS).
Bases formula_15 such that formula_16 is prime for prime formula_17.
Smallest base formula_15 such that formula_16 is prime (where formula_17 is the formula_0th prime) are
2, 2, 2, 2, 5, 2, 2, 2, 10, 6, 2, 61, 14, 15, 5, 24, 19, 2, 46, 3, 11, 22, 41, 2, 12, 22, 3, 2, 12, 86, 2, 7, 13, 11, 5, 29, 56, 30, 44, 60, 304, 5, 74, 118, 33, 156, 46, 183, 72, 606, 602, 223, 115, 37, 52, 104, 41, 6, 338, 217, 13, 136, 220, 162, 35, 10, 218, 19, 26, 39, 12, 22, 67, 120, 195, 48, 54, 463, 38, 41, 17, 808, 404, 46, 76, 793, 38, 28, 215, 37, 236, 59, 15, 514, 260, 498, 6, 2, 95, 3, ... (sequence in the OEIS)
Smallest base formula_15 such that formula_18 is prime (where formula_17 is the formula_0th prime) are
3, 2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, 159, 10, 16, 209, 2, 16, 23, 273, 2, 460, 22, 3, 36, 28, 329, 43, 69, 86, 271, 396, 28, 83, 302, 209, 11, 300, 159, 79, 31, 331, 52, 176, 3, 28, 217, 14, 410, 252, 718, 164, ... (sequence in the OEIS)
List of repunit primes base formula_15.
Smallest prime formula_19 such that formula_16 is prime are (start with formula_20, 0 if no such formula_17 exists)
3, 3, 0, 3, 3, 5, 3, 0, 19, 17, 3, 5, 3, 3, 0, 3, 25667, 19, 3, 3, 5, 5, 3, 0, 7, 3, 5, 5, 5, 7, 0, 3, 13, 313, 0, 13, 3, 349, 5, 3, 1319, 5, 5, 19, 7, 127, 19, 0, 3, 4229, 103, 11, 3, 17, 7, 3, 41, 3, 7, 7, 3, 5, 0, 19, 3, 19, 5, 3, 29, 3, 7, 5, 5, 3, 41, 3, 3, 5, 3, 0, 23, 5, 17, 5, 11, 7, 61, 3, 3, 4421, 439, 7, 5, 7, 3343, 17, 13, 3, 0, 3, ... (sequence in the OEIS)
Smallest prime formula_19 such that formula_18 is prime are (start with formula_20, 0 if no such formula_17 exists)
3, 3, 3, 5, 3, 3, 0, 3, 5, 5, 5, 3, 7, 3, 3, 7, 3, 17, 5, 3, 3, 11, 7, 3, 11, 0, 3, 7, 139, 109, 0, 5, 3, 11, 31, 5, 5, 3, 53, 17, 3, 5, 7, 103, 7, 5, 5, 7, 1153, 3, 7, 21943, 7, 3, 37, 53, 3, 17, 3, 7, 11, 3, 0, 19, 7, 3, 757, 11, 3, 5, 3, 7, 13, 5, 3, 37, 3, 3, 5, 3, 293, 19, 7, 167, 7, 7, 709, 13, 3, 3, 37, 89, 71, 43, 37, (>800000), 19, 7, 3, 7, ... (sequence in the OEIS)
For more information, see.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "11_4"
},
{
"math_id": 2,
"text": "4^n-1 = \\left(2^n+1\\right)\\left(2^n-1\\right)"
},
{
"math_id": 3,
"text": "2^n + 1"
},
{
"math_id": 4,
"text": "2^n - 1"
},
{
"math_id": 5,
"text": "111_8"
},
{
"math_id": 6,
"text": "8^n-1 = \\left(4^n+2^n+1\\right)\\left(2^n-1\\right)"
},
{
"math_id": 7,
"text": "4^n + 2^n + 1"
},
{
"math_id": 8,
"text": "9^n-1 = \\left(3^n+1\\right)\\left(3^n-1\\right)"
},
{
"math_id": 9,
"text": "3^n + 1"
},
{
"math_id": 10,
"text": "3^n - 1"
},
{
"math_id": 11,
"text": "11_{16}"
},
{
"math_id": 12,
"text": "16^n-1 = \\left(4^n+1\\right)\\left(4^n-1\\right)"
},
{
"math_id": 13,
"text": "4^n - 1"
},
{
"math_id": 14,
"text": "4^n + 1"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "R_p(b)"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "R_p(-b)"
},
{
"math_id": 19,
"text": "p>2"
},
{
"math_id": 20,
"text": "b=2"
}
]
| https://en.wikipedia.org/wiki?curid=74537234 |
74538512 | Igor L. Markov | American computer scientist and engineer
Igor Leonidovich Markov (; born in 1973 in Kyiv, Ukraine) is an American professor, computer scientist and engineer. Markov is known for results in quantum computation, work on limits of computation, research on algorithms for optimizing integrated circuits and on electronic design automation, as well as artificial intelligence. Additionally, Markov is an American non-profit executive responsible for aid to Ukraine worth over a hundred million dollars.
Igor L. Markov has no known relation to the mathematician Andrey Markov.
Career.
Markov obtained an M.A. degree in mathematics and a Doctor of Philosophy degree in Computer Science from UCLA in 2001.
From the early 2000s through 2018 he was a professor at University of Michigan, where he supervised doctoral dissertations and degrees of 12 students in Electrical engineering and Computer science.
He worked as a principal engineer at Synopsys during a sabbatical leave. In 2013-2014 he was a visiting professor at Stanford University.
Markov worked at Google on Search and Information Retrieval,
and at Meta on Machine Learning platforms. As of 2024, he works at Synopsys.
Markov is a member of the Board of Directors of Nova Ukraine, a California 501(c)(3) charity organization that provides humanitarian aid in Ukraine. At Nova Ukraine, Markov leads government and media relations, as well as advocacy efforts. Markov curated publicity efforts, established and curated large medical and evacuation projects, and contributed to fundraising.
Markov is a member of the Board of Directors of the American Coalition for Ukraine, an umbrella organization that coordinates one hundred US-based nonprofits concerned about events in Ukraine.
Awards and distinctions.
ACM Special Interest Group on Design Automation honored Markov with an Outstanding New Faculty Award in 2004.
Markov was the 2009 recipient of IEEE CEDA Ernest S. Kuh Early Career Award "for outstanding contributions to algorithms, methodologies and software for the physical design of integrated circuits."
Markov became ACM Distinguished Scientist in 2011. In 2013 he was named an IEEE fellow "for contributions to optimization methods in electronic design automation".
Award-winning publications.
Markov's peer-reviewed scholarly work was recognized with five best-paper awards, including four at major conferences and a journal in the field of electronic design automation, and one in theoretical computer science:
Books and other publications.
Markov co-authored over 200 peer-reviewed publications in journals and archival conference proceedings, and Google Scholar reported over 19,000 citations of his publications as of October 2023.
In a 2014 Nature article, Markov surveyed known limits to computation, pointing out that many of them are fairly loose and do not restrict near-term technologies. When practical technologies encounter serious limits, understanding these limits can lead to workarounds. More often,
what is practically achievable depends on technology-specific engineering limitations.
Markov co-edited the two-volume Electronic Design Automation handbook published in second edition by Taylor & Francis in 2016. He also co-authored five scholarly books published by Springer, among them are two textbooks:
Markov's other books cover uncertainty in logic circuits, dealing with functional design errors in digital circuits, and physical synthesis of integrated circuits.
Key technical contributions.
Quantum computing.
Markov’s contributions include results on quantum circuit synthesis (creating circuits from specifications) and simulation of quantum circuits on conventional computers (obtaining the output of a quantum computer without a quantum computer).
Physical design of integrated circuits.
Markov's Capo placer provided a baseline for comparisons used in the placement literature. The placer was commercialized and used to design industry chips. Markov's contributions include algorithms, methodologies and software for
Machine learning.
Markov led the development of an end-to-end AI platform called Looper, which supports the full machine learning lifecycle from model training, deployment, and inference all the way to evaluation and tuning of products. Looper provides easy-to-use APIs for optimization, personalization, and feedback collection.
Activity on social media.
Markov was awarded Top Writer status on Quora in 2018, 2017, 2016, 2015 and 2014, and he has over 80,000 followers. His contributions were republished by "Huffington Post", "Slate", and "Forbes".
Markov is a moderator for the cs.ET (Emerging Technologies in Computing and Communications) subject area on arXiv. | [
{
"math_id": 0,
"text": " O(n^2/\\log n) "
},
{
"math_id": 1,
"text": "n "
},
{
"math_id": 2,
"text": "(23/48)\\times 4^n - (3/2) \\times 2^n + 4/3 "
},
{
"math_id": 3,
"text": "2^{n+1} - 2n"
},
{
"math_id": 4,
"text": "(x,y)"
}
]
| https://en.wikipedia.org/wiki?curid=74538512 |
74548923 | Minflux | Principle and applications of MINFLUX microscopy
MINFLUX, or minimal fluorescence photon fluxes microscopy, is a super-resolution light microscopy method that images and tracks objects in two and three dimensions with single-digit nanometer resolution.
MINFLUX uses a structured excitation beam with at least one intensity minimum – typically a doughnut-shaped beam with a central intensity zero – to elicit photon emission from a fluorophore. The position of the excitation beam is controlled with sub-nanometer precision, and when the intensity zero is positioned exactly on the fluorophore, the system records no emission. Thus, the system requires few emitted photons to determine the fluorophore's location with high precision. In practice, overlapping the intensity zero and the fluorophore would require "a priori" location knowledge to position the beam. As this is not the case, the excitation beam is moved around in a defined pattern to probe the emission from the fluorophore near the intensity minimum.
Each localization takes less than 5 microseconds, so MINFLUX can construct images of nanometric structures or track single molecules in fixed and live specimens by pooling the locations of fluorescent labels. Because the goal is to locate the point where a fluorophore stops emitting, MINFLUX significantly reduces the number of fluorescence photons needed for localization compared to other methods.
A commercial MINFLUX system is available from abberior instruments GmbH.
Principle.
MINFLUX overcomes the Abbe diffraction limit in light microscopy and distinguishes individual fluorescing molecules by leveraging the photophysical properties of fluorophores. The system temporarily silences (sets in an OFF-state) all but one molecule within a diffraction-limited area (DLA) and then locates that single active (in an ON-state) molecule. Super-resolution microscopy techniques like stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) do the same. However, MINFLUX differs in how it determines the molecule’s location.
The excitation beam used in MINFLUX has a local intensity minimum or intensity zero. The position of this intensity zero in a sample is adjusted via control electronics and actuators with sub-nanometer spatial and sub-microsecond temporal precision. When the active molecule located at formula_0 is in a non-zero intensity area of the excitation beam, it fluoresces. The number of photons formula_1 emitted by the active molecule is proportional to the excitation beam intensity at that position.
In the vicinity of the excitation beam intensity zero, the intensity formula_2 of the emission from the active molecule when the intensity zero is located at position formula_3 can be approximated by a quadratic function. Therefore, the recorded number of emission photons is:
formula_4
where formula_5 is a measure of the collection efficiency of detection, the absorption cross-section of the emitter, and the quantum yield of fluorescence.
In other words, photon fluxes emitted by the active molecule when it is located close to the zero-intensity point of the excitation beam carry information about its distance to the center of the beam. That information can be used to find the position of the active molecule. The position is probed with a set of formula_6 excitation intensities formula_7. For example, the active molecule is excited with the same doughnut-shaped beam moved to different positions. The probing results in a corresponding set of photon counts formula_8. These photon counts are probabilistic; each time such a set is measured, the result is a different realization of photon numbers fluctuating around a mean value. Since their distribution follows Poissonian statistics, the expected position of the active molecule can be estimated from the photon numbers, using, for example, a maximum likelihood estimation of the form:
formula_9
The position formula_10 maximizes the likelihood that the measured set of photon counts occurred exactly as recorded and is thus an estimate of the active molecule’s location.
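As an illustration of this estimation step (not of the full MINFLUX probing scheme), the toy one-dimensional sketch below simulates Poisson photon counts for three assumed positions of the intensity zero under the quadratic approximation and recovers the emitter position by maximum likelihood; all numerical values are invented for the example.
<syntaxhighlight lang="python">
# Toy 1D localization from photon counts near a quadratic intensity zero.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
c = 50.0                      # lumps detection efficiency, cross-section, quantum yield
r_m = 2.3                     # true emitter position (nm), to be recovered
probe_positions = np.array([-20.0, 0.0, 20.0])   # assumed positions of the intensity zero (nm)

# Poisson-distributed counts with mean c * (r_k - r_m)^2 at each probe position
counts = rng.poisson(c * (probe_positions - r_m) ** 2)

def neg_log_likelihood(r):
    mu = c * (probe_positions - r) ** 2
    return -poisson.logpmf(counts, mu).sum()

grid = np.linspace(-20, 20, 4001)
r_hat = grid[np.argmin([neg_log_likelihood(r) for r in grid])]
print(r_hat)   # close to the true position r_m
</syntaxhighlight>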
Localization process.
Recordings of the emitting active molecule at two different excitation beam positions are needed to use the quadratic approximation in the one-dimensional basic principle described above. Each recording provides a one-dimensional distance value to the center of the excitation beam. In two dimensions, at least three recording points are needed to ascertain a location that can be used to move the MINFLUX excitation beam toward the target molecule. These recording points demarcate a probing area "L". Balzarotti et al. use the Cramér-Rao limit to show that constricting this probing area significantly improves localization precision, more so than increasing the number of emitted photons:
formula_11
where formula_12 is the Cramér-Rao limit, formula_13 is the diameter of the probing area, and formula_14 is the number of emitted photons.
MINFLUX takes advantage of this feature when localizing an active fluorophore. It records photon fluxes using a probing scheme of at least three recording points around the probing area formula_13 and one point at the center. These fluxes differ at each recording point as the active molecule is excited by different light intensities. Those flux patterns inform the repositioning of the probing area to center on the active molecule. Then the probing process is repeated. With each probing iteration, MINFLUX constricts the probing area formula_13, narrowing the space where the active molecule can be located. Thus, the distance remaining between the intensity zero and the active molecule is determined more precisely at each iteration. The steadily improving positional information minimizes the number of fluorescence photons and the time that MINFLUX needs to achieve precise localizations.
Applications.
By pooling the determined locations of multiple fluorescent molecules in a specimen, MINFLUX generates images of nanoscopic structures with a resolution of 1–3 nm. MINFLUX has been used to image DNA origami and the nuclear pore complex and to elucidate the architecture of subcellular structures in mitochondria and photoreceptors. Because MINFLUX does not collect large numbers of photons emitted from target molecules, localization is faster than with conventional camera-based systems. Thus, MINFLUX can iteratively localize the same molecule at microsecond intervals over a defined period. MINFLUX has been used to track the movement of the motor protein kinesin-1, both "in vitro" and "in vivo", and to monitor configurational changes of the mechanosensitive ion channel PIEZO1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{r}_m"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "\\vec{r}"
},
{
"math_id": 4,
"text": "n(\\vec{r},\\vec{r}_m) = cI = c(\\vec{r}-\\vec{r}_m)^2"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "\\{I_0, ..., I_{K-1}\\}"
},
{
"math_id": 8,
"text": "\\{n_0, ..., n_{K-1}\\}"
},
{
"math_id": 9,
"text": "\\widehat{\\vec{r}_m} = argmax \\mathcal{L} (\\vec{r}\\mid\\{n_0, ..., n_{K-1}\\})"
},
{
"math_id": 10,
"text": "\\widehat{\\vec{r}_m}"
},
{
"math_id": 11,
"text": "\\sigma_B \\propto \\frac{L}{\\sqrt{N}}"
},
{
"math_id": 12,
"text": "\\sigma_B"
},
{
"math_id": 13,
"text": "L"
},
{
"math_id": 14,
"text": "N"
}
]
| https://en.wikipedia.org/wiki?curid=74548923 |
7455080 | Normal convergence | In mathematics normal convergence is a type of convergence for series of functions. Like absolute-convergence, it has the useful property that it is preserved when the order of summation is changed.
History.
The concept of normal convergence was first introduced by René Baire in 1908 in his book "Leçons sur les théories générales de l'analyse".
Definition.
Given a set "S" and functions formula_0 (or to any normed vector space), the series
formula_1
is called normally convergent if the series of uniform norms of the terms of the series converges, i.e.,
formula_2
Distinctions.
Normal convergence implies uniform absolute convergence, i.e., uniform convergence of the series of nonnegative functions formula_3; this fact is essentially the Weierstrass M-test. However, they should not be confused; to illustrate this, consider
formula_4
Then the series formula_3 is uniformly convergent (for any "ε" take "n" ≥ 1/"ε"), but the series of uniform norms is the harmonic series and thus diverges. An example using continuous functions can be made by replacing these functions with bump functions of height 1/"n" and width 1 centered at each natural number "n".
Normal convergence of a series is also different from "norm-topology convergence", i.e. convergence of the partial sum sequence in the topology induced by the uniform norm. Normal convergence implies norm-topology convergence if and only if the space of functions under consideration is complete with respect to the uniform norm. (The converse does not hold even for complete function spaces: for example, consider the alternating harmonic series as a sequence of constant functions; its partial sums converge uniformly, but the series of uniform norms is the harmonic series and thus diverges.)
Generalizations.
Local normal convergence.
A series can be called "locally normally convergent on "X"" if each point "x" in "X" has a neighborhood "U" such that the series of functions "ƒ""n" restricted to the domain "U"
formula_5
is normally convergent, i.e. such that
formula_6
where the norm formula_7 is the supremum over the domain "U".
Compact normal convergence.
A series is said to be "normally convergent on compact subsets of "X"" or "compactly normally convergent on "X"" if for every compact subset "K" of "X", the series of functions "ƒ""n" restricted to "K"
formula_8
is normally convergent on "K".
Note: if "X" is locally compact (even in the weakest sense), local normal convergence and compact normal convergence are equivalent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_n : S \\to \\mathbb{C}"
},
{
"math_id": 1,
"text": "\\sum_{n=0}^\\infty f_n(x)"
},
{
"math_id": 2,
"text": "\\sum_{n=0}^\\infty \\|f_n\\| := \\sum_{n=0}^\\infty \\sup_{x \\in S} |f_n(x)| < \\infty."
},
{
"math_id": 3,
"text": "\\sum_{n=0}^\\infty |f_n(x)|"
},
{
"math_id": 4,
"text": "f_n(x) = \\begin{cases} 1/n, & x = n, \\\\ 0, & x \\ne n. \\end{cases}"
},
{
"math_id": 5,
"text": "\\sum_{n=0}^\\infty f_n\\mid_U"
},
{
"math_id": 6,
"text": "\\sum_{n=0}^\\infty \\| f_n\\|_U < \\infty"
},
{
"math_id": 7,
"text": "\\|\\cdot\\|_U "
},
{
"math_id": 8,
"text": "\\sum_{n=0}^\\infty f_n\\mid_K"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "\\tau: \\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 11,
"text": "\\sum_{n=0}^\\infty f_{\\tau(n)}(x)"
}
]
| https://en.wikipedia.org/wiki?curid=7455080 |
7455223 | Baer ring | In abstract algebra and functional analysis, Baer rings, Baer *-rings, Rickart rings, Rickart *-rings, and AW*-algebras are various attempts to give an algebraic analogue of von Neumann algebras, using axioms about annihilators of various sets.
Any von Neumann algebra is a Baer *-ring, and much of the theory of projections in von Neumann algebras can be extended to all Baer *-rings. For example, Baer *-rings can be divided into types I, II, and III in the same way as von Neumann algebras.
In the literature, left Rickart rings have also been termed left PP-rings. ("Principal implies projective": See definitions below.)
Definitions.
In operator theory, the definitions are strengthened slightly by requiring the ring "R" to have an involution formula_2. Since this makes "R" isomorphic to its opposite ring "R"op, the definition of Rickart *-ring is left-right symmetric.
Properties.
The projections in a Rickart *-ring form a lattice, which is complete if the ring is a Baer *-ring.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X \\subseteq R"
},
{
"math_id": 1,
"text": "\\{r\\in R\\mid rX=\\{0\\}\\}"
},
{
"math_id": 2,
"text": "*:R\\rightarrow R"
},
{
"math_id": 3,
"text": "\\{0\\}"
}
]
| https://en.wikipedia.org/wiki?curid=7455223 |
7455447 | Taurodontism | Molar condition in which the root is relatively short
Medical condition
Taurodontism is defined as the enlargement of pulp chambers with the furcation area being displaced toward the apex of the root of a tooth. It cannot be diagnosed clinically and requires radiographic visualization since the crown of a taurodontic tooth appears normal and its distinguishing features are present below the alveolar margin. Taurodontism can present in deciduous or permanent dentition, unilaterally or bilaterally, but is most common in the permanent molar teeth of humans. The underlying mechanism of taurodontism is the failure or late invagination of Hertwig's epithelial root sheath, which leads to an apical shift of the root furcation.
Classification.
The term was coined by Sir Arthur Keith. It comes from the Latin "taurus" meaning "bull" and the Greek ὀδούς ("odous"), genitive singular ὀδόντος ("odontos") meaning "tooth", to indicate the similarity of these teeth to those of hoofed/ungulates, or cud-chewing animals.
Radiographic characteristics of taurodontism include:
Earlier classification systems considered only the apical displacement of the pulp chamber floor; whereas, later systems additionally consider the position of the pulp chamber in relation to the cemento-enamel junction and alveolar margin.
Shaw 1928.
One of the first attempts to classify taurodontism was made by C.J.Shaw. He used the apical displacement of the pulp chamber floor to classify taurodontism into four distinct categories: cynodont (normal), hypotaurodont, hypertaurodont, and mesotaurodont.
Shifman & Chanannel 1978.
Later, Shifman & Chanannel quantified the degree of taurodontism based on a mathematical formula relating the anatomical landmarks as shown in the figure above. The anatomical landmark ratio is calculated as shown below:
Landmark ratio.
formula_0
Where, A = the lowest point of the pulp chamber roof, B = the highest point of the pulp chamber floor, and C = the longest root’s apex.
Using this formula, a tooth is a taurodont if the landmark ratio is ≥ 0.2 and the distance from the highest point of the pulp chamber floor (B) to the cemento-enamel junction (D) is ≥ 2.5 mm. The full classification system based on this formula is displayed in the table below:
It is important to note that historically, there has been professional debate regarding the taurodont classification systems as to: 1) how much displacement and/or morphologic change constitutes taurodontism, 2) whether classification should be indexed stepwise or on a continuum, and 3) whether to include certain teeth in the occurrence of taurodontism. For example, as premolars are narrow mesio-distally, taurodontism is hard to identify radiographically on premolars. Therefore, some researchers exclude premolars from their classification systems. Additionally, there has been criticism over the use of landmarks that undergo changes. For example, due to trauma or wear, tertiary dentin can be deposited which can then alter some measurements; thus, caution should be employed when diagnosing taurodontism in this case. Finally, as these measurements are dimensionally quite small, they are also subject to large relative error.
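For illustration, the Shifman & Chanannel threshold criterion described above can be expressed as a short calculation; the measurements below are assumed example values in millimetres, not data from the original study.
<syntaxhighlight lang="python">
# A minimal sketch of the Shifman & Chanannel threshold criterion quoted above.
def is_taurodont(a_to_b, b_to_c, b_to_cej):
    """a_to_b: pulp chamber roof (A) to floor (B); b_to_c: floor (B) to the apex
    of the longest root (C); b_to_cej: floor (B) to the cemento-enamel junction."""
    landmark_ratio = a_to_b / b_to_c
    return landmark_ratio >= 0.2 and b_to_cej >= 2.5

print(is_taurodont(a_to_b=6.0, b_to_c=12.0, b_to_cej=3.0))   # True  (ratio 0.5)
print(is_taurodont(a_to_b=2.0, b_to_c=14.0, b_to_cej=3.0))   # False (ratio ≈ 0.14)
</syntaxhighlight>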
Clinical Considerations.
The altered morphology of taurodont teeth can present challenges during dental treatment. Most notably, endodontists will have difficulty in not only removing the voluminous pulp, but also filling the large pulp chamber and complex root canal system.
Prosthodontists and orthodontists should also exercise caution in using taurodont teeth as sites for dental anchorage. Due to the apical displacement of the furcation area, the taurodont tooth is not held as securely in the alveolar socket. Conversely, this may make taurodont teeth easier to extract.
Finally, taurodont teeth may have favorable prognosis from a periodontal point of view, as the furcation area is apical and thus less susceptible to periodontal damage.
Anthropology.
Taurodontism is still a condition of anthropological importance as it was seen in Neanderthals.
The trait "is common among extant New World monkeys, apes, and fossil hominins".
Related conditions.
Although taurodontism is frequently an isolated anomaly, it may be found in association with several other conditions such as: amelogenesis imperfecta, Down syndrome, Klinefelter syndrome, Mohr syndrome, Wolf-Hirschhorn syndrome, Lowe syndrome, ectodermal dysplasia and tricho-dento-osseous syndrome.
Taurodontism may be related to:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Landmark ratio} = \\frac{\\text{distance from A to B}}{\\text{distance from B to C}}"
}
]
| https://en.wikipedia.org/wiki?curid=7455447 |
7455643 | Thermal comfort | Satisfaction with the thermal environment
Thermal comfort is the condition of mind that expresses subjective satisfaction with the thermal environment. The human body can be viewed as a heat engine where food is the input energy. The human body will release excess heat into the environment, so the body can continue to operate. The heat transfer is proportional to temperature difference. In cold environments, the body loses more heat to the environment and in hot environments the body does not release enough heat. Both the hot and cold scenarios lead to discomfort. Maintaining this standard of thermal comfort for occupants of buildings or other enclosures is one of the important goals of HVAC (heating, ventilation, and air conditioning) design engineers.
Thermal neutrality is maintained when the heat generated by human metabolism is allowed to dissipate, thus maintaining thermal equilibrium with the surroundings. The main factors that influence thermal comfort are those that determine heat gain and loss, namely metabolic rate, clothing insulation, air temperature, mean radiant temperature, air speed and relative humidity. Psychological parameters, such as individual expectations, also affect thermal comfort. The thermal comfort temperature may vary greatly between individuals and depending on factors such as activity level, clothing, and humidity. People are highly sensitive to even small differences in environmental temperature. At 24 °C, a difference of 0.38 °C can be detected between the temperature of two rooms.
The Predicted Mean Vote (PMV) model stands among the most recognized thermal comfort models. It was developed using principles of heat balance and experimental data collected in a controlled climate chamber under steady state conditions. The adaptive model, on the other hand, was developed based on hundreds of field studies with the idea that occupants dynamically interact with their environment. Occupants control their thermal environment by means of clothing, operable windows, fans, personal heaters, and sun shades. The PMV model can be applied to air-conditioned buildings, while the adaptive model can be applied only to buildings where no mechanical systems have been installed. There is no consensus about which comfort model should be applied for buildings that are partially air-conditioned spatially or temporally.
Thermal comfort calculations in accordance with the ANSI/ASHRAE Standard 55, the ISO 7730 Standard and the EN 16798-1 Standard can be freely performed with either the CBE Thermal Comfort Tool for ASHRAE 55, with the Python package pythermalcomfort or with the R package comf.
Significance.
Satisfaction with the thermal environment is important because thermal conditions are potentially life-threatening for humans if the core body temperature reaches conditions of hyperthermia, above 37.5–38.3 °C (99.5–100.9 °F), or hypothermia, below 35.0 °C (95.0 °F). Buildings modify the conditions of the external environment and reduce the effort that the human body must make to remain at a normal body temperature, which is important for the correct functioning of human physiological processes.
The Roman writer Vitruvius actually linked this purpose to the birth of architecture. David Linden also suggests that the reason why we associate tropical beaches with paradise is because in those environments is where human bodies need to do less metabolic effort to maintain their core temperature. Temperature not only supports human life; coolness and warmth have also become in different cultures a symbol of protection, community and even the sacred.
In building science studies, thermal comfort has been related to productivity and health. Office workers who are satisfied with their thermal environment are more productive. The combination of high temperature and high relative humidity reduces thermal comfort and indoor air quality.
Although a single static temperature can be comfortable, people are attracted by thermal changes, such as campfires and cool pools. Thermal pleasure is caused by varying thermal sensations from a state of unpleasantness to a state of pleasantness, and the scientific term for it is positive thermal alliesthesia. From a state of thermal neutrality or comfort any change will be perceived as unpleasant. This challenges the assumption that mechanically controlled buildings should deliver uniform temperatures and comfort, if it is at the cost of excluding thermal pleasure.
Influencing factors.
Since there are large variations from person to person in terms of physiological and psychological satisfaction, it is hard to find an optimal temperature for everyone in a given space. Laboratory and field data have been collected to define conditions that will be found comfortable for a specified percentage of occupants.
There are numerous factors that directly affect thermal comfort that can be grouped in two categories:
Even if all these factors may vary with time, standards usually refer to a steady state to study thermal comfort, just allowing limited temperature variations.
Personal factors.
Metabolic rate.
People have different metabolic rates that can fluctuate due to activity level and environmental conditions. ASHRAE 55-2017 defines metabolic rate as the rate of transformation of chemical energy into heat and mechanical work by metabolic activities of an individual, per unit of skin surface area.
Metabolic rate is expressed in units of met, equal to 58.2 W/m² (18.4 Btu/h·ft²). One met is equal to the energy produced per unit surface area of an average person seated at rest.
ASHRAE 55 provides a table of metabolic rates for a variety of activities. Some common values are 0.7 met for sleeping, 1.0 met for a seated and quiet position, 1.2–1.4 met for light activities standing, 2.0 met or more for activities that involve movement, walking, lifting heavy loads or operating machinery. For intermittent activity, the standard states that it is permissible to use a time-weighted average metabolic rate if individuals are performing activities that vary over a period of one hour or less. For longer periods, different metabolic rates must be considered.
According to ASHRAE Handbook of Fundamentals, estimating metabolic rates is complex, and for levels above 2 or 3 met – especially if there are various ways of performing such activities – the accuracy is low. Therefore, the standard is not applicable for activities with an average level higher than 2 met. Met values can also be determined more accurately than the tabulated ones, using an empirical equation that takes into account the rate of respiratory oxygen consumption and carbon dioxide production. Another physiological yet less accurate method is related to the heart rate, since there is a relationship between the latter and oxygen consumption.
The Compendium of Physical Activities is used by physicians to record physical activities. It has a different definition of met that is the ratio of the metabolic rate of the activity in question to a resting metabolic rate. As the formulation of the concept is different from the one that ASHRAE uses, these met values cannot be used directly in PMV calculations, but it opens up a new way of quantifying physical activities.
Food and drink habits may have an influence on metabolic rates, which indirectly influences thermal preferences. These effects may change depending on food and drink intake.
Body shape is another factor that affects metabolic rate and hence thermal comfort. Heat dissipation depends on body surface area. The surface area of an average person is 1.8 m2 (19 ft2). A tall and skinny person has a larger surface-to-volume ratio, can dissipate heat more easily, and can tolerate higher temperatures than a person with a rounded body shape.
Clothing insulation.
The amount of thermal insulation worn by a person has a substantial impact on thermal comfort, because it influences the heat loss and consequently the thermal balance. Layers of insulating clothing prevent heat loss and can either help keep a person warm or lead to overheating. Generally, the thicker the garment is, the greater insulating ability it has. Depending on the type of material the clothing is made out of, air movement and relative humidity can decrease the insulating ability of the material.
1 clo is equal to 0.155 m2·K/W (0.88 °F·ft2·h/Btu). This corresponds to trousers, a long sleeved shirt, and a jacket. Clothing insulation values for other common ensembles or single garments can be found in ASHRAE 55.
Skin wetness.
Skin wetness is defined as "the proportion of the total skin surface area of the body covered with sweat".
The wetness of skin in different areas also affects perceived thermal comfort. Humidity can increase wetness in different areas of the body, leading to a perception of discomfort. This is usually localized in different parts of the body, and local thermal comfort limits for skin wetness differ by locations of the body. The extremities are much more sensitive to thermal discomfort from wetness than the trunk of the body. Although local thermal discomfort can be caused by wetness, the thermal comfort of the whole body will not be affected by the wetness of certain parts.
Environmental factors.
Air temperature.
The air temperature is the average temperature of the air surrounding the occupant, with respect to location and time. According to ASHRAE 55 standard, the spatial average takes into account the ankle, waist and head levels, which vary for seated or standing occupants. The temporal average is based on three-minutes intervals with at least 18 equally spaced points in time. Air temperature is measured with a dry-bulb thermometer and for this reason it is also known as dry-bulb temperature.
Mean radiant temperature.
The radiant temperature is related to the amount of radiant heat transferred from a surface, and it depends on the material's ability to absorb or emit heat, or its emissivity. The mean radiant temperature depends on the temperatures and emissivities of the surrounding surfaces as well as the view factor, or the amount of the surface that is “seen” by the object. So the mean radiant temperature experienced by a person in a room with the sunlight streaming in varies based on how much of their body is in the sun.
Air speed.
Air speed is defined as the rate of air movement at a point, without regard to direction. According to ANSI/ASHRAE Standard 55, it is the average speed of the air surrounding a representative occupant, with respect to location and time. The spatial average is for three heights as defined for average air temperature. For an occupant moving in a space the sensors shall follow the movements of the occupant. The air speed is averaged over an interval not less than one and not greater than three minutes. Variations that occur over a period greater than three minutes shall be treated as multiple different air speeds.
Relative humidity.
Relative humidity (RH) is the ratio of the amount of water vapor in the air to the amount of water vapor that the air could hold at the specific temperature and pressure. While the human body has thermoreceptors in the skin that enable perception of temperature, relative humidity is detected indirectly. Sweating is an effective heat loss mechanism that relies on evaporation from the skin. However at high RH, the air has close to the maximum water vapor that it can hold, so evaporation, and therefore heat loss, is decreased. On the other hand, very dry environments (RH < 20–30%) are also uncomfortable because of their effect on the mucous membranes. The recommended level of indoor humidity is in the range of 30–60% in air conditioned buildings, but new standards such as the adaptive model allow lower and higher humidity, depending on the other factors involved in thermal comfort.
Recently, the effects of low relative humidity and high air velocity were tested on humans after bathing. Researchers found that low relative humidity engendered thermal discomfort as well as the sensation of dryness and itching. It is recommended to keep relative humidity levels higher in a bathroom than other rooms in the house for optimal conditions.
Various types of apparent temperature have been developed to combine air temperature and air humidity.
For higher temperatures, there are quantitative scales, such as the heat index.
For lower temperatures, a related interplay was identified only qualitatively:
There has been controversy over why damp cold air feels colder than dry cold air. Some believe it is because when the humidity is high, our skin and clothing become moist and are better conductors of heat, so there is more cooling by conduction.
The influence of humidity can be exacerbated with the combined use of fans (forced convection cooling).
Natural ventilation.
Many buildings use an HVAC unit to control their thermal environment. Other buildings are naturally ventilated (or would have cross ventilation) and do not rely on mechanical systems to provide thermal comfort. Depending on the climate, this can drastically reduce energy consumption. It is sometimes seen as a risk, though, since indoor temperatures can be too extreme if the building is poorly designed. Properly designed, naturally ventilated buildings keep indoor conditions within the range where opening windows and using fans in the summer, and wearing extra clothing in the winter, can keep people thermally comfortable.
Models and indices.
There are several different models or indices that can be used to assess thermal comfort conditions indoors as described below.
PMV/PPD method.
The PMV/PPD model was developed by P.O. Fanger using heat-balance equations and empirical studies about skin temperature to define comfort. Standard thermal comfort surveys ask subjects about their thermal sensation on a seven-point scale from cold (−3) to hot (+3). Fanger's equations are used to calculate the predicted mean vote (PMV) of a group of subjects for a particular combination of air temperature, mean radiant temperature, relative humidity, air speed, metabolic rate, and clothing insulation. A PMV of zero represents thermal neutrality, and the comfort zone is defined by the combinations of the six parameters for which the PMV is within the recommended limits (−0.5 < PMV < +0.5).
Although predicting the thermal sensation of a population is an important step in determining what conditions are comfortable, it is more useful to consider whether or not people will be satisfied. Fanger developed another equation to relate the PMV to the Predicted Percentage of Dissatisfied (PPD). This relation was based on studies that surveyed subjects in a chamber where the indoor conditions could be precisely controlled.
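The PMV–PPD relation is a closed-form expression; the sketch below uses the coefficients published in ISO 7730 and in Fanger's work, and the usage shown is illustrative rather than prescriptive.

```python
import math

def ppd_from_pmv(pmv):
    """Predicted Percentage of Dissatisfied (%) as a function of the Predicted Mean Vote."""
    return 100.0 - 95.0 * math.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

# Even at thermal neutrality (PMV = 0) about 5% of a population is predicted to be
# dissatisfied, and the comfort-zone limits PMV = ±0.5 correspond to roughly 10% PPD.
print(round(ppd_from_pmv(0.0), 1))   # 5.0
print(round(ppd_from_pmv(0.5), 1))   # ~10.2
```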
The PMV/PPD model is applied globally but does not directly take into account the adaptation mechanisms and outdoor thermal conditions.
ASHRAE Standard 55-2017 uses the PMV model to set the requirements for indoor thermal conditions. It requires that at least 80% of the occupants be satisfied.
The CBE Thermal Comfort Tool for ASHRAE 55 allows users to input the six comfort parameters to determine whether a certain combination complies with ASHRAE 55. The results are displayed on a psychrometric or a temperature-relative humidity chart and indicate the ranges of temperature and relative humidity that will be comfortable given the values input for the remaining four parameters.
The PMV/PPD model has a low prediction accuracy. Using the world's largest thermal comfort field survey database, the accuracy of PMV in predicting occupants' thermal sensation was only 34%, meaning that the thermal sensation is correctly predicted one out of three times. The PPD overestimated subjects' thermal unacceptability outside the thermal neutrality ranges (−1 ≤ PMV ≤ 1). The PMV/PPD accuracy varies strongly between ventilation strategies, building types and climates.
Elevated air speed method.
ASHRAE 55-2013 accounts for elevated air speeds separately from the baseline model. Because air movement can provide direct cooling to people, particularly if they are not wearing much clothing, higher temperatures can be more comfortable than the PMV model predicts. Air speeds up to are allowed without local control, and 1.2 m/s is possible with local control. This elevated air movement increases the maximum temperature for an office space in the summer from 27.5 °C to 30 °C.
Virtual Energy for Thermal Comfort.
"Virtual Energy for Thermal Comfort" is the amount of energy that will be required to make a non-air-conditioned building relatively as comfortable as one with air-conditioning. This is based on the assumption that the home will eventually install air-conditioning or heating.
Passive design improves thermal comfort in a building, thus reducing demand for heating or cooling. In many developing countries, however, most occupants do not currently heat or cool, due to economic constraints, as well as climate conditions which border on comfort conditions, such as cold winter nights in Johannesburg (South Africa) or warm summer days in San Jose, Costa Rica. At the same time, as incomes rise, there is a strong tendency to introduce cooling and heating systems. If we recognize and reward passive design features that improve thermal comfort today, we diminish the risk of having to install HVAC systems in the future, or we at least ensure that such systems will be smaller and less frequently used. Or, if the heating or cooling system is not installed due to high cost, at least people should not suffer from discomfort indoors. To provide an example, in San Jose, Costa Rica, if a house were designed with a high level of glazing and small opening sizes, the internal temperature would easily rise above and natural ventilation would not be enough to remove the internal heat gains and solar gains. This is why Virtual Energy for Comfort is important.
The World Bank's assessment tool, the EDGE software (Excellence in Design for Greater Efficiencies), illustrates the potential issues with discomfort in buildings and has created the concept of Virtual Energy for Comfort, which provides a way to present potential thermal discomfort. This approach is used to reward design solutions that improve thermal comfort even in a fully free-running building.
Despite the inclusion of requirements for overheating in CIBSE, overcooling has not been assessed. However, overcooling can be an issue, mainly in the developing world, for example in cities such as Lima (Peru), Bogota, and Delhi, where cooler indoor temperatures can occur frequently. This may be a new area for research and design guidance for reduction of discomfort.
Cooling Effect.
ASHRAE 55-2017 defines the Cooling Effect (CE) at elevated air speed (above ) as the value that, when subtracted from both the air temperature and the mean radiant temperature, yields the same SET value under still air (0.1 m/s) as in the first SET calculation under elevated air speed.
formula_0
The CE can be used to determine the PMV adjusted for an environment with elevated air speed using the adjusted temperature, the adjusted radiant temperature and still air (0.1 m/s), where the adjusted temperatures are equal to the original air and mean radiant temperatures minus the CE.
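Because the standard defines the CE implicitly, it is normally found numerically. The sketch below only shows the root-finding structure of the definition; it assumes a function set_fn that computes SET from the six comfort parameters (for example from a comfort library or one's own implementation), and the 0–30 K search bracket is an assumption.

```python
from scipy.optimize import brentq

def cooling_effect(set_fn, t_air, t_mrt, v, met, clo, rh, still_air=0.1):
    """Solve for CE so that SET at (t_air - CE, t_mrt - CE) under still air equals
    SET at the original temperatures with the elevated air speed v."""
    target = set_fn(t_air, t_mrt, v, met, clo, rh)
    residual = lambda ce: set_fn(t_air - ce, t_mrt - ce, still_air, met, clo, rh) - target
    return brentq(residual, 0.0, 30.0)  # assumed bracket; widen it if no sign change occurs
```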
Local thermal discomfort.
Avoiding local thermal discomfort, whether caused by a vertical air temperature difference between the feet and the head, by an asymmetric radiant field, by local convective cooling (draft), or by contact with a hot or cold floor, is essential to providing acceptable thermal comfort. People are generally more sensitive to local discomfort when their thermal sensation is cooler than neutral, while they are less sensitive to it when their body is warmer than neutral.
Radiant temperature asymmetry.
Large differences in the thermal radiation of the surfaces surrounding a person may cause local discomfort or reduce acceptance of the thermal conditions. ASHRAE Standard 55 sets limits on the allowable temperature differences between various surfaces. Because people are more sensitive to some asymmetries than others, for example that of a warm ceiling versus that of hot and cold vertical surfaces, the limits depend on which surfaces are involved. The ceiling is not allowed to be more than + warmer, whereas a wall may be up to + warmer than the other surfaces.
Draft.
While air movement can be pleasant and provide comfort in some circumstances, it is sometimes unwanted and causes discomfort. This unwanted air movement is called "draft" and is most prevalent when the thermal sensation of the whole body is cool. People are most likely to feel a draft on uncovered body parts such as their head, neck, shoulders, ankles, feet, and legs, but the sensation also depends on the air speed, air temperature, activity, and clothing.
Floor surface temperature.
Floors that are too warm or too cool may cause discomfort, depending on footwear. ASHRAE 55 recommends that floor temperatures stay in the range of in spaces where occupants will be wearing lightweight shoes.
Standard effective temperature.
Standard effective temperature (SET) is a model of human response to the thermal environment. Developed by A.P. Gagge and accepted by ASHRAE in 1986, it is also referred to as the Pierce Two-Node model. Its calculation is similar to PMV because it is a comprehensive comfort index based on heat-balance equations that incorporates the personal factors of clothing and metabolic rate. Its fundamental difference is that it uses a two-node method to represent human physiology, measuring skin temperature and skin wettedness.
The SET index is defined as the equivalent dry bulb temperature of an isothermal environment at 50% relative humidity in which a subject, while wearing clothing standardized for activity concerned, would have the same heat stress (skin temperature) and thermoregulatory strain (skin wettedness) as in the actual test environment.
Research has tested the model against experimental data and found it tends to overestimate skin temperature and underestimate skin wettedness. Fountain and Huizenga (1997) developed a thermal sensation prediction tool that computes SET. The SET index can also be calculated using either the CBE Thermal Comfort Tool for ASHRAE 55, the Python package pythermalcomfort, or the R package comf.
Adaptive comfort model.
The adaptive model is based on the idea that outdoor climate might be used as a proxy of indoor comfort because of a statistically significant correlation between them. The adaptive hypothesis predicts that contextual factors, such as having access to environmental controls, and past thermal history can influence building occupants' thermal expectations and preferences. Numerous researchers have conducted field studies worldwide in which they survey building occupants about their thermal comfort while taking simultaneous environmental measurements. Analyzing a database of results from 160 of these buildings revealed that occupants of naturally ventilated buildings accept and even prefer a wider range of temperatures than their counterparts in sealed, air-conditioned buildings because their preferred temperature depends on outdoor conditions. These results were incorporated in the ASHRAE 55-2004 standard as the adaptive comfort model. The adaptive chart relates indoor comfort temperature to prevailing outdoor temperature and defines zones of 80% and 90% satisfaction.
The ASHRAE-55 2010 Standard introduced the prevailing mean outdoor temperature as the input variable for the adaptive model. It is based on the arithmetic average of the mean daily outdoor temperatures over no fewer than 7 and no more than 30 sequential days prior to the day in question. It can also be calculated by weighting the temperatures with different coefficients, assigning increasing importance to the most recent temperatures. In case this weighting is used, there is no need to respect the upper limit for the subsequent days. In order to apply the adaptive model, there should be no mechanical cooling system for the space, occupants should be engaged in sedentary activities with metabolic rates of 1–1.3 met, and a prevailing mean temperature of .
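A compact sketch of the adaptive relation and the running-mean weighting follows; the 0.31/17.8 coefficients and the ±2.5 °C / ±3.5 °C bands are the commonly published ASHRAE 55 adaptive values, and α = 0.8 is a typical (not mandated) weighting constant, so the current edition of the standard should be checked before use.

```python
def adaptive_comfort(t_prevailing_mean_outdoor_c):
    """Neutral temperature and acceptability bands of the ASHRAE 55 adaptive model."""
    t_comf = 0.31 * t_prevailing_mean_outdoor_c + 17.8
    return {
        "neutral": t_comf,
        "80% acceptability": (t_comf - 3.5, t_comf + 3.5),
        "90% acceptability": (t_comf - 2.5, t_comf + 2.5),
    }

def prevailing_mean_outdoor(daily_means_c, alpha=0.8):
    """Exponentially weighted running mean of daily outdoor temperatures,
    most recent day first; weights are normalized over the finite series."""
    weights = [(1 - alpha) * alpha**i for i in range(len(daily_means_c))]
    return sum(w * t for w, t in zip(weights, daily_means_c)) / sum(weights)

print(adaptive_comfort(25.0)["neutral"])  # 25.55 °C neutral indoor temperature
```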
This model applies especially to occupant-controlled, natural-conditioned spaces, where the outdoor climate can actually affect the indoor conditions and so the comfort zone. In fact, studies by de Dear and Brager showed that occupants in naturally ventilated buildings were tolerant of a wider range of temperatures. This is due to both behavioral and physiological adjustments, since there are different types of adaptive processes. ASHRAE Standard 55-2010 states that differences in recent thermal experiences, changes in clothing, availability of control options, and shifts in occupant expectations can change people's thermal responses.
Adaptive models of thermal comfort are implemented in other standards, such as European EN 15251 and ISO 7730 standard. While the exact derivation methods and results are slightly different from the ASHRAE 55 adaptive standard, they are substantially the same. A larger difference is in applicability. The ASHRAE adaptive standard only applies to buildings without mechanical cooling installed, while EN15251 can be applied to mixed-mode buildings, provided the system is not running.
There are basically three categories of thermal adaptation, namely: behavioral, physiological, and psychological.
Psychological adaptation.
An individual's comfort level in a given environment may change and adapt over time due to psychological factors. Subjective perception of thermal comfort may be influenced by the memory of previous experiences. Habituation takes place when repeated exposure moderates future expectations, and responses to sensory input. This is an important factor in explaining the difference between field observations and PMV predictions (based on the static model) in naturally ventilated buildings. In these buildings, the relationship with the outdoor temperatures has been twice as strong as predicted.
Psychological adaptation is subtly different in the static and adaptive models. Laboratory tests of the static model can identify and quantify non-heat transfer (psychological) factors that affect reported comfort. The adaptive model is limited to reporting differences (called psychological) between modeled and reported comfort.
Thermal comfort as a "condition of mind" is "defined" in psychological terms. Among the factors that affect the condition of mind (in the laboratory) are a sense of control over the temperature, knowledge of the temperature and the appearance of the (test) environment. A thermal test chamber that appeared residential "felt" warmer than one which looked like the inside of a refrigerator.
Physiological adaptation.
The body has several thermal adjustment mechanisms to survive in drastic temperature environments. In a cold environment the body utilizes vasoconstriction; which reduces blood flow to the skin, skin temperature and heat dissipation. In a warm environment, vasodilation will increase blood flow to the skin, heat transport, and skin temperature and heat dissipation. If there is an imbalance despite the vasomotor adjustments listed above, in a warm environment sweat production will start and provide evaporative cooling. If this is insufficient, hyperthermia will set in, body temperature may reach , and heat stroke may occur. In a cold environment, shivering will start, involuntarily forcing the muscles to work and increasing the heat production by up to a factor of 10. If equilibrium is not restored, hypothermia can set in, which can be fatal. Long-term adjustments to extreme temperatures, of a few days to six months, may result in cardiovascular and endocrine adjustments. A hot climate may create increased blood volume, improving the effectiveness of vasodilation, enhanced performance of the sweat mechanism, and the readjustment of thermal preferences. In cold or underheated conditions, vasoconstriction can become permanent, resulting in decreased blood volume and increased body metabolic rate.
Behavioral adaptation.
In naturally ventilated buildings, occupants take numerous actions to keep themselves comfortable when the indoor conditions drift towards discomfort. Operating windows and fans, adjusting blinds/shades, changing clothing, and consuming food and drinks are some of the common adaptive strategies. Among these, adjusting windows is the most common. Those occupants who take these sorts of actions tend to feel cooler at warmer temperatures than those who do not.
The behavioral actions significantly influence energy simulation inputs, and researchers are developing behavior models to improve the accuracy of simulation results. For example, there are many window-opening models that have been developed to date, but there is no consensus over the factors that trigger window opening.
People might adapt to seasonal heat by becoming more nocturnal, doing physical activity and even conducting business at night.
Specificity and sensitivity.
Individual differences.
The thermal sensitivity of an individual is quantified by the descriptor "F""S", which takes on higher values for individuals with lower tolerance to non-ideal thermal conditions. This group includes pregnant women, the disabled, as well as individuals whose age is below fourteen or above sixty, i.e., outside the range considered adult. Existing literature provides consistent evidence that sensitivity to hot and cold surfaces usually declines with age. There is also some evidence of a gradual reduction in the effectiveness of the body in thermo-regulation after the age of sixty. This is mainly due to a more sluggish response of the counteraction mechanisms in lower parts of the body that are used to maintain the core temperature of the body at ideal values. Seniors prefer warmer temperatures than young adults (76 vs 72 degrees F or 24.4 vs 22.2 Celsius).
Situational factors include the health, psychological, sociological, and vocational activities of the persons.
Biological sex differences.
While thermal comfort preferences between sexes seem to be small, there are some average differences. Studies have found males on average report discomfort due to rises in temperature much earlier than females. Males on average also estimate higher levels of their sensation of discomfort than females. One recent study tested males and females in the same cotton clothing, performing mental jobs while using a dial vote to report their thermal comfort to the changing temperature.
Many times, females preferred higher temperatures than males. But while females tend to be more sensitive to temperatures, males tend to be more sensitive to relative-humidity levels.
An extensive field study was carried out in naturally ventilated residential buildings in Kota Kinabalu, Sabah, Malaysia. This investigation explored the sexes' thermal sensitivity to the indoor environment in non-air-conditioned residential buildings. Multiple hierarchical regression for a categorical moderator was selected for data analysis; the results showed that, as a group, females were slightly more sensitive than males to the indoor air temperatures, whereas, under thermal neutrality, males and females were found to have similar thermal sensation.
Regional differences.
In different areas of the world, thermal comfort needs may vary based on climate. In China the climate has hot humid summers and cold winters, causing a need for efficient thermal comfort. Energy conservation in relation to thermal comfort has become a large issue in China in the last several decades due to rapid economic and population growth. Researchers are now looking into ways to heat and cool buildings in China for lower costs and also with less harm to the environment.
In tropical areas of Brazil, urbanization is creating urban heat islands (UHI). These are urban areas that have risen over the thermal comfort limits due to a large influx of people and only drop within the comfortable range during the rainy season. Urban heat islands can occur over any urban city or built-up area with the correct conditions.
In the hot, humid region of Saudi Arabia, the issue of thermal comfort has been important in mosques. Because they are very large open buildings that are used only intermittently (very busy for the noon prayer on Fridays), it is hard to ventilate them properly. The large size requires a large amount of ventilation, which requires a lot of energy since the buildings are used only for short periods of time. Temperature regulation in mosques is a challenge due to the intermittent demand, leading to many mosques being either too hot or too cold. The stack effect also comes into play due to their large size and creates a large layer of hot air above the people in the mosque. New designs have placed the ventilation systems lower in the buildings to provide more temperature control at ground level. New monitoring steps are also being taken to improve efficiency.
Thermal stress.
The concept of thermal comfort is closely related to thermal stress. This attempts to predict the impact of solar radiation, air movement, and humidity for military personnel undergoing training exercises or athletes during competitive events. Several thermal stress indices have been proposed, such as the Predicted Heat Strain (PHS) or the humidex. Generally, humans do not perform well under thermal stress. People's performance under thermal stress is about 11% lower than their performance under normal thermal wet conditions. Also, human performance in relation to thermal stress varies greatly by the type of task which the individual is completing. Some of the physiological effects of thermal heat stress include increased blood flow to the skin, sweating, and increased ventilation.
Predicted Heat Strain (PHS).
The PHS model, developed by the International Organization for Standardization (ISO) committee, allows the analytical evaluation of the thermal stress experienced by a working subject in a hot environment. It describes a method for predicting the sweat rate and the internal core temperature that the human body will develop in response to the working conditions. The PHS is calculated as a function of several physical parameters; consequently, it makes it possible to determine which parameter or group of parameters should be modified, and to what extent, in order to reduce the risk of physiological strain. The PHS model does not predict the physiological response of an individual subject, but only considers standard subjects in good health and fit for the work they perform. The PHS can be determined using either the Python package pythermalcomfort or the R package comf.
American Conference on Governmental Industrial Hygienists (ACGIH) Action Limits and Threshold Limit Values.
ACGIH has established Action Limits and Threshold Limit Values for heat stress based upon the estimated metabolic rate of a worker and the environmental conditions the worker is subjected to.
This methodology has been adopted by the Occupational Safety and Health Administration (OSHA) as an effective method of assessing heat stress within workplaces.
Research.
The factors affecting thermal comfort were explored experimentally in the 1970s. Many of these studies led to the development and refinement of ASHRAE Standard 55 and were performed at Kansas State University by Ole Fanger and others. Perceived comfort was found to be a complex interaction of these variables. It was found that the majority of individuals would be satisfied by an ideal set of values. As the range of values deviated progressively from the ideal, fewer and fewer people were satisfied. This observation could be expressed statistically as the percent of individuals who expressed satisfaction by "comfort conditions" and the "predicted mean vote" (PMV). This approach was challenged by the adaptive comfort model, developed from the ASHRAE 884 project, which revealed that occupants were comfortable in a broader range of temperatures.
This research is applied to create Building Energy Simulation (BES) programs for residential buildings. Residential buildings in particular can vary much more in thermal comfort than public and commercial buildings. This is due to their smaller size, the variations in clothing worn, and different uses of each room. The main rooms of concern are bathrooms and bedrooms. Bathrooms need to be at a temperature comfortable for a human with or without clothing. Bedrooms are of importance because they need to accommodate different levels of clothing and also different metabolic rates of people asleep or awake. Discomfort hours is a common metric used to evaluate the thermal performance of a space.
Thermal comfort research in clothing is currently being done by the military. New air-ventilated garments are being researched to improve evaporative cooling in military settings. Some models are being created and tested based on the amount of cooling they provide.
In the last twenty years, researchers have also developed advanced thermal comfort models that divide the human body into many segments, and predict local thermal discomfort by considering heat balance. This has opened up a new arena of thermal comfort modeling that aims at heating/cooling selected body parts.
Another area of study is the hue-heat hypothesis, which states that an environment with warm colors (red, orange, and yellow hues) will feel warmer in terms of temperature and comfort, while an environment with cold colors (blue and green hues) will feel cooler. The hue-heat hypothesis has both been investigated scientifically and become ingrained in popular culture through the terms warm and cold colors.
Medical environments.
Studies that have discussed the thermal conditions for different groups of occupants in one room have ended up simply presenting comparisons of thermal comfort satisfaction based on subjective surveys. No study has tried to reconcile the different thermal comfort requirements of different types of occupants who must stay in the same room. Therefore, it appears necessary to investigate the different thermal conditions required by different groups of occupants in hospitals in order to reconcile their different requirements. To reconcile these differences, it is recommended to test the possibility of using different ranges of local radiant temperature in one room via a suitable mechanical system.
Although various studies have been undertaken on thermal comfort for patients in hospitals, it is also necessary to study the effects of thermal comfort conditions on the quality and quantity of healing for patients in hospitals. There is also original research showing the link between thermal comfort for staff and their levels of productivity, but no such studies have been carried out specifically in hospitals. Therefore, dedicated research on this subject is recommended. Research on cooling and heating delivery systems for patients with low levels of immune-system protection (such as HIV patients, burn patients, etc.) is also recommended. Important areas that still need attention include thermal comfort for staff and its relation to their productivity, and the use of different heating systems to prevent hypothermia in the patient while simultaneously improving thermal comfort for hospital staff.
Finally, the interaction between people, systems, and architectural design in hospitals is a field that requires further work to improve the knowledge of how to design buildings and systems that reconcile the many conflicting factors affecting the people occupying these buildings.
Personal comfort systems.
Personal comfort systems (PCS) refer to devices or systems which heat or cool a building occupant personally. This concept is best appreciated in contrast to central HVAC systems which have uniform temperature settings for extensive areas. Personal comfort systems include fans and air diffusers of various kinds (e.g. desk fans, nozzles and slot diffusers, overhead fans, high-volume low-speed fans etc.) and personalized sources of radiant or conductive heat (footwarmers, legwarmers, hot water bottles etc.). PCS has the potential to satisfy individual comfort requirements much better than current HVAC systems, as interpersonal differences in thermal sensation due to age, sex, body mass, metabolic rate, clothing and thermal adaptation can amount to an equivalent temperature variation of 2–5 K, which is impossible for a central, uniform HVAC system to cater to. Besides, research has shown that the perceived ability to control one's thermal environment tends to widen one's range of tolerable temperatures. Traditionally, PCS devices have been used in isolation from one another. However, it has been proposed by Andersen et al. (2016) that a network of PCS devices which generate well-connected microzones of thermal comfort, and report real-time occupant information and respond to programmatic actuation requests (e.g. a party, a conference, a concert etc.) can combine with occupant-aware building applications to enable new methods of comfort maximization.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " SET(t_{a}, t_{r}, v, met, clo, RH) = SET(t_{a} - CE, t_{r} - CE, v = 0.1, met, clo, RH) "
}
]
| https://en.wikipedia.org/wiki?curid=7455643 |
7455889 | Zero object (algebra) | Algebraic structure with only one element
In algebra, the zero object of a given algebraic structure is, in the sense explained below, the simplest object of such structure. As a set it is a singleton, and as a magma has a trivial structure, which is also an abelian group. The aforementioned abelian group structure is usually identified as addition, and the only element is called zero, so the object itself is typically denoted as {0}. One often refers to "the" trivial object (of a specified category) since every trivial object is isomorphic to any other (under a unique isomorphism).
Instances of the zero object include, but are not limited to the following:
These objects are described jointly not only based on the common singleton and trivial group structure, but also because of shared category-theoretical properties.
In the last three cases the scalar multiplication by an element of the base ring (or field) is defined as:
"κ"0 = 0, where "κ" ∈ "R".
The most general of them, the zero module, is a finitely-generated module with an empty generating set.
For structures requiring the multiplication structure inside the zero object, such as the trivial ring, there is only one possible, 0 × 0 = 0, because there are no non-zero elements. This structure is associative and commutative. A ring R which has both an additive and multiplicative identity is trivial if and only if 1 = 0, since this equality implies that for all r within R,
formula_0
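This one-line argument can be machine-checked; a sketch in Lean 4 with Mathlib is shown below (the lemma names mul_one and mul_zero are standard, and the full Mathlib import is used only for convenience).

```lean
import Mathlib

-- In any ring where 1 = 0, every element equals 0, so the ring is trivial.
theorem all_zero_of_one_eq_zero {R : Type*} [Ring R] (h : (1 : R) = 0) (r : R) :
    r = 0 := by
  rw [← mul_one r, h, mul_zero]
```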
In this case it is possible to define division by zero, since the single element is its own multiplicative inverse. Some properties of {0} depend on the exact definition of the multiplicative identity; see "Unital structures" below.
Any trivial algebra is also a trivial ring. A trivial algebra over a field is simultaneously a zero vector space considered below. Over a commutative ring, a trivial algebra is simultaneously a zero module.
The trivial ring is an example of a rng of square zero. A trivial algebra is an example of a zero algebra.
The zero-dimensional <templatestyles src="Template:Visible anchor/styles.css" />vector space is an especially ubiquitous example of a zero object, a vector space over a field with an empty basis. It therefore has dimension zero. It is also a trivial group over addition, and a "trivial module" mentioned above.
Properties.
The zero ring, zero module and zero vector space are the zero objects of, respectively, the category of pseudo-rings, the category of modules and the category of vector spaces. However, the zero ring is not a zero object in the category of rings, since there is no ring homomorphism of the zero ring in any other ring.
The zero object, by definition, must be a terminal object, which means that a morphism "A" → {0} must exist and be unique for an arbitrary object A. This morphism maps any element of A to 0.
The zero object, also by definition, must be an initial object, which means that a morphism {0} → "A" must exist and be unique for an arbitrary object A. This morphism maps 0, the only element of {0}, to the zero element 0 ∈ "A", called the zero vector in vector spaces. This map is a monomorphism, and hence its image is isomorphic to {0}. For modules and vector spaces, this subset {0} ⊂ "A" is the only empty-generated submodule (or 0-dimensional linear subspace) in each module (or vector space) A.
Unital structures.
The {0} object is a terminal object of any algebraic structure where it exists, as described in the examples above. But its existence and, if it exists, its property of being an initial object (and hence a "zero object" in the category-theoretical sense) depend on the exact definition of the multiplicative identity 1 in the specified structure.
If the definition of 1 requires that 1 ≠ 0, then the {0} object cannot exist because it may contain only one element. In particular, the zero ring is not a field. If mathematicians sometimes talk about a field with one element, this abstract and somewhat mysterious mathematical object is not a field.
In categories where the multiplicative identity must be preserved by morphisms, but can equal zero, the {0} object can exist, but not as an initial object, because identity-preserving morphisms from {0} to any object where 1 ≠ 0 do not exist. For example, in the category of rings Ring the ring of integers Z is the initial object, not {0}.
If an algebraic structure requires the multiplicative identity, but neither its preservation by morphisms nor 1 ≠ 0, then zero morphisms exist and the situation is not different from non-unital structures considered in the previous section.
Notation.
Zero vector spaces and zero modules are usually denoted by 0 (instead of {0}). This is always the case when they occur in an exact sequence. | [
{
"math_id": 0,
"text": "r = r \\times 1 = r \\times 0 = 0 ."
}
]
| https://en.wikipedia.org/wiki?curid=7455889 |
74566624 | FGX | FGX can refer to:
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This disambiguation page lists articles associated with the title FGX.
{
"math_id": 0,
"text": "FGX"
}
]
| https://en.wikipedia.org/wiki?curid=74566624 |
74567 | Dynamic random-access memory | Type of computer memory
Dynamic random-access memory (dynamic RAM or DRAM) is a type of random-access semiconductor memory that stores each bit of data in a memory cell, usually consisting of a tiny capacitor and a transistor, both typically based on metal–oxide–semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external "memory refresh" circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence.
DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the "main memory" (colloquially called the "RAM") in modern computers and graphics cards (where the "main memory" is called the "graphics memory"). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors.
The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power and a variety of techniques are used to manage the overall power consumption.
DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down. In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — Micron Technology, SK Hynix and Samsung Electronics" that are "keeping a pretty tight rein on their capacity". There is also Kioxia (previously Toshiba Memory Corporation after 2017 spin-off) which doesn't manufacture DRAM. Other manufacturers make and sell DIMMs (but not the DRAM chips in them), such as Kingston Technology, and some manufacturers that sell stacked DRAM (used e.g. in the fastest supercomputers on the exascale), separately such as Viking Technology. Others sell such integrated into other products, such as Fujitsu into its CPUs, AMD in GPUs, and Nvidia, with HBM2 in some of their GPU chips.
History.
The cryptanalytic machine code-named "Aquarius" used at Bletchley Park during World War II incorporated a hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store. ... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')".
Toshiba invented and introduced a dynamic RAM for its electronic calculator, the Toscal BC-1411, which was introduced in November 1965; it used a form of capacitive DRAM (180 bit) built from discrete bipolar memory cells.
In 1967, Tomohisa Yoshimaru and Hiroshi Komikawa from Toshiba applied for an American patent of the concept with a priority of May, 1966 due to an early Japanese application.
The earliest forms of DRAM mentioned above used bipolar transistors. While it offered improved performance over magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.
In 1966, Dr. Robert Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory and was trying to create an alternative to SRAM which required six MOS transistors for each bit of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of the single-transistor MOS DRAM memory cell. He filed a patent in 1967, and was granted U.S. patent number 3,387,286 in 1968. MOS memory offered higher performance, was cheaper, and consumed less power, than magnetic-core memory.
MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc of Sunnyvale, CA. This 1024 bit chip was sold to Honeywell, Raytheon, Wang Laboratories, and others.
The same year, Honeywell asked Intel to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia. MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s.
The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16 Kbit density, the cost advantage increased; the 16 Kbit Mostek MK4116 DRAM, introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated the US and worldwide markets during the 1980s and 1990s.
Early in 1985, Gordon Moore decided to withdraw Intel from producing DRAM.
By 1986, many, but not all, United States chip makers had stopped making DRAMs. Micron Technology and Texas Instruments continued to produce them commercially, and IBM produced them for internal use.
In 1985, when 64K DRAM memory chips were the most common memory chips used in computers, and when more than 60 percent of those chips were produced by Japanese companies, semiconductor makers in the United States accused Japanese companies of export dumping for the purpose of driving makers in the United States out of the commodity memory chip business. Prices for the 64K product plummeted to as low as 35 cents apiece from $3.50 within 18 months, with disastrous financial consequences for some U.S. firms. On 4 December 1985 the US Commerce Department's International Trade Administration ruled in favor of the complaint.
Synchronous dynamic random-access memory (SDRAM) was developed by Samsung. The first commercial SDRAM chip was the Samsung KM48SL2000, which had a capacity of 16Mb, and was introduced in 1992. The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64Mb DDR SDRAM chip, released in 1998.
Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping.
In 2002, US computer makers made claims of DRAM price fixing.
Principles of operation.
DRAM is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and transistor per data bit. The figure to the right shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width.
The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the "+" and "−" bit lines.
A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in positive feedback which stabilizes after one bit-line is fully at its highest voltage and the other bit-line is at the lowest possible voltage.
To write to memory.
To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at stable voltage even after the forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right.
Refresh rate.
Typically, manufacturers specify that each row must be refreshed every 64 ms or less, as defined by the JEDEC standard.
Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time staggered throughout the 64 ms interval. For example, a system with 213 = 8,192 rows would require a staggered refresh rate of one row every 7.8 μs which is 64 ms divided by 8,192 rows. A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment.
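The staggered-refresh arithmetic in the example above is straightforward; the small sketch below mirrors the figures in the text and is purely illustrative.

```python
def staggered_refresh_interval_us(retention_ms=64.0, rows=8192):
    """Per-row refresh interval when refreshes are spread evenly over the retention window."""
    return retention_ms * 1000.0 / rows

print(staggered_refresh_interval_us())            # 7.8125 µs per row (64 ms / 8,192 rows)
print(staggered_refresh_interval_us(rows=2**16))  # ~0.98 µs per row for a 65,536-row device
```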
The row address of the row that will be refreshed next is maintained by external logic or a counter within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This is done to minimize conflicts with memory accesses, since such a system has both knowledge of the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct the DRAM to refresh or to provide a row address.
Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.
Memory timing.
Many parameters are required to fully describe the timing of DRAM operation. Here are some examples for two timing grades of asynchronous DRAM, from a data sheet published in 1998:
Thus, the generally quoted number is the minimum /RAS low time. This is the time to open a row, allowing the sense amplifiers to settle. Note that the data access for a bit in the row is shorter, since that happens as soon as the sense amplifier has settled, but the DRAM requires additional time to propagate the amplified data back to recharge the cells. The time to read additional bits from an open page is much less, defined by the /CAS to /CAS cycle time. The quoted number is the clearest way to compare between the performance of different DRAM memories, as it sets the slower limit regardless of the row length or page size. Bigger arrays forcibly result in larger bit line capacitance and longer propagation delays, which cause this time to increase as the sense amplifier settling time is dependent on both the capacitance as well as the propagation latency. This is countered in modern DRAM chips by instead integrating many more complete DRAM arrays within a single chip, to accommodate more capacity without becoming too slow.
When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as "5-2-2-2" timing, as bursts of four reads within a page were common.
When describing synchronous memory, timing is described by clock cycle counts separated by hyphens. These numbers represent "t"CL-"t"RCD-"t"RP-"t"RAS in multiples of the DRAM clock cycle time. Note that this is half of the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is 3-4-4-8 with a 200 MHz clock, while premium-priced high performance PC3200 DDR DRAM DIMM might be operated at 2-2-2-5 timing.
Minimum random access time has improved from "t"RAC = 50 ns to "t"RCD + "t"CL = 22.5 ns, and even the premium 20 ns variety is only 2.5 times better compared to the typical case (~2.22 times better). CAS latency has improved even less, from "t"CAC = 13 ns to 10 ns. However, the DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1,600 Mword/s), while the EDO DRAM can output one word per "t"PC = 20 ns (50 Mword/s).
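The cycle-rounding and the bandwidth comparison above can be reproduced with simple arithmetic; the sketch below uses only the figures quoted in this section and is purely illustrative.

```python
import math

def cycles(t_ns, clock_mhz):
    """Round an asynchronous access time up to whole cycles of a synchronous controller clock."""
    return math.ceil(t_ns * clock_mhz / 1000.0)

# 50 ns first access and 20 ns page-cycle time seen by a 100 MHz (10 ns) state machine:
print([cycles(50, 100)] + [cycles(20, 100)] * 3)  # [5, 2, 2, 2] -> the "5-2-2-2" burst

# Bandwidth comparison: one word per 20 ns versus two words per 1.25 ns
print(1 / 20e-9)    # 5.0e7 words/s  (50 Mword/s, EDO DRAM)
print(2 / 1.25e-9)  # 1.6e9 words/s  (32 times higher, DDR3)
```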
Memory cell design.
Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing the capacitance, as well as the transistors that control access to it, is collectively referred to as a "DRAM cell". They are the fundamental building block in DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge the capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, pg. 34).
The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or VCC/2. In modern DRAMs, the latter case is more common, since it allows faster operation. In modern DRAMs, a voltage of +VCC/2 across the capacitor is required to store a logic one; and a voltage of -VCC/2 across the capacitor is required to store a logic zero. The electrical charge stored in the capacitor is measured in coulombs. For a logic one, the charge is: formula_0, where "Q" is the charge in coulombs and "C" is the capacitance in farads. A logic zero has a charge of: formula_1.
Reading or writing a logic one requires the wordline is driven to a voltage greater than the sum of VCC and the access transistor's threshold voltage (VTH). This voltage is called "VCC pumped" (VCCP). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal is above VCCP. If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above VTH.
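To put the stored charge in perspective, here is a worked example with assumed (typical, not quoted) values for the cell capacitance and supply voltage:

```python
C_cell = 25e-15     # storage capacitance, ~25 fF (assumed typical value)
V_cc   = 1.2        # supply voltage in volts (assumed)
e      = 1.602e-19  # elementary charge in coulombs

q_one = C_cell * (V_cc / 2)   # charge for a logic one: Q = +C*VCC/2
print(q_one)                  # 1.5e-14 C, i.e. 15 fC
print(round(q_one / e))       # ~94,000 electrons separate a stored 1 from a 0
```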
Capacitor design.
Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), thus they were referred to as "planar" capacitors. The drive to increase both density and, to a lesser extent, performance, required denser designs. This was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as "stacked" or "folded plate" capacitors. Those with capacitors buried beneath the substrate surface are referred to as "trench" capacitors. In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs, and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs, from major manufacturers such as Hynix, Micron Technology, and Samsung Electronics, use the stacked capacitor structure, whereas smaller manufacturers such as Nanya Technology use the trench capacitor structure (Jacob, pp. 355–357).
The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched in between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of the stacked capacitor, based on its location relative to the bitline—capacitor-over-bitline (COB) and capacitor-under-bitline (CUB). In a former variation, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter variation, the capacitor is constructed above the bitline, which is almost always made of polysilicon, but is otherwise identical to the COB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface are at or near the minimum feature size of the process technology (Kenner, pp. 33–42).
The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding the hole is then heavily doped to produce a buried n+ plate and to reduce resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp. 42–44). A trench capacitor's depth-to-width ratio in DRAMs of the mid-2000s can exceed 50:1 (Jacob, p. 357).
Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357). Alternatively, the capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, pg. 44). Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which have many levels of interconnect above the substrate. The fact that the capacitor is under the logic means that it is constructed before the transistors are. This allows high-temperature processes to fabricate the capacitors, which would otherwise be degrading the logic transistors and their performance. This makes trench capacitors suitable for constructing embedded DRAM (eDRAM) (Jacob, p. 357). Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, pg. 44).
Historical cell designs.
First-generation DRAM ICs (those with capacities of 1 Kbit), of which the first was the Intel 1103, used a three-transistor, one-capacitor (3T1C) DRAM cell. By the second generation, the requirement to reduce cost by fitting the same number of bits in a smaller area led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 Kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p. 6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage is that the 3T1C cell has separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459).
Proposed cell designs.
The one-transistor, zero-capacitor (1T, or 1T0C) DRAM cell has been a topic of research since the late-1990s. "1T DRAM" is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as "1T DRAM", particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s.
In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to silicon on insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits since they are constructed with the same SOI process technologies.
Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor. Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs: the commercialized Z-RAM from Innovative Silicon, the TTRAM from Renesas and the A-RAM from the UGR/CNRS consortium.
Array structures.
DRAM cells are laid out in a regular rectangular, grid-like pattern to facilitate their control and access via wordlines and bitlines. The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area. DRAM cell area is given as "n"F2, where "n" is a number derived from the DRAM cell design, and "F" is the smallest feature size of a given process technology. This scheme permits comparison of DRAM size over different process technology generations, as DRAM cell area scales at linear or near-linear rates with respect to feature size. The typical area for modern DRAM cells varies between 6–8 F2.
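As a worked illustration of the "n"F2 convention, the absolute cell area follows directly from the cell factor and the feature size. The sketch below is purely illustrative; the cell factors and feature sizes are assumed example values, not figures for any particular product.
<syntaxhighlight lang="python">
# Illustrative arithmetic for the n*F^2 cell-area convention.
# The cell factors and feature sizes below are assumed example values.

def cell_area_nm2(n: float, feature_size_nm: float) -> float:
    """Area in nm^2 of an n*F^2 DRAM cell at feature size F (in nm)."""
    return n * feature_size_nm ** 2

for f_nm in (90, 45, 20):            # assumed process feature sizes
    for n in (6, 8):                 # the 6-8 F^2 range quoted above
        print(f"F = {f_nm:2d} nm, {n}F^2 cell: {cell_area_nm2(n, f_nm):7.0f} nm^2")
</syntaxhighlight>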
The horizontal wire, the wordline, is connected to the gate terminal of every access transistor in its row. The vertical bitline is connected to the source terminal of the transistors in its column. The lengths of the wordlines and bitlines are limited. The wordline length is limited by the desired performance of the array, since the propagation time of the signal that must traverse the wordline is determined by the RC time constant. The bitline length is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline). Bitline length is also limited by the amount of operating current the DRAM can draw and by how power can be dissipated, since these two characteristics are largely determined by the charging and discharging of the bitline.
Bitline architecture.
Sense amplifiers are required to read the state contained in the DRAM cells. When the access transistor is activated, the electrical charge in the capacitor is shared with the bitline. The bitline's capacitance is much greater than that of the capacitor (approximately ten times). Thus, the change in bitline voltage is minute. Sense amplifiers are required to resolve the voltage differential into the levels specified by the logic signaling system. Modern DRAMs use differential sense amplifiers, and these are accompanied by requirements as to how the DRAM arrays are constructed. Differential sense amplifiers work by driving their outputs to opposing extremes based on the relative voltages on pairs of bitlines. The sense amplifiers function effectively and efficiently only if the capacitance and voltages of these bitline pairs are closely matched. Besides ensuring that the lengths of the bitlines and the number of DRAM cells attached to them are equal, two basic architectures for array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays.
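The size of the signal the sense amplifier must resolve can be estimated with the elementary charge-sharing relation, in which the bitline voltage change equals the cell-to-bitline capacitance ratio times the difference between the stored cell voltage and the bitline precharge voltage. The sketch below is illustrative only; the capacitance and voltage figures are assumed order-of-magnitude values, not the specification of any actual device.
<syntaxhighlight lang="python">
# Charge-sharing estimate of the bitline voltage swing seen by a sense amplifier.
# All component values are assumed, order-of-magnitude figures for illustration.

C_CELL = 30e-15          # storage capacitor, farads (assumed)
C_BITLINE = 10 * C_CELL  # bitline capacitance ~10x the cell, per the text
VDD = 1.5                # supply voltage, volts (assumed)
V_PRECHARGE = VDD / 2    # bitlines are typically precharged to VDD/2

def bitline_swing(v_cell: float) -> float:
    """Voltage change on the bitline after charge sharing with the cell."""
    return (v_cell - V_PRECHARGE) * C_CELL / (C_CELL + C_BITLINE)

print(f"stored '1' (cell at {VDD} V): dV = {bitline_swing(VDD) * 1e3:+.1f} mV")
print(f"stored '0' (cell at 0 V):   dV = {bitline_swing(0.0) * 1e3:+.1f} mV")
</syntaxhighlight>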
Open bitline arrays.
The first generation (1 Kbit) DRAM ICs, up until the 64 Kbit generation (and some 256 Kbit generation devices) had open bitline array architectures. In these architectures, the bitlines are divided into multiple segments, and the differential sense amplifiers are placed in between bitline segments. Because the sense amplifiers are placed between bitline segments, to route their outputs outside the array, an additional layer of interconnect placed above those used to construct the wordlines and bitlines is required.
The DRAM cells that are on the edges of the array do not have adjacent segments. Since the differential sense amplifiers require identical capacitance and bitline lengths from both segments, dummy bitline segments are provided. The advantage of the open bitline array is a smaller array area, although this advantage is slightly diminished by the dummy bitline segments. The disadvantage that caused the near disappearance of this architecture is the inherent vulnerability to noise, which affects the effectiveness of the differential sense amplifiers. Since each bitline segment does not have any spatial relationship to the other, it is likely that noise would affect only one of the two bitline segments.
Folded bitline arrays.
The folded bitline array architecture routes bitlines in pairs throughout the array. The close proximity of the paired bitlines provides superior common-mode noise rejection characteristics over open bitline arrays. The folded bitline array architecture began appearing in DRAM ICs during the mid-1980s, beginning with the 256 Kbit generation. This architecture is favored in modern DRAM ICs for its superior noise immunity.
This architecture is referred to as "folded" because, viewed from the perspective of the circuit schematic, it takes its basis from the open array architecture. The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share a single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids.
The location where the bitline twists occupies additional area. To minimize area overhead, engineers select the simplest and most area-minimal twisting scheme that is able to reduce noise under the specified limit. As process technology improves to reduce minimum feature sizes, the signal-to-noise problem worsens, since coupling between adjacent metal wires is inversely proportional to their pitch. The array folding and bitline twisting schemes that are used must increase in complexity in order to maintain sufficient noise reduction. Schemes that have desirable noise immunity characteristics for a minimal impact on area are the topic of current research (Kenner, p. 37).
Future array architectures.
Advances in process technology could result in open bitline array architectures being favored if they are able to offer better long-term area efficiencies, since folded array architectures require increasingly complex folding schemes to match any advance in process technology. The relationship between process technology, array architecture, and area efficiency is an active area of research.
Row and column redundancy.
The first DRAM integrated circuits did not have any redundancy. An integrated circuit with a defective DRAM cell would be discarded. Beginning with the 64 Kbit generation, DRAM arrays have included spare rows and columns to improve yields. Spare rows and columns provide tolerance of minor fabrication defects which have caused a small number of rows or columns to be inoperable. The defective rows and columns are physically disconnected from the rest of the array by triggering a programmable fuse or by cutting the wire by a laser. The spare rows or columns are substituted in by remapping logic in the row and column decoders (Jacob, pp. 358–361).
Error detection and correction.
Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read/write them.
The problem can be mitigated by using redundant memory bits and additional circuitry that use these bits to detect and correct soft errors. In most cases, the detection and correction are performed by the memory controller; sometimes, the required logic is transparently implemented within DRAM chips or modules, enabling the ECC memory functionality for otherwise ECC-incapable systems. The extra memory bits are used to record parity and to enable missing data to be reconstructed by error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected.
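A minimal sketch of the SECDED idea is given below, using a toy Hamming(7,4) code extended with an overall parity bit. This is only an illustration of the principle; real DRAM ECC operates on much wider words (for example, 64 data bits protected by 8 check bits), and the bit layout chosen here is an assumption for the example rather than the encoding used on actual modules.
<syntaxhighlight lang="python">
# Toy SECDED (single-error-correct, double-error-detect) code: Hamming(7,4)
# plus one overall parity bit.  Real DRAM ECC uses much wider words.

def encode(data4):
    """data4: list of 4 bits -> 8-bit codeword [overall parity, c1..c7]."""
    d = data4
    c = [0] * 8                     # indices 1..7 = Hamming codeword, 0 = overall parity
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]       # parity over positions 1, 3, 5, 7
    c[2] = c[3] ^ c[6] ^ c[7]       # parity over positions 2, 3, 6, 7
    c[4] = c[5] ^ c[6] ^ c[7]       # parity over positions 4, 5, 6, 7
    c[0] = sum(c[1:]) % 2           # overall parity over the 7 Hamming bits
    return c

def decode(c):
    """Return (status, possibly corrected codeword)."""
    s = 0                           # syndrome = position of a single flipped bit
    if c[1] ^ c[3] ^ c[5] ^ c[7]: s += 1
    if c[2] ^ c[3] ^ c[6] ^ c[7]: s += 2
    if c[4] ^ c[5] ^ c[6] ^ c[7]: s += 4
    overall_ok = (sum(c) % 2 == 0)
    if s == 0 and overall_ok:
        return "no error", c
    if not overall_ok:              # odd number of flips: assume single, correct it
        fixed = list(c)
        fixed[s] ^= 1               # s == 0 means the overall parity bit itself flipped
        return "single error corrected", fixed
    return "double error detected", c   # s != 0 but overall parity still even

word = encode([1, 0, 1, 1])
corrupted = list(word); corrupted[5] ^= 1   # flip one stored bit
print(decode(corrupted)[0])                 # -> single error corrected
corrupted[6] ^= 1                           # flip a second bit
print(decode(corrupted)[0])                 # -> double error detected
</syntaxhighlight>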
Recent studies give widely varying error rates with over seven orders of magnitude difference, ranging from 10^−10 to 10^−17 errors/bit·h, i.e. roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory. The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard rather than soft errors and that trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data. A 2010 study at the University of Rochester also gave evidence that a substantial fraction of memory errors are intermittent hard errors. Large-scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: the 2011 study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months.
Security.
Data remanence.
Although dynamic memory is only specified and "guaranteed" to retain its contents when supplied with power and refreshed every short period of time (often 64 ms), the memory cell capacitors often retain their values for significantly longer time, particularly at low temperatures. Under some conditions most of the data in DRAM can be recovered even if it has not been refreshed for several minutes.
This property can be used to circumvent security and recover data stored in the main memory that is assumed to be destroyed at power-down. The computer could be quickly rebooted, and the contents of the main memory read out; or by removing a computer's memory modules, cooling them to prolong data remanence, then transferring them to a different computer to be read out. Such an attack was demonstrated to circumvent popular disk encryption systems, such as the open source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault. This type of attack against a computer is often called a cold boot attack.
Memory corruption.
Dynamic memory, by definition, requires periodic refresh. Furthermore, reading dynamic memory is a destructive operation, requiring a recharge of the storage cells in the row that has been read. If these processes are imperfect, a read operation can cause soft errors. In particular, there is a risk that some charge can leak between nearby cells, causing the refresh or read of one row to cause a "disturbance error" in an adjacent or even nearby row. The awareness of disturbance errors dates back to the first commercially available DRAM in the early 1970s (the Intel 1103). Despite the mitigation techniques employed by manufacturers, commercial researchers proved in a 2014 analysis that commercially available DDR3 DRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors. The associated side effect that led to observed bit flips has been dubbed "row hammer".
Packaging.
Memory module.
Dynamic RAM ICs can be packaged in molded epoxy cases, with an internal lead frame for interconnections between the silicon die and the package leads. The original IBM PC design used ICs, including those for DRAM, packaged in dual in-line packages (DIP), soldered directly to the main board or mounted in sockets. As memory density skyrocketed, the DIP package was no longer practical. For convenience in handling, several dynamic RAM integrated circuits may be mounted on a single memory module, allowing installation of 16-bit, 32-bit or 64-bit wide memory in a single unit, without the requirement for the installer to insert multiple individual integrated circuits. Memory modules may include additional devices for parity checking or error correction. Over the evolution of desktop computers, several standardized types of memory module have been developed. Laptop computers, game consoles, and specialized devices may have their own formats of memory modules not interchangeable with standard desktop parts for packaging or proprietary reasons.
Embedded.
DRAM that is integrated into an integrated circuit designed in a logic-optimized process (such as an application-specific integrated circuit, microprocessor, or an entire system on a chip) is called "embedded DRAM" (eDRAM). Embedded DRAM requires DRAM cell designs that can be fabricated without preventing the fabrication of fast-switching transistors used in high-performance logic, and modification of the basic logic-optimized process technology to accommodate the process steps required to build DRAM cell structures.
Versions.
Since the fundamental DRAM cell and array has maintained the same basic structure for many years, the types of DRAM are mainly distinguished by the many different interfaces for communicating with DRAM chips.
Asynchronous DRAM.
The original DRAM, now known by the retronym "asynchronous DRAM" was the first type of DRAM in use. From its origins in the late 1960s, it was commonplace in computing up until around 1997, when it was mostly replaced by "Synchronous DRAM". In the present day, manufacture of asynchronous RAM is relatively rare.
Principles of operation.
An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are four active-low control signals: RAS (row address strobe), CAS (column address strobe), WE (write enable), and OE (output enable).
This interface provides direct control of internal timing. When RAS is driven low, a CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and RAS must not be returned high until the storage cells have been refreshed. When RAS is driven high, it must be held high long enough for precharging to complete.
Although the DRAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle.
RAS Only Refresh.
Classic asynchronous DRAM is refreshed by opening each row in turn.
The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using RAS-only refresh (ROR), the row address to be refreshed is applied at the address input pins and RAS is pulsed low while CAS remains high; after the required amount of time, RAS returns high.
Because no CAS cycles are needed, an external counter is required to iterate over the row addresses in turn. In some designs, the CPU handled RAM refresh; among these, the Zilog Z80 is perhaps the best-known example, hosting a row counter in a processor register, R, and issuing a refresh cycle for the row addressed by R (incrementing the register afterwards) during every instruction fetch. Refreshes were thus interleaved with ordinary instruction execution. In other systems, especially home computers, refresh was often handled by the video circuitry, as it had to read from large areas of memory in any case, and performed refreshes as part of these operations.
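The required refresh cadence follows directly from the refresh interval and the row count, as the short sketch below shows; the 64 ms interval and the 8,192-row figure are typical, assumed example values rather than the parameters of a specific device.
<syntaxhighlight lang="python">
# Distributed-refresh arithmetic: how often a row-refresh cycle must be issued
# so that every row is visited within the retention interval.
# The 64 ms interval and 8,192-row count are typical, assumed example values.

REFRESH_INTERVAL_MS = 64
ROWS = 8192

per_row_us = REFRESH_INTERVAL_MS * 1000 / ROWS
print(f"one RAS-only refresh every {per_row_us:.2f} us")   # ~7.81 us

# Sketch of the external counter iterating over the row addresses:
row_counter = 0
for _ in range(ROWS):
    # pulse RAS low with `row_counter` on the address pins (hardware action)
    row_counter = (row_counter + 1) % ROWS
</syntaxhighlight>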
CAS before RAS refresh.
For convenience, the counter was quickly incorporated into the DRAM chips themselves. If the CAS line is driven low before RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as CAS-before-RAS (CBR) refresh. This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.
Hidden refresh.
Given support of CAS-before-RAS refresh, it is possible to deassert RAS while holding CAS low to maintain data output. If RAS is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as "hidden refresh".
Page mode DRAM.
Page mode DRAM is a minor modification to the first-generation DRAM IC interface which improved the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access a different column. In page mode DRAM, after a row was opened by holding RAS low, the row could be kept open, and multiple reads or writes could be performed to any of the columns in the row. Each column access was initiated by asserting CAS and presenting a column address. For reads, after a delay ("t"CAC), valid data would appear on the data out pins, which were held at high-Z before the appearance of valid data. For writes, the write enable signal and write data would be presented along with the column address.
Page mode DRAM was in turn later improved with a small modification which further reduced latency. DRAMs with this improvement were called fast page mode DRAMs (FPM DRAMs). In page mode DRAM, CAS was asserted before the column address was supplied. In FPM DRAM, the column address could be supplied while CAS was still deasserted. The column address propagated through the column address data path, but did not output data on the data pins until CAS was asserted. Prior to CAS being asserted, the data out pins were held at high-Z. FPM DRAM reduced "t"CAC latency. Fast page mode DRAM was introduced in 1986 and was used with Intel 80486.
"Static column" is a variant of fast page mode in which the column address does not need to be stored in, but rather, the address inputs may be changed with CAS held low, and the data output will be updated accordingly a few nanoseconds later.
"Nibble mode" is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth CAS edges; they are generated internally starting with the address supplied for the first CAS edge.
Extended data out DRAM.
Extended data out DRAM (EDO DRAM) was invented and patented in the 1990s by Micron Technology who then licensed technology to many other memory manufacturers. EDO RAM, sometimes referred to as "hyper page mode" enabled DRAM, is similar to fast page mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance. It is up to 30% faster than FPM DRAM, which it began to replace in 1995 when Intel introduced the 430FX chipset with EDO DRAM support. Irrespective of the performance gains, FPM and EDO SIMMs can be used interchangeably in many (but not all) applications.
To be precise, EDO DRAM begins data output on the falling edge of CAS but does not stop the output when CAS rises again. It holds the output valid (thus extending the data output time) until either RAS is deasserted, or a new CAS falling edge selects a different column address.
Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle; otherwise, each sequential RAM access within the same page takes two clock cycles instead of three, once the page has been selected. EDO's performance and capabilities created an opportunity to reduce the immense performance loss associated with a lack of L2 cache in low-cost, commodity PCs. This also benefited notebooks, which were constrained by their limited form factor and battery life. Additionally, for systems with an L2 cache, the availability of EDO memory improved the average memory latency seen by applications over earlier FPM implementations.
Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.
Burst EDO DRAM.
An evolution of EDO DRAM, burst EDO DRAM (BEDO DRAM), could process four memory addresses in one burst, for a maximum of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part accessed the data from the memory array to the output stage (second latch). The second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO.
Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment towards synchronous DRAM, or SDRAM. Even though BEDO RAM was superior to SDRAM in some ways, the latter technology quickly displaced BEDO.
Synchronous dynamic RAM.
Synchronous dynamic RAM (SDRAM) significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock.
The RAS and CAS inputs no longer act as strobes, but are instead, along with WE, part of a 3-bit command controlled by a new active-low strobe, "chip select" or CS. The combinations of these three signals encode commands such as activate (open a row), read, write, precharge (close a row), auto refresh, and load mode register.
The OE line's function is extended to a per-byte "DQM" signal, which controls data input (writes) in addition to data output (reads). This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes.
Many timing parameters remain under the control of the DRAM controller. For example, a minimum time must elapse between a row being activated and a read or write command. One important parameter must be programmed into the SDRAM chip itself, namely the CAS latency. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The "Load mode register" command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command.
The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of "bank address" which accompany each command, a second bank can be activated and begin reading data "while a read from the first bank is in progress". By alternating banks, an SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.
Single data rate synchronous DRAM.
Single data rate SDRAM (SDR SDRAM or SDR) is the original generation of SDRAM; it made a single transfer of data per clock cycle.
Double data rate synchronous DRAM.
Double data rate SDRAM (DDR SDRAM or DDR) was a later development of SDRAM, used in PC memory beginning in 2000. Subsequent versions are numbered sequentially ("DDR2", "DDR3", etc.). DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge. DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively. The internal access rate is mostly unchanged (200 million per second for DDR-400, DDR2-800 and DDR3-1600 memory), but each access transfers more data.
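The effect of the increasing prefetch factor on peak transfer rate can be made concrete with a short calculation. The sketch below assumes the standard 64-bit (8-byte) module width; the speed grades are the ones named above.
<syntaxhighlight lang="python">
# Peak transfer rates for the speed grades mentioned above, all of which share
# a 200 MHz internal access rate but differ in prefetch (2x, 4x and 8x words
# per internal access for DDR, DDR2 and DDR3).  A 64-bit (8-byte) module width
# is assumed, as on a standard DIMM.

INTERNAL_RATE_MHZ = 200
BUS_WIDTH_BYTES = 8

for name, prefetch in [("DDR-400", 2), ("DDR2-800", 4), ("DDR3-1600", 8)]:
    transfers_per_s = INTERNAL_RATE_MHZ * 1e6 * prefetch
    peak_gb_s = transfers_per_s * BUS_WIDTH_BYTES / 1e9
    print(f"{name:10s} {transfers_per_s / 1e6:5.0f} MT/s  peak {peak_gb_s:4.1f} GB/s")
</syntaxhighlight>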
Direct Rambus DRAM.
"Direct RAMBUS DRAM" ("DRDRAM") was developed by Rambus. First supported on motherboards in 1999, it was intended to become an industry standard, but was outcompeted by DDR SDRAM, making it technically obsolete by 2003.
Reduced Latency DRAM.
Reduced Latency DRAM (RLDRAM) is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications.
Graphics RAM.
Graphics RAMs are asynchronous and synchronous DRAMs designed for graphics-related tasks such as texture memory and framebuffers, found on video cards.
Video DRAM.
Video DRAM (VRAM) is a dual-ported variant of DRAM that was once commonly used to store the frame-buffer in some graphics adaptors.
Window DRAM.
Window DRAM (WRAM) is a variant of VRAM that was once used in graphics adaptors such as the Matrox Millennium and ATI 3D Rage Pro. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills.
Multibank DRAM.
Multibank DRAM (MDRAM) is a type of specialized DRAM developed by MoSys. It is constructed from small memory banks of 256 kB, which are operated in an interleaved fashion, providing bandwidths suitable for graphics cards at a lower cost than memories such as SRAM. MDRAM also allows operations to two banks in a single clock cycle, permitting multiple concurrent accesses to occur if the accesses are independent. MDRAM was primarily used in graphic cards, such as those featuring the Tseng Labs ET6x00 chipsets. Boards based upon this chipset often had the unusual capacity of 2.25 MB because of MDRAM's ability to be implemented more easily with such capacities. A graphics card with 2.25 MB of MDRAM had enough memory to provide 24-bit color at a resolution of 1024×768—a very popular setting at the time.
Synchronous graphics RAM.
Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
Graphics double data rate SDRAM.
Graphics double data rate SDRAM is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Their primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2020, there are seven, successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6 and GDDR6X.
Pseudostatic RAM.
Pseudostatic RAM (PSRAM or PSDRAM) is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM is used in the Apple iPhone and other embedded systems such as XFlar Platform.
Some DRAM components have a "self-refresh mode". While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller as is the case with the PSRAMs mentioned above.
An embedded variant of PSRAM was sold by MoSys under the name 1T-SRAM. It is a set of small DRAM banks with an SRAM cache in front to make it behave much like a true SRAM. It is used in Nintendo GameCube and Wii video game consoles.
Cypress Semiconductor's HyperRAM is a type of PSRAM supporting a JEDEC-compliant 8-pin HyperBus or Octal xSPI interface.
References.
<templatestyles src="Reflist/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "Q = {V_{CC} \\over 2} \\cdot C"
},
{
"math_id": 1,
"text": "Q = {-V_{CC} \\over 2} \\cdot C"
}
]
| https://en.wikipedia.org/wiki?curid=74567 |
745714 | Graphic matroid | Matroid with graph forests as independent sets
In the mathematical theory of matroids, a graphic matroid (also called a cycle matroid or polygon matroid) is a matroid whose independent sets are the forests in a given finite undirected graph. The dual matroids of graphic matroids are called co-graphic matroids or bond matroids. A matroid that is both graphic and co-graphic is sometimes called a planar matroid (but this should not be confused with matroids of rank 3, which generalize planar point configurations); these are exactly the graphic matroids formed from planar graphs.
Definition.
A matroid may be defined as a family of finite sets (called the "independent sets" of the matroid) that is closed under subsets and that satisfies the "exchange property": if sets formula_0 and formula_1 are both independent, and formula_0 is larger than formula_1, then there is an element formula_2 such that formula_3 remains independent. If formula_4 is an undirected graph, and formula_5 is the family of sets of edges that form forests in formula_4, then formula_5 is clearly closed under subsets (removing edges from a forest leaves another forest). It also satisfies the exchange property: if formula_0 and formula_1 are both forests, and formula_0 has more edges than formula_1, then it has fewer connected components, so by the pigeonhole principle there is a component formula_6 of formula_0 that contains vertices from two or more components of formula_1. Along any path in formula_6 from a vertex in one component of formula_1 to a vertex of another component, there must be an edge with endpoints in two components, and this edge may be added to formula_1 to produce a forest with more edges. Thus, formula_5 forms the independent sets of a matroid, called the graphic matroid of formula_4 or formula_7. More generally, a matroid is called graphic whenever it is isomorphic to the graphic matroid of a graph, regardless of whether its elements are themselves edges in a graph.
The bases of a graphic matroid formula_7 are the full spanning forests of formula_4, and the circuits of formula_7 are the simple cycles of formula_4. The rank in formula_7 of a set formula_8 of edges of a graph formula_4 is formula_9 where formula_10 is the number of vertices in the subgraph formed by the edges in formula_8 and formula_11 is the number of connected components of the same subgraph. The corank of the graphic matroid is known as the circuit rank or cyclomatic number.
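The forest characterization translates directly into an independence oracle: an edge set is independent exactly when no edge closes a cycle among the previously chosen ones, which a disjoint-set (union–find) structure detects, and the same bookkeeping yields the rank n − c described above. The sketch below is illustrative; the small example graph and all names are arbitrary.
<syntaxhighlight lang="python">
# Independence oracle and rank function of a graphic matroid M(G),
# using a union-find structure over the vertices touched by the edge set.

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path halving
        v = parent[v]
    return v

def rank_and_independent(vertices, edges):
    """Return (rank, is_independent) for an edge set of a graph on `vertices`.

    rank = n - c, where n counts vertices and c counts connected components
    of the subgraph; the edge set is independent iff it is a forest."""
    parent = {v: v for v in vertices}
    independent = True
    rank = 0
    for u, w in edges:
        ru, rw = find(parent, u), find(parent, w)
        if ru == rw:                    # this edge closes a cycle
            independent = False
        else:
            parent[ru] = rw
            rank += 1                   # each merge reduces c by one
    return rank, independent

V = {1, 2, 3, 4}
print(rank_and_independent(V, [(1, 2), (2, 3)]))          # (2, True)  - a forest
print(rank_and_independent(V, [(1, 2), (2, 3), (1, 3)]))  # (2, False) - contains a cycle
</syntaxhighlight>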
The lattice of flats.
The closure formula_12 of a set formula_13 of edges in formula_7 is a flat consisting of the edges that are not independent of formula_13 (that is, the edges whose endpoints are connected to each other by a path in formula_13). This flat may be identified with the partition of the vertices of formula_4 into the connected components of the subgraph formed by formula_13: Every set of edges having the same closure as formula_13 gives rise to the same partition of the vertices, and formula_12 may be recovered from the partition of the vertices, as it consists of the edges whose endpoints both belong to the same set in the partition. In the lattice of flats of this matroid, there is an order relation formula_14 whenever the partition corresponding to flat formula_15 is a refinement of the partition corresponding to flat formula_16.
In this aspect of graphic matroids, the graphic matroid for a complete graph formula_17 is particularly important, because it allows every possible partition of the vertex set to be formed as the set of connected components of some subgraph. Thus, the lattice of flats of the graphic matroid of formula_17 is naturally isomorphic to the lattice of partitions of an formula_10-element set. Since the lattices of flats of matroids are exactly the geometric lattices, this implies that the lattice of partitions is also geometric.
Representation.
The graphic matroid of a graph formula_4 can be defined as the column matroid of any oriented incidence matrix of formula_4. Such a matrix has one row for each vertex, and one column for each edge. The column for edge formula_18 has formula_19 in the row for one endpoint, formula_20 in the row for the other endpoint, and formula_21 elsewhere; the choice of which endpoint to give which sign is arbitrary. The column matroid of this matrix has as its independent sets the linearly independent subsets of columns.
If a set of edges contains a cycle, then the corresponding columns (multiplied by formula_20 if necessary to reorient the edges consistently around the cycle) sum to zero, and the edge set is not independent. Conversely, if a set of edges forms a forest, then by repeatedly removing leaves from this forest it can be shown by induction that the corresponding set of columns is independent. Therefore, the column matroid is isomorphic to formula_7.
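The combinatorial and linear-algebraic descriptions can be cross-checked numerically: the rank of the columns of an oriented incidence matrix selected by an edge set should equal n − c for that edge set. The following sketch, using NumPy and an arbitrary four-vertex example graph, is illustrative only.
<syntaxhighlight lang="python">
# Oriented incidence matrix of a small graph, and a check that the rank of a
# column subset matches the graphic-matroid rank n - c of that edge subset.
import numpy as np

vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # edges 0..3; edges 0,1,2 form a triangle

A = np.zeros((len(vertices), len(edges)))
for j, (u, w) in enumerate(edges):
    A[u, j] = +1                              # the sign choice per edge is arbitrary
    A[w, j] = -1

triangle = [0, 1, 2]                          # a cycle: these columns sum to zero
tree = [0, 1, 3]                              # a spanning tree: independent columns
print(np.linalg.matrix_rank(A[:, triangle]))  # 2  (= 3 vertices - 1 component)
print(np.linalg.matrix_rank(A[:, tree]))      # 3  (= 4 vertices - 1 component)
</syntaxhighlight>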
This method of representing graphic matroids works regardless of the field over which the incidence matrix is defined. Therefore, graphic matroids form a subset of the regular matroids, matroids that have representations over all possible fields.
The lattice of flats of a graphic matroid can also be realized as the lattice of a hyperplane arrangement, in fact as a subset of the braid arrangement, whose hyperplanes are the diagonals formula_22. Namely, if the vertices of formula_4 are formula_23 we include the hyperplane formula_24 whenever formula_25 is an edge of formula_4.
Matroid connectivity.
A matroid is said to be connected if it is not the direct sum of two smaller matroids; that is, it is connected if and only if there do not exist two disjoint subsets of elements such that the rank function of the matroid equals the sum of the ranks in these separate subsets. Graphic matroids are connected if and only if the underlying graph is both connected and 2-vertex-connected.
Minors and duality.
A matroid is graphic if and only if its minors do not include any of five forbidden minors: the uniform matroid formula_26, the Fano plane or its dual, or the duals of formula_27 and formula_28 defined from the complete graph formula_29 and the complete bipartite graph formula_30. The first three of these are the forbidden minors for the regular matroids, and the duals of formula_27 and formula_28 are regular but not graphic.
If a matroid is graphic, its dual (a "co-graphic matroid") cannot contain the duals of these five forbidden minors. Thus, the dual must also be regular, and cannot contain as minors the two graphic matroids formula_27 and formula_28.
Because of this characterization and Wagner's theorem characterizing the planar graphs as the graphs with no formula_29 or formula_30 graph minor, it follows that a graphic matroid formula_7 is co-graphic if and only if formula_4 is planar; this is Whitney's planarity criterion. If formula_4 is planar, the dual of formula_7 is the graphic matroid of the dual graph of formula_4. While formula_4 may have multiple dual graphs, their graphic matroids are all isomorphic.
Algorithms.
A minimum weight basis of a graphic matroid is a minimum spanning tree (or minimum spanning forest, if the underlying graph is disconnected). Algorithms for computing minimum spanning trees have been intensively studied; it is known how to solve the problem in linear randomized expected time in a comparison model of computation, or in linear time in a model of computation in which the edge weights are small integers and bitwise operations are allowed on their binary representations. The fastest known time bound that has been proven for a deterministic algorithm is slightly superlinear.
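Because a minimum spanning forest is exactly a minimum-weight basis of the graphic matroid, the generic greedy algorithm for matroids specializes to Kruskal's algorithm: scan the edges in order of increasing weight and keep each edge that preserves independence (does not close a cycle). A compact illustrative sketch, with an arbitrary example graph:
<syntaxhighlight lang="python">
# Kruskal's algorithm: greedily build a minimum-weight basis (spanning forest)
# of the graphic matroid, keeping an edge iff it keeps the chosen set a forest.

def minimum_spanning_forest(vertices, weighted_edges):
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    basis = []
    for w, u, v in sorted(weighted_edges):   # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                         # keeping the edge preserves independence
            parent[ru] = rv
            basis.append((w, u, v))
    return basis

edges = [(4, 'a', 'b'), (1, 'b', 'c'), (3, 'a', 'c'), (2, 'c', 'd')]
print(minimum_spanning_forest({'a', 'b', 'c', 'd'}, edges))
# [(1, 'b', 'c'), (2, 'c', 'd'), (3, 'a', 'c')]
</syntaxhighlight>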
Several authors have investigated algorithms for testing whether a given matroid is graphic. For instance, an algorithm of Tutte solves this problem when the input is known to be a binary matroid. Seymour solves this problem for arbitrary matroids given access to the matroid only through an independence oracle, a subroutine that determines whether or not a given set is independent.
Related classes of matroids.
Some classes of matroid have been defined from well-known families of graphs, by phrasing a characterization of these graphs in terms that make sense more generally for matroids. These include the bipartite matroids, in which every circuit is even, and the Eulerian matroids, which can be partitioned into disjoint circuits. A graphic matroid is bipartite if and only if it comes from a bipartite graph and a graphic matroid is Eulerian if and only if it comes from an Eulerian graph. Within the graphic matroids (and more generally within the binary matroids) these two classes are dual: a graphic matroid is bipartite if and only if its dual matroid is Eulerian, and a graphic matroid is Eulerian if and only if its dual matroid is bipartite.
Graphic matroids are one-dimensional rigidity matroids, matroids describing the degrees of freedom of structures of rigid beams that can rotate freely at the vertices where they meet. In one dimension, such a structure has a number of degrees of freedom equal to its number of connected components (the number of vertices minus the matroid rank) and in higher dimensions the number of degrees of freedom of a "d"-dimensional structure with "n" vertices is "dn" minus the matroid rank. In two-dimensional rigidity matroids, the Laman graphs play the role that spanning trees play in graphic matroids, but the structure of rigidity matroids in dimensions greater than two is not well understood.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "x\\in A\\setminus B"
},
{
"math_id": 3,
"text": "B\\cup\\{x\\}"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "M(G)"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "r(X)=n-c"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "\\operatorname{cl}(S)"
},
{
"math_id": 13,
"text": "S"
},
{
"math_id": 14,
"text": "x\\le y"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "K_n"
},
{
"math_id": 18,
"text": "e"
},
{
"math_id": 19,
"text": "+1"
},
{
"math_id": 20,
"text": "-1"
},
{
"math_id": 21,
"text": "0"
},
{
"math_id": 22,
"text": "H_{ij}=\\{(x_1,\\ldots,x_n) \\in \\mathbb{R}^n \\mid x_i = x_j\\}"
},
{
"math_id": 23,
"text": "v_1,\\ldots,v_n,"
},
{
"math_id": 24,
"text": "H_{ij}"
},
{
"math_id": 25,
"text": "e = v_iv_j"
},
{
"math_id": 26,
"text": "U{}^2_4"
},
{
"math_id": 27,
"text": "M(K_5)"
},
{
"math_id": 28,
"text": "M(K_{3,3})"
},
{
"math_id": 29,
"text": "K_5"
},
{
"math_id": 30,
"text": "K_{3,3}"
}
]
| https://en.wikipedia.org/wiki?curid=745714 |
74571954 | Caravelli-Traversa-Di Ventra equation | Equation used in neuromorphic engineering
The Caravelli-Traversa-Di Ventra (CTDV) equation is a closed-form equation for the evolution of networks of memristors. It was derived by Francesco Caravelli (Los Alamos National Laboratory), Fabio L. Traversa (Memcomputing Inc.) and Massimiliano Di Ventra (UC San Diego) to study the exact evolution of complex circuits made of resistances with memory (memristors).
A memristor is a resistive device whose resistance changes as a function of the history of the applied voltage or current. A physical realization of the memristor was introduced in the "Nature" paper by Strukov and collaborators while studying titanium dioxide junctions, with a resistance experimentally observed to change approximately in accordance with the model
formula_0
formula_1
where formula_2 is a parameter describing the evolution of the resistance, formula_3 is the current across the device and formula_4 is an effective parameter which characterizes the response of the device to a current flow. If the device decays over time to a high resistance state, one can also add a term formula_5 to the right-hand side of the evolution for formula_6, where formula_7 is a decay constant. Such resistive switching has, however, been known since the late 1960s. The model above is often called the Williams-Strukov or Strukov model. Although this model is too simplistic to represent real devices, it still serves as a useful toy model, exhibiting a pinched hysteresis loop in the current-voltage diagram. However, because of Kirchhoff's laws, the evolution of networks of these components becomes considerably more complicated, in particular for disordered neuromorphic materials such as nanowires. Such circuits are often called memristive networks. The simplest example of a memristive circuit or network is a memristor crossbar. A memristor crossbar is often used as a way to address single memristors for a variety of applications in artificial intelligence; however, this is one particular example of a memristive network arranged on a two-dimensional grid. Memristive networks also have important applications, for instance, in reservoir computing. A network of memristors can serve as a reservoir for nonlinearly transforming an input signal into a high-dimensional feature space. The memristor-based reservoir concept was introduced by Kulkarni and Teuscher in 2012. While this model was initially employed for tasks like wave pattern classification and associative memory, the readout mechanism utilized a genetic algorithm, which inherently operates non-linearly.
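The pinched hysteresis loop of the single-device model of eqn. (1) can be reproduced by direct numerical integration, as in the sketch below. All parameter values and the sinusoidal drive are arbitrary choices for illustration; they do not correspond to any particular physical device.
<syntaxhighlight lang="python">
# Forward-Euler integration of the single-memristor model of eqn. (1):
#   R(x) = R_off (1 - x) + R_on x,   dx/dt = I / beta
# driven by a sinusoidal voltage.  All parameter values are illustrative.
import numpy as np

R_on, R_off = 1e2, 1.6e4        # ohms (assumed)
beta = 1e-4                     # coulombs; sets how fast x responds (assumed)
V0, f = 1.0, 1.0                # drive amplitude (V) and frequency (Hz)

dt, steps = 1e-4, 40000         # a few drive periods
x = 0.1                         # initial internal state in [0, 1]
voltages, currents = [], []

for n in range(steps):
    t = n * dt
    V = V0 * np.sin(2 * np.pi * f * t)
    R = R_off * (1 - x) + R_on * x
    I = V / R
    x = min(1.0, max(0.0, x + dt * I / beta))   # evolve x and clip it to [0, 1]
    voltages.append(V)
    currents.append(I)

# Plotting currents against voltages (e.g. with matplotlib) traces a loop that
# is "pinched" at the origin: I = 0 whenever V = 0, with history-dependent slope.
</syntaxhighlight>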
A memristive network is a circuit that satisfies the Kirchhoff laws, e.g. the conservation of the currents at the nodes, and in which every circuit element is a memristive component. Kirchhoff's laws can be written in terms of the sum of the currents on node n as
formula_8
formula_9
where the first equation represents the time evolution of the memristive element's internal memory either in current or voltage, and the second equation represents the conservation of currents at the nodes. Since every element is Ohmic, formula_10, which is Ohm's law, where formula_6 denotes the memory parameters. These parameters typically represent the internal memory of the resistive device and are associated with physical properties of the device that change as an effect of current/voltage. These equations quickly become highly nonlinear because the memristive device is typically nonlinear, and moreover Kirchhoff's laws introduce a further layer of complexity. A silver nanowire connectome can be described using graph theory, and has applications ranging from sensors to information storage. Since memristive devices behave as axons in a neuronal network, the theory of memristive networks is the theory of nanoscale electric physical devices whose behavior parallels that of real neuronal circuits.
In neuromorphic engineering, the goal is the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures similar to the ones in the nervous system. A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations. The development of the formalism of memristive networks is used to understand the behavior of memristors for a variety of purposes, including modelling and understanding electronic plasticity in real circuits. Side applications of such a theory include understanding the role of instances in memcomputers and self-organizing logic gates.
In a typical memristive network simulation one has to solve first for Kirchhoff's laws numerically, obtain voltage drops and currents for each device, and then evolve the parameters of the memristive device and/or junction to obtain the resistance or conductance. This means that effectively, as memristive devices change their resistance or conductance, such devices are interacting. Even for the simple memristor model, such a problem leads to nonlinearities strongly dependent on the circuit realizations. The CTDV equation is a model for the evolution of networks of arbitrary circuits composed of devices such as in eqn. (1), with the inclusion of a decay parameter controlling the volatility. It can be considered a generalization of the Strukov et al. model to arbitrary circuits.
For the case of the Strukov et al. model, equations (2) can be written explicitly by integrating analytically Kirchhoff's laws. The evolution of a network of memristive devices can be written in a closed form (Caravelli-Traversa-Di Ventra equation):
formula_11
as a function of the properties of the physical memristive network and the external sources, where formula_12 is the internal memory parameter of each device. The equation is valid in the case of the original Strukov toy model and can be considered a generalization of the single-device model; in the case of ideal memristors, formula_13, although the hypothesis of the existence of an ideal memristor is debatable. In the equation above, formula_7 is the "forgetting" time scale constant, typically associated with memory volatility, while formula_14 is the dimensionless ratio between the resistance gap and the "off" resistance value. formula_15 is the vector of the voltage sources in series with each junction. formula_16 is a projection matrix through which the circuit enters directly, by projecting on the fundamental loops of the graph; this matrix enforces Kirchhoff's laws. Interestingly, the equation is valid for any network topology simply by changing the corresponding matrix formula_16. The constant formula_4 has the dimension of a voltage and is associated with the properties of the memristor; its physical origin is the charge mobility in the conductor. The diagonal matrix and vector, formula_17 and formula_18 respectively, are the dynamical internal values of the memristive devices, with values between 0 and 1. The equation thus requires adding extra constraints on the memory values in order to be reliable, but it can be used, for instance, to predict analytically the presence of instantonic transitions in memristive networks.
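A direct numerical integration of the equation is straightforward once the projector is available. In the sketch below, formula_16 is constructed as the orthogonal projector onto the row space of a small loop (cycle) matrix of a toy two-loop circuit; this construction, the example circuit, and all parameter values are illustrative assumptions rather than a prescription taken from the original papers.
<syntaxhighlight lang="python">
# Forward-Euler integration of the Caravelli-Traversa-Di Ventra equation
#   dx/dt = -alpha * x + (1/beta) * (I - chi * Omega * X)^(-1) * Omega * s
# for a toy circuit.  Omega is taken here as the orthogonal projector onto the
# row space of a small loop matrix; all numbers are illustrative assumptions.
import numpy as np

# Loop (cycle) matrix of a toy circuit with 3 memristors shared by 2 loops.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
Omega = A.T @ np.linalg.solve(A @ A.T, A)        # projector onto the row space of A

alpha, beta, chi = 0.1, 1.0, 0.9                 # decay, response, resistance ratio
s = np.array([1.0, 0.0, 0.5])                    # effective series voltage sources

x = np.full(3, 0.5)                              # initial internal memory values
dt = 1e-2
for _ in range(5000):
    X = np.diag(x)
    rhs = np.linalg.solve(np.eye(3) - chi * Omega @ X, Omega @ s)
    x = x + dt * (-alpha * x + rhs / beta)
    x = np.clip(x, 0.0, 1.0)                     # enforce the physical bounds on x

print(np.round(x, 3))                            # steady-state memory configuration
</syntaxhighlight>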
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R(x)=R_{off}(1-x)+x R_{on}\n "
},
{
"math_id": 1,
"text": " \\frac{dx}{dt}=\\frac{1}{\\beta} I\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (1)\n "
},
{
"math_id": 2,
"text": " x\n "
},
{
"math_id": 3,
"text": " I\n "
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "-\\alpha x"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": " \\frac{dx_i}{dt}=f(x_i; I_i)\n "
},
{
"math_id": 9,
"text": " \\sum_{i\\rightarrow n} I_i=0 \\ \\ \\ \\ \\ (2)\n "
},
{
"math_id": 10,
"text": "V=R(x)I"
},
{
"math_id": 11,
"text": " \\frac{d}{dt} \\vec{x} = -\\alpha \\vec{x}+\\frac{1}{\\beta} (I-\\chi \\Omega X)^{-1} \\Omega \\vec S "
},
{
"math_id": 12,
"text": " x_i "
},
{
"math_id": 13,
"text": "\\alpha=0"
},
{
"math_id": 14,
"text": "\\chi=\\frac{R_\\text{off}-R_\\text{on}}{R_\\text{off}}"
},
{
"math_id": 15,
"text": " \\vec S "
},
{
"math_id": 16,
"text": "\\Omega"
},
{
"math_id": 17,
"text": "X=\\operatorname{diag}(\\vec X)"
},
{
"math_id": 18,
"text": "\\vec x"
}
]
| https://en.wikipedia.org/wiki?curid=74571954 |
7457346 | Logic optimization | Process in digital electronics and integrated circuit design
Logic optimization is a process of finding an equivalent representation of the specified logic circuit under one or more specified constraints. This process is a part of a logic synthesis applied in digital electronics and integrated circuit design.
Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. Usually, the smaller circuit with the same function is cheaper, takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazard of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit.
In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one.
Motivation.
The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the most simple circuit representation of the given design description. While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, the rapidly improving chip densities, and the wide adoption of Hardware description languages for circuit description, formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic).
Methods.
The methods of logic circuit simplifications are equally applicable to Boolean expression minimization.
Classification.
Today, logic optimization is divided into various categories:
Based on circuit representation: two-level logic optimization; multi-level logic optimization.
Based on circuit characteristics: sequential logic optimization; combinational logic optimization.
Based on type of execution: graphical optimization methods; tabular optimization methods; algebraic optimization methods.
Graphical methods.
Graphical methods represent the required logical function by a diagram representing the logic variables and value of the function. By manipulating or inspecting a diagram, much tedious calculation may be eliminated.
Graphical minimization methods for two-level logic include the Karnaugh map and the closely related Veitch chart.
Boolean expression minimization.
The same methods of Boolean expression minimization (simplification) listed below may be applied to the circuit optimization.
For the case when the Boolean function is specified by a circuit (that is, we want to find an equivalent circuit of minimum size possible), the unbounded circuit minimization problem was long-conjectured to be formula_0-complete in time complexity, a result finally proved in 2008, but there are effective heuristics such as Karnaugh maps and the Quine–McCluskey algorithm that facilitate the process.
Boolean function minimizing methods include Karnaugh maps, the Quine–McCluskey algorithm (often combined with Petrick's method for selecting a minimal cover), and heuristic minimizers such as the Espresso algorithm.
Optimal multi-level methods.
Methods that find optimal circuit representations of Boolean functions are often referred to as "exact synthesis" in the literature. Due to the computational complexity, exact synthesis is tractable only for small Boolean functions. Recent approaches map the optimization problem to a Boolean satisfiability problem. This allows finding optimal circuit representations using a SAT solver.
Heuristic methods.
A heuristic method uses established rules that solve a practical useful subset of the much larger possible set of problems. The heuristic method may not produce the theoretically optimum solution, but if useful, will provide most of the optimization desired with a minimum of effort. An example of a computer system that uses heuristic methods for logic optimization is the Espresso heuristic logic minimizer.
Two-level versus multi-level representations.
While a two-level circuit representation of circuits strictly refers to the flattened view of the circuit in terms of SOPs (sum-of-products) — which is more applicable to a PLA implementation of the design — a multi-level representation is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored form etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or functional representation (binary decision diagrams, algebraic decision diagrams) of the circuit. In sum-of-products (SOP) form, AND gates form the smallest unit and are stitched together using ORs, whereas in product-of-sums (POS) form it is opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic.
If we have two functions "F"1 and "F"2:
formula_1
formula_2
The above 2-level representation takes six product terms and 24 transistors in a CMOS representation.
A functionally equivalent representation in multilevel can be:
"P" = "B" + "C".
"F"1 = "AP" + "AD".
"F"2 = "A'P" + "A'E".
While the number of levels here is 3, the total number of product terms and literals is reduced because of the sharing of the term B + C.
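That the factored multi-level form realizes the same functions as the two-level SOP form can be confirmed by exhaustive enumeration, which is feasible here because only five variables are involved. The sketch below is illustrative only.
<syntaxhighlight lang="python">
# Exhaustive check that the multi-level factored forms equal the original
# two-level sum-of-products expressions F1 and F2.
from itertools import product

for A, B, C, D, E in product([0, 1], repeat=5):
    nA = 1 - A                                   # A' (complement of A)
    F1_sop = (A & B) | (A & C) | (A & D)
    F2_sop = (nA & B) | (nA & C) | (nA & E)
    P = B | C                                    # the shared term
    F1_multi = (A & P) | (A & D)
    F2_multi = (nA & P) | (nA & E)
    assert F1_sop == F1_multi and F2_sop == F2_multi
print("two-level and multi-level forms are equivalent")
</syntaxhighlight>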
Similarly, we distinguish between combinational circuits and sequential circuits. Combinational circuits produce their outputs based only on the current inputs. They can be represented by Boolean relations. Some examples are priority encoders, binary decoders, multiplexers, demultiplexers.
Sequential circuits produce their output based on both current and past inputs, depending on a clock signal to distinguish the previous inputs from the current inputs. They can be represented by finite state machines. Some examples are flip-flops and counters.
Example.
While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented.
Consider the circuit used to represent formula_3. It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two AND gates, and an OR gate.
The circuit can be simplified (minimized) by applying laws of Boolean algebra or using intuition. Since the example states that formula_4 is true when formula_5 is false and the other way around, one can conclude that this simply means formula_6. In terms of logic gates, inequality simply means an XOR gate (exclusive or). Therefore, formula_7. Then the two circuits shown below are equivalent, as can be checked using a truth table.
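The truth-table check mentioned above can be carried out mechanically, as in this short illustrative sketch:
<syntaxhighlight lang="python">
# Truth-table verification that (A AND NOT B) OR (NOT A AND B) is exclusive-or.
for A in (False, True):
    for B in (False, True):
        original = (A and not B) or (not A and B)
        simplified = A != B                      # XOR
        print(A, B, original, simplified)
        assert original == simplified
</syntaxhighlight>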
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma_2^P"
},
{
"math_id": 1,
"text": "F_1 = AB + AC + AD,\\,"
},
{
"math_id": 2,
"text": "F_2 = A'B + A'C + A'E.\\,"
},
{
"math_id": 3,
"text": "(A \\wedge \\bar{B}) \\vee (\\bar{A} \\wedge B)"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "A \\neq B"
},
{
"math_id": 7,
"text": "(A \\wedge \\bar{B}) \\vee (\\bar{A} \\wedge B) \\iff A \\neq B"
}
]
| https://en.wikipedia.org/wiki?curid=7457346 |
74575092 | Kohn–Luttinger superconductivity | Superconductivity mechanism based on attractive forces generated by screened Coulomb interaction
Kohn–Luttinger superconductivity is a theoretical mechanism for unconventional superconductivity proposed by Walter Kohn and Joaquin Mazdak Luttinger based on Friedel oscillations. In contrast to BCS theory, in which Cooper pairs are formed due to the electron–phonon interaction, the Kohn–Luttinger mechanism is based on the fact that the screened Coulomb interaction oscillates as formula_0 and can create a Cooper instability for non-zero angular momentum formula_1.
Since the Kohn–Luttinger mechanism does not require any additional interactions beyond the Coulomb interaction, it can lead to superconductivity in any electronic system.
However, the estimated critical temperature, formula_2, for a Kohn–Luttinger superconductor is exponential in formula_3 and is thus extremely small. For example, for metals the critical temperature is given by
formula_4
where formula_5 is the Boltzmann constant and formula_6 is the Fermi energy. However, Kohn and Luttinger conjectured that nonspherical Fermi surfaces and variation of parameters may enhance the effect. Indeed, it has been proposed that the Kohn–Luttinger mechanism is responsible for superconductivity in rhombohedral graphene, which has an annular Fermi surface.
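The severity of the exponential suppression is easy to quantify with a short calculation; the Fermi temperature used below is an assumed, metal-like illustrative value rather than a figure for any specific material.
<syntaxhighlight lang="python">
# Order-of-magnitude estimate of the Kohn-Luttinger critical temperature from
#   k_B T_c / E_F = exp(-(2 l)^4).
# The Fermi temperature E_F / k_B is an assumed, metal-like illustrative value.
import math

T_FERMI_K = 5e4                      # assumed E_F / k_B for a simple metal

for l in (1, 2):
    suppression = math.exp(-(2 * l) ** 4)
    print(f"l = {l}: k_B T_c / E_F = {suppression:.3e}, "
          f"T_c ~ {T_FERMI_K * suppression:.3e} K")
</syntaxhighlight>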
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\cos(2k_F r + \\phi)/r^3"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "T_{\\rm c}"
},
{
"math_id": 3,
"text": "-\\ell^4"
},
{
"math_id": 4,
"text": "\\frac{k_{\\rm B} T_{\\rm c}}{E_{\\rm F}} = \\exp(-(2 \\ell )^4),"
},
{
"math_id": 5,
"text": "k_{\\rm B}"
},
{
"math_id": 6,
"text": "E_{\\rm F}"
}
]
| https://en.wikipedia.org/wiki?curid=74575092 |
74576446 | Kac ring | Toy model in statistical physics.
In statistical mechanics, the Kac ring is a toy model introduced by Mark Kac in 1956 to explain how the second law of thermodynamics emerges from time-symmetric interactions between molecules (see reversibility paradox). Although artificial, the model is notable as a mathematically transparent example of coarse-graining and is used as a didactic tool in non-equilibrium thermodynamics.
Formulation.
The Kac ring consists of N equidistant points in a circle. Some of these points are "marked". The number of marked points is M, where formula_0. Each point represents a site occupied by a ball, which is "black" or "white". After a unit of time, each ball moves to a neighboring point counterclockwise. Whenever a ball leaves a marked site, it switches color from black to white and vice versa. (If, however, the starting point is not marked, the ball completes its move without changing color.)
An imagined observer can only measure coarse-grained (or macroscopic) quantities: the ratio
formula_1
and the overall color
formula_2
where B, W denote the total number of black and white balls respectively. Without knowledge of the detailed (microscopic) configuration, any distribution of M marks is considered equally likely. This assumption of equiprobability is comparable to the "Stosszahlansatz", which leads to the Boltzmann equation.
Detailed evolution.
Let formula_3 denote the color of a ball at point k and time t with a convention
formula_4
The microscopic dynamics can be mathematically formulated as
formula_5
where
formula_6
and formula_7 is taken modulo N. In analogy to molecular motion, the system is time-reversible. Indeed, if the balls moved clockwise (instead of counterclockwise) and changed color upon entering marked points (instead of upon leaving them), the motion would be equivalent, except going backward in time. Moreover, the evolution of formula_3 is periodic, with a period of at most formula_8. (After N steps, each ball has visited all M marked points and changed color by a factor of formula_9.) The periodicity of the Kac ring is a manifestation of the more general Poincaré recurrence.
Coarse-graining.
Assuming that all balls are initially white,
formula_10
where formula_11 is the number of times the ball will leave a marked point during its journey. When marked locations are unknown (and all possibilities equally likely), X becomes a random variable. Considering the limit when N approaches infinity but t, i, and μ remain constant, the random variable X converges to the binomial distribution, i.e.:
formula_12
Hence, the overall color after t steps will be
formula_13
Since formula_14, the overall color will, on average, converge monotonically and exponentially to 50% grey (a state analogous to thermodynamic equilibrium). An identical result is obtained for a ring rotating clockwise. Consequently, the coarse-grained evolution of the Kac ring is irreversible.
It is also possible to show that the variance approaches zero:
formula_15
Therefore, when N is huge (of order 10^23), the observer has to be extremely lucky (or patient) to detect any significant deviation from the ensemble-averaged behavior.
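A simple Monte Carlo simulation makes the coarse-grained result concrete. The following Python sketch (an illustration, not taken from Kac's paper) implements the microscopic update rule, averages the overall color over randomly placed marks, and compares it with the predicted decay (1 − 2μ)^t; the ring size, mark density and number of trials are arbitrary choices.
import random

# Monte Carlo sketch of the Kac ring; balls start white (+1), and a ball
# flips color when it moves from a marked site k-1 to site k.
def simulate(N=1000, mu=0.1, steps=20, trials=100):
    M = int(mu * N)
    avg = [0.0] * (steps + 1)
    for _ in range(trials):
        marked = set(random.sample(range(N), M))
        balls = [1] * N
        for t in range(steps + 1):
            avg[t] += sum(balls) / N
            balls = [balls[k - 1] * (-1 if (k - 1) % N in marked else 1)
                     for k in range(N)]
    return [a / trials for a in avg]

mu = 0.1
deltas = simulate(mu=mu)
for t in (0, 5, 10, 20):
    print(t, round(deltas[t], 3), round((1 - 2 * mu) ** t, 3))  # simulated vs predicted
The simulated ensemble average tracks (1 − 2μ)^t closely, even though every individual ring is strictly periodic and time-reversible.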
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0 < 2M < N"
},
{
"math_id": 1,
"text": "\\mu = \\frac{M}{N} < 0.5 "
},
{
"math_id": 2,
"text": "\\delta = \\frac{W-B}{N},"
},
{
"math_id": 3,
"text": "\\eta_k(t)"
},
{
"math_id": 4,
"text": " \\eta_k = \\begin{cases}\n+1 & \\text{ball is white}\\\\\n-1 & \\text{ball is black}\n\\end{cases}. "
},
{
"math_id": 5,
"text": " \\eta_k(t) = \\epsilon_{k-1} \\eta_{k-1}(t-1), "
},
{
"math_id": 6,
"text": " \\epsilon_k = \\begin{cases}\n+1 & \\text{unmarked site}\\\\\n-1 & \\text{marked site}\n\\end{cases} "
},
{
"math_id": 7,
"text": "k-1"
},
{
"math_id": 8,
"text": "2N"
},
{
"math_id": 9,
"text": "(-1)^M"
},
{
"math_id": 10,
"text": " \\eta_k(t) = \\epsilon_{k-1} \\epsilon_{k-2} \\cdots \\epsilon_{k-t} = (-1)^{X}, "
},
{
"math_id": 11,
"text": "X = X(k,t)"
},
{
"math_id": 12,
"text": " \\lim_{N \\to \\infty} \\text{Pr}(X = i) = \n\\mu^i (1-\\mu)^{t-i}\n\\begin{pmatrix}\nt \\\\\ni\n\\end{pmatrix}\n,"
},
{
"math_id": 13,
"text": "\n\\begin{align}\n\\lim_{N \\to \\infty} \\langle \\delta(t) \\rangle &= \n\\lim_{N \\to \\infty} \\frac{1}{N} \\sum_k \\langle \\eta_k(t) \\rangle\\\\\n &= \\lim_{N \\to \\infty} \\langle \\eta_1(t) \\rangle \\\\\n&= \\sum_{i=0}^t (-1)^i \\mu^i (1-\\mu)^{t-i}\n\\begin{pmatrix}\nt \\\\\ni\n\\end{pmatrix} \\\\\n&= (1-2\\mu)^t \n\\end{align} ."
},
{
"math_id": 14,
"text": " 0<1-2\\mu<1 "
},
{
"math_id": 15,
"text": "\n\\lim_{N \\to \\infty} \\text{Var}(\\delta(t)) = 0 .\n"
}
]
| https://en.wikipedia.org/wiki?curid=74576446 |
745789 | Subadditivity | Property of some mathematical functions
In mathematics, subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions.
Definitions.
A subadditive function is a function formula_0, having a domain "A" and an ordered codomain "B" that are both closed under addition, with the following property:
formula_1
An example is the square root function, having the non-negative real numbers as domain and codomain:
since formula_2 we have:
formula_3
A sequence formula_4 is called subadditive if it satisfies the inequality
formula_5
for all "m" and "n". This is a special case of subadditive function, if a sequence is interpreted as a function on the set of natural numbers.
Note that while a concave sequence is subadditive, the converse is false. For example, randomly assign formula_6 with values in formula_7; then the sequence is subadditive but not concave.
Properties.
Sequences.
A useful result pertaining to subadditive sequences is the following lemma due to Michael Fekete.
<templatestyles src="Math_theorem/styles.css" />
Fekete's Subadditive Lemma — For every subadditive sequence formula_8, the limit formula_9 is equal to the infimum formula_10. (The limit may be formula_11.)
<templatestyles src="Math_proof/styles.css" />Proof
Let formula_12.
By definition, formula_13. So it suffices to show formula_14.
If not, then there exists a subsequence formula_15, and an formula_16, such that formula_17 for all formula_18.
Since formula_12, there exists an formula_19 such that formula_20.
By infinitary pigeonhole principle, there exists a sub-subsequence formula_15, whose indices all belong to the same residue class modulo formula_21, and so they advance by multiples of formula_21. This sequence, continued for long enough, would be forced by subadditivity to dip below the formula_22 slope line, a contradiction.
In more detail, by subadditivity, we have
formula_23
which implies formula_24
The analogue of Fekete's lemma holds for superadditive sequences as well, that is:
formula_25 (The limit then may be positive infinity: consider the sequence formula_26.)
There are extensions of Fekete's lemma that do not require the inequality formula_27 to hold for all "m" and "n", but only for "m" and "n" such that formula_28
<templatestyles src="Math_proof/styles.css" />Proof
Continue the proof as before, until we have just used the infinite pigeonhole principle.
Consider the sequence formula_29. Since formula_30, we have formula_31. Similarly, we have formula_32, etc.
By the assumption, for any formula_33, we can use subadditivity on them if
formula_34
If we were dealing with continuous variables, then we can use subadditivity to go from formula_35 to formula_36, then to formula_37, and so on, which covers the entire interval formula_38.
Though we don't have continuous variables, we can still cover enough integers to complete the proof. Let formula_39 be large enough, such that
formula_40
then let formula_41 be the smallest number in the intersection formula_42. By the assumption on formula_39, it's easy to see (draw a picture) that the intervals formula_43 and formula_44 touch in the middle. Thus, by repeating this process, we cover the entirety of formula_45.
With that, all formula_46 are forced down as in the previous proof.
Moreover, the condition formula_27 may be weakened as follows: formula_47 provided that formula_48 is an increasing function such that the integral formula_49 converges (near the infinity).
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present.
Besides, analogues of Fekete's lemma have been proved for subadditive real maps (with additional assumptions) from finite subsets of an amenable group
and further, of a cancellative left-amenable semigroup.
Functions.
<templatestyles src="Math_theorem/styles.css" />
Theorem: — For every measurable subadditive function formula_50 the limit formula_51 exists and is equal to formula_52 (The limit may be formula_53)
If "f" is a subadditive function, and if 0 is in its domain, then "f"(0) ≥ 0. To see this, take the inequality at the top. formula_54. Hence formula_55
A concave function formula_56 with formula_57 is also subadditive.
To see this, one first observes that formula_58.
Summing this bound for formula_59 and formula_60 then verifies that "f" is subadditive.
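Written out (an expansion of the two steps just described, using only concavity and f(0) ≥ 0), the argument is:
f(x) = f\left(\tfrac{y}{x+y}\cdot 0 + \tfrac{x}{x+y}\cdot(x+y)\right) \ge \tfrac{y}{x+y} f(0) + \tfrac{x}{x+y} f(x+y) \ge \tfrac{x}{x+y} f(x+y),
and similarly f(y) \ge \tfrac{y}{x+y} f(x+y); adding the two inequalities gives f(x) + f(y) \ge f(x+y).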
The negative of a subadditive function is superadditive.
Examples in various domains.
Entropy.
Entropy plays a fundamental role in information theory and statistical physics, as well as in quantum mechanics in a generalized formulation due to von Neumann.
Entropy always appears as a subadditive quantity in all of its formulations, meaning that the entropy of a supersystem, or of a set union of random variables, is always less than or equal to the sum of the entropies of its individual components.
Additionally, entropy in physics satisfies several more strict inequalities such as the Strong Subadditivity of Entropy in classical statistical mechanics and its quantum analog.
Economics.
Subadditivity is an essential property of some particular cost functions. It is, generally, a necessary and sufficient condition for the verification of a natural monopoly. It implies that production from only one firm is socially less expensive (in terms of average costs) than production of a fraction of the original quantity by an equal number of firms.
Economies of scale are represented by subadditive average cost functions.
Except in the case of complementary goods, the price of goods (as a function of quantity) must be subadditive. Otherwise, if the sum of the cost of two items were cheaper than the cost of the bundle of the two of them together, then nobody would ever buy the bundle, effectively causing the price of the bundle to "become" the sum of the prices of the two separate items. This shows that subadditive pricing is not by itself a sufficient condition for a natural monopoly, since the unit of exchange may not be the actual cost of an item. This situation is familiar in the political arena, where some minority asserts that the loss of some particular freedom at some particular level of government means that many governments are better, whereas the majority asserts that there is some other correct unit of cost.
Finance.
Subadditivity is one of the desirable properties of coherent risk measures in risk management. The economic intuition behind risk measure subadditivity is that a portfolio risk exposure should, at worst, simply equal the sum of the risk exposures of the individual positions that compose the portfolio. The lack of subadditivity is one of the main critiques of VaR models which do not rely on the assumption of normality of risk factors. The Gaussian VaR ensures subadditivity: for example, the Gaussian VaR of a portfolio formula_61 of two unitary long positions at the confidence level formula_62 is, assuming that the mean portfolio value variation is zero and the VaR is defined as a negative loss,
formula_63
where formula_64 is the inverse of the normal cumulative distribution function at probability level formula_65, formula_66 are the individual positions returns variances and formula_67 is the linear correlation measure between the two individual positions returns. Since variance is always positive,
formula_68
Thus the Gaussian VaR is subadditive for any value of formula_69 and, in particular, it equals the sum of the individual risk exposures when formula_70 which is the case of no diversification effects on portfolio risk.
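The inequality above can be checked numerically. The following Python sketch (an illustration only) computes the Gaussian VaR of two positions and of their portfolio for several correlation values; the volatilities and the 99% confidence level are assumed example inputs, and the positive quantile is used here so that VaR figures are reported as positive losses.
from math import sqrt
from statistics import NormalDist

p = 0.01
z_p = NormalDist().inv_cdf(1 - p)      # positive quantile, so VaR is reported as a positive loss
sigma_x, sigma_y = 0.02, 0.03          # assumed return standard deviations

for rho in (-1.0, -0.5, 0.0, 0.5, 1.0):
    var_x, var_y = z_p * sigma_x, z_p * sigma_y
    var_portfolio = z_p * sqrt(sigma_x**2 + sigma_y**2 + 2 * rho * sigma_x * sigma_y)
    print(f"rho={rho:+.1f}: VaR(portfolio)={var_portfolio:.4f} <= VaR(x)+VaR(y)={var_x + var_y:.4f}")
For every correlation value the portfolio VaR is at most the sum of the individual VaRs, with equality only at perfect positive correlation.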
Thermodynamics.
Subadditivity occurs in the thermodynamic properties of non-ideal solutions and mixtures like the excess molar volume and heat of mixing or excess enthalpy.
Combinatorics on words.
A factorial language formula_71 is one where if a word is in formula_71, then all factors of that word are also in formula_71. In combinatorics on words, a common problem is to determine the number formula_72 of length-formula_73 words in a factorial language. Clearly formula_74, so formula_75 is subadditive, and hence Fekete's lemma can be used to estimate the growth of formula_72.
For every formula_76, sample two strings of length formula_73 uniformly at random on the alphabet formula_77. The expected length of the longest common subsequence is a "super"-additive function of formula_73, and thus there exists a number formula_78, such that the expected length grows as formula_79. By checking the case with formula_80, we easily have formula_81. The exact value of even formula_82, however, is only known to be between 0.788 and 0.827.
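A rough Monte Carlo estimate of formula_82 can be obtained with a standard dynamic program for the longest common subsequence, as in the Python sketch below (ours); the string length and sample count are arbitrary, so the printed value is only an approximation of the constant, which is known to lie between 0.788 and 0.827.
import random

# Monte Carlo sketch for gamma_2: expected LCS of two random binary strings of length n, over n.
def lcs_length(s, t):
    # classic O(len(s)*len(t)) dynamic program, processed row by row
    prev = [0] * (len(t) + 1)
    for a in s:
        cur = [0]
        for j, b in enumerate(t, start=1):
            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def estimate_gamma(k=2, n=300, samples=20):
    total = 0
    for _ in range(samples):
        s = [random.randrange(k) for _ in range(n)]
        t = [random.randrange(k) for _ in range(n)]
        total += lcs_length(s, t)
    return total / (samples * n)

print(estimate_gamma())   # typically prints a value around 0.79-0.81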
External links.
"This article incorporates material from subadditivity on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "f \\colon A \\to B"
},
{
"math_id": 1,
"text": "\\forall x, y \\in A, f(x+y)\\leq f(x)+f(y)."
},
{
"math_id": 2,
"text": "\\forall x, y \\geq 0"
},
{
"math_id": 3,
"text": "\\sqrt{x+y}\\leq \\sqrt{x}+\\sqrt{y}."
},
{
"math_id": 4,
"text": "\\left \\{ a_n \\right \\}_{n \\geq 1}"
},
{
"math_id": 5,
"text": " a_{n+m}\\leq a_n+a_m"
},
{
"math_id": 6,
"text": "a_1, a_2, ..."
},
{
"math_id": 7,
"text": "[0.5, 1]"
},
{
"math_id": 8,
"text": "{\\left \\{ a_n \\right \\}}_{n=1}^\\infty"
},
{
"math_id": 9,
"text": "\\displaystyle \\lim_{n \\to \\infty} \\frac{a_n}{n}"
},
{
"math_id": 10,
"text": "\\inf \\frac{a_n}{n}"
},
{
"math_id": 11,
"text": "-\\infty"
},
{
"math_id": 12,
"text": "s^* := \\inf_n \\frac{a_n}n"
},
{
"math_id": 13,
"text": "\\liminf_n \\frac{a_n}n \\geq s^*"
},
{
"math_id": 14,
"text": "\\limsup_n \\frac{a_n}n \\leq s^*"
},
{
"math_id": 15,
"text": "(a_{n_k})_k"
},
{
"math_id": 16,
"text": "\\epsilon > 0"
},
{
"math_id": 17,
"text": "\\frac{a_{n_k}}{n_k} > s^* + \\epsilon"
},
{
"math_id": 18,
"text": "k"
},
{
"math_id": 19,
"text": "a_m"
},
{
"math_id": 20,
"text": "\\frac{a_m}m < s^* + \\epsilon/2"
},
{
"math_id": 21,
"text": "m"
},
{
"math_id": 22,
"text": "s^* + \\epsilon"
},
{
"math_id": 23,
"text": "\\begin{aligned} a_{n_2} &\\leq a_{n_1} + a_m (n_2-n_1)/m \\\\ a_{n_3} &\\leq a_{n_2} + a_m (n_3-n_2)/m \\leq a_{n_1} + a_m (n_3-n_1)/m \\\\ \\cdots & \\cdots \\end{aligned}"
},
{
"math_id": 24,
"text": "\\limsup_k a_{n_k}/n_k \\leq a_m/m < s^* + \\epsilon"
},
{
"math_id": 25,
"text": "a_{n+m}\\geq a_n + a_m."
},
{
"math_id": 26,
"text": "a_n = \\log n!"
},
{
"math_id": 27,
"text": "a_{n+m}\\le a_n + a_m"
},
{
"math_id": 28,
"text": "\\frac 1 2 \\le \\frac m n \\le 2."
},
{
"math_id": 29,
"text": "a_m, a_{2m}, a_{3m}, ..."
},
{
"math_id": 30,
"text": "2m/m = 2"
},
{
"math_id": 31,
"text": "a_{2m} \\leq 2a_m"
},
{
"math_id": 32,
"text": "a_{3m} \\leq a_{2m}+a_m \\leq 3a_m"
},
{
"math_id": 33,
"text": "s, t \\in \\N"
},
{
"math_id": 34,
"text": "\\ln(s+t) \\in [\\ln(1.5 s), \\ln (3s)] = \\ln s + [\\ln 1.5, \\ln 3]"
},
{
"math_id": 35,
"text": "a_{n_k}"
},
{
"math_id": 36,
"text": "a_{n_k} + [\\ln 1.5, \\ln 3]"
},
{
"math_id": 37,
"text": "a_{n_k} + \\ln 1.5 + [\\ln 1.5, \\ln 3]"
},
{
"math_id": 38,
"text": "a_{n_k} + [\\ln 1.5, +\\infty)"
},
{
"math_id": 39,
"text": "n_k"
},
{
"math_id": 40,
"text": "\\ln (2) > \\ln(1.5) + \\ln \\left(\\frac{1.5 n_k + m}{1.5 n_k}\\right) "
},
{
"math_id": 41,
"text": "n'"
},
{
"math_id": 42,
"text": "(n_k + m\\Z) \\cap (\\ln n_k + [\\ln(1.5), \\ln (3)])"
},
{
"math_id": 43,
"text": "\\ln n_k + [\\ln(1.5), \\ln (3)]"
},
{
"math_id": 44,
"text": "\\ln n' + [\\ln(1.5), \\ln (3)]"
},
{
"math_id": 45,
"text": "(n_k + m\\Z) \\cap (\\ln n_k + [\\ln(1.5), \\infty])"
},
{
"math_id": 46,
"text": "a_{n_k}, a_{n_{k+1}}, ..."
},
{
"math_id": 47,
"text": "a_{n+m}\\le a_n + a_m + \\phi(n+m)"
},
{
"math_id": 48,
"text": "\\phi"
},
{
"math_id": 49,
"text": "\\int \\phi(t) t^{-2} \\, dt"
},
{
"math_id": 50,
"text": "f : (0,\\infty) \\to \\R,"
},
{
"math_id": 51,
"text": "\\lim_{t\\to\\infty} \\frac{f(t)}{t}"
},
{
"math_id": 52,
"text": "\\inf_{t>0} \\frac{f(t)}{t}."
},
{
"math_id": 53,
"text": "-\\infty."
},
{
"math_id": 54,
"text": "f(x) \\ge f(x+y) - f(y)"
},
{
"math_id": 55,
"text": "f(0) \\ge f(0+y) - f(y) = 0"
},
{
"math_id": 56,
"text": "f: [0,\\infty) \\to \\mathbb{R}"
},
{
"math_id": 57,
"text": "f(0) \\ge 0"
},
{
"math_id": 58,
"text": "f(x) \\ge \\textstyle{\\frac{y}{x+y}} f(0) + \\textstyle{\\frac{x}{x+y}} f(x+y)"
},
{
"math_id": 59,
"text": "f(x)"
},
{
"math_id": 60,
"text": "f(y)"
},
{
"math_id": 61,
"text": " V "
},
{
"math_id": 62,
"text": " 1-p "
},
{
"math_id": 63,
"text": " \\text{VaR}_p \\equiv z_{p}\\sigma_{\\Delta V} = z_{p}\\sqrt{\\sigma_x^2+\\sigma_y^2+2\\rho_{xy}\\sigma_x \\sigma_y} "
},
{
"math_id": 64,
"text": " z_p "
},
{
"math_id": 65,
"text": " p "
},
{
"math_id": 66,
"text": " \\sigma_x^2,\\sigma_y^2 "
},
{
"math_id": 67,
"text": " \\rho_{xy} "
},
{
"math_id": 68,
"text": " \\sqrt{\\sigma_x^2+\\sigma_y^2+2\\rho_{xy}\\sigma_x \\sigma_y} \\leq \\sigma_x + \\sigma_y "
},
{
"math_id": 69,
"text": " \\rho_{xy} \\in [-1,1] "
},
{
"math_id": 70,
"text": " \\rho_{xy}=1 "
},
{
"math_id": 71,
"text": "L"
},
{
"math_id": 72,
"text": "A(n)"
},
{
"math_id": 73,
"text": "n"
},
{
"math_id": 74,
"text": "A(m+n) \\leq A(m)A(n)"
},
{
"math_id": 75,
"text": "\\log A(n)"
},
{
"math_id": 76,
"text": "k \\geq 1"
},
{
"math_id": 77,
"text": "1, 2, ..., k"
},
{
"math_id": 78,
"text": "\\gamma_k \\geq 0"
},
{
"math_id": 79,
"text": "\\sim \\gamma_k n"
},
{
"math_id": 80,
"text": "n=1"
},
{
"math_id": 81,
"text": "\\frac 1k < \\gamma_k \\leq 1"
},
{
"math_id": 82,
"text": "\\gamma_2"
}
]
| https://en.wikipedia.org/wiki?curid=745789 |
74583808 | Pines' demon | Quasiparticle in condensed matter physics
In condensed matter physics, Pines' demon or, simply demon is a collective excitation of electrons which corresponds to electrons in different energy bands moving out of phase with each other. Equivalently, a demon corresponds to counter-propagating currents of electrons from different bands. Named after David Pines, who coined the term in 1956, demons are quantum mechanical excited states of a material belonging to a broader class of exotic collective excitations, such as the magnon, phason, or exciton. Pines' demon was first experimentally observed in 2023 by A. A. Husain et al. within the transition-metal oxide distrontium ruthenate (Sr2RuO4).
History.
Demons were originally theorized in 1956 by David Pines in the context of multiband metals with two energy bands: a heavy electron band with large effective mass formula_0 and a light electron band with effective mass formula_1. In the limit of formula_2, the two bands are "kinematically" decoupled, so electrons in one band are unable to scatter to the other band while conserving momentum and energy. Within this limit, Pines pointed out that the two bands can be thought of as two "distinct" species of charge particles, so that it becomes possible for excitations of the two bands to be either in-phase or out-of-phase with each other. The in-phase excitation of the two bands was not a new type of excitation, it was simply the plasmon, an excitation proposed earlier by David Pines and David Bohm in 1952 which explained peaks observed in early electron energy-loss spectra of solids. The "out-of-phase" excitation was termed the "demon" by Pines after James Clerk Maxwell, since he thought Maxwell "lived too early to have a particle or excitation named in his honor." Pines explained his terminology by making the term a half backronym because particles commonly have suffix "-on" and the excitation involved distinct electron motion, resulting in D.E.M.on, or simply demon for short.
The demon was historically referred to as an acoustic plasmon, due to its gapless nature which is also shared with acoustic phonons. However, with the rise of two-dimensional materials (such as graphene) and surface plasmons, the term acoustic plasmon has taken on a very different meaning as the ordinary plasmon in a low-dimensional system. Such acoustic plasmons are distinct from the demon because they do not consist of out-of-phase currents from different bands, do not exist in "bulk" materials, and do couple to light, unlike the demon. A more detailed comparison of plasmons and demons is shown in the table below.
The demon excitation, unlike the plasmon, was only discovered many decades later in 2023 by A. A. Husain et al. in the unconventional superconducting material Sr2RuO4 using a momentum-resolved variant of high-resolution electron energy-loss spectroscopy.
Relationship with the plasmon.
The plasmon is a quantized vibration of the charge density in a material where all electron bands move "in-phase". The plasmon is also "massive" (i.e., has an energy gap) in "bulk" materials due to the energy cost needed to overcome the long-ranged Coulomb interaction, with the energy cost being the plasma frequency formula_3. Plasmons exist in all conducting materials and play a dominant role in shaping the dielectric function of a metal at optical frequencies. Historically, plasmons were observed as early as 1941 by G. Ruthemann. The behavior of plasmons has widespread implications, as they serve as a tool for biological microscopy (surface plasmon resonance microscopy), form the basis of plasmon-based electronics (plasmonics), and underlie the original formulation of the transmission-line with a junction plasmon (transmon) device now used in superconducting qubits for quantum computing.
The demon excitation, on the other hand, exhibits a number of key distinctions from the plasmon (and the acoustic plasmon), as summarized in the table below.
Theoretical significance.
Early studies of the demon in the context of superconductivity showed, under the two-band picture presented by Pines, that superconducting pairing of the light electron band can be enhanced through the existence of demons, while the pairing of the heavy electrons would be more or less unaffected. The implication is that demons would allow for "orbital-selective" effects on superconducting pairing. However, for the simple case of spherically symmetric metals with two bands, natural realizations of demon-enhanced superconductivity seemed unlikely, as the heavy (d-)electrons play the dominant role in the superconductivity of most transition metals considered at the time. More recent studies on high-temperature superconducting metal hydrides, where light electron bands participate in superconductivity, suggest that demons may be playing an active role in such systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_d"
},
{
"math_id": 1,
"text": "m_s"
},
{
"math_id": 2,
"text": "m_d \\gg m_s"
},
{
"math_id": 3,
"text": "\\omega_p"
}
]
| https://en.wikipedia.org/wiki?curid=74583808 |
74584293 | Marco Antei | Italian mathematician and LGBT activist
Marco Antei (born 1978, Sanremo) is an Italian mathematician and LGBT+ activist.
Career.
Antei was awarded his PhD in mathematics in 2008 from the University of Lille under the supervision of Michel Emsalem. He later worked at the Max Planck Institute for Mathematics in Bonn, KAIST in Daejeon, the Ben-Gurion University of the Negev in Beersheba, and the Côte d'Azur University in Nice before joining the University of Costa Rica. He has been a lecturer at the Lucerne University of Applied Sciences and Arts since 2022. Antei works in the field of geometry. His research interests focus on algebraic and arithmetic geometry and their applications. He particularly studies the fundamental group scheme, torsors and their connections.
The fundamental group scheme.
The existence of the fundamental group scheme was conjectured by Alexander Grothendieck, while the first proof of its existence is due, for schemes defined over fields, to Madhav Nori.
Antei, Michel Emsalem and Carlo Gasbarri proved the existence of the fundamental group scheme formula_0 for schemes defined over Dedekind schemes and they also defined, and proved the existence of, the "quasi-finite fundamental group scheme" formula_1.
Award.
In 2020 Antei received the "Innovating professor" award at the University of Costa Rica for his success in moving from in-person to virtual classes at the beginning of the COVID-19 pandemic. The awarded professors were selected by the students of the UCR.
Activism.
Antei stands out for his commitment to the fight for the rights of the LGBT+ community. In particular, in 2015 he created the first LGBT+ association in the province of Imperia, a local affiliate of the national association Arcigay, and served as its president until 2018 and again from 2023. During his mandate he also organized, in November 2016, the first Transgender Day of Remembrance in the city of Sanremo. In 2020, he was featured in a remote meeting with European Commission President Ursula von der Leyen where the rights of LGBT+ people within the European Union were discussed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi_1(X,x)"
},
{
"math_id": 1,
"text": "\\pi^{\\text{qf}}(X,x)"
}
]
| https://en.wikipedia.org/wiki?curid=74584293 |
7459891 | Underground nuclear weapons testing | Test detonation of nuclear weapons underground
Underground nuclear testing is the test detonation of nuclear weapons that is performed underground. When the device being tested is buried at sufficient depth, the nuclear explosion may be contained, with no release of radioactive materials to the atmosphere.
The extreme heat and pressure of an underground nuclear explosion causes changes in the surrounding rock. The rock closest to the location of the test is vaporised, forming a cavity. Farther away, there are zones of crushed, cracked, and irreversibly strained rock. Following the explosion, the rock above the cavity may collapse, forming a rubble chimney. If this chimney reaches the surface, a bowl-shaped subsidence crater may form.
The first underground test took place in 1951. Further tests soon led scientists to conclude that, even notwithstanding environmental and diplomatic considerations, underground testing was of far greater scientific value than all other forms of testing. This understanding strongly influenced the governments of the first three nuclear powers to sign the Limited Test Ban Treaty in 1963, which banned all nuclear tests except for those performed underground. From then until the signing of the Comprehensive Nuclear-Test-Ban Treaty in 1996, most nuclear tests were performed underground, which prevented additional nuclear fallout from entering the atmosphere.
Background.
Public concern about fallout from nuclear testing grew in the early 1950s. Fallout was discovered after the "Trinity" test, the first ever atomic bomb test, in 1945. Photographic film manufacturers later reported 'fogged' films; this was traced to packaging materials sourced from Indiana crops, contaminated by "Trinity" and later tests at the Nevada Test Site, over 1,000 miles (≈1600 kilometres) away. Intense fallout from the 1953 "Simon" test was documented as far as Albany, New York.
The fallout from the March 1954 "Bravo" test in the Pacific Ocean had "scientific, political and social implications that have continued for more than 40 years". The multi-megaton test caused fallout to occur on the islands of the Rongerik and Rongelap atolls, and a Japanese fishing boat known as the "Daigo Fukuryū Maru" (Lucky Dragon). Prior to this test, there was "insufficient" appreciation of the dangers of fallout.
The test became an international incident. In a Public Broadcasting Service (PBS) interview, the historian Martha Smith argued: "In Japan, it becomes a huge issue in terms of not just the government and its protest against the United States, but all different groups and all different peoples in Japan start to protest. It becomes a big issue in the media. There are all kinds of letters and protests that come from, not surprisingly, Japanese fishermen, the fishermen's wives; there are student groups, all different types of people; that protest against the Americans' use of the Pacific for nuclear testing. They're very concerned about, first of all, why the United States even has the right to be carrying out those kinds of tests in the Pacific. They're also concerned about the health and environmental impact." The Prime Minister of India "voiced the heightened international concern" when he called for the elimination of all nuclear testing worldwide.
Knowledge about fallout and its effects grew, and with it concern about the global environment and long-term genetic damage. Talks between the United States, the United Kingdom, Canada, France, and the Soviet Union began in May 1955 on the subject of an international agreement to end nuclear tests. On August 5, 1963, representatives of the United States, the Soviet Union, and the United Kingdom signed the Limited Test Ban Treaty, forbidding testing of nuclear weapons in the atmosphere, in space, and underwater. Agreement was facilitated by the decision to allow underground testing, eliminating the need for on-site inspections that concerned the Soviets. Underground testing was allowed, provided that it does not cause "radioactive debris to be present outside the territorial limits of the State under whose jurisdiction or control such explosion is conducted".
Early history of underground testing.
Following analysis of underwater detonations that were part of "Operation Crossroads" in 1946, inquiries were made regarding the possible military value of an underground explosion. The US Joint Chiefs of Staff thus obtained the agreement of the United States Atomic Energy Commission (AEC) to perform experiments on both surface and sub-surface detonations. The Alaskan island of Amchitka was initially selected for these tests in 1950, but the site was later deemed unsuitable and the tests were moved to the Nevada Test Site.
The first underground nuclear test was conducted on 29 November 1951. This was the 1.2 kiloton "Buster-Jangle Uncle", which detonated 5.2 m (17 ft) beneath ground level. The test was designed as a scaled-down investigation of the effects of a 23-kiloton ground-penetrating gun-type fission weapon that was then being considered for use as a cratering and bunker-buster weapon. The explosion resulted in a cloud that rose to 3,500 m (11,500 ft), and deposited fallout to the north and north-northeast. The resulting crater was 79 m (260 ft) wide and 16 m (53 ft) deep.
The next underground test was "Teapot Ess", on 23 March 1955. The one-kiloton explosion was an operational test of an 'Atomic Demolition Munition' (ADM). It was detonated 20.4 m (67 ft) underground, in a shaft lined with corrugated steel, which was then back-filled with sandbags and dirt. Because the ADM was buried underground, the explosion blew tons of earth upwards, creating a crater 91 m (300 ft) wide and 39 m (128 ft) deep. The resulting mushroom cloud rose to a considerable height, and the subsequent radioactive fallout drifted in an easterly direction, travelling a substantial distance from ground zero.
On 26 July 1957, "Plumbbob Pascal-A" was detonated at the bottom of a shaft. According to one description, it "ushered in the era of underground testing with a magnificent pyrotechnic roman candle!" As compared with an above-ground test, the radioactive debris released to the atmosphere was reduced by a factor of ten. Theoretical work began on possible containment schemes.
"Plumbbob Rainier" was detonated at underground on 19 September 1957. The 1.7 kt explosion was the first to be entirely contained underground, producing no fallout. The test took place in a 1,600 – 2,000 ft (488 – 610 m) horizontal tunnel in the shape of a hook. The hook "was designed so explosive force will seal off the non-curved portion of tunnel nearest the detonation before gases and fission fragments can be vented around the curve of the tunnel's hook". This test would become the prototype for larger, more powerful tests. Rainier was announced in advance, so that seismic stations could attempt to record a signal.
Analysis of samples collected after the test enabled scientists to develop an understanding of underground explosions that "persists essentially unaltered today". The information would later provide a basis for subsequent decisions to agree to the Limited Test Ban Treaty.
"Cannikin", the last test at the facility on Amchitka, was detonated on 6 November 1971. At approximately five megatons, it was the largest underground test in US history.
Effects.
The effects of an underground nuclear test may vary according to factors including the depth and yield of the explosion, as well as the nature of the surrounding rock. If the test is conducted at sufficient depth, the test is said to be "contained", with no venting of gases or other contaminants to the environment. In contrast, if the device is buried at insufficient depth ("underburied"), then rock may be expelled by the explosion, forming a subsidence crater surrounded by ejecta, and releasing high-pressure gases to the atmosphere (the resulting crater is usually conical in profile, circular, and may range between tens to hundreds of metres in diameter and depth).
One figure used in determining how deeply the device should be buried is the "scaled depth of burial" (or burst), abbreviated SDOB. This figure is calculated as the burial depth in metres divided by the cube root of the yield in kilotons. It is estimated that, in order to ensure containment, this figure should be greater than 100.
The energy of the nuclear explosion is released in one microsecond. In the following few microseconds, the test hardware and surrounding rock are vaporised, with temperatures of several million degrees and pressures of several million atmospheres. Within milliseconds, a bubble of high-pressure gas and steam is formed. The heat and expanding shock wave cause the surrounding rock to vaporise, or be melted further away, creating a "melt cavity". The shock-induced motion and high internal pressure cause this cavity to expand outwards, which continues over several tenths of a second until the pressure has fallen sufficiently, to a level roughly comparable with the weight of the rock above, and can no longer grow. Although not observed in every explosion, four distinct zones (including the melt cavity) have been described in the surrounding rock. The "crushed zone", about two times the radius of the cavity, consists of rock that has lost all of its former integrity. The "cracked zone", about three times the cavity radius, consists of rock with radial and concentric fissures. Finally, the "zone of irreversible strain" consists of rock deformed by the pressure. The following layer undergoes only an elastic deformation; the strain and subsequent release then forms a seismic wave. A few seconds later the molten rock starts collecting on the bottom of the cavity and the cavity content begins cooling. The rebound after the shock wave causes compressive forces to build up around the cavity, called a stress containment cage, sealing the cracks.
Several minutes to days later, once the heat has dissipated enough, the steam has condensed, and the pressure in the cavity has fallen below the level needed to support the overburden, the rock above the void falls into the cavity. Depending on various factors, including the yield and characteristics of the burial, this collapse may extend to the surface. If it does, a subsidence crater is created. Such a crater is usually bowl-shaped, and ranges in size from a few tens of metres to over a kilometre in diameter. At the Nevada Test Site, 95 percent of tests conducted at a scaled depth of burial (SDOB) of less than 150 caused surface collapse, compared with about half of tests conducted at a SDOB of less than 180. The radius "r" (in feet) of the cavity is proportional to the cube root of the yield "y" (in kilotons), "r" = 55 * formula_0; by this formula, an 8 kiloton explosion will create a cavity with a radius of about 110 feet (34 m).
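A short illustrative calculation (not from the cited sources) of the two scaling rules just quoted - the scaled depth of burial and the cavity-radius rule - is given below in Python; the depth/yield pairs are arbitrary examples, not data from actual tests.
# Illustration of the two scaling rules quoted above.
def sdob(depth_m, yield_kt):
    # scaled depth of burial: burial depth in metres / cube root of yield in kilotons
    return depth_m / yield_kt ** (1 / 3)

def cavity_radius_ft(yield_kt):
    # empirical cavity-radius rule r = 55 * y**(1/3), r in feet, y in kilotons
    return 55 * yield_kt ** (1 / 3)

for depth_m, yield_kt in ((200, 1), (400, 8), (600, 150)):
    print(f"{yield_kt:>4} kt at {depth_m} m: SDOB = {sdob(depth_m, yield_kt):6.1f}, "
          f"cavity radius = {cavity_radius_ft(yield_kt):6.1f} ft")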
Other surface features may include disturbed ground, pressure ridges, faults, water movement (including changes to the water table level), rockfalls, and ground slump. Most of the gas in the cavity is composed of steam; its volume decreases dramatically as the temperature falls and the steam condenses. There are however other gases, mostly carbon dioxide and hydrogen, which do not condense and remain gaseous. The carbon dioxide is produced by thermal decomposition of carbonates, while hydrogen is created by the reaction of iron and other metals from the nuclear device and surrounding equipment. The amount of carbonates and water in the soil and the available iron have to be considered in evaluating the test site containment; water-saturated clay soils may cause structural collapse and venting. Hard basement rock may reflect shock waves of the explosion, also possibly causing structural weakening and venting. The noncondensible gases may stay absorbed in the pores in the soil. A large amount of such gases can, however, maintain enough pressure to drive the fission products to the surface.
Escape of radioactivity from the cavity is known as containment failure. Massive, prompt, uncontrolled releases of fission products, driven by the pressure of steam or gas, are known as venting; an example of such failure is the "Baneberry" test. Slow, low-pressure uncontrolled releases of radioactivity are known as seeps; these have little to no energy, are not visible and have to be detected by instruments. Late-time seeps are releases of noncondensable gases days or weeks after the blast, by diffusion through pores and crack, probably assisted by a decrease of atmospheric pressure (so called "atmospheric pumping"). When the test tunnel has to be accessed, controlled tunnel purging is performed; the gases are filtered, diluted by air and released to atmosphere when the winds will disperse them over sparsely populated areas. Small activity leaks resulting from operational aspects of tests are called operational releases; they may occur e.g. during drilling into the explosion location during core sampling, or during the sampling of explosion gases. The radionuclide composition differs by the type of releases; large prompt venting releases significant fraction (up to 10%) of fission products, while late-time seeps contain only the most volatile gases. Soil absorbs the reactive chemical compounds, so the only nuclides filtered through soil into the atmosphere are the noble gases, primarily krypton-85 and xenon-133.
The released nuclides can undergo bio-accumulation. Radioactive isotopes like iodine-131, strontium-90 and caesium-137 are concentrated in the milk of grazing cows; cow milk is therefore a convenient, sensitive fallout indicator. Soft tissues of animals can be analyzed for gamma emitters, bones and liver for strontium and plutonium, and blood, urine and soft tissues are analyzed for tritium.
Although there were early concerns about earthquakes arising as a result of underground tests, there is no evidence that this has occurred. However, fault movements and ground fractures have been reported, and explosions often precede a series of aftershocks, thought to be a result of cavity collapse and chimney formation. In a few cases, seismic energy released by fault movements has exceeded that of the explosion itself.
International treaties.
Signed in Moscow on August 5, 1963, by representatives of the United States, the Soviet Union, and the United Kingdom, the Limited Test Ban Treaty agreed to ban nuclear testing in the atmosphere, in space, and underwater. Due to the Soviet government's concern about the need for on-site inspections, underground tests were excluded from the ban. 108 countries would eventually sign the treaty, with the significant exception of China.
In 1974, the United States and the Soviet Union signed the Threshold Test Ban Treaty (TTBT) which banned underground tests with yields greater than 150 kilotons. By the 1990s, technologies to monitor and detect underground tests had matured to the point that tests of one kiloton or over could be detected with high probability, and in 1996 negotiations began under the auspices of the United Nations to develop a comprehensive test ban. The resulting Comprehensive Nuclear-Test-Ban Treaty was signed in 1996 by the United States, Russia, United Kingdom, France, and China. However, following the United States Senate decision not to ratify the treaty in 1999, it has yet to be ratified by 8 of the required 44 'Annex 2' states and so has not entered into force as United Nations law.
Monitoring.
In the late 1940s, the United States began to develop the capability to detect atmospheric testing using air sampling; this system was able to detect the first Soviet test in 1949. Over the next decade, this system was improved, and a network of seismic monitoring stations was established to detect underground tests. Development of the Threshold Test Ban Treaty in the mid-1970s led to an improved understanding of the relationship between test yield and resulting seismic magnitude.
When negotiations began in the mid-1990s to develop a comprehensive test ban, the international community was reluctant to rely upon the detection capabilities of individual nuclear weapons states (especially the United States), and instead wanted an international detection system. The resulting International Monitoring System (IMS) consists of a network of 321 monitoring stations, and 16 radionuclide laboratories. Fifty "primary" seismic stations send data continuously to the International Data Center, along with 120 "auxiliary" stations which send data on request. The resulting data is used to locate the epicentre, and distinguish between the seismic signatures of an underground nuclear explosion and an earthquake. Additionally, eighty radionuclide stations detect radioactive particles vented by underground explosions. Certain radionuclides constitute clear evidence of nuclear tests; the presence of noble gases can indicate whether an underground explosion has taken place. Finally, eleven hydroacoustic stations and sixty infrasound stations monitor underwater and atmospheric tests.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt[3]{y}"
}
]
| https://en.wikipedia.org/wiki?curid=7459891 |
74600162 | Irene Gregory | American aerospace engineer
Irene Michelle Gregory is an American aerospace engineer whose research involves control theory and its applications in the control of aircraft. In works with Naira Hovakimyan and others, she has pioneered the use of formula_0 adaptive control techniques in this application, which combine the protection against uncertain data or modeling errors of robust control with the fast estimation provided by adaptive control. She works as senior technologist for advanced control theory and applications in the NASA Engineering & Safety Center at the Langley Research Center.
Education and career.
Gregory studied aeronautics and astronautics at the Massachusetts Institute of Technology, earning bachelor's and master's degrees there, and has been working at the Langley Research Center since at least 1991.
She completed a Ph.D. in Control and Dynamic Systems at the California Institute of Technology, in 2004. Her doctoral dissertation, "Design and Stability Analysis of an Integrated Controller for Highly Flexible Advanced Aircraft Utilizing the Novel Nonlinear Dynamic Inversion", was supervised by John Doyle.
Recognition.
Gregory was named as a Fellow of the American Institute of Aeronautics and Astronautics in 2021.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{L}_1"
}
]
| https://en.wikipedia.org/wiki?curid=74600162 |
7460579 | Weighted fair queueing | Network scheduling algorithm
Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy, and a natural extension of fair queuing (FQ). Whereas FQ shares the link's capacity in equal subparts, WFQ allows schedulers to specify, for each flow, which fraction of the capacity will be given.
Weighted fair queuing is also known as packet-by-packet GPS (PGPS or P-GPS) since it approximates generalized processor sharing "to within one packet transmission time, regardless of the arrival patterns."
Parametrization and fairness.
Like other GPS-like scheduling algorithms, the choice of the weights is left to the network administrator. There is no unique definition of what is "fair" (see for further discussion).
By regulating the WFQ weights dynamically, WFQ can be utilized for controlling the quality of service, for example, to achieve a guaranteed data rate.
Proportionally fair behavior can be achieved by setting the weights to formula_0, where formula_1 is the cost per data bit of data flow formula_2. For example, in CDMA spread spectrum cellular networks, the cost may be the required energy (the interference level), and in dynamic channel allocation systems, the cost may be the number of nearby base station sites that can not use the same frequency channel, in view to avoid co-channel interference.
Algorithm.
In WFQ, a scheduler handling N flows is configured with one weight formula_3 for each flow. Then, flow number formula_2 will achieve an average data rate of formula_4, where formula_5 is the link rate. A WFQ scheduler where all weights are equal is an FQ scheduler.
As in all fair-queuing schedulers, each flow is protected from the others, and it can be proved that if a data flow is leaky-bucket constrained, an end-to-end delay bound can be guaranteed.
The WFQ algorithm is very similar to that of FQ. For each packet, a virtual theoretical departure date is computed, defined as the departure date if the scheduler were a perfect GPS scheduler. Then, each time the output link is idle, the packet with the smallest date is selected for emission.
The pseudo code can be obtained simply from that of FQ by replacing the computation of the virtual departure time by
packet.virFinish = virStart + packet.size / Ri
with formula_6.
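As an illustration only (not a reference implementation), the Python sketch below emulates this rule: each flow i is given the rate formula_6 derived from its weight, a packet's virtual finish time is computed as virStart + size / R_i, and the packet with the smallest virtual finish time is dequeued first. The flow identifiers, weights and packet sizes are arbitrary, and a single external "now" value stands in for the GPS virtual time that a full implementation would track explicitly.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    vir_finish: float
    flow: int = field(compare=False)
    size: int = field(compare=False)

class WFQ:
    def __init__(self, weights, link_rate):
        total = sum(weights.values())
        self.rate = {f: w / total * link_rate for f, w in weights.items()}   # R_i per flow
        self.last_finish = {f: 0.0 for f in weights}                         # per-flow virtual time
        self.queue = []

    def enqueue(self, flow, size, now):
        vir_start = max(now, self.last_finish[flow])
        vir_finish = vir_start + size / self.rate[flow]
        self.last_finish[flow] = vir_finish
        heapq.heappush(self.queue, Packet(vir_finish, flow, size))

    def dequeue(self):
        return heapq.heappop(self.queue) if self.queue else None

wfq = WFQ(weights={1: 3, 2: 1}, link_rate=1000.0)   # flow 1 is entitled to 3/4 of the link
for flow, size in [(1, 500), (2, 500), (1, 500), (2, 500)]:
    wfq.enqueue(flow, size, now=0.0)
while (p := wfq.dequeue()) is not None:
    print(f"send a packet of flow {p.flow}, virtual finish time {p.vir_finish:.2f}")
Running the example shows both packets of the heavier flow being served before the second packet of the lighter flow, reflecting the weighted share of the link.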
WFQ as a GPS approximation.
WFQ, under the name PGPS, has been designed as "an excellent approximation to GPS", and it has been proved that it approximates GPS "to within one packet transmission time, regardless of the arrival patterns."
Since the implementation of WFQ is similar to that of fair queuing, it has the same "O(log(n))" complexity, where "n" is the number of flows. This complexity comes from the need to select the queue with the smallest virtual finish time each time a packet is sent.
After WFQ, several other implementations of GPS have been defined.
History.
The introduction of parameters to share the bandwidth in an arbitrary way is mentioned at the end of as a possible extension to FQ. The term "weighted" first appears in.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "w_i=1/c_i"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "w_i"
},
{
"math_id": 4,
"text": "\\frac{w_i}{(w_1+w_2+...+w_N)}R"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "R_i = \\frac{w_i}{(w_1+w_2+...+w_N)}R"
}
]
| https://en.wikipedia.org/wiki?curid=7460579 |
74609356 | Force control | Control of the force with which a machine or robot acts on an object or its environment
Force control is the control of the force with which a machine or the manipulator of a robot acts on an object or its environment. By controlling the contact force, damage to the machine as well as to the objects to be processed and injuries when handling people can be prevented. In manufacturing tasks, it can compensate for errors and reduce wear by maintaining a uniform contact force. Force control achieves more consistent results than position control, which is also used in machine control. Force control can be used as an alternative to the usual motion control, but is usually used in a complementary way, in the form of hybrid control concepts. The acting force for control is usually measured via force transducers or estimated via the motor current.
Force control has been the subject of research for almost three decades and is increasingly opening up further areas of application thanks to advances in sensor and actuator technology and new control concepts. Force control is particularly suitable for contact tasks that serve to mechanically process workpieces, but it is also used in telemedicine, service robotics and the scanning of surfaces.
For force measurement, force sensors exist that can measure forces and torques in all three spatial directions. Alternatively, the forces can be estimated without sensors, e.g. on the basis of the motor currents. The control concepts used are indirect force control, in which the robot is modeled as a mechanical resistance (impedance), and direct force control in parallel or hybrid schemes. Adaptive approaches, fuzzy controllers and machine learning for force control are currently the subject of research.
General.
Controlling the contact force between a manipulator and its environment is an increasingly important task in mechanical manufacturing as well as in industrial and service robotics. One motivation for the use of force control is safety for man and machine. For various reasons, movements of the robot or machine parts may be blocked by obstacles while the program is running. In service robotics these can be moving objects or people; in industrial robotics, problems can occur with cooperating robots, changing work environments or an inaccurate environmental model. If the trajectory is misaligned in classical motion control and it is thus not possible to reach the programmed robot pose(s), the motion control will increase the manipulated variable - usually the motor current - in order to correct the position error. The increase of the manipulated variable can lead to excessive contact forces, with the risk of damage to the workpiece, tool or machine, or of injury to people.
A force control system can prevent this by regulating the maximum force of the machine in these cases, thus avoiding damage or making collisions detectable at an early stage.
In mechanical manufacturing tasks, unevenness of the workpiece often leads to problems with motion control. As can be seen in the adjacent figure, surface unevenness causes the tool to penetrate too far into the surface during position control (red) formula_0 or lose contact with the workpiece during position control (red) formula_1. This results, for example, in an alternating force effect on the workpiece and tool during grinding and polishing. Force control (green) is useful here, as it ensures uniform material removal through constant contact with the workpiece.
Application.
In force control, a basic distinction can be made between applications with pronounced contact and applications with potential contact. We speak of pronounced contact when the contact of the machine with the environment or the workpiece is a central component of the task and is explicitly controlled. This includes, above all, tasks of mechanical deformation and surface machining. In tasks with potential contact, the process function variable is the positioning of the machine or its parts. Larger contact forces between machine and environment occur due to a dynamic environment or an inaccurate environment model. In this case, the machine should yield to the environment and avoid large contact forces.
The main applications of force control today are mechanical manufacturing operations. This means in particular manufacturing tasks such as grinding, polishing and deburring as well as force-controlled processes such as controlled joining, bending and pressing of bolts into prefabricated bores. Another common use of force control is scanning unknown surfaces. Here, force control is used to set a constant contact pressure in the normal direction of the surface and the scanning head is moved in the surface direction via position control. The surface can then be described in Cartesian coordinates via direct kinematics.
Other applications of force control with potential contact can be found in medical technology and cooperating robots. Robots used in telemedicine, i.e. robot-assisted medical operations, can avoid injuries more effectively via force control. In addition, direct feedback of the measured contact forces to the operator by means of a force feedback control device is of great interest here. Possible applications for this extend to internet-based teleoperations.
In principle, force control is also useful wherever machines and robots cooperate with each other or with humans, as well as in environments that are not exactly described, or that are dynamic and cannot be described exactly. Here, force control helps to deal with obstacles and deviations in the environmental model and to avoid damage.
History.
The first important work on force control was published in 1980 by John Kenneth Salisbury at Stanford University. In it, he describes a method for active stiffness control, a simple form of impedance control. The method does not yet allow a combination with motion control; instead, force control is performed in all spatial directions. The position of the surface must therefore be known. Because of the low performance of robot controllers at that time, force control could only be performed on mainframe computers. Thus, a controller cycle of ≈100 ms was achieved.
In 1981, Raibert and Craig presented a paper on hybrid force/position control which is still important today. In this paper, they describe a method in which a matrix (separation matrix) is used to explicitly specify for all spatial directions whether motion or force control is to be used. Raibert and Craig merely sketch the controller concepts and assume them to be feasible.
In 1989, Koivo presented an extended exposition of the concepts of Raibert and Craig. Precise knowledge of the surface position is still necessary here, which still does not allow for the typical tasks of force control today, such as scanning surfaces.
Force control has been the subject of intense research over the past two decades and has made great strides with the advancement of sensor technology and control algorithms. For some years now, the major automation technology manufacturers have been offering software and hardware packages for their controllers to allow force control. Modern machine controllers are capable of force control in one spatial direction in real time computing with a cycle time of less than 10 ms.
Force measurement.
To close the force control loop in the sense of a closed-loop control, the instantaneous value of the contact force must be known. The contact force can either be measured directly or estimated.
Direct force measurement.
The trivial approach to force control is the direct measurement of the occurring contact forces via force/torque sensors at the end effector of the machine or at the wrist of the industrial robot. Force/torque sensors measure the occurring forces by measuring the deformation at the sensor. The most common way to measure deformation is by means of strain gauges.
In addition to the widely used strain gauges made of variable electrical resistances, there are also other versions that use piezoelectric, optical or capacitive principles for measurement. In practice, however, they are only used for special applications. Capacitive strain gauges, for example, can also be used in the high-temperature range above 1000 °C.
Strain gauges are designed to have as linear a relationship as possible between strain and electrical resistance within the working range. In addition, several possibilities exist to reduce measurement errors and interference. To exclude temperature influences and increase measurement reliability, two strain gauges can be arranged in a complementary manner.
Modern force/torque sensors measure both forces and torques in all three spatial directions and are available with almost any value range. The accuracy is usually in the per mil range of the maximum measured value. The sampling rates of the sensors are in the range of about 1 kHz. An extension of the 6-axis force/torque sensors are 12- and 18-axis sensors which, in addition to the six force or torque components, are also capable of measuring six velocity and acceleration components each.
Six-axis force/torque sensor.
In modern applications, so-called six-axis force/torque sensors are frequently used. These are mounted between the robot hand and the end effector and can record both forces and torques in all three spatial directions. For this purpose, they are equipped with six or more strain gauges (possibly strain measurement bridges) that record deformations in the micrometer range. These deformations are converted into three force and torque components each via a calibration matrix.
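As an illustration of the conversion step described above, the following sketch maps raw strain readings to a six-component wrench via a calibration matrix. The matrix and the strain values are placeholders chosen only for the example; real sensors are shipped with an individually calibrated matrix.

```python
import numpy as np

# Hypothetical 6x6 calibration matrix (placeholder values) and raw strain readings.
C = np.eye(6) * 1.0e6                                    # [N resp. Nm per strain unit], illustrative only
strains = np.array([1.2e-6, -0.4e-6, 0.9e-6, 0.1e-6, -1.1e-6, 0.6e-6])

wrench = C @ strains                                     # [Fx, Fy, Fz, Mx, My, Mz] in the sensor frame
print(wrench)
```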
Force/torque sensors contain a digital signal processor that continuously acquires and filters the sensor data (strain) in parallel, calculates the measurement data (forces/torques) and makes it available via the sensor's communication interface.
The measured values correspond to the forces at the sensor and usually still have to be converted into the forces and torques at the end effector or tool via a suitable transformation.
Since force/torque sensors are still relatively expensive (between €4,000 and €15,000) and very sensitive to overloads and disturbances, they, and with them force control, have been used only reluctantly in industry. Indirect force measurement or estimation is one solution, allowing force control without costly and disturbance-prone force sensors.
Force estimation.
A cost-saving alternative to direct force measurement is force estimation (also known as "indirect force measurement"). This makes it possible to dispense with the use of force/torque sensors. In addition to cost savings, dispensing with these sensors has other advantages: Force sensors are usually the weakest link in the mechanical chain of the machine or robot system, so dispensing with them brings greater stability and less susceptibility to mechanical faults. In addition, dispensing with force/torque sensors brings greater safety, since there is no need for sensor cables to be routed out and protected directly at the manipulator's wrist.
A common method for indirect force measurement or force estimation is the measurement of the motor currents applied for motion control. With some restrictions, these are proportional to the torque applied to the driven robot axis. Adjusted for gravitational, inertial and frictional effects, the motor currents are largely linear to the torques of the individual axes. The contact force at the end effector can be determined via the torques thus known.
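A minimal sketch of this estimation chain is shown below, assuming the currents have already been compensated for gravity, friction and inertia. The torque constants, currents and the Jacobian are illustrative placeholders; the end-effector wrench follows from the standard relation tau = J^T F.

```python
import numpy as np

k_t = np.array([0.11, 0.11, 0.09, 0.05, 0.05, 0.03])     # assumed torque constants [Nm/A]
i_meas = np.array([2.0, -1.5, 0.8, 0.2, -0.1, 0.05])     # compensated motor currents [A]
tau = k_t * i_meas                                        # estimated joint torques [Nm]

J = np.random.default_rng(1).normal(size=(6, 6))          # placeholder geometric Jacobian at the current pose
F_est = np.linalg.pinv(J.T) @ tau                         # estimated wrench at the end effector
print(F_est)
```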
Separation of dynamic and static forces.
During force measurement and force estimation, filtering of the sensor signals may be necessary. Numerous side effects and secondary forces can occur which do not correspond to the measurement of the contact force. This is especially true if a larger load mass is mounted on the manipulator. This interferes with the force measurement when the manipulator moves with high accelerations.
To be able to adjust the measurement for side effects, both an accurate dynamic model of the machine and a model or estimate of the load must be available. This estimate can be determined via reference movements (free movement without object contact). After estimating the load, the measurement or estimate of the forces can be adjusted for Coriolis, centripetal and centrifugal forces, gravitational and frictional effects, and inertia. Adaptive approaches can also be used here to continuously adjust the estimate of the load.
Control concepts.
Various control concepts are used for force control. Depending on the desired behavior of the system, a distinction is made between the concepts of direct force control and indirect control via specification of compliance or mechanical impedance. As a rule, force control is combined with motion control. Concepts for force control have to consider the problem of coupling between force and position: If the manipulator is in contact with the environment, a change of the position also means a change of the contact force.
Impedance control.
Impedance control, or compliance control, regulates the compliance of the system, i.e., the relationship between force and position on contact with an object. Compliance is defined in the literature as a "measure of the robot's ability to counteract contact forces." There are passive and active approaches. The compliance of the robot system is modeled as a mechanical impedance, which describes the relationship between applied force and resulting velocity; the machine or manipulator is regarded as a mechanical resistance with positional constraints imposed by the environment. Accordingly, the causality of mechanical impedance is that a motion of the robot results in a force, whereas with mechanical admittance a force applied to the robot results in a motion.
Passive impedance control.
Passive impedance control (also known as compliance control) does not require force measurement because there is no explicit force control. Instead, the manipulator and/or end effector is designed to be flexible in a way that minimizes the contact forces occurring during the task to be performed. Typical applications include insertion and gripping operations. The end effector is designed in such a way that it allows translational and rotational deviations orthogonal to the gripping or insertion direction, but has high stiffness in the gripping or insertion direction. The figure opposite shows a so-called Remote Center of Compliance (RCC) that makes this possible. As an alternative to an RCC, the entire machine can also be made structurally elastic.
Passive impedance control is a very good solution in terms of system dynamics, since the control introduces no latency. However, it is often limited by the mechanical design of the end effector to a particular task and cannot be readily applied to different or changing tasks and environmental conditions.
Active impedance control.
Active compliance control refers to the control of the manipulator based on a deviation of the end effector. This is particularly suitable for guiding robots by an operator, for example as part of a teach-in process.
Active compliance control is based on the idea of representing the system of machine and environment as a spring-damper-mass system. The force formula_2 and the motion (position formula_3, velocity formula_4, and acceleration formula_5 are directly related via the spring-damper-mass equation:
formula_6
The compliance or mechanical impedance of the system is determined by the stiffness formula_7, the damping formula_8 and the inertia formula_9 and can be influenced by these three variables. The control is given a mechanical target impedance via these three variables, which is achieved by the machine control.
The figure shows the block diagram of a force-based impedance control. The impedance in the block diagram represents the stiffness, damping and inertia components mentioned above. A position-based impedance control can be designed analogously with an internal position or motion control.
Alternatively and analogously, the compliance (admittance) can be controlled instead of the resistance. In contrast to the impedance control, the admittance appears in the control law as the reciprocal of the impedance.
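A minimal admittance-style sketch of this target behaviour is given below: the measured force error is turned into a compliant position offset by integrating the spring-damper-mass equation. The gains, cycle time and sign convention are illustrative assumptions, not values from the text.

```python
c, d, m = 500.0, 40.0, 2.0        # assumed target stiffness [N/m], damping [Ns/m], virtual mass [kg]
dt = 0.001                        # assumed controller cycle time [s]
x, dx = 0.0, 0.0                  # end-effector deviation and its velocity

def admittance_step(f_des, f_meas):
    """One controller cycle: returns the position offset handed to the inner motion control."""
    global x, dx
    ddx = (f_des - f_meas - d * dx - c * x) / m   # solve the spring-damper-mass equation for the acceleration
    dx += ddx * dt
    x += dx * dt
    return x
```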
Direct force control.
The above concepts are so-called indirect force control, since the contact force is not explicitly specified as a command variable, but is determined indirectly via the controller parameters damping, stiffness and (virtual) mass. Direct force control is presented below.
Direct force control uses the desired force as a setpoint within a closed control loop. It is implemented as a parallel force/position control in the form of a cascade control or as a hybrid force/position control in which switching takes place between position and force control.
Parallel force/position control.
One possibility for force control is parallel force/position control. The control is designed as a cascade control with an outer force control loop and an inner position control loop. As shown in the following figure, a corresponding infeed correction is calculated from the difference between the nominal and actual force. This infeed correction is merged with the position command values, whereby the position command originating from force control (formula_11) has a higher priority than formula_10, i.e. a position error is tolerated in favor of correct force control. The merged value is the input variable for the inner position control loop.
Analogous to an inner position control, an inner velocity control can also take place, which has a higher dynamic. In this case, the inner control loop should have a saturation in order not to generate a (theoretically) arbitrarily increasing velocity in the free movement until contact is made.
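The outer force loop of such a cascade might look like the following sketch: the force error is integrated into an infeed correction, saturated, and merged with the position setpoint for the inner loop. Gains, saturation limit and cycle time are illustrative assumptions.

```python
k_p, k_i, dt = 0.0005, 0.002, 0.004    # assumed force-loop gains [m/N] and cycle time [s]
x_korr = 0.0                           # accumulated infeed correction

def outer_force_loop(f_des, f_meas, x_soll):
    global x_korr
    e = f_des - f_meas
    x_korr += k_i * e * dt                     # integral action: a position error is tolerated
    x_korr = max(-0.05, min(0.05, x_korr))     # saturation keeps the correction bounded
    return x_soll + k_p * e + x_korr           # setpoint passed to the inner position loop
```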
Hybrid force/position control.
An improvement over the above concepts is offered by hybrid force/position control, which works with two separate control systems and can also be used with hard, inflexible contact surfaces. In hybrid force/position control, the space is divided into a constrained and an unconstrained space. The constrained space contains restrictions, for example in the form of obstacles, and does not allow free movement; the unconstrained space allows free movement. Each dimension of the space is either constrained or unconstrained.
In hybrid force/position control, force control is used for the constrained space, and position control is used for the unconstrained space. The figure shows such a control. The matrix Σ indicates which spatial directions are constrained and is a diagonal matrix consisting of zeros and ones.
Which spatial direction is constrained and which is unconstrained can, for example, be specified statically. Force or position control is then explicitly assigned to each spatial direction, and the matrix Σ is static. Another possibility is to switch the matrix Σ dynamically on the basis of force measurement. In this way, individual spatial directions can be switched from position control to force control when contact or a collision is detected. For contact tasks, all spatial directions would be motion-controlled during free movement; once contact is established, the contact direction is switched to force control by selecting the appropriate matrix Σ.
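A compact sketch of this split is shown below: the diagonal selection matrix Σ routes each Cartesian direction either to force control (entry 1) or to position control (entry 0). Both controllers are reduced to simple gains purely for illustration; the gains and the choice of constrained direction are assumptions.

```python
import numpy as np

Sigma = np.diag([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])   # e.g. force control only along z after contact
I6 = np.eye(6)

def hybrid_command(f_des, f_meas, x_des, x_meas, k_f=0.002, k_x=5.0):
    u_force = k_f * (np.asarray(f_des) - np.asarray(f_meas))   # constrained directions
    u_pos = k_x * (np.asarray(x_des) - np.asarray(x_meas))     # unconstrained directions
    return Sigma @ u_force + (I6 - Sigma) @ u_pos              # combined Cartesian command
```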
Research.
In recent years, research has increasingly focused on adaptive concepts, the use of fuzzy control systems and machine learning, and force-based whole-body control.
Adaptive force control.
The previously mentioned, non-adaptive concepts are based on exact knowledge of the dynamic process parameters. These are usually determined and adjusted by experiments and calibration. Problems can arise due to measurement errors and variable loads. In adaptive force control, position-dependent and thus time-variable parts of the system are regarded as parameter fluctuations that are continuously compensated by adaptation while the controller runs.
Due to the changing control, no guarantee can be given for dynamic stability of the system. Adaptive control is therefore usually first used offline and the results are intensively tested in simulation before being used on the real system.
Fuzzy control and machine learning.
A prerequisite for the application of classical design methods is an explicit system model. If this is difficult or impossible to represent, fuzzy controllers or machine learning can be considered. By means of fuzzy logic, knowledge acquired by humans can be converted into a control behavior in the form of fuzzy control specifications. Explicit specification of the controller parameters is thus no longer necessary.
Approaches using machine learning, moreover, no longer require humans to create the control behavior, but use machine learning as the basis for control.
Whole body control.
Due to the high complexity of modern robotic systems, such as humanoid robots, a large number of actuated degrees of freedom must be controlled. In addition, such systems are increasingly used in the direct environment of humans. Accordingly, concepts from force and impedance control are specifically used in this area to increase safety, as this allows the robot to interact with the environment and humans in a compliant manner. | [
{
"math_id": 0,
"text": "P'_1"
},
{
"math_id": 1,
"text": "P'_2"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "x(t)\\!\\,"
},
{
"math_id": 4,
"text": "\\dot x(t)"
},
{
"math_id": 5,
"text": "\\ddot x(t)"
},
{
"math_id": 6,
"text": "F(t) = c \\cdot x(t) + d \\cdot \\dot x(t) + m \\cdot \\ddot x(t)"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "d"
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "X_{soll}"
},
{
"math_id": 11,
"text": "X_{korr}"
}
]
| https://en.wikipedia.org/wiki?curid=74609356 |
746117 | History of calculus | Calculus, originally called infinitesimal calculus, is a mathematical discipline focused on limits, continuity, derivatives, integrals, and infinite series. Many elements of calculus appeared in ancient Greece, then in China and the Middle East, and still later again in medieval Europe and in India. Infinitesimal calculus was developed in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz independently of each other. An argument over priority led to the Leibniz–Newton calculus controversy which continued until the death of Leibniz in 1716. The development of calculus and its uses within the sciences have continued to the present.
Etymology.
In mathematics education, "calculus" denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word "calculus" is Latin for "small pebble" (the diminutive of "calx," meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to mean a method of computation. In this sense, it was used in English at least as early as 1672, several years prior to the publications of Leibniz and Newton.
In addition to the differential calculus and integral calculus, the term is also used widely for naming specific methods of calculation. Examples of this include propositional calculus in logic, the calculus of variations in mathematics, process calculus in computing, and the felicific calculus in philosophy.
Early precursors of calculus.
Ancient.
Egypt and Babylonia.
The ancient period introduced some of the ideas that led to integral calculus, but does not seem to have developed these ideas in a rigorous and systematic way. Calculations of volumes and areas, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulas are only given for concrete numbers, some are only approximately true, and they are not derived by deductive reasoning. Babylonians may have discovered the trapezoidal rule while doing astronomical observations of Jupiter.
Greece.
From the age of Greek mathematics, Eudoxus (c. 408–355 BC) used the method of exhaustion, which foreshadows the concept of the limit, to calculate areas and volumes, while Archimedes (c. 287–212 BC) developed this idea further, inventing heuristics which resemble the methods of integral calculus. Greek mathematicians are also credited with a significant use of infinitesimals. Democritus is the first person recorded to consider seriously the division of objects into an infinite number of cross-sections, but his inability to rationalize discrete cross-sections with a cone's smooth slope prevented him from accepting the idea. At approximately the same time, Zeno of Elea discredited infinitesimals further by his articulation of the paradoxes which they seemingly create.
Archimedes developed this method further, while also inventing heuristic methods which resemble modern day concepts somewhat in his "The Quadrature of the Parabola", "The Method", and "On the Sphere and Cylinder". It should not be thought that infinitesimals were put on a rigorous footing during this time, however. Only when it was supplemented by a proper geometric proof would Greek mathematicians accept a proposition as true. It was not until the 17th century that the method was formalized by Cavalieri as the method of Indivisibles and eventually incorporated by Newton into a general framework of integral calculus. Archimedes was the first to find the tangent to a curve other than a circle, in a method akin to differential calculus. While studying the spiral, he separated a point's motion into two components, one radial motion component and one circular motion component, and then continued to add the two component motions together, thereby finding the tangent to the curve.
China.
The method of exhaustion was independently invented in China by Liu Hui in the 4th century AD in order to find the area of a circle. In the 5th century, Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval.
Middle East.
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration, where the formulas for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. Roshdi Rashed has argued that the 12th century mathematician Sharaf al-Dīn al-Tūsī must have used the derivative of cubic polynomials in his "Treatise on Equations". Rashed's conclusion has been contested by other scholars, who argue that he could have obtained his results by other methods which do not require the derivative of the function to be known.
India.
Evidence suggests Bhāskara II was acquainted with some ideas of differential calculus. Bhāskara also goes deeper into the 'differential calculus' and suggests the differential coefficient vanishes at an extremum value of the function, indicating knowledge of the concept of 'infinitesimals'. There is evidence of an early form of Rolle's theorem in his work. The modern formulation of Rolle's theorem states that if formula_0, then formula_1 for some formula_2 with formula_3. In his astronomical work, Bhāskara gives a result that looks like a precursor to infinitesimal methods: if formula_4 then formula_5. This leads to the derivative of the sine function, although he did not develop the notion of a derivative.
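A quick numerical check of the quoted relation, with arbitrary values for x and y, illustrates how close the approximation is for nearby arguments:

```python
import math

x, y = 0.50, 0.51
print(math.sin(y) - math.sin(x))   # ~0.008752
print((y - x) * math.cos(y))       # ~0.008728
```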
Some ideas on calculus later appeared in Indian mathematics, at the Kerala school of astronomy and mathematics. Madhava of Sangamagrama in the 14th century, and later mathematicians of the Kerala school, stated components of calculus such as the Taylor series and infinite series approximations. However, they did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the powerful problem-solving tool we have today.
Europe.
The mathematical study of continuity was revived in the 14th century by the Oxford Calculators and French collaborators such as Nicole Oresme. They proved the "Merton mean speed theorem": that a uniformly accelerated body travels the same distance as a body with uniform speed whose speed is half the final velocity of the accelerated body.
Modern precursors.
Integrals.
Johannes Kepler's work "Stereometrica Doliorum" published in 1615 formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.
A significant work was a treatise inspired by Kepler's methods published in 1635 by Bonaventura Cavalieri on his method of indivisibles. He argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. He discovered Cavalieri's quadrature formula which gave the area under the curves "x""n" of higher degree. This had previously been computed in a similar way for the parabola by Archimedes in "The Method", but this treatise is believed to have been lost in the 13th century, and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
Torricelli extended Cavalieri's work to other curves such as the cycloid, and then the formula was generalized to fractional and negative powers by Wallis in 1656. In a 1659 treatise, Fermat is credited with an ingenious trick for evaluating the integral of any power function directly. Fermat also obtained a technique for finding the centers of gravity of various plane and solid figures, which influenced further work in quadrature.
Derivatives.
In the 17th century, European mathematicians Isaac Barrow, René Descartes, Pierre de Fermat, Blaise Pascal, John Wallis and others discussed the idea of a derivative. In particular, in "Methodus ad disquirendam maximam et minima" and in "De tangentibus linearum curvarum" distributed in 1636, Fermat introduced the concept of adequality, which represented equality up to an infinitesimal error term. This method could be used to determine the maxima, minima, and tangents to various curves and was closely related to differentiation.
Isaac Newton would later write that his own early ideas about calculus came directly from "Fermat's way of drawing tangents."
Fundamental theorem of calculus.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time, and Fermat's adequality. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670.
James Gregory, influenced by Fermat's contributions both to tangency and to quadrature, was then able to prove a restricted version of the second fundamental theorem of calculus, that integrals can be computed using any of a function's antiderivatives.
The first full proof of the fundamental theorem of calculus was given by Isaac Barrow.
Other developments.
One prerequisite to the establishment of a calculus of functions of a real variable involved finding an antiderivative for the rational function formula_6 This problem can be phrased as quadrature of the rectangular hyperbola "xy" = 1. In 1647 Gregoire de Saint-Vincent noted that the required function "F" satisfied formula_7 so that a geometric sequence became, under "F", an arithmetic sequence. A. A. de Sarasa associated this feature with contemporary algorithms called "logarithms" that economized arithmetic by rendering multiplications into additions. So "F" was first known as the hyperbolic logarithm. After Euler exploited e = 2.71828..., and "F" was identified as the inverse function of the exponential function, it became the natural logarithm, satisfying formula_8
The first proof of Rolle's theorem was given by Michel Rolle in 1691 using methods developed by the Dutch mathematician Johann van Waveren Hudde. The mean value theorem in its modern form was stated by Bernard Bolzano and Augustin-Louis Cauchy (1789–1857) also after the founding of modern calculus. Important contributions were also made by Barrow, Huygens, and many others.
Newton and Leibniz.
Before Newton and Leibniz, the word "calculus" referred to any body of mathematics, but in the following years, "calculus" became a popular term for a field of mathematics based upon their insights. Newton and Leibniz, building on this work, independently developed the surrounding theory of infinitesimal calculus in the late 17th century. Also, Leibniz did a great deal of work with developing consistent and useful notation and concepts. Newton provided some of the most important applications to physics, especially of integral calculus.
By the middle of the 17th century, European mathematics had changed its primary repository of knowledge. In comparison to the last century which maintained Hellenistic mathematics as the starting point for research, Newton, Leibniz and their contemporaries increasingly looked towards the works of more modern thinkers.
Newton came to calculus as part of his investigations in physics and geometry. He viewed calculus as the scientific description of the generation of motion and magnitudes. In comparison, Leibniz focused on the tangent problem and came to believe that calculus was a metaphysical explanation of change. Importantly, the core of their insight was the formalization of the inverse properties between the integral and the differential of a function. This insight had been anticipated by their predecessors, but they were the first to conceive calculus as a system in which new rhetoric and descriptive terms were created.
Newton.
Newton completed no definitive publication formalizing his fluxional calculus; rather, many of his mathematical discoveries were transmitted through correspondence, smaller papers or as embedded aspects in his other definitive compilations, such as the "Principia" and "Opticks". Newton would begin his mathematical training as the chosen heir of Isaac Barrow in Cambridge. His aptitude was recognized early and he quickly learned the current theories. By 1664 Newton had made his first important contribution by advancing the binomial theorem, which he had extended to include fractional and negative exponents. Newton succeeded in expanding the applicability of the binomial theorem by applying the algebra of finite quantities in an analysis of infinite series. He showed a willingness to view infinite series not only as approximate devices, but also as alternative forms of expressing a term.
Many of Newton's critical insights occurred during the plague years of 1665–1666, which he later described as, "the prime of my age for invention and minded mathematics and [natural] philosophy more than at any time since." It was during his plague-induced isolation that the first written conception of fluxionary calculus was recorded in the unpublished "De Analysi per Aequationes Numero Terminorum Infinitas". In this paper, Newton determined the area under a curve by first calculating a momentary rate of change and then extrapolating the total area. He began by reasoning about an indefinitely small triangle whose area is a function of "x" and "y". He then reasoned that the infinitesimal increase in the abscissa will create a new formula where "x" = "x" + "o" (importantly, "o" is the letter, not the digit 0). He then recalculated the area with the aid of the binomial theorem, removed all quantities containing the letter "o" and re-formed an algebraic expression for the area. Significantly, Newton would then "blot out" the quantities containing "o" because terms "multiplied by it will be nothing in respect to the rest".
At this point Newton had begun to realize the central property of inversion. He had created an expression for the area under a curve by considering a momentary increase at a point. In effect, the fundamental theorem of calculus was built into his calculations. While his new formulation offered incredible potential, Newton was well aware of its logical limitations at the time. He admits that "errors are not to be disregarded in mathematics, no matter how small" and that what he had achieved was "shortly explained rather than accurately demonstrated".
In an effort to give calculus a more rigorous explication and framework, Newton compiled in 1671 the "Methodus Fluxionum et Serierum Infinitarum". In this book, Newton's strict empiricism shaped and defined his fluxional calculus. He exploited instantaneous motion and infinitesimals informally. He used math as a methodological tool to explain the physical world. The base of Newton's revised calculus became continuity; as such he redefined his calculations in terms of continual flowing motion. For Newton, variable magnitudes are not aggregates of infinitesimal elements, but are generated by the indisputable fact of motion. As with many of his works, Newton delayed publication. "Methodus Fluxionum" was not published until 1736.
Newton attempted to avoid the use of the infinitesimal by forming calculations based on ratios of changes. In the "Methodus Fluxionum" he defined the rate of generated change as a fluxion, which he represented by a dotted letter, and the quantity generated he defined as a fluent. For example, if formula_9 and formula_10 are fluents, then formula_11 and formula_12 are their respective fluxions. This revised calculus of ratios continued to be developed and was maturely stated in the 1676 text "De Quadratura Curvarum" where Newton came to define the present day derivative as the ultimate ratio of change, which he defined as the ratio between evanescent increments (the ratio of fluxions) purely at the moment in question. Essentially, the ultimate ratio is the ratio as the increments vanish into nothingness. Importantly, Newton explained the existence of the ultimate ratio by appealing to motion:For by the ultimate velocity is meant that, with which the body is moved, neither before it arrives at its last place, when the motion ceases nor after but at the very instant when it arrives... the ultimate ratio of evanescent quantities is to be understood, the ratio of quantities not before they vanish, not after, but with which they vanishNewton developed his fluxional calculus in an attempt to evade the informal use of infinitesimals in his calculations.
Leibniz.
While Newton began development of his fluxional calculus in 1665–1666 his findings did not become widely circulated until later. In the intervening years Leibniz also strove to create his calculus. In comparison to Newton who came to math at an early age, Leibniz began his rigorous math studies with a mature intellect. He was a polymath, and his intellectual interests and achievements involved metaphysics, law, economics, politics, logic, and mathematics. In order to understand Leibniz's reasoning in calculus his background should be kept in mind. Particularly, his metaphysics which described the universe as a Monadology, and his plans of creating a precise formal logic whereby, "a general method in which all truths of the reason would be reduced to a kind of calculation".
In 1672, Leibniz met the mathematician Huygens who convinced Leibniz to dedicate significant time to the study of mathematics. By 1673 he had progressed to reading Pascal's "Traité des Sinus du Quarte Cercle" and it was during his largely autodidactic research that Leibniz said "a light turned on". Like Newton, Leibniz saw the tangent as a ratio but declared it as simply the ratio between ordinates and abscissas. He continued this reasoning to argue that the integral was in fact the sum of the ordinates for infinitesimal intervals in the abscissa; in effect, the sum of an infinite number of rectangles. From these definitions the inverse relationship or differential became clear and Leibniz quickly realized the potential to form a whole new system of mathematics. Where Newton over the course of his career used several approaches in addition to an approach using infinitesimals, Leibniz made this the cornerstone of his notation and calculus.
In the manuscripts of 25 October to 11 November 1675, Leibniz recorded his discoveries and experiments with various forms of notation. He was acutely aware of the notational terms used and his earlier plans to form a precise logical symbolism became evident. Eventually, Leibniz denoted the infinitesimal increments of abscissas and ordinates "dx" and "dy", and the summation of infinitely many infinitesimally thin rectangles as a long s (∫ ), which became the present integral symbol formula_13.
While Leibniz's notation is used by modern mathematics, his logical base was different from our current one. Leibniz embraced infinitesimals and wrote extensively so as, "not to make of the infinitely small a mystery, as had Pascal." According to Gilles Deleuze, Leibniz's zeroes "are nothings, but they are not absolute nothings, they are nothings respectively" (quoting Leibniz' text "Justification of the calculus of infinitesimals by the calculus of ordinary algebra"). Alternatively, he defines them as, "less than any given quantity". For Leibniz, the world was an aggregate of infinitesimal points and the lack of scientific proof for their existence did not trouble him. Infinitesimals to Leibniz were ideal quantities of a different type from appreciable numbers. The truth of continuity was proven by existence itself. For Leibniz the principle of continuity and thus the validity of his calculus was assured. Three hundred years after Leibniz's work, Abraham Robinson showed that using infinitesimal quantities in calculus could be given a solid foundation.
Legacy.
The rise of calculus stands out as a unique moment in mathematics. Calculus is the mathematics of motion and change, and as such, its invention required the creation of a new mathematical system. Importantly, Newton and Leibniz did not create the same calculus and they did not conceive of modern calculus. While they were both involved in the process of creating a mathematical system to deal with variable quantities, their elementary base was different. For Newton, change was a variable quantity over time and for Leibniz it was the difference ranging over a sequence of infinitely close values. Notably, the descriptive terms each system created to describe change were different.
Historically, there was much debate over whether it was Newton or Leibniz who first "invented" calculus. This argument, the Leibniz and Newton calculus controversy, involving Leibniz, who was German, and the Englishman Newton, led to a rift in the European mathematical community lasting over a century. Leibniz was the first to publish his investigations; however, it is well established that Newton had started his work several years prior to Leibniz and had already developed a theory of tangents by the time Leibniz became interested in the question.
It is not known how much this may have influenced Leibniz. The initial accusations were made by students and supporters of the two great scientists at the turn of the century, but after 1711 both of them became personally involved, accusing each other of plagiarism.
The priority dispute had an effect of separating English-speaking mathematicians from those in continental Europe for many years. Only in the 1820s, due to the efforts of the Analytical Society, did Leibnizian analytical calculus become accepted in England. Today, both Newton and Leibniz are given credit for independently developing the basics of calculus. It is Leibniz, however, who is credited with giving the new discipline the name it is known by today: "calculus". Newton's name for it was "the science of fluents and fluxions".
The work of both Newton and Leibniz is reflected in the notation used today. Newton introduced the notation formula_14 for the derivative of a function "f". Leibniz introduced the symbol formula_15 for the integral and wrote the derivative of a function "y" of the variable "x" as formula_16, both of which are still in use.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
Developments.
Calculus of variations.
The calculus of variations may be said to begin with a problem of Johann Bernoulli (1696). It immediately occupied the attention of Jakob Bernoulli but Leonhard Euler first elaborated the subject. His contributions began in 1733, and his "Elementa Calculi Variationum" gave to the science its name. Joseph Louis Lagrange contributed extensively to the theory, and Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. To this discrimination Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Denis Poisson (1831), Mikhail Vasilievich Ostrogradsky (1834), and Carl Gustav Jakob Jacobi (1837) have been among the contributors. An important general work is that of Sarrus (1842) which was condensed and improved by Augustin Louis Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His course on the theory may be asserted to be the first to place calculus on a firm and rigorous foundation.
Operational methods.
Antoine Arbogast (1800) was the first to separate the symbol of operation from that of quantity in a differential equation. Francois-Joseph Servois (1814) seems to have been the first to give correct rules on the subject. Charles James Hargreave (1848) applied these methods in his memoir on differential equations, and George Boole freely employed them. Hermann Grassmann and Hermann Hankel made great use of the theory, the former in studying equations, the latter in his theory of complex numbers.
Integrals.
Niels Henrik Abel seems to have been the first to consider in a general way the question as to what differential equations can be integrated in a finite form by the aid of ordinary functions, an investigation extended by Liouville. Cauchy early undertook the general theory of determining definite integrals, and the subject has been prominent during the 19th century. Frullani integrals, David Bierens de Haan's work on the theory and his elaborate tables, Lejeune Dirichlet's lectures embodied in Meyer's treatise, and numerous memoirs of Legendre, Poisson, Plana, Raabe, Sohncke, Schlömilch, Elliott, Leudesdorf and Kronecker are among the noteworthy contributions.
Eulerian integrals were first studied by Euler and afterwards investigated by Legendre, by whom they were classed as Eulerian integrals of the first and second species, as follows:
formula_17
formula_18
although these were not the exact forms of Euler's study.
If "n" is a positive integer:
formula_19
but the integral converges for all positive real formula_20 and defines an analytic continuation of the factorial function to all of the complex plane except for poles at zero and the negative integers. To it Legendre assigned the symbol formula_21, and it is now called the gamma function. Besides being analytic over positive reals formula_22, formula_21 also enjoys the uniquely defining property that formula_23 is convex, which aesthetically justifies this analytic continuation of the factorial function over any other analytic continuation. To the subject Lejeune Dirichlet has contributed an important theorem (Liouville, 1839), which has been elaborated by Liouville, Catalan, Leslie Ellis, and others. Raabe (1843–44), Bauer (1859), and Gudermann (1845) have written about the evaluation of formula_24 and formula_25. Legendre's great table appeared in 1816.
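A short check of the connection stated above, namely that for positive integers n the gamma function reproduces the factorial, Γ(n) = (n-1)!:

```python
import math

for n in range(1, 7):
    print(n, math.gamma(n), math.factorial(n - 1))   # gamma(n) equals (n-1)! for integer n
```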
Applications.
The application of the infinitesimal calculus to problems in physics and astronomy was contemporary with the origin of the science. All through the 18th century these applications were multiplied, until at its close Laplace and Lagrange had brought the whole range of the study of forces into the realm of analysis. To Lagrange (1773) we owe the introduction of the theory of the potential into dynamics, although the name "potential function" and the fundamental memoir of the subject are due to Green (1827, printed in 1828). The name "potential" is due to Gauss (1840), and the distinction between potential and potential function to Clausius. With its development are connected the names of Lejeune Dirichlet, Riemann, von Neumann, Heine, Kronecker, Lipschitz, Christoffel, Kirchhoff, Beltrami, and many of the leading physicists of the century.
It is impossible in this article to enter into the great variety of other applications of analysis to physical problems. Among them are the investigations of Euler on vibrating chords; Sophie Germain on elastic membranes; Poisson, Lamé, Saint-Venant, and Clebsch on the elasticity of three-dimensional bodies; Fourier on heat diffusion; Fresnel on light; Maxwell, Helmholtz, and Hertz on electricity; Hansen, Hill, and Gyldén on astronomy; Maxwell on spherical harmonics; Lord Rayleigh on acoustics; and the contributions of Lejeune Dirichlet, Weber, Kirchhoff, F. Neumann, Lord Kelvin, Clausius, Bjerknes, MacCullagh, and Fuhrmann to physics in general. The labors of Helmholtz should be especially mentioned, since he contributed to the theories of dynamics, electricity, etc., and brought his great analytical powers to bear on the fundamental axioms of mechanics as well as on those of pure mathematics.
Furthermore, infinitesimal calculus was introduced into the social sciences, starting with Neoclassical economics. Today, it is a valuable tool in mainstream economics.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f\\left(a\\right) = f\\left(b\\right) = 0 "
},
{
"math_id": 1,
"text": " f'\\left(x\\right) = 0 "
},
{
"math_id": 2,
"text": "x "
},
{
"math_id": 3,
"text": "\\ a < x < b "
},
{
"math_id": 4,
"text": "x \\approx y"
},
{
"math_id": 5,
"text": "\\sin(y) - \\sin(x) \\approx (y - x)\\cos(y)"
},
{
"math_id": 6,
"text": "f(x) \\ = \\ \\frac{1}{x} ."
},
{
"math_id": 7,
"text": "F(st) = F(s) + F(t) ,"
},
{
"math_id": 8,
"text": "\\frac{dF}{dx} \\ = \\ \\frac{1}{x} ."
},
{
"math_id": 9,
"text": "{x}"
},
{
"math_id": 10,
"text": "{y}"
},
{
"math_id": 11,
"text": "\\dot{x}"
},
{
"math_id": 12,
"text": "\\dot{y}"
},
{
"math_id": 13,
"text": "\\scriptstyle\\int"
},
{
"math_id": 14,
"text": "\\dot{f}"
},
{
"math_id": 15,
"text": "\\int"
},
{
"math_id": 16,
"text": "\\frac{dy}{dx}"
},
{
"math_id": 17,
"text": "\\int_0^1 x^{n-1}(1 - x)^{n-1} \\, dx"
},
{
"math_id": 18,
"text": "\\int_0^\\infty e^{-x} x^{n-1} \\, dx"
},
{
"math_id": 19,
"text": "\\int_0^\\infty e^{-x}x^{n-1}dx = (n-1)!,"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "\\Gamma"
},
{
"math_id": 22,
"text": "\\mathbb{R}^{+}"
},
{
"math_id": 23,
"text": "\\log \\Gamma"
},
{
"math_id": 24,
"text": "\\Gamma (x)"
},
{
"math_id": 25,
"text": "\\log \\Gamma (x)"
}
]
| https://en.wikipedia.org/wiki?curid=746117 |
7461280 | Mating type | Term in biology
Mating types are the microorganism equivalent to sexes in multicellular lifeforms and are thought to be the ancestor to distinct sexes. They also occur in multicellular organisms such as fungi.
Definition.
Mating types are the microorganism equivalent to sex in higher organisms and occur in isogamous species. Depending on the group, different mating types are often referred to by numbers, letters, or simply "+" and "−" instead of "male" and "female", which refer to "sexes" or differences in size between gametes. Syngamy can only take place between gametes carrying different mating types.
Mating types are extensively studied in fungi. Among fungi, mating type is determined by chromosomal regions called mating-type loci. Furthermore, compatibility is not simply a matter of "two different mating types can mate", but rather a matter of combinatorics. As a simple example, most basidiomycetes have a "tetrapolar heterothallism" mating system: there are two loci, and mating between two individuals is possible if the alleles at "both" loci are different. For example, if there are 3 alleles per locus, then there would be 9 mating types, each of which can mate with 4 other mating types. Such multiplicative combination can generate a vast number of mating types.
Mechanism.
As an illustration, the model organism "Coprinus cinereus" has two mating-type loci called "A" and "B". Both loci have 3 groups of genes. At the "A" locus are 6 homeodomain proteins arranged in 3 groups of 2 (HD1 and HD2), which arose by gene duplication. At the "B" locus, each of the 3 groups contain one pheromone G-protein-coupled receptor and usually two genes for pheromones.
The "A" locus ensures heterothallism through a specific interaction between HD1 and HD2 proteins. Within each group, a HD1 protein can only form a functional heterodimer with a HD2 protein from a different group, not with the HD2 protein from its own group. Functional heterodimers are necessary for a dikaryon-specific transcription factor, and its lack arrests the development process. They function redundantly, so it is only necessary for one of the three groups to be heterozygotic for the "A" locus to work.
Similarly, the "B" locus ensures heterothallism through a specific interaction between pheromone receptors and pheromones. Each pheromone receptor is activated by pheromones from other groups, but not by the pheromone encoded by the same group. This means that a pheromone receptor can only trigger a signaling cascade when it binds to a pheromone from a different group, not when it binds to the pheromone from its own group. They also function redundantly.
In both cases, the mechanism is based on a "self-incompatibility" principle, where the proteins or pheromones from the same group are incompatible with each other, but compatible with those from different groups.
Similarly, the "Schizophyllum commune" has 2 gene groups (Aα, Aβ) for homeodomain proteins on the "A" locus, and 2 gene groups (Bα, Bβ) for pheromones and receptors on the "B" locus. Aα has 9 alleles, Aβ has 32, Bα has 9, and Bβ has 9. The two gene groups at the "A" locus function independently but redundantly, so only one group out of the two needs to be heterozygotic for it to work. Similarly for the two gene groups at the "B" locus. Thus, mating between two individuals succeeds ifformula_0Thus there are formula_1 mating types, each of which can mate with formula_2 other mating types.
Occurrence.
Reproduction by mating types is especially prevalent in fungi. Filamentous ascomycetes usually have two mating types referred to as "MAT1-1" and "MAT1-2", following the yeast mating-type locus (MAT). Under standard nomenclature, MAT1-1 (which may informally be called MAT1) encodes a regulatory protein with an alpha box motif, while MAT1-2 (informally called MAT2) encodes a protein with a high-mobility group (HMG) DNA-binding motif, as in the yeast mating type MATα1. The corresponding mating types in yeast, a non-filamentous ascomycete, are referred to as MATa and MATα.
Mating type genes in ascomycetes are called idiomorphs rather than alleles due to the uncertainty of the origin by common descent. The proteins they encode are transcription factors which regulate both the early and late stages of the sexual cycle. Heterothallic ascomycetes produce gametes, which present a single Mat idiomorph, and syngamy will only be possible between gametes carrying complementary mating types. On the other hand, homothallic ascomycetes produce gametes that can fuse with every other gamete in the population (including its own mitotic descendants) most often because each haploid contains the two alternate forms of the Mat locus in its genome.
Basidiomycetes can have thousands of different mating types.
In the ascomycete "Neurospora crassa" matings are restricted to interaction of strains of opposite mating type. This promotes some degree of outcrossing. Outcrossing, through complementation, could provide the benefit of masking recessive deleterious mutations in genes which function in the dikaryon and/or diploid stage of the life cycle.
Evolution.
Mating types likely predate anisogamy, and sexes evolved directly from mating types or independently in some lineages.
Studies on green algae have provided evidence for the evolutionary link between sexes and mating types. In 2006 Japanese researchers found a gene in males of "Pleodorina starrii" that is an orthologue of a mating-type gene in "Chlamydomonas reinhardtii". In Volvocales, the plus mating type is the ancestor of the female.
In ciliates multiple mating types evolved from binary mating types in several lineages. As of 2019, genomic conflict has been considered the leading explanation for the evolution of two mating types.
Secondary mating types evolved alongside simultaneous hermaphrodites in several lineages.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[(A\\alpha 1 \\neq A\\alpha 2) \\mathrm{OR}(A\\beta 1 \\neq A\\beta 2)] \\mathrm{AND} [(B\\alpha 1 \\neq B\\alpha 2) \\mathrm{OR}(B\\beta 1 \\neq B\\beta 2)] "
},
{
"math_id": 1,
"text": "9 \\times 32 \\times 9 \\times 9 = 23328"
},
{
"math_id": 2,
"text": "(9 \\times 32-1) \\times (9 \\times 9-1) = 22960"
}
]
| https://en.wikipedia.org/wiki?curid=7461280 |
74620623 | Kerr–Newman–de–Sitter metric | Solution of Einstein field equations
The Kerr–Newman–de–Sitter metric (KNdS) is one of the most general stationary solutions of the Einstein–Maxwell equations in general relativity; it describes the spacetime geometry in the region surrounding an electrically charged, rotating mass embedded in an expanding universe. It generalizes the Kerr–Newman metric by taking into account the cosmological constant formula_0.
Boyer–Lindquist coordinates.
In (+, −, −, −) signature and in natural units of formula_1 the KNdS metric is
formula_2
formula_3
formula_4
formula_5
formula_6
with all the other metric tensor components formula_7, where formula_8 is the black hole's spin parameter, formula_9 its electric charge, and formula_10 the cosmological constant with formula_11 as the time-independent Hubble parameter. The electromagnetic 4-potential is
formula_12
The frame-dragging angular velocity is
formula_13
and the local frame-dragging velocity relative to constant formula_14 positions (the speed of light at the ergosphere)
formula_15
The escape velocity (the speed of light at the horizons) relative to the local corotating zero-angular momentum observer is
formula_16
The conserved quantities in the equations of motion
formula_17
where formula_18 is the four velocity, formula_19 is the test particle's specific charge and formula_20 the Maxwell–Faraday tensor
formula_21
are the total energy
formula_22
and the covariant axial angular momentum
formula_23
The overdot stands for differentiation with respect to the test particle's proper time formula_24 or the photon's affine parameter, so formula_25.
Null coordinates.
To get formula_26 coordinates we apply the transformation
formula_27
formula_28
and get the metric coefficients
formula_29
formula_30
formula_31
and all the other formula_7, with the electromagnetic vector potential
formula_32
Defining formula_33, ingoing lightlike worldlines give a formula_34 light cone on a formula_35 spacetime diagram.
Horizons and ergospheres.
The horizons are at formula_36 and the ergospheres at formula_37.
This can be solved numerically or analytically. Like in the Kerr and Kerr–Newman metrics, the horizons have constant Boyer-Lindquist formula_38, while the ergospheres' radii also depend on the polar angle formula_39.
This gives 3 positive solutions each (including the black hole's inner and outer horizons and ergospheres as well as the cosmic ones) and a negative solution for the space at formula_40 in the antiverse behind the ring singularity, which is part of the probably unphysical extended solution of the metric.
With a negative formula_0 (the Anti–de–Sitter variant with an attractive cosmological constant), there is no cosmic horizon or ergosphere, only the black hole-related ones.
In the Nariai limit the black hole's outer horizon and ergosphere coincide with the cosmic ones (in the Schwarzschild–de–Sitter metric, to which the KNdS reduces when formula_41, this happens when formula_42).
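For a numerical illustration of the horizon condition, the zeros of Δ_r = (a² + r²)(1 − Λr²/3) − 2r + ℧² can be found as the roots of a quartic in r. The parameter values below are illustrative, not taken from the text; Q stands for the charge parameter ℧.

```python
import numpy as np

a, Q, Lam = 0.9, 0.4, 0.01                                # illustrative spin, charge, cosmological constant
# Expanded quartic: -(Lam/3) r^4 + (1 - Lam a^2/3) r^2 - 2 r + (a^2 + Q^2) = 0
coeffs = [-Lam / 3.0, 0.0, 1.0 - Lam * a**2 / 3.0, -2.0, a**2 + Q**2]
roots = np.roots(coeffs)
print(sorted(r.real for r in roots if abs(r.imag) < 1e-9))
# one negative root plus the inner, outer and cosmic horizon radii, as described above
```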
Invariants.
The Ricci scalar for the KNdS metric is formula_43, and the Kretschmann scalar is
formula_44
formula_45
formula_46
formula_47
formula_48
formula_49
formula_50
formula_51
formula_52
formula_53
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Lambda"
},
{
"math_id": 1,
"text": "\\rm G=M=c=k_e=1"
},
{
"math_id": 2,
"text": "g_{\\rm tt}= \\rm -\\frac{3 \\ [ a^2 \\ \\sin^2 \\theta \\left(a^2 \\ \\Lambda \\ \\cos^2 \\theta +3\\right)+a^2 \\left(\\Lambda \\ r^2-3\\right)+\\Lambda \\ r^4-3 \\ r^2+6 \\ r-3 \\mho^2 ] }{\\left(a^2 \\ \\Lambda +3\\right)^2 \\left(a^2 \\cos^2 \\theta +r^2\\right)}"
},
{
"math_id": 3,
"text": "g_{\\rm rr}= \\rm -\\frac{a^2 \\ \\cos^2 \\theta +r^2}{\\left(a^2+r^2\\right) \\left(1-\\frac{\\Lambda \\ r^2}{3}\\right)-2 \\ r+\\mho^2}"
},
{
"math_id": 4,
"text": "g_{\\rm \\theta \\theta}= \\rm -\\frac{3 \\left(a^2 \\ \\cos^2 \\theta +r^2\\right)}{a^2 \\ \\Lambda \\ \\cos^2 \\theta +3}"
},
{
"math_id": 5,
"text": "g_{\\rm \\phi \\phi}= \\rm\\frac{9 \\ \\{ \\frac{1}{3} \\left(a^2+r^2\\right)^2 \\sin^2 \\theta \\left(a^2 \\ \\Lambda \\cos^2 \\theta +3\\right)-a^2 \\sin^4 \\theta \\ [ \\left(a^2+r^2\\right) \\left(1-\\Lambda \\ r^2/3\\right)-2 \\ r+\\mho ^2 ] \\} }{-\\left(a^2 \\ \\Lambda +3\\right)^2 \\left(a^2 \\cos^2 \\theta +r^2\\right)}"
},
{
"math_id": 6,
"text": "g_{\\rm t \\phi}= \\rm \\frac{3 \\ a \\ \\sin^2 \\theta \\ [ a^2 \\ \\Lambda \\left(a^2+r^2\\right) \\cos^2 \\theta +a^2 \\ \\Lambda \\ r^2+\\Lambda \\ r^4+6 \\ r-3 \\ \\mho^2 ] }{\\left(a^2 \\ \\Lambda +3\\right)^2 \\left(a^2 \\ \\cos^2 \\theta +r^2\\right)}"
},
{
"math_id": 7,
"text": "g_{\\mu \\nu}=0"
},
{
"math_id": 8,
"text": "\\rm a"
},
{
"math_id": 9,
"text": "\\rm \\mho"
},
{
"math_id": 10,
"text": "\\rm \\Lambda=3 H^2 "
},
{
"math_id": 11,
"text": "\\rm H"
},
{
"math_id": 12,
"text": "\\rm A_{\\mu } = \\left\\{\\frac{3 \\ r \\ \\mho }{\\left(a^2 \\ \\Lambda +3\\right) \\left(a^2 \\ \\cos^2 \\theta +r^2\\right)}, \\ 0, \\ 0, \\ -\\frac{3 \\ a \\ r \\ \\mho \\ \\sin ^2 \\theta }{\\left(a^2 \\ \\Lambda +3\\right) \\left(a^2 \\ \\cos^2 \\theta +r^2\\right)}\\right\\}"
},
{
"math_id": 13,
"text": "\\omega = \\frac{\\rm d\\phi}{\\rm d t}= -\\frac{g_{\\rm t \\phi}}{g_{\\rm \\phi \\phi}}= \\rm \\frac{a \\ [ a^2 \\ \\Lambda \\left(a^2+r^2\\right) \\cos^2 \\theta +a^2 \\ \\Lambda \\ r^2+6 \\ r+\\Lambda \\ r^4-3 \\ \\mho^2 ] }{a^2 \\ \\sin^2 \\theta \\ [ a^2 \\left(\\Lambda \\ r^2-3\\right)+6 \\ r+\\Lambda \\ r^4-3 \\ r^2-3 \\ \\mho^2 ] +a^2 \\ \\Lambda \\ \\left(a^2+r^2\\right)^2 \\cos^2 \\theta +3 \\ \\left(a^2+r^2\\right)^2}"
},
{
"math_id": 14,
"text": "\\rm \\{r, \\theta, \\phi \\}"
},
{
"math_id": 15,
"text": "\\nu = \\sqrt{g_{\\rm t \\phi} \\ g^{\\rm t \\phi}} = \\rm \\sqrt{-\\frac{a^2 \\ \\sin^2 \\theta \n \\ [ a^2 \\ \\Lambda \\left(a^2+r^2\\right) \\cos^2 \\theta +a^2 \\Lambda \\ r^2+6 \\ r+\\Lambda \\ r^4-3 \\ \\mho^2 ] ^2}{\\left(a^2 \\ \\Lambda \\ \\cos^2 \\theta +3\\right) \\left(a^2+r^2-a^2 \\sin^2 \\theta \\right)^2 [ a^2 \\left(\\Lambda \\ r^2-3\\right)+6 \\ r+\\Lambda \\ r^4-3 \\ r^2-3 \\ \\mho^2 ] }}"
},
{
"math_id": 16,
"text": " {\\rm v} = \\sqrt{ 1 - 1/g^{\\rm tt} } = \\rm \\sqrt{\\frac{3 \\left(a^2 \\Lambda \\cos^2 \\theta +3\\right) \\left(a^2+r^2-a^2 \\sin^2 \\theta \\right)^2 \\left[a^2 \\left(\\Lambda r^2-3\\right)+\\Lambda r^4-3 r^2+6 r-3 \\mho ^2\\right]}{\\left(a^2 \\Lambda +3\\right)^2 \\left( a^2 \\cos^2 \\theta +r^2\\right) \\{ a^2 \\Lambda \\left( a^2+r^2 \\right)^2 \\cos^2 \\theta +3 \\left( a^2+r^2 \\right)^2+a^2 \\sin^2 \\theta \\left[a^2 \\left(\\Lambda r^2-3\\right)+\\Lambda r^4-3 r^2+6 r-3 \\mho ^2 \\right] \\} }+1} "
},
{
"math_id": 17,
"text": "{\\rm \\ddot{x}^{\\mu} = -\\sum_{\\alpha, \\beta} \\ ( \\Gamma^{\\mu}_{\\alpha \\beta} \\ \\dot{x}^{\\alpha} \\ \\dot{x}^{\\beta} + q \\ { \\rm F}^{\\mu \\beta} \\ {\\rm \\dot{x}}^{\\alpha}} \\ g_{\\alpha \\beta})\n"
},
{
"math_id": 18,
"text": "\\rm \\dot{x}"
},
{
"math_id": 19,
"text": "\\rm q"
},
{
"math_id": 20,
"text": "\\rm F"
},
{
"math_id": 21,
"text": "\\rm { \\ F}_{\\mu \\nu}=\\frac{\\partial A_{\\mu}}{\\partial x^{\\nu}}-\\frac{\\partial A_{\\nu}}{\\partial x^{\\mu}}"
},
{
"math_id": 22,
"text": "{\\rm E = -p_t}=g_{\\rm tt} {\\rm \\dot{t}}+g_{\\rm t \\phi} {\\rm \\dot{\\phi}} + \\rm q \\ A_{t}"
},
{
"math_id": 23,
"text": "{\\rm L_z = p_{\\phi}}=-g_{\\rm \\phi \\phi} {\\rm \\dot{\\phi}}-g_{\\rm t \\phi} {\\rm \\dot{t}} - \\rm q \\ A_{\\phi}"
},
{
"math_id": 24,
"text": "\\tau"
},
{
"math_id": 25,
"text": "\\rm \\dot{x}=dx/d\\tau , \\ \\ddot{x}=d^2x/d\\tau^2"
},
{
"math_id": 26,
"text": "g_{\\rm rr}=0"
},
{
"math_id": 27,
"text": "\\rm dt=du-\\frac{dr \\left(a^2 \\ \\Lambda / 3 +1\\right) \\left(a^2+r^2\\right)}{\\left(a^2+r^2\\right) \\left(1-\\Lambda \\ r^2 / 3 \\right)-2 \\ r+\\mho ^2}"
},
{
"math_id": 28,
"text": "\\rm d \\phi = d \\varphi-\\frac{a \\ dr \\left(a^2 \\ \\Lambda / 3 +1\\right)}{\\left(a^2+r^2\\right) \\left(1-\\Lambda \\ r^2 / 3 \\right)-2 \\ r+ \\mho ^2} "
},
{
"math_id": 29,
"text": "g_{\\rm ur}=\\rm -\\frac{3}{a^2 \\ \\Lambda +3}"
},
{
"math_id": 30,
"text": "g_{\\rm r\\varphi}=\\rm \\frac{3 \\ a \\sin^2 \\theta }{a^2 \\ \\Lambda +3}"
},
{
"math_id": 31,
"text": "g_{\\rm uu}=g_{\\rm tt} \\ , \\ \\ g_{\\theta \\theta}=g_{\\theta \\theta} \\ , \\ \\ g_{\\rm \\varphi \\varphi}=g_{\\rm \\phi \\phi} \\ , \\ \\ g_{\\rm u \\varphi}=g_{\\rm t \\phi}"
},
{
"math_id": 32,
"text": "\\rm A_{\\mu}=\\left\\{\\frac{3 \\ r \\ \\mho }{\\left(a^2 \\ \\Lambda +3\\right) \\left(a^2 \\cos^2 \\theta +r^2\\right)},\\frac{3 \\ r \\ \\mho }{a^2 \\left(\\Lambda \\ r^2-3\\right)+6 \\ r+\\Lambda \\ r^4-3 \\left(r^2+\\mho ^2\\right)}, \\ 0, \\ -\\frac{3 \\ a \\ r \\ \\mho \\sin ^2 \\theta }{\\left(a^2 \\ \\Lambda +3\\right) \\left(a^2 \\cos^2 \\theta +r^2\\right)}\\right\\}"
},
{
"math_id": 33,
"text": "\\rm \\bar{t}=u-r"
},
{
"math_id": 34,
"text": "45^{\\circ}"
},
{
"math_id": 35,
"text": "\\{ \\rm \\bar{t}, \\ r \\}"
},
{
"math_id": 36,
"text": "g^{\\rm rr}=0"
},
{
"math_id": 37,
"text": "g_{\\rm tt}||g_{\\rm uu}=0"
},
{
"math_id": 38,
"text": "\\rm r"
},
{
"math_id": 39,
"text": "\\theta"
},
{
"math_id": 40,
"text": "\\rm r<0"
},
{
"math_id": 41,
"text": "\\rm a= \\mho =0"
},
{
"math_id": 42,
"text": "\\Lambda=1/9"
},
{
"math_id": 43,
"text": "\\rm R=-4 \\Lambda"
},
{
"math_id": 44,
"text": "\\rm K=\\{220 a^{12} \\Lambda ^2 \\cos (6 \\theta )+66 a^{12} \\Lambda ^2 \\cos (8 \\theta )+12 a^{12} \\Lambda ^2 \\cos (10 \\theta )+a^{12} \\Lambda ^2 \\cos (12 \\theta )+"
},
{
"math_id": 45,
"text": "\\rm 462 a^{12} \\Lambda ^2+1080 a^{10} \\Lambda ^2 r^2 \\cos (6 \\theta )+240 a^{10} \\Lambda ^2 r^2 \\cos (8 \\theta )+24 a^{10} \\Lambda ^2 r^2 \\cos (10 \\theta )+"
},
{
"math_id": 46,
"text": "\\rm 3024 a^{10} \\Lambda ^2 r^2+1920 a^8 \\Lambda ^2 r^4 \\cos (6 \\theta )+ 240 a^8 \\Lambda ^2 r^4 \\cos (8 \\theta )+8400 a^8 \\Lambda ^2 r^4-"
},
{
"math_id": 47,
"text": "\\rm 1152 a^6 \\cos (6 \\theta )-11520 a^6 +1280 a^6 \\Lambda ^2 r^6 \\cos (6 \\theta )+12800 a^6 \\Lambda ^2 r^6+207360 a^4 r^2-"
},
{
"math_id": 48,
"text": "\\rm 138240 a^4 r \\mho ^2+11520 a^4 \\Lambda ^2 r^8+16128 a^4 \\mho ^4-276480 a^2 r^4+368640 a^2 r^3 \\mho ^2+"
},
{
"math_id": 49,
"text": "\\rm 6144 a^2 \\Lambda ^2 r^{10}-104448 a^2 r^2 \\mho ^4+3 a^4 \\cos (4 \\theta ) [ 165 a^8 \\Lambda ^2+960 a^6 \\Lambda ^2 r^2+2240 a^4 \\Lambda ^2 r^4-"
},
{
"math_id": 50,
"text": "\\rm 256 a^2 (9-10 \\Lambda ^2 r^6)+256 (90 r^2-60 r \\mho ^2+5 \\Lambda ^2 r^8+7 \\mho ^4) ] +24 a^2 \\cos (2 \\theta ) [ 33 a^{10} \\Lambda ^2+"
},
{
"math_id": 51,
"text": "\\rm 210 a^8 \\Lambda ^2 r^2+560 a^6 \\Lambda ^2 r^4-80 a^4 (9-10 \\Lambda ^2 r^6)+128 a^2 (90 r^2-60 r \\mho ^2+5 \\Lambda ^2 r^8+"
},
{
"math_id": 52,
"text": "\\rm 7 \\mho ^4)+256 r^2 (-45 r^2+60 r \\mho ^2+\\Lambda ^2 r^8-17 \\mho ^4) ] +36864 r^6-73728 r^5 \\mho ^2+"
},
{
"math_id": 53,
"text": "\\rm 2048 \\Lambda ^2 r^{12}+43008 r^4 \\mho ^4 \\} \\div \\{12 [ a^2 \\cos (2 \\theta )+a^2+2 r^2 ] ^6 \\}\\text{.}"
}
]
| https://en.wikipedia.org/wiki?curid=74620623 |
74628152 | Ammann A1 tilings | Non-periodic tiling of the plane
In geometry, an Ammann A1 tiling is a tiling from the six-piece prototile set shown on the right. The set was found in 1977 by Robert Ammann, who was inspired by the Robinson tilings found by Raphael Robinson in 1971. The A1 tiles are one of five sets of tiles discovered by Ammann and described in "Tilings and patterns".
The A1 tile set is aperiodic, i.e. the tiles can cover the whole Euclidean plane, but only in such a way that the resulting tiling is never periodic.
Generation through matching.
The prototiles are squares with indentations and protrusions on the sides and corners that force the tiling to form the pattern of a perfect binary tree continued indefinitely. The markings on the tiles in the pictures emphasize this hierarchical structure; however, they are purely illustrative and do not represent additional matching rules, since the matching is already enforced by the indentations and protrusions.
However, the tiling produced in this way is not unique, not even up to isometries of the Euclidean group such as translations and rotations. When passing to the next generation, one has choices. In the picture to the left, the initial patch in the upper left corner, highlighted in blue, can be extended by either a green or a red tile; these are mirror images of each other and instances of the prototile labeled "b". Then there are two more choices of the same kind, but with prototile "e". The remainder of the next generation is then fixed. Deviating from this pattern leads to configurations that fail to match up globally at some later stage.
The choices are encoded by infinite words in formula_0 over the alphabet formula_1, where "g" indicates the green choice and "r" the red choice. These words are in bijection with a Cantor set, so their cardinality is that of the continuum. Not all choices lead to a tiling of the whole plane: for example, always taking the green choice only fills a lower right corner of the plane. If the word is sufficiently generic, with infinitely many alternations between "g" and "r", the whole plane is covered. This still leaves uncountably many different A1 tilings, all of them necessarily nonperiodic. Since there are only countably many Euclidean isometries respecting the squares underlying the tiles that could relate these tilings to one another, there are uncountably many A1 tilings even up to isometries.
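To make the claimed bijection concrete, here is a small illustrative Python sketch that is not part of the original construction: a word over the alphabet {"g", "r"} is sent to a point of the standard middle-thirds Cantor set by reading "g" as the ternary digit 0 and "r" as the ternary digit 2. The digit convention and the function name are assumptions made for illustration only.

```python
def word_to_cantor_point(word: str, digits: int = 50) -> float:
    """Map a word over {'g', 'r'} to a point of the middle-thirds Cantor set:
    'g' becomes the ternary digit 0 and 'r' the ternary digit 2.
    On infinite words this map is a bijection onto the Cantor set."""
    x = 0.0
    for i, letter in enumerate(word[:digits], start=1):
        digit = 0 if letter == "g" else 2
        x += digit / 3**i
    return x

print(word_to_cantor_point("g" * 20))   # 0.0   (always the green choice)
print(word_to_cantor_point("r" * 20))   # ~1.0  (always the red choice)
print(word_to_cantor_point("gr" * 10))  # ~0.25 (an alternating choice word)
```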
Additionally, an A1 tiling may have "faults" (also called "corridors") going off to infinity in "arms". This further increases the number of possible A1 tilings, but the cardinality remains that of the continuum. Note that the corridors allow some parts carrying the binary-tree hierarchy to be rotated relative to the other such parts.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma^{\\omega}"
},
{
"math_id": 1,
"text": "\\Sigma=\\{g,r\\}"
}
]
| https://en.wikipedia.org/wiki?curid=74628152 |
74629434 | Jut (topography) | Measurement of the impressiveness of a summit
In topography, jut is a measure of the base-to-peak rise and visual impressiveness of a mountain summit or other landform. It describes how sharply or impressively a location rises above surrounding terrain by factoring both height above surroundings and steepness of ascent.
Description.
A mountain with a jut of X can be interpreted to rise as sharply or impressively as a vertical cliff of height X. For example, a vertical cliff of height 100 meters, a 45° slope of height 141 meters, and a 30° slope of height 200 meters all measure a jut of 100 meters and can be interpreted to rise equally sharply. Jut can be further decomposed into base-to-peak height and base-to-peak steepness, where jut equals base-to-peak height multiplied by the sine of the base-to-peak steepness.
Definition.
Jut formula_0 is the maximum "angle-reduced height" (symbol "H"'), which can be defined as the projection of the peak's height (or vertical separation) "H" onto the line of sight:
formula_1
where "e" is the summit's elevation angle. Height, angle-reduced height, and jut have unit of length (meter or feet). While height and angle-reduced height depend on the viewing location around the peak, jut is a constant value for a given peak. "Base" is the location where angle-reduced height is maximized.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J=\\max{H'}"
},
{
"math_id": 1,
"text": "H'=H|\\sin{e}|"
}
]
| https://en.wikipedia.org/wiki?curid=74629434 |
7463064 | Soddy's hexlet | Chain of 6 spheres tangent to 3 given spheres
In geometry, Soddy's hexlet is a chain of six spheres (shown in grey in Figure 1), each of which is tangent to both of its neighbors and also to three mutually tangent given spheres. In Figure 1, the three given spheres are the red inner sphere and two spheres (not shown) above and below the plane on which the centers of the hexlet spheres lie. In addition, the hexlet spheres are tangent to a fourth sphere (the blue outer sphere in Figure 1), which is not tangent to the three others.
According to a theorem published by Frederick Soddy in 1937, it is always possible to find a hexlet for any choice of mutually tangent spheres "A", "B" and "C". Indeed, there is an infinite family of hexlets related by rotation and scaling of the hexlet spheres (Figure 1); in this respect, Soddy's hexlet is the spherical analog of a Steiner chain of six circles. As with Steiner chains, the centers of the hexlet spheres lie in a single plane, on an ellipse. Soddy's hexlet was also discovered independently in Japan, as shown by Sangaku tablets from 1822 in Kanagawa prefecture.
Definition.
Soddy's hexlet is a chain of six spheres, labeled "S"1–"S"6, each of which is tangent to three given spheres, "A", "B" and "C", that are themselves mutually tangent at three distinct points. (For consistency throughout the article, the hexlet spheres will always be depicted in grey, spheres "A" and "B" in green, and sphere "C" in blue.) The hexlet spheres are also tangent to a fourth fixed sphere "D" (always shown in red) that is not tangent to the three others, "A", "B" and "C".
Each sphere of Soddy's hexlet is also tangent to its neighbors in the chain; for example, sphere "S"4 is tangent to "S"3 and "S"5. The chain is closed, meaning that every sphere in the chain has two tangent neighbors; in particular, the initial and final spheres, "S"1 and "S"6, are tangent to one another.
Annular hexlet.
The annular Soddy's hexlet is a special case (Figure 2), in which the three mutually tangent spheres consist of a single sphere of radius "r" (blue) sandwiched between two parallel planes (green) separated by a perpendicular distance 2"r". In this case, Soddy's hexlet consists of six spheres of radius "r" packed like ball bearings around the central sphere and likewise sandwiched. The hexlet spheres are also tangent to a fourth sphere (red), which is not tangent to the other three.
The chain of six spheres can be rotated about the central sphere without affecting their tangencies, showing that there is an infinite family of solutions for this case. As they are rotated, the spheres of the hexlet trace out a torus (a doughnut-shaped surface); in other words, a torus is the envelope of this family of hexlets.
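A short numeric check of this ball-bearing arrangement, added here as an illustration rather than taken from the source: with the central sphere of radius r at the origin, the six hexlet centers sit at distance 2r, spaced 60° apart, so each hexlet sphere touches the central sphere and its two neighbors, and the swept envelope extends from r to 3r from the axis, consistent with the torus envelope described later.

```python
import math

r = 1.0                                    # common radius of the annular configuration
center = (0.0, 0.0, 0.0)                   # central sphere of radius r at the origin
hexlet = [(2*r*math.cos(k*math.pi/3),      # six hexlet centers, 60 degrees apart,
           2*r*math.sin(k*math.pi/3), 0.0) # at distance 2r in the mid-plane
          for k in range(6)]

# Each hexlet sphere touches the central sphere: center distance 2r = r + r.
print(all(math.isclose(math.dist(c, center), 2*r) for c in hexlet))
# Each hexlet sphere touches its two neighbors: neighboring centers are 2r apart.
print(all(math.isclose(math.dist(hexlet[k], hexlet[(k+1) % 6]), 2*r) for k in range(6)))
# The swept envelope reaches from r (inner) to 3r (outer) measured from the axis.
print(min(math.dist(c, center) - r for c in hexlet),
      max(math.dist(c, center) + r for c in hexlet))
```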
Solution by inversion.
The general problem of finding a hexlet for three given mutually tangent spheres "A", "B" and "C" can be reduced to the annular case using inversion. This geometrical operation always transforms spheres into spheres or into planes, which may be regarded as spheres of infinite radius. A sphere is transformed into a plane if and only if the sphere passes through the center of inversion. An advantage of inversion is that it preserves tangency; if two spheres are tangent before the transformation, they remain so after. Thus, if the inversion transformation is chosen judiciously, the problem can be reduced to a simpler case, such as the annular Soddy's hexlet. Inversion is reversible; repeating an inversion in the same point returns the transformed objects to their original size and position.
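The following Python sketch, again an illustration rather than material from the source, applies the standard formula for the inverse of a sphere that does not pass through the center of inversion and checks numerically that two externally tangent spheres remain tangent after inversion.

```python
import math

def invert_sphere(center, radius, inv_center=(0.0, 0.0, 0.0), k=1.0):
    """Image of a sphere (not through inv_center) under inversion with center
    inv_center and radius k, using the standard formula
    C' = O + k^2/(d^2 - rho^2) * (C - O),  rho' = k^2 * rho / |d^2 - rho^2|."""
    v = [c - o for c, o in zip(center, inv_center)]
    d2 = sum(x * x for x in v)
    s = k**2 / (d2 - radius**2)
    new_center = tuple(o + s * x for o, x in zip(inv_center, v))
    new_radius = abs(s) * radius
    return new_center, new_radius

# Two externally tangent spheres (touching at (4, 0, 0)), neither through the origin.
(c1, r1) = invert_sphere((3, 0, 0), 1)
(c2, r2) = invert_sphere((5, 0, 0), 1)
print(math.isclose(math.dist(c1, c2), r1 + r2))  # True: tangency is preserved
```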
Inversion in the point of tangency between spheres "A" and "B" transforms them into parallel planes, which may be denoted as "a" and "b". Since sphere "C" is tangent to both "A" and "B" and does not pass through the center of inversion, "C" is transformed into another sphere "c" that is tangent to both planes; hence, "c" is sandwiched between the two planes "a" and "b". This is the annular Soddy's hexlet (Figure 2). Six spheres "s"1–"s"6 may be packed around "c" and likewise sandwiched between the bounding planes "a" and "b". Re-inversion restores the three original spheres, and transforms "s"1–"s"6 into a hexlet for the original problem. In general, these hexlet spheres "S"1–"S"6 have different radii.
An infinite variety of hexlets may be generated by rotating the six balls "s"1–"s"6 in their plane by an arbitrary angle before re-inverting them. The envelope produced by such rotations is the torus that surrounds the sphere "c" and is sandwiched between the two planes "a" and "b"; thus, the torus has an inner radius "r" and outer radius 3"r". After the re-inversion, this torus becomes a Dupin cyclide (Figure 3).
Dupin cyclide.
The envelope of Soddy's hexlets is a Dupin cyclide, an inversion of the torus. Thus Soddy's construction shows that a cyclide of Dupin is the envelope of a 1-parameter family of spheres in two different ways, and each sphere in either family is tangent to two spheres in same family and three spheres in the other family. This result was probably known to Charles Dupin, who discovered the cyclides that bear his name in his 1803 dissertation under Gaspard Monge.
Relation to Steiner chains.
The intersection of the hexlet with the plane of its spherical centers produces a Steiner chain of six circles.
Parabolic and hyperbolic hexlets.
It is assumed that spheres A and B are the same size.
In any elliptic hexlet, such as the one shown at the top of the article, there are two planes tangent to the hexlet. For an elliptic hexlet to exist, the radius of C must be less than one quarter that of A. If C's radius is exactly one quarter of A's, each sphere becomes a plane at one point in its journey around the chain. The inverted image still shows a normal elliptic hexlet, and in the parabolic hexlet a sphere turns into a plane precisely when its inverted image passes through the centre of inversion. In such a hexlet there is only one plane tangent to the hexlet. The line of the centres of a parabolic hexlet is a parabola.
If C is even larger than that, a hyperbolic hexlet is formed, and now there are no tangent planes at all. Label the spheres "S"1 to "S"6. "S"1 thus cannot go very far until it becomes a plane (where its inverted image passes through the centre of inversion) and then reverses its concavity (where its inverted image surrounds the centre of inversion). Now the line of the centres is a hyperbola.
The limiting case is when A, B and C are all the same size. The hexlet now becomes straight. "S"1 is small as it passes through the hole between A, B and C, and grows until it becomes a plane tangent to them. The centre of inversion now also lies at a point of tangency with the image of "S"6, so "S"6 too is a plane tangent to A, B and C. As "S"1 proceeds, its concavity is reversed, and it now surrounds all the other spheres, tangent to A, B, C, "S"2 and "S"6. "S"2 pushes upwards and grows to become a tangent plane, while "S"6 shrinks. "S"1 then takes up "S"6's former position as a tangent plane. It then reverses concavity again and passes through the hole again, beginning another round trip. Now the line of centres is a degenerate hyperbola that has collapsed into two straight lines.
Sangaku tablets.
Japanese mathematicians discovered the same hexlet over one hundred years before Soddy did. They analysed packing problems in which circles and polygons, balls and polyhedra come into contact, and often found the relevant theorems independently before their discovery by Western mathematicians. They often published these as sangaku. The sangaku about the hexlet was made by Irisawa Shintarō Hiroatsu in the school of Uchida Itsumi and dedicated to the Samukawa Shrine in May 1822. The original sangaku has been lost, but it was recorded in Uchida's 1832 book "Kokonsankan". A replica of the sangaku was made from this record and dedicated to the Hōtoku museum in the Samukawa Shrine in August 2009.
The sangaku by Irisawa consists of three problems. The third problem relates to Soddy's hexlet: "the diameter of the outer circumscribing sphere is 30 sun. The diameters of the nucleus balls are 10 sun and 6 sun each. The diameter of one of the balls in the chain of balls is 5 sun. Then I asked for the diameters of the remaining balls. The answer is 15 sun, 10 sun, 3.75 sun, 2.5 sun and 2 + 8/11 sun."
In his answer, the method for calculating the diameters of the balls is written down; converted into mathematical notation, it gives the following solution. Let the ratios of the diameter of the outer ball to the diameters of the two nucleus balls be "a"1 and "a"2, and let the ratios of its diameter to those of the chain balls be "c"1, ..., "c"6. We want to represent "c"2, ..., "c"6 in terms of "a"1, "a"2, and "c"1. If
formula_0
then,
formula_1.
Then "c"1 + "c"4 = "c"2 + "c"5 = "c"3 + "c"6.
If "r"1, ..., "r"6 are the diameters of six balls, we get the formula:
formula_2
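As a sanity check not found in the original record, the following Python snippet plugs the numbers of the problem above into these formulas (outer diameter 30 sun, nucleus balls 10 sun and 6 sun, first chain ball 5 sun) and recovers the stated answer of 15, 10, 3.75, 2.5 and 2 + 8/11 sun.

```python
from math import sqrt

D, d_nucleus1, d_nucleus2, d_chain1 = 30.0, 10.0, 6.0, 5.0  # diameters in sun
a1, a2, c1 = D / d_nucleus1, D / d_nucleus2, D / d_chain1   # ratios: 3, 5, 6

K = sqrt(3 * (a1*a2 + a2*c1 + c1*a1 - ((a1 + a2 + c1 + 1) / 2) ** 2))  # K = 4.5 here

c2 = (a1 + a2 + c1 - 1) / 2 - K
c3 = (3*a1 + 3*a2 - c1 - 3) / 2 - K
c4 = 2*a1 + 2*a2 - c1 - 2
c5 = (3*a1 + 3*a2 - c1 - 3) / 2 + K
c6 = (a1 + a2 + c1 - 1) / 2 + K

# Diameters of the remaining chain balls: 15.0, 10.0, 3.75, 2.5, 2.7272... (= 2 + 8/11)
print([D / c for c in (c2, c3, c4, c5, c6)])
# Invariant stated in the text: c1 + c4 = c2 + c5 = c3 + c6 (all equal 14 here)
print(c1 + c4, c2 + c5, c3 + c6)
```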
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K=\\sqrt{3\\left( a_1 a_2+a_2 c_1+c_1 a_1- \\left( \\frac{a_1+a_2+c_1+1}{2} \\right)^2 \\right)}"
},
{
"math_id": 1,
"text": "\\begin{align}\nc_2&=(a_1+a_2+c_1-1)/2-K \\\\\nc_3&=(3a_1+3a_2-c_1-3)/2-K \\\\\nc_4&=2a_1+2a_2-c_1-2 \\\\\nc_5&=(3a_1+3a_2-c_1-3)/2+K \\\\\nc_6&=(a_1+a_2+c_1-1)/2+K.\n\\end{align}\n"
},
{
"math_id": 2,
"text": "\\frac{1}{r_1}+\\frac{1}{r_4}=\\frac{1}{r_2}+\\frac{1}{r_5}=\\frac{1}{r_3}+\\frac{1}{r_6}."
}
]
| https://en.wikipedia.org/wiki?curid=7463064 |
74631893 | Vivek Shende | American mathematician
Vivek Vijay Shende is an American mathematician known for his work on algebraic geometry, symplectic geometry and quantum computing. He is a professor of Quantum Mathematics at the University of Southern Denmark (Syddansk Universitet) while on leave from the University of California, Berkeley.
Doctoral studies and early career.
Shende defended his Ph.D. dissertation "Hilbert schemes of points on integral plane curves" at Princeton University in 2011 under the supervision of Rahul Pandharipande. From 2011 to 2013, he was a Simons Postdoctoral Fellow at MIT, mentored by Paul Seidel. Shende joined Berkeley as an assistant professor in 2013 and became an associate professor in 2019. He has supervised at least four doctoral students at Berkeley.
Awards and accomplishments.
In 2021, after moving to Denmark, Shende received sizable grants intended to support the creation of a new research group. The Danish National Research Foundation awarded him its DNRF Chair, and the Villum Foundation funded his research on mathematical aspects of string theory through the Villum Investigator program, one of the largest and most prestigious grants for individual researchers in Denmark.
As a Berkeley professor, Shende received the National Science Foundation CAREER Award in 2017 and a Sloan Research Fellowship in Mathematics in 2015.
In 2010, Shende proved, together with Martijn Kool and Richard Thomas, the Göttsche conjecture on the universality of formulas counting nodal curves on surfaces, a problem in algebraic geometry whose history stretches back more than a century.
During his undergraduate studies at the University of Michigan, he performed computer science research with Igor L. Markov and John P. Hayes. In 2004, Shende shared the IEEE Donald O. Pederson Award in Solid-State Circuits as the lead author of work on the synthesis of reversible logic circuits. This paper proved the existence of reversible circuits that implement certain permutations and developed algorithms for finding such circuits. Shende was also the lead author of work on the synthesis of quantum circuits that developed the quantum Shannon decomposition and algorithms for finding asymptotically optimal quantum circuits implementing a given formula_0-qubit unitary matrix, as well as quantum circuits that prepare a given formula_0-qubit quantum state.
Shende obtained formulas and algorithms for implementing the smallest possible quantum circuits for 2-qubit unitary matrices. For the 3-qubit Toffoli gate, he proved that six CNOT gates are necessary in any circuit that implements it, showing that the widely used six-CNOT decomposition is optimal. These publications are highly cited (per Google Scholar), and their results laid the foundation of compilers for quantum computers.
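As a hedged illustration, not taken from Shende's papers, the widely used six-CNOT decomposition can be inspected with the Qiskit library (assuming it is installed); its built-in expansion of the Toffoli gate uses six CNOT ('cx') gates, matching the lower bound described above. Exact single-qubit gate counts may differ between library versions.

```python
from qiskit import QuantumCircuit

# A circuit containing a single Toffoli (controlled-controlled-NOT) gate.
circuit = QuantumCircuit(3)
circuit.ccx(0, 1, 2)

# Expand the Toffoli into one- and two-qubit gates and count them.
decomposed = circuit.decompose()
print(decomposed.count_ops())  # expected to report 6 'cx' (CNOT) gates
```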
Mathematics education.
Shende has taught college-level courses at Berkeley, including Calculus, Discrete Mathematics, and Linear Algebra and Differential Equations. In 2021, he cosigned, along with many professional mathematicians, an open letter to Governor Gavin Newsom and other California officials asking to replace the proposed new California Math curriculum framework. The framework was adopted in 2023 despite these objections.
{
"math_id": 0,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=74631893 |